From 7a6a73c1e0a9d6c741bfaa912ce67779f1c9d7c6 Mon Sep 17 00:00:00 2001
From: snyk-bot
Date: Thu, 27 May 2021 05:45:13 +0000
Subject: [PATCH 01/82] fix: modules/portal/package.json to reduce
 vulnerabilities

The following vulnerabilities are fixed with an upgrade:
- https://snyk.io/vuln/SNYK-JS-WS-1296835
---
 modules/portal/package.json | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/portal/package.json b/modules/portal/package.json
index a6cc2fedc..d42088aa9 100644
--- a/modules/portal/package.json
+++ b/modules/portal/package.json
@@ -21,7 +21,7 @@
     "grunt-copy": "^0.1.0",
     "grunt-injector": "^1.1.0",
     "moment": "^2.24.0",
-    "puppeteer": "^1.11.0"
+    "puppeteer": "^3.0.0"
   },
   "devDependencies": {
     "angular-mocks": "1.6.1",

From ec022d008724a2df56ab98c68f1ea266c6007076 Mon Sep 17 00:00:00 2001
From: Ryan Emerle
Date: Thu, 12 Aug 2021 10:54:53 -0400
Subject: [PATCH 02/82] Update AUTHORS.md

- Remove redundant maintainer list (found in README.md)
- Add listed tool maintainers to contributors for continued attribution
---
 AUTHORS.md | 11 ++---------
 1 file changed, 2 insertions(+), 9 deletions(-)

diff --git a/AUTHORS.md b/AUTHORS.md
index 0e4424a1e..6ac41f16e 100644
--- a/AUTHORS.md
+++ b/AUTHORS.md
@@ -3,21 +3,13 @@
 This project would not be possible without the generous contributions of many people. Thank you!
 If you have contributed in any way, but do not see your name here, please open a PR to add yourself (in alphabetical order by last name)!
 
-## Maintainers
-- Paul Cleary
-- Ryan Emerle
-- Nima Eskandary
-
-## Tool Maintainers
-- Mike Ball: vinyldns-cli, vinyldns-terraform
-- Nathan Pierce: vinyldns-ruby
-
 ## DNS SMEs
 - Joe Crowe
 - David Back
 - Hong Ye
 
 ## Contributors
+- Mike Ball
 - Tommy Barker
 - Robert Barrimond
 - Charles Bitter
@@ -41,6 +33,7 @@
 - Jon Moore
 - Palash Nigam
 - Joshulyne Park
+- Nathan Pierce
 - Michael Pilquist
 - Sriram Ramakrishnan
 - Khalid Reid

From be6424caec4412e4d8b6ba3eb014ca638d28298a Mon Sep 17 00:00:00 2001
From: Ryan Emerle
Date: Thu, 12 Aug 2021 10:58:31 -0400
Subject: [PATCH 03/82] Removing ROADMAP

This is severely outdated and does not reflect the current direction of
VinylDNS
---
 ROADMAP.md | 44 --------------------------------------------
 1 file changed, 44 deletions(-)
 delete mode 100644 ROADMAP.md

diff --git a/ROADMAP.md b/ROADMAP.md
deleted file mode 100644
index 11ae562f7..000000000
--- a/ROADMAP.md
+++ /dev/null
@@ -1,44 +0,0 @@
-# Roadmap
-What is a Roadmap in opensource? VinylDNS would like to communicate _direction_ in terms of the features and needs
-expressed by the VinylDNS community. In open source, demand is driven by the community through
-Github issues. As more members join the discussion, we anticipate the "plan" to change. This document will be updated regularly to reflect the changes in prioritization.
-
-This document is organized by priority / planned release timeframes. Reading top-down should give you a sense of the order in which new features are planned to be delivered.
-
-## Completed
-
-- **Batch Change** - users can now submit multiple changes across zones at the same time. Included in batch change are:
-  - **Manual Review** - the ability to manually review certain DNS changes
-  - **Scheduled Changes** - the ability to schedule certain DNS changes to occur at a point in time in the future (requires manual processing right now)
-  - **Bulk import** - allows users to bulk load DNS changes from a CSV file
-- **Global ACL Rules** - allows override on Shared / Record ownership
-- **Global record search** - allows users to search for records across zones
-- **Backend Providers** - allow connectivity to DNS backends _other_ than DDNS, e.g. AWS Route 53
-
-## Next up?
-
-We are currently reviewing our roadmap. Some of the features we have discussed are below. If you have features you would like to contribute, drop us a line!
-
-## Zone Management
-Presently VinylDNS _connects to existing zones_ for management. Zone Management will allow users
-to create and manage zones in the authoritative systems themselves. The following high-level features are planned:
-
-1. Server Groups - allow VinylDNS admins to setup Server Groups. A Server Group consists of the primary,
-secondary, and other information for a specific DNS backend. Server Groups are _vendor_ specific, plugins will be
-be created for specific DNS vendors
-1. Quotas - restrictions defined for a specific Server Group. These include items like `maxRecordSetsPerZone`, `concurrentUpdates`and more.
-1. Zone Creation - allow the creation of a sub-domain from an existing Zone. Users choose the Server Group where
-the zone will live, VinylDNS creates the delegation as well as access controls for the new zone.
-1. Zone Maintenance - support the modification of zone properties, like default SOA record settings.
-
-## Other
-There are several other features that we would like to support. We will be opening up these for RFC shortly. These include:
-
-1. DNS SEC - There is no first-class support for DNS SEC. That feature set is being defined.
-1. Record meta data - VinylDNS will allow the "tagging" of DNS records with arbitrary key-value pairs
-1. DNS Global Service Load Balancing (GSLB) - Support for common DNS GSLB use cases and integration with various GSLB vendors
-1. A new user interface
-1. Additional automation tools
-1. VinylDNS admin user experience - pull alot of things from Config into the Portal UI for simpler administration
-1. Split views / zone views
-

From 2a1dcb3793b3bcff02bb2bb475509a7b57661385 Mon Sep 17 00:00:00 2001
From: Ryan Emerle
Date: Thu, 12 Aug 2021 10:58:58 -0400
Subject: [PATCH 04/82] Update README.md

- Remove ROADMAP
---
 README.md | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/README.md b/README.md
index 13103f6eb..71301077b 100644
--- a/README.md
+++ b/README.md
@@ -40,7 +40,6 @@ Integration is simple with first-class language support including:
 - [Code of Conduct](#code-of-conduct)
 - [Developer Guide](#developer-guide)
 - [Contributing](#contributing)
-- [Roadmap](#roadmap)
 - [Contact](#contact)
 - [Maintainers and Contributors](#maintainers-and-contributors)
 - [Credits](#credits)
@@ -89,9 +88,6 @@ See [DEVELOPER_GUIDE.md](DEVELOPER_GUIDE.md) for instructions on setting up Viny
 ## Contributing
 See the [Contributing Guide](CONTRIBUTING.md).
 
-## Roadmap
-See [ROADMAP.md](ROADMAP.md) for the future plans for VinylDNS.
-
 ## Contact
 - [Gitter](https://gitter.im/vinyldns)
 - If you have any security concerns please contact the maintainers directly vinyldns-core@googlegroups.com

From b2fdde5a553f7a54b3f145c4d46e87c9b8647df4 Mon Sep 17 00:00:00 2001
From: Aravindh R <61419792+Aravindh-Raju@users.noreply.github.com>
Date: Fri, 27 Aug 2021 17:38:58 +0530
Subject: [PATCH 05/82] Create Messages.scala

---
 .../main/scala/vinyldns/core/Messages.scala | 53 +++++++++++++++++++
 1 file changed, 53 insertions(+)
 create mode 100644 modules/core/src/main/scala/vinyldns/core/Messages.scala

diff --git a/modules/core/src/main/scala/vinyldns/core/Messages.scala b/modules/core/src/main/scala/vinyldns/core/Messages.scala
new file mode 100644
index 000000000..233cbefa3
--- /dev/null
+++ b/modules/core/src/main/scala/vinyldns/core/Messages.scala
@@ -0,0 +1,53 @@
+/*
+ * Copyright 2018 Comcast Cable Communications Management, LLC
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package vinyldns.core
+
+object Messages {
+
+  // When less than two letters or numbers is filled in Record Name Filter field in RecordSetSearch page
+  val RecordNameFilterError = "Record Name Filter field must contain at least two letters or numbers to perform a RecordSet Search."
+
+  // When creating group with name that already exists
+  // s"Group with name $name already exists. Please try a different name or contact ${existingGroup.email} to be added to the group."
+  val GroupAlreadyExistsError = s"Group with name {TestGroup} already exists. Please try a different name or contact {test@test.com} to be added to the group."
+
+  // When deleting a group being the admin of a zone
+  // s"${group.name} is the admin of a zone. Cannot delete. Please transfer the ownership to another group before deleting."
+  val ZoneAdminError = s"{TestGroup} is the admin of a zone. Cannot delete. Please transfer the ownership to another group before deleting."
+
+  // When deleting a group being the owner for a record set
+  // s"${group.name} is the owner for a record set including $rsId. Cannot delete. Please transfer the ownership to another group before deleting.
+  val RecordSetOwnerError = s"{TestGroup} is the owner for a record set including {RS_ID}. Cannot delete. Please transfer the ownership to another group before deleting."
+
+  // When deleting a group which has an ACL rule for a zone
+  // s"${group.name} has an ACL rule for a zone including $zId. Cannot delete. Please transfer the ownership to another group before deleting."
+  val ACLRuleError = s"{TestGroup} has an ACL rule for a zone including {Z_ID}. Cannot delete. Please transfer the ownership to another group before deleting."
+
+  // When NSData field is not a positive integer
+  val NSDataError = "NS data must be a positive integer"
+
+  // When importing files other than .csv
+  val ImportError = "Import failed. Not a valid file. File should be of ‘.csv’ type."
+
+  // When user is not authorized to make changes to the record
+  // s"""User "$userName" is not authorized. Contact ${ownerType.toString.toLowerCase} owner group:
+  // |${ownerGroupName.getOrElse(ownerGroupId)} at ${contactEmail.getOrElse("")} to make DNS changes.
+  // |You must be a part of the owner group to make DNS changes.""".stripMargin .replaceAll("\n", " ")
+  val NotAuthorizedError = s"""User {"dummy"} is not authorized. Contact {zone} owner group: {ok-group} at
+    {test@test.com} to make DNS changes. You must be a part of the owner group to make DNS changes."""
+
+}

From c78441d6a314d88e0e0590c1e89af34165f33e9c Mon Sep 17 00:00:00 2001
From: Aravindh R
Date: Mon, 30 Aug 2021 14:46:53 +0530
Subject: [PATCH 06/82] Update messages

---
 .../domain/membership/MembershipService.scala | 11 +--
 .../domain/record/RecordSetValidations.scala  |  3 +-
 .../vinyldns/api/route/DnsJsonProtocol.scala  |  3 +-
 .../record/RecordSetValidationsSpec.scala     |  3 +-
 .../api/route/VinylDNSJsonProtocolSpec.scala  |  3 +-
 .../main/scala/vinyldns/core/Messages.scala   | 73 +++++++++++++------
 .../core/domain/DomainValidationErrors.scala  | 10 ++-
 .../dns-change/dns-change-new.controller.js   |  2 +-
 8 files changed, 73 insertions(+), 35 deletions(-)

diff --git a/modules/api/src/main/scala/vinyldns/api/domain/membership/MembershipService.scala b/modules/api/src/main/scala/vinyldns/api/domain/membership/MembershipService.scala
index 0d47bace4..5eff173b3 100644
--- a/modules/api/src/main/scala/vinyldns/api/domain/membership/MembershipService.scala
+++ b/modules/api/src/main/scala/vinyldns/api/domain/membership/MembershipService.scala
@@ -24,6 +24,7 @@ import vinyldns.core.domain.membership.LockStatus.LockStatus
 import vinyldns.core.domain.zone.ZoneRepository
 import vinyldns.core.domain.membership._
 import vinyldns.core.domain.record.RecordSetRepository
+import vinyldns.core.Messages._
 
 object MembershipService {
   def apply(dataAccessor: ApiDataAccessor): MembershipService =
@@ -235,7 +236,7 @@ class MembershipService(
       .getGroupByName(name)
       .map {
         case Some(existingGroup) if existingGroup.status != GroupStatus.Deleted =>
-          GroupAlreadyExistsError(s"Group with name $name already exists").asLeft
+          GroupAlreadyExistsError(GroupAlreadyExistsErrorMsg.format(name, existingGroup.email)).asLeft
         case _ =>
           ().asRight
       }
@@ -257,7 +258,7 @@ class MembershipService(
       .map {
         case Some(existingGroup)
             if existingGroup.status != GroupStatus.Deleted && existingGroup.id != groupId =>
-          GroupAlreadyExistsError(s"Group with name $name already exists").asLeft
+          GroupAlreadyExistsError(GroupAlreadyExistsErrorMsg.format(name, existingGroup.email)).asLeft
         case _ =>
           ().asRight
       }
@@ -267,7 +268,7 @@ class MembershipService(
     zoneRepo
       .getZonesByAdminGroupId(group.id)
       .map { zones =>
-        ensuring(InvalidGroupRequestError(s"${group.name} is the admin of a zone. Cannot delete."))(
+        ensuring(InvalidGroupRequestError(ZoneAdminError.format(group.name)))(
          zones.isEmpty
        )
      }
@@ -279,7 +280,7 @@ class MembershipService(
       .map { rsId =>
         ensuring(
           InvalidGroupRequestError(
-            s"${group.name} is the owner for a record set including $rsId. Cannot delete."
+            RecordSetOwnerError.format(group.name, rsId)
           )
         )(rsId.isEmpty)
       }
@@ -291,7 +292,7 @@ class MembershipService(
       .map { zId =>
         ensuring(
           InvalidGroupRequestError(
-            s"${group.name} has an ACL rule for a zone including $zId. Cannot delete."
+            ACLRuleError.format(group.name, zId)
           )
         )(zId.isEmpty)
       }

diff --git a/modules/api/src/main/scala/vinyldns/api/domain/record/RecordSetValidations.scala b/modules/api/src/main/scala/vinyldns/api/domain/record/RecordSetValidations.scala
index add8ba515..168f27f1d 100644
--- a/modules/api/src/main/scala/vinyldns/api/domain/record/RecordSetValidations.scala
+++ b/modules/api/src/main/scala/vinyldns/api/domain/record/RecordSetValidations.scala
@@ -28,6 +28,7 @@ import vinyldns.core.domain.auth.AuthPrincipal
 import vinyldns.core.domain.membership.Group
 import vinyldns.core.domain.record.{RecordSet, RecordType}
 import vinyldns.core.domain.zone.Zone
+import vinyldns.core.Messages._
 
 import scala.util.matching.Regex
 
@@ -316,7 +317,7 @@ object RecordSetValidations {
 
   def validRecordNameFilterLength(recordNameFilter: String): Either[Throwable, Unit] =
     ensuring(
-      InvalidRequest("recordNameFilter must contain at least two letters or numbers.")
+      InvalidRequest(RecordNameFilterError)
     ) {
       val searchRegex: Regex = """[a-zA-Z0-9].*[a-zA-Z0-9]+""".r
       searchRegex.findFirstIn(recordNameFilter).isDefined

diff --git a/modules/api/src/main/scala/vinyldns/api/route/DnsJsonProtocol.scala b/modules/api/src/main/scala/vinyldns/api/route/DnsJsonProtocol.scala
index 9b34f7180..8edbe5dcf 100644
--- a/modules/api/src/main/scala/vinyldns/api/route/DnsJsonProtocol.scala
+++ b/modules/api/src/main/scala/vinyldns/api/route/DnsJsonProtocol.scala
@@ -30,6 +30,7 @@ import vinyldns.core.domain.DomainHelpers.removeWhitespace
 import vinyldns.core.domain.Fqdn
 import vinyldns.core.domain.record._
 import vinyldns.core.domain.zone._
+import vinyldns.core.Messages._
 
 trait DnsJsonProtocol extends JsonValidation {
   import vinyldns.core.domain.record.RecordType._
@@ -373,7 +374,7 @@ trait DnsJsonProtocol extends JsonValidation {
       .required[String]("Missing NS.nsdname")
       .check(
         "NS must be less than 255 characters" -> checkDomainNameLen,
-        "NS data must be absolute" -> nameContainsDots
+        NSDataError -> nameContainsDots
       )
       .map(Fqdn.apply)
       .map(NSData.apply)

diff --git a/modules/api/src/test/scala/vinyldns/api/domain/record/RecordSetValidationsSpec.scala b/modules/api/src/test/scala/vinyldns/api/domain/record/RecordSetValidationsSpec.scala
index e77f900a3..ac9a0df36 100644
--- a/modules/api/src/test/scala/vinyldns/api/domain/record/RecordSetValidationsSpec.scala
+++ b/modules/api/src/test/scala/vinyldns/api/domain/record/RecordSetValidationsSpec.scala
@@ -36,6 +36,7 @@ import vinyldns.core.TestMembershipData._
 import vinyldns.core.domain.Fqdn
 import vinyldns.core.domain.membership.Group
 import vinyldns.core.domain.record._
+import vinyldns.core.Messages._
 
 import scala.util.matching.Regex
 
@@ -601,7 +602,7 @@ class RecordSetValidationsSpec
       val invalidString = "*o*"
       val error = leftValue(validRecordNameFilterLength(invalidString))
       error shouldBe an[InvalidRequest]
-      error.getMessage() shouldBe "recordNameFilter must contain at least two letters or numbers."
+      error.getMessage() shouldBe RecordNameFilterError
     }
   }
 }

diff --git a/modules/api/src/test/scala/vinyldns/api/route/VinylDNSJsonProtocolSpec.scala b/modules/api/src/test/scala/vinyldns/api/route/VinylDNSJsonProtocolSpec.scala
index 64b8b3f8a..d566637c2 100644
--- a/modules/api/src/test/scala/vinyldns/api/route/VinylDNSJsonProtocolSpec.scala
+++ b/modules/api/src/test/scala/vinyldns/api/route/VinylDNSJsonProtocolSpec.scala
@@ -27,6 +27,7 @@ import vinyldns.core.domain.record._
 import vinyldns.core.domain.zone.{CreateZoneInput, UpdateZoneInput, ZoneConnection}
 import vinyldns.core.TestRecordSetData._
 import vinyldns.core.domain.Fqdn
+import vinyldns.core.Messages._
 
 class VinylDNSJsonProtocolSpec
     extends AnyWordSpec
@@ -594,7 +595,7 @@ class VinylDNSJsonProtocolSpec
         ("records" -> data)
 
       val thrown = the[MappingException] thrownBy recordSetJValue.extract[RecordSet]
-      thrown.msg should include("NS data must be absolute")
+      thrown.msg should include(NSDataError)
     }
 
    "round trip a DS record set" in {
      val rs = RecordSet(

diff --git a/modules/core/src/main/scala/vinyldns/core/Messages.scala b/modules/core/src/main/scala/vinyldns/core/Messages.scala
index 233cbefa3..74ce1e61a 100644
--- a/modules/core/src/main/scala/vinyldns/core/Messages.scala
+++ b/modules/core/src/main/scala/vinyldns/core/Messages.scala
@@ -18,36 +18,65 @@ package vinyldns.core
 
 object Messages {
 
-  // When less than two letters or numbers is filled in Record Name Filter field in RecordSetSearch page
-  val RecordNameFilterError = "Record Name Filter field must contain at least two letters or numbers to perform a RecordSet Search."
+  // Error displayed when less than two letters or numbers is filled in Record Name Filter field in RecordSetSearch page
+  val RecordNameFilterError =
+    "Record Name Filter field must contain at least two letters or numbers to perform a RecordSet Search."
 
-  // When creating group with name that already exists
-  // s"Group with name $name already exists. Please try a different name or contact ${existingGroup.email} to be added to the group."
-  val GroupAlreadyExistsError = s"Group with name {TestGroup} already exists. Please try a different name or contact {test@test.com} to be added to the group."
+  /*
+   * Error displayed when attempting to create group with name that already exists
+   *
+   * Placeholders:
+   * 1. [string] group name
+   * 2. [string] group email address
+   */
+  val GroupAlreadyExistsErrorMsg =
+    "Group with name %s already exists. Please try a different name or contact %s to be added to the group."
 
-  // When deleting a group being the admin of a zone
-  // s"${group.name} is the admin of a zone. Cannot delete. Please transfer the ownership to another group before deleting."
-  val ZoneAdminError = s"{TestGroup} is the admin of a zone. Cannot delete. Please transfer the ownership to another group before deleting."
+  /*
+   * Error displayed when deleting a group being the admin of a zone
+   *
+   * Placeholders:
+   * 1. [string] group name
+   */
+  val ZoneAdminError =
+    "%s is the admin of a zone. Cannot delete. Please transfer the ownership to another group before deleting."
 
-  // When deleting a group being the owner for a record set
-  // s"${group.name} is the owner for a record set including $rsId. Cannot delete. Please transfer the ownership to another group before deleting.
-  val RecordSetOwnerError = s"{TestGroup} is the owner for a record set including {RS_ID}. Cannot delete. Please transfer the ownership to another group before deleting."
+  /*
+   * Error displayed when deleting a group being the owner for a record set
+   *
+   * Placeholders:
+   * 1. [string] group name
+   * 2. [string] record set id
+   */
+  val RecordSetOwnerError =
+    "%s is the owner for a record set including %s. Cannot delete. Please transfer the ownership to another group before deleting."
 
-  // When deleting a group which has an ACL rule for a zone
-  // s"${group.name} has an ACL rule for a zone including $zId. Cannot delete. Please transfer the ownership to another group before deleting."
-  val ACLRuleError = s"{TestGroup} has an ACL rule for a zone including {Z_ID}. Cannot delete. Please transfer the ownership to another group before deleting."
+  /*
+   * Error displayed when deleting a group which has an ACL rule for a zone
+   *
+   * Placeholders:
+   * 1. [string] group name
+   * 2. [string] zone id
+   */
+  val ACLRuleError =
+    "%s has an ACL rule for a zone including %s. Cannot delete. Please transfer the ownership to another group before deleting."
 
-  // When NSData field is not a positive integer
+  // Error displayed when NSData field is not a positive integer
   val NSDataError = "NS data must be a positive integer"
 
-  // When importing files other than .csv
+  // Error displayed when importing files other than .csv
   val ImportError = "Import failed. Not a valid file. File should be of ‘.csv’ type."
 
-  // When user is not authorized to make changes to the record
-  // s"""User "$userName" is not authorized. Contact ${ownerType.toString.toLowerCase} owner group:
-  // |${ownerGroupName.getOrElse(ownerGroupId)} at ${contactEmail.getOrElse("")} to make DNS changes.
-  // |You must be a part of the owner group to make DNS changes.""".stripMargin .replaceAll("\n", " ")
-  val NotAuthorizedError = s"""User {"dummy"} is not authorized. Contact {zone} owner group: {ok-group} at
-    {test@test.com} to make DNS changes. You must be a part of the owner group to make DNS changes."""
+  /*
+   * Error displayed when user is not authorized to make changes to the record
+   *
+   * Placeholders:
+   * 1. [string] user name
+   * 2. [string] owner type
+   * 3. [string] owner group name | owner group id
+   * 4. [string] contact email
+   */
+  val NotAuthorizedErrorMsg =
+    "User '%s' is not authorized. Contact %s owner group: %s at %s to make DNS changes."
 
 }

diff --git a/modules/core/src/main/scala/vinyldns/core/domain/DomainValidationErrors.scala b/modules/core/src/main/scala/vinyldns/core/domain/DomainValidationErrors.scala
index 591d5c7cd..91486ce79 100644
--- a/modules/core/src/main/scala/vinyldns/core/domain/DomainValidationErrors.scala
+++ b/modules/core/src/main/scala/vinyldns/core/domain/DomainValidationErrors.scala
@@ -19,6 +19,7 @@ package vinyldns.core.domain
 import vinyldns.core.domain.batch.OwnerType.OwnerType
 import vinyldns.core.domain.record.{RecordData, RecordSet, RecordType}
 import vinyldns.core.domain.record.RecordType.RecordType
+import vinyldns.core.Messages._
 
 // $COVERAGE-OFF$
 sealed abstract class DomainValidationError(val isFatal: Boolean = true) {
@@ -134,9 +135,12 @@ final case class UserIsNotAuthorizedError(
     ownerGroupName: Option[String] = None
 ) extends DomainValidationError {
   def message: String =
-    s"""User "$userName" is not authorized. Contact ${ownerType.toString.toLowerCase} owner group:
-       |${ownerGroupName.getOrElse(ownerGroupId)} at ${contactEmail.getOrElse("")}.""".stripMargin
-      .replaceAll("\n", " ")
+    NotAuthorizedErrorMsg.format(
+      userName,
+      ownerType.toString.toLowerCase,
+      ownerGroupName.getOrElse(ownerGroupId),
+      contactEmail.getOrElse("")
+    )
 }
 
 final case class RecordNameNotUniqueInBatch(name: String, typ: RecordType)

diff --git a/modules/portal/public/lib/dns-change/dns-change-new.controller.js b/modules/portal/public/lib/dns-change/dns-change-new.controller.js
index 70848ee76..a0234c0ba 100644
--- a/modules/portal/public/lib/dns-change/dns-change-new.controller.js
+++ b/modules/portal/public/lib/dns-change/dns-change-new.controller.js
@@ -177,7 +177,7 @@
                     $scope.$apply()
                     resolve($scope.newBatch.changes.length);
                 } else {
-                    reject("Import failed. Not a valid file.");
+                    reject("Import failed. Not a valid file. File should be of ‘.csv’ type.");
                 }
             }
             reader.readAsText(file);

From c9d30a5082e6bda01743827bbd7c3d45680bc6ec Mon Sep 17 00:00:00 2001
From: Aravindh R
Date: Mon, 30 Aug 2021 16:49:05 +0530
Subject: [PATCH 07/82] Update messages

---
 .../batch/create_batch_change_test.py         | 84 +++++++++----------
 .../main/scala/vinyldns/core/Messages.scala   |  3 +-
 2 files changed, 43 insertions(+), 44 deletions(-)

diff --git a/modules/api/functional_test/live_tests/batch/create_batch_change_test.py b/modules/api/functional_test/live_tests/batch/create_batch_change_test.py
index cc470a17b..dee2013f0 100644
--- a/modules/api/functional_test/live_tests/batch/create_batch_change_test.py
+++ b/modules/api/functional_test/live_tests/batch/create_batch_change_test.py
@@ -1399,15 +1399,15 @@ def test_create_batch_change_with_readonly_user_fails(shared_zone_test_context):
         errors = dummy_client.create_batch_change(batch_change_input, status=400)
 
         assert_failed_change_in_error_response(errors[0], input_name="relative.ok.", record_data="4.5.6.7",
-                                               error_messages=['User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com.'])
+                                               error_messages=['User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com to make DNS changes.'])
         assert_failed_change_in_error_response(errors[1], input_name="delete.ok.", change_type="DeleteRecordSet", record_data="4.5.6.7",
-                                               error_messages=['User "dummy" is not authorized. Contact zone owner group: ok-group at test@test.com.'])
+                                               error_messages=['User "dummy" is not authorized. Contact zone owner group: ok-group at test@test.com to make DNS changes.'])
         assert_failed_change_in_error_response(errors[2], input_name="update.ok.", record_data="1.2.3.4",
-                                               error_messages=['User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com.'])
+                                               error_messages=['User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com to make DNS changes.'])
         assert_failed_change_in_error_response(errors[3], input_name="update.ok.", change_type="DeleteRecordSet", record_data=None,
-                                               error_messages=['User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com.'])
+                                               error_messages=['User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com to make DNS changes.'])
     finally:
         clear_ok_acl_rules(shared_zone_test_context)
         clear_recordset_list(to_delete, ok_client)
@@ -1504,7 +1504,7 @@ def test_a_recordtype_add_checks(shared_zone_test_context):
                                                                "CNAME Conflict: CNAME record names must be unique. Existing record with name \"" + existing_cname_fqdn + "\" and type \"CNAME\" conflicts with this record."])
         assert_failed_change_in_error_response(response[9], input_name="user-add-unauthorized.dummy.", record_data="1.2.3.4",
-                                               error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com."])
+                                               error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes."])
 
     finally:
         clear_recordset_list(to_delete, client)
@@ -1644,16 +1644,16 @@ def test_a_recordtype_update_delete_checks(shared_zone_test_context):
                                                'Record "non-existent.ok." Does Not Exist: cannot delete a record that does not exist.'])
         assert_failed_change_in_error_response(response[11], input_name=rs_delete_dummy_fqdn, change_type="DeleteRecordSet",
-                                               error_messages=['User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com.'])
+                                               error_messages=['User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes.'])
         assert_failed_change_in_error_response(response[12], input_name=rs_update_dummy_fqdn, change_type="DeleteRecordSet",
-                                               error_messages=['User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com.'])
+                                               error_messages=['User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes.'])
         assert_failed_change_in_error_response(response[13], input_name=rs_update_dummy_fqdn, ttl=300,
-                                               error_messages=['User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com.'])
+                                               error_messages=['User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes.'])
         assert_failed_change_in_error_response(response[14], input_name=rs_update_dummy_with_owner_fqdn, change_type="DeleteRecordSet",
-                                               error_messages=['User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com.'])
+                                               error_messages=['User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes.'])
         assert_failed_change_in_error_response(response[15], input_name=rs_update_dummy_with_owner_fqdn, ttl=300,
-                                               error_messages=['User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com.'])
+                                               error_messages=['User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes.'])
 
     finally:
         # Clean up updates
@@ -1756,7 +1756,7 @@ def test_aaaa_recordtype_add_checks(shared_zone_test_context):
                                                                "CNAME Conflict: CNAME record names must be unique. Existing record with name \"" + existing_cname_fqdn+ "\" and type \"CNAME\" conflicts with this record."])
         assert_failed_change_in_error_response(response[9], input_name="user-add-unauthorized.dummy.", record_type="AAAA", record_data="1::1",
-                                               error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com."])
+                                               error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes."])
 
     finally:
         clear_recordset_list(to_delete, client)
@@ -1881,13 +1881,13 @@ def test_aaaa_recordtype_update_delete_checks(shared_zone_test_context):
                                                    record_type="AAAA", record_data="1::1")
         assert_failed_change_in_error_response(response[11], input_name=rs_delete_dummy_fqdn, record_type="AAAA", record_data=None, change_type="DeleteRecordSet",
-                                               error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com."])
+                                               error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes."])
         assert_failed_change_in_error_response(response[12], input_name=rs_update_dummy_fqdn, record_type="AAAA", record_data="1::1",
-                                               error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com."])
+                                               error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes."])
         assert_failed_change_in_error_response(response[13], input_name=rs_update_dummy_fqdn, record_type="AAAA", record_data=None, change_type="DeleteRecordSet",
-                                               error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com."])
+                                               error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes."])
 
     finally:
         # Clean up updates
@@ -2042,7 +2042,7 @@ def test_cname_recordtype_add_checks(shared_zone_test_context):
                                                                "CNAME Conflict: CNAME record names must be unique. Existing record with name \"" + existing_reverse_fqdn + "\" and type \"PTR\" conflicts with this record."])
         assert_failed_change_in_error_response(response[16], input_name="user-add-unauthorized.dummy.", record_type="CNAME", record_data="test.com.",
-                                               error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com."])
+                                               error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes."])
 
     finally:
         clear_recordset_list(to_delete, client)
@@ -2173,13 +2173,13 @@ def test_cname_recordtype_update_delete_checks(shared_zone_test_context):
                                                    record_type="CNAME", record_data="test.com.")
         assert_failed_change_in_error_response(response[13], input_name="delete-unauthorized3.dummy.", record_type="CNAME", change_type="DeleteRecordSet",
-                                               error_messages=['User "ok" is not authorized. Contact zone owner group: dummy-group at test@test.com.'])
+                                               error_messages=['User "ok" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes.'])
         assert_failed_change_in_error_response(response[14], input_name="update-unauthorized3.dummy.", record_type="CNAME", change_type="DeleteRecordSet",
-                                               error_messages=['User "ok" is not authorized. Contact zone owner group: dummy-group at test@test.com.'])
+                                               error_messages=['User "ok" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes.'])
         assert_failed_change_in_error_response(response[15], input_name="update-unauthorized3.dummy.", record_type="CNAME", ttl=300, record_data="test.com.",
-                                               error_messages=['User "ok" is not authorized. Contact zone owner group: dummy-group at test@test.com.'])
+                                               error_messages=['User "ok" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes.'])
         assert_successful_change_in_error_response(response[16], input_name="existing-cname2.parent.com.", record_type="CNAME", change_type="DeleteRecordSet")
         assert_failed_change_in_error_response(response[17], input_name="existing-cname2.parent.com.",
@@ -2235,19 +2235,19 @@ def test_ptr_recordtype_auth_checks(shared_zone_test_context):
 
         assert_failed_change_in_error_response(errors[0], input_name="192.0.2.5", record_type="PTR", record_data="not.authorized.ipv4.ptr.base.",
-                                               error_messages=["User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com."])
+                                               error_messages=["User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com to make DNS changes."])
         assert_failed_change_in_error_response(errors[1], input_name="192.0.2.193", record_type="PTR", record_data="not.authorized.ipv4.ptr.classless.delegation.",
-                                               error_messages=["User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com."])
+                                               error_messages=["User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com to make DNS changes."])
        assert_failed_change_in_error_response(errors[2], input_name="fd69:27cc:fe91:1000::1234", record_type="PTR", record_data="not.authorized.ipv6.ptr.",
-                                               error_messages=["User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com."])
+                                               error_messages=["User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com to make DNS changes."])
         assert_failed_change_in_error_response(errors[3], input_name="192.0.2.25", record_type="PTR", record_data=None, change_type="DeleteRecordSet",
-                                               error_messages=["User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com."])
+                                               error_messages=["User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com to make DNS changes."])
         assert_failed_change_in_error_response(errors[4], input_name="fd69:27cc:fe91:1000::1234", record_type="PTR", record_data=None, change_type="DeleteRecordSet",
-                                               error_messages=["User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com."])
+                                               error_messages=["User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com to make DNS changes."])
 
     finally:
         clear_recordset_list(to_delete, ok_client)
@@ -2721,7 +2721,7 @@ def test_txt_recordtype_add_checks(shared_zone_test_context):
                                                                "CNAME Conflict: CNAME record names must be unique. Existing record with name \"" + existing_cname_fqdn + "\" and type \"CNAME\" conflicts with this record."])
         assert_failed_change_in_error_response(response[7], input_name="user-add-unauthorized.dummy.", record_type="TXT", record_data="test",
-                                               error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com."])
+                                               error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes."])
 
     finally:
         clear_recordset_list(to_delete, client)
@@ -2832,13 +2832,13 @@ def test_txt_recordtype_update_delete_checks(shared_zone_test_context):
                                                    record_data="test")
         assert_failed_change_in_error_response(response[9], input_name=rs_delete_dummy_fqdn, record_type="TXT", record_data=None, change_type="DeleteRecordSet",
-                                               error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com."])
+                                               error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes."])
         assert_failed_change_in_error_response(response[10], input_name=rs_update_dummy_fqdn, record_type="TXT", record_data="test",
-                                               error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com."])
+                                               error_messages=["User \"ok\" is not authorized. 
Contact zone owner group: dummy-group at test@test.com to make DNS changes."]) assert_failed_change_in_error_response(response[11], input_name=rs_update_dummy_fqdn, record_type="TXT", record_data=None, change_type="DeleteRecordSet", - error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com."]) + error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes."]) finally: # Clean up updates @@ -2944,7 +2944,7 @@ def test_mx_recordtype_add_checks(shared_zone_test_context): "CNAME Conflict: CNAME record names must be unique. Existing record with name \"" + existing_cname_fqdn + "\" and type \"CNAME\" conflicts with this record."]) assert_failed_change_in_error_response(response[10], input_name="user-add-unauthorized.dummy.", record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, - error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com."]) + error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes."]) finally: clear_recordset_list(to_delete, client) @@ -3068,13 +3068,13 @@ def test_mx_recordtype_update_delete_checks(shared_zone_test_context): record_data={"preference": 1000, "exchange": "foo.bar."}) assert_failed_change_in_error_response(response[11], input_name=rs_delete_dummy_fqdn, record_type="MX", record_data=None, change_type="DeleteRecordSet", - error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com."]) + error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes."]) assert_failed_change_in_error_response(response[12], input_name=rs_update_dummy_fqdn, record_type="MX", record_data={"preference": 1000, "exchange": "foo.bar."}, - error_messages=["User \"ok\" is not authorized. 
Contact zone owner group: dummy-group at test@test.com."]) + error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes."]) assert_failed_change_in_error_response(response[13], input_name=rs_update_dummy_fqdn, record_type="MX", record_data=None, change_type="DeleteRecordSet", - error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com."]) + error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes."]) finally: # Clean up updates @@ -3107,16 +3107,16 @@ def test_user_validation_ownership(shared_zone_test_context): response = client.create_batch_change(batch_change_input, status=400) assert_failed_change_in_error_response(response[0], input_name="add-test-batch.non.test.shared.", record_data="1.1.1.1", - error_messages=["User \"sharedZoneUser\" is not authorized. Contact zone owner group: testSharedZoneGroup at email."]) + error_messages=["User \"sharedZoneUser\" is not authorized. Contact zone owner group: testSharedZoneGroup at email to make DNS changes."]) assert_failed_change_in_error_response(response[1], input_name="update-test-batch.non.test.shared.", change_type="DeleteRecordSet", - error_messages=["User \"sharedZoneUser\" is not authorized. Contact zone owner group: testSharedZoneGroup at email."]) + error_messages=["User \"sharedZoneUser\" is not authorized. Contact zone owner group: testSharedZoneGroup at email to make DNS changes."]) assert_failed_change_in_error_response(response[2], input_name="update-test-batch.non.test.shared.", record_data="1.1.1.1", - error_messages=["User \"sharedZoneUser\" is not authorized. Contact zone owner group: testSharedZoneGroup at email."]) + error_messages=["User \"sharedZoneUser\" is not authorized. 
Contact zone owner group: testSharedZoneGroup at email to make DNS changes."]) assert_failed_change_in_error_response(response[3], input_name="delete-test-batch.non.test.shared.", change_type="DeleteRecordSet", - error_messages=["User \"sharedZoneUser\" is not authorized. Contact zone owner group: testSharedZoneGroup at email."]) + error_messages=["User \"sharedZoneUser\" is not authorized. Contact zone owner group: testSharedZoneGroup at email to make DNS changes."]) assert_successful_change_in_error_response(response[4], input_name="add-test-batch.shared.") assert_successful_change_in_error_response(response[5], input_name="update-test-batch.shared.", @@ -3144,16 +3144,16 @@ def test_user_validation_shared(shared_zone_test_context): response = client.create_batch_change(batch_change_input, status=400) assert_failed_change_in_error_response(response[0], input_name="add-test-batch.non.test.shared.", record_data="1.1.1.1", - error_messages=["User \"ok\" is not authorized. Contact zone owner group: testSharedZoneGroup at email."]) + error_messages=["User \"ok\" is not authorized. Contact zone owner group: testSharedZoneGroup at email to make DNS changes."]) assert_failed_change_in_error_response(response[1], input_name="update-test-batch.non.test.shared.", change_type="DeleteRecordSet", - error_messages=["User \"ok\" is not authorized. Contact zone owner group: testSharedZoneGroup at email."]) + error_messages=["User \"ok\" is not authorized. Contact zone owner group: testSharedZoneGroup at email to make DNS changes."]) assert_failed_change_in_error_response(response[2], input_name="update-test-batch.non.test.shared.", record_data="1.1.1.1", - error_messages=["User \"ok\" is not authorized. Contact zone owner group: testSharedZoneGroup at email."]) + error_messages=["User \"ok\" is not authorized. 
Contact zone owner group: testSharedZoneGroup at email to make DNS changes."]) assert_failed_change_in_error_response(response[3], input_name="delete-test-batch.non.test.shared.", change_type="DeleteRecordSet", - error_messages=["User \"ok\" is not authorized. Contact zone owner group: testSharedZoneGroup at email."]) + error_messages=["User \"ok\" is not authorized. Contact zone owner group: testSharedZoneGroup at email to make DNS changes."]) def test_create_batch_change_does_not_save_owner_group_id_for_non_shared_zone(shared_zone_test_context): @@ -3636,7 +3636,7 @@ def test_create_batch_delete_recordset_for_unassociated_user_not_in_owner_group_ assert_failed_change_in_error_response(response[0], input_name=shared_delete_fqdn, change_type="DeleteRecordSet", - error_messages=['User "list-group-user" is not authorized. Contact record owner group: record-ownergroup at test@test.com.']) + error_messages=['User "list-group-user" is not authorized. Contact record owner group: record-ownergroup at test@test.com to make DNS changes.']) finally: if create_rs: @@ -3824,7 +3824,7 @@ def test_create_batch_with_irrelevant_global_acl_rule_applied_fails(shared_zone_ response = test_user_client.create_batch_change(batch_change_input, status=400) assert_failed_change_in_error_response(response[0], input_name=a_fqdn, record_type="A", change_type="Add", record_data="192.0.2.45", - error_messages=['User "testuser" is not authorized. Contact record owner group: testSharedZoneGroup at email.']) + error_messages=['User "testuser" is not authorized. 
Contact record owner group: testSharedZoneGroup at email to make DNS changes.']) finally: if create_a_rs: @@ -3957,7 +3957,7 @@ def test_create_batch_delete_record_access_checks(shared_zone_test_context): assert_successful_change_in_error_response(response[3], input_name=txt_update_fqdn, record_type="TXT", record_data="test", change_type="DeleteRecordSet") assert_successful_change_in_error_response(response[4], input_name=txt_update_fqdn, record_type="TXT", record_data="updated text") assert_failed_change_in_error_response(response[5], input_name=txt_delete_fqdn, record_type="TXT", record_data="test", change_type="DeleteRecordSet", - error_messages=['User "dummy" is not authorized. Contact zone owner group: ok-group at test@test.com.']) + error_messages=['User "dummy" is not authorized. Contact zone owner group: ok-group at test@test.com to make DNS changes.']) finally: clear_ok_acl_rules(shared_zone_test_context) diff --git a/modules/core/src/main/scala/vinyldns/core/Messages.scala b/modules/core/src/main/scala/vinyldns/core/Messages.scala index 74ce1e61a..b5b3d6ef2 100644 --- a/modules/core/src/main/scala/vinyldns/core/Messages.scala +++ b/modules/core/src/main/scala/vinyldns/core/Messages.scala @@ -76,7 +76,6 @@ object Messages { * 3. [string] owner group name | owner group id * 4. [string] contact email */ - val NotAuthorizedErrorMsg = - "User '%s' is not authorized. Contact %s owner group: %s at %s to make DNS changes." + val NotAuthorizedErrorMsg = "User \"%s\" is not authorized. Contact %s owner group: %s at %s to make DNS changes." 
} From ce4095ca0c1f7cacdd03b40cc51611eabee56d0b Mon Sep 17 00:00:00 2001 From: Aravindh R <61419792+Aravindh-Raju@users.noreply.github.com> Date: Mon, 30 Aug 2021 16:58:49 +0530 Subject: [PATCH 08/82] Update Messages.scala --- .../src/main/scala/vinyldns/core/Messages.scala | 15 +++++---------- 1 file changed, 5 insertions(+), 10 deletions(-) diff --git a/modules/core/src/main/scala/vinyldns/core/Messages.scala b/modules/core/src/main/scala/vinyldns/core/Messages.scala index b5b3d6ef2..4275b1c88 100644 --- a/modules/core/src/main/scala/vinyldns/core/Messages.scala +++ b/modules/core/src/main/scala/vinyldns/core/Messages.scala @@ -19,8 +19,7 @@ package vinyldns.core object Messages { // Error displayed when less than two letters or numbers is filled in Record Name Filter field in RecordSetSearch page - val RecordNameFilterError = - "Record Name Filter field must contain at least two letters or numbers to perform a RecordSet Search." + val RecordNameFilterError = "Record Name Filter field must contain at least two letters or numbers to perform a RecordSet Search." /* * Error displayed when attempting to create group with name that already exists @@ -29,8 +28,7 @@ object Messages { * 1. [string] group name * 2. [string] group email address */ - val GroupAlreadyExistsErrorMsg = - "Group with name %s already exists. Please try a different name or contact %s to be added to the group." + val GroupAlreadyExistsErrorMsg = "Group with name %s already exists. Please try a different name or contact %s to be added to the group." /* * Error displayed when deleting a group being the admin of a zone @@ -38,8 +36,7 @@ object Messages { * Placeholders: * 1. [string] group name */ - val ZoneAdminError = - "%s is the admin of a zone. Cannot delete. Please transfer the ownership to another group before deleting." + val ZoneAdminError = "%s is the admin of a zone. Cannot delete. Please transfer the ownership to another group before deleting." 
/* * Error displayed when deleting a group being the owner for a record set @@ -48,8 +45,7 @@ object Messages { * 1. [string] group name * 2. [string] record set id */ - val RecordSetOwnerError = - "%s is the owner for a record set including %s. Cannot delete. Please transfer the ownership to another group before deleting." + val RecordSetOwnerError = "%s is the owner for a record set including %s. Cannot delete. Please transfer the ownership to another group before deleting." /* * Error displayed when deleting a group which has an ACL rule for a zone @@ -58,8 +54,7 @@ object Messages { * 1. [string] group name * 2. [string] zone id */ - val ACLRuleError = - "%s has an ACL rule for a zone including %s. Cannot delete. Please transfer the ownership to another group before deleting." + val ACLRuleError = "%s has an ACL rule for a zone including %s. Cannot delete. Please transfer the ownership to another group before deleting." // Error displayed when NSData field is not a positive integer val NSDataError = "NS data must be a positive integer" From d0695ba9e72f6ec81fd91f26888365121744bb09 Mon Sep 17 00:00:00 2001 From: Ryan Emerle Date: Wed, 1 Sep 2021 09:33:08 -0400 Subject: [PATCH 09/82] Update issue templates --- .github/ISSUE_TEMPLATE/bug_report.md | 3 +++ .github/ISSUE_TEMPLATE/feature_request.md | 3 +++ .github/ISSUE_TEMPLATE/maintenance-request.md | 3 +++ 3 files changed, 9 insertions(+) diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md index d94ba7ac2..13b98dbb4 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.md +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -1,6 +1,9 @@ --- name: Bug report about: Create a report to help us improve +title: '' +labels: status/needs-label +assignees: '' --- diff --git a/.github/ISSUE_TEMPLATE/feature_request.md b/.github/ISSUE_TEMPLATE/feature_request.md index 066b2d920..5b34c2991 100644 --- a/.github/ISSUE_TEMPLATE/feature_request.md +++ b/.github/ISSUE_TEMPLATE/feature_request.md @@ -1,6 
+1,9 @@ --- name: Feature request about: Suggest an idea for this project +title: '' +labels: status/needs-label +assignees: '' --- diff --git a/.github/ISSUE_TEMPLATE/maintenance-request.md b/.github/ISSUE_TEMPLATE/maintenance-request.md index fca91f282..29214cefb 100644 --- a/.github/ISSUE_TEMPLATE/maintenance-request.md +++ b/.github/ISSUE_TEMPLATE/maintenance-request.md @@ -1,6 +1,9 @@ --- name: Maintenance request about: Suggest an upgrade, refactoring, code move, new library +title: '' +labels: status/needs-label +assignees: '' --- From 731e2bc87371cad9e5541caaef2edb2324c5faff Mon Sep 17 00:00:00 2001 From: "Emerle, Ryan" Date: Tue, 28 Sep 2021 12:32:51 -0400 Subject: [PATCH 10/82] WIP - Functional Test Updates - Update tests to Python 3.x - Setup partitions to allow for parallel testing - Partition bind zones - Update `docker/api/docker.conf` to include partitioned zones - Replace AWS request signer with upgraded `boto3` signer - Replace launcher script with one that instantiates the virtualenv - Add `--enable-safety_check` to check for modifications to zone data - Add `--resolver-ip` to allow for specification of a different resolver for the tests versus what gets sent to the API - This is helpful when the tests are not running in the same network as the API - Ex: `./run.sh --dns-ip=172.19.0.4 --resolver-ip=127.0.0.1:19001` where --- docker/api/docker.conf | 55 +- .../bind9/etc/_template/named.partition.conf | 186 ++ docker/bind9/etc/named.conf.local | 199 +- docker/bind9/etc/named.partition1.conf | 186 ++ docker/bind9/etc/named.partition2.conf | 186 ++ docker/bind9/etc/named.partition3.conf | 186 ++ docker/bind9/etc/named.partition4.conf | 186 ++ .../0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa | 12 + .../1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa | 13 + .../bind9/zones/_template/10.10.in-addr.arpa | 10 + .../_template/192^30.2.0.192.in-addr.arpa | 11 + .../zones/_template/2.0.192.in-addr.arpa | 15 + .../zones/_template/child.parent.com.hosts | 9 + 
.../zones/_template/dskey.example.com.hosts | 9 + docker/bind9/zones/_template/dummy.hosts | 15 + .../bind9/zones/_template/example.com.hosts | 10 + .../bind9/zones/_template/invalid-zone.hosts | 17 + .../bind9/zones/_template/list-records.hosts | 38 + .../list-zones-test-searched-1.hosts | 8 + .../list-zones-test-searched-2.hosts | 8 + .../list-zones-test-searched-3.hosts | 8 + .../list-zones-test-unfiltered-1.hosts | 8 + .../list-zones-test-unfiltered-2.hosts | 8 + .../zones/_template/non.test.shared.hosts | 13 + docker/bind9/zones/_template/not.loaded.hosts | 9 + docker/bind9/zones/_template/ok.hosts | 16 + docker/bind9/zones/_template/old-shared.hosts | 14 + .../bind9/zones/_template/old-vinyldns2.hosts | 14 + .../bind9/zones/_template/old-vinyldns3.hosts | 14 + .../zones/_template/one-time-shared.hosts | 8 + docker/bind9/zones/_template/one-time.hosts | 14 + docker/bind9/zones/_template/open.hosts | 8 + docker/bind9/zones/_template/parent.com.hosts | 15 + docker/bind9/zones/_template/shared.hosts | 16 + docker/bind9/zones/_template/sync-test.hosts | 17 + .../zones/_template/system-test-history.hosts | 14 + .../bind9/zones/_template/system-test.hosts | 16 + docker/bind9/zones/_template/vinyldns.hosts | 14 + .../zone.requires.review.hosts} | 6 +- docker/bind9/zones/child.parent.com.hosts | 9 - docker/bind9/zones/dskey.example.com.hosts | 9 - docker/bind9/zones/example.com.hosts | 10 - .../zones/list-zones-test-searched-1.hosts | 8 - .../zones/list-zones-test-searched-2.hosts | 8 - .../zones/list-zones-test-searched-3.hosts | 8 - .../zones/list-zones-test-unfiltered-1.hosts | 8 - .../zones/list-zones-test-unfiltered-2.hosts | 8 - docker/bind9/zones/one-time-shared.hosts | 8 - docker/bind9/zones/open.hosts | 8 - .../0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa | 0 .../1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa | 0 .../zones/{ => partition1}/10.10.in-addr.arpa | 4 +- .../partition1/192^30.2.0.192.in-addr.arpa | 11 + .../zones/partition1/2.0.192.in-addr.arpa | 15 + 
.../zones/partition1/child.parent.com.hosts | 9 + .../zones/partition1/dskey.example.com.hosts | 9 + .../bind9/zones/{ => partition1}/dummy.hosts | 4 +- .../bind9/zones/partition1/example.com.hosts | 10 + .../zones/{ => partition1}/invalid-zone.hosts | 10 +- .../zones/{ => partition1}/list-records.hosts | 4 +- .../list-zones-test-searched-1.hosts | 8 + .../list-zones-test-searched-2.hosts | 8 + .../list-zones-test-searched-3.hosts | 8 + .../list-zones-test-unfiltered-1.hosts | 8 + .../list-zones-test-unfiltered-2.hosts | 8 + .../zones/partition1/non.test.shared.hosts | 13 + .../zones/{ => partition1}/not.loaded.hosts | 4 +- docker/bind9/zones/{ => partition1}/ok.hosts | 4 +- .../old-shared.hosts} | 4 +- .../zones/partition1/old-vinyldns2.hosts | 14 + .../zones/partition1/old-vinyldns3.hosts | 14 + .../zones/partition1/one-time-shared.hosts | 8 + .../zones/{ => partition1}/one-time.hosts | 4 +- docker/bind9/zones/partition1/open.hosts | 8 + .../zones/{ => partition1}/parent.com.hosts | 6 +- .../bind9/zones/{ => partition1}/shared.hosts | 4 +- docker/bind9/zones/partition1/sync-test.hosts | 17 + .../system-test-history.hosts | 4 +- .../zones/{ => partition1}/system-test.hosts | 4 +- .../vinyldns.hosts} | 4 +- .../zone.requires.review.hosts | 4 +- .../0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa | 12 + .../1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa | 13 + .../bind9/zones/partition2/10.10.in-addr.arpa | 10 + .../192^30.2.0.192.in-addr.arpa | 0 .../{ => partition2}/2.0.192.in-addr.arpa | 0 .../zones/partition2/child.parent.com.hosts | 9 + .../zones/partition2/dskey.example.com.hosts | 9 + docker/bind9/zones/partition2/dummy.hosts | 15 + .../bind9/zones/partition2/example.com.hosts | 10 + .../bind9/zones/partition2/invalid-zone.hosts | 17 + .../bind9/zones/partition2/list-records.hosts | 38 + .../list-zones-test-searched-1.hosts | 8 + .../list-zones-test-searched-2.hosts | 8 + .../list-zones-test-searched-3.hosts | 8 + .../list-zones-test-unfiltered-1.hosts | 8 + 
.../list-zones-test-unfiltered-2.hosts | 8 + .../zones/partition2/non.test.shared.hosts | 13 + .../bind9/zones/partition2/not.loaded.hosts | 9 + docker/bind9/zones/partition2/ok.hosts | 16 + .../old-shared.hosts} | 4 +- .../zones/partition2/old-vinyldns2.hosts | 14 + .../zones/partition2/old-vinyldns3.hosts | 14 + .../zones/partition2/one-time-shared.hosts | 8 + .../one-time.hosts} | 4 +- docker/bind9/zones/partition2/open.hosts | 8 + .../bind9/zones/partition2/parent.com.hosts | 15 + docker/bind9/zones/partition2/shared.hosts | 16 + docker/bind9/zones/partition2/sync-test.hosts | 17 + .../partition2/system-test-history.hosts | 14 + .../bind9/zones/partition2/system-test.hosts | 16 + docker/bind9/zones/partition2/vinyldns.hosts | 14 + .../partition2/zone.requires.review.hosts | 11 + .../0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa | 12 + .../1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa | 13 + .../bind9/zones/partition3/10.10.in-addr.arpa | 10 + .../partition3/192^30.2.0.192.in-addr.arpa | 11 + .../zones/partition3/2.0.192.in-addr.arpa | 15 + .../zones/partition3/child.parent.com.hosts | 9 + .../zones/partition3/dskey.example.com.hosts | 9 + docker/bind9/zones/partition3/dummy.hosts | 15 + .../bind9/zones/partition3/example.com.hosts | 10 + .../bind9/zones/partition3/invalid-zone.hosts | 17 + .../bind9/zones/partition3/list-records.hosts | 38 + .../list-zones-test-searched-1.hosts | 8 + .../list-zones-test-searched-2.hosts | 8 + .../list-zones-test-searched-3.hosts | 8 + .../list-zones-test-unfiltered-1.hosts | 8 + .../list-zones-test-unfiltered-2.hosts | 8 + .../zones/partition3/non.test.shared.hosts | 13 + .../bind9/zones/partition3/not.loaded.hosts | 9 + docker/bind9/zones/partition3/ok.hosts | 16 + .../bind9/zones/partition3/old-shared.hosts | 14 + .../zones/partition3/old-vinyldns2.hosts | 14 + .../zones/partition3/old-vinyldns3.hosts | 14 + .../zones/partition3/one-time-shared.hosts | 8 + docker/bind9/zones/partition3/one-time.hosts | 14 + 
docker/bind9/zones/partition3/open.hosts | 8 + .../bind9/zones/partition3/parent.com.hosts | 15 + docker/bind9/zones/partition3/shared.hosts | 16 + docker/bind9/zones/partition3/sync-test.hosts | 17 + .../partition3/system-test-history.hosts | 14 + .../bind9/zones/partition3/system-test.hosts | 16 + docker/bind9/zones/partition3/vinyldns.hosts | 14 + .../partition3/zone.requires.review.hosts | 11 + .../0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa | 12 + .../1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa | 13 + .../bind9/zones/partition4/10.10.in-addr.arpa | 10 + .../partition4/192^30.2.0.192.in-addr.arpa | 11 + .../zones/partition4/2.0.192.in-addr.arpa | 15 + .../zones/partition4/child.parent.com.hosts | 9 + .../zones/partition4/dskey.example.com.hosts | 9 + docker/bind9/zones/partition4/dummy.hosts | 15 + .../bind9/zones/partition4/example.com.hosts | 10 + .../bind9/zones/partition4/invalid-zone.hosts | 17 + .../bind9/zones/partition4/list-records.hosts | 38 + .../list-zones-test-searched-1.hosts | 8 + .../list-zones-test-searched-2.hosts | 8 + .../list-zones-test-searched-3.hosts | 8 + .../list-zones-test-unfiltered-1.hosts | 8 + .../list-zones-test-unfiltered-2.hosts | 8 + .../zones/partition4/non.test.shared.hosts | 13 + .../bind9/zones/partition4/not.loaded.hosts | 9 + docker/bind9/zones/partition4/ok.hosts | 16 + .../bind9/zones/partition4/old-shared.hosts | 14 + .../zones/partition4/old-vinyldns2.hosts | 14 + .../zones/partition4/old-vinyldns3.hosts | 14 + .../zones/partition4/one-time-shared.hosts | 8 + docker/bind9/zones/partition4/one-time.hosts | 14 + docker/bind9/zones/partition4/open.hosts | 8 + .../bind9/zones/partition4/parent.com.hosts | 15 + docker/bind9/zones/partition4/shared.hosts | 16 + docker/bind9/zones/partition4/sync-test.hosts | 17 + .../partition4/system-test-history.hosts | 14 + .../bind9/zones/partition4/system-test.hosts | 16 + docker/bind9/zones/partition4/vinyldns.hosts | 14 + .../partition4/zone.requires.review.hosts | 11 + 
docker/bind9/zones/sync-test.hosts | 17 - modules/api/functional_test/.gitignore | 2 + modules/api/functional_test/__init__.py | 0 .../api/functional_test/aws_request_signer.py | 34 + modules/api/functional_test/bootstrap.sh | 12 - .../functional_test/boto_request_signer.py | 103 - modules/api/functional_test/conftest.py | 165 +- .../live_tests/authentication_test.py | 18 +- .../batch/approve_batch_change_test.py | 132 +- .../batch/cancel_batch_change_test.py | 62 +- .../batch/create_batch_change_test.py | 2504 +++++++++-------- .../live_tests/batch/get_batch_change_test.py | 72 +- .../batch/list_batch_change_summaries_test.py | 71 +- .../batch/reject_batch_change_test.py | 59 +- .../functional_test/live_tests/conftest.py | 36 +- .../live_tests/internal/status_test.py | 44 +- .../list_batch_summaries_test_context.py | 80 +- .../live_tests/list_groups_test_context.py | 28 +- .../list_recordsets_test_context.py | 109 +- .../live_tests/list_zones_test_context.py | 85 +- .../membership/create_group_test.py | 172 +- .../membership/delete_group_test.py | 102 +- .../membership/get_group_changes_test.py | 130 +- .../live_tests/membership/get_group_test.py | 60 +- .../membership/list_group_admins_test.py | 54 +- .../membership/list_group_members_test.py | 530 ++-- .../membership/list_my_groups_test.py | 120 +- .../membership/update_group_test.py | 562 ++-- .../live_tests/production_verify_test.py | 64 +- .../recordsets/create_recordset_test.py | 1601 +++++------ .../recordsets/delete_recordset_test.py | 494 ++-- .../recordsets/get_recordset_test.py | 146 +- .../recordsets/list_recordset_changes_test.py | 98 +- .../recordsets/list_recordsets_test.py | 226 +- .../recordsets/update_recordset_test.py | 1268 ++++----- .../live_tests/shared_zone_test_context.py | 1082 +++---- .../functional_test/live_tests/test_data.py | 148 +- .../live_tests/zones/create_zone_test.py | 532 ++-- .../live_tests/zones/delete_zone_test.py | 84 +- .../live_tests/zones/get_zone_test.py | 100 +- 
 .../zones/list_zone_changes_test.py | 57 +-
 .../live_tests/zones/list_zones_test.py | 112 +-
 .../live_tests/zones/sync_zone_test.py | 345 ++-
 .../live_tests/zones/update_zone_test.py | 700 +++--
 .../perf_tests/uat_sync_test.py | 103 +-
 modules/api/functional_test/pytest.ini | 3 +-
 modules/api/functional_test/pytest.sh | 32 +
 modules/api/functional_test/requirements.txt | 25 +-
 modules/api/functional_test/run.py | 23 -
 modules/api/functional_test/run.sh | 13 +
 modules/api/functional_test/utils.py | 270 +-
 .../api/functional_test/vinyldns_context.py | 26 +-
 .../api/functional_test/vinyldns_python.py | 461 ++-
 modules/api/functional_test/zone_inject.py | 48 -
 231 files changed, 9487 insertions(+), 7034 deletions(-)
 create mode 100644 docker/bind9/etc/_template/named.partition.conf
 create mode 100644 docker/bind9/etc/named.partition1.conf
 create mode 100644 docker/bind9/etc/named.partition2.conf
 create mode 100644 docker/bind9/etc/named.partition3.conf
 create mode 100644 docker/bind9/etc/named.partition4.conf
 create mode 100644 docker/bind9/zones/_template/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
 create mode 100644 docker/bind9/zones/_template/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
 create mode 100644 docker/bind9/zones/_template/10.10.in-addr.arpa
 create mode 100644 docker/bind9/zones/_template/192^30.2.0.192.in-addr.arpa
 create mode 100644 docker/bind9/zones/_template/2.0.192.in-addr.arpa
 create mode 100644 docker/bind9/zones/_template/child.parent.com.hosts
 create mode 100644 docker/bind9/zones/_template/dskey.example.com.hosts
 create mode 100644 docker/bind9/zones/_template/dummy.hosts
 create mode 100644 docker/bind9/zones/_template/example.com.hosts
 create mode 100644 docker/bind9/zones/_template/invalid-zone.hosts
 create mode 100644 docker/bind9/zones/_template/list-records.hosts
 create mode 100644 docker/bind9/zones/_template/list-zones-test-searched-1.hosts
 create mode 100644 docker/bind9/zones/_template/list-zones-test-searched-2.hosts
 create mode 100644 docker/bind9/zones/_template/list-zones-test-searched-3.hosts
 create mode 100644 docker/bind9/zones/_template/list-zones-test-unfiltered-1.hosts
 create mode 100644 docker/bind9/zones/_template/list-zones-test-unfiltered-2.hosts
 create mode 100644 docker/bind9/zones/_template/non.test.shared.hosts
 create mode 100644 docker/bind9/zones/_template/not.loaded.hosts
 create mode 100644 docker/bind9/zones/_template/ok.hosts
 create mode 100644 docker/bind9/zones/_template/old-shared.hosts
 create mode 100644 docker/bind9/zones/_template/old-vinyldns2.hosts
 create mode 100644 docker/bind9/zones/_template/old-vinyldns3.hosts
 create mode 100644 docker/bind9/zones/_template/one-time-shared.hosts
 create mode 100644 docker/bind9/zones/_template/one-time.hosts
 create mode 100644 docker/bind9/zones/_template/open.hosts
 create mode 100644 docker/bind9/zones/_template/parent.com.hosts
 create mode 100644 docker/bind9/zones/_template/shared.hosts
 create mode 100644 docker/bind9/zones/_template/sync-test.hosts
 create mode 100644 docker/bind9/zones/_template/system-test-history.hosts
 create mode 100644 docker/bind9/zones/_template/system-test.hosts
 create mode 100644 docker/bind9/zones/_template/vinyldns.hosts
 rename docker/bind9/zones/{non.test.shared.hosts => _template/zone.requires.review.hosts} (50%)
 mode change 100755 => 100644
 delete mode 100644 docker/bind9/zones/child.parent.com.hosts
 delete mode 100644 docker/bind9/zones/dskey.example.com.hosts
 delete mode 100644 docker/bind9/zones/example.com.hosts
 delete mode 100644 docker/bind9/zones/list-zones-test-searched-1.hosts
 delete mode 100644 docker/bind9/zones/list-zones-test-searched-2.hosts
 delete mode 100644 docker/bind9/zones/list-zones-test-searched-3.hosts
 delete mode 100755 docker/bind9/zones/list-zones-test-unfiltered-1.hosts
 delete mode 100755 docker/bind9/zones/list-zones-test-unfiltered-2.hosts
 delete mode 100755 docker/bind9/zones/one-time-shared.hosts
 delete mode 100644 docker/bind9/zones/open.hosts
 rename docker/bind9/zones/{ => partition1}/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa (100%)
 mode change 100755 => 100644
 rename docker/bind9/zones/{ => partition1}/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa (100%)
 mode change 100755 => 100644
 rename docker/bind9/zones/{ => partition1}/10.10.in-addr.arpa (53%)
 mode change 100755 => 100644
 create mode 100644 docker/bind9/zones/partition1/192^30.2.0.192.in-addr.arpa
 create mode 100644 docker/bind9/zones/partition1/2.0.192.in-addr.arpa
 create mode 100644 docker/bind9/zones/partition1/child.parent.com.hosts
 create mode 100644 docker/bind9/zones/partition1/dskey.example.com.hosts
 rename docker/bind9/zones/{ => partition1}/dummy.hosts (77%)
 create mode 100644 docker/bind9/zones/partition1/example.com.hosts
 rename docker/bind9/zones/{ => partition1}/invalid-zone.hosts (51%)
 rename docker/bind9/zones/{ => partition1}/list-records.hosts (93%)
 create mode 100644 docker/bind9/zones/partition1/list-zones-test-searched-1.hosts
 create mode 100644 docker/bind9/zones/partition1/list-zones-test-searched-2.hosts
 create mode 100644 docker/bind9/zones/partition1/list-zones-test-searched-3.hosts
 create mode 100644 docker/bind9/zones/partition1/list-zones-test-unfiltered-1.hosts
 create mode 100644 docker/bind9/zones/partition1/list-zones-test-unfiltered-2.hosts
 create mode 100644 docker/bind9/zones/partition1/non.test.shared.hosts
 rename docker/bind9/zones/{ => partition1}/not.loaded.hosts (51%)
 rename docker/bind9/zones/{ => partition1}/ok.hosts (80%)
 mode change 100755 => 100644
 rename docker/bind9/zones/{old-vinyldns3.hosts => partition1/old-shared.hosts} (72%)
 mode change 100755 => 100644
 create mode 100644 docker/bind9/zones/partition1/old-vinyldns2.hosts
 create mode 100644 docker/bind9/zones/partition1/old-vinyldns3.hosts
 create mode 100644 docker/bind9/zones/partition1/one-time-shared.hosts
 rename docker/bind9/zones/{ => partition1}/one-time.hosts (74%)
 mode change 100755 => 100644
 create mode 100644 docker/bind9/zones/partition1/open.hosts
 rename docker/bind9/zones/{ => partition1}/parent.com.hosts (71%)
 mode change 100755 => 100644
 rename docker/bind9/zones/{ => partition1}/shared.hosts (80%)
 mode change 100755 => 100644
 create mode 100644 docker/bind9/zones/partition1/sync-test.hosts
 rename docker/bind9/zones/{ => partition1}/system-test-history.hosts (70%)
 mode change 100755 => 100644
 rename docker/bind9/zones/{ => partition1}/system-test.hosts (79%)
 mode change 100755 => 100644
 rename docker/bind9/zones/{old-shared.hosts => partition1/vinyldns.hosts} (73%)
 mode change 100755 => 100644
 rename docker/bind9/zones/{ => partition1}/zone.requires.review.hosts (62%)
 create mode 100644 docker/bind9/zones/partition2/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
 create mode 100644 docker/bind9/zones/partition2/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
 create mode 100644 docker/bind9/zones/partition2/10.10.in-addr.arpa
 rename docker/bind9/zones/{ => partition2}/192^30.2.0.192.in-addr.arpa (100%)
 rename docker/bind9/zones/{ => partition2}/2.0.192.in-addr.arpa (100%)
 create mode 100644 docker/bind9/zones/partition2/child.parent.com.hosts
 create mode 100644 docker/bind9/zones/partition2/dskey.example.com.hosts
 create mode 100644 docker/bind9/zones/partition2/dummy.hosts
 create mode 100644 docker/bind9/zones/partition2/example.com.hosts
 create mode 100644 docker/bind9/zones/partition2/invalid-zone.hosts
 create mode 100644 docker/bind9/zones/partition2/list-records.hosts
 create mode 100644 docker/bind9/zones/partition2/list-zones-test-searched-1.hosts
 create mode 100644 docker/bind9/zones/partition2/list-zones-test-searched-2.hosts
 create mode 100644 docker/bind9/zones/partition2/list-zones-test-searched-3.hosts
 create mode 100644 docker/bind9/zones/partition2/list-zones-test-unfiltered-1.hosts
 create mode 100644 docker/bind9/zones/partition2/list-zones-test-unfiltered-2.hosts
 create mode 100644 docker/bind9/zones/partition2/non.test.shared.hosts
 create mode 100644 docker/bind9/zones/partition2/not.loaded.hosts
 create mode 100644 docker/bind9/zones/partition2/ok.hosts
 rename docker/bind9/zones/{old-vinyldns2.hosts => partition2/old-shared.hosts} (72%)
 mode change 100755 => 100644
 create mode 100644 docker/bind9/zones/partition2/old-vinyldns2.hosts
 create mode 100644 docker/bind9/zones/partition2/old-vinyldns3.hosts
 create mode 100644 docker/bind9/zones/partition2/one-time-shared.hosts
 rename docker/bind9/zones/{vinyldns.hosts => partition2/one-time.hosts} (74%)
 create mode 100644 docker/bind9/zones/partition2/open.hosts
 create mode 100644 docker/bind9/zones/partition2/parent.com.hosts
 create mode 100644 docker/bind9/zones/partition2/shared.hosts
 create mode 100644 docker/bind9/zones/partition2/sync-test.hosts
 create mode 100644 docker/bind9/zones/partition2/system-test-history.hosts
 create mode 100644 docker/bind9/zones/partition2/system-test.hosts
 create mode 100644 docker/bind9/zones/partition2/vinyldns.hosts
 create mode 100644 docker/bind9/zones/partition2/zone.requires.review.hosts
 create mode 100644 docker/bind9/zones/partition3/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
 create mode 100644 docker/bind9/zones/partition3/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
 create mode 100644 docker/bind9/zones/partition3/10.10.in-addr.arpa
 create mode 100644 docker/bind9/zones/partition3/192^30.2.0.192.in-addr.arpa
 create mode 100644 docker/bind9/zones/partition3/2.0.192.in-addr.arpa
 create mode 100644 docker/bind9/zones/partition3/child.parent.com.hosts
 create mode 100644 docker/bind9/zones/partition3/dskey.example.com.hosts
 create mode 100644 docker/bind9/zones/partition3/dummy.hosts
 create mode 100644 docker/bind9/zones/partition3/example.com.hosts
 create mode 100644 docker/bind9/zones/partition3/invalid-zone.hosts
 create mode 100644 docker/bind9/zones/partition3/list-records.hosts
 create mode 100644 docker/bind9/zones/partition3/list-zones-test-searched-1.hosts
 create mode 100644 docker/bind9/zones/partition3/list-zones-test-searched-2.hosts
 create mode 100644 docker/bind9/zones/partition3/list-zones-test-searched-3.hosts
 create mode 100644 docker/bind9/zones/partition3/list-zones-test-unfiltered-1.hosts
 create mode 100644 docker/bind9/zones/partition3/list-zones-test-unfiltered-2.hosts
 create mode 100644 docker/bind9/zones/partition3/non.test.shared.hosts
 create mode 100644 docker/bind9/zones/partition3/not.loaded.hosts
 create mode 100644 docker/bind9/zones/partition3/ok.hosts
 create mode 100644 docker/bind9/zones/partition3/old-shared.hosts
 create mode 100644 docker/bind9/zones/partition3/old-vinyldns2.hosts
 create mode 100644 docker/bind9/zones/partition3/old-vinyldns3.hosts
 create mode 100644 docker/bind9/zones/partition3/one-time-shared.hosts
 create mode 100644 docker/bind9/zones/partition3/one-time.hosts
 create mode 100644 docker/bind9/zones/partition3/open.hosts
 create mode 100644 docker/bind9/zones/partition3/parent.com.hosts
 create mode 100644 docker/bind9/zones/partition3/shared.hosts
 create mode 100644 docker/bind9/zones/partition3/sync-test.hosts
 create mode 100644 docker/bind9/zones/partition3/system-test-history.hosts
 create mode 100644 docker/bind9/zones/partition3/system-test.hosts
 create mode 100644 docker/bind9/zones/partition3/vinyldns.hosts
 create mode 100644 docker/bind9/zones/partition3/zone.requires.review.hosts
 create mode 100644 docker/bind9/zones/partition4/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
 create mode 100644 docker/bind9/zones/partition4/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
 create mode 100644 docker/bind9/zones/partition4/10.10.in-addr.arpa
 create mode 100644 docker/bind9/zones/partition4/192^30.2.0.192.in-addr.arpa
 create mode 100644 docker/bind9/zones/partition4/2.0.192.in-addr.arpa
 create mode 100644 docker/bind9/zones/partition4/child.parent.com.hosts
 create mode 100644 docker/bind9/zones/partition4/dskey.example.com.hosts
 create mode 100644 docker/bind9/zones/partition4/dummy.hosts
 create mode 100644 docker/bind9/zones/partition4/example.com.hosts
 create mode 100644 docker/bind9/zones/partition4/invalid-zone.hosts
 create mode 100644 docker/bind9/zones/partition4/list-records.hosts
 create mode 100644 docker/bind9/zones/partition4/list-zones-test-searched-1.hosts
 create mode 100644 docker/bind9/zones/partition4/list-zones-test-searched-2.hosts
 create mode 100644 docker/bind9/zones/partition4/list-zones-test-searched-3.hosts
 create mode 100644 docker/bind9/zones/partition4/list-zones-test-unfiltered-1.hosts
 create mode 100644 docker/bind9/zones/partition4/list-zones-test-unfiltered-2.hosts
 create mode 100644 docker/bind9/zones/partition4/non.test.shared.hosts
 create mode 100644 docker/bind9/zones/partition4/not.loaded.hosts
 create mode 100644 docker/bind9/zones/partition4/ok.hosts
 create mode 100644 docker/bind9/zones/partition4/old-shared.hosts
 create mode 100644 docker/bind9/zones/partition4/old-vinyldns2.hosts
 create mode 100644 docker/bind9/zones/partition4/old-vinyldns3.hosts
 create mode 100644 docker/bind9/zones/partition4/one-time-shared.hosts
 create mode 100644 docker/bind9/zones/partition4/one-time.hosts
 create mode 100644 docker/bind9/zones/partition4/open.hosts
 create mode 100644 docker/bind9/zones/partition4/parent.com.hosts
 create mode 100644 docker/bind9/zones/partition4/shared.hosts
 create mode 100644 docker/bind9/zones/partition4/sync-test.hosts
 create mode 100644 docker/bind9/zones/partition4/system-test-history.hosts
 create mode 100644 docker/bind9/zones/partition4/system-test.hosts
 create mode 100644 docker/bind9/zones/partition4/vinyldns.hosts
 create mode 100644 docker/bind9/zones/partition4/zone.requires.review.hosts
 delete mode 100755 docker/bind9/zones/sync-test.hosts
 create mode 100755 modules/api/functional_test/.gitignore
 create mode 100755 modules/api/functional_test/__init__.py
 create mode 100644 modules/api/functional_test/aws_request_signer.py
 delete mode 100755 modules/api/functional_test/bootstrap.sh
 delete mode 100644 modules/api/functional_test/boto_request_signer.py
 create mode 100644 modules/api/functional_test/pytest.sh
 delete mode 100755 modules/api/functional_test/run.py
 create mode 100644 modules/api/functional_test/run.sh
 delete mode 100644 modules/api/functional_test/zone_inject.py

diff --git a/docker/api/docker.conf b/docker/api/docker.conf
index a0ec1d680..9a851577a 100644
--- a/docker/api/docker.conf
+++ b/docker/api/docker.conf
@@ -132,6 +132,15 @@ vinyldns {
   sync-delay = 10000
 
+  approved-name-servers = [
+    "172.17.42.1.",
+    "ns1.parent.com."
+    "ns1.parent.com1."
+    "ns1.parent.com2."
+    "ns1.parent.com3."
+    "ns1.parent.com4."
+  ]
+
   crypto {
     type = "vinyldns.core.crypto.NoOpCrypto"
   }
@@ -221,13 +230,57 @@ vinyldns {
     "needs-review.*"
   ]
   ip-list = [
+    "192.0.1.254",
+    "192.0.1.255",
     "192.0.2.254",
     "192.0.2.255",
+    "192.0.3.254",
+    "192.0.3.255",
+    "192.0.4.254",
+    "192.0.4.255",
     "fd69:27cc:fe91:0:0:0:ffff:1",
-    "fd69:27cc:fe91:0:0:0:ffff:2"
+    "fd69:27cc:fe91:0:0:0:ffff:2",
+    "fd69:27cc:fe92:0:0:0:ffff:1",
+    "fd69:27cc:fe92:0:0:0:ffff:2",
+    "fd69:27cc:fe93:0:0:0:ffff:1",
+    "fd69:27cc:fe93:0:0:0:ffff:2",
+    "fd69:27cc:fe94:0:0:0:ffff:1",
+    "fd69:27cc:fe94:0:0:0:ffff:2"
   ]
   zone-name-list = [
     "zone.requires.review."
+    "zone.requires.review1."
+    "zone.requires.review2."
+    "zone.requires.review3."
+    "zone.requires.review4."
+  ]
+  }
+
+  # FQDNs / IPs that cannot be modified via VinylDNS
+  # regex-list used for all record types except PTR
+  # ip-list used exclusively for PTR records
+  high-value-domains = {
+    regex-list = [
+      "high-value-domain.*" # for testing
+    ]
+    ip-list = [
+      # using reverse zones in the vinyldns/bind9 docker image for testing
+      "192.0.1.252",
+      "192.0.1.253",
+      "192.0.2.252",
+      "192.0.2.253",
+      "192.0.3.252",
+      "192.0.3.253",
+      "192.0.4.252",
+      "192.0.4.253",
+      "fd69:27cc:fe91:0:0:0:0:ffff",
+      "fd69:27cc:fe91:0:0:0:ffff:0",
+      "fd69:27cc:fe92:0:0:0:0:ffff",
+      "fd69:27cc:fe92:0:0:0:ffff:0",
+      "fd69:27cc:fe93:0:0:0:0:ffff",
+      "fd69:27cc:fe93:0:0:0:ffff:0",
+      "fd69:27cc:fe94:0:0:0:0:ffff",
+      "fd69:27cc:fe94:0:0:0:ffff:0"
     ]
   }

diff --git a/docker/bind9/etc/_template/named.partition.conf b/docker/bind9/etc/_template/named.partition.conf
new file mode 100644
index 000000000..bc957bed4
--- /dev/null
+++ b/docker/bind9/etc/_template/named.partition.conf
@@ -0,0 +1,186 @@
+zone "vinyldns{partition}" {
+    type master;
+    file "/var/bind/partition{partition}/vinyldns.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "old-vinyldns2{partition}" {
+    type master;
+    file "/var/bind/partition{partition}/old-vinyldns2.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "old-vinyldns3{partition}" {
+    type master;
+    file "/var/bind/partition{partition}/old-vinyldns3.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "dummy{partition}" {
+    type master;
+    file "/var/bind/partition{partition}/dummy.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "ok{partition}" {
+    type master;
+    file "/var/bind/partition{partition}/ok.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "shared{partition}" {
+    type master;
+    file "/var/bind/partition{partition}/shared.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "non.test.shared{partition}" {
+    type master;
+    file "/var/bind/partition{partition}/non.test.shared.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "system-test{partition}" {
+    type master;
+    file "/var/bind/partition{partition}/system-test.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "system-test-history{partition}" {
+    type master;
+    file "/var/bind/partition{partition}/system-test-history.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "{partition}.10.in-addr.arpa" {
+    type master;
+    file "/var/bind/partition{partition}/10.10.in-addr.arpa";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "{partition}.0.192.in-addr.arpa" {
+    type master;
+    file "/var/bind/partition{partition}/2.0.192.in-addr.arpa";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "192/30.{partition}.0.192.in-addr.arpa" {
+    type master;
+    file "/var/bind/partition{partition}/192^30.2.0.192.in-addr.arpa";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "{partition}.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" {
+    type master;
+    file "/var/bind/partition{partition}/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "0.0.0.{partition}.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" {
+    type master;
+    file "/var/bind/partition{partition}/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "one-time{partition}" {
+    type master;
+    file "/var/bind/partition{partition}/one-time.hosts";
+    allow-update { key "vinyldns."; key "vinyldns-sha1."; key "vinyldns-sha224."; key "vinyldns-sha256."; key "vinyldns-sha384."; key "vinyldns-sha512."; };
+    };
+
+zone "sync-test{partition}" {
+    type master;
+    file "/var/bind/partition{partition}/sync-test.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "invalid-zone{partition}" {
+    type master;
+    file "/var/bind/partition{partition}/invalid-zone.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "list-zones-test-searched-1{partition}" {
+    type master;
+    file "/var/bind/partition{partition}/list-zones-test-searched-1.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "list-zones-test-searched-2{partition}" {
+    type master;
+    file "/var/bind/partition{partition}/list-zones-test-searched-2.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "list-zones-test-searched-3{partition}" {
+    type master;
+    file "/var/bind/partition{partition}/list-zones-test-searched-3.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "list-zones-test-unfiltered-1{partition}" {
+    type master;
+    file "/var/bind/partition{partition}/list-zones-test-unfiltered-1.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "list-zones-test-unfiltered-2{partition}" {
+    type master;
+    file "/var/bind/partition{partition}/list-zones-test-unfiltered-2.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "one-time-shared{partition}" {
+    type master;
+    file "/var/bind/partition{partition}/one-time-shared.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "parent.com{partition}" {
+    type master;
+    file "/var/bind/partition{partition}/parent.com.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "child.parent.com{partition}" {
+    type master;
+    file "/var/bind/partition{partition}/child.parent.com.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "example.com{partition}" {
+    type master;
+    file "/var/bind/partition{partition}/example.com.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "dskey.example.com{partition}" {
+    type master;
+    file "/var/bind/partition{partition}/dskey.example.com.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "not.loaded{partition}" {
+    type master;
+    file "/var/bind/partition{partition}/not.loaded.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "zone.requires.review{partition}" {
+    type master;
+    file "/var/bind/partition{partition}/zone.requires.review.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "list-records{partition}" {
+    type master;
+    file "/var/bind/partition{partition}/list-records.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "open{partition}" {
+    type master;
+    file "/var/bind/partition{partition}/open.hosts";
+    allow-update { any; };
+    allow-transfer { any; };
+    };
diff --git a/docker/bind9/etc/named.conf.local b/docker/bind9/etc/named.conf.local
index f3efc1b6c..008d4a9be 100755
--- a/docker/bind9/etc/named.conf.local
+++ b/docker/bind9/etc/named.conf.local
@@ -1,10 +1,3 @@
-//
-// Do any local configuration here
-//
-
-// Consider adding the 1918 zones here, if they are not used in your
-// organization
-//include "/etc/bind/zones.rfc1918";
 
 key "vinyldns." {
     algorithm hmac-md5;
@@ -36,192 +29,10 @@ key "vinyldns-sha512." {
     secret "xfKA0DYb88tiUGND+cWddwUg3/SugYSsdvCfBOJ1jr8MEdgbVRyrlVDEXLsfTUGorQ3ShENdymw2yw+rTr+lwA==";
 };
 
-// Consider adding the 1918 zones here, if they are not used in your
-// organization
+// Consider adding the 1918 zones here, if they are not used in your organization
 //include "/etc/bind/zones.rfc1918";
 
-zone "vinyldns" {
-    type master;
-    file "/var/bind/vinyldns.hosts";
-    allow-update { key "vinyldns."; };
-    };
-zone "old-vinyldns2" {
-    type master;
-    file "/var/bind/old-vinyldns2.hosts";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "old-vinyldns3" {
-    type master;
-    file "/var/bind/old-vinyldns3.hosts";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "dummy" {
-    type master;
-    file "/var/bind/dummy.hosts";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "ok" {
-    type master;
-    file "/var/bind/ok.hosts";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "shared" {
-    type master;
-    file "/var/bind/shared.hosts";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "non.test.shared" {
-    type master;
-    file "/var/bind/non.test.shared.hosts";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "system-test" {
-    type master;
-    file "/var/bind/system-test.hosts";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "system-test-history" {
-    type master;
-    file "/var/bind/system-test-history.hosts";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "10.10.in-addr.arpa" {
-    type master;
-    file "/var/bind/10.10.in-addr.arpa";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "2.0.192.in-addr.arpa" {
-    type master;
-    file "/var/bind/2.0.192.in-addr.arpa";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "192/30.2.0.192.in-addr.arpa" {
-    type master;
-    file "/var/bind/192^30.2.0.192.in-addr.arpa";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" {
-    type master;
-    file "/var/bind/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" {
-    type master;
-    file "/var/bind/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "one-time" {
-    type master;
-    file "/var/bind/one-time.hosts";
-    allow-update { key "vinyldns."; key "vinyldns-sha1."; key "vinyldns-sha224."; key "vinyldns-sha256."; key "vinyldns-sha384."; key "vinyldns-sha512."; };
-    };
-
-zone "sync-test" {
-    type master;
-    file "/var/bind/sync-test.hosts";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "invalid-zone" {
-    type master;
-    file "/var/bind/invalid-zone.hosts";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "list-zones-test-searched-1" {
-    type master;
-    file "/var/bind/list-zones-test-searched-1.hosts";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "list-zones-test-searched-2" {
-    type master;
-    file "/var/bind/list-zones-test-searched-2.hosts";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "list-zones-test-searched-3" {
-    type master;
-    file "/var/bind/list-zones-test-searched-3.hosts";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "list-zones-test-unfiltered-1" {
-    type master;
-    file "/var/bind/list-zones-test-unfiltered-1.hosts";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "list-zones-test-unfiltered-2" {
-    type master;
-    file "/var/bind/list-zones-test-unfiltered-2.hosts";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "one-time-shared" {
-    type master;
-    file "/var/bind/one-time-shared.hosts";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "parent.com" {
-    type master;
-    file "/var/bind/parent.com.hosts";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "child.parent.com" {
-    type master;
-    file "/var/bind/child.parent.com.hosts";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "example.com" {
-    type master;
-    file "/var/bind/example.com.hosts";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "dskey.example.com" {
-    type master;
-    file "/var/bind/dskey.example.com.hosts";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "not.loaded" {
-    type master;
-    file "/var/bind/not.loaded.hosts";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "zone.requires.review" {
-    type master;
-    file "/var/bind/zone.requires.review.hosts";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "list-records" {
-    type master;
-    file "/var/bind/list-records.hosts";
-    allow-update { key "vinyldns."; };
-    };
-
-zone "open" {
-    type master;
-    file "/var/bind/open.hosts";
-    allow-update { any; };
-    allow-transfer { any; };
-    };
\ No newline at end of file
+include "/var/cache/bind/config/named.partition1.conf";
+//include "/var/cache/bind/config/named.partition2.conf";
+//include "/var/cache/bind/config/named.partition3.conf";
+//include "/var/cache/bind/config/named.partition4.conf";
diff --git a/docker/bind9/etc/named.partition1.conf b/docker/bind9/etc/named.partition1.conf
new file mode 100644
index 000000000..6f2d543a3
--- /dev/null
+++ b/docker/bind9/etc/named.partition1.conf
@@ -0,0 +1,186 @@
+zone "vinyldns1" {
+    type master;
+    file "/var/bind/partition1/vinyldns.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "old-vinyldns21" {
+    type master;
+    file "/var/bind/partition1/old-vinyldns2.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "old-vinyldns31" {
+    type master;
+    file "/var/bind/partition1/old-vinyldns3.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "dummy1" {
+    type master;
+    file "/var/bind/partition1/dummy.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "ok1" {
+    type master;
+    file "/var/bind/partition1/ok.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "shared1" {
+    type master;
+    file "/var/bind/partition1/shared.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "non.test.shared1" {
+    type master;
+    file "/var/bind/partition1/non.test.shared.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "system-test1" {
+    type master;
+    file "/var/bind/partition1/system-test.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "system-test-history1" {
+    type master;
+    file "/var/bind/partition1/system-test-history.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "1.10.in-addr.arpa" {
+    type master;
+    file "/var/bind/partition1/10.10.in-addr.arpa";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "1.0.192.in-addr.arpa" {
+    type master;
+    file "/var/bind/partition1/2.0.192.in-addr.arpa";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "192/30.1.0.192.in-addr.arpa" {
+    type master;
+    file "/var/bind/partition1/192^30.2.0.192.in-addr.arpa";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" {
+    type master;
+    file "/var/bind/partition1/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" {
+    type master;
+    file "/var/bind/partition1/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "one-time1" {
+    type master;
+    file "/var/bind/partition1/one-time.hosts";
+    allow-update { key "vinyldns."; key "vinyldns-sha1."; key "vinyldns-sha224."; key "vinyldns-sha256."; key "vinyldns-sha384."; key "vinyldns-sha512."; };
+    };
+
+zone "sync-test1" {
+    type master;
+    file "/var/bind/partition1/sync-test.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "invalid-zone1" {
+    type master;
+    file "/var/bind/partition1/invalid-zone.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "list-zones-test-searched-11" {
+    type master;
+    file "/var/bind/partition1/list-zones-test-searched-1.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "list-zones-test-searched-21" {
+    type master;
+    file "/var/bind/partition1/list-zones-test-searched-2.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "list-zones-test-searched-31" {
+    type master;
+    file "/var/bind/partition1/list-zones-test-searched-3.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "list-zones-test-unfiltered-11" {
+    type master;
+    file "/var/bind/partition1/list-zones-test-unfiltered-1.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "list-zones-test-unfiltered-21" {
+    type master;
+    file "/var/bind/partition1/list-zones-test-unfiltered-2.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "one-time-shared1" {
+    type master;
+    file "/var/bind/partition1/one-time-shared.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "parent.com1" {
+    type master;
+    file "/var/bind/partition1/parent.com.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "child.parent.com1" {
+    type master;
+    file "/var/bind/partition1/child.parent.com.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "example.com1" {
+    type master;
+    file "/var/bind/partition1/example.com.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "dskey.example.com1" {
+    type master;
+    file "/var/bind/partition1/dskey.example.com.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "not.loaded1" {
+    type master;
+    file "/var/bind/partition1/not.loaded.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "zone.requires.review1" {
+    type master;
+    file "/var/bind/partition1/zone.requires.review.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "list-records1" {
+    type master;
+    file "/var/bind/partition1/list-records.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "open1" {
+    type master;
+    file "/var/bind/partition1/open.hosts";
+    allow-update { any; };
+    allow-transfer { any; };
+    };
diff --git a/docker/bind9/etc/named.partition2.conf b/docker/bind9/etc/named.partition2.conf
new file mode 100644
index 000000000..d297d4e4a
--- /dev/null
+++ b/docker/bind9/etc/named.partition2.conf
@@ -0,0 +1,186 @@
+zone "vinyldns2" {
+    type master;
+    file "/var/bind/partition2/vinyldns.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "old-vinyldns22" {
+    type master;
+    file "/var/bind/partition2/old-vinyldns2.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "old-vinyldns32" {
+    type master;
+    file "/var/bind/partition2/old-vinyldns3.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "dummy2" {
+    type master;
+    file "/var/bind/partition2/dummy.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "ok2" {
+    type master;
+    file "/var/bind/partition2/ok.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "shared2" {
+    type master;
+    file "/var/bind/partition2/shared.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "non.test.shared2" {
+    type master;
+    file "/var/bind/partition2/non.test.shared.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "system-test2" {
+    type master;
+    file "/var/bind/partition2/system-test.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "system-test-history2" {
+    type master;
+    file "/var/bind/partition2/system-test-history.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "2.10.in-addr.arpa" {
+    type master;
+    file "/var/bind/partition2/10.10.in-addr.arpa";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "2.0.192.in-addr.arpa" {
+    type master;
+    file "/var/bind/partition2/2.0.192.in-addr.arpa";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "192/30.2.0.192.in-addr.arpa" {
+    type master;
+    file "/var/bind/partition2/192^30.2.0.192.in-addr.arpa";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "2.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" {
+    type master;
+    file "/var/bind/partition2/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "0.0.0.2.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" {
+    type master;
+    file "/var/bind/partition2/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "one-time2" {
+    type master;
+    file "/var/bind/partition2/one-time.hosts";
+    allow-update { key "vinyldns."; key "vinyldns-sha1."; key "vinyldns-sha224."; key "vinyldns-sha256."; key "vinyldns-sha384."; key "vinyldns-sha512."; };
+    };
+
+zone "sync-test2" {
+    type master;
+    file "/var/bind/partition2/sync-test.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "invalid-zone2" {
+    type master;
+    file "/var/bind/partition2/invalid-zone.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "list-zones-test-searched-12" {
+    type master;
+    file "/var/bind/partition2/list-zones-test-searched-1.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "list-zones-test-searched-22" {
+    type master;
+    file "/var/bind/partition2/list-zones-test-searched-2.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "list-zones-test-searched-32" {
+    type master;
+    file "/var/bind/partition2/list-zones-test-searched-3.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "list-zones-test-unfiltered-12" {
+    type master;
+    file "/var/bind/partition2/list-zones-test-unfiltered-1.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "list-zones-test-unfiltered-22" {
+    type master;
+    file "/var/bind/partition2/list-zones-test-unfiltered-2.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "one-time-shared2" {
+    type master;
+    file "/var/bind/partition2/one-time-shared.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "parent.com2" {
+    type master;
+    file "/var/bind/partition2/parent.com.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "child.parent.com2" {
+    type master;
+    file "/var/bind/partition2/child.parent.com.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "example.com2" {
+    type master;
+    file "/var/bind/partition2/example.com.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "dskey.example.com2" {
+    type master;
+    file "/var/bind/partition2/dskey.example.com.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "not.loaded2" {
+    type master;
+    file "/var/bind/partition2/not.loaded.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "zone.requires.review2" {
+    type master;
+    file "/var/bind/partition2/zone.requires.review.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "list-records2" {
+    type master;
+    file "/var/bind/partition2/list-records.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "open2" {
+    type master;
+    file "/var/bind/partition2/open.hosts";
+    allow-update { any; };
+    allow-transfer { any; };
+    };
diff --git a/docker/bind9/etc/named.partition3.conf b/docker/bind9/etc/named.partition3.conf
new file mode 100644
index 000000000..308d61cca
--- /dev/null
+++ b/docker/bind9/etc/named.partition3.conf
@@ -0,0 +1,186 @@
+zone "vinyldns3" {
+    type master;
+    file "/var/bind/partition3/vinyldns.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "old-vinyldns23" {
+    type master;
+    file "/var/bind/partition3/old-vinyldns2.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "old-vinyldns33" {
+    type master;
+    file "/var/bind/partition3/old-vinyldns3.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "dummy3" {
+    type master;
+    file "/var/bind/partition3/dummy.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "ok3" {
+    type master;
+    file "/var/bind/partition3/ok.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "shared3" {
+    type master;
+    file "/var/bind/partition3/shared.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "non.test.shared3" {
+    type master;
+    file "/var/bind/partition3/non.test.shared.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "system-test3" {
+    type master;
+    file "/var/bind/partition3/system-test.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "system-test-history3" {
+    type master;
+    file "/var/bind/partition3/system-test-history.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "3.10.in-addr.arpa" {
+    type master;
+    file "/var/bind/partition3/10.10.in-addr.arpa";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "3.0.192.in-addr.arpa" {
+    type master;
+    file "/var/bind/partition3/2.0.192.in-addr.arpa";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "192/30.3.0.192.in-addr.arpa" {
+    type master;
+    file "/var/bind/partition3/192^30.2.0.192.in-addr.arpa";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "3.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" {
+    type master;
+    file "/var/bind/partition3/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "0.0.0.3.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" {
+    type master;
+    file "/var/bind/partition3/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "one-time3" {
+    type master;
+    file "/var/bind/partition3/one-time.hosts";
+    allow-update { key "vinyldns."; key "vinyldns-sha1."; key "vinyldns-sha224."; key "vinyldns-sha256."; key "vinyldns-sha384."; key "vinyldns-sha512."; };
+    };
+
+zone "sync-test3" {
+    type master;
+    file "/var/bind/partition3/sync-test.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "invalid-zone3" {
+    type master;
+    file "/var/bind/partition3/invalid-zone.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "list-zones-test-searched-13" {
+    type master;
+    file "/var/bind/partition3/list-zones-test-searched-1.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "list-zones-test-searched-23" {
+    type master;
+    file "/var/bind/partition3/list-zones-test-searched-2.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "list-zones-test-searched-33" {
+    type master;
+    file "/var/bind/partition3/list-zones-test-searched-3.hosts";
+    allow-update { key "vinyldns."; };
+    };
+
+zone "list-zones-test-unfiltered-13" {
+    type master;
+    file "/var/bind/partition3/list-zones-test-unfiltered-1.hosts";
+ allow-update { key "vinyldns."; }; + }; + +zone "list-zones-test-unfiltered-23" { + type master; + file "/var/bind/partition3/list-zones-test-unfiltered-2.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "one-time-shared3" { + type master; + file "/var/bind/partition3/one-time-shared.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "parent.com3" { + type master; + file "/var/bind/partition3/parent.com.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "child.parent.com3" { + type master; + file "/var/bind/partition3/child.parent.com.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "example.com3" { + type master; + file "/var/bind/partition3/example.com.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "dskey.example.com3" { + type master; + file "/var/bind/partition3/dskey.example.com.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "not.loaded3" { + type master; + file "/var/bind/partition3/not.loaded.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "zone.requires.review3" { + type master; + file "/var/bind/partition3/zone.requires.review.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "list-records3" { + type master; + file "/var/bind/partition3/list-records.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "open3" { + type master; + file "/var/bind/partition3/open.hosts"; + allow-update { any; }; + allow-transfer { any; }; + }; diff --git a/docker/bind9/etc/named.partition4.conf b/docker/bind9/etc/named.partition4.conf new file mode 100644 index 000000000..b69d4a4a4 --- /dev/null +++ b/docker/bind9/etc/named.partition4.conf @@ -0,0 +1,186 @@ +zone "vinyldns4" { + type master; + file "/var/bind/partition4/vinyldns.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "old-vinyldns24" { + type master; + file "/var/bind/partition4/old-vinyldns2.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "old-vinyldns34" { + type master; + file 
"/var/bind/partition4/old-vinyldns3.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "dummy4" { + type master; + file "/var/bind/partition4/dummy.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "ok4" { + type master; + file "/var/bind/partition4/ok.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "shared4" { + type master; + file "/var/bind/partition4/shared.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "non.test.shared4" { + type master; + file "/var/bind/partition4/non.test.shared.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "system-test4" { + type master; + file "/var/bind/partition4/system-test.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "system-test-history4" { + type master; + file "/var/bind/partition4/system-test-history.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "4.10.in-addr.arpa" { + type master; + file "/var/bind/partition4/10.10.in-addr.arpa"; + allow-update { key "vinyldns."; }; + }; + +zone "4.0.192.in-addr.arpa" { + type master; + file "/var/bind/partition4/2.0.192.in-addr.arpa"; + allow-update { key "vinyldns."; }; + }; + +zone "192/30.4.0.192.in-addr.arpa" { + type master; + file "/var/bind/partition4/192^30.2.0.192.in-addr.arpa"; + allow-update { key "vinyldns."; }; + }; + +zone "4.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" { + type master; + file "/var/bind/partition4/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa"; + allow-update { key "vinyldns."; }; + }; + +zone "0.0.0.4.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" { + type master; + file "/var/bind/partition4/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa"; + allow-update { key "vinyldns."; }; + }; + +zone "one-time4" { + type master; + file "/var/bind/partition4/one-time.hosts"; + allow-update { key "vinyldns."; key "vinyldns-sha1."; key "vinyldns-sha224."; key "vinyldns-sha256."; key "vinyldns-sha384."; key "vinyldns-sha512."; }; + }; + +zone "sync-test4" { + type master; + file "/var/bind/partition4/sync-test.hosts"; + allow-update { key 
"vinyldns."; }; + }; + +zone "invalid-zone4" { + type master; + file "/var/bind/partition4/invalid-zone.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "list-zones-test-searched-14" { + type master; + file "/var/bind/partition4/list-zones-test-searched-1.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "list-zones-test-searched-24" { + type master; + file "/var/bind/partition4/list-zones-test-searched-2.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "list-zones-test-searched-34" { + type master; + file "/var/bind/partition4/list-zones-test-searched-3.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "list-zones-test-unfiltered-14" { + type master; + file "/var/bind/partition4/list-zones-test-unfiltered-1.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "list-zones-test-unfiltered-24" { + type master; + file "/var/bind/partition4/list-zones-test-unfiltered-2.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "one-time-shared4" { + type master; + file "/var/bind/partition4/one-time-shared.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "parent.com4" { + type master; + file "/var/bind/partition4/parent.com.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "child.parent.com4" { + type master; + file "/var/bind/partition4/child.parent.com.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "example.com4" { + type master; + file "/var/bind/partition4/example.com.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "dskey.example.com4" { + type master; + file "/var/bind/partition4/dskey.example.com.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "not.loaded4" { + type master; + file "/var/bind/partition4/not.loaded.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "zone.requires.review4" { + type master; + file "/var/bind/partition4/zone.requires.review.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "list-records4" { + type master; + file 
"/var/bind/partition4/list-records.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "open4" { + type master; + file "/var/bind/partition4/open.hosts"; + allow-update { any; }; + allow-transfer { any; }; + }; diff --git a/docker/bind9/zones/_template/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa b/docker/bind9/zones/_template/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa new file mode 100644 index 000000000..46ca61a46 --- /dev/null +++ b/docker/bind9/zones/_template/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa @@ -0,0 +1,12 @@ +$ttl 38400 +0.0.0.1.{partition}.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa. IN SOA 172.17.42.1. admin.vinyldns.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +0.0.0.1.{partition}.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa. IN NS 172.17.42.1. +4.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR www.vinyldns. +5.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR mail.vinyldns. +0.0.0.0.f.f.f.f.0.0.0.0.0.0.0.0 IN PTR high.value.domain.ip6. +2.0.0.0.f.f.f.f.0.0.0.0.0.0.0.0 IN PTR needs.review.domain.ip6. diff --git a/docker/bind9/zones/_template/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa b/docker/bind9/zones/_template/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa new file mode 100644 index 000000000..4eee859bd --- /dev/null +++ b/docker/bind9/zones/_template/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa @@ -0,0 +1,13 @@ +$ttl 38400 +{partition}.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa. IN SOA 172.17.42.1. admin.vinyldns.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +{partition}.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa. IN NS 172.17.42.1. +0.0.0.1 IN NS 172.17.42.1. +4.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR www.vinyldns. +5.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR mail.vinyldns. +0.0.0.0.f.f.f.f.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR high.value.domain.ip6. +2.0.0.0.f.f.f.f.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR needs.review.domain.ip6. 
diff --git a/docker/bind9/zones/_template/10.10.in-addr.arpa b/docker/bind9/zones/_template/10.10.in-addr.arpa new file mode 100644 index 000000000..67b80e585 --- /dev/null +++ b/docker/bind9/zones/_template/10.10.in-addr.arpa @@ -0,0 +1,10 @@ +$ttl 38400 +{partition}.10.in-addr.arpa. IN SOA 172.17.42.1. admin.vinyldns.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +{partition}.10.in-addr.arpa. IN NS 172.17.42.1. +24.0 IN PTR www.vinyl. +25.0 IN PTR mail.vinyl. diff --git a/docker/bind9/zones/_template/192^30.2.0.192.in-addr.arpa b/docker/bind9/zones/_template/192^30.2.0.192.in-addr.arpa new file mode 100644 index 000000000..85f09bceb --- /dev/null +++ b/docker/bind9/zones/_template/192^30.2.0.192.in-addr.arpa @@ -0,0 +1,11 @@ +$ttl 38400 +192/30.{partition}.0.192.in-addr.arpa. IN SOA 172.17.42.1. admin.vinyldns.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +192/30.{partition}.0.192.in-addr.arpa. IN NS 172.17.42.1. +192 IN PTR portal.vinyldns. +194 IN PTR mail.vinyldns. +195 IN PTR test.vinyldns. diff --git a/docker/bind9/zones/_template/2.0.192.in-addr.arpa b/docker/bind9/zones/_template/2.0.192.in-addr.arpa new file mode 100644 index 000000000..c457c2575 --- /dev/null +++ b/docker/bind9/zones/_template/2.0.192.in-addr.arpa @@ -0,0 +1,15 @@ +$ttl 38400 +{partition}.0.192.in-addr.arpa. IN SOA 172.17.42.1. admin.vinyldns.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +{partition}.0.192.in-addr.arpa. IN NS 172.17.42.1. +192/30 IN NS 172.17.42.1. +192 IN CNAME 192.192/30.2.0.192.in-addr.arpa. +193 IN CNAME 193.192/30.2.0.192.in-addr.arpa. +194 IN CNAME 194.192/30.2.0.192.in-addr.arpa. +195 IN CNAME 195.192/30.2.0.192.in-addr.arpa. +253 IN PTR high.value.domain.ip4. 
+255 IN PTR needs.review.domain.ip4 diff --git a/docker/bind9/zones/_template/child.parent.com.hosts b/docker/bind9/zones/_template/child.parent.com.hosts new file mode 100644 index 000000000..478bea811 --- /dev/null +++ b/docker/bind9/zones/_template/child.parent.com.hosts @@ -0,0 +1,9 @@ +$ttl 38400 +$ORIGIN child.parent.com{partition}. +@ IN SOA ns1.parent.com{partition}. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +@ IN NS ns1.parent.com{partition}. diff --git a/docker/bind9/zones/_template/dskey.example.com.hosts b/docker/bind9/zones/_template/dskey.example.com.hosts new file mode 100644 index 000000000..2e2fabc90 --- /dev/null +++ b/docker/bind9/zones/_template/dskey.example.com.hosts @@ -0,0 +1,9 @@ +$TTL 1h +$ORIGIN dskey.example.com{partition}. +@ IN SOA ns1.parent.com{partition}. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +dskey.example.com{partition}. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/_template/dummy.hosts b/docker/bind9/zones/_template/dummy.hosts new file mode 100644 index 000000000..8c72fe987 --- /dev/null +++ b/docker/bind9/zones/_template/dummy.hosts @@ -0,0 +1,15 @@ +$ttl 38400 +dummy{partition}. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +dummy{partition}. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 +non-approved-delegation IN NS 7.7.7.7 diff --git a/docker/bind9/zones/_template/example.com.hosts b/docker/bind9/zones/_template/example.com.hosts new file mode 100644 index 000000000..83b4aad2d --- /dev/null +++ b/docker/bind9/zones/_template/example.com.hosts @@ -0,0 +1,10 @@ +$TTL 1h +$ORIGIN example.com{partition}. +@ IN SOA ns1.parent.com{partition}. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +example.com{partition}. IN NS 172.17.42.1. +dskey IN NS 172.17.42.1. 
diff --git a/docker/bind9/zones/_template/invalid-zone.hosts b/docker/bind9/zones/_template/invalid-zone.hosts new file mode 100644 index 000000000..d3e2f1efe --- /dev/null +++ b/docker/bind9/zones/_template/invalid-zone.hosts @@ -0,0 +1,17 @@ +$ttl 38400 +invalid-zone{partition}. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +invalid-zone{partition}. IN NS 172.17.42.1. +invalid-zone{partition}. IN NS not-approved.thing.com. +invalid.child.invalid-zone{partition}. IN NS 172.17.42.1. +dotted.host.invalid-zone{partition}. IN A 1.2.3.4 +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/bind9/zones/_template/list-records.hosts b/docker/bind9/zones/_template/list-records.hosts new file mode 100644 index 000000000..d5d8b44c6 --- /dev/null +++ b/docker/bind9/zones/_template/list-records.hosts @@ -0,0 +1,38 @@ +$ttl 38400 +list-records{partition}. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +list-records{partition}. IN NS 172.17.42.1. +00-test-list-recordsets-0-A IN A 10.1.1.1 +00-test-list-recordsets-0-A IN A 10.2.2.2 +00-test-list-recordsets-0-CNAME IN CNAME cname1. +00-test-list-recordsets-1-A IN A 10.1.1.1 +00-test-list-recordsets-1-A IN A 10.2.2.2 +00-test-list-recordsets-1-CNAME IN CNAME cname1. +00-test-list-recordsets-2-A IN A 10.1.1.1 +00-test-list-recordsets-2-A IN A 10.2.2.2 +00-test-list-recordsets-2-CNAME IN CNAME cname1. +00-test-list-recordsets-3-A IN A 10.1.1.1 +00-test-list-recordsets-3-A IN A 10.2.2.2 +00-test-list-recordsets-3-CNAME IN CNAME cname1. +00-test-list-recordsets-4-A IN A 10.1.1.1 +00-test-list-recordsets-4-A IN A 10.2.2.2 +00-test-list-recordsets-4-CNAME IN CNAME cname1. +00-test-list-recordsets-5-A IN A 10.1.1.1 +00-test-list-recordsets-5-A IN A 10.2.2.2 +00-test-list-recordsets-5-CNAME IN CNAME cname1. 
+00-test-list-recordsets-6-A IN A 10.1.1.1 +00-test-list-recordsets-6-A IN A 10.2.2.2 +00-test-list-recordsets-6-CNAME IN CNAME cname1. +00-test-list-recordsets-7-A IN A 10.1.1.1 +00-test-list-recordsets-7-A IN A 10.2.2.2 +00-test-list-recordsets-7-CNAME IN CNAME cname1. +00-test-list-recordsets-8-A IN A 10.1.1.1 +00-test-list-recordsets-8-A IN A 10.2.2.2 +00-test-list-recordsets-8-CNAME IN CNAME cname1. +00-test-list-recordsets-9-A IN A 10.1.1.1 +00-test-list-recordsets-9-A IN A 10.2.2.2 +00-test-list-recordsets-9-CNAME IN CNAME cname1. diff --git a/docker/bind9/zones/_template/list-zones-test-searched-1.hosts b/docker/bind9/zones/_template/list-zones-test-searched-1.hosts new file mode 100644 index 000000000..a9de7daa1 --- /dev/null +++ b/docker/bind9/zones/_template/list-zones-test-searched-1.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +list-zones-test-searched-1{partition}. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +list-zones-test-searched-1{partition}. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/_template/list-zones-test-searched-2.hosts b/docker/bind9/zones/_template/list-zones-test-searched-2.hosts new file mode 100644 index 000000000..881467a92 --- /dev/null +++ b/docker/bind9/zones/_template/list-zones-test-searched-2.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +list-zones-test-searched-2{partition}. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +list-zones-test-searched-2{partition}. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/_template/list-zones-test-searched-3.hosts b/docker/bind9/zones/_template/list-zones-test-searched-3.hosts new file mode 100644 index 000000000..9b6513f1f --- /dev/null +++ b/docker/bind9/zones/_template/list-zones-test-searched-3.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +list-zones-test-searched-3{partition}. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +list-zones-test-searched-3{partition}. IN NS 172.17.42.1. 
diff --git a/docker/bind9/zones/_template/list-zones-test-unfiltered-1.hosts b/docker/bind9/zones/_template/list-zones-test-unfiltered-1.hosts new file mode 100644 index 000000000..dd9dd12b6 --- /dev/null +++ b/docker/bind9/zones/_template/list-zones-test-unfiltered-1.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +list-zones-test-unfiltered-1{partition}. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +list-zones-test-unfiltered-1{partition}. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/_template/list-zones-test-unfiltered-2.hosts b/docker/bind9/zones/_template/list-zones-test-unfiltered-2.hosts new file mode 100644 index 000000000..8469e24d0 --- /dev/null +++ b/docker/bind9/zones/_template/list-zones-test-unfiltered-2.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +list-zones-test-unfiltered-2{partition}. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +list-zones-test-unfiltered-2{partition}. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/_template/non.test.shared.hosts b/docker/bind9/zones/_template/non.test.shared.hosts new file mode 100644 index 000000000..462e42081 --- /dev/null +++ b/docker/bind9/zones/_template/non.test.shared.hosts @@ -0,0 +1,13 @@ +$ttl 38400 +non.test.shared{partition}. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +non.test.shared{partition}. IN NS 172.17.42.1. +@ IN A 1.1.1.1 +delete-test-batch IN A 2.2.2.2 +update-test-batch IN A 3.3.3.3 +delete-test IN A 4.4.4.4 +update-test IN A 5.5.5.5 diff --git a/docker/bind9/zones/_template/not.loaded.hosts b/docker/bind9/zones/_template/not.loaded.hosts new file mode 100644 index 000000000..ccb468620 --- /dev/null +++ b/docker/bind9/zones/_template/not.loaded.hosts @@ -0,0 +1,9 @@ +$ttl 38400 +not.loaded{partition}. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +not.loaded{partition}. IN NS 172.17.42.1. 
+foo IN A 1.1.1.1 diff --git a/docker/bind9/zones/_template/ok.hosts b/docker/bind9/zones/_template/ok.hosts new file mode 100644 index 000000000..35f75b587 --- /dev/null +++ b/docker/bind9/zones/_template/ok.hosts @@ -0,0 +1,16 @@ +$ttl 38400 +ok{partition}. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +ok{partition}. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 +dotted.a IN A 7.7.7.7 +dottedc.name IN CNAME test.example.com diff --git a/docker/bind9/zones/_template/old-shared.hosts b/docker/bind9/zones/_template/old-shared.hosts new file mode 100644 index 000000000..026d01349 --- /dev/null +++ b/docker/bind9/zones/_template/old-shared.hosts @@ -0,0 +1,14 @@ +$ttl 38400 +old-shared{partition}. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +old-shared{partition}. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/bind9/zones/_template/old-vinyldns2.hosts b/docker/bind9/zones/_template/old-vinyldns2.hosts new file mode 100644 index 000000000..a1d20aa6d --- /dev/null +++ b/docker/bind9/zones/_template/old-vinyldns2.hosts @@ -0,0 +1,14 @@ +$ttl 38400 +old-vinyldns2{partition}. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +old-vinyldns2{partition}. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/bind9/zones/_template/old-vinyldns3.hosts b/docker/bind9/zones/_template/old-vinyldns3.hosts new file mode 100644 index 000000000..277635157 --- /dev/null +++ b/docker/bind9/zones/_template/old-vinyldns3.hosts @@ -0,0 +1,14 @@ +$ttl 38400 +old-vinyldns3{partition}. IN SOA 172.17.42.1. admin.test.com. 
( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +old-vinyldns3{partition}. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/bind9/zones/_template/one-time-shared.hosts b/docker/bind9/zones/_template/one-time-shared.hosts new file mode 100644 index 000000000..0286e5ded --- /dev/null +++ b/docker/bind9/zones/_template/one-time-shared.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +one-time-shared{partition}. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +one-time-shared{partition}. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/_template/one-time.hosts b/docker/bind9/zones/_template/one-time.hosts new file mode 100644 index 000000000..df2fd08fd --- /dev/null +++ b/docker/bind9/zones/_template/one-time.hosts @@ -0,0 +1,14 @@ +$ttl 38400 +one-time{partition}. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +one-time{partition}. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/bind9/zones/_template/open.hosts b/docker/bind9/zones/_template/open.hosts new file mode 100644 index 000000000..f4661cd33 --- /dev/null +++ b/docker/bind9/zones/_template/open.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +open{partition}. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +open{partition}. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/_template/parent.com.hosts b/docker/bind9/zones/_template/parent.com.hosts new file mode 100644 index 000000000..b364ffa3d --- /dev/null +++ b/docker/bind9/zones/_template/parent.com.hosts @@ -0,0 +1,15 @@ +$ttl 38400 +$ORIGIN parent.com{partition}. +@ IN SOA ns1.parent.com{partition}. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +parent.com{partition}. IN NS ns1.parent.com{partition}. 
+jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +already-exists IN A 6.6.6.6 +ns1 IN A 172.17.42.1 diff --git a/docker/bind9/zones/_template/shared.hosts b/docker/bind9/zones/_template/shared.hosts new file mode 100644 index 000000000..2d809bb35 --- /dev/null +++ b/docker/bind9/zones/_template/shared.hosts @@ -0,0 +1,16 @@ +$ttl 38400 +shared{partition}. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +shared{partition}. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 +delete-test-batch IN A 2.2.2.2 +update-test-batch IN A 3.3.3.3 diff --git a/docker/bind9/zones/_template/sync-test.hosts b/docker/bind9/zones/_template/sync-test.hosts new file mode 100644 index 000000000..8866d3d60 --- /dev/null +++ b/docker/bind9/zones/_template/sync-test.hosts @@ -0,0 +1,17 @@ +$ttl 38400 +sync-test{partition}. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +sync-test{partition}. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 +fqdn.sync-test. IN A 7.7.7.7 +_sip._tcp IN SRV 10 60 5060 foo.sync-test. +existing.dotted IN A 9.9.9.9 diff --git a/docker/bind9/zones/_template/system-test-history.hosts b/docker/bind9/zones/_template/system-test-history.hosts new file mode 100644 index 000000000..d656b3d9b --- /dev/null +++ b/docker/bind9/zones/_template/system-test-history.hosts @@ -0,0 +1,14 @@ +$ttl 38400 +system-test-history{partition}. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +system-test-history{partition}. IN NS 172.17.42.1. 
+jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/bind9/zones/_template/system-test.hosts b/docker/bind9/zones/_template/system-test.hosts new file mode 100644 index 000000000..f98d54779 --- /dev/null +++ b/docker/bind9/zones/_template/system-test.hosts @@ -0,0 +1,16 @@ +$ttl 38400 +system-test{partition}. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +system-test{partition}. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 +high-value-domain IN A 1.1.1.1 +high-VALUE-domain-UPPER-CASE IN A 1.1.1.1 diff --git a/docker/bind9/zones/_template/vinyldns.hosts b/docker/bind9/zones/_template/vinyldns.hosts new file mode 100644 index 000000000..931b22b92 --- /dev/null +++ b/docker/bind9/zones/_template/vinyldns.hosts @@ -0,0 +1,14 @@ +$ttl 38400 +vinyldns{partition}. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +vinyldns{partition}. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/bind9/zones/non.test.shared.hosts b/docker/bind9/zones/_template/zone.requires.review.hosts old mode 100755 new mode 100644 similarity index 50% rename from docker/bind9/zones/non.test.shared.hosts rename to docker/bind9/zones/_template/zone.requires.review.hosts index e270c8093..715a30e18 --- a/docker/bind9/zones/non.test.shared.hosts +++ b/docker/bind9/zones/_template/zone.requires.review.hosts @@ -1,13 +1,11 @@ $ttl 38400 -non.test.shared. IN SOA 172.17.42.1. admin.test.com. ( +zone.requires.review{partition}. IN SOA 172.17.42.1. admin.test.com. ( 1439234395 10800 3600 604800 38400 ) -non.test.shared. IN NS 172.17.42.1. +zone.requires.review{partition}. IN NS 172.17.42.1. 
@ IN A 1.1.1.1 delete-test-batch IN A 2.2.2.2 update-test-batch IN A 3.3.3.3 -delete-test IN A 4.4.4.4 -update-test IN A 5.5.5.5 diff --git a/docker/bind9/zones/child.parent.com.hosts b/docker/bind9/zones/child.parent.com.hosts deleted file mode 100644 index a74630542..000000000 --- a/docker/bind9/zones/child.parent.com.hosts +++ /dev/null @@ -1,9 +0,0 @@ -$ttl 38400 -$ORIGIN child.parent.com. -@ IN SOA ns1.parent.com. admin.test.com. ( - 1439234395 - 10800 - 3600 - 604800 - 38400 ) -@ IN NS ns1.parent.com. diff --git a/docker/bind9/zones/dskey.example.com.hosts b/docker/bind9/zones/dskey.example.com.hosts deleted file mode 100644 index a730ac305..000000000 --- a/docker/bind9/zones/dskey.example.com.hosts +++ /dev/null @@ -1,9 +0,0 @@ -$TTL 1h -$ORIGIN dskey.example.com. -@ IN SOA ns1.parent.com. admin.test.com. ( - 1439234395 - 10800 - 3600 - 604800 - 38400 ) -dskey.example.com. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/example.com.hosts b/docker/bind9/zones/example.com.hosts deleted file mode 100644 index 7e8175fd8..000000000 --- a/docker/bind9/zones/example.com.hosts +++ /dev/null @@ -1,10 +0,0 @@ -$TTL 1h -$ORIGIN example.com. -@ IN SOA ns1.parent.com. admin.test.com. ( - 1439234395 - 10800 - 3600 - 604800 - 38400 ) -example.com. IN NS 172.17.42.1. -dskey IN NS 172.17.42.1. diff --git a/docker/bind9/zones/list-zones-test-searched-1.hosts b/docker/bind9/zones/list-zones-test-searched-1.hosts deleted file mode 100644 index c2cf966f7..000000000 --- a/docker/bind9/zones/list-zones-test-searched-1.hosts +++ /dev/null @@ -1,8 +0,0 @@ -$ttl 38400 -list-zones-test-searched-1. IN SOA 172.17.42.1. admin.test.com. ( - 1439234395 - 10800 - 3600 - 604800 - 38400 ) -list-zones-test-searched-1. IN NS 172.17.42.1. 
diff --git a/docker/bind9/zones/list-zones-test-searched-2.hosts b/docker/bind9/zones/list-zones-test-searched-2.hosts deleted file mode 100644 index b531d2a19..000000000 --- a/docker/bind9/zones/list-zones-test-searched-2.hosts +++ /dev/null @@ -1,8 +0,0 @@ -$ttl 38400 -list-zones-test-searched-2. IN SOA 172.17.42.1. admin.test.com. ( - 1439234395 - 10800 - 3600 - 604800 - 38400 ) -list-zones-test-searched-2. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/list-zones-test-searched-3.hosts b/docker/bind9/zones/list-zones-test-searched-3.hosts deleted file mode 100644 index 33e76e90f..000000000 --- a/docker/bind9/zones/list-zones-test-searched-3.hosts +++ /dev/null @@ -1,8 +0,0 @@ -$ttl 38400 -list-zones-test-searched-3. IN SOA 172.17.42.1. admin.test.com. ( - 1439234395 - 10800 - 3600 - 604800 - 38400 ) -list-zones-test-searched-3. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/list-zones-test-unfiltered-1.hosts b/docker/bind9/zones/list-zones-test-unfiltered-1.hosts deleted file mode 100755 index 9205eec0d..000000000 --- a/docker/bind9/zones/list-zones-test-unfiltered-1.hosts +++ /dev/null @@ -1,8 +0,0 @@ -$ttl 38400 -list-zones-test-unfiltered-1. IN SOA 172.17.42.1. admin.test.com. ( - 1439234395 - 10800 - 3600 - 604800 - 38400 ) -list-zones-test-unfiltered-1. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/list-zones-test-unfiltered-2.hosts b/docker/bind9/zones/list-zones-test-unfiltered-2.hosts deleted file mode 100755 index dfdb66493..000000000 --- a/docker/bind9/zones/list-zones-test-unfiltered-2.hosts +++ /dev/null @@ -1,8 +0,0 @@ -$ttl 38400 -list-zones-test-unfiltered-2. IN SOA 172.17.42.1. admin.test.com. ( - 1439234395 - 10800 - 3600 - 604800 - 38400 ) -list-zones-test-unfiltered-2. IN NS 172.17.42.1. 
diff --git a/docker/bind9/zones/one-time-shared.hosts b/docker/bind9/zones/one-time-shared.hosts deleted file mode 100755 index 654f01557..000000000 --- a/docker/bind9/zones/one-time-shared.hosts +++ /dev/null @@ -1,8 +0,0 @@ -$ttl 38400 -one-time-shared. IN SOA 172.17.42.1. admin.test.com. ( - 1439234395 - 10800 - 3600 - 604800 - 38400 ) -one-time-shared. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/open.hosts b/docker/bind9/zones/open.hosts deleted file mode 100644 index 48f994103..000000000 --- a/docker/bind9/zones/open.hosts +++ /dev/null @@ -1,8 +0,0 @@ -$ttl 38400 -open. IN SOA 172.17.42.1. admin.test.com. ( - 1439234395 - 10800 - 3600 - 604800 - 38400 ) -open. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa b/docker/bind9/zones/partition1/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa old mode 100755 new mode 100644 similarity index 100% rename from docker/bind9/zones/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa rename to docker/bind9/zones/partition1/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa diff --git a/docker/bind9/zones/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa b/docker/bind9/zones/partition1/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa old mode 100755 new mode 100644 similarity index 100% rename from docker/bind9/zones/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa rename to docker/bind9/zones/partition1/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa diff --git a/docker/bind9/zones/10.10.in-addr.arpa b/docker/bind9/zones/partition1/10.10.in-addr.arpa old mode 100755 new mode 100644 similarity index 53% rename from docker/bind9/zones/10.10.in-addr.arpa rename to docker/bind9/zones/partition1/10.10.in-addr.arpa index 07c5b3f05..3b11d2099 --- a/docker/bind9/zones/10.10.in-addr.arpa +++ b/docker/bind9/zones/partition1/10.10.in-addr.arpa @@ -1,10 +1,10 @@ $ttl 38400 -10.10.in-addr.arpa. IN SOA 172.17.42.1. admin.vinyldns.com. ( +1.10.in-addr.arpa. IN SOA 172.17.42.1. admin.vinyldns.com. ( 1439234395 10800 3600 604800 38400 ) -10.10.in-addr.arpa. IN NS 172.17.42.1. 
+1.10.in-addr.arpa. IN NS 172.17.42.1. 24.0 IN PTR www.vinyl. 25.0 IN PTR mail.vinyl. diff --git a/docker/bind9/zones/partition1/192^30.2.0.192.in-addr.arpa b/docker/bind9/zones/partition1/192^30.2.0.192.in-addr.arpa new file mode 100644 index 000000000..f95c2de07 --- /dev/null +++ b/docker/bind9/zones/partition1/192^30.2.0.192.in-addr.arpa @@ -0,0 +1,11 @@ +$ttl 38400 +192/30.1.0.192.in-addr.arpa. IN SOA 172.17.42.1. admin.vinyldns.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +192/30.1.0.192.in-addr.arpa. IN NS 172.17.42.1. +192 IN PTR portal.vinyldns. +194 IN PTR mail.vinyldns. +195 IN PTR test.vinyldns. diff --git a/docker/bind9/zones/partition1/2.0.192.in-addr.arpa b/docker/bind9/zones/partition1/2.0.192.in-addr.arpa new file mode 100644 index 000000000..e9c799534 --- /dev/null +++ b/docker/bind9/zones/partition1/2.0.192.in-addr.arpa @@ -0,0 +1,15 @@ +$ttl 38400 +1.0.192.in-addr.arpa. IN SOA 172.17.42.1. admin.vinyldns.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +1.0.192.in-addr.arpa. IN NS 172.17.42.1. +192/30 IN NS 172.17.42.1. +192 IN CNAME 192.192/30.2.0.192.in-addr.arpa. +193 IN CNAME 193.192/30.2.0.192.in-addr.arpa. +194 IN CNAME 194.192/30.2.0.192.in-addr.arpa. +195 IN CNAME 195.192/30.2.0.192.in-addr.arpa. +253 IN PTR high.value.domain.ip4. +255 IN PTR needs.review.domain.ip4 diff --git a/docker/bind9/zones/partition1/child.parent.com.hosts b/docker/bind9/zones/partition1/child.parent.com.hosts new file mode 100644 index 000000000..b43e247c9 --- /dev/null +++ b/docker/bind9/zones/partition1/child.parent.com.hosts @@ -0,0 +1,9 @@ +$ttl 38400 +$ORIGIN child.parent.com1. +@ IN SOA ns1.parent.com1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +@ IN NS ns1.parent.com1. 
diff --git a/docker/bind9/zones/partition1/dskey.example.com.hosts b/docker/bind9/zones/partition1/dskey.example.com.hosts new file mode 100644 index 000000000..1e9300b6f --- /dev/null +++ b/docker/bind9/zones/partition1/dskey.example.com.hosts @@ -0,0 +1,9 @@ +$TTL 1h +$ORIGIN dskey.example.com1. +@ IN SOA ns1.parent.com1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +dskey.example.com1. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/dummy.hosts b/docker/bind9/zones/partition1/dummy.hosts similarity index 77% rename from docker/bind9/zones/dummy.hosts rename to docker/bind9/zones/partition1/dummy.hosts index e6a53c3f1..804ff27f0 100644 --- a/docker/bind9/zones/dummy.hosts +++ b/docker/bind9/zones/partition1/dummy.hosts @@ -1,11 +1,11 @@ $ttl 38400 -dummy. IN SOA 172.17.42.1. admin.test.com. ( +dummy1. IN SOA 172.17.42.1. admin.test.com. ( 1439234395 10800 3600 604800 38400 ) -dummy. IN NS 172.17.42.1. +dummy1. IN NS 172.17.42.1. jenkins IN A 10.1.1.1 foo IN A 2.2.2.2 test IN A 3.3.3.3 diff --git a/docker/bind9/zones/partition1/example.com.hosts b/docker/bind9/zones/partition1/example.com.hosts new file mode 100644 index 000000000..93994fa0b --- /dev/null +++ b/docker/bind9/zones/partition1/example.com.hosts @@ -0,0 +1,10 @@ +$TTL 1h +$ORIGIN example.com1. +@ IN SOA ns1.parent.com1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +example.com1. IN NS 172.17.42.1. +dskey IN NS 172.17.42.1. diff --git a/docker/bind9/zones/invalid-zone.hosts b/docker/bind9/zones/partition1/invalid-zone.hosts similarity index 51% rename from docker/bind9/zones/invalid-zone.hosts rename to docker/bind9/zones/partition1/invalid-zone.hosts index 47eae6943..bbb9bd122 100644 --- a/docker/bind9/zones/invalid-zone.hosts +++ b/docker/bind9/zones/partition1/invalid-zone.hosts @@ -1,14 +1,14 @@ $ttl 38400 -invalid-zone. IN SOA 172.17.42.1. admin.test.com. ( +invalid-zone1. IN SOA 172.17.42.1. admin.test.com. 
( 1439234395 10800 3600 604800 38400 ) -invalid-zone. IN NS 172.17.42.1. -invalid-zone. IN NS not-approved.thing.com. -invalid.child.invalid-zone. IN NS 172.17.42.1. -dotted.host.invalid-zone. IN A 1.2.3.4 +invalid-zone1. IN NS 172.17.42.1. +invalid-zone1. IN NS not-approved.thing.com. +invalid.child.invalid-zone1. IN NS 172.17.42.1. +dotted.host.invalid-zone1. IN A 1.2.3.4 jenkins IN A 10.1.1.1 foo IN A 2.2.2.2 test IN A 3.3.3.3 diff --git a/docker/bind9/zones/list-records.hosts b/docker/bind9/zones/partition1/list-records.hosts similarity index 93% rename from docker/bind9/zones/list-records.hosts rename to docker/bind9/zones/partition1/list-records.hosts index f50a10fea..f17446273 100644 --- a/docker/bind9/zones/list-records.hosts +++ b/docker/bind9/zones/partition1/list-records.hosts @@ -1,11 +1,11 @@ $ttl 38400 -list-records. IN SOA 172.17.42.1. admin.test.com. ( +list-records1. IN SOA 172.17.42.1. admin.test.com. ( 1439234395 10800 3600 604800 38400 ) -list-records. IN NS 172.17.42.1. +list-records1. IN NS 172.17.42.1. 00-test-list-recordsets-0-A IN A 10.1.1.1 00-test-list-recordsets-0-A IN A 10.2.2.2 00-test-list-recordsets-0-CNAME IN CNAME cname1. diff --git a/docker/bind9/zones/partition1/list-zones-test-searched-1.hosts b/docker/bind9/zones/partition1/list-zones-test-searched-1.hosts new file mode 100644 index 000000000..bf7c6835f --- /dev/null +++ b/docker/bind9/zones/partition1/list-zones-test-searched-1.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +list-zones-test-searched-11. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +list-zones-test-searched-11. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/partition1/list-zones-test-searched-2.hosts b/docker/bind9/zones/partition1/list-zones-test-searched-2.hosts new file mode 100644 index 000000000..45e746668 --- /dev/null +++ b/docker/bind9/zones/partition1/list-zones-test-searched-2.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +list-zones-test-searched-21. IN SOA 172.17.42.1. 
admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +list-zones-test-searched-21. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/partition1/list-zones-test-searched-3.hosts b/docker/bind9/zones/partition1/list-zones-test-searched-3.hosts new file mode 100644 index 000000000..ee4224234 --- /dev/null +++ b/docker/bind9/zones/partition1/list-zones-test-searched-3.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +list-zones-test-searched-31. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +list-zones-test-searched-31. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/partition1/list-zones-test-unfiltered-1.hosts b/docker/bind9/zones/partition1/list-zones-test-unfiltered-1.hosts new file mode 100644 index 000000000..0ee4fecd5 --- /dev/null +++ b/docker/bind9/zones/partition1/list-zones-test-unfiltered-1.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +list-zones-test-unfiltered-11. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +list-zones-test-unfiltered-11. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/partition1/list-zones-test-unfiltered-2.hosts b/docker/bind9/zones/partition1/list-zones-test-unfiltered-2.hosts new file mode 100644 index 000000000..e59eecd2f --- /dev/null +++ b/docker/bind9/zones/partition1/list-zones-test-unfiltered-2.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +list-zones-test-unfiltered-21. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +list-zones-test-unfiltered-21. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/partition1/non.test.shared.hosts b/docker/bind9/zones/partition1/non.test.shared.hosts new file mode 100644 index 000000000..6a99fb0c5 --- /dev/null +++ b/docker/bind9/zones/partition1/non.test.shared.hosts @@ -0,0 +1,13 @@ +$ttl 38400 +non.test.shared1. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +non.test.shared1. IN NS 172.17.42.1. 
+@ IN A 1.1.1.1 +delete-test-batch IN A 2.2.2.2 +update-test-batch IN A 3.3.3.3 +delete-test IN A 4.4.4.4 +update-test IN A 5.5.5.5 diff --git a/docker/bind9/zones/not.loaded.hosts b/docker/bind9/zones/partition1/not.loaded.hosts similarity index 51% rename from docker/bind9/zones/not.loaded.hosts rename to docker/bind9/zones/partition1/not.loaded.hosts index 4f0a93779..117e55ccc 100644 --- a/docker/bind9/zones/not.loaded.hosts +++ b/docker/bind9/zones/partition1/not.loaded.hosts @@ -1,9 +1,9 @@ $ttl 38400 -not.loaded. IN SOA 172.17.42.1. admin.test.com. ( +not.loaded1. IN SOA 172.17.42.1. admin.test.com. ( 1439234395 10800 3600 604800 38400 ) -not.loaded. IN NS 172.17.42.1. +not.loaded1. IN NS 172.17.42.1. foo IN A 1.1.1.1 diff --git a/docker/bind9/zones/ok.hosts b/docker/bind9/zones/partition1/ok.hosts old mode 100755 new mode 100644 similarity index 80% rename from docker/bind9/zones/ok.hosts rename to docker/bind9/zones/partition1/ok.hosts index aaa985c5e..c8748a430 --- a/docker/bind9/zones/ok.hosts +++ b/docker/bind9/zones/partition1/ok.hosts @@ -1,11 +1,11 @@ $ttl 38400 -ok. IN SOA 172.17.42.1. admin.test.com. ( +ok1. IN SOA 172.17.42.1. admin.test.com. ( 1439234395 10800 3600 604800 38400 ) -ok. IN NS 172.17.42.1. +ok1. IN NS 172.17.42.1. jenkins IN A 10.1.1.1 foo IN A 2.2.2.2 test IN A 3.3.3.3 diff --git a/docker/bind9/zones/old-vinyldns3.hosts b/docker/bind9/zones/partition1/old-shared.hosts old mode 100755 new mode 100644 similarity index 72% rename from docker/bind9/zones/old-vinyldns3.hosts rename to docker/bind9/zones/partition1/old-shared.hosts index 5d514886a..487dd6bbe --- a/docker/bind9/zones/old-vinyldns3.hosts +++ b/docker/bind9/zones/partition1/old-shared.hosts @@ -1,11 +1,11 @@ $ttl 38400 -old-vinyldns3. IN SOA 172.17.42.1. admin.test.com. ( +old-shared1. IN SOA 172.17.42.1. admin.test.com. ( 1439234395 10800 3600 604800 38400 ) -old-vinyldns3. IN NS 172.17.42.1. +old-shared1. IN NS 172.17.42.1. 
jenkins IN A 10.1.1.1 foo IN A 2.2.2.2 test IN A 3.3.3.3 diff --git a/docker/bind9/zones/partition1/old-vinyldns2.hosts b/docker/bind9/zones/partition1/old-vinyldns2.hosts new file mode 100644 index 000000000..e0ffb3bbf --- /dev/null +++ b/docker/bind9/zones/partition1/old-vinyldns2.hosts @@ -0,0 +1,14 @@ +$ttl 38400 +old-vinyldns21. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +old-vinyldns21. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/bind9/zones/partition1/old-vinyldns3.hosts b/docker/bind9/zones/partition1/old-vinyldns3.hosts new file mode 100644 index 000000000..f98e656d8 --- /dev/null +++ b/docker/bind9/zones/partition1/old-vinyldns3.hosts @@ -0,0 +1,14 @@ +$ttl 38400 +old-vinyldns31. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +old-vinyldns31. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/bind9/zones/partition1/one-time-shared.hosts b/docker/bind9/zones/partition1/one-time-shared.hosts new file mode 100644 index 000000000..df3f87abe --- /dev/null +++ b/docker/bind9/zones/partition1/one-time-shared.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +one-time-shared1. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +one-time-shared1. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/one-time.hosts b/docker/bind9/zones/partition1/one-time.hosts old mode 100755 new mode 100644 similarity index 74% rename from docker/bind9/zones/one-time.hosts rename to docker/bind9/zones/partition1/one-time.hosts index df072413e..abe1f028f --- a/docker/bind9/zones/one-time.hosts +++ b/docker/bind9/zones/partition1/one-time.hosts @@ -1,11 +1,11 @@ $ttl 38400 -one-time. IN SOA 172.17.42.1. admin.test.com. ( +one-time1. IN SOA 172.17.42.1. 
admin.test.com. ( 1439234395 10800 3600 604800 38400 ) -one-time. IN NS 172.17.42.1. +one-time1. IN NS 172.17.42.1. jenkins IN A 10.1.1.1 foo IN A 2.2.2.2 test IN A 3.3.3.3 diff --git a/docker/bind9/zones/partition1/open.hosts b/docker/bind9/zones/partition1/open.hosts new file mode 100644 index 000000000..f72115af3 --- /dev/null +++ b/docker/bind9/zones/partition1/open.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +open1. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +open1. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/parent.com.hosts b/docker/bind9/zones/partition1/parent.com.hosts old mode 100755 new mode 100644 similarity index 71% rename from docker/bind9/zones/parent.com.hosts rename to docker/bind9/zones/partition1/parent.com.hosts index c3dc749f6..e93a37057 --- a/docker/bind9/zones/parent.com.hosts +++ b/docker/bind9/zones/partition1/parent.com.hosts @@ -1,12 +1,12 @@ $ttl 38400 -$ORIGIN parent.com. -@ IN SOA ns1.parent.com. admin.test.com. ( +$ORIGIN parent.com1. +@ IN SOA ns1.parent.com1. admin.test.com. ( 1439234395 10800 3600 604800 38400 ) -parent.com. IN NS ns1.parent.com. +parent.com1. IN NS ns1.parent.com1. jenkins IN A 10.1.1.1 foo IN A 2.2.2.2 test IN A 3.3.3.3 diff --git a/docker/bind9/zones/shared.hosts b/docker/bind9/zones/partition1/shared.hosts old mode 100755 new mode 100644 similarity index 80% rename from docker/bind9/zones/shared.hosts rename to docker/bind9/zones/partition1/shared.hosts index d9115a129..5be10b61d --- a/docker/bind9/zones/shared.hosts +++ b/docker/bind9/zones/partition1/shared.hosts @@ -1,11 +1,11 @@ $ttl 38400 -shared. IN SOA 172.17.42.1. admin.test.com. ( +shared1. IN SOA 172.17.42.1. admin.test.com. ( 1439234395 10800 3600 604800 38400 ) -shared. IN NS 172.17.42.1. +shared1. IN NS 172.17.42.1. 
jenkins IN A 10.1.1.1 foo IN A 2.2.2.2 test IN A 3.3.3.3 diff --git a/docker/bind9/zones/partition1/sync-test.hosts b/docker/bind9/zones/partition1/sync-test.hosts new file mode 100644 index 000000000..54e597099 --- /dev/null +++ b/docker/bind9/zones/partition1/sync-test.hosts @@ -0,0 +1,17 @@ +$ttl 38400 +sync-test1. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +sync-test1. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 +fqdn.sync-test. IN A 7.7.7.7 +_sip._tcp IN SRV 10 60 5060 foo.sync-test. +existing.dotted IN A 9.9.9.9 diff --git a/docker/bind9/zones/system-test-history.hosts b/docker/bind9/zones/partition1/system-test-history.hosts old mode 100755 new mode 100644 similarity index 70% rename from docker/bind9/zones/system-test-history.hosts rename to docker/bind9/zones/partition1/system-test-history.hosts index 1408efda6..6c7c73058 --- a/docker/bind9/zones/system-test-history.hosts +++ b/docker/bind9/zones/partition1/system-test-history.hosts @@ -1,11 +1,11 @@ $ttl 38400 -system-test-history. IN SOA 172.17.42.1. admin.test.com. ( +system-test-history1. IN SOA 172.17.42.1. admin.test.com. ( 1439234395 10800 3600 604800 38400 ) -system-test-history. IN NS 172.17.42.1. +system-test-history1. IN NS 172.17.42.1. jenkins IN A 10.1.1.1 foo IN A 2.2.2.2 test IN A 3.3.3.3 diff --git a/docker/bind9/zones/system-test.hosts b/docker/bind9/zones/partition1/system-test.hosts old mode 100755 new mode 100644 similarity index 79% rename from docker/bind9/zones/system-test.hosts rename to docker/bind9/zones/partition1/system-test.hosts index 75a819a33..0a138f9ed --- a/docker/bind9/zones/system-test.hosts +++ b/docker/bind9/zones/partition1/system-test.hosts @@ -1,11 +1,11 @@ $ttl 38400 -system-test. IN SOA 172.17.42.1. admin.test.com. ( +system-test1. IN SOA 172.17.42.1. admin.test.com. ( 1439234395 10800 3600 604800 38400 ) -system-test. 
IN NS 172.17.42.1. +system-test1. IN NS 172.17.42.1. jenkins IN A 10.1.1.1 foo IN A 2.2.2.2 test IN A 3.3.3.3 diff --git a/docker/bind9/zones/old-shared.hosts b/docker/bind9/zones/partition1/vinyldns.hosts old mode 100755 new mode 100644 similarity index 73% rename from docker/bind9/zones/old-shared.hosts rename to docker/bind9/zones/partition1/vinyldns.hosts index a7c06b6d1..c5fc44e94 --- a/docker/bind9/zones/old-shared.hosts +++ b/docker/bind9/zones/partition1/vinyldns.hosts @@ -1,11 +1,11 @@ $ttl 38400 -old-shared. IN SOA 172.17.42.1. admin.test.com. ( +vinyldns1. IN SOA 172.17.42.1. admin.test.com. ( 1439234395 10800 3600 604800 38400 ) -old-shared. IN NS 172.17.42.1. +vinyldns1. IN NS 172.17.42.1. jenkins IN A 10.1.1.1 foo IN A 2.2.2.2 test IN A 3.3.3.3 diff --git a/docker/bind9/zones/zone.requires.review.hosts b/docker/bind9/zones/partition1/zone.requires.review.hosts similarity index 62% rename from docker/bind9/zones/zone.requires.review.hosts rename to docker/bind9/zones/partition1/zone.requires.review.hosts index b1deedda4..e1522d7e1 100644 --- a/docker/bind9/zones/zone.requires.review.hosts +++ b/docker/bind9/zones/partition1/zone.requires.review.hosts @@ -1,11 +1,11 @@ $ttl 38400 -zone.requires.review. IN SOA 172.17.42.1. admin.test.com. ( +zone.requires.review1. IN SOA 172.17.42.1. admin.test.com. ( 1439234395 10800 3600 604800 38400 ) -zone.requires.review. IN NS 172.17.42.1. +zone.requires.review1. IN NS 172.17.42.1. @ IN A 1.1.1.1 delete-test-batch IN A 2.2.2.2 update-test-batch IN A 3.3.3.3 diff --git a/docker/bind9/zones/partition2/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa b/docker/bind9/zones/partition2/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa new file mode 100644 index 000000000..1aed1cd53 --- /dev/null +++ b/docker/bind9/zones/partition2/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa @@ -0,0 +1,12 @@ +$ttl 38400 +0.0.0.1.2.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa. IN SOA 172.17.42.1. admin.vinyldns.com. 
( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +0.0.0.1.2.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa. IN NS 172.17.42.1. +4.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR www.vinyldns. +5.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR mail.vinyldns. +0.0.0.0.f.f.f.f.0.0.0.0.0.0.0.0 IN PTR high.value.domain.ip6. +2.0.0.0.f.f.f.f.0.0.0.0.0.0.0.0 IN PTR needs.review.domain.ip6. diff --git a/docker/bind9/zones/partition2/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa b/docker/bind9/zones/partition2/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa new file mode 100644 index 000000000..861e849ea --- /dev/null +++ b/docker/bind9/zones/partition2/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa @@ -0,0 +1,13 @@ +$ttl 38400 +2.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa. IN SOA 172.17.42.1. admin.vinyldns.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +2.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa. IN NS 172.17.42.1. +0.0.0.1 IN NS 172.17.42.1. +4.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR www.vinyldns. +5.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR mail.vinyldns. +0.0.0.0.f.f.f.f.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR high.value.domain.ip6. +2.0.0.0.f.f.f.f.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR needs.review.domain.ip6. diff --git a/docker/bind9/zones/partition2/10.10.in-addr.arpa b/docker/bind9/zones/partition2/10.10.in-addr.arpa new file mode 100644 index 000000000..2031c6c82 --- /dev/null +++ b/docker/bind9/zones/partition2/10.10.in-addr.arpa @@ -0,0 +1,10 @@ +$ttl 38400 +2.10.in-addr.arpa. IN SOA 172.17.42.1. admin.vinyldns.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +2.10.in-addr.arpa. IN NS 172.17.42.1. +24.0 IN PTR www.vinyl. +25.0 IN PTR mail.vinyl. 
diff --git a/docker/bind9/zones/192^30.2.0.192.in-addr.arpa b/docker/bind9/zones/partition2/192^30.2.0.192.in-addr.arpa similarity index 100% rename from docker/bind9/zones/192^30.2.0.192.in-addr.arpa rename to docker/bind9/zones/partition2/192^30.2.0.192.in-addr.arpa diff --git a/docker/bind9/zones/2.0.192.in-addr.arpa b/docker/bind9/zones/partition2/2.0.192.in-addr.arpa similarity index 100% rename from docker/bind9/zones/2.0.192.in-addr.arpa rename to docker/bind9/zones/partition2/2.0.192.in-addr.arpa diff --git a/docker/bind9/zones/partition2/child.parent.com.hosts b/docker/bind9/zones/partition2/child.parent.com.hosts new file mode 100644 index 000000000..a1a1177b3 --- /dev/null +++ b/docker/bind9/zones/partition2/child.parent.com.hosts @@ -0,0 +1,9 @@ +$ttl 38400 +$ORIGIN child.parent.com2. +@ IN SOA ns1.parent.com2. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +@ IN NS ns1.parent.com2. diff --git a/docker/bind9/zones/partition2/dskey.example.com.hosts b/docker/bind9/zones/partition2/dskey.example.com.hosts new file mode 100644 index 000000000..e35faa9b5 --- /dev/null +++ b/docker/bind9/zones/partition2/dskey.example.com.hosts @@ -0,0 +1,9 @@ +$TTL 1h +$ORIGIN dskey.example.com2. +@ IN SOA ns1.parent.com2. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +dskey.example.com2. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/partition2/dummy.hosts b/docker/bind9/zones/partition2/dummy.hosts new file mode 100644 index 000000000..50346d691 --- /dev/null +++ b/docker/bind9/zones/partition2/dummy.hosts @@ -0,0 +1,15 @@ +$ttl 38400 +dummy2. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +dummy2. IN NS 172.17.42.1. 
+jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 +non-approved-delegation IN NS 7.7.7.7 diff --git a/docker/bind9/zones/partition2/example.com.hosts b/docker/bind9/zones/partition2/example.com.hosts new file mode 100644 index 000000000..1fcafb230 --- /dev/null +++ b/docker/bind9/zones/partition2/example.com.hosts @@ -0,0 +1,10 @@ +$TTL 1h +$ORIGIN example.com2. +@ IN SOA ns1.parent.com2. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +example.com2. IN NS 172.17.42.1. +dskey IN NS 172.17.42.1. diff --git a/docker/bind9/zones/partition2/invalid-zone.hosts b/docker/bind9/zones/partition2/invalid-zone.hosts new file mode 100644 index 000000000..b4ba68241 --- /dev/null +++ b/docker/bind9/zones/partition2/invalid-zone.hosts @@ -0,0 +1,17 @@ +$ttl 38400 +invalid-zone2. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +invalid-zone2. IN NS 172.17.42.1. +invalid-zone2. IN NS not-approved.thing.com. +invalid.child.invalid-zone2. IN NS 172.17.42.1. +dotted.host.invalid-zone2. IN A 1.2.3.4 +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/bind9/zones/partition2/list-records.hosts b/docker/bind9/zones/partition2/list-records.hosts new file mode 100644 index 000000000..6d3b0ac6b --- /dev/null +++ b/docker/bind9/zones/partition2/list-records.hosts @@ -0,0 +1,38 @@ +$ttl 38400 +list-records2. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +list-records2. IN NS 172.17.42.1. +00-test-list-recordsets-0-A IN A 10.1.1.1 +00-test-list-recordsets-0-A IN A 10.2.2.2 +00-test-list-recordsets-0-CNAME IN CNAME cname1. +00-test-list-recordsets-1-A IN A 10.1.1.1 +00-test-list-recordsets-1-A IN A 10.2.2.2 +00-test-list-recordsets-1-CNAME IN CNAME cname1. 
+00-test-list-recordsets-2-A IN A 10.1.1.1 +00-test-list-recordsets-2-A IN A 10.2.2.2 +00-test-list-recordsets-2-CNAME IN CNAME cname1. +00-test-list-recordsets-3-A IN A 10.1.1.1 +00-test-list-recordsets-3-A IN A 10.2.2.2 +00-test-list-recordsets-3-CNAME IN CNAME cname1. +00-test-list-recordsets-4-A IN A 10.1.1.1 +00-test-list-recordsets-4-A IN A 10.2.2.2 +00-test-list-recordsets-4-CNAME IN CNAME cname1. +00-test-list-recordsets-5-A IN A 10.1.1.1 +00-test-list-recordsets-5-A IN A 10.2.2.2 +00-test-list-recordsets-5-CNAME IN CNAME cname1. +00-test-list-recordsets-6-A IN A 10.1.1.1 +00-test-list-recordsets-6-A IN A 10.2.2.2 +00-test-list-recordsets-6-CNAME IN CNAME cname1. +00-test-list-recordsets-7-A IN A 10.1.1.1 +00-test-list-recordsets-7-A IN A 10.2.2.2 +00-test-list-recordsets-7-CNAME IN CNAME cname1. +00-test-list-recordsets-8-A IN A 10.1.1.1 +00-test-list-recordsets-8-A IN A 10.2.2.2 +00-test-list-recordsets-8-CNAME IN CNAME cname1. +00-test-list-recordsets-9-A IN A 10.1.1.1 +00-test-list-recordsets-9-A IN A 10.2.2.2 +00-test-list-recordsets-9-CNAME IN CNAME cname1. diff --git a/docker/bind9/zones/partition2/list-zones-test-searched-1.hosts b/docker/bind9/zones/partition2/list-zones-test-searched-1.hosts new file mode 100644 index 000000000..300315754 --- /dev/null +++ b/docker/bind9/zones/partition2/list-zones-test-searched-1.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +list-zones-test-searched-12. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +list-zones-test-searched-12. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/partition2/list-zones-test-searched-2.hosts b/docker/bind9/zones/partition2/list-zones-test-searched-2.hosts new file mode 100644 index 000000000..475e9770d --- /dev/null +++ b/docker/bind9/zones/partition2/list-zones-test-searched-2.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +list-zones-test-searched-22. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +list-zones-test-searched-22. 
IN NS 172.17.42.1. diff --git a/docker/bind9/zones/partition2/list-zones-test-searched-3.hosts b/docker/bind9/zones/partition2/list-zones-test-searched-3.hosts new file mode 100644 index 000000000..7539a0099 --- /dev/null +++ b/docker/bind9/zones/partition2/list-zones-test-searched-3.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +list-zones-test-searched-32. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +list-zones-test-searched-32. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/partition2/list-zones-test-unfiltered-1.hosts b/docker/bind9/zones/partition2/list-zones-test-unfiltered-1.hosts new file mode 100644 index 000000000..1da508d1f --- /dev/null +++ b/docker/bind9/zones/partition2/list-zones-test-unfiltered-1.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +list-zones-test-unfiltered-12. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +list-zones-test-unfiltered-12. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/partition2/list-zones-test-unfiltered-2.hosts b/docker/bind9/zones/partition2/list-zones-test-unfiltered-2.hosts new file mode 100644 index 000000000..dc7931fc5 --- /dev/null +++ b/docker/bind9/zones/partition2/list-zones-test-unfiltered-2.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +list-zones-test-unfiltered-22. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +list-zones-test-unfiltered-22. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/partition2/non.test.shared.hosts b/docker/bind9/zones/partition2/non.test.shared.hosts new file mode 100644 index 000000000..591b49fbe --- /dev/null +++ b/docker/bind9/zones/partition2/non.test.shared.hosts @@ -0,0 +1,13 @@ +$ttl 38400 +non.test.shared2. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +non.test.shared2. IN NS 172.17.42.1. 
+@ IN A 1.1.1.1 +delete-test-batch IN A 2.2.2.2 +update-test-batch IN A 3.3.3.3 +delete-test IN A 4.4.4.4 +update-test IN A 5.5.5.5 diff --git a/docker/bind9/zones/partition2/not.loaded.hosts b/docker/bind9/zones/partition2/not.loaded.hosts new file mode 100644 index 000000000..ffec07633 --- /dev/null +++ b/docker/bind9/zones/partition2/not.loaded.hosts @@ -0,0 +1,9 @@ +$ttl 38400 +not.loaded2. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +not.loaded2. IN NS 172.17.42.1. +foo IN A 1.1.1.1 diff --git a/docker/bind9/zones/partition2/ok.hosts b/docker/bind9/zones/partition2/ok.hosts new file mode 100644 index 000000000..382a2be90 --- /dev/null +++ b/docker/bind9/zones/partition2/ok.hosts @@ -0,0 +1,16 @@ +$ttl 38400 +ok2. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +ok2. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 +dotted.a IN A 7.7.7.7 +dottedc.name IN CNAME test.example.com diff --git a/docker/bind9/zones/old-vinyldns2.hosts b/docker/bind9/zones/partition2/old-shared.hosts old mode 100755 new mode 100644 similarity index 72% rename from docker/bind9/zones/old-vinyldns2.hosts rename to docker/bind9/zones/partition2/old-shared.hosts index 5fdc55ce9..10fb245f8 --- a/docker/bind9/zones/old-vinyldns2.hosts +++ b/docker/bind9/zones/partition2/old-shared.hosts @@ -1,11 +1,11 @@ $ttl 38400 -old-vinyldns2. IN SOA 172.17.42.1. admin.test.com. ( +old-shared2. IN SOA 172.17.42.1. admin.test.com. ( 1439234395 10800 3600 604800 38400 ) -old-vinyldns2. IN NS 172.17.42.1. +old-shared2. IN NS 172.17.42.1. 
jenkins IN A 10.1.1.1 foo IN A 2.2.2.2 test IN A 3.3.3.3 diff --git a/docker/bind9/zones/partition2/old-vinyldns2.hosts b/docker/bind9/zones/partition2/old-vinyldns2.hosts new file mode 100644 index 000000000..25bf67302 --- /dev/null +++ b/docker/bind9/zones/partition2/old-vinyldns2.hosts @@ -0,0 +1,14 @@ +$ttl 38400 +old-vinyldns22. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +old-vinyldns22. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/bind9/zones/partition2/old-vinyldns3.hosts b/docker/bind9/zones/partition2/old-vinyldns3.hosts new file mode 100644 index 000000000..7bc3c4018 --- /dev/null +++ b/docker/bind9/zones/partition2/old-vinyldns3.hosts @@ -0,0 +1,14 @@ +$ttl 38400 +old-vinyldns32. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +old-vinyldns32. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/bind9/zones/partition2/one-time-shared.hosts b/docker/bind9/zones/partition2/one-time-shared.hosts new file mode 100644 index 000000000..b2f69ddf8 --- /dev/null +++ b/docker/bind9/zones/partition2/one-time-shared.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +one-time-shared2. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +one-time-shared2. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/vinyldns.hosts b/docker/bind9/zones/partition2/one-time.hosts similarity index 74% rename from docker/bind9/zones/vinyldns.hosts rename to docker/bind9/zones/partition2/one-time.hosts index 905211823..25a326d85 100644 --- a/docker/bind9/zones/vinyldns.hosts +++ b/docker/bind9/zones/partition2/one-time.hosts @@ -1,11 +1,11 @@ $ttl 38400 -vinyldns. IN SOA 172.17.42.1. admin.test.com. ( +one-time2. IN SOA 172.17.42.1. admin.test.com. 
( 1439234395 10800 3600 604800 38400 ) -vinyldns. IN NS 172.17.42.1. +one-time2. IN NS 172.17.42.1. jenkins IN A 10.1.1.1 foo IN A 2.2.2.2 test IN A 3.3.3.3 diff --git a/docker/bind9/zones/partition2/open.hosts b/docker/bind9/zones/partition2/open.hosts new file mode 100644 index 000000000..e7225f2a8 --- /dev/null +++ b/docker/bind9/zones/partition2/open.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +open2. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +open2. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/partition2/parent.com.hosts b/docker/bind9/zones/partition2/parent.com.hosts new file mode 100644 index 000000000..957a98326 --- /dev/null +++ b/docker/bind9/zones/partition2/parent.com.hosts @@ -0,0 +1,15 @@ +$ttl 38400 +$ORIGIN parent.com2. +@ IN SOA ns1.parent.com2. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +parent.com2. IN NS ns1.parent.com2. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +already-exists IN A 6.6.6.6 +ns1 IN A 172.17.42.1 diff --git a/docker/bind9/zones/partition2/shared.hosts b/docker/bind9/zones/partition2/shared.hosts new file mode 100644 index 000000000..a7ca73f1a --- /dev/null +++ b/docker/bind9/zones/partition2/shared.hosts @@ -0,0 +1,16 @@ +$ttl 38400 +shared2. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +shared2. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 +delete-test-batch IN A 2.2.2.2 +update-test-batch IN A 3.3.3.3 diff --git a/docker/bind9/zones/partition2/sync-test.hosts b/docker/bind9/zones/partition2/sync-test.hosts new file mode 100644 index 000000000..01622aaee --- /dev/null +++ b/docker/bind9/zones/partition2/sync-test.hosts @@ -0,0 +1,17 @@ +$ttl 38400 +sync-test2. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +sync-test2. IN NS 172.17.42.1. 
+jenkins IN A 10.1.1.1
+foo IN A 2.2.2.2
+test IN A 3.3.3.3
+test IN A 4.4.4.4
+@ IN A 5.5.5.5
+already-exists IN A 6.6.6.6
+fqdn.sync-test. IN A 7.7.7.7
+_sip._tcp IN SRV 10 60 5060 foo.sync-test.
+existing.dotted IN A 9.9.9.9
diff --git a/docker/bind9/zones/partition2/system-test-history.hosts b/docker/bind9/zones/partition2/system-test-history.hosts
new file mode 100644
index 000000000..1ddb9ee42
--- /dev/null
+++ b/docker/bind9/zones/partition2/system-test-history.hosts
@@ -0,0 +1,14 @@
+$ttl 38400
+system-test-history2. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+system-test-history2. IN NS 172.17.42.1.
+jenkins IN A 10.1.1.1
+foo IN A 2.2.2.2
+test IN A 3.3.3.3
+test IN A 4.4.4.4
+@ IN A 5.5.5.5
+already-exists IN A 6.6.6.6
diff --git a/docker/bind9/zones/partition2/system-test.hosts b/docker/bind9/zones/partition2/system-test.hosts
new file mode 100644
index 000000000..691894863
--- /dev/null
+++ b/docker/bind9/zones/partition2/system-test.hosts
@@ -0,0 +1,16 @@
+$ttl 38400
+system-test2. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+system-test2. IN NS 172.17.42.1.
+jenkins IN A 10.1.1.1
+foo IN A 2.2.2.2
+test IN A 3.3.3.3
+test IN A 4.4.4.4
+@ IN A 5.5.5.5
+already-exists IN A 6.6.6.6
+high-value-domain IN A 1.1.1.1
+high-VALUE-domain-UPPER-CASE IN A 1.1.1.1
diff --git a/docker/bind9/zones/partition2/vinyldns.hosts b/docker/bind9/zones/partition2/vinyldns.hosts
new file mode 100644
index 000000000..e934beda3
--- /dev/null
+++ b/docker/bind9/zones/partition2/vinyldns.hosts
@@ -0,0 +1,14 @@
+$ttl 38400
+vinyldns2. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+vinyldns2. IN NS 172.17.42.1.
+jenkins IN A 10.1.1.1
+foo IN A 2.2.2.2
+test IN A 3.3.3.3
+test IN A 4.4.4.4
+@ IN A 5.5.5.5
+already-exists IN A 6.6.6.6
diff --git a/docker/bind9/zones/partition2/zone.requires.review.hosts b/docker/bind9/zones/partition2/zone.requires.review.hosts
new file mode 100644
index 000000000..90a970f45
--- /dev/null
+++ b/docker/bind9/zones/partition2/zone.requires.review.hosts
@@ -0,0 +1,11 @@
+$ttl 38400
+zone.requires.review2. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+zone.requires.review2. IN NS 172.17.42.1.
+@ IN A 1.1.1.1
+delete-test-batch IN A 2.2.2.2
+update-test-batch IN A 3.3.3.3
diff --git a/docker/bind9/zones/partition3/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa b/docker/bind9/zones/partition3/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
new file mode 100644
index 000000000..f3f7799b6
--- /dev/null
+++ b/docker/bind9/zones/partition3/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
@@ -0,0 +1,12 @@
+$ttl 38400
+0.0.0.1.3.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa. IN SOA 172.17.42.1. admin.vinyldns.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+0.0.0.1.3.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa. IN NS 172.17.42.1.
+4.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR www.vinyldns.
+5.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR mail.vinyldns.
+0.0.0.0.f.f.f.f.0.0.0.0.0.0.0.0 IN PTR high.value.domain.ip6.
+2.0.0.0.f.f.f.f.0.0.0.0.0.0.0.0 IN PTR needs.review.domain.ip6.
diff --git a/docker/bind9/zones/partition3/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa b/docker/bind9/zones/partition3/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
new file mode 100644
index 000000000..8e97b0dbc
--- /dev/null
+++ b/docker/bind9/zones/partition3/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
@@ -0,0 +1,13 @@
+$ttl 38400
+3.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa. IN SOA 172.17.42.1. admin.vinyldns.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+3.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa. IN NS 172.17.42.1.
+0.0.0.1 IN NS 172.17.42.1.
+4.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR www.vinyldns.
+5.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR mail.vinyldns.
+0.0.0.0.f.f.f.f.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR high.value.domain.ip6.
+2.0.0.0.f.f.f.f.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR needs.review.domain.ip6.
diff --git a/docker/bind9/zones/partition3/10.10.in-addr.arpa b/docker/bind9/zones/partition3/10.10.in-addr.arpa
new file mode 100644
index 000000000..34fe12f4f
--- /dev/null
+++ b/docker/bind9/zones/partition3/10.10.in-addr.arpa
@@ -0,0 +1,10 @@
+$ttl 38400
+3.10.in-addr.arpa. IN SOA 172.17.42.1. admin.vinyldns.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+3.10.in-addr.arpa. IN NS 172.17.42.1.
+24.0 IN PTR www.vinyl.
+25.0 IN PTR mail.vinyl.
diff --git a/docker/bind9/zones/partition3/192^30.2.0.192.in-addr.arpa b/docker/bind9/zones/partition3/192^30.2.0.192.in-addr.arpa
new file mode 100644
index 000000000..bcda0b5d2
--- /dev/null
+++ b/docker/bind9/zones/partition3/192^30.2.0.192.in-addr.arpa
@@ -0,0 +1,11 @@
+$ttl 38400
+192/30.3.0.192.in-addr.arpa. IN SOA 172.17.42.1. admin.vinyldns.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+192/30.3.0.192.in-addr.arpa. IN NS 172.17.42.1.
+192 IN PTR portal.vinyldns.
+194 IN PTR mail.vinyldns.
+195 IN PTR test.vinyldns.
diff --git a/docker/bind9/zones/partition3/2.0.192.in-addr.arpa b/docker/bind9/zones/partition3/2.0.192.in-addr.arpa
new file mode 100644
index 000000000..03cb1e7e6
--- /dev/null
+++ b/docker/bind9/zones/partition3/2.0.192.in-addr.arpa
@@ -0,0 +1,15 @@
+$ttl 38400
+3.0.192.in-addr.arpa. IN SOA 172.17.42.1. admin.vinyldns.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+3.0.192.in-addr.arpa. IN NS 172.17.42.1.
+192/30 IN NS 172.17.42.1.
+192 IN CNAME 192.192/30.2.0.192.in-addr.arpa.
+193 IN CNAME 193.192/30.2.0.192.in-addr.arpa.
+194 IN CNAME 194.192/30.2.0.192.in-addr.arpa.
+195 IN CNAME 195.192/30.2.0.192.in-addr.arpa.
+253 IN PTR high.value.domain.ip4.
+255 IN PTR needs.review.domain.ip4
diff --git a/docker/bind9/zones/partition3/child.parent.com.hosts b/docker/bind9/zones/partition3/child.parent.com.hosts
new file mode 100644
index 000000000..5411a8138
--- /dev/null
+++ b/docker/bind9/zones/partition3/child.parent.com.hosts
@@ -0,0 +1,9 @@
+$ttl 38400
+$ORIGIN child.parent.com3.
+@ IN SOA ns1.parent.com3. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+@ IN NS ns1.parent.com3.
diff --git a/docker/bind9/zones/partition3/dskey.example.com.hosts b/docker/bind9/zones/partition3/dskey.example.com.hosts
new file mode 100644
index 000000000..fd759aa84
--- /dev/null
+++ b/docker/bind9/zones/partition3/dskey.example.com.hosts
@@ -0,0 +1,9 @@
+$TTL 1h
+$ORIGIN dskey.example.com3.
+@ IN SOA ns1.parent.com3. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+dskey.example.com3. IN NS 172.17.42.1.
diff --git a/docker/bind9/zones/partition3/dummy.hosts b/docker/bind9/zones/partition3/dummy.hosts
new file mode 100644
index 000000000..a79b6a4f9
--- /dev/null
+++ b/docker/bind9/zones/partition3/dummy.hosts
@@ -0,0 +1,15 @@
+$ttl 38400
+dummy3. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+dummy3. IN NS 172.17.42.1.
+jenkins IN A 10.1.1.1
+foo IN A 2.2.2.2
+test IN A 3.3.3.3
+test IN A 4.4.4.4
+@ IN A 5.5.5.5
+already-exists IN A 6.6.6.6
+non-approved-delegation IN NS 7.7.7.7
diff --git a/docker/bind9/zones/partition3/example.com.hosts b/docker/bind9/zones/partition3/example.com.hosts
new file mode 100644
index 000000000..6eac59f8d
--- /dev/null
+++ b/docker/bind9/zones/partition3/example.com.hosts
@@ -0,0 +1,10 @@
+$TTL 1h
+$ORIGIN example.com3.
+@ IN SOA ns1.parent.com3. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+example.com3. IN NS 172.17.42.1.
+dskey IN NS 172.17.42.1.
diff --git a/docker/bind9/zones/partition3/invalid-zone.hosts b/docker/bind9/zones/partition3/invalid-zone.hosts
new file mode 100644
index 000000000..f9063bfc4
--- /dev/null
+++ b/docker/bind9/zones/partition3/invalid-zone.hosts
@@ -0,0 +1,17 @@
+$ttl 38400
+invalid-zone3. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+invalid-zone3. IN NS 172.17.42.1.
+invalid-zone3. IN NS not-approved.thing.com.
+invalid.child.invalid-zone3. IN NS 172.17.42.1.
+dotted.host.invalid-zone3. IN A 1.2.3.4
+jenkins IN A 10.1.1.1
+foo IN A 2.2.2.2
+test IN A 3.3.3.3
+test IN A 4.4.4.4
+@ IN A 5.5.5.5
+already-exists IN A 6.6.6.6
diff --git a/docker/bind9/zones/partition3/list-records.hosts b/docker/bind9/zones/partition3/list-records.hosts
new file mode 100644
index 000000000..48c26a6be
--- /dev/null
+++ b/docker/bind9/zones/partition3/list-records.hosts
@@ -0,0 +1,38 @@
+$ttl 38400
+list-records3. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+list-records3. IN NS 172.17.42.1.
+00-test-list-recordsets-0-A IN A 10.1.1.1
+00-test-list-recordsets-0-A IN A 10.2.2.2
+00-test-list-recordsets-0-CNAME IN CNAME cname1.
+00-test-list-recordsets-1-A IN A 10.1.1.1
+00-test-list-recordsets-1-A IN A 10.2.2.2
+00-test-list-recordsets-1-CNAME IN CNAME cname1.
+00-test-list-recordsets-2-A IN A 10.1.1.1
+00-test-list-recordsets-2-A IN A 10.2.2.2
+00-test-list-recordsets-2-CNAME IN CNAME cname1.
+00-test-list-recordsets-3-A IN A 10.1.1.1
+00-test-list-recordsets-3-A IN A 10.2.2.2
+00-test-list-recordsets-3-CNAME IN CNAME cname1.
+00-test-list-recordsets-4-A IN A 10.1.1.1
+00-test-list-recordsets-4-A IN A 10.2.2.2
+00-test-list-recordsets-4-CNAME IN CNAME cname1.
+00-test-list-recordsets-5-A IN A 10.1.1.1
+00-test-list-recordsets-5-A IN A 10.2.2.2
+00-test-list-recordsets-5-CNAME IN CNAME cname1.
+00-test-list-recordsets-6-A IN A 10.1.1.1
+00-test-list-recordsets-6-A IN A 10.2.2.2
+00-test-list-recordsets-6-CNAME IN CNAME cname1.
+00-test-list-recordsets-7-A IN A 10.1.1.1
+00-test-list-recordsets-7-A IN A 10.2.2.2
+00-test-list-recordsets-7-CNAME IN CNAME cname1.
+00-test-list-recordsets-8-A IN A 10.1.1.1
+00-test-list-recordsets-8-A IN A 10.2.2.2
+00-test-list-recordsets-8-CNAME IN CNAME cname1.
+00-test-list-recordsets-9-A IN A 10.1.1.1
+00-test-list-recordsets-9-A IN A 10.2.2.2
+00-test-list-recordsets-9-CNAME IN CNAME cname1.
diff --git a/docker/bind9/zones/partition3/list-zones-test-searched-1.hosts b/docker/bind9/zones/partition3/list-zones-test-searched-1.hosts
new file mode 100644
index 000000000..5b43f9d48
--- /dev/null
+++ b/docker/bind9/zones/partition3/list-zones-test-searched-1.hosts
@@ -0,0 +1,8 @@
+$ttl 38400
+list-zones-test-searched-13. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+list-zones-test-searched-13. IN NS 172.17.42.1.
diff --git a/docker/bind9/zones/partition3/list-zones-test-searched-2.hosts b/docker/bind9/zones/partition3/list-zones-test-searched-2.hosts
new file mode 100644
index 000000000..e29044539
--- /dev/null
+++ b/docker/bind9/zones/partition3/list-zones-test-searched-2.hosts
@@ -0,0 +1,8 @@
+$ttl 38400
+list-zones-test-searched-23. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+list-zones-test-searched-23. IN NS 172.17.42.1.
diff --git a/docker/bind9/zones/partition3/list-zones-test-searched-3.hosts b/docker/bind9/zones/partition3/list-zones-test-searched-3.hosts
new file mode 100644
index 000000000..1fbd2cd17
--- /dev/null
+++ b/docker/bind9/zones/partition3/list-zones-test-searched-3.hosts
@@ -0,0 +1,8 @@
+$ttl 38400
+list-zones-test-searched-33. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+list-zones-test-searched-33. IN NS 172.17.42.1.
diff --git a/docker/bind9/zones/partition3/list-zones-test-unfiltered-1.hosts b/docker/bind9/zones/partition3/list-zones-test-unfiltered-1.hosts
new file mode 100644
index 000000000..d70b6fda2
--- /dev/null
+++ b/docker/bind9/zones/partition3/list-zones-test-unfiltered-1.hosts
@@ -0,0 +1,8 @@
+$ttl 38400
+list-zones-test-unfiltered-13. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+list-zones-test-unfiltered-13. IN NS 172.17.42.1.
diff --git a/docker/bind9/zones/partition3/list-zones-test-unfiltered-2.hosts b/docker/bind9/zones/partition3/list-zones-test-unfiltered-2.hosts
new file mode 100644
index 000000000..e3f969a25
--- /dev/null
+++ b/docker/bind9/zones/partition3/list-zones-test-unfiltered-2.hosts
@@ -0,0 +1,8 @@
+$ttl 38400
+list-zones-test-unfiltered-23. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+list-zones-test-unfiltered-23. IN NS 172.17.42.1.
diff --git a/docker/bind9/zones/partition3/non.test.shared.hosts b/docker/bind9/zones/partition3/non.test.shared.hosts
new file mode 100644
index 000000000..f71303352
--- /dev/null
+++ b/docker/bind9/zones/partition3/non.test.shared.hosts
@@ -0,0 +1,13 @@
+$ttl 38400
+non.test.shared3. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+non.test.shared3. IN NS 172.17.42.1.
+@ IN A 1.1.1.1
+delete-test-batch IN A 2.2.2.2
+update-test-batch IN A 3.3.3.3
+delete-test IN A 4.4.4.4
+update-test IN A 5.5.5.5
diff --git a/docker/bind9/zones/partition3/not.loaded.hosts b/docker/bind9/zones/partition3/not.loaded.hosts
new file mode 100644
index 000000000..cc50178c2
--- /dev/null
+++ b/docker/bind9/zones/partition3/not.loaded.hosts
@@ -0,0 +1,9 @@
+$ttl 38400
+not.loaded3. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+not.loaded3. IN NS 172.17.42.1.
+foo IN A 1.1.1.1
diff --git a/docker/bind9/zones/partition3/ok.hosts b/docker/bind9/zones/partition3/ok.hosts
new file mode 100644
index 000000000..9690ef8bf
--- /dev/null
+++ b/docker/bind9/zones/partition3/ok.hosts
@@ -0,0 +1,16 @@
+$ttl 38400
+ok3. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+ok3. IN NS 172.17.42.1.
+jenkins IN A 10.1.1.1
+foo IN A 2.2.2.2
+test IN A 3.3.3.3
+test IN A 4.4.4.4
+@ IN A 5.5.5.5
+already-exists IN A 6.6.6.6
+dotted.a IN A 7.7.7.7
+dottedc.name IN CNAME test.example.com
diff --git a/docker/bind9/zones/partition3/old-shared.hosts b/docker/bind9/zones/partition3/old-shared.hosts
new file mode 100644
index 000000000..e30a3874f
--- /dev/null
+++ b/docker/bind9/zones/partition3/old-shared.hosts
@@ -0,0 +1,14 @@
+$ttl 38400
+old-shared3. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+old-shared3. IN NS 172.17.42.1.
+jenkins IN A 10.1.1.1
+foo IN A 2.2.2.2
+test IN A 3.3.3.3
+test IN A 4.4.4.4
+@ IN A 5.5.5.5
+already-exists IN A 6.6.6.6
diff --git a/docker/bind9/zones/partition3/old-vinyldns2.hosts b/docker/bind9/zones/partition3/old-vinyldns2.hosts
new file mode 100644
index 000000000..90b8b34d3
--- /dev/null
+++ b/docker/bind9/zones/partition3/old-vinyldns2.hosts
@@ -0,0 +1,14 @@
+$ttl 38400
+old-vinyldns23. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+old-vinyldns23. IN NS 172.17.42.1.
+jenkins IN A 10.1.1.1
+foo IN A 2.2.2.2
+test IN A 3.3.3.3
+test IN A 4.4.4.4
+@ IN A 5.5.5.5
+already-exists IN A 6.6.6.6
diff --git a/docker/bind9/zones/partition3/old-vinyldns3.hosts b/docker/bind9/zones/partition3/old-vinyldns3.hosts
new file mode 100644
index 000000000..04844dcf8
--- /dev/null
+++ b/docker/bind9/zones/partition3/old-vinyldns3.hosts
@@ -0,0 +1,14 @@
+$ttl 38400
+old-vinyldns33. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+old-vinyldns33. IN NS 172.17.42.1.
+jenkins IN A 10.1.1.1
+foo IN A 2.2.2.2
+test IN A 3.3.3.3
+test IN A 4.4.4.4
+@ IN A 5.5.5.5
+already-exists IN A 6.6.6.6
diff --git a/docker/bind9/zones/partition3/one-time-shared.hosts b/docker/bind9/zones/partition3/one-time-shared.hosts
new file mode 100644
index 000000000..6bd47a8b8
--- /dev/null
+++ b/docker/bind9/zones/partition3/one-time-shared.hosts
@@ -0,0 +1,8 @@
+$ttl 38400
+one-time-shared3. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+one-time-shared3. IN NS 172.17.42.1.
diff --git a/docker/bind9/zones/partition3/one-time.hosts b/docker/bind9/zones/partition3/one-time.hosts
new file mode 100644
index 000000000..05b7506e3
--- /dev/null
+++ b/docker/bind9/zones/partition3/one-time.hosts
@@ -0,0 +1,14 @@
+$ttl 38400
+one-time3. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+one-time3. IN NS 172.17.42.1.
+jenkins IN A 10.1.1.1
+foo IN A 2.2.2.2
+test IN A 3.3.3.3
+test IN A 4.4.4.4
+@ IN A 5.5.5.5
+already-exists IN A 6.6.6.6
diff --git a/docker/bind9/zones/partition3/open.hosts b/docker/bind9/zones/partition3/open.hosts
new file mode 100644
index 000000000..3ca4ab7c6
--- /dev/null
+++ b/docker/bind9/zones/partition3/open.hosts
@@ -0,0 +1,8 @@
+$ttl 38400
+open3. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+open3. IN NS 172.17.42.1.
diff --git a/docker/bind9/zones/partition3/parent.com.hosts b/docker/bind9/zones/partition3/parent.com.hosts
new file mode 100644
index 000000000..33b348661
--- /dev/null
+++ b/docker/bind9/zones/partition3/parent.com.hosts
@@ -0,0 +1,15 @@
+$ttl 38400
+$ORIGIN parent.com3.
+@ IN SOA ns1.parent.com3. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+parent.com3. IN NS ns1.parent.com3.
+jenkins IN A 10.1.1.1
+foo IN A 2.2.2.2
+test IN A 3.3.3.3
+test IN A 4.4.4.4
+already-exists IN A 6.6.6.6
+ns1 IN A 172.17.42.1
diff --git a/docker/bind9/zones/partition3/shared.hosts b/docker/bind9/zones/partition3/shared.hosts
new file mode 100644
index 000000000..38610bdc8
--- /dev/null
+++ b/docker/bind9/zones/partition3/shared.hosts
@@ -0,0 +1,16 @@
+$ttl 38400
+shared3. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+shared3. IN NS 172.17.42.1.
+jenkins IN A 10.1.1.1
+foo IN A 2.2.2.2
+test IN A 3.3.3.3
+test IN A 4.4.4.4
+@ IN A 5.5.5.5
+already-exists IN A 6.6.6.6
+delete-test-batch IN A 2.2.2.2
+update-test-batch IN A 3.3.3.3
diff --git a/docker/bind9/zones/partition3/sync-test.hosts b/docker/bind9/zones/partition3/sync-test.hosts
new file mode 100644
index 000000000..0fc832f5d
--- /dev/null
+++ b/docker/bind9/zones/partition3/sync-test.hosts
@@ -0,0 +1,17 @@
+$ttl 38400
+sync-test3. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+sync-test3. IN NS 172.17.42.1.
+jenkins IN A 10.1.1.1
+foo IN A 2.2.2.2
+test IN A 3.3.3.3
+test IN A 4.4.4.4
+@ IN A 5.5.5.5
+already-exists IN A 6.6.6.6
+fqdn.sync-test. IN A 7.7.7.7
+_sip._tcp IN SRV 10 60 5060 foo.sync-test.
+existing.dotted IN A 9.9.9.9
diff --git a/docker/bind9/zones/partition3/system-test-history.hosts b/docker/bind9/zones/partition3/system-test-history.hosts
new file mode 100644
index 000000000..dd1357713
--- /dev/null
+++ b/docker/bind9/zones/partition3/system-test-history.hosts
@@ -0,0 +1,14 @@
+$ttl 38400
+system-test-history3. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+system-test-history3. IN NS 172.17.42.1.
+jenkins IN A 10.1.1.1
+foo IN A 2.2.2.2
+test IN A 3.3.3.3
+test IN A 4.4.4.4
+@ IN A 5.5.5.5
+already-exists IN A 6.6.6.6
diff --git a/docker/bind9/zones/partition3/system-test.hosts b/docker/bind9/zones/partition3/system-test.hosts
new file mode 100644
index 000000000..eee3f457e
--- /dev/null
+++ b/docker/bind9/zones/partition3/system-test.hosts
@@ -0,0 +1,16 @@
+$ttl 38400
+system-test3. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+system-test3. IN NS 172.17.42.1.
+jenkins IN A 10.1.1.1
+foo IN A 2.2.2.2
+test IN A 3.3.3.3
+test IN A 4.4.4.4
+@ IN A 5.5.5.5
+already-exists IN A 6.6.6.6
+high-value-domain IN A 1.1.1.1
+high-VALUE-domain-UPPER-CASE IN A 1.1.1.1
diff --git a/docker/bind9/zones/partition3/vinyldns.hosts b/docker/bind9/zones/partition3/vinyldns.hosts
new file mode 100644
index 000000000..a890cdb81
--- /dev/null
+++ b/docker/bind9/zones/partition3/vinyldns.hosts
@@ -0,0 +1,14 @@
+$ttl 38400
+vinyldns3. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+vinyldns3. IN NS 172.17.42.1.
+jenkins IN A 10.1.1.1
+foo IN A 2.2.2.2
+test IN A 3.3.3.3
+test IN A 4.4.4.4
+@ IN A 5.5.5.5
+already-exists IN A 6.6.6.6
diff --git a/docker/bind9/zones/partition3/zone.requires.review.hosts b/docker/bind9/zones/partition3/zone.requires.review.hosts
new file mode 100644
index 000000000..28fa8cd08
--- /dev/null
+++ b/docker/bind9/zones/partition3/zone.requires.review.hosts
@@ -0,0 +1,11 @@
+$ttl 38400
+zone.requires.review3. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+zone.requires.review3. IN NS 172.17.42.1.
+@ IN A 1.1.1.1
+delete-test-batch IN A 2.2.2.2
+update-test-batch IN A 3.3.3.3
diff --git a/docker/bind9/zones/partition4/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa b/docker/bind9/zones/partition4/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
new file mode 100644
index 000000000..dc4531ba7
--- /dev/null
+++ b/docker/bind9/zones/partition4/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
@@ -0,0 +1,12 @@
+$ttl 38400
+0.0.0.1.4.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa. IN SOA 172.17.42.1. admin.vinyldns.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+0.0.0.1.4.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa. IN NS 172.17.42.1.
+4.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR www.vinyldns.
+5.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR mail.vinyldns.
+0.0.0.0.f.f.f.f.0.0.0.0.0.0.0.0 IN PTR high.value.domain.ip6.
+2.0.0.0.f.f.f.f.0.0.0.0.0.0.0.0 IN PTR needs.review.domain.ip6.
diff --git a/docker/bind9/zones/partition4/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa b/docker/bind9/zones/partition4/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
new file mode 100644
index 000000000..bd78df2a1
--- /dev/null
+++ b/docker/bind9/zones/partition4/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
@@ -0,0 +1,13 @@
+$ttl 38400
+4.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa. IN SOA 172.17.42.1. admin.vinyldns.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+4.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa. IN NS 172.17.42.1.
+0.0.0.1 IN NS 172.17.42.1.
+4.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR www.vinyldns.
+5.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR mail.vinyldns.
+0.0.0.0.f.f.f.f.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR high.value.domain.ip6.
+2.0.0.0.f.f.f.f.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR needs.review.domain.ip6.
diff --git a/docker/bind9/zones/partition4/10.10.in-addr.arpa b/docker/bind9/zones/partition4/10.10.in-addr.arpa
new file mode 100644
index 000000000..c25fe5314
--- /dev/null
+++ b/docker/bind9/zones/partition4/10.10.in-addr.arpa
@@ -0,0 +1,10 @@
+$ttl 38400
+4.10.in-addr.arpa. IN SOA 172.17.42.1. admin.vinyldns.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+4.10.in-addr.arpa.
IN NS 172.17.42.1.
+24.0 IN PTR www.vinyl.
+25.0 IN PTR mail.vinyl.
diff --git a/docker/bind9/zones/partition4/192^30.2.0.192.in-addr.arpa b/docker/bind9/zones/partition4/192^30.2.0.192.in-addr.arpa
new file mode 100644
index 000000000..cd0622f26
--- /dev/null
+++ b/docker/bind9/zones/partition4/192^30.2.0.192.in-addr.arpa
@@ -0,0 +1,11 @@
+$ttl 38400
+192/30.4.0.192.in-addr.arpa. IN SOA 172.17.42.1. admin.vinyldns.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+192/30.4.0.192.in-addr.arpa. IN NS 172.17.42.1.
+192 IN PTR portal.vinyldns.
+194 IN PTR mail.vinyldns.
+195 IN PTR test.vinyldns.
diff --git a/docker/bind9/zones/partition4/2.0.192.in-addr.arpa b/docker/bind9/zones/partition4/2.0.192.in-addr.arpa
new file mode 100644
index 000000000..3763763f2
--- /dev/null
+++ b/docker/bind9/zones/partition4/2.0.192.in-addr.arpa
@@ -0,0 +1,15 @@
+$ttl 38400
+4.0.192.in-addr.arpa. IN SOA 172.17.42.1. admin.vinyldns.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+4.0.192.in-addr.arpa. IN NS 172.17.42.1.
+192/30 IN NS 172.17.42.1.
+192 IN CNAME 192.192/30.2.0.192.in-addr.arpa.
+193 IN CNAME 193.192/30.2.0.192.in-addr.arpa.
+194 IN CNAME 194.192/30.2.0.192.in-addr.arpa.
+195 IN CNAME 195.192/30.2.0.192.in-addr.arpa.
+253 IN PTR high.value.domain.ip4.
+255 IN PTR needs.review.domain.ip4
diff --git a/docker/bind9/zones/partition4/child.parent.com.hosts b/docker/bind9/zones/partition4/child.parent.com.hosts
new file mode 100644
index 000000000..510870422
--- /dev/null
+++ b/docker/bind9/zones/partition4/child.parent.com.hosts
@@ -0,0 +1,9 @@
+$ttl 38400
+$ORIGIN child.parent.com4.
+@ IN SOA ns1.parent.com4. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+@ IN NS ns1.parent.com4.
diff --git a/docker/bind9/zones/partition4/dskey.example.com.hosts b/docker/bind9/zones/partition4/dskey.example.com.hosts
new file mode 100644
index 000000000..dc31f190f
--- /dev/null
+++ b/docker/bind9/zones/partition4/dskey.example.com.hosts
@@ -0,0 +1,9 @@
+$TTL 1h
+$ORIGIN dskey.example.com4.
+@ IN SOA ns1.parent.com4. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+dskey.example.com4. IN NS 172.17.42.1.
diff --git a/docker/bind9/zones/partition4/dummy.hosts b/docker/bind9/zones/partition4/dummy.hosts
new file mode 100644
index 000000000..518db4e05
--- /dev/null
+++ b/docker/bind9/zones/partition4/dummy.hosts
@@ -0,0 +1,15 @@
+$ttl 38400
+dummy4. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+dummy4. IN NS 172.17.42.1.
+jenkins IN A 10.1.1.1
+foo IN A 2.2.2.2
+test IN A 3.3.3.3
+test IN A 4.4.4.4
+@ IN A 5.5.5.5
+already-exists IN A 6.6.6.6
+non-approved-delegation IN NS 7.7.7.7
diff --git a/docker/bind9/zones/partition4/example.com.hosts b/docker/bind9/zones/partition4/example.com.hosts
new file mode 100644
index 000000000..357c435dd
--- /dev/null
+++ b/docker/bind9/zones/partition4/example.com.hosts
@@ -0,0 +1,10 @@
+$TTL 1h
+$ORIGIN example.com4.
+@ IN SOA ns1.parent.com4. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+example.com4. IN NS 172.17.42.1.
+dskey IN NS 172.17.42.1.
diff --git a/docker/bind9/zones/partition4/invalid-zone.hosts b/docker/bind9/zones/partition4/invalid-zone.hosts
new file mode 100644
index 000000000..67ac92d7e
--- /dev/null
+++ b/docker/bind9/zones/partition4/invalid-zone.hosts
@@ -0,0 +1,17 @@
+$ttl 38400
+invalid-zone4. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+invalid-zone4. IN NS 172.17.42.1.
+invalid-zone4. IN NS not-approved.thing.com.
+invalid.child.invalid-zone4. IN NS 172.17.42.1.
+dotted.host.invalid-zone4.
IN A 1.2.3.4
+jenkins IN A 10.1.1.1
+foo IN A 2.2.2.2
+test IN A 3.3.3.3
+test IN A 4.4.4.4
+@ IN A 5.5.5.5
+already-exists IN A 6.6.6.6
diff --git a/docker/bind9/zones/partition4/list-records.hosts b/docker/bind9/zones/partition4/list-records.hosts
new file mode 100644
index 000000000..9e2b58fd4
--- /dev/null
+++ b/docker/bind9/zones/partition4/list-records.hosts
@@ -0,0 +1,38 @@
+$ttl 38400
+list-records4. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+list-records4. IN NS 172.17.42.1.
+00-test-list-recordsets-0-A IN A 10.1.1.1
+00-test-list-recordsets-0-A IN A 10.2.2.2
+00-test-list-recordsets-0-CNAME IN CNAME cname1.
+00-test-list-recordsets-1-A IN A 10.1.1.1
+00-test-list-recordsets-1-A IN A 10.2.2.2
+00-test-list-recordsets-1-CNAME IN CNAME cname1.
+00-test-list-recordsets-2-A IN A 10.1.1.1
+00-test-list-recordsets-2-A IN A 10.2.2.2
+00-test-list-recordsets-2-CNAME IN CNAME cname1.
+00-test-list-recordsets-3-A IN A 10.1.1.1
+00-test-list-recordsets-3-A IN A 10.2.2.2
+00-test-list-recordsets-3-CNAME IN CNAME cname1.
+00-test-list-recordsets-4-A IN A 10.1.1.1
+00-test-list-recordsets-4-A IN A 10.2.2.2
+00-test-list-recordsets-4-CNAME IN CNAME cname1.
+00-test-list-recordsets-5-A IN A 10.1.1.1
+00-test-list-recordsets-5-A IN A 10.2.2.2
+00-test-list-recordsets-5-CNAME IN CNAME cname1.
+00-test-list-recordsets-6-A IN A 10.1.1.1
+00-test-list-recordsets-6-A IN A 10.2.2.2
+00-test-list-recordsets-6-CNAME IN CNAME cname1.
+00-test-list-recordsets-7-A IN A 10.1.1.1
+00-test-list-recordsets-7-A IN A 10.2.2.2
+00-test-list-recordsets-7-CNAME IN CNAME cname1.
+00-test-list-recordsets-8-A IN A 10.1.1.1
+00-test-list-recordsets-8-A IN A 10.2.2.2
+00-test-list-recordsets-8-CNAME IN CNAME cname1.
+00-test-list-recordsets-9-A IN A 10.1.1.1
+00-test-list-recordsets-9-A IN A 10.2.2.2
+00-test-list-recordsets-9-CNAME IN CNAME cname1.
diff --git a/docker/bind9/zones/partition4/list-zones-test-searched-1.hosts b/docker/bind9/zones/partition4/list-zones-test-searched-1.hosts
new file mode 100644
index 000000000..08f2def7b
--- /dev/null
+++ b/docker/bind9/zones/partition4/list-zones-test-searched-1.hosts
@@ -0,0 +1,8 @@
+$ttl 38400
+list-zones-test-searched-14. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+list-zones-test-searched-14. IN NS 172.17.42.1.
diff --git a/docker/bind9/zones/partition4/list-zones-test-searched-2.hosts b/docker/bind9/zones/partition4/list-zones-test-searched-2.hosts
new file mode 100644
index 000000000..eac40d747
--- /dev/null
+++ b/docker/bind9/zones/partition4/list-zones-test-searched-2.hosts
@@ -0,0 +1,8 @@
+$ttl 38400
+list-zones-test-searched-24. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+list-zones-test-searched-24. IN NS 172.17.42.1.
diff --git a/docker/bind9/zones/partition4/list-zones-test-searched-3.hosts b/docker/bind9/zones/partition4/list-zones-test-searched-3.hosts
new file mode 100644
index 000000000..418d82843
--- /dev/null
+++ b/docker/bind9/zones/partition4/list-zones-test-searched-3.hosts
@@ -0,0 +1,8 @@
+$ttl 38400
+list-zones-test-searched-34. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+list-zones-test-searched-34. IN NS 172.17.42.1.
diff --git a/docker/bind9/zones/partition4/list-zones-test-unfiltered-1.hosts b/docker/bind9/zones/partition4/list-zones-test-unfiltered-1.hosts
new file mode 100644
index 000000000..4b68e3b88
--- /dev/null
+++ b/docker/bind9/zones/partition4/list-zones-test-unfiltered-1.hosts
@@ -0,0 +1,8 @@
+$ttl 38400
+list-zones-test-unfiltered-14. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+list-zones-test-unfiltered-14. IN NS 172.17.42.1.
diff --git a/docker/bind9/zones/partition4/list-zones-test-unfiltered-2.hosts b/docker/bind9/zones/partition4/list-zones-test-unfiltered-2.hosts
new file mode 100644
index 000000000..3f93f6183
--- /dev/null
+++ b/docker/bind9/zones/partition4/list-zones-test-unfiltered-2.hosts
@@ -0,0 +1,8 @@
+$ttl 38400
+list-zones-test-unfiltered-24. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+list-zones-test-unfiltered-24. IN NS 172.17.42.1.
diff --git a/docker/bind9/zones/partition4/non.test.shared.hosts b/docker/bind9/zones/partition4/non.test.shared.hosts
new file mode 100644
index 000000000..180a85f22
--- /dev/null
+++ b/docker/bind9/zones/partition4/non.test.shared.hosts
@@ -0,0 +1,13 @@
+$ttl 38400
+non.test.shared4. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+non.test.shared4. IN NS 172.17.42.1.
+@ IN A 1.1.1.1
+delete-test-batch IN A 2.2.2.2
+update-test-batch IN A 3.3.3.3
+delete-test IN A 4.4.4.4
+update-test IN A 5.5.5.5
diff --git a/docker/bind9/zones/partition4/not.loaded.hosts b/docker/bind9/zones/partition4/not.loaded.hosts
new file mode 100644
index 000000000..510738a33
--- /dev/null
+++ b/docker/bind9/zones/partition4/not.loaded.hosts
@@ -0,0 +1,9 @@
+$ttl 38400
+not.loaded4. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+not.loaded4. IN NS 172.17.42.1.
+foo IN A 1.1.1.1
diff --git a/docker/bind9/zones/partition4/ok.hosts b/docker/bind9/zones/partition4/ok.hosts
new file mode 100644
index 000000000..ff6b2e917
--- /dev/null
+++ b/docker/bind9/zones/partition4/ok.hosts
@@ -0,0 +1,16 @@
+$ttl 38400
+ok4. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+ok4. IN NS 172.17.42.1.
+jenkins IN A 10.1.1.1
+foo IN A 2.2.2.2
+test IN A 3.3.3.3
+test IN A 4.4.4.4
+@ IN A 5.5.5.5
+already-exists IN A 6.6.6.6
+dotted.a IN A 7.7.7.7
+dottedc.name IN CNAME test.example.com
diff --git a/docker/bind9/zones/partition4/old-shared.hosts b/docker/bind9/zones/partition4/old-shared.hosts
new file mode 100644
index 000000000..84a666607
--- /dev/null
+++ b/docker/bind9/zones/partition4/old-shared.hosts
@@ -0,0 +1,14 @@
+$ttl 38400
+old-shared4. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+old-shared4. IN NS 172.17.42.1.
+jenkins IN A 10.1.1.1
+foo IN A 2.2.2.2
+test IN A 3.3.3.3
+test IN A 4.4.4.4
+@ IN A 5.5.5.5
+already-exists IN A 6.6.6.6
diff --git a/docker/bind9/zones/partition4/old-vinyldns2.hosts b/docker/bind9/zones/partition4/old-vinyldns2.hosts
new file mode 100644
index 000000000..05ae0ff9b
--- /dev/null
+++ b/docker/bind9/zones/partition4/old-vinyldns2.hosts
@@ -0,0 +1,14 @@
+$ttl 38400
+old-vinyldns24. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+old-vinyldns24. IN NS 172.17.42.1.
+jenkins IN A 10.1.1.1
+foo IN A 2.2.2.2
+test IN A 3.3.3.3
+test IN A 4.4.4.4
+@ IN A 5.5.5.5
+already-exists IN A 6.6.6.6
diff --git a/docker/bind9/zones/partition4/old-vinyldns3.hosts b/docker/bind9/zones/partition4/old-vinyldns3.hosts
new file mode 100644
index 000000000..633881d7d
--- /dev/null
+++ b/docker/bind9/zones/partition4/old-vinyldns3.hosts
@@ -0,0 +1,14 @@
+$ttl 38400
+old-vinyldns34. IN SOA 172.17.42.1. admin.test.com. (
+ 1439234395
+ 10800
+ 3600
+ 604800
+ 38400 )
+old-vinyldns34. IN NS 172.17.42.1.
+jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/bind9/zones/partition4/one-time-shared.hosts b/docker/bind9/zones/partition4/one-time-shared.hosts new file mode 100644 index 000000000..156cee37a --- /dev/null +++ b/docker/bind9/zones/partition4/one-time-shared.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +one-time-shared4. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +one-time-shared4. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/partition4/one-time.hosts b/docker/bind9/zones/partition4/one-time.hosts new file mode 100644 index 000000000..e62427e5a --- /dev/null +++ b/docker/bind9/zones/partition4/one-time.hosts @@ -0,0 +1,14 @@ +$ttl 38400 +one-time4. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +one-time4. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/bind9/zones/partition4/open.hosts b/docker/bind9/zones/partition4/open.hosts new file mode 100644 index 000000000..7870b2dd4 --- /dev/null +++ b/docker/bind9/zones/partition4/open.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +open4. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +open4. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/partition4/parent.com.hosts b/docker/bind9/zones/partition4/parent.com.hosts new file mode 100644 index 000000000..2d47b276a --- /dev/null +++ b/docker/bind9/zones/partition4/parent.com.hosts @@ -0,0 +1,15 @@ +$ttl 38400 +$ORIGIN parent.com4. +@ IN SOA ns1.parent.com4. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +parent.com4. IN NS ns1.parent.com4. 
+jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +already-exists IN A 6.6.6.6 +ns1 IN A 172.17.42.1 diff --git a/docker/bind9/zones/partition4/shared.hosts b/docker/bind9/zones/partition4/shared.hosts new file mode 100644 index 000000000..c4b9a4c8f --- /dev/null +++ b/docker/bind9/zones/partition4/shared.hosts @@ -0,0 +1,16 @@ +$ttl 38400 +shared4. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +shared4. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 +delete-test-batch IN A 2.2.2.2 +update-test-batch IN A 3.3.3.3 diff --git a/docker/bind9/zones/partition4/sync-test.hosts b/docker/bind9/zones/partition4/sync-test.hosts new file mode 100644 index 000000000..535849f62 --- /dev/null +++ b/docker/bind9/zones/partition4/sync-test.hosts @@ -0,0 +1,17 @@ +$ttl 38400 +sync-test4. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +sync-test4. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 +fqdn.sync-test. IN A 7.7.7.7 +_sip._tcp IN SRV 10 60 5060 foo.sync-test. +existing.dotted IN A 9.9.9.9 diff --git a/docker/bind9/zones/partition4/system-test-history.hosts b/docker/bind9/zones/partition4/system-test-history.hosts new file mode 100644 index 000000000..d72deaf42 --- /dev/null +++ b/docker/bind9/zones/partition4/system-test-history.hosts @@ -0,0 +1,14 @@ +$ttl 38400 +system-test-history4. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +system-test-history4. IN NS 172.17.42.1. 
+jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/bind9/zones/partition4/system-test.hosts b/docker/bind9/zones/partition4/system-test.hosts new file mode 100644 index 000000000..877aa936b --- /dev/null +++ b/docker/bind9/zones/partition4/system-test.hosts @@ -0,0 +1,16 @@ +$ttl 38400 +system-test4. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +system-test4. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 +high-value-domain IN A 1.1.1.1 +high-VALUE-domain-UPPER-CASE IN A 1.1.1.1 diff --git a/docker/bind9/zones/partition4/vinyldns.hosts b/docker/bind9/zones/partition4/vinyldns.hosts new file mode 100644 index 000000000..66d785b41 --- /dev/null +++ b/docker/bind9/zones/partition4/vinyldns.hosts @@ -0,0 +1,14 @@ +$ttl 38400 +vinyldns4. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +vinyldns4. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/bind9/zones/partition4/zone.requires.review.hosts b/docker/bind9/zones/partition4/zone.requires.review.hosts new file mode 100644 index 000000000..4879ca584 --- /dev/null +++ b/docker/bind9/zones/partition4/zone.requires.review.hosts @@ -0,0 +1,11 @@ +$ttl 38400 +zone.requires.review4. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +zone.requires.review4. IN NS 172.17.42.1. +@ IN A 1.1.1.1 +delete-test-batch IN A 2.2.2.2 +update-test-batch IN A 3.3.3.3 diff --git a/docker/bind9/zones/sync-test.hosts b/docker/bind9/zones/sync-test.hosts deleted file mode 100755 index 72024b633..000000000 --- a/docker/bind9/zones/sync-test.hosts +++ /dev/null @@ -1,17 +0,0 @@ -$ttl 38400 -sync-test. IN SOA 172.17.42.1. admin.test.com. 
( - 1439234395 - 10800 - 3600 - 604800 - 38400 ) -sync-test. IN NS 172.17.42.1. -jenkins IN A 10.1.1.1 -foo IN A 2.2.2.2 -test IN A 3.3.3.3 -test IN A 4.4.4.4 -@ IN A 5.5.5.5 -already-exists IN A 6.6.6.6 -fqdn.sync-test. IN A 7.7.7.7 -_sip._tcp IN SRV 10 60 5060 foo.sync-test. -existing.dotted IN A 9.9.9.9 diff --git a/modules/api/functional_test/.gitignore b/modules/api/functional_test/.gitignore new file mode 100755 index 000000000..1b1553d23 --- /dev/null +++ b/modules/api/functional_test/.gitignore @@ -0,0 +1,2 @@ +.venv_win +.pytest_cache \ No newline at end of file diff --git a/modules/api/functional_test/__init__.py b/modules/api/functional_test/__init__.py new file mode 100755 index 000000000..e69de29bb diff --git a/modules/api/functional_test/aws_request_signer.py b/modules/api/functional_test/aws_request_signer.py new file mode 100644 index 000000000..088d15d4c --- /dev/null +++ b/modules/api/functional_test/aws_request_signer.py @@ -0,0 +1,34 @@ +from urllib.parse import urljoin + +import boto3 +from botocore.auth import SigV4Auth +from botocore.awsrequest import AWSRequest +from botocore.compat import HTTPHeaders + +REGION_NAME = "us-east-1" +SERVICE_NAME = "VinylDNS" + + +class AwsSigV4RequestSigner(object): + def __init__(self, index_url: str, access_key: str, secret_access_key: str): + self.url = index_url + self.boto_session = boto3.Session( + region_name=REGION_NAME, + aws_access_key_id=access_key, + aws_secret_access_key=secret_access_key) + + def sign_request_headers(self, method: str, path: str, headers: dict, body: str, params: object = None) -> HTTPHeaders: + """ + Construct the request headers, including the signature + + :param method: The HTTP method + :param path: The URL path + :param headers: The request headers + :param body: The request body + :param params: The query parameters + :return: The request headers, including the computed SigV4 signature + """ + request = AWSRequest(method=method, url=urljoin(self.url, path), auth_path=path, data=body, params=params, headers=headers) + 
SigV4Auth(self.boto_session.get_credentials(), SERVICE_NAME, REGION_NAME).add_auth(request) + + return request.headers diff --git a/modules/api/functional_test/bootstrap.sh b/modules/api/functional_test/bootstrap.sh deleted file mode 100755 index 0c43fe630..000000000 --- a/modules/api/functional_test/bootstrap.sh +++ /dev/null @@ -1,12 +0,0 @@ -#!/bin/bash -e - -if [ ! -d "./.virtualenv" ]; then - echo "Creating virtualenv..." - virtualenv --clear --python="$(which python2.7)" ./.virtualenv -fi - -if ! diff ./requirements.txt ./.virtualenv/requirements.txt &> /dev/null; then - echo "Installing dependencies..." - .virtualenv/bin/python ./.virtualenv/bin/pip install --index-url https://pypi.python.org/simple/ -r ./requirements.txt - cp ./requirements.txt ./.virtualenv/ -fi diff --git a/modules/api/functional_test/boto_request_signer.py b/modules/api/functional_test/boto_request_signer.py deleted file mode 100644 index f6cbc53a3..000000000 --- a/modules/api/functional_test/boto_request_signer.py +++ /dev/null @@ -1,103 +0,0 @@ -import logging - -from datetime import datetime -from hashlib import sha256 - -from boto.dynamodb2.layer1 import DynamoDBConnection - -import requests.compat as urlparse - -logger = logging.getLogger(__name__) - -__all__ = [u'BotoRequestSigner'] - - -class BotoRequestSigner(object): - - def __init__(self, index_url, access_key, secret_access_key): - url = urlparse.urlparse(index_url) - self.boto_connection = DynamoDBConnection( - host = url.hostname, - port = url.port, - aws_access_key_id = access_key, - aws_secret_access_key = secret_access_key, - is_secure = False) - - @staticmethod - def canonical_date(headers): - """Derive canonical date (ISO 8601 string) from headers if possible, - or synthesize it if no usable header exists.""" - iso_format = u'%Y%m%dT%H%M%SZ' - http_format = u'%a, %d %b %Y %H:%M:%S GMT' - - def try_parse(date_string, format): - if date_string is None: - return None - try: - return datetime.strptime(date_string, format) - 
except ValueError: - return None - - amz_date = try_parse(headers.get(u'X-Amz-Date'), iso_format) - http_date = try_parse(headers.get(u'Date'), http_format) - fallback_date = datetime.utcnow() - - date = next(d for d in [amz_date, http_date, fallback_date] if d is not None) - return date.strftime(iso_format) - - def build_auth_header(self, method, path, headers, body, params=None): - """Construct an Authorization header, using boto.""" - - request = self.boto_connection.build_base_http_request( - method=method, - path=path, - auth_path=path, - headers=headers, - data=body, - params=params or {}) - - auth_handler = self.boto_connection._auth_handler - - timestamp = BotoRequestSigner.canonical_date(headers) - request.timestamp = timestamp[0:8] - - request.region_name = u'us-east-1' - request.service_name = u'VinylDNS' - - credential_scope = u'/'.join([request.timestamp, request.region_name, request.service_name, u'aws4_request']) - - canonical_request = auth_handler.canonical_request(request) - split_request = canonical_request.split('\n') - - if params != {} and split_request[2] == '': - split_request[2] = self.generate_canonical_query_string(params) - canonical_request = '\n'.join(split_request) - hashed_request = sha256(canonical_request.encode(u'utf-8')).hexdigest() - - string_to_sign = u'\n'.join([u'AWS4-HMAC-SHA256', timestamp, credential_scope, hashed_request]) - signature = auth_handler.signature(request, string_to_sign) - headers_to_sign = auth_handler.headers_to_sign(request) - - auth_header = u','.join([ - u'AWS4-HMAC-SHA256 Credential=%s' % auth_handler.scope(request), - u'SignedHeaders=%s' % auth_handler.signed_headers(headers_to_sign), - u'Signature=%s' % signature]) - - return auth_header - - @staticmethod - def generate_canonical_query_string(params): - """ - Using in place of canonical_query_string from boto/auth.py to support POST requests with query parameters - """ - post_params = [] - for param in sorted(params): - value = 
params[param].encode('utf-8') - import urllib - try: - post_params.append('%s=%s' % (urllib.parse.quote(param, safe='-_.~'), - urllib.parse.quote(value, safe='-_.~'))) - except: - post_params.append('%s=%s' % (urllib.quote(param, safe='-_.~'), - urllib.quote(value, safe='-_.~'))) - return '&'.join(post_params) diff --git a/modules/api/functional_test/conftest.py b/modules/api/functional_test/conftest.py index fb0f04a96..2e89919d1 100644 --- a/modules/api/functional_test/conftest.py +++ b/modules/api/functional_test/conftest.py @@ -1,103 +1,116 @@ +import ipaddress +import logging import os +import ssl +import sys + +import _pytest.config +import pytest from vinyldns_context import VinylDNSTestContext +logger = logging.getLogger(__name__) +logging.basicConfig( + level=os.environ.get("VINYL_LOG_LEVEL") or logging.INFO, + format="%(asctime)s [%(levelname)s] %(message)s", + handlers=[ + logging.StreamHandler(stream=sys.stderr) + ] +) +config_context = {} -def pytest_addoption(parser): + +def pytest_addoption(parser: _pytest.config.argparsing.Parser) -> None: """ Adds additional options that we can parse when we run the tests, stores them in the parser / py.test context """ - parser.addoption("--url", dest="url", action="store", default="http://localhost:9000", - help="URL for application to root") - parser.addoption("--dns-ip", dest="dns_ip", action="store", default="127.0.0.1:19001", - help="The ip address for the dns server to use for the tests") - parser.addoption("--dns-zone", dest="dns_zone", action="store", default="vinyldns.", - help="The zone name that will be used for testing") - parser.addoption("--dns-key-name", dest="dns_key_name", action="store", default="vinyldns.", - help="The name of the key used to sign updates for the zone") - parser.addoption("--dns-key", dest="dns_key", action="store", default="nzisn+4G2ldMn0q1CV3vsg==", - help="The tsig key") + parser.addoption("--url", dest="url", action="store", default="http://localhost:9000", help="URL for 
application to root") + parser.addoption("--dns-ip", dest="dns_ip", action="store", default="127.0.0.1:19001", help="The ip address for the dns name server to update") + parser.addoption("--resolver-ip", dest="resolver_ip", action="store", help="The ip address for the dns server to use for the tests during resolution. This is usually the same as `--dns-ip`") + parser.addoption("--dns-zone", dest="dns_zone", action="store", default="vinyldns.", help="The zone name that will be used for testing") + parser.addoption("--dns-key-name", dest="dns_key_name", action="store", default="vinyldns.", help="The name of the key used to sign updates for the zone") + parser.addoption("--dns-key", dest="dns_key", action="store", default="nzisn+4G2ldMn0q1CV3vsg==", help="The TSIG key") + parser.addoption("--dns-key-algo", dest="dns_key_algo", action="store", default="HMAC-MD5", help="The TSIG key algorithm") # optional - parser.addoption("--basic-auth", dest="basic_auth_creds", - help="Basic auth credentials in 'user:pass' format") - parser.addoption("--basic-auth-realm", dest="basic_auth_realm", - help="Basic auth realm to use with credentials supplied by \"-b\"") - parser.addoption("--iauth-creds", dest="iauth_creds", - help="Intermediary auth (codebig style) in 'key:secret' format") - parser.addoption("--oauth-creds", dest="oauth_creds", - help="OAuth credentials in consumer:secret format") - parser.addoption("--environment", dest="cim_env", action="store", default="test", - help="CIM_ENV that we are testing against.") - parser.addoption("--teardown", dest="teardown", action="store", default="True", - help="True | False - Whether to teardown the test fixture, or leave it for another run") + parser.addoption("--basic-auth", dest="basic_auth_creds", help="Basic auth credentials in `user:pass` format") + parser.addoption("--basic-auth-realm", dest="basic_auth_realm", help="Basic auth realm to use with credentials supplied by `-b`") + parser.addoption("--iauth-creds", 
dest="iauth_creds", help="Intermediary auth in `key:secret` format") + parser.addoption("--oauth-creds", dest="oauth_creds", help="OAuth credentials in `consumer:secret` format") + parser.addoption("--environment", dest="environment", action="store", default="test", help="Environment that we are testing against") + parser.addoption("--teardown", dest="teardown", action="store", default="True", help="True to teardown the test fixture; false to leave it for another run") + parser.addoption("--enable-safety_check", dest="enable_safety_check", action="store_true", + help="If provided, enable object mutation safety checks; otherwise safety checks are disabled. " + "This is a handy development tool to catch rogue tests mutating data which can affect other tests.") -def pytest_configure(config): +def pytest_configure(config: _pytest.config.Config) -> None: """ Loads the test context since we are no longer using run.py """ + logger.info("Starting configuration") # Monkey patch ssl so we do not verify ssl certs - import ssl - try: - _create_unverified_https_context = ssl._create_unverified_context - except AttributeError: - # Legacy Python that doesn't verify HTTPS certificates by default - pass - else: - # Handle target environment that doesn't support HTTPS verification - ssl._create_default_https_context = _create_unverified_https_context + _create_unverified_https_context = ssl._create_unverified_context - url = config.getoption("url", default="http://localhost:9000/") - if not url.endswith('/'): - url += '/' + # Handle target environment that doesn't support HTTPS verification + ssl._create_default_https_context = _create_unverified_https_context - import sys - sys.dont_write_bytecode = True + url = config.getoption("url") + if not url.endswith("/"): + url += "/" - VinylDNSTestContext.configure(config.getoption("dns_ip"), - config.getoption("dns_zone"), - config.getoption("dns_key_name"), - config.getoption("dns_key"), - config.getoption("url"), - 
config.getoption("teardown")) + # Define markers + config.addinivalue_line("markers", "serial") + config.addinivalue_line("markers", "skip_production") + config.addinivalue_line("markers", "manual_batch_review") - from shared_zone_test_context import SharedZoneTestContext - if not hasattr(config, 'workerinput'): - print 'Master, standing up the test fixture...' - # use the fixture file if it exists - if os.path.isfile('tmp.out'): - print 'Fixture file found, assuming the fixture file' - SharedZoneTestContext('tmp.out') - else: - print 'No fixture file found, loading a new test fixture' - ctx = SharedZoneTestContext() - ctx.out_fixture_file("tmp.out") - else: - print 'This is a worker' + name_server_ip = retrieve_resolver(config.getoption("dns_ip")) + VinylDNSTestContext.configure(name_server_ip=name_server_ip, + resolver_ip=retrieve_resolver(config.getoption("resolver_ip", name_server_ip) or name_server_ip), + zone=config.getoption("dns_zone"), + key_name=config.getoption("dns_key_name"), + key=config.getoption("dns_key"), + url=url, + teardown=config.getoption("teardown").lower() == "true", + key_algo=config.getoption("dns_key_algo"), + enable_safety_check=config.getoption("enable_safety_check")) -def pytest_unconfigure(config): - # this attribute is only set on workers - print 'Master exiting...' - if not hasattr(config, 'workerinput') and VinylDNSTestContext.teardown: - print 'Master cleaning up...' - from shared_zone_test_context import SharedZoneTestContext - ctx = SharedZoneTestContext('tmp.out') - ctx.tear_down() - os.remove('tmp.out') - else: - print 'Worker exiting...' 
- - -def pytest_report_header(config): +def pytest_report_header(config: _pytest.config.Config) -> str: """ Overrides the test result header like we do in pyfunc test """ - header = "Testing against CIM_ENV " + config.getoption("cim_env") - header += "\r\nURL: " + config.getoption("url") - header += "\r\nRunning from directory " + os.getcwd() - header += '\r\nTest shim directory ' + os.path.dirname(__file__) - header += "\r\nDNS IP: " + config.getoption("dns_ip") + logger.debug("testing!") + header = "Testing against environment " + config.getoption("environment") + header += "\nURL: " + config.getoption("url") + header += "\nRunning from directory " + os.getcwd() + header += "\nTest shim directory " + os.path.dirname(__file__) + header += "\nDNS IP: " + config.getoption("dns_ip") return header + + +def retrieve_resolver(resolver_name: str) -> str: + """ + Retrieves the ip address of the DNS resolver when given a hostname + :param resolver_name: The name/ip of the resolver + :return: The IP address, and optionally port, of the resolver + """ + parts = resolver_name.split(":") + resolver_address = parts[0] + try: + ipaddress.ip_address(parts[0]) + return resolver_name + except ValueError: + logger.warning("`--dns_ip` is set to `%s`, which isn't a valid ip/port combination (hostname?)", resolver_name) + try: + import socket + resolver_address = socket.gethostbyname(parts[0]) + resolver_address = [resolver_address] + parts[1:] + resolver_address = ":".join(resolver_address) + logger.warning("Translating `%s` resolver to `%s`", resolver_name, resolver_address) + except: + logger.error("Cannot translate `%s` into a usable resolver address", resolver_name) + pytest.exit(1) + + return resolver_address diff --git a/modules/api/functional_test/live_tests/authentication_test.py b/modules/api/functional_test/live_tests/authentication_test.py index b10c51a79..9a83ae3fb 100644 --- a/modules/api/functional_test/live_tests/authentication_test.py +++ 
b/modules/api/functional_test/live_tests/authentication_test.py @@ -10,7 +10,7 @@ def test_request_fails_when_user_account_is_locked(): """ Test request fails with Forbidden (403) when user account is locked """ - client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'lockedAccessKey', 'lockedSecretKey') + client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "lockedAccessKey", "lockedSecretKey") client.list_batch_change_summaries(status=403) @@ -18,7 +18,7 @@ def test_request_fails_when_user_is_not_found(): """ Test request fails with Unauthorized (401) when user account is not found """ - client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'unknownAccessKey', 'anyAccessSecretKey') + client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "unknownAccessKey", "anyAccessSecretKey") client.list_batch_change_summaries(status=401) @@ -27,7 +27,7 @@ def test_request_succeeds_when_user_is_found_and_not_locked(): """ Test request success with Success (200) when user account is found and not locked """ - client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'okAccessKey', 'okSecretKey') + client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "okAccessKey", "okSecretKey") client.list_batch_change_summaries(status=200) @@ -36,9 +36,9 @@ def test_request_fails_when_accessing_non_existent_route(): """ Test request fails with NotFound (404) when route cannot be resolved, regardless of authentication """ - client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'unknownAccessKey', 'anyAccessSecretKey') - url = urljoin(VinylDNSTestContext.vinyldns_url, u'/no-existo') - _, data = client.make_request(url, u'GET', client.headers, status=404) + client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "unknownAccessKey", "anyAccessSecretKey") + url = urljoin(VinylDNSTestContext.vinyldns_url, "/no-existo") + _, data = client.make_request(url, "GET", client.headers, status=404) assert_that(data, is_("The requested path [/no-existo] does not exist.")) @@ -46,8 
+46,8 @@ def test_request_fails_with_unsupported_http_method_for_route(): """ Test request fails with MethodNotAllowed (405) when HTTP Method is not supported for specified route """ - client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'unknownAccessKey', 'anyAccessSecretKey') - url = urljoin(VinylDNSTestContext.vinyldns_url, u'/zones') - _, data = client.make_request(url, u'PUT', client.headers, status=405) + client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "unknownAccessKey", "anyAccessSecretKey") + url = urljoin(VinylDNSTestContext.vinyldns_url, "/zones") + _, data = client.make_request(url, "PUT", client.headers, status=405) assert_that(data, is_("HTTP method not allowed, supported methods: GET, POST")) diff --git a/modules/api/functional_test/live_tests/batch/approve_batch_change_test.py b/modules/api/functional_test/live_tests/batch/approve_batch_change_test.py index 54d442add..b0ce3fb78 100644 --- a/modules/api/functional_test/live_tests/batch/approve_batch_change_test.py +++ b/modules/api/functional_test/live_tests/batch/approve_batch_change_test.py @@ -1,4 +1,5 @@ -from hamcrest import * +import pytest + from utils import * @@ -10,58 +11,60 @@ def test_approve_pending_batch_change_success(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client approver = shared_zone_test_context.support_user_client + partition_id = shared_zone_test_context.partition_id batch_change_input = { "changes": [ - get_change_A_AAAA_json("test-approve-success.not.loaded.", address="4.3.2.1"), - get_change_A_AAAA_json("needs-review.not.loaded.", address="4.3.2.1"), - get_change_A_AAAA_json("zone-name-flagged-for-manual-review.zone.requires.review.") + get_change_A_AAAA_json(f"test-approve-success.not.loaded{partition_id}.", address="4.3.2.1"), + get_change_A_AAAA_json(f"needs-review.not.loaded{partition_id}.", address="4.3.2.1"), + get_change_A_AAAA_json(f"zone-name-flagged-for-manual-review.zone.requires.review{partition_id}.") ], - 
"ownerGroupId": shared_zone_test_context.ok_group['id'] + "ownerGroupId": shared_zone_test_context.ok_group["id"] } to_delete = [] to_disconnect = None try: result = client.create_batch_change(batch_change_input, status=202) - get_batch = client.get_batch_change(result['id']) - assert_that(get_batch['status'], is_('PendingReview')) - assert_that(get_batch['approvalStatus'], is_('PendingReview')) - assert_that(get_batch['changes'][0]['status'], is_('NeedsReview')) - assert_that(get_batch['changes'][0]['validationErrors'][0]['errorType'], is_('ZoneDiscoveryError')) - assert_that(get_batch['changes'][1]['status'], is_('NeedsReview')) - assert_that(get_batch['changes'][1]['validationErrors'][0]['errorType'], is_('RecordRequiresManualReview')) - assert_that(get_batch['changes'][2]['status'], is_('NeedsReview')) - assert_that(get_batch['changes'][2]['validationErrors'][0]['errorType'], is_('RecordRequiresManualReview')) + get_batch = client.get_batch_change(result["id"]) + assert_that(get_batch["status"], is_("PendingReview")) + assert_that(get_batch["approvalStatus"], is_("PendingReview")) + assert_that(get_batch["changes"][0]["status"], is_("NeedsReview")) + assert_that(get_batch["changes"][0]["validationErrors"][0]["errorType"], is_("ZoneDiscoveryError")) + assert_that(get_batch["changes"][1]["status"], is_("NeedsReview")) + assert_that(get_batch["changes"][1]["validationErrors"][0]["errorType"], is_("RecordRequiresManualReview")) + assert_that(get_batch["changes"][2]["status"], is_("NeedsReview")) + assert_that(get_batch["changes"][2]["validationErrors"][0]["errorType"], is_("RecordRequiresManualReview")) # need to create the zone so the change can succeed zone = { - 'name': 'not.loaded.', - 'email': 'test@test.com', - 'adminGroupId': shared_zone_test_context.ok_group['id'], - 'backendId': 'func-test-backend', - 'shared': True + "name": f"not.loaded{partition_id}.", + "email": "test@test.com", + "adminGroupId": shared_zone_test_context.ok_group["id"], + "backendId": 
"func-test-backend", + "shared": True } zone_create = approver.create_zone(zone, status=202) - to_disconnect = zone_create['zone'] - approver.wait_until_zone_active(to_disconnect['id']) + to_disconnect = zone_create["zone"] + approver.wait_until_zone_active(to_disconnect["id"]) - approved = approver.approve_batch_change(result['id'], status=202) + approved = approver.approve_batch_change(result["id"], status=202) completed_batch = client.wait_until_batch_change_completed(approved) - to_delete = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes']] + to_delete = [(change["zoneId"], change["recordSetId"]) for change in completed_batch["changes"]] - assert_that(completed_batch['status'], is_('Complete')) - for change in completed_batch['changes']: - assert_that(change['status'], is_('Complete')) - assert_that(len(change['validationErrors']), is_(0)) - assert_that(completed_batch['approvalStatus'], is_('ManuallyApproved')) - assert_that(completed_batch['reviewerId'], is_('support-user-id')) - assert_that(completed_batch['reviewerUserName'], is_('support-user')) - assert_that(completed_batch, has_key('reviewTimestamp')) - assert_that(get_batch, not(has_key('cancelledTimestamp'))) + assert_that(completed_batch["status"], is_("Complete")) + for change in completed_batch["changes"]: + assert_that(change["status"], is_("Complete")) + assert_that(len(change["validationErrors"]), is_(0)) + assert_that(completed_batch["approvalStatus"], is_("ManuallyApproved")) + assert_that(completed_batch["reviewerId"], is_("support-user-id")) + assert_that(completed_batch["reviewerUserName"], is_("support-user")) + assert_that(completed_batch, has_key("reviewTimestamp")) + assert_that(get_batch, not (has_key("cancelledTimestamp"))) finally: clear_zoneid_rsid_tuple_list(to_delete, client) if to_disconnect: - approver.abandon_zones(to_disconnect['id'], status=202) + approver.abandon_zones(to_disconnect["id"], status=202) + @pytest.mark.manual_batch_review def 
test_approve_pending_batch_change_fails_if_there_are_still_errors(shared_zone_test_context):
@@ -75,39 +78,40 @@ def test_approve_pending_batch_change_fails_if_there_are_still_errors(shared_zon
             get_change_A_AAAA_json("needs-review.nonexistent.", address="4.3.2.1"),
             get_change_A_AAAA_json("zone.does.not.exist.")
         ],
-        "ownerGroupId": shared_zone_test_context.ok_group['id']
+        "ownerGroupId": shared_zone_test_context.ok_group["id"]
     }
     complete_rs = None
 
     try:
         result = client.create_batch_change(batch_change_input, status=202)
-        get_batch = client.get_batch_change(result['id'])
-        assert_that(get_batch['status'], is_('PendingReview'))
-        assert_that(get_batch['approvalStatus'], is_('PendingReview'))
-        assert_that(get_batch['changes'][0]['status'], is_('NeedsReview'))
-        assert_that(get_batch['changes'][0]['validationErrors'][0]['errorType'], is_('RecordRequiresManualReview'))
-        assert_that(get_batch['changes'][1]['status'], is_('NeedsReview'))
-        assert_that(get_batch['changes'][1]['validationErrors'][0]['errorType'], is_('ZoneDiscoveryError'))
+        get_batch = client.get_batch_change(result["id"])
+        assert_that(get_batch["status"], is_("PendingReview"))
+        assert_that(get_batch["approvalStatus"], is_("PendingReview"))
+        assert_that(get_batch["changes"][0]["status"], is_("NeedsReview"))
+        assert_that(get_batch["changes"][0]["validationErrors"][0]["errorType"], is_("RecordRequiresManualReview"))
+        assert_that(get_batch["changes"][1]["status"], is_("NeedsReview"))
+        assert_that(get_batch["changes"][1]["validationErrors"][0]["errorType"], is_("ZoneDiscoveryError"))
 
-        approval_response = approver.approve_batch_change(result['id'], status=400)
-        assert_that((approval_response[0]['errors'][0]), contains_string('Zone Discovery Failed'))
-        assert_that((approval_response[1]['errors'][0]), contains_string('Zone Discovery Failed'))
+        approval_response = approver.approve_batch_change(result["id"], status=400)
+        assert_that((approval_response[0]["errors"][0]), contains_string("Zone Discovery Failed"))
+        assert_that((approval_response[1]["errors"][0]), contains_string("Zone Discovery Failed"))
 
-        updated_batch = client.get_batch_change(result['id'], status=200)
-        assert_that(updated_batch['status'], is_('PendingReview'))
-        assert_that(updated_batch['approvalStatus'], is_('PendingReview'))
-        assert_that(updated_batch, not(has_key('reviewerId')))
-        assert_that(updated_batch, not(has_key('reviewerUserName')))
-        assert_that(updated_batch, not(has_key('reviewTimestamp')))
-        assert_that(updated_batch, not(has_key('cancelledTimestamp')))
-        assert_that(updated_batch['changes'][0]['status'], is_('NeedsReview'))
-        assert_that(updated_batch['changes'][0]['validationErrors'][0]['errorType'], is_('ZoneDiscoveryError'))
-        assert_that(updated_batch['changes'][1]['status'], is_('NeedsReview'))
-        assert_that(updated_batch['changes'][1]['validationErrors'][0]['errorType'], is_('ZoneDiscoveryError'))
+        updated_batch = client.get_batch_change(result["id"], status=200)
+        assert_that(updated_batch["status"], is_("PendingReview"))
+        assert_that(updated_batch["approvalStatus"], is_("PendingReview"))
+        assert_that(updated_batch, not (has_key("reviewerId")))
+        assert_that(updated_batch, not (has_key("reviewerUserName")))
+        assert_that(updated_batch, not (has_key("reviewTimestamp")))
+        assert_that(updated_batch, not (has_key("cancelledTimestamp")))
+        assert_that(updated_batch["changes"][0]["status"], is_("NeedsReview"))
+        assert_that(updated_batch["changes"][0]["validationErrors"][0]["errorType"], is_("ZoneDiscoveryError"))
+        assert_that(updated_batch["changes"][1]["status"], is_("NeedsReview"))
+        assert_that(updated_batch["changes"][1]["validationErrors"][0]["errorType"], is_("ZoneDiscoveryError"))
     finally:
         if complete_rs:
-            delete_result = client.delete_recordset(complete_rs['zoneId'], complete_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(complete_rs["zoneId"], complete_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")
+
 
 @pytest.mark.manual_batch_review
 def test_approve_batch_change_with_invalid_batch_change_id_fails(shared_zone_test_context):
@@ -120,6 +124,7 @@ def test_approve_batch_change_with_invalid_batch_change_id_fails(shared_zone_tes
     error = client.approve_batch_change("some-id", status=404)
     assert_that(error, is_("Batch change with id some-id cannot be found"))
 
+
 @pytest.mark.manual_batch_review
 def test_approve_batch_change_with_comments_exceeding_max_length_fails(shared_zone_test_context):
     """
@@ -128,11 +133,12 @@ def test_approve_batch_change_with_comments_exceeding_max_length_fails(shared_zo
     client = shared_zone_test_context.ok_vinyldns_client
 
     approve_batch_change_input = {
-        "reviewComment": "a"*1025
+        "reviewComment": "a" * 1025
     }
-    errors = client.approve_batch_change("some-id", approve_batch_change_input, status=400)['errors']
+    errors = client.approve_batch_change("some-id", approve_batch_change_input, status=400)["errors"]
     assert_that(errors, contains_inanyorder("Comment length must not exceed 1024 characters."))
 
+
 @pytest.mark.manual_batch_review
 def test_approve_batch_change_fails_with_forbidden_error_for_non_system_admins(shared_zone_test_context):
     """
@@ -141,7 +147,7 @@ def test_approve_batch_change_fails_with_forbidden_error_for_non_system_admins(s
     client = shared_zone_test_context.ok_vinyldns_client
     batch_change_input = {
         "changes": [
-            get_change_A_AAAA_json("no-owner-group-id.ok.", address="4.3.2.1")
+            get_change_A_AAAA_json(f"no-owner-group-id.ok{shared_zone_test_context.partition_id}.", address="4.3.2.1")
         ]
     }
     to_delete = []
@@ -149,8 +155,8 @@ def test_approve_batch_change_fails_with_forbidden_error_for_non_system_admins(s
     try:
         result = client.create_batch_change(batch_change_input, status=202)
         completed_batch = client.wait_until_batch_change_completed(result)
-        to_delete = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes']]
-        error = client.approve_batch_change(completed_batch['id'], status=403)
-        assert_that(error, is_("User does not have access to item " + completed_batch['id']))
+        to_delete = [(change["zoneId"], change["recordSetId"]) for change in completed_batch["changes"]]
+        error = client.approve_batch_change(completed_batch["id"], status=403)
+        assert_that(error, is_("User does not have access to item " + completed_batch["id"]))
     finally:
         clear_zoneid_rsid_tuple_list(to_delete, client)
diff --git a/modules/api/functional_test/live_tests/batch/cancel_batch_change_test.py b/modules/api/functional_test/live_tests/batch/cancel_batch_change_test.py
index f0a806418..a70967ea7 100644
--- a/modules/api/functional_test/live_tests/batch/cancel_batch_change_test.py
+++ b/modules/api/functional_test/live_tests/batch/cancel_batch_change_test.py
@@ -1,4 +1,5 @@
-from hamcrest import *
+import pytest
+
 from utils import *
 
@@ -12,26 +13,27 @@ def test_cancel_batch_change_success(shared_zone_test_context):
         "changes": [
             get_change_A_AAAA_json("zone.discovery.failure.", address="4.3.2.1")
         ],
-        "ownerGroupId": shared_zone_test_context.ok_group['id']
+        "ownerGroupId": shared_zone_test_context.ok_group["id"]
     }
 
     result = client.create_batch_change(batch_change_input, status=202)
-    get_batch = client.get_batch_change(result['id'])
-    assert_that(get_batch['status'], is_('PendingReview'))
-    assert_that(get_batch['approvalStatus'], is_('PendingReview'))
-    assert_that(get_batch['changes'][0]['status'], is_('NeedsReview'))
-    assert_that(get_batch['changes'][0]['validationErrors'][0]['errorType'], is_('ZoneDiscoveryError'))
+    get_batch = client.get_batch_change(result["id"])
+    assert_that(get_batch["status"], is_("PendingReview"))
+    assert_that(get_batch["approvalStatus"], is_("PendingReview"))
+    assert_that(get_batch["changes"][0]["status"], is_("NeedsReview"))
+    assert_that(get_batch["changes"][0]["validationErrors"][0]["errorType"], is_("ZoneDiscoveryError"))
 
-    client.cancel_batch_change(result['id'], status=200)
-    get_batch = client.get_batch_change(result['id'])
+    client.cancel_batch_change(result["id"], status=200)
+    get_batch = client.get_batch_change(result["id"])
+
+    assert_that(get_batch["status"], is_("Cancelled"))
+    assert_that(get_batch["approvalStatus"], is_("Cancelled"))
+    assert_that(get_batch["changes"][0]["status"], is_("Cancelled"))
+    assert_that(get_batch, has_key("cancelledTimestamp"))
+    assert_that(get_batch, not (has_key("reviewTimestamp")))
+    assert_that(get_batch, not (has_key("reviewerId")))
+    assert_that(get_batch, not (has_key("reviewerUserName")))
+    assert_that(get_batch, not (has_key("reviewComment")))
 
-    assert_that(get_batch['status'], is_('Cancelled'))
-    assert_that(get_batch['approvalStatus'], is_('Cancelled'))
-    assert_that(get_batch['changes'][0]['status'], is_('Cancelled'))
-    assert_that(get_batch, has_key('cancelledTimestamp'))
-    assert_that(get_batch, not(has_key('reviewTimestamp')))
-    assert_that(get_batch, not(has_key('reviewerId')))
-    assert_that(get_batch, not(has_key('reviewerUserName')))
-    assert_that(get_batch, not(has_key('reviewComment')))
 
 @pytest.mark.manual_batch_review
 def test_cancel_batch_change_fails_for_non_creator(shared_zone_test_context):
@@ -44,22 +46,22 @@ def test_cancel_batch_change_fails_for_non_creator(shared_zone_test_context):
         "changes": [
             get_change_A_AAAA_json("zone.discovery.failure.", address="4.3.2.1")
         ],
-        "ownerGroupId": shared_zone_test_context.ok_group['id']
+        "ownerGroupId": shared_zone_test_context.ok_group["id"]
     }
 
     result = None
     try:
         result = client.create_batch_change(batch_change_input, status=202)
-        get_batch = client.get_batch_change(result['id'])
-        assert_that(get_batch['status'], is_('PendingReview'))
-        assert_that(get_batch['approvalStatus'], is_('PendingReview'))
-        assert_that(get_batch['changes'][0]['status'], is_('NeedsReview'))
-        assert_that(get_batch['changes'][0]['validationErrors'][0]['errorType'], is_('ZoneDiscoveryError'))
+        get_batch = client.get_batch_change(result["id"])
+        assert_that(get_batch["status"], is_("PendingReview"))
+        assert_that(get_batch["approvalStatus"], is_("PendingReview"))
+        assert_that(get_batch["changes"][0]["status"], is_("NeedsReview"))
+        assert_that(get_batch["changes"][0]["validationErrors"][0]["errorType"], is_("ZoneDiscoveryError"))
 
-        error = rejecter.cancel_batch_change(get_batch['id'], status=403)
-        assert_that(error, is_("User does not have access to item " + get_batch['id']))
+        error = rejecter.cancel_batch_change(get_batch["id"], status=403)
+        assert_that(error, is_("User does not have access to item " + get_batch["id"]))
     finally:
         if result:
-            rejecter.reject_batch_change(result['id'], status=200)
+            rejecter.reject_batch_change(result["id"], status=200)
 
 
 @pytest.mark.manual_batch_review
@@ -70,7 +72,7 @@ def test_cancel_batch_change_fails_when_not_pending_approval(shared_zone_test_co
     client = shared_zone_test_context.ok_vinyldns_client
     batch_change_input = {
         "changes": [
-            get_change_A_AAAA_json("reject-completed-change-test.ok.", address="4.3.2.1")
+            get_change_A_AAAA_json(f"reject-completed-change-test.ok{shared_zone_test_context.partition_id}.", address="4.3.2.1")
        ]
     }
     to_delete = []
@@ -78,9 +80,9 @@ def test_cancel_batch_change_fails_when_not_pending_approval(shared_zone_test_co
     try:
         result = client.create_batch_change(batch_change_input, status=202)
         completed_batch = client.wait_until_batch_change_completed(result)
-        to_delete = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes']]
-        error = client.cancel_batch_change(completed_batch['id'], status=400)
-        assert_that(error, is_("Batch change " + completed_batch['id'] +
+        to_delete = [(change["zoneId"], change["recordSetId"]) for change in completed_batch["changes"]]
+        error = client.cancel_batch_change(completed_batch["id"], status=400)
+        assert_that(error, is_("Batch change " + completed_batch["id"] +
                                " is not pending review, so it cannot be rejected."))
     finally:
         clear_zoneid_rsid_tuple_list(to_delete, client)
diff --git a/modules/api/functional_test/live_tests/batch/create_batch_change_test.py b/modules/api/functional_test/live_tests/batch/create_batch_change_test.py
index dee2013f0..8a879c459 100644
--- a/modules/api/functional_test/live_tests/batch/create_batch_change_test.py
+++ b/modules/api/functional_test/live_tests/batch/create_batch_change_test.py
@@ -1,78 +1,80 @@
-from hamcrest import *
-from utils import *
 import datetime
-import json
+from typing import Optional, Union
+
+import pytest
+
+from utils import *
 
 
 def does_not_contain(x):
-    is_not(contains(x))
+    is_not(contains_exactly(x))
 
 
 def validate_change_error_response_basics(input_json, change_type, input_name, record_type, ttl, record_data):
-    assert_that(input_json['changeType'], is_(change_type))
-    assert_that(input_json['inputName'], is_(input_name))
-    assert_that(input_json['type'], is_(record_type))
-    assert_that(record_type, is_in(['A', 'AAAA', 'CNAME', 'PTR', 'TXT', 'MX']))
+    assert_that(input_json["changeType"], is_(change_type))
+    assert_that(input_json["inputName"], is_(input_name))
+    assert_that(input_json["type"], is_(record_type))
+    assert_that(record_type, is_in(["A", "AAAA", "CNAME", "PTR", "TXT", "MX"]))
 
     if change_type == "Add":
-        assert_that(input_json['ttl'], is_(ttl))
+        assert_that(input_json["ttl"], is_(ttl))
         if record_type in ["A", "AAAA"]:
-            assert_that(input_json['record']['address'], is_(record_data))
+            assert_that(input_json["record"]["address"], is_(record_data))
         elif record_type == "CNAME":
-            assert_that(input_json['record']['cname'], is_(record_data))
+            assert_that(input_json["record"]["cname"], is_(record_data))
         elif record_type == "PTR":
-            assert_that(input_json['record']['ptrdname'], is_(record_data))
+            assert_that(input_json["record"]["ptrdname"], is_(record_data))
        elif record_type == "TXT":
-            assert_that(input_json['record']['text'], is_(record_data))
+            assert_that(input_json["record"]["text"], is_(record_data))
         elif record_type == "MX":
-            assert_that(input_json['record']['preference'], is_(record_data['preference']))
-            assert_that(input_json['record']['exchange'], is_(record_data['exchange']))
+            assert_that(input_json["record"]["preference"], is_(record_data["preference"]))
+            assert_that(input_json["record"]["exchange"], is_(record_data["exchange"]))
     return
 
 
 def assert_failed_change_in_error_response(input_json, change_type="Add", input_name="fqdn.", record_type="A", ttl=200,
-                                           record_data="1.1.1.1", error_messages=[]):
+                                           record_data: Optional[Union[str, dict]] = "1.1.1.1", error_messages=[]):
     validate_change_error_response_basics(input_json, change_type, input_name, record_type, ttl, record_data)
     assert_error(input_json, error_messages)
     return
 
 
 def assert_successful_change_in_error_response(input_json, change_type="Add", input_name="fqdn.", record_type="A",
-                                               ttl=200, record_data="1.1.1.1"):
+                                               ttl=200, record_data: Optional[Union[str, dict]] = "1.1.1.1"):
     validate_change_error_response_basics(input_json, change_type, input_name, record_type, ttl, record_data)
-    assert_that('errors' in input_json, is_(False))
+    assert_that("errors" in input_json, is_(False))
     return
 
 
-def assert_change_success_response_values(changes_json, zone, index, record_name, input_name, record_data, ttl=200,
-                                          record_type="A", change_type="Add"):
-    assert_that(changes_json[index]['zoneId'], is_(zone['id']))
-    assert_that(changes_json[index]['zoneName'], is_(zone['name']))
-    assert_that(changes_json[index]['recordName'], is_(record_name))
-    assert_that(changes_json[index]['inputName'], is_(input_name))
+def assert_change_success(changes_json, zone, index, record_name, input_name, record_data, ttl=200,
+                          record_type="A", change_type="Add"):
+    assert_that(changes_json[index]["zoneId"], is_(zone["id"]))
+    assert_that(changes_json[index]["zoneName"], is_(zone["name"]))
+    assert_that(changes_json[index]["recordName"], is_(record_name))
+    assert_that(changes_json[index]["inputName"], is_(input_name))
     if change_type == "Add":
-        assert_that(changes_json[index]['ttl'], is_(ttl))
-    assert_that(changes_json[index]['type'], is_(record_type))
-    assert_that(changes_json[index]['id'], is_not(none()))
-    assert_that(changes_json[index]['changeType'], is_(change_type))
-    assert_that(record_type, is_in(['A', 'AAAA', 'CNAME', 'PTR', 'TXT', 'MX']))
+        assert_that(changes_json[index]["ttl"], is_(ttl))
+    assert_that(changes_json[index]["type"], is_(record_type))
+    assert_that(changes_json[index]["id"], is_not(none()))
+    assert_that(changes_json[index]["changeType"], is_(change_type))
+    assert_that(record_type, is_in(["A", "AAAA", "CNAME", "PTR", "TXT", "MX"]))
 
     if record_type in ["A", "AAAA"] and change_type == "Add":
-        assert_that(changes_json[index]['record']['address'], is_(record_data))
+        assert_that(changes_json[index]["record"]["address"], is_(record_data))
     elif record_type == "CNAME" and change_type == "Add":
-        assert_that(changes_json[index]['record']['cname'], is_(record_data))
+        assert_that(changes_json[index]["record"]["cname"], is_(record_data))
     elif record_type == "PTR" and change_type == "Add":
-        assert_that(changes_json[index]['record']['ptrdname'], is_(record_data))
+        assert_that(changes_json[index]["record"]["ptrdname"], is_(record_data))
     elif record_type == "TXT" and change_type == "Add":
-        assert_that(changes_json[index]['record']['text'], is_(record_data))
+        assert_that(changes_json[index]["record"]["text"], is_(record_data))
     elif record_type == "MX" and change_type == "Add":
-        assert_that(changes_json[index]['record']['preference'], is_(record_data['preference']))
-        assert_that(changes_json[index]['record']['exchange'], is_(record_data['exchange']))
+        assert_that(changes_json[index]["record"]["preference"], is_(record_data["preference"]))
+        assert_that(changes_json[index]["record"]["exchange"], is_(record_data["exchange"]))
     return
 
 
 def assert_error(input_json, error_messages):
     for error in error_messages:
-        assert_that(input_json['errors'], has_item(error))
-        assert_that(len(input_json['errors']), is_(len(error_messages)))
+        assert_that(input_json["errors"], has_item(error))
+        assert_that(len(input_json["errors"]), is_(len(error_messages)))
 
 
 @pytest.mark.serial
@@ -87,24 +89,31 @@ def test_create_batch_change_with_adds_success(shared_zone_test_context):
     classless_base_zone = shared_zone_test_context.classless_base_zone
     ip6_reverse_zone = shared_zone_test_context.ip6_16_nibble_zone
 
+    partition_id = shared_zone_test_context.partition_id
+    ok_zone_name = shared_zone_test_context.ok_zone["name"]
+    parent_zone_name = shared_zone_test_context.parent_zone["name"]
+    ip4_zone_name = shared_zone_test_context.classless_base_zone["name"]
+    ip4_prefix = shared_zone_test_context.ip4_classless_prefix
+    ip6_prefix = shared_zone_test_context.ip6_prefix
+
     batch_change_input = {
         "comments": "this is optional",
         "changes": [
-            get_change_A_AAAA_json("parent.com.", address="4.5.6.7"),
-            get_change_A_AAAA_json("ok.", record_type="AAAA", address="fd69:27cc:fe91::60"),
-            get_change_A_AAAA_json("relative.parent.com."),
-            get_change_CNAME_json("CNAME.PARENT.COM", cname="nice.parent.com"),
-            get_change_CNAME_json("_2cname.parent.com", cname="nice.parent.com"),
-            get_change_CNAME_json("4.2.0.192.in-addr.arpa.", cname="4.4/30.2.0.192.in-addr.arpa."),
-            get_change_PTR_json("192.0.2.193", ptrdname="www.vinyldns"),
-            get_change_PTR_json("192.0.2.44"),
-            get_change_PTR_json("fd69:27cc:fe91:1000::60", ptrdname="www.vinyldns"),
-            get_change_TXT_json("txt.ok."),
-            get_change_TXT_json("ok."),
-            get_change_TXT_json("txt-unique-characters.ok.", text='a\\\\`=` =\\"Cat\\"\nattr=val'),
-            get_change_TXT_json("txt.2.0.192.in-addr.arpa."),
-            get_change_MX_json("mx.ok.", preference=0),
-            get_change_MX_json("ok.", preference=1000, exchange="bar.foo.")
+            get_change_A_AAAA_json(f"{parent_zone_name}", address="4.5.6.7"),
+            get_change_A_AAAA_json(f"{ok_zone_name}", record_type="AAAA", address=f"{ip6_prefix}::60"),
+            get_change_A_AAAA_json(f"relative.{parent_zone_name}"),
+            get_change_CNAME_json(f"CNAME.PARENT.COM{partition_id}", cname="nice.parent.com"),
+            get_change_CNAME_json(f"_2cname.{parent_zone_name}", cname="nice.parent.com"),
+            get_change_CNAME_json(f"4.{ip4_zone_name}", cname=f"4.4/30.{ip4_zone_name}"),
+            get_change_PTR_json(f"{ip4_prefix}.193", ptrdname="www.vinyldns"),
+            get_change_PTR_json(f"{ip4_prefix}.44"),
+            get_change_PTR_json(f"{ip6_prefix}:1000::60", ptrdname="www.vinyldns"),
+            get_change_TXT_json(f"txt.{ok_zone_name}"),
+            get_change_TXT_json(f"{ok_zone_name}"),
+            get_change_TXT_json(f"txt-unique-characters.{ok_zone_name}", text='a\\\\`=` =\\"Cat\\"\nattr=val'),
+            get_change_TXT_json(f"txt.{ip4_zone_name}"),
+            get_change_MX_json(f"mx.{ok_zone_name}", preference=0),
+            get_change_MX_json(f"{ok_zone_name}", preference=1000, exchange="bar.foo.")
         ]
     }
@@ -112,181 +121,169 @@
     try:
         result = client.create_batch_change(batch_change_input, status=202)
         completed_batch = client.wait_until_batch_change_completed(result)
-        record_set_list = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes']]
+        record_set_list = [(change["zoneId"], change["recordSetId"]) for change in completed_batch["changes"]]
         to_delete = set(record_set_list)  # set here because multiple items in the batch combine to one RS
 
         # validate initial response
-        assert_that(result['comments'], is_("this is optional"))
-        assert_that(result['userName'], is_("ok"))
-        assert_that(result['userId'], is_("ok"))
-        assert_that(result['id'], is_not(none()))
-        assert_that(completed_batch['status'], is_("Complete"))
+        assert_that(result["comments"], is_("this is optional"))
+        assert_that(result["userName"], is_("ok"))
+        assert_that(result["userId"], is_("ok"))
+        assert_that(result["id"], is_not(none()))
+        assert_that(completed_batch["status"], is_("Complete"))
 
-        assert_change_success_response_values(result['changes'], zone=parent_zone, index=0, record_name="parent.com.",
-                                              input_name="parent.com.", record_data="4.5.6.7")
-        assert_change_success_response_values(result['changes'], zone=ok_zone, index=1, record_name="ok.",
-                                              input_name="ok.", record_data="fd69:27cc:fe91::60", record_type="AAAA")
-        assert_change_success_response_values(result['changes'], zone=parent_zone, index=2, record_name="relative",
-                                              input_name="relative.parent.com.", record_data="1.1.1.1")
-        assert_change_success_response_values(result['changes'], zone=parent_zone, index=3, record_name="CNAME",
-                                              input_name="CNAME.PARENT.COM.", record_data="nice.parent.com.",
-                                              record_type="CNAME")
-        assert_change_success_response_values(result['changes'], zone=parent_zone, index=4, record_name="_2cname",
-                                              input_name="_2cname.parent.com.", record_data="nice.parent.com.",
-                                              record_type="CNAME")
-        assert_change_success_response_values(result['changes'], zone=classless_base_zone, index=5, record_name="4",
-                                              input_name="4.2.0.192.in-addr.arpa.",
-                                              record_data="4.4/30.2.0.192.in-addr.arpa.", record_type="CNAME")
-        assert_change_success_response_values(result['changes'], zone=classless_delegation_zone, index=6,
-                                              record_name="193",
-                                              input_name="192.0.2.193", record_data="www.vinyldns.", record_type="PTR")
-        assert_change_success_response_values(result['changes'], zone=classless_base_zone, index=7, record_name="44",
-                                              input_name="192.0.2.44", record_data="test.com.", record_type="PTR")
-        assert_change_success_response_values(result['changes'], zone=ip6_reverse_zone, index=8,
-                                              record_name="0.6.0.0.0.0.0.0.0.0.0.0.0.0.0.0",
-                                              input_name="fd69:27cc:fe91:1000::60", record_data="www.vinyldns.",
-                                              record_type="PTR")
-        assert_change_success_response_values(result['changes'], zone=ok_zone, index=9, record_name="txt",
-                                              input_name="txt.ok.", record_data="test", record_type="TXT")
-        assert_change_success_response_values(result['changes'], zone=ok_zone, index=10, record_name="ok.",
-                                              input_name="ok.", record_data="test", record_type="TXT")
-        assert_change_success_response_values(result['changes'], zone=ok_zone, index=11,
-                                              record_name="txt-unique-characters",
-                                              input_name="txt-unique-characters.ok.",
-                                              record_data='a\\\\`=` =\\"Cat\\"\nattr=val', record_type="TXT")
-        assert_change_success_response_values(result['changes'], zone=classless_base_zone, index=12, record_name="txt",
-                                              input_name="txt.2.0.192.in-addr.arpa.", record_data="test",
-                                              record_type="TXT")
-        assert_change_success_response_values(result['changes'], zone=ok_zone, index=13, record_name="mx",
-                                              input_name="mx.ok.",
-                                              record_data={'preference': 0, 'exchange': 'foo.bar.'}, record_type="MX")
-        assert_change_success_response_values(result['changes'], zone=ok_zone, index=14, record_name="ok.",
-                                              input_name="ok.",
-                                              record_data={'preference': 1000, 'exchange': 'bar.foo.'},
-                                              record_type="MX")
+        assert_change_success(result["changes"], zone=parent_zone, index=0,
+                              record_name=f"{parent_zone_name}", input_name=f"{parent_zone_name}", record_data="4.5.6.7")
+        assert_change_success(result["changes"], zone=ok_zone, index=1,
+                              record_name=f"{ok_zone_name}", input_name=f"{ok_zone_name}", record_data=f"{ip6_prefix}::60", record_type="AAAA")
+        assert_change_success(result["changes"], zone=parent_zone, index=2,
+                              record_name="relative", input_name=f"relative.{parent_zone_name}", record_data="1.1.1.1")
+        assert_change_success(result["changes"], zone=parent_zone, index=3,
+                              record_name="CNAME", input_name=f"CNAME.PARENT.COM{partition_id}.", record_data="nice.parent.com.", record_type="CNAME")
+        assert_change_success(result["changes"], zone=parent_zone, index=4,
+                              record_name="_2cname", input_name=f"_2cname.{parent_zone_name}", record_data="nice.parent.com.", record_type="CNAME")
+        assert_change_success(result["changes"], zone=classless_base_zone, index=5,
+                              record_name="4", input_name=f"4.{ip4_zone_name}", record_data=f"4.4/30.{ip4_zone_name}", record_type="CNAME")
+        assert_change_success(result["changes"], zone=classless_delegation_zone, index=6,
+                              record_name="193", input_name=f"{ip4_prefix}.193", record_data="www.vinyldns.", record_type="PTR")
+        assert_change_success(result["changes"], zone=classless_base_zone, index=7,
+                              record_name="44", input_name=f"{ip4_prefix}.44", record_data="test.com.", record_type="PTR")
+        assert_change_success(result["changes"], zone=ip6_reverse_zone, index=8,
+                              record_name="0.6.0.0.0.0.0.0.0.0.0.0.0.0.0.0", input_name=f"{ip6_prefix}:1000::60", record_data="www.vinyldns.", record_type="PTR")
+        assert_change_success(result["changes"], zone=ok_zone, index=9,
+                              record_name="txt", input_name=f"txt.{ok_zone_name}", record_data="test", record_type="TXT")
+        assert_change_success(result["changes"], zone=ok_zone, index=10,
+                              record_name=f"{ok_zone_name}", input_name=f"{ok_zone_name}", record_data="test", record_type="TXT")
+        assert_change_success(result["changes"], zone=ok_zone, index=11,
+                              record_name="txt-unique-characters", input_name=f"txt-unique-characters.{ok_zone_name}", record_data='a\\\\`=` =\\"Cat\\"\nattr=val', record_type="TXT")
+        assert_change_success(result["changes"], zone=classless_base_zone, index=12,
+                              record_name="txt", input_name=f"txt.{ip4_zone_name}", record_data="test", record_type="TXT")
+        assert_change_success(result["changes"], zone=ok_zone, index=13,
+                              record_name="mx", input_name=f"mx.{ok_zone_name}", record_data={"preference": 0, "exchange": "foo.bar."}, record_type="MX")
+        assert_change_success(result["changes"], zone=ok_zone, index=14,
+                              record_name=f"{ok_zone_name}", input_name=f"{ok_zone_name}", record_data={"preference": 1000, "exchange": "bar.foo."}, record_type="MX")
 
-        completed_status = [change['status'] == 'Complete' for change in completed_batch['changes']]
+        completed_status = [change["status"] == "Complete" for change in completed_batch["changes"]]
         assert_that(all(completed_status), is_(True))
 
         # get all the recordsets created by this batch, validate
-        rs1 = client.get_recordset(record_set_list[0][0], record_set_list[0][1])['recordSet']
-        expected1 = {'name': 'parent.com.',
-                     'zoneId': parent_zone['id'],
-                     'type': 'A',
-                     'ttl': 200,
-                     'records': [{'address': '4.5.6.7'}]}
+        rs1 = client.get_recordset(record_set_list[0][0], record_set_list[0][1])["recordSet"]
+        expected1 = {"name": parent_zone_name,
+                     "zoneId": parent_zone["id"],
+                     "type": "A",
+                     "ttl": 200,
+                     "records": [{"address": "4.5.6.7"}]}
         verify_recordset(rs1, expected1)
 
-        rs3 = client.get_recordset(record_set_list[1][0], record_set_list[1][1])['recordSet']
-        expected3 = {'name': 'ok.',
-                     'zoneId': ok_zone['id'],
-                     'type': 'AAAA',
-                     'ttl': 200,
-                     'records': [{'address': 'fd69:27cc:fe91::60'}]}
+        rs3 = client.get_recordset(record_set_list[1][0], record_set_list[1][1])["recordSet"]
+        expected3 = {"name": ok_zone_name,
+                     "zoneId": ok_zone["id"],
+                     "type": "AAAA",
+                     "ttl": 200,
+                     "records": [{"address": f"{ip6_prefix}::60"}]}
         verify_recordset(rs3, expected3)
 
-        rs4 = client.get_recordset(record_set_list[2][0], record_set_list[2][1])['recordSet']
-        expected4 = {'name': 'relative',
-                     'zoneId': parent_zone['id'],
-                     'type': 'A',
-                     'ttl': 200,
-                     'records': [{'address': '1.1.1.1'}]}
+        rs4 = client.get_recordset(record_set_list[2][0], record_set_list[2][1])["recordSet"]
+        expected4 = {"name": "relative",
+                     "zoneId": parent_zone["id"],
+                     "type": "A",
+                     "ttl": 200,
+                     "records": [{"address": "1.1.1.1"}]}
         verify_recordset(rs4, expected4)
 
-        rs5 = client.get_recordset(record_set_list[3][0], record_set_list[3][1])['recordSet']
-        expected5 = {'name': 'CNAME',
-                     'zoneId': parent_zone['id'],
-                     'type': 'CNAME',
-                     'ttl': 200,
-                     'records': [{'cname': 'nice.parent.com.'}]}
+        rs5 = client.get_recordset(record_set_list[3][0], record_set_list[3][1])["recordSet"]
+        expected5 = {"name": "CNAME",
+                     "zoneId": parent_zone["id"],
+                     "type": "CNAME",
+                     "ttl": 200,
+                     "records": [{"cname": "nice.parent.com."}]}
         verify_recordset(rs5, expected5)
 
-        rs6 = client.get_recordset(record_set_list[4][0], record_set_list[4][1])['recordSet']
-        expected6 = {'name': '_2cname',
-                     'zoneId': parent_zone['id'],
-                     'type': 'CNAME',
-                     'ttl': 200,
-                     'records': [{'cname': 'nice.parent.com.'}]}
+        rs6 = client.get_recordset(record_set_list[4][0], record_set_list[4][1])["recordSet"]
+        expected6 = {"name": "_2cname",
+                     "zoneId": parent_zone["id"],
+                     "type": "CNAME",
+                     "ttl": 200,
+                     "records": [{"cname": "nice.parent.com."}]}
         verify_recordset(rs6, expected6)
 
-        rs7 = client.get_recordset(record_set_list[5][0], record_set_list[5][1])['recordSet']
-        expected7 = {'name': '4',
-                     'zoneId': classless_base_zone['id'],
-                     'type': 'CNAME',
-                     'ttl': 200,
-                     'records': [{'cname': '4.4/30.2.0.192.in-addr.arpa.'}]}
+        rs7 = client.get_recordset(record_set_list[5][0], record_set_list[5][1])["recordSet"]
+        expected7 = {"name": "4",
+                     "zoneId": classless_base_zone["id"],
+                     "type": "CNAME",
+                     "ttl": 200,
+                     "records": [{"cname": f"4.4/30.{ip4_zone_name}"}]}
         verify_recordset(rs7, expected7)
 
-        rs8 = client.get_recordset(record_set_list[6][0], record_set_list[6][1])['recordSet']
-        expected8 = {'name': '193',
-                     'zoneId': classless_delegation_zone['id'],
-                     'type': 'PTR',
-                     'ttl': 200,
-                     'records': [{'ptrdname': 'www.vinyldns.'}]}
+        rs8 = client.get_recordset(record_set_list[6][0], record_set_list[6][1])["recordSet"]
+        expected8 = {"name": "193",
+                     "zoneId": classless_delegation_zone["id"],
+                     "type": "PTR",
+                     "ttl": 200,
+                     "records": [{"ptrdname": "www.vinyldns."}]}
         verify_recordset(rs8, expected8)
 
-        rs9 = client.get_recordset(record_set_list[7][0], record_set_list[7][1])['recordSet']
-        expected9 = {'name': '44',
-                     'zoneId': classless_base_zone['id'],
-                     'type': 'PTR',
-                     'ttl': 200,
-                     'records': [{'ptrdname': 'test.com.'}]}
+        rs9 = client.get_recordset(record_set_list[7][0], record_set_list[7][1])["recordSet"]
+        expected9 = {"name": "44",
+                     "zoneId": classless_base_zone["id"],
+                     "type": "PTR",
+                     "ttl": 200,
+                     "records": [{"ptrdname": "test.com."}]}
         verify_recordset(rs9, expected9)
 
-        rs10 = client.get_recordset(record_set_list[8][0], record_set_list[8][1])['recordSet']
-        expected10 = {'name': '0.6.0.0.0.0.0.0.0.0.0.0.0.0.0.0',
-                      'zoneId': ip6_reverse_zone['id'],
-                      'type': 'PTR',
-                      'ttl': 200,
-                      'records': [{'ptrdname': 'www.vinyldns.'}]}
+        rs10 = client.get_recordset(record_set_list[8][0], record_set_list[8][1])["recordSet"]
+        expected10 = {"name": "0.6.0.0.0.0.0.0.0.0.0.0.0.0.0.0",
+                      "zoneId": ip6_reverse_zone["id"],
+                      "type": "PTR",
+                      "ttl": 200,
+                      "records": [{"ptrdname": "www.vinyldns."}]}
         verify_recordset(rs10, expected10)
 
-        rs11 = client.get_recordset(record_set_list[9][0], record_set_list[9][1])['recordSet']
-        expected11 = {'name': 'txt',
-                      'zoneId': ok_zone['id'],
-                      'type': 'TXT',
-                      'ttl': 200,
-                      'records': [{'text': 'test'}]}
+        rs11 = client.get_recordset(record_set_list[9][0], record_set_list[9][1])["recordSet"]
+        expected11 = {"name": "txt",
+                      "zoneId": ok_zone["id"],
+                      "type": "TXT",
+                      "ttl": 200,
+                      "records": [{"text": "test"}]}
         verify_recordset(rs11, expected11)
 
-        rs12 = client.get_recordset(record_set_list[10][0], record_set_list[10][1])['recordSet']
-        expected12 = {'name': 'ok.',
-                      'zoneId': ok_zone['id'],
-                      'type': 'TXT',
-                      'ttl': 200,
-                      'records': [{'text': 'test'}]}
+        rs12 = client.get_recordset(record_set_list[10][0], record_set_list[10][1])["recordSet"]
+        expected12 = {"name": f"{ok_zone_name}",
+                      "zoneId": ok_zone["id"],
+                      "type": "TXT",
+                      "ttl": 200,
+                      "records": [{"text": "test"}]}
         verify_recordset(rs12, expected12)
 
-        rs13 = client.get_recordset(record_set_list[11][0], record_set_list[11][1])['recordSet']
-        expected13 = {'name': 'txt-unique-characters',
-                      'zoneId': ok_zone['id'],
-                      'type': 'TXT',
-                      'ttl': 200,
-                      'records': [{'text': 'a\\\\`=` =\\"Cat\\"\nattr=val'}]}
+        rs13 = client.get_recordset(record_set_list[11][0], record_set_list[11][1])["recordSet"]
+        expected13 = {"name": "txt-unique-characters",
+                      "zoneId": ok_zone["id"],
+                      "type": "TXT",
+                      "ttl": 200,
+                      "records": [{"text": 'a\\\\`=` =\\"Cat\\"\nattr=val'}]}
         verify_recordset(rs13, expected13)
 
-        rs14 = client.get_recordset(record_set_list[12][0], record_set_list[12][1])['recordSet']
-        expected14 = {'name': 'txt',
-                      'zoneId': classless_base_zone['id'],
-                      'type': 'TXT',
-                      'ttl': 200,
-                      'records': [{'text': 'test'}]}
+        rs14 = client.get_recordset(record_set_list[12][0], record_set_list[12][1])["recordSet"]
+        expected14 = {"name": "txt",
+                      "zoneId": classless_base_zone["id"],
+                      "type": "TXT",
+                      "ttl": 200,
+                      "records": [{"text": "test"}]}
         verify_recordset(rs14, expected14)
 
-        rs15 = client.get_recordset(record_set_list[13][0], record_set_list[13][1])['recordSet']
-        expected15 = {'name': 'mx',
-                      'zoneId': ok_zone['id'],
-                      'type': 'MX',
-                      'ttl': 200,
-                      'records': [{'preference': 0, 'exchange': 'foo.bar.'}]}
+        rs15 = client.get_recordset(record_set_list[13][0], record_set_list[13][1])["recordSet"]
+        expected15 = {"name": "mx",
+                      "zoneId": ok_zone["id"],
+                      "type": "MX",
+                      "ttl": 200,
+                      "records": [{"preference": 0, "exchange": "foo.bar."}]}
         verify_recordset(rs15, expected15)
 
-        rs16 = client.get_recordset(record_set_list[14][0], record_set_list[14][1])['recordSet']
-        expected16 = {'name': 'ok.',
-                      'zoneId': ok_zone['id'],
-                      'type': 'MX',
-                      'ttl': 200,
-                      'records': [{'preference': 1000, 'exchange': 'bar.foo.'}]}
+        rs16 = client.get_recordset(record_set_list[14][0], record_set_list[14][1])["recordSet"]
+        expected16 = {"name": f"{ok_zone_name}",
+                      "zoneId": ok_zone["id"],
+                      "type": "MX",
+                      "ttl": 200,
+                      "records": [{"preference": 1000, "exchange": "bar.foo."}]}
         verify_recordset(rs16, expected16)
 
     finally:
@@ -299,25 +296,25 @@ def test_create_batch_change_with_scheduled_time_and_owner_group_succeeds(shared
     Test successfully creating a batch change with scheduled time and owner group set
     """
     client = shared_zone_test_context.ok_vinyldns_client
-    dt = (datetime.datetime.now() + datetime.timedelta(days=1)).strftime('%Y-%m-%dT%H:%M:%SZ')
-
+    dt = (datetime.datetime.now() + datetime.timedelta(days=1)).strftime("%Y-%m-%dT%H:%M:%SZ")
+    ok_zone_name = shared_zone_test_context.ok_zone["name"]
     batch_change_input = {
         "comments": "this is optional",
         "changes": [
-            get_change_A_AAAA_json(generate_record_name("ok."), address="4.5.6.7"),
+            get_change_A_AAAA_json(generate_record_name(ok_zone_name), address="4.5.6.7"),
         ],
         "scheduledTime": dt,
-        "ownerGroupId": shared_zone_test_context.ok_group['id']
+        "ownerGroupId": shared_zone_test_context.ok_group["id"]
     }
 
     result = None
     try:
         result = client.create_batch_change(batch_change_input, status=202)
-        assert_that(result['status'], 'Scheduled')
-        assert_that(result['scheduledTime'], dt)
+        assert_that(result["status"], "Scheduled")
+        assert_that(result["scheduledTime"], dt)
     finally:
         if result:
             rejecter = shared_zone_test_context.support_user_client
-            rejecter.reject_batch_change(result['id'], status=200)
+            rejecter.reject_batch_change(result["id"], status=200)
 
 
 @pytest.mark.manual_batch_review
@@ -326,7 +323,7 @@ def test_create_scheduled_batch_change_with_zone_discovery_error_without_owner_g
     Test creating a scheduled batch without owner group ID fails if there is a zone discovery error
     """
     client = shared_zone_test_context.ok_vinyldns_client
-    dt = (datetime.datetime.now() + datetime.timedelta(days=1)).strftime('%Y-%m-%dT%H:%M:%SZ')
+    dt = (datetime.datetime.now() + datetime.timedelta(days=1)).strftime("%Y-%m-%dT%H:%M:%SZ")
 
     batch_change_input = {
         "comments": "this is optional",
@@ -347,14 +344,14 @@ def test_create_scheduled_batch_change_with_scheduled_time_in_the_past_fails(sha
     Test creating a scheduled batch with a scheduled time in the past
     """
     client = shared_zone_test_context.ok_vinyldns_client
-    yesterday = (datetime.datetime.now() - datetime.timedelta(days=1)).strftime('%Y-%m-%dT%H:%M:%SZ')
-
+    yesterday = (datetime.datetime.now() - datetime.timedelta(days=1)).strftime("%Y-%m-%dT%H:%M:%SZ")
+    ok_zone_name = shared_zone_test_context.ok_zone["name"]
     batch_change_input = {
         "comments": "this is optional",
         "changes": [
-            get_change_A_AAAA_json(generate_record_name("ok."), address="4.5.6.7"),
+            get_change_A_AAAA_json(generate_record_name(ok_zone_name), address="4.5.6.7"),
         ],
-        "ownerGroupId": shared_zone_test_context.ok_group['id'],
+        "ownerGroupId": shared_zone_test_context.ok_group["id"],
         "scheduledTime": yesterday
     }
 
@@ -370,7 +367,7 @@ def test_create_batch_change_with_soft_failures_scheduled_time_and_allow_manual_
     Test creating a batch change with soft errors, scheduled time, and allowManualReview disabled results in hard failure
     """
     client = shared_zone_test_context.ok_vinyldns_client
-    dt = (datetime.datetime.now() + datetime.timedelta(days=1)).strftime('%Y-%m-%dT%H:%M:%SZ')
+    dt = (datetime.datetime.now() + datetime.timedelta(days=1)).strftime("%Y-%m-%dT%H:%M:%SZ")
 
     batch_change_input = {
         "comments": "this is optional",
@@ -378,7 +375,7 @@ def test_create_batch_change_with_soft_failures_scheduled_time_and_allow_manual_
         get_change_A_AAAA_json("non.existent", address="4.5.6.7"),
         ],
         "scheduledTime": dt,
-        "ownerGroupId": shared_zone_test_context.ok_group['id']
+        "ownerGroupId": shared_zone_test_context.ok_group["id"]
     }
 
     response = client.create_batch_change(batch_change_input, False, status=400)
@@ -393,10 +390,11 @@ def test_create_batch_change_without_scheduled_time_succeeds(shared_zone_test_co
     Test successfully creating a batch change without scheduled time set
     """
     client = shared_zone_test_context.ok_vinyldns_client
+    ok_zone_name = shared_zone_test_context.ok_zone["name"]
     batch_change_input = {
         "comments": "this is optional",
         "changes": [
-            get_change_A_AAAA_json(generate_record_name("ok."), address="4.5.6.7"),
+            get_change_A_AAAA_json(generate_record_name(ok_zone_name), address="4.5.6.7"),
         ]
     }
 
@@ -404,9 +402,9 @@ def test_create_batch_change_without_scheduled_time_succeeds(shared_zone_test_co
     try:
         result = client.create_batch_change(batch_change_input, status=202)
         completed_batch = client.wait_until_batch_change_completed(result)
-        record_set_list = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes']]
+        record_set_list = [(change["zoneId"], change["recordSetId"]) for change
in completed_batch["changes"]] to_delete = set(record_set_list) - assert_that(completed_batch, is_not(has_key('scheduledTime'))) + assert_that(completed_batch, is_not(has_key("scheduledTime"))) finally: clear_zoneid_rsid_tuple_list(to_delete, client) @@ -440,30 +438,33 @@ def test_create_batch_change_with_updates_deletes_success(shared_zone_test_conte ok_zone = shared_zone_test_context.ok_zone classless_zone_delegation_zone = shared_zone_test_context.classless_zone_delegation_zone - ok_zone_acl = generate_acl_rule('Delete', groupId=shared_zone_test_context.dummy_group['id'], recordMask='.*', recordTypes=['CNAME']) - classless_zone_delegation_zone_acl = generate_acl_rule('Write', groupId=shared_zone_test_context.dummy_group['id'], recordTypes=['PTR']) + ok_zone_acl = generate_acl_rule("Delete", groupId=shared_zone_test_context.dummy_group["id"], recordMask=".*", recordTypes=["CNAME"]) + classless_zone_delegation_zone_acl = generate_acl_rule("Write", groupId=shared_zone_test_context.dummy_group["id"], recordTypes=["PTR"]) - rs_delete_dummy = get_recordset_json(dummy_zone, "delete", "AAAA", [{"address": "1:2:3:4:5:6:7:8"}]) - rs_update_dummy = get_recordset_json(dummy_zone, "update", "A", [{"address": "1.2.3.4"}]) - rs_delete_ok = get_recordset_json(ok_zone, "delete", "CNAME", [{"cname": "delete.cname."}]) - rs_update_classless = get_recordset_json(classless_zone_delegation_zone, "193", "PTR", [{"ptrdname": "will.change."}]) - txt_delete_dummy = get_recordset_json(dummy_zone, "delete-txt", "TXT", [{"text": "test"}]) - mx_delete_dummy = get_recordset_json(dummy_zone, "delete-mx", "MX", [{"preference": 1, "exchange": "foo.bar."}]) - mx_update_dummy = get_recordset_json(dummy_zone, "update-mx", "MX", [{"preference": 1, "exchange": "foo.bar."}]) + rs_delete_dummy = create_recordset(dummy_zone, "delete", "AAAA", [{"address": "1:2:3:4:5:6:7:8"}]) + rs_update_dummy = create_recordset(dummy_zone, "update", "A", [{"address": "1.2.3.4"}]) + rs_delete_ok = 
create_recordset(ok_zone, "delete", "CNAME", [{"cname": "delete.cname."}]) + rs_update_classless = create_recordset(classless_zone_delegation_zone, "193", "PTR", [{"ptrdname": "will.change."}]) + txt_delete_dummy = create_recordset(dummy_zone, "delete-txt", "TXT", [{"text": "test"}]) + mx_delete_dummy = create_recordset(dummy_zone, "delete-mx", "MX", [{"preference": 1, "exchange": "foo.bar."}]) + mx_update_dummy = create_recordset(dummy_zone, "update-mx", "MX", [{"preference": 1, "exchange": "foo.bar."}]) + ok_zone_name = shared_zone_test_context.ok_zone["name"] + dummy_zone_name = shared_zone_test_context.dummy_zone["name"] + ip4_prefix = shared_zone_test_context.ip4_classless_prefix batch_change_input = { "comments": "this is optional", "changes": [ - get_change_A_AAAA_json("delete.dummy.", record_type="AAAA", change_type="DeleteRecordSet"), - get_change_A_AAAA_json("update.dummy.", ttl=300, address="1.2.3.4"), - get_change_A_AAAA_json("Update.dummy.", change_type="DeleteRecordSet"), - get_change_CNAME_json("delete.ok.", change_type="DeleteRecordSet"), - get_change_PTR_json("192.0.2.193", ttl=300, ptrdname="has.changed."), - get_change_PTR_json("192.0.2.193", change_type="DeleteRecordSet"), - get_change_TXT_json("delete-txt.dummy.", change_type="DeleteRecordSet"), - get_change_MX_json("delete-mx.dummy.", change_type="DeleteRecordSet"), - get_change_MX_json("update-mx.dummy.", change_type="DeleteRecordSet"), - get_change_MX_json("update-mx.dummy.", preference=1000) + get_change_A_AAAA_json(f"delete.{dummy_zone_name}", record_type="AAAA", change_type="DeleteRecordSet"), + get_change_A_AAAA_json(f"update.{dummy_zone_name}", ttl=300, address="1.2.3.4"), + get_change_A_AAAA_json(f"Update.{dummy_zone_name}", change_type="DeleteRecordSet"), + get_change_CNAME_json(f"delete.{ok_zone_name}", change_type="DeleteRecordSet"), + get_change_PTR_json(f"{ip4_prefix}.193", ttl=300, ptrdname="has.changed."), + get_change_PTR_json(f"{ip4_prefix}.193", 
change_type="DeleteRecordSet"), + get_change_TXT_json(f"delete-txt.{dummy_zone_name}", change_type="DeleteRecordSet"), + get_change_MX_json(f"delete-mx.{dummy_zone_name}", change_type="DeleteRecordSet"), + get_change_MX_json(f"update-mx.{dummy_zone_name}", change_type="DeleteRecordSet"), + get_change_MX_json(f"update-mx.{dummy_zone_name}", preference=1000) ] } @@ -472,13 +473,13 @@ def test_create_batch_change_with_updates_deletes_success(shared_zone_test_conte try: for rs in to_create: - if rs['zoneId'] == dummy_zone['id']: + if rs["zoneId"] == dummy_zone["id"]: create_client = dummy_client else: create_client = ok_client create_rs = create_client.create_recordset(rs, status=202) - create_client.wait_until_recordset_change_status(create_rs, 'Complete') + create_client.wait_until_recordset_change_status(create_rs, "Complete") # Configure ACL rules add_ok_acl_rules(shared_zone_test_context, [ok_zone_acl]) @@ -487,66 +488,66 @@ def test_create_batch_change_with_updates_deletes_success(shared_zone_test_conte result = dummy_client.create_batch_change(batch_change_input, status=202) completed_batch = dummy_client.wait_until_batch_change_completed(result) - record_set_list = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes']] + record_set_list = [(change["zoneId"], change["recordSetId"]) for change in completed_batch["changes"]] - to_delete = set(record_set_list) # set here because multiple items in the batch combine to one RS + to_delete = set(record_set_list) # set here because multiple items in the batch combine to one RS ## validate initial response - assert_that(result['comments'], is_("this is optional")) - assert_that(result['userName'], is_("dummy")) - assert_that(result['userId'], is_("dummy")) - assert_that(result['id'], is_not(none())) - assert_that(completed_batch['status'], is_("Complete")) + assert_that(result["comments"], is_("this is optional")) + assert_that(result["userName"], is_("dummy")) + 
assert_that(result["userId"], is_("dummy")) + assert_that(result["id"], is_not(none())) + assert_that(completed_batch["status"], is_("Complete")) - assert_change_success_response_values(result['changes'], zone=dummy_zone, index=0, record_name="delete", - input_name="delete.dummy.", record_data=None, record_type="AAAA", change_type="DeleteRecordSet") - assert_change_success_response_values(result['changes'], zone=dummy_zone, index=1, record_name="update", ttl=300, - input_name="update.dummy.", record_data="1.2.3.4") - assert_change_success_response_values(result['changes'], zone=dummy_zone, index=2, record_name="Update", - input_name="Update.dummy.", record_data=None, change_type="DeleteRecordSet") - assert_change_success_response_values(result['changes'], zone=ok_zone, index=3, record_name="delete", - input_name="delete.ok.", record_data=None, record_type="CNAME", change_type="DeleteRecordSet") - assert_change_success_response_values(result['changes'], zone=classless_zone_delegation_zone, index=4, record_name="193", ttl=300, - input_name="192.0.2.193", record_data="has.changed.", record_type="PTR") - assert_change_success_response_values(result['changes'], zone=classless_zone_delegation_zone, index=5, record_name="193", - input_name="192.0.2.193", record_data=None, record_type="PTR", change_type="DeleteRecordSet") - assert_change_success_response_values(result['changes'], zone=dummy_zone, index=6, record_name="delete-txt", - input_name="delete-txt.dummy.", record_data=None, record_type="TXT", change_type="DeleteRecordSet") - assert_change_success_response_values(result['changes'], zone=dummy_zone, index=7, record_name="delete-mx", - input_name="delete-mx.dummy.", record_data=None, record_type="MX", change_type="DeleteRecordSet") - assert_change_success_response_values(result['changes'], zone=dummy_zone, index=8, record_name="update-mx", - input_name="update-mx.dummy.", record_data=None, record_type="MX", change_type="DeleteRecordSet") - 
assert_change_success_response_values(result['changes'], zone=dummy_zone, index=9, record_name="update-mx", - input_name="update-mx.dummy.", record_data={'preference': 1000, 'exchange': 'foo.bar.'}, record_type="MX") + assert_change_success(result["changes"], zone=dummy_zone, index=0, record_name="delete", + input_name=f"delete.{dummy_zone_name}", record_data=None, record_type="AAAA", change_type="DeleteRecordSet") + assert_change_success(result["changes"], zone=dummy_zone, index=1, record_name="update", ttl=300, + input_name=f"update.{dummy_zone_name}", record_data="1.2.3.4") + assert_change_success(result["changes"], zone=dummy_zone, index=2, record_name="Update", + input_name=f"Update.{dummy_zone_name}", record_data=None, change_type="DeleteRecordSet") + assert_change_success(result["changes"], zone=ok_zone, index=3, record_name="delete", + input_name=f"delete.{ok_zone_name}", record_data=None, record_type="CNAME", change_type="DeleteRecordSet") + assert_change_success(result["changes"], zone=classless_zone_delegation_zone, index=4, record_name="193", ttl=300, + input_name=f"{ip4_prefix}.193", record_data="has.changed.", record_type="PTR") + assert_change_success(result["changes"], zone=classless_zone_delegation_zone, index=5, record_name="193", + input_name=f"{ip4_prefix}.193", record_data=None, record_type="PTR", change_type="DeleteRecordSet") + assert_change_success(result["changes"], zone=dummy_zone, index=6, record_name="delete-txt", + input_name=f"delete-txt.{dummy_zone_name}", record_data=None, record_type="TXT", change_type="DeleteRecordSet") + assert_change_success(result["changes"], zone=dummy_zone, index=7, record_name="delete-mx", + input_name=f"delete-mx.{dummy_zone_name}", record_data=None, record_type="MX", change_type="DeleteRecordSet") + assert_change_success(result["changes"], zone=dummy_zone, index=8, record_name="update-mx", + input_name=f"update-mx.{dummy_zone_name}", record_data=None, record_type="MX", change_type="DeleteRecordSet") + 
assert_change_success(result["changes"], zone=dummy_zone, index=9, record_name="update-mx", + input_name=f"update-mx.{dummy_zone_name}", record_data={"preference": 1000, "exchange": "foo.bar."}, record_type="MX") rs1 = dummy_client.get_recordset(record_set_list[0][0], record_set_list[0][1], status=404) assert_that(rs1, is_("RecordSet with id " + record_set_list[0][1] + " does not exist.")) - rs2 = dummy_client.get_recordset(record_set_list[1][0], record_set_list[1][1])['recordSet'] - expected2 = {'name': 'update', - 'zoneId': dummy_zone['id'], - 'type': 'A', - 'ttl': 300, - 'records': [{'address': '1.2.3.4'}]} + rs2 = dummy_client.get_recordset(record_set_list[1][0], record_set_list[1][1])["recordSet"] + expected2 = {"name": "update", + "zoneId": dummy_zone["id"], + "type": "A", + "ttl": 300, + "records": [{"address": "1.2.3.4"}]} verify_recordset(rs2, expected2) # since this is an update, record_set_list[1] and record_set_list[2] are the same record - rs3 = dummy_client.get_recordset(record_set_list[2][0], record_set_list[2][1])['recordSet'] + rs3 = dummy_client.get_recordset(record_set_list[2][0], record_set_list[2][1])["recordSet"] verify_recordset(rs3, expected2) rs4 = dummy_client.get_recordset(record_set_list[3][0], record_set_list[3][1], status=404) assert_that(rs4, is_("RecordSet with id " + record_set_list[3][1] + " does not exist.")) - rs5 = dummy_client.get_recordset(record_set_list[4][0], record_set_list[4][1])['recordSet'] - expected5 = {'name': '193', - 'zoneId': classless_zone_delegation_zone['id'], - 'type': 'PTR', - 'ttl': 300, - 'records': [{'ptrdname': 'has.changed.'}]} + rs5 = dummy_client.get_recordset(record_set_list[4][0], record_set_list[4][1])["recordSet"] + expected5 = {"name": "193", + "zoneId": classless_zone_delegation_zone["id"], + "type": "PTR", + "ttl": 300, + "records": [{"ptrdname": "has.changed."}]} verify_recordset(rs5, expected5) # since this is an update, record_set_list[5] and record_set_list[4] are the same record - rs6 = 
dummy_client.get_recordset(record_set_list[5][0], record_set_list[5][1])['recordSet'] + rs6 = dummy_client.get_recordset(record_set_list[5][0], record_set_list[5][1])["recordSet"] verify_recordset(rs6, expected5) rs7 = dummy_client.get_recordset(record_set_list[6][0], record_set_list[6][1], status=404) @@ -555,18 +556,18 @@ def test_create_batch_change_with_updates_deletes_success(shared_zone_test_conte rs8 = dummy_client.get_recordset(record_set_list[7][0], record_set_list[7][1], status=404) assert_that(rs8, is_("RecordSet with id " + record_set_list[7][1] + " does not exist.")) - rs9 = dummy_client.get_recordset(record_set_list[8][0], record_set_list[8][1])['recordSet'] - expected9 = {'name': 'update-mx', - 'zoneId': dummy_zone['id'], - 'type': 'MX', - 'ttl': 200, - 'records': [{'preference': 1000, 'exchange': 'foo.bar.'}]} + rs9 = dummy_client.get_recordset(record_set_list[8][0], record_set_list[8][1])["recordSet"] + expected9 = {"name": "update-mx", + "zoneId": dummy_zone["id"], + "type": "MX", + "ttl": 200, + "records": [{"preference": 1000, "exchange": "foo.bar."}]} verify_recordset(rs9, expected9) finally: # Clean up updates - dummy_deletes = [rs for rs in to_delete if rs[0] == dummy_zone['id']] - ok_deletes = [rs for rs in to_delete if rs[0] != dummy_zone['id']] + dummy_deletes = [rs for rs in to_delete if rs[0] == dummy_zone["id"]] + ok_deletes = [rs for rs in to_delete if rs[0] != dummy_zone["id"]] clear_zoneid_rsid_tuple_list(dummy_deletes, dummy_client) clear_zoneid_rsid_tuple_list(ok_deletes, ok_client) @@ -584,7 +585,7 @@ def test_create_batch_change_without_comments_succeeds(shared_zone_test_context) client = shared_zone_test_context.ok_vinyldns_client parent_zone = shared_zone_test_context.parent_zone test_record_name = generate_record_name() - test_record_fqdn = '{0}.{1}'.format(test_record_name, parent_zone['name']) + test_record_fqdn = "{0}.{1}".format(test_record_name, parent_zone["name"]) batch_change_input = { "changes": [ 
get_change_A_AAAA_json(test_record_fqdn, address="4.5.6.7"), @@ -595,10 +596,10 @@ def test_create_batch_change_without_comments_succeeds(shared_zone_test_context) try: result = client.create_batch_change(batch_change_input, status=202) completed_batch = client.wait_until_batch_change_completed(result) - to_delete = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes']] + to_delete = [(change["zoneId"], change["recordSetId"]) for change in completed_batch["changes"]] - assert_change_success_response_values(result['changes'], zone=parent_zone, index=0, record_name=test_record_name, - input_name=test_record_fqdn, record_data="4.5.6.7") + assert_change_success(result["changes"], zone=parent_zone, index=0, record_name=test_record_name, + input_name=test_record_fqdn, record_data="4.5.6.7") finally: clear_zoneid_rsid_tuple_list(to_delete, client) @@ -610,23 +611,23 @@ def test_create_batch_change_with_owner_group_id_succeeds(shared_zone_test_conte client = shared_zone_test_context.ok_vinyldns_client ok_zone = shared_zone_test_context.ok_zone test_record_name = generate_record_name() - test_record_fqdn = '{0}.{1}'.format(test_record_name, ok_zone['name']) + test_record_fqdn = "{0}.{1}".format(test_record_name, ok_zone["name"]) batch_change_input = { "changes": [ get_change_A_AAAA_json(test_record_fqdn, address="4.3.2.1") ], - "ownerGroupId": shared_zone_test_context.ok_group['id'] + "ownerGroupId": shared_zone_test_context.ok_group["id"] } to_delete = [] try: result = client.create_batch_change(batch_change_input, status=202) completed_batch = client.wait_until_batch_change_completed(result) - to_delete = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes']] + to_delete = [(change["zoneId"], change["recordSetId"]) for change in completed_batch["changes"]] - assert_change_success_response_values(result['changes'], zone=ok_zone, index=0, record_name=test_record_name, - input_name=test_record_fqdn, 
record_data="4.3.2.1") - assert_that(completed_batch['ownerGroupId'], is_(shared_zone_test_context.ok_group['id'])) + assert_change_success(result["changes"], zone=ok_zone, index=0, record_name=test_record_name, + input_name=test_record_fqdn, record_data="4.3.2.1") + assert_that(completed_batch["ownerGroupId"], is_(shared_zone_test_context.ok_group["id"])) finally: clear_zoneid_rsid_tuple_list(to_delete, client) @@ -639,7 +640,7 @@ def test_create_batch_change_without_owner_group_id_succeeds(shared_zone_test_co client = shared_zone_test_context.ok_vinyldns_client ok_zone = shared_zone_test_context.ok_zone test_record_name = generate_record_name() - test_record_fqdn = '{0}.{1}'.format(test_record_name, ok_zone['name']) + test_record_fqdn = "{0}.{1}".format(test_record_name, ok_zone["name"]) batch_change_input = { "changes": [ get_change_A_AAAA_json(test_record_fqdn, address="4.3.2.1") @@ -650,11 +651,11 @@ def test_create_batch_change_without_owner_group_id_succeeds(shared_zone_test_co try: result = client.create_batch_change(batch_change_input, status=202) completed_batch = client.wait_until_batch_change_completed(result) - to_delete = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes']] + to_delete = [(change["zoneId"], change["recordSetId"]) for change in completed_batch["changes"]] - assert_change_success_response_values(result['changes'], zone=ok_zone, index=0, record_name=test_record_name, - input_name=test_record_fqdn, record_data="4.3.2.1") - assert_that(completed_batch, is_not(has_key('ownerGroupId'))) + assert_change_success(result["changes"], zone=ok_zone, index=0, record_name=test_record_name, + input_name=test_record_fqdn, record_data="4.3.2.1") + assert_that(completed_batch, is_not(has_key("ownerGroupId"))) finally: clear_zoneid_rsid_tuple_list(to_delete, client) @@ -668,8 +669,8 @@ def test_create_batch_change_with_missing_ttl_returns_default_or_existing(shared client = shared_zone_test_context.ok_vinyldns_client 
ok_zone = shared_zone_test_context.ok_zone update_name = generate_record_name() - update_fqdn = '{0}.{1}'.format(update_name, ok_zone['name']) - rs_update = get_recordset_json(ok_zone, update_name, "CNAME", [{"cname": "old-ttl.cname."}], ttl=300) + update_fqdn = "{0}.{1}".format(update_name, ok_zone["name"]) + rs_update = create_recordset(ok_zone, update_name, "CNAME", [{"cname": "old-ttl.cname."}], ttl=300) batch_change_input = { "comments": "this is optional", "changes": [ @@ -688,7 +689,7 @@ def test_create_batch_change_with_missing_ttl_returns_default_or_existing(shared }, { "changeType": "Add", - "inputName": generate_record_name("ok."), + "inputName": generate_record_name(ok_zone["name"]), "type": "CNAME", "record": { "cname": "new-ttl-record.cname." @@ -700,19 +701,19 @@ def test_create_batch_change_with_missing_ttl_returns_default_or_existing(shared try: create_rs = client.create_recordset(rs_update, status=202) - client.wait_until_recordset_change_status(create_rs, 'Complete') - to_delete = [(create_rs['zone']['id'], create_rs['recordSet']['id'])] + client.wait_until_recordset_change_status(create_rs, "Complete") + to_delete = [(create_rs["zone"]["id"], create_rs["recordSet"]["id"])] result = client.create_batch_change(batch_change_input, status=202) completed_batch = client.wait_until_batch_change_completed(result) - record_set_list = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes']] + record_set_list = [(change["zoneId"], change["recordSetId"]) for change in completed_batch["changes"]] to_delete = set(record_set_list) - updated_record = client.get_recordset(record_set_list[0][0], record_set_list[0][1])['recordSet'] - assert_that(updated_record['ttl'], is_(300)) + updated_record = client.get_recordset(record_set_list[0][0], record_set_list[0][1])["recordSet"] + assert_that(updated_record["ttl"], is_(300)) - new_record = client.get_recordset(record_set_list[2][0], record_set_list[2][1])['recordSet'] - 
assert_that(new_record['ttl'], is_(7200)) + new_record = client.get_recordset(record_set_list[2][0], record_set_list[2][1])["recordSet"] + assert_that(new_record["ttl"], is_(7200)) finally: clear_zoneid_rsid_tuple_list(to_delete, client) @@ -723,12 +724,12 @@ def test_create_batch_change_partial_failure(shared_zone_test_context): Test batch change status with partial failures """ client = shared_zone_test_context.ok_vinyldns_client - + ok_zone = shared_zone_test_context.ok_zone batch_change_input = { "comments": "this is optional", "changes": [ - get_change_A_AAAA_json("will-succeed.ok.", address="4.5.6.7"), - get_change_A_AAAA_json("direct-to-backend.ok.", address="4.5.6.7") # this record will fail in processing + get_change_A_AAAA_json(f"will-succeed.{ok_zone['name']}", address="4.5.6.7"), + get_change_A_AAAA_json(f"direct-to-backend.{ok_zone['name']}", address="4.5.6.7") # this record will fail in processing ] } @@ -738,11 +739,11 @@ def test_create_batch_change_partial_failure(shared_zone_test_context): dns_add(shared_zone_test_context.ok_zone, "direct-to-backend", 200, "A", "1.2.3.4") result = client.create_batch_change(batch_change_input, status=202) completed_batch = client.wait_until_batch_change_completed(result) - record_set_list = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes'] if - change['status'] == "Complete"] + record_set_list = [(change["zoneId"], change["recordSetId"]) for change in completed_batch["changes"] if + change["status"] == "Complete"] to_delete = set(record_set_list) # set here because multiple items in the batch combine to one RS - assert_that(completed_batch['status'], is_("PartialFailure")) + assert_that(completed_batch["status"], is_("PartialFailure")) finally: clear_zoneid_rsid_tuple_list(to_delete, client) @@ -754,12 +755,12 @@ def test_create_batch_change_failed(shared_zone_test_context): Test batch change status with all failures """ client = shared_zone_test_context.ok_vinyldns_client - + 
ok_zone_name = shared_zone_test_context.ok_zone["name"] batch_change_input = { "comments": "this is optional", "changes": [ - get_change_A_AAAA_json("backend-foo.ok.", address="4.5.6.7"), - get_change_A_AAAA_json("backend-already-exists.ok.", address="4.5.6.7") + get_change_A_AAAA_json(f"backend-foo.{ok_zone_name}", address="4.5.6.7"), + get_change_A_AAAA_json(f"backend-already-exists.{ok_zone_name}", address="4.5.6.7") ] } @@ -770,7 +771,7 @@ def test_create_batch_change_failed(shared_zone_test_context): result = client.create_batch_change(batch_change_input, status=202) completed_batch = client.wait_until_batch_change_completed(result) - assert_that(completed_batch['status'], is_("Failed")) + assert_that(completed_batch["status"], is_("Failed")) finally: dns_delete(shared_zone_test_context.ok_zone, "backend-foo", "A") @@ -787,7 +788,7 @@ def test_empty_batch_fails(shared_zone_test_context): "changes": [] } - errors = shared_zone_test_context.ok_vinyldns_client.create_batch_change(batch_change_input, status=400)['errors'] + errors = shared_zone_test_context.ok_vinyldns_client.create_batch_change(batch_change_input, status=400)["errors"] assert_that(errors[0], contains_string( "Batch change contained no changes. 
Batch change must have at least one change, up to a maximum of")) @@ -909,53 +910,57 @@ def test_create_batch_change_with_high_value_domain_fails(shared_zone_test_conte """ client = shared_zone_test_context.ok_vinyldns_client + ok_zone_name = shared_zone_test_context.ok_zone["name"] + ip4_prefix = shared_zone_test_context.ip4_classless_prefix + ip6_prefix = shared_zone_test_context.ip6_prefix + batch_change_input = { "comments": "this is optional", "changes": [ - get_change_A_AAAA_json("high-value-domain-add.ok."), - get_change_A_AAAA_json("high-value-domain-update.ok.", change_type="DeleteRecordSet"), - get_change_A_AAAA_json("high-value-domain-update.ok."), - get_change_A_AAAA_json("high-value-domain-delete.ok.", change_type="DeleteRecordSet"), - get_change_PTR_json("192.0.2.252"), - get_change_PTR_json("192.0.2.253", change_type="DeleteRecordSet"), # 253 exists already - get_change_PTR_json("192.0.2.253"), - get_change_PTR_json("192.0.2.253", change_type="DeleteRecordSet"), - get_change_PTR_json("fd69:27cc:fe91:0:0:0:0:ffff"), - get_change_PTR_json("fd69:27cc:fe91:0:0:0:ffff:0", change_type="DeleteRecordSet"), # ffff:0 exists already - get_change_PTR_json("fd69:27cc:fe91:0:0:0:ffff:0"), - get_change_PTR_json("fd69:27cc:fe91:0:0:0:ffff:0", change_type="DeleteRecordSet"), + get_change_A_AAAA_json(f"high-value-domain-add.{ok_zone_name}"), + get_change_A_AAAA_json(f"high-value-domain-update.{ok_zone_name}", change_type="DeleteRecordSet"), + get_change_A_AAAA_json(f"high-value-domain-update.{ok_zone_name}"), + get_change_A_AAAA_json(f"high-value-domain-delete.{ok_zone_name}", change_type="DeleteRecordSet"), + get_change_PTR_json(f"{ip4_prefix}.252"), + get_change_PTR_json(f"{ip4_prefix}.253", change_type="DeleteRecordSet"), # 253 exists already + get_change_PTR_json(f"{ip4_prefix}.253"), + get_change_PTR_json(f"{ip4_prefix}.253", change_type="DeleteRecordSet"), + get_change_PTR_json(f"{ip6_prefix}:0:0:0:0:ffff"), + get_change_PTR_json(f"{ip6_prefix}:0:0:0:ffff:0", 
change_type="DeleteRecordSet"), # ffff:0 exists already + get_change_PTR_json(f"{ip6_prefix}:0:0:0:ffff:0"), + get_change_PTR_json(f"{ip6_prefix}:0:0:0:ffff:0", change_type="DeleteRecordSet"), - get_change_A_AAAA_json("i-can-be-touched.ok.") + get_change_A_AAAA_json(f"i-can-be-touched.{ok_zone_name}") ] } response = client.create_batch_change(batch_change_input, status=400) assert_error(response[0], error_messages=[ - 'Record name "high-value-domain-add.ok." is configured as a High Value Domain, so it cannot be modified.']) + f'Record name "high-value-domain-add.{ok_zone_name}" is configured as a High Value Domain, so it cannot be modified.']) assert_error(response[1], error_messages=[ - 'Record name "high-value-domain-update.ok." is configured as a High Value Domain, so it cannot be modified.']) + f'Record name "high-value-domain-update.{ok_zone_name}" is configured as a High Value Domain, so it cannot be modified.']) assert_error(response[2], error_messages=[ - 'Record name "high-value-domain-update.ok." is configured as a High Value Domain, so it cannot be modified.']) + f'Record name "high-value-domain-update.{ok_zone_name}" is configured as a High Value Domain, so it cannot be modified.']) assert_error(response[3], error_messages=[ - 'Record name "high-value-domain-delete.ok." 
is configured as a High Value Domain, so it cannot be modified.']) + f'Record name "high-value-domain-delete.{ok_zone_name}" is configured as a High Value Domain, so it cannot be modified.']) assert_error(response[4], error_messages=[ - 'Record name "192.0.2.252" is configured as a High Value Domain, so it cannot be modified.']) + f'Record name "{ip4_prefix}.252" is configured as a High Value Domain, so it cannot be modified.']) assert_error(response[5], error_messages=[ - 'Record name "192.0.2.253" is configured as a High Value Domain, so it cannot be modified.']) + f'Record name "{ip4_prefix}.253" is configured as a High Value Domain, so it cannot be modified.']) assert_error(response[6], error_messages=[ - 'Record name "192.0.2.253" is configured as a High Value Domain, so it cannot be modified.']) + f'Record name "{ip4_prefix}.253" is configured as a High Value Domain, so it cannot be modified.']) assert_error(response[7], error_messages=[ - 'Record name "192.0.2.253" is configured as a High Value Domain, so it cannot be modified.']) + f'Record name "{ip4_prefix}.253" is configured as a High Value Domain, so it cannot be modified.']) assert_error(response[8], error_messages=[ - 'Record name "fd69:27cc:fe91:0:0:0:0:ffff" is configured as a High Value Domain, so it cannot be modified.']) + f'Record name "{ip6_prefix}:0:0:0:0:ffff" is configured as a High Value Domain, so it cannot be modified.']) assert_error(response[9], error_messages=[ - 'Record name "fd69:27cc:fe91:0:0:0:ffff:0" is configured as a High Value Domain, so it cannot be modified.']) + f'Record name "{ip6_prefix}:0:0:0:ffff:0" is configured as a High Value Domain, so it cannot be modified.']) assert_error(response[10], error_messages=[ - 'Record name "fd69:27cc:fe91:0:0:0:ffff:0" is configured as a High Value Domain, so it cannot be modified.']) + f'Record name "{ip6_prefix}:0:0:0:ffff:0" is configured as a High Value Domain, so it cannot be modified.']) assert_error(response[11], error_messages=[ 
- 'Record name "fd69:27cc:fe91:0:0:0:ffff:0" is configured as a High Value Domain, so it cannot be modified.']) + f'Record name "{ip6_prefix}:0:0:0:ffff:0" is configured as a High Value Domain, so it cannot be modified.']) assert_that(response[12], is_not(has_key("errors"))) @@ -968,42 +973,46 @@ def test_create_batch_change_with_domains_requiring_review_succeeds(shared_zone_ rejecter = shared_zone_test_context.support_user_client client = shared_zone_test_context.ok_vinyldns_client + ok_zone_name = shared_zone_test_context.ok_zone["name"] + ip4_prefix = shared_zone_test_context.ip4_classless_prefix + ip6_prefix = shared_zone_test_context.ip6_prefix + batch_change_input = { - "ownerGroupId": shared_zone_test_context.ok_group['id'], + "ownerGroupId": shared_zone_test_context.ok_group["id"], "comments": "this is optional", "changes": [ - get_change_A_AAAA_json("needs-review-add.ok."), - get_change_A_AAAA_json("needs-review-update.ok.", change_type="DeleteRecordSet"), - get_change_A_AAAA_json("needs-review-update.ok."), - get_change_A_AAAA_json("needs-review-delete.ok.", change_type="DeleteRecordSet"), - get_change_PTR_json("192.0.2.254"), - get_change_PTR_json("192.0.2.255", change_type="DeleteRecordSet"), # 255 exists already - get_change_PTR_json("192.0.2.255"), - get_change_PTR_json("192.0.2.255", change_type="DeleteRecordSet"), - get_change_PTR_json("fd69:27cc:fe91:0:0:0:ffff:1"), - get_change_PTR_json("fd69:27cc:fe91:0:0:0:ffff:2", change_type="DeleteRecordSet"), # ffff:2 exists already - get_change_PTR_json("fd69:27cc:fe91:0:0:0:ffff:2"), - get_change_PTR_json("fd69:27cc:fe91:0:0:0:ffff:2", change_type="DeleteRecordSet"), + get_change_A_AAAA_json(f"needs-review-add.{ok_zone_name}"), + get_change_A_AAAA_json(f"needs-review-update.{ok_zone_name}", change_type="DeleteRecordSet"), + get_change_A_AAAA_json(f"needs-review-update.{ok_zone_name}"), + get_change_A_AAAA_json(f"needs-review-delete.{ok_zone_name}", change_type="DeleteRecordSet"), + 
get_change_PTR_json(f"{ip4_prefix}.254"), + get_change_PTR_json(f"{ip4_prefix}.255", change_type="DeleteRecordSet"), # 255 exists already + get_change_PTR_json(f"{ip4_prefix}.255"), + get_change_PTR_json(f"{ip4_prefix}.255", change_type="DeleteRecordSet"), + get_change_PTR_json(f"{ip6_prefix}:0:0:0:ffff:1"), + get_change_PTR_json(f"{ip6_prefix}:0:0:0:ffff:2", change_type="DeleteRecordSet"), # ffff:2 exists already + get_change_PTR_json(f"{ip6_prefix}:0:0:0:ffff:2"), + get_change_PTR_json(f"{ip6_prefix}:0:0:0:ffff:2", change_type="DeleteRecordSet"), - get_change_A_AAAA_json("i-can-be-touched.ok.") + get_change_A_AAAA_json(f"i-can-be-touched.{ok_zone_name}") ] } response = None try: response = client.create_batch_change(batch_change_input, status=202) - get_batch = client.get_batch_change(response['id']) - assert_that(get_batch['status'], is_('PendingReview')) - assert_that(get_batch['approvalStatus'], is_('PendingReview')) - for i in xrange(1, 11): - assert_that(get_batch['changes'][i]['status'], is_('NeedsReview')) - assert_that(get_batch['changes'][i]['validationErrors'][0]['errorType'], is_('RecordRequiresManualReview')) - assert_that(get_batch['changes'][12]['validationErrors'], empty()) + get_batch = client.get_batch_change(response["id"]) + assert_that(get_batch["status"], is_("PendingReview")) + assert_that(get_batch["approvalStatus"], is_("PendingReview")) + for i in range(1, 11): + assert_that(get_batch["changes"][i]["status"], is_("NeedsReview")) + assert_that(get_batch["changes"][i]["validationErrors"][0]["errorType"], is_("RecordRequiresManualReview")) + assert_that(get_batch["changes"][12]["validationErrors"], empty()) finally: # Clean up so data doesn't change if response: - rejecter.reject_batch_change(response['id'], status=200) + rejecter.reject_batch_change(response["id"], status=200) @pytest.mark.manual_batch_review @@ -1012,14 +1021,14 @@ def test_create_batch_change_with_soft_failures_and_allow_manual_review_disabled Test creating a batch change 
with soft errors and allowManualReview disabled results in hard failure """ client = shared_zone_test_context.ok_vinyldns_client - dt = (datetime.datetime.now() + datetime.timedelta(days=1)).strftime('%Y-%m-%dT%H:%M:%SZ') + dt = (datetime.datetime.now() + datetime.timedelta(days=1)).strftime("%Y-%m-%dT%H:%M:%SZ") batch_change_input = { "comments": "this is optional", "changes": [ get_change_A_AAAA_json("non.existent", address="4.5.6.7"), ], - "ownerGroupId": shared_zone_test_context.ok_group['id'] + "ownerGroupId": shared_zone_test_context.ok_group["id"] } response = client.create_batch_change(batch_change_input, False, status=400) @@ -1157,9 +1166,9 @@ def test_create_batch_change_with_incorrect_CNAME_record_attribute_fails(shared_ } ] } - errors = client.create_batch_change(bad_CNAME_data_request, status=400)['errors'] + errors = client.create_batch_change(bad_CNAME_data_request, status=400)["errors"] - assert_that(errors, contains("Missing CNAME.cname")) + assert_that(errors, contains_exactly("Missing CNAME.cname")) def test_create_batch_change_with_incorrect_PTR_record_attribute_fails(shared_zone_test_context): @@ -1181,9 +1190,9 @@ def test_create_batch_change_with_incorrect_PTR_record_attribute_fails(shared_zo } ] } - errors = client.create_batch_change(bad_PTR_data_request, status=400)['errors'] + errors = client.create_batch_change(bad_PTR_data_request, status=400)["errors"] - assert_that(errors, contains("Missing PTR.ptrdname")) + assert_that(errors, contains_exactly("Missing PTR.ptrdname")) def test_create_batch_change_with_bad_CNAME_record_attribute_fails(shared_zone_test_context): @@ -1235,6 +1244,7 @@ def test_create_batch_change_with_bad_PTR_record_attribute_fails(shared_zone_tes assert_error(error1, error_messages=["PTR must be less than 255 characters"]) assert_error(error2, error_messages=["PTR must be less than 255 characters"]) + def test_create_batch_change_with_missing_input_name_for_delete_fails(shared_zone_test_context): """ Test creating a 
batch change without an inputName for DeleteRecordSet fails @@ -1278,32 +1288,33 @@ def test_mx_recordtype_cannot_have_invalid_preference(shared_zone_test_context): Test batch fails with bad mx preference """ ok_client = shared_zone_test_context.ok_vinyldns_client + ok_zone_name = shared_zone_test_context.ok_zone["name"] batch_change_input_low_add = { "comments": "this is optional", "changes": [ - get_change_MX_json("too-small.ok.", preference=-1) + get_change_MX_json(f"too-small.{ok_zone_name}", preference=-1) ] } batch_change_input_high_add = { "comments": "this is optional", "changes": [ - get_change_MX_json("too-big.ok.", preference=65536) + get_change_MX_json(f"too-big.{ok_zone_name}", preference=65536) ] } batch_change_input_low_delete_record_set = { "comments": "this is optional", "changes": [ - get_change_MX_json("too-small.ok.", preference=-1, change_type="DeleteRecordSet") + get_change_MX_json(f"too-small.{ok_zone_name}", preference=-1, change_type="DeleteRecordSet") ] } batch_change_input_high_delete_record_set = { "comments": "this is optional", "changes": [ - get_change_MX_json("too-big.ok.", preference=65536, change_type="DeleteRecordSet") + get_change_MX_json(f"too-big.{ok_zone_name}", preference=65536, change_type="DeleteRecordSet") ] } @@ -1320,45 +1331,46 @@ def test_mx_recordtype_cannot_have_invalid_preference(shared_zone_test_context): def test_create_batch_change_with_invalid_duplicate_record_names_fails(shared_zone_test_context): """ - Test creating a batch change that contains a CNAME record and another record with the same name fails + Test creating a batch change that contains a CNAME record and another record with the same name fails """ client = shared_zone_test_context.ok_vinyldns_client + ok_zone_name: str = shared_zone_test_context.ok_zone["name"] - rs_A_delete = get_recordset_json(shared_zone_test_context.ok_zone, "delete1", "A", [{"address": "10.1.1.1"}]) - rs_CNAME_delete = 
get_recordset_json(shared_zone_test_context.ok_zone, "delete-this1", "CNAME", - [{"cname": "cname."}]) + rs_A_delete = create_recordset(shared_zone_test_context.ok_zone, "delete1", "A", [{"address": "10.1.1.1"}]) + rs_CNAME_delete = create_recordset(shared_zone_test_context.ok_zone, "delete-this1", "CNAME", + [{"cname": "cname."}]) to_create = [rs_A_delete, rs_CNAME_delete] to_delete = [] - + bare_ok_zone_name = ok_zone_name.rstrip('.') batch_change_input = { "comments": "this is optional", "changes": [ - get_change_A_AAAA_json("thing1.ok.", address="4.5.6.7"), - get_change_CNAME_json("thing1.ok"), - get_change_A_AAAA_json("delete1.ok", change_type="DeleteRecordSet"), - get_change_CNAME_json("delete1.ok"), - get_change_A_AAAA_json("delete-this1.ok", address="4.5.6.7"), - get_change_CNAME_json("delete-this1.ok", change_type="DeleteRecordSet") + get_change_A_AAAA_json(f"thing1.{ok_zone_name}", address="4.5.6.7"), + get_change_CNAME_json(f"thing1.{bare_ok_zone_name}"), + get_change_A_AAAA_json(f"delete1.{bare_ok_zone_name}", change_type="DeleteRecordSet"), + get_change_CNAME_json(f"delete1.{bare_ok_zone_name}"), + get_change_A_AAAA_json(f"delete-this1.{bare_ok_zone_name}", address="4.5.6.7"), + get_change_CNAME_json(f"delete-this1.{bare_ok_zone_name}", change_type="DeleteRecordSet") ] } try: for create_json in to_create: create_result = client.create_recordset(create_json, status=202) - to_delete.append(client.wait_until_recordset_change_status(create_result, 'Complete')) + to_delete.append(client.wait_until_recordset_change_status(create_result, "Complete")) response = client.create_batch_change(batch_change_input, status=400) - assert_successful_change_in_error_response(response[0], input_name="thing1.ok.", record_data="4.5.6.7") - assert_failed_change_in_error_response(response[1], input_name="thing1.ok.", record_type="CNAME", + assert_successful_change_in_error_response(response[0], input_name=f"thing1.{ok_zone_name}", record_data="4.5.6.7") + 
assert_failed_change_in_error_response(response[1], input_name=f"thing1.{ok_zone_name}", record_type="CNAME", record_data="test.com.", - error_messages=['Record Name "thing1.ok." Not Unique In Batch Change:' + error_messages=[f'Record Name "thing1.{ok_zone_name}" Not Unique In Batch Change:' ' cannot have multiple "CNAME" records with the same name.']) - assert_successful_change_in_error_response(response[2], input_name="delete1.ok.", change_type="DeleteRecordSet") - assert_successful_change_in_error_response(response[3], input_name="delete1.ok.", record_type="CNAME", + assert_successful_change_in_error_response(response[2], input_name=f"delete1.{ok_zone_name}", change_type="DeleteRecordSet") + assert_successful_change_in_error_response(response[3], input_name=f"delete1.{ok_zone_name}", record_type="CNAME", record_data="test.com.") - assert_successful_change_in_error_response(response[4], input_name="delete-this1.ok.", record_data="4.5.6.7") - assert_successful_change_in_error_response(response[5], input_name="delete-this1.ok.", + assert_successful_change_in_error_response(response[4], input_name=f"delete-this1.{ok_zone_name}", record_data="4.5.6.7") + assert_successful_change_in_error_response(response[5], input_name=f"delete-this1.{ok_zone_name}", change_type="DeleteRecordSet", record_type="CNAME") finally: @@ -1371,20 +1383,22 @@ def test_create_batch_change_with_readonly_user_fails(shared_zone_test_context): """ dummy_client = shared_zone_test_context.dummy_vinyldns_client ok_client = shared_zone_test_context.ok_vinyldns_client + ok_zone_name = shared_zone_test_context.ok_zone["name"] + ok_group_name = shared_zone_test_context.ok_group["name"] - acl_rule = generate_acl_rule('Read', groupId=shared_zone_test_context.dummy_group['id'], recordMask='.*', - recordTypes=['A', 'AAAA']) + acl_rule = generate_acl_rule("Read", groupId=shared_zone_test_context.dummy_group["id"], recordMask=".*", + recordTypes=["A", "AAAA"]) - delete_rs = 
get_recordset_json(shared_zone_test_context.ok_zone, "delete", "A", [{"address": "127.0.0.1"}], 300) - update_rs = get_recordset_json(shared_zone_test_context.ok_zone, "update", "A", [{"address": "127.0.0.1"}], 300) + delete_rs = create_recordset(shared_zone_test_context.ok_zone, "delete", "A", [{"address": "127.0.0.1"}], 300) + update_rs = create_recordset(shared_zone_test_context.ok_zone, "update", "A", [{"address": "127.0.0.1"}], 300) batch_change_input = { "comments": "this is optional", "changes": [ - get_change_A_AAAA_json("relative.ok.", address="4.5.6.7"), - get_change_A_AAAA_json("delete.ok.", change_type="DeleteRecordSet"), - get_change_A_AAAA_json("update.ok.", address="1.2.3.4"), - get_change_A_AAAA_json("update.ok.", change_type="DeleteRecordSet") + get_change_A_AAAA_json(f"relative.{ok_zone_name}", address="4.5.6.7"), + get_change_A_AAAA_json(f"delete.{ok_zone_name}", change_type="DeleteRecordSet"), + get_change_A_AAAA_json(f"update.{ok_zone_name}", address="1.2.3.4"), + get_change_A_AAAA_json(f"update.{ok_zone_name}", change_type="DeleteRecordSet") ] } @@ -1394,20 +1408,20 @@ def test_create_batch_change_with_readonly_user_fails(shared_zone_test_context): for rs in [delete_rs, update_rs]: create_result = ok_client.create_recordset(rs, status=202) - to_delete.append(ok_client.wait_until_recordset_change_status(create_result, 'Complete')) + to_delete.append(ok_client.wait_until_recordset_change_status(create_result, "Complete")) errors = dummy_client.create_batch_change(batch_change_input, status=400) - assert_failed_change_in_error_response(errors[0], input_name="relative.ok.", record_data="4.5.6.7", - error_messages=['User \"dummy\" is not authorized. 
Contact zone owner group: ok-group at test@test.com to make DNS changes.']) - assert_failed_change_in_error_response(errors[1], input_name="delete.ok.", change_type="DeleteRecordSet", + assert_failed_change_in_error_response(errors[0], input_name=f"relative.{ok_zone_name}", record_data="4.5.6.7", + error_messages=[f'User \"dummy\" is not authorized. Contact zone owner group: {ok_group_name} at test@test.com to make DNS changes.']) + assert_failed_change_in_error_response(errors[1], input_name=f"delete.{ok_zone_name}", change_type="DeleteRecordSet", record_data="4.5.6.7", - error_messages=['User "dummy" is not authorized. Contact zone owner group: ok-group at test@test.com to make DNS changes.']) - assert_failed_change_in_error_response(errors[2], input_name="update.ok.", record_data="1.2.3.4", - error_messages=['User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com to make DNS changes.']) - assert_failed_change_in_error_response(errors[3], input_name="update.ok.", change_type="DeleteRecordSet", + error_messages=[f'User "dummy" is not authorized. Contact zone owner group: {ok_group_name} at test@test.com to make DNS changes.']) + assert_failed_change_in_error_response(errors[2], input_name=f"update.{ok_zone_name}", record_data="1.2.3.4", + error_messages=[f'User \"dummy\" is not authorized. Contact zone owner group: {ok_group_name} at test@test.com to make DNS changes.']) + assert_failed_change_in_error_response(errors[3], input_name=f"update.{ok_zone_name}", change_type="DeleteRecordSet", record_data=None, - error_messages=['User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com to make DNS changes.']) + error_messages=[f'User \"dummy\" is not authorized. 
Contact zone owner group: {ok_group_name} at test@test.com to make DNS changes.']) finally: clear_ok_acl_rules(shared_zone_test_context) clear_recordset_list(to_delete, ok_client) @@ -1418,38 +1432,41 @@ def test_a_recordtype_add_checks(shared_zone_test_context): Test all add validations performed on A records submitted in batch changes """ client = shared_zone_test_context.ok_vinyldns_client + dummy_zone_name = shared_zone_test_context.dummy_zone["name"] + dummy_group_name = shared_zone_test_context.dummy_group["name"] + parent_zone_name = shared_zone_test_context.parent_zone["name"] existing_a_name = generate_record_name() - existing_a_fqdn = '{0}.{1}'.format(existing_a_name, shared_zone_test_context.parent_zone['name']) - existing_a = get_recordset_json(shared_zone_test_context.parent_zone, existing_a_name, "A", [{"address": "10.1.1.1"}], - 100) + existing_a_fqdn = "{0}.{1}".format(existing_a_name, shared_zone_test_context.parent_zone["name"]) + existing_a = create_recordset(shared_zone_test_context.parent_zone, existing_a_name, "A", [{"address": "10.1.1.1"}], + 100) existing_cname_name = generate_record_name() - existing_cname_fqdn = '{0}.{1}'.format(existing_cname_name, shared_zone_test_context.parent_zone['name']) - existing_cname = get_recordset_json(shared_zone_test_context.parent_zone, existing_cname_name, "CNAME", - [{"cname": "cname.data."}], 100) + existing_cname_fqdn = "{0}.{1}".format(existing_cname_name, shared_zone_test_context.parent_zone["name"]) + existing_cname = create_recordset(shared_zone_test_context.parent_zone, existing_cname_name, "CNAME", + [{"cname": "cname.data."}], 100) good_record_name = generate_record_name() - good_record_fqdn = '{0}.{1}'.format(good_record_name, shared_zone_test_context.parent_zone['name']) + good_record_fqdn = "{0}.{1}".format(good_record_name, shared_zone_test_context.parent_zone["name"]) batch_change_input = { "changes": [ # valid changes get_change_A_AAAA_json(good_record_fqdn, address="1.2.3.4"), # input 
validation failures - get_change_A_AAAA_json("bad-ttl-and-invalid-name$.parent.com.", ttl=29, address="1.2.3.4"), + get_change_A_AAAA_json(f"bad-ttl-and-invalid-name$.{parent_zone_name}", ttl=29, address="1.2.3.4"), get_change_A_AAAA_json("reverse-zone.10.10.in-addr.arpa.", address="1.2.3.4"), # zone discovery failures - get_change_A_AAAA_json("no.subzone.parent.com.", address="1.2.3.4"), + get_change_A_AAAA_json(f"no.subzone.{parent_zone_name}", address="1.2.3.4"), get_change_A_AAAA_json("no.zone.at.all.", address="1.2.3.4"), # context validation failures - get_change_CNAME_json("cname-duplicate.parent.com."), - get_change_A_AAAA_json("cname-duplicate.parent.com.", address="1.2.3.4"), + get_change_CNAME_json(f"cname-duplicate.{parent_zone_name}"), + get_change_A_AAAA_json(f"cname-duplicate.{parent_zone_name}", address="1.2.3.4"), get_change_A_AAAA_json(existing_a_fqdn, address="1.2.3.4"), get_change_A_AAAA_json(existing_cname_fqdn, address="1.2.3.4"), - get_change_A_AAAA_json("user-add-unauthorized.dummy.", address="1.2.3.4") + get_change_A_AAAA_json(f"user-add-unauthorized.{dummy_zone_name}", address="1.2.3.4") ] } @@ -1458,7 +1475,7 @@ def test_a_recordtype_add_checks(shared_zone_test_context): try: for create_json in to_create: create_result = client.create_recordset(create_json, status=202) - to_delete.append(client.wait_until_recordset_change_status(create_result, 'Complete')) + to_delete.append(client.wait_until_recordset_change_status(create_result, "Complete")) response = client.create_batch_change(batch_change_input, status=400) @@ -1467,44 +1484,44 @@ def test_a_recordtype_add_checks(shared_zone_test_context): record_data="1.2.3.4") # ttl, domain name, reverse zone input validations - assert_failed_change_in_error_response(response[1], input_name="bad-ttl-and-invalid-name$.parent.com.", ttl=29, + assert_failed_change_in_error_response(response[1], input_name=f"bad-ttl-and-invalid-name$.{parent_zone_name}", ttl=29, record_data="1.2.3.4", error_messages=[ 
'Invalid TTL: "29", must be a number between 30 and 2147483647.', - 'Invalid domain name: "bad-ttl-and-invalid-name$.parent.com.", ' - 'valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) + f'Invalid domain name: "bad-ttl-and-invalid-name$.{parent_zone_name}", ' + "valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot."]) assert_failed_change_in_error_response(response[2], input_name="reverse-zone.10.10.in-addr.arpa.", record_data="1.2.3.4", error_messages=[ "Invalid Record Type In Reverse Zone: record with name \"reverse-zone.10.10.in-addr.arpa.\" and type \"A\" is not allowed in a reverse zone."]) # zone discovery failure - assert_failed_change_in_error_response(response[3], input_name="no.subzone.parent.com.", record_data="1.2.3.4", + assert_failed_change_in_error_response(response[3], input_name=f"no.subzone.{parent_zone_name}", record_data="1.2.3.4", error_messages=[ - 'Zone Discovery Failed: zone for "no.subzone.parent.com." does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS.']) + f'Zone Discovery Failed: zone for "no.subzone.{parent_zone_name}" does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS.']) assert_failed_change_in_error_response(response[4], input_name="no.zone.at.all.", record_data="1.2.3.4", error_messages=[ 'Zone Discovery Failed: zone for "no.zone.at.all." does not exist in VinylDNS. 
If zone exists, then it must be connected to in VinylDNS.']) # context validations: duplicate name failure is always on the cname - assert_failed_change_in_error_response(response[5], input_name="cname-duplicate.parent.com.", + assert_failed_change_in_error_response(response[5], input_name=f"cname-duplicate.{parent_zone_name}", record_type="CNAME", record_data="test.com.", error_messages=[ - "Record Name \"cname-duplicate.parent.com.\" Not Unique In Batch Change: cannot have multiple \"CNAME\" records with the same name."]) - assert_successful_change_in_error_response(response[6], input_name="cname-duplicate.parent.com.", + f"Record Name \"cname-duplicate.{parent_zone_name}\" Not Unique In Batch Change: cannot have multiple \"CNAME\" records with the same name."]) + assert_successful_change_in_error_response(response[6], input_name=f"cname-duplicate.{parent_zone_name}", record_data="1.2.3.4") # context validations: conflicting recordsets, unauthorized error assert_failed_change_in_error_response(response[7], input_name=existing_a_fqdn, record_data="1.2.3.4", error_messages=[ - "Record \"" + existing_a_fqdn + "\" Already Exists: cannot add an existing record; to update it, issue a DeleteRecordSet then an Add."]) + f"Record \"{existing_a_fqdn}\" Already Exists: cannot add an existing record; to update it, issue a DeleteRecordSet then an Add."]) assert_failed_change_in_error_response(response[8], input_name=existing_cname_fqdn, record_data="1.2.3.4", error_messages=[ - "CNAME Conflict: CNAME record names must be unique. Existing record with name \"" + existing_cname_fqdn + "\" and type \"CNAME\" conflicts with this record."]) - assert_failed_change_in_error_response(response[9], input_name="user-add-unauthorized.dummy.", + f'CNAME Conflict: CNAME record names must be unique. 
Existing record with name "{existing_cname_fqdn}" and type \"CNAME\" conflicts with this record.']) + assert_failed_change_in_error_response(response[9], input_name=f"user-add-unauthorized.{dummy_zone_name}", record_data="1.2.3.4", - error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes."]) + error_messages=[f"User \"ok\" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes."]) finally: clear_recordset_list(to_delete, client) @@ -1518,35 +1535,38 @@ def test_a_recordtype_update_delete_checks(shared_zone_test_context): dummy_client = shared_zone_test_context.dummy_vinyldns_client ok_zone = shared_zone_test_context.ok_zone dummy_zone = shared_zone_test_context.dummy_zone + ok_zone_name = ok_zone["name"] + dummy_zone_name = dummy_zone["name"] + dummy_group_name = shared_zone_test_context.dummy_group["name"] group_to_delete = {} temp_group = { - 'name': 'test-group-for-record-in-private-zone', - 'email': 'test@test.com', - 'description': 'for testing that a get batch change still works when record owner group is deleted', - 'members': [ { 'id': 'ok'}, {'id': 'dummy'} ], - 'admins': [ { 'id': 'ok'}, {'id': 'dummy'} ] + "name": "test-group-for-record-in-private-zone", + "email": "test@test.com", + "description": "for testing that a get batch change still works when record owner group is deleted", + "members": [{"id": "ok"}, {"id": "dummy"}], + "admins": [{"id": "ok"}, {"id": "dummy"}] } rs_delete_name = generate_record_name() - rs_delete_fqdn = rs_delete_name + ".ok." - rs_delete_ok = get_recordset_json(ok_zone, rs_delete_name, "A", [{'address': '1.1.1.1'}]) + rs_delete_fqdn = rs_delete_name + f".{ok_zone_name}" + rs_delete_ok = create_recordset(ok_zone, rs_delete_name, "A", [{"address": "1.1.1.1"}]) rs_update_name = generate_record_name() - rs_update_fqdn = rs_update_name + ".ok." 
- rs_update_ok = get_recordset_json(ok_zone, rs_update_name, "A", [{'address': '1.1.1.1'}]) + rs_update_fqdn = rs_update_name + f".{ok_zone_name}" + rs_update_ok = create_recordset(ok_zone, rs_update_name, "A", [{"address": "1.1.1.1"}]) rs_delete_dummy_name = generate_record_name() - rs_delete_dummy_fqdn = rs_delete_dummy_name + ".dummy." - rs_delete_dummy = get_recordset_json(dummy_zone, rs_delete_dummy_name, "A", [{'address': '1.1.1.1'}]) + rs_delete_dummy_fqdn = rs_delete_dummy_name + f".{dummy_zone_name}" + rs_delete_dummy = create_recordset(dummy_zone, rs_delete_dummy_name, "A", [{"address": "1.1.1.1"}]) rs_update_dummy_name = generate_record_name() - rs_update_dummy_fqdn = rs_update_dummy_name + ".dummy." - rs_update_dummy = get_recordset_json(dummy_zone, rs_update_dummy_name, "A", [{'address': '1.1.1.1'}]) + rs_update_dummy_fqdn = rs_update_dummy_name + f".{dummy_zone_name}" + rs_update_dummy = create_recordset(dummy_zone, rs_update_dummy_name, "A", [{"address": "1.1.1.1"}]) rs_dummy_with_owner_name = generate_record_name() - rs_delete_dummy_with_owner_fqdn = rs_dummy_with_owner_name + ".dummy." - rs_update_dummy_with_owner_fqdn = rs_dummy_with_owner_name + ".dummy." 
+ rs_delete_dummy_with_owner_fqdn = rs_dummy_with_owner_name + f".{dummy_zone_name}" + rs_update_dummy_with_owner_fqdn = rs_dummy_with_owner_name + f".{dummy_zone_name}" batch_change_input = { "comments": "this is optional", @@ -1568,7 +1588,7 @@ def test_a_recordtype_update_delete_checks(shared_zone_test_context): get_change_A_AAAA_json("zone.discovery.error.", change_type="DeleteRecordSet"), # context validation failures: record does not exist, not authorized - get_change_A_AAAA_json("non-existent.ok.", change_type="DeleteRecordSet"), + get_change_A_AAAA_json(f"non-existent.{ok_zone_name}", change_type="DeleteRecordSet"), get_change_A_AAAA_json(rs_delete_dummy_fqdn, change_type="DeleteRecordSet"), get_change_A_AAAA_json(rs_update_dummy_fqdn, change_type="DeleteRecordSet"), get_change_A_AAAA_json(rs_update_dummy_fqdn, ttl=300), @@ -1583,21 +1603,21 @@ def test_a_recordtype_update_delete_checks(shared_zone_test_context): try: group_to_delete = dummy_client.create_group(temp_group, status=200) - rs_update_dummy_with_owner = get_recordset_json(dummy_zone, rs_dummy_with_owner_name, "A", [{'address': '1.1.1.1'}], 100, group_to_delete['id']) + rs_update_dummy_with_owner = create_recordset(dummy_zone, rs_dummy_with_owner_name, "A", [{"address": "1.1.1.1"}], 100, group_to_delete["id"]) create_rs_update_dummy_with_owner = dummy_client.create_recordset(rs_update_dummy_with_owner, status=202) - to_delete.append(dummy_client.wait_until_recordset_change_status(create_rs_update_dummy_with_owner, 'Complete')) + to_delete.append(dummy_client.wait_until_recordset_change_status(create_rs_update_dummy_with_owner, "Complete")) for rs in to_create: - if rs['zoneId'] == dummy_zone['id']: + if rs["zoneId"] == dummy_zone["id"]: create_client = dummy_client else: create_client = ok_client create_rs = create_client.create_recordset(rs, status=202) - to_delete.append(create_client.wait_until_recordset_change_status(create_rs, 'Complete')) + 
to_delete.append(create_client.wait_until_recordset_change_status(create_rs, "Complete")) # Confirm that record set doesn't already exist - ok_client.get_recordset(ok_zone['id'], 'non-existent', status=404) + ok_client.get_recordset(ok_zone["id"], "non-existent", status=404) response = ok_client.create_batch_change(batch_change_input, status=400) @@ -1638,30 +1658,30 @@ def test_a_recordtype_update_delete_checks(shared_zone_test_context): 'Zone Discovery Failed: zone for "zone.discovery.error." does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS.']) # context validation failures: record does not exist, not authorized - assert_failed_change_in_error_response(response[10], input_name="non-existent.ok.", + assert_failed_change_in_error_response(response[10], input_name=f"non-existent.{ok_zone_name}", change_type="DeleteRecordSet", error_messages=[ - 'Record "non-existent.ok." Does Not Exist: cannot delete a record that does not exist.']) + f'Record "non-existent.{ok_zone_name}" Does Not Exist: cannot delete a record that does not exist.']) assert_failed_change_in_error_response(response[11], input_name=rs_delete_dummy_fqdn, change_type="DeleteRecordSet", - error_messages=['User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes.']) + error_messages=[f'User \"ok\" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes.']) assert_failed_change_in_error_response(response[12], input_name=rs_update_dummy_fqdn, change_type="DeleteRecordSet", - error_messages=['User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes.']) + error_messages=[f'User \"ok\" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes.']) assert_failed_change_in_error_response(response[13], input_name=rs_update_dummy_fqdn, ttl=300, - error_messages=['User \"ok\" is not authorized. 
Contact zone owner group: dummy-group at test@test.com to make DNS changes.']) + error_messages=[f'User \"ok\" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes.']) assert_failed_change_in_error_response(response[14], input_name=rs_update_dummy_with_owner_fqdn, change_type="DeleteRecordSet", - error_messages=['User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes.']) + error_messages=[f'User \"ok\" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes.']) assert_failed_change_in_error_response(response[15], input_name=rs_update_dummy_with_owner_fqdn, ttl=300, - error_messages=['User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes.']) + error_messages=[f'User \"ok\" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes.']) finally: # Clean up updates - dummy_deletes = [rs for rs in to_delete if rs['zone']['id'] == dummy_zone['id']] - ok_deletes = [rs for rs in to_delete if rs['zone']['id'] != dummy_zone['id']] + dummy_deletes = [rs for rs in to_delete if rs["zone"]["id"] == dummy_zone["id"]] + ok_deletes = [rs for rs in to_delete if rs["zone"]["id"] != dummy_zone["id"]] clear_recordset_list(dummy_deletes, dummy_client) clear_recordset_list(ok_deletes, ok_client) - dummy_client.delete_group(group_to_delete['id'], status=200) + dummy_client.delete_group(group_to_delete["id"], status=200) def test_aaaa_recordtype_add_checks(shared_zone_test_context): @@ -1669,38 +1689,41 @@ def test_aaaa_recordtype_add_checks(shared_zone_test_context): Test all add validations performed on AAAA records submitted in batch changes """ client = shared_zone_test_context.ok_vinyldns_client + dummy_zone_name = shared_zone_test_context.dummy_zone["name"] + parent_zone_name = shared_zone_test_context.parent_zone["name"] + dummy_group_name = 
shared_zone_test_context.dummy_group["name"] existing_aaaa_name = generate_record_name() - existing_aaaa_fqdn = existing_aaaa_name + "." + shared_zone_test_context.parent_zone['name'] - existing_aaaa = get_recordset_json(shared_zone_test_context.parent_zone, existing_aaaa_name, "AAAA", - [{"address": "1::1"}], 100) + existing_aaaa_fqdn = existing_aaaa_name + "." + shared_zone_test_context.parent_zone["name"] + existing_aaaa = create_recordset(shared_zone_test_context.parent_zone, existing_aaaa_name, "AAAA", + [{"address": "1::1"}], 100) existing_cname_name = generate_record_name() - existing_cname_fqdn = existing_cname_name + "." + shared_zone_test_context.parent_zone['name'] - existing_cname = get_recordset_json(shared_zone_test_context.parent_zone, existing_cname_name, "CNAME", - [{"cname": "cname.data."}], 100) + existing_cname_fqdn = existing_cname_name + "." + shared_zone_test_context.parent_zone["name"] + existing_cname = create_recordset(shared_zone_test_context.parent_zone, existing_cname_name, "CNAME", + [{"cname": "cname.data."}], 100) good_record_name = generate_record_name() - good_record_fqdn = good_record_name + "." + shared_zone_test_context.parent_zone['name'] + good_record_fqdn = good_record_name + "." 
+ shared_zone_test_context.parent_zone["name"] batch_change_input = { "changes": [ # valid changes get_change_A_AAAA_json(good_record_fqdn, record_type="AAAA", address="1::1"), # input validation failures - get_change_A_AAAA_json("bad-ttl-and-invalid-name$.parent.com.", ttl=29, record_type="AAAA", address="1::1"), + get_change_A_AAAA_json(f"bad-ttl-and-invalid-name$.{parent_zone_name}", ttl=29, record_type="AAAA", address="1::1"), get_change_A_AAAA_json("reverse-zone.1.2.3.ip6.arpa.", record_type="AAAA", address="1::1"), # zone discovery failures - get_change_A_AAAA_json("no.subzone.parent.com.", record_type="AAAA", address="1::1"), + get_change_A_AAAA_json(f"no.subzone.{parent_zone_name}", record_type="AAAA", address="1::1"), get_change_A_AAAA_json("no.zone.at.all.", record_type="AAAA", address="1::1"), # context validation failures - get_change_CNAME_json("cname-duplicate.parent.com."), - get_change_A_AAAA_json("cname-duplicate.parent.com.", record_type="AAAA", address="1::1"), + get_change_CNAME_json(f"cname-duplicate.{parent_zone_name}"), + get_change_A_AAAA_json(f"cname-duplicate.{parent_zone_name}", record_type="AAAA", address="1::1"), get_change_A_AAAA_json(existing_aaaa_fqdn, record_type="AAAA", address="1::1"), get_change_A_AAAA_json(existing_cname_fqdn, record_type="AAAA", address="1::1"), - get_change_A_AAAA_json("user-add-unauthorized.dummy.", record_type="AAAA", address="1::1") + get_change_A_AAAA_json(f"user-add-unauthorized.{dummy_zone_name}", record_type="AAAA", address="1::1") ] } @@ -1709,7 +1732,7 @@ def test_aaaa_recordtype_add_checks(shared_zone_test_context): try: for create_json in to_create: create_result = client.create_recordset(create_json, status=202) - to_delete.append(client.wait_until_recordset_change_status(create_result, 'Complete')) + to_delete.append(client.wait_until_recordset_change_status(create_result, "Complete")) response = client.create_batch_change(batch_change_input, status=400) @@ -1718,45 +1741,44 @@ def 
test_aaaa_recordtype_add_checks(shared_zone_test_context): record_type="AAAA", record_data="1::1") # ttl, domain name, reverse zone input validations - assert_failed_change_in_error_response(response[1], input_name="bad-ttl-and-invalid-name$.parent.com.", ttl=29, + assert_failed_change_in_error_response(response[1], input_name=f"bad-ttl-and-invalid-name$.{parent_zone_name}", ttl=29, record_type="AAAA", record_data="1::1", error_messages=[ 'Invalid TTL: "29", must be a number between 30 and 2147483647.', - 'Invalid domain name: "bad-ttl-and-invalid-name$.parent.com.", ' - 'valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) + f'Invalid domain name: "bad-ttl-and-invalid-name$.{parent_zone_name}", ' + "valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot."]) assert_failed_change_in_error_response(response[2], input_name="reverse-zone.1.2.3.ip6.arpa.", record_type="AAAA", record_data="1::1", error_messages=[ "Invalid Record Type In Reverse Zone: record with name \"reverse-zone.1.2.3.ip6.arpa.\" and type \"AAAA\" is not allowed in a reverse zone."]) # zone discovery failures - assert_failed_change_in_error_response(response[3], input_name="no.subzone.parent.com.", record_type="AAAA", + assert_failed_change_in_error_response(response[3], input_name=f"no.subzone.{parent_zone_name}", record_type="AAAA", record_data="1::1", error_messages=[ - 'Zone Discovery Failed: zone for \"no.subzone.parent.com.\" does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS.']) + f'Zone Discovery Failed: zone for \"no.subzone.{parent_zone_name}\" does not exist in VinylDNS. 
If zone exists, then it must be connected to in VinylDNS.']) assert_failed_change_in_error_response(response[4], input_name="no.zone.at.all.", record_type="AAAA", record_data="1::1", error_messages=[ "Zone Discovery Failed: zone for \"no.zone.at.all.\" does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS."]) # context validations: duplicate name failure (always on the cname), conflicting recordsets, unauthorized error - assert_failed_change_in_error_response(response[5], input_name="cname-duplicate.parent.com.", + assert_failed_change_in_error_response(response[5], input_name=f"cname-duplicate.{parent_zone_name}", record_type="CNAME", record_data="test.com.", error_messages=[ - "Record Name \"cname-duplicate.parent.com.\" Not Unique In Batch Change: cannot have multiple \"CNAME\" records with the same name."]) - assert_successful_change_in_error_response(response[6], input_name="cname-duplicate.parent.com.", + f"Record Name \"cname-duplicate.{parent_zone_name}\" Not Unique In Batch Change: cannot have multiple \"CNAME\" records with the same name."]) + assert_successful_change_in_error_response(response[6], input_name=f"cname-duplicate.{parent_zone_name}", record_type="AAAA", record_data="1::1") assert_failed_change_in_error_response(response[7], input_name=existing_aaaa_fqdn, record_type="AAAA", record_data="1::1", - error_messages=[ - "Record \"" + existing_aaaa_fqdn+ "\" Already Exists: cannot add an existing record; to update it, issue a DeleteRecordSet then an Add."]) + error_messages=[f"Record \"{existing_aaaa_fqdn}\" Already Exists: cannot add an existing record; to update it, issue a DeleteRecordSet then an Add."]) assert_failed_change_in_error_response(response[8], input_name=existing_cname_fqdn, record_type="AAAA", record_data="1::1", error_messages=[ - "CNAME Conflict: CNAME record names must be unique. 
Existing record with name \"" + existing_cname_fqdn+ "\" and type \"CNAME\" conflicts with this record."]) - assert_failed_change_in_error_response(response[9], input_name="user-add-unauthorized.dummy.", + f"CNAME Conflict: CNAME record names must be unique. Existing record with name \"{existing_cname_fqdn}\" and type \"CNAME\" conflicts with this record."]) + assert_failed_change_in_error_response(response[9], input_name=f"user-add-unauthorized.{dummy_zone_name}", record_type="AAAA", record_data="1::1", - error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes."]) + error_messages=[f"User \"ok\" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes."]) finally: clear_recordset_list(to_delete, client) @@ -1770,23 +1792,26 @@ def test_aaaa_recordtype_update_delete_checks(shared_zone_test_context): dummy_client = shared_zone_test_context.dummy_vinyldns_client ok_zone = shared_zone_test_context.ok_zone dummy_zone = shared_zone_test_context.dummy_zone + ok_zone_name = shared_zone_test_context.ok_zone["name"] + dummy_zone_name = shared_zone_test_context.dummy_zone["name"] + dummy_group_name = shared_zone_test_context.dummy_group["name"] rs_delete_name = generate_record_name() - rs_delete_fqdn = rs_delete_name + ".ok." - rs_delete_ok = get_recordset_json(ok_zone, rs_delete_name, "AAAA", [{"address": "1::4:5:6:7:8"}], 200) + rs_delete_fqdn = rs_delete_name + f".{ok_zone_name}" + rs_delete_ok = create_recordset(ok_zone, rs_delete_name, "AAAA", [{"address": "1::4:5:6:7:8"}], 200) rs_update_name = generate_record_name() - rs_update_fqdn = rs_update_name + ".ok." 
- rs_update_ok = get_recordset_json(ok_zone, rs_update_name, "AAAA", [{"address": "1:1:1:1:1:1:1:1"}], 200) + rs_update_fqdn = rs_update_name + f".{ok_zone_name}" + rs_update_ok = create_recordset(ok_zone, rs_update_name, "AAAA", [{"address": "1:1:1:1:1:1:1:1"}], 200) rs_delete_dummy_name = generate_record_name() - rs_delete_dummy_fqdn = rs_delete_dummy_name + ".dummy." - rs_delete_dummy = get_recordset_json(dummy_zone, rs_delete_dummy_name, "AAAA", [{"address": "1::1"}], 200) + rs_delete_dummy_fqdn = rs_delete_dummy_name + f".{dummy_zone_name}" + rs_delete_dummy = create_recordset(dummy_zone, rs_delete_dummy_name, "AAAA", [{"address": "1::1"}], 200) rs_update_dummy_name = generate_record_name() - rs_update_dummy_fqdn = rs_update_dummy_name + ".dummy." - rs_update_dummy = get_recordset_json(dummy_zone, rs_update_dummy_name, "AAAA", [{"address": "1:2:3:4:5:6:7:8"}], - 200) + rs_update_dummy_fqdn = rs_update_dummy_name + f".{dummy_zone_name}" + rs_update_dummy = create_recordset(dummy_zone, rs_update_dummy_name, "AAAA", [{"address": "1:2:3:4:5:6:7:8"}], + 200) batch_change_input = { "comments": "this is optional", @@ -1797,20 +1822,20 @@ def test_aaaa_recordtype_update_delete_checks(shared_zone_test_context): get_change_A_AAAA_json(rs_update_fqdn, record_type="AAAA", change_type="DeleteRecordSet"), # input validations failures - get_change_A_AAAA_json("invalid-name$.ok.", record_type="AAAA", change_type="DeleteRecordSet"), + get_change_A_AAAA_json(f"invalid-name$.{ok_zone_name}", record_type="AAAA", change_type="DeleteRecordSet"), get_change_A_AAAA_json("reverse.zone.in-addr.arpa.", record_type="AAAA", change_type="DeleteRecordSet"), - get_change_A_AAAA_json("bad-ttl-and-invalid-name$-update.ok.", record_type="AAAA", + get_change_A_AAAA_json(f"bad-ttl-and-invalid-name$-update.{ok_zone_name}", record_type="AAAA", change_type="DeleteRecordSet"), - get_change_A_AAAA_json("bad-ttl-and-invalid-name$-update.ok.", ttl=29, record_type="AAAA", + 
get_change_A_AAAA_json(f"bad-ttl-and-invalid-name$-update.{ok_zone_name}", ttl=29, record_type="AAAA", address="1:2:3:4:5:6:7:8"), # zone discovery failure get_change_A_AAAA_json("no.zone.at.all.", record_type="AAAA", change_type="DeleteRecordSet"), # context validation failures - get_change_A_AAAA_json("delete-nonexistent.ok.", record_type="AAAA", change_type="DeleteRecordSet"), - get_change_A_AAAA_json("update-nonexistent.ok.", record_type="AAAA", change_type="DeleteRecordSet"), - get_change_A_AAAA_json("update-nonexistent.ok.", record_type="AAAA", address="1::1"), + get_change_A_AAAA_json(f"delete-nonexistent.{ok_zone_name}", record_type="AAAA", change_type="DeleteRecordSet"), + get_change_A_AAAA_json(f"update-nonexistent.{ok_zone_name}", record_type="AAAA", change_type="DeleteRecordSet"), + get_change_A_AAAA_json(f"update-nonexistent.{ok_zone_name}", record_type="AAAA", address="1::1"), get_change_A_AAAA_json(rs_delete_dummy_fqdn, record_type="AAAA", change_type="DeleteRecordSet"), get_change_A_AAAA_json(rs_update_dummy_fqdn, record_type="AAAA", address="1::1"), get_change_A_AAAA_json(rs_update_dummy_fqdn, record_type="AAAA", change_type="DeleteRecordSet") @@ -1822,16 +1847,16 @@ def test_aaaa_recordtype_update_delete_checks(shared_zone_test_context): try: for rs in to_create: - if rs['zoneId'] == dummy_zone['id']: + if rs["zoneId"] == dummy_zone["id"]: create_client = dummy_client else: create_client = ok_client create_rs = create_client.create_recordset(rs, status=202) - to_delete.append(create_client.wait_until_recordset_change_status(create_rs, 'Complete')) + to_delete.append(create_client.wait_until_recordset_change_status(create_rs, "Complete")) # Confirm that record set doesn't already exist - ok_client.get_recordset(ok_zone['id'], 'delete-nonexistent', status=404) + ok_client.get_recordset(ok_zone["id"], "delete-nonexistent", status=404) response = ok_client.create_batch_change(batch_change_input, status=400) @@ -1844,55 +1869,55 @@ def 
test_aaaa_recordtype_update_delete_checks(shared_zone_test_context): record_data=None, change_type="DeleteRecordSet") # input validations failures: invalid input name, reverse zone error, invalid ttl - assert_failed_change_in_error_response(response[3], input_name="invalid-name$.ok.", record_type="AAAA", + assert_failed_change_in_error_response(response[3], input_name=f"invalid-name$.{ok_zone_name}", record_type="AAAA", record_data=None, change_type="DeleteRecordSet", error_messages=[ - 'Invalid domain name: "invalid-name$.ok.", valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) + f'Invalid domain name: "invalid-name$.{ok_zone_name}", ' + f'valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) assert_failed_change_in_error_response(response[4], input_name="reverse.zone.in-addr.arpa.", record_type="AAAA", record_data=None, change_type="DeleteRecordSet", error_messages=[ "Invalid Record Type In Reverse Zone: record with name \"reverse.zone.in-addr.arpa.\" and type \"AAAA\" is not allowed in a reverse zone."]) - assert_failed_change_in_error_response(response[5], input_name="bad-ttl-and-invalid-name$-update.ok.", + assert_failed_change_in_error_response(response[5], input_name=f"bad-ttl-and-invalid-name$-update.{ok_zone_name}", record_type="AAAA", record_data=None, change_type="DeleteRecordSet", error_messages=[ - 'Invalid domain name: "bad-ttl-and-invalid-name$-update.ok.", valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) - assert_failed_change_in_error_response(response[6], input_name="bad-ttl-and-invalid-name$-update.ok.", ttl=29, + f'Invalid domain name: "bad-ttl-and-invalid-name$-update.{ok_zone_name}", ' + f'valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) + 
assert_failed_change_in_error_response(response[6], input_name=f"bad-ttl-and-invalid-name$-update.{ok_zone_name}", ttl=29, record_type="AAAA", record_data="1:2:3:4:5:6:7:8", error_messages=[ 'Invalid TTL: "29", must be a number between 30 and 2147483647.', - 'Invalid domain name: "bad-ttl-and-invalid-name$-update.ok.", valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) + f'Invalid domain name: "bad-ttl-and-invalid-name$-update.{ok_zone_name}", ' + f'valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) # zone discovery failure assert_failed_change_in_error_response(response[7], input_name="no.zone.at.all.", record_type="AAAA", record_data=None, change_type="DeleteRecordSet", - error_messages=[ - "Zone Discovery Failed: zone for \"no.zone.at.all.\" does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS."]) + error_messages=["Zone Discovery Failed: zone for \"no.zone.at.all.\" does not exist in VinylDNS. 
" + "If zone exists, then it must be connected to in VinylDNS."]) # context validation failures: record does not exist, not authorized - assert_failed_change_in_error_response(response[8], input_name="delete-nonexistent.ok.", record_type="AAAA", + assert_failed_change_in_error_response(response[8], input_name=f"delete-nonexistent.{ok_zone_name}", record_type="AAAA", record_data=None, change_type="DeleteRecordSet", - error_messages=[ - "Record \"delete-nonexistent.ok.\" Does Not Exist: cannot delete a record that does not exist."]) - assert_failed_change_in_error_response(response[9], input_name="update-nonexistent.ok.", record_type="AAAA", + error_messages=[f"Record \"delete-nonexistent.{ok_zone_name}\" Does Not Exist: cannot delete a record that does not exist."]) + assert_failed_change_in_error_response(response[9], input_name=f"update-nonexistent.{ok_zone_name}", record_type="AAAA", record_data=None, change_type="DeleteRecordSet", - error_messages=[ - "Record \"update-nonexistent.ok.\" Does Not Exist: cannot delete a record that does not exist."]) - assert_successful_change_in_error_response(response[10], input_name="update-nonexistent.ok.", - record_type="AAAA", record_data="1::1") + error_messages=[f"Record \"update-nonexistent.{ok_zone_name}\" Does Not Exist: cannot delete a record that does not exist."]) + assert_successful_change_in_error_response(response[10], input_name=f"update-nonexistent.{ok_zone_name}", record_type="AAAA", record_data="1::1") assert_failed_change_in_error_response(response[11], input_name=rs_delete_dummy_fqdn, record_type="AAAA", record_data=None, change_type="DeleteRecordSet", - error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes."]) + error_messages=[f"User \"ok\" is not authorized. 
Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes."]) assert_failed_change_in_error_response(response[12], input_name=rs_update_dummy_fqdn, record_type="AAAA", record_data="1::1", - error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes."]) + error_messages=[f"User \"ok\" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes."]) assert_failed_change_in_error_response(response[13], input_name=rs_update_dummy_fqdn, record_type="AAAA", record_data=None, change_type="DeleteRecordSet", - error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes."]) + error_messages=[f"User \"ok\" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes."]) finally: # Clean up updates - dummy_deletes = [rs for rs in to_delete if rs['zone']['id'] == dummy_zone['id']] - ok_deletes = [rs for rs in to_delete if rs['zone']['id'] != dummy_zone['id']] + dummy_deletes = [rs for rs in to_delete if rs["zone"]["id"] == dummy_zone["id"]] + ok_deletes = [rs for rs in to_delete if rs["zone"]["id"] != dummy_zone["id"]] clear_recordset_list(dummy_deletes, dummy_client) clear_recordset_list(ok_deletes, ok_client) @@ -1903,31 +1928,38 @@ def test_cname_recordtype_add_checks(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client ok_zone = shared_zone_test_context.ok_zone + ok_zone_name = shared_zone_test_context.ok_zone["name"] + dummy_zone_name = shared_zone_test_context.dummy_zone["name"] + dummy_group_name = shared_zone_test_context.dummy_group["name"] + ip4_prefix = shared_zone_test_context.ip4_classless_prefix + ip4_zone_name = shared_zone_test_context.classless_base_zone["name"] + ip4_reverse_zone_name = shared_zone_test_context.ip4_reverse_zone["name"] + parent_zone_name = shared_zone_test_context.parent_zone["name"] 
existing_forward_name = generate_record_name() - existing_forward_fqdn = existing_forward_name + "." + shared_zone_test_context.parent_zone['name'] - existing_forward = get_recordset_json(shared_zone_test_context.parent_zone, existing_forward_name, "A", - [{"address": "1.2.3.4"}], 100) + existing_forward_fqdn = existing_forward_name + "." + shared_zone_test_context.parent_zone["name"] + existing_forward = create_recordset(shared_zone_test_context.parent_zone, existing_forward_name, "A", + [{"address": "1.2.3.4"}], 100) - existing_reverse_fqdn = "0." + shared_zone_test_context.classless_base_zone['name'] - existing_reverse = get_recordset_json(shared_zone_test_context.classless_base_zone, "0", "PTR", - [{"ptrdname": "test.com. "}], 100) + existing_reverse_fqdn = "0." + shared_zone_test_context.classless_base_zone["name"] + existing_reverse = create_recordset(shared_zone_test_context.classless_base_zone, "0", "PTR", + [{"ptrdname": "test.com. "}], 100) existing_cname_name = generate_record_name() - existing_cname_fqdn = existing_cname_name + "." + shared_zone_test_context.parent_zone['name'] - existing_cname = get_recordset_json(shared_zone_test_context.parent_zone, existing_cname_name, "CNAME", - [{"cname": "cname.data. "}], 100) + existing_cname_fqdn = existing_cname_name + "." + shared_zone_test_context.parent_zone["name"] + existing_cname = create_recordset(shared_zone_test_context.parent_zone, existing_cname_name, "CNAME", + [{"cname": "cname.data. "}], 100) rs_a_to_cname_ok_name = generate_record_name() - rs_a_to_cname_ok_fqdn = rs_a_to_cname_ok_name + ".ok." - rs_a_to_cname_ok = get_recordset_json(ok_zone, rs_a_to_cname_ok_name, "A", [{'address': '1.1.1.1'}]) + rs_a_to_cname_ok_fqdn = rs_a_to_cname_ok_name + f".{ok_zone_name}" + rs_a_to_cname_ok = create_recordset(ok_zone, rs_a_to_cname_ok_name, "A", [{"address": "1.1.1.1"}]) rs_cname_to_A_ok_name = generate_record_name() - rs_cname_to_A_ok_fqdn = rs_cname_to_A_ok_name + ".ok." 
- rs_cname_to_A_ok = get_recordset_json(ok_zone, rs_cname_to_A_ok_name, "CNAME", [{'cname': 'test.com.'}]) + rs_cname_to_A_ok_fqdn = rs_cname_to_A_ok_name + f".{ok_zone_name}" + rs_cname_to_A_ok = create_recordset(ok_zone, rs_cname_to_A_ok_name, "CNAME", [{"cname": "test.com."}]) - forward_fqdn = generate_record_name("parent.com.") - reverse_fqdn = generate_record_name("10.10.in-addr.arpa.") + forward_fqdn = generate_record_name(parent_zone_name) + reverse_fqdn = generate_record_name(ip4_reverse_zone_name) batch_change_input = { "changes": [ @@ -1942,23 +1974,23 @@ def test_cname_recordtype_add_checks(shared_zone_test_context): get_change_CNAME_json(rs_cname_to_A_ok_fqdn, change_type="DeleteRecordSet"), # input validations failures - get_change_CNAME_json("bad-ttl-and-invalid-name$.parent.com.", ttl=29, cname="also$bad.name"), + get_change_CNAME_json(f"bad-ttl-and-invalid-name$.{parent_zone_name}", ttl=29, cname="also$bad.name"), # zone discovery failure get_change_CNAME_json("no.zone.com."), # cant be apex - get_change_CNAME_json("parent.com."), + get_change_CNAME_json(parent_zone_name), # context validation failures - get_change_PTR_json("192.0.2.15"), - get_change_CNAME_json("15.2.0.192.in-addr.arpa.", cname="duplicate.other.type.within.batch."), - get_change_CNAME_json("cname-duplicate.parent.com."), - get_change_CNAME_json("cname-duplicate.parent.com.", cname="duplicate.cname.type.within.batch."), + get_change_PTR_json(f"{ip4_prefix}.15"), + get_change_CNAME_json(f"15.{ip4_zone_name}", cname="duplicate.other.type.within.batch."), + get_change_CNAME_json(f"cname-duplicate.{parent_zone_name}"), + get_change_CNAME_json(f"cname-duplicate.{parent_zone_name}", cname="duplicate.cname.type.within.batch."), get_change_CNAME_json(existing_forward_fqdn), get_change_CNAME_json(existing_cname_fqdn), - get_change_CNAME_json("0.2.0.192.in-addr.arpa.", cname="duplicate.in.db."), - get_change_CNAME_json("user-add-unauthorized.dummy.") + 
get_change_CNAME_json(f"0.{ip4_zone_name}", cname="duplicate.in.db."), + get_change_CNAME_json(f"user-add-unauthorized.{dummy_zone_name}") ] } @@ -1967,7 +1999,7 @@ def test_cname_recordtype_add_checks(shared_zone_test_context): try: for create_json in to_create: create_result = client.create_recordset(create_json, status=202) - to_delete.append(client.wait_until_recordset_change_status(create_result, 'Complete')) + to_delete.append(client.wait_until_recordset_change_status(create_result, "Complete")) response = client.create_batch_change(batch_change_input, status=400) @@ -1987,62 +2019,59 @@ def test_cname_recordtype_add_checks(shared_zone_test_context): change_type="DeleteRecordSet") # ttl, domain name, data - assert_failed_change_in_error_response(response[6], input_name="bad-ttl-and-invalid-name$.parent.com.", ttl=29, + assert_failed_change_in_error_response(response[6], input_name=f"bad-ttl-and-invalid-name$.{parent_zone_name}", ttl=29, record_type="CNAME", record_data="also$bad.name.", error_messages=[ 'Invalid TTL: "29", must be a number between 30 and 2147483647.', - 'Invalid domain name: "bad-ttl-and-invalid-name$.parent.com.", ' - 'valid domain names must be letters, numbers, underscores, and hyphens, ' - 'joined by dots, and terminated with a dot.', - 'Invalid domain name: "also$bad.name.", ' - 'valid domain names must be letters, numbers, underscores, and hyphens, ' - 'joined by dots, and terminated with a dot.']) + f'Invalid domain name: "bad-ttl-and-invalid-name$.{parent_zone_name}", ' + "valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.", + 'Invalid domain name: "also$bad.name.", valid domain names must be letters, numbers, underscores, and hyphens, ' + "joined by dots, and terminated with a dot."]) # zone discovery failure assert_failed_change_in_error_response(response[7], input_name="no.zone.com.", record_type="CNAME", record_data="test.com.", - error_messages=[ - "Zone Discovery 
Failed: zone for \"no.zone.com.\" does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS."]) + error_messages=["Zone Discovery Failed: zone for \"no.zone.com.\" does not exist in VinylDNS. " + "If zone exists, then it must be connected to in VinylDNS."]) # CNAME cant be apex - assert_failed_change_in_error_response(response[8], input_name="parent.com.", record_type="CNAME", + assert_failed_change_in_error_response(response[8], input_name=parent_zone_name, record_type="CNAME", record_data="test.com.", - error_messages=[ - "CNAME cannot be the same name as zone \"parent.com.\"."]) + error_messages=[f"CNAME cannot be the same name as zone \"{parent_zone_name}\"."]) # context validations: duplicates in batch - assert_successful_change_in_error_response(response[9], input_name="192.0.2.15", record_type="PTR", + assert_successful_change_in_error_response(response[9], input_name=f"{ip4_prefix}.15", record_type="PTR", record_data="test.com.") - assert_failed_change_in_error_response(response[10], input_name="15.2.0.192.in-addr.arpa.", record_type="CNAME", + assert_failed_change_in_error_response(response[10], input_name=f"15.{ip4_zone_name}", record_type="CNAME", record_data="duplicate.other.type.within.batch.", - error_messages=[ - "Record Name \"15.2.0.192.in-addr.arpa.\" Not Unique In Batch Change: cannot have multiple \"CNAME\" records with the same name."]) - - assert_failed_change_in_error_response(response[11], input_name="cname-duplicate.parent.com.", + error_messages=[f"Record Name \"15.{ip4_zone_name}\" Not Unique In Batch Change: " + f"cannot have multiple \"CNAME\" records with the same name."]) + assert_failed_change_in_error_response(response[11], input_name=f"cname-duplicate.{parent_zone_name}", record_type="CNAME", record_data="test.com.", - error_messages=[ - "Record Name \"cname-duplicate.parent.com.\" Not Unique In Batch Change: cannot have multiple \"CNAME\" records with the same name."]) - 
assert_failed_change_in_error_response(response[12], input_name="cname-duplicate.parent.com.", + error_messages=[f"Record Name \"cname-duplicate.{parent_zone_name}\" Not Unique In Batch Change: " + f"cannot have multiple \"CNAME\" records with the same name."]) + assert_failed_change_in_error_response(response[12], input_name=f"cname-duplicate.{parent_zone_name}", record_type="CNAME", record_data="duplicate.cname.type.within.batch.", - error_messages=[ - "Record Name \"cname-duplicate.parent.com.\" Not Unique In Batch Change: cannot have multiple \"CNAME\" records with the same name."]) + error_messages=[f"Record Name \"cname-duplicate.{parent_zone_name}\" Not Unique In Batch Change: " + f"cannot have multiple \"CNAME\" records with the same name."]) # context validations: existing recordsets pre-request, unauthorized, failure on duplicate add assert_failed_change_in_error_response(response[13], input_name=existing_forward_fqdn, record_type="CNAME", record_data="test.com.", - error_messages=[ - "CNAME Conflict: CNAME record names must be unique. Existing record with name \"" + existing_forward_fqdn + "\" and type \"A\" conflicts with this record."]) + error_messages=[f"CNAME Conflict: CNAME record names must be unique. " + f"Existing record with name \"{existing_forward_fqdn}\" and type \"A\" conflicts with this record."]) assert_failed_change_in_error_response(response[14], input_name=existing_cname_fqdn, record_type="CNAME", record_data="test.com.", - error_messages=[ - "Record \"" + existing_cname_fqdn + "\" Already Exists: cannot add an existing record; to update it, issue a DeleteRecordSet then an Add.", - "CNAME Conflict: CNAME record names must be unique. 
Existing record with name \"" + existing_cname_fqdn + "\" and type \"CNAME\" conflicts with this record."]) + error_messages=[f"Record \"{existing_cname_fqdn}\" Already Exists: cannot add an existing record; to update it, " + f"issue a DeleteRecordSet then an Add.", + f"CNAME Conflict: CNAME record names must be unique. " + f"Existing record with name \"{existing_cname_fqdn}\" and type \"CNAME\" conflicts with this record."]) assert_failed_change_in_error_response(response[15], input_name=existing_reverse_fqdn, record_type="CNAME", record_data="duplicate.in.db.", - error_messages=[ - "CNAME Conflict: CNAME record names must be unique. Existing record with name \"" + existing_reverse_fqdn + "\" and type \"PTR\" conflicts with this record."]) - assert_failed_change_in_error_response(response[16], input_name="user-add-unauthorized.dummy.", + error_messages=["CNAME Conflict: CNAME record names must be unique. " + f"Existing record with name \"{existing_reverse_fqdn}\" and type \"PTR\" conflicts with this record."]) + assert_failed_change_in_error_response(response[16], input_name=f"user-add-unauthorized.{dummy_zone_name}", record_type="CNAME", record_data="test.com.", - error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes."]) + error_messages=[f"User \"ok\" is not authorized. 
Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes."]) finally: clear_recordset_list(to_delete, client) @@ -2057,30 +2086,35 @@ def test_cname_recordtype_update_delete_checks(shared_zone_test_context): ok_zone = shared_zone_test_context.ok_zone dummy_zone = shared_zone_test_context.dummy_zone classless_base_zone = shared_zone_test_context.classless_base_zone + dummy_zone_name = shared_zone_test_context.dummy_zone["name"] + dummy_group_name = shared_zone_test_context.dummy_group["name"] + ok_zone_name = shared_zone_test_context.ok_zone["name"] + ip4_zone_name = shared_zone_test_context.classless_base_zone["name"] + parent_zone_name = shared_zone_test_context.parent_zone["name"] - rs_delete_ok = get_recordset_json(ok_zone, "delete3", "CNAME", [{'cname': 'test.com.'}]) - rs_update_ok = get_recordset_json(ok_zone, "update3", "CNAME", [{'cname': 'test.com.'}]) - rs_delete_dummy = get_recordset_json(dummy_zone, "delete-unauthorized3", "CNAME", [{'cname': 'test.com.'}]) - rs_update_dummy = get_recordset_json(dummy_zone, "update-unauthorized3", "CNAME", [{'cname': 'test.com.'}]) - rs_delete_base = get_recordset_json(classless_base_zone, "200", "CNAME", - [{'cname': '200.192/30.2.0.192.in-addr.arpa.'}]) - rs_update_base = get_recordset_json(classless_base_zone, "201", "CNAME", - [{'cname': '201.192/30.2.0.192.in-addr.arpa.'}]) - rs_update_duplicate_add = get_recordset_json(shared_zone_test_context.parent_zone, "Existing-Cname2", "CNAME", - [{"cname": "cname.data. 
"}], 100) + rs_delete_ok = create_recordset(ok_zone, "delete3", "CNAME", [{"cname": "test.com."}]) + rs_update_ok = create_recordset(ok_zone, "update3", "CNAME", [{"cname": "test.com."}]) + rs_delete_dummy = create_recordset(dummy_zone, "delete-unauthorized3", "CNAME", [{"cname": "test.com."}]) + rs_update_dummy = create_recordset(dummy_zone, "update-unauthorized3", "CNAME", [{"cname": "test.com."}]) + rs_delete_base = create_recordset(classless_base_zone, "200", "CNAME", + [{"cname": f"200.192/30.{ip4_zone_name}"}]) + rs_update_base = create_recordset(classless_base_zone, "201", "CNAME", + [{"cname": f"201.192/30.{ip4_zone_name}"}]) + rs_update_duplicate_add = create_recordset(shared_zone_test_context.parent_zone, "Existing-Cname2", "CNAME", + [{"cname": "cname.data. "}], 100) batch_change_input = { "comments": "this is optional", "changes": [ # valid changes - forward zone - get_change_CNAME_json("delete3.ok.", change_type="DeleteRecordSet"), - get_change_CNAME_json("update3.ok.", change_type="DeleteRecordSet"), - get_change_CNAME_json("update3.ok.", ttl=300), + get_change_CNAME_json(f"delete3.{ok_zone_name}", change_type="DeleteRecordSet"), + get_change_CNAME_json(f"update3.{ok_zone_name}", change_type="DeleteRecordSet"), + get_change_CNAME_json(f"update3.{ok_zone_name}", ttl=300), # valid changes - reverse zone - get_change_CNAME_json("200.2.0.192.in-addr.arpa.", change_type="DeleteRecordSet"), - get_change_CNAME_json("201.2.0.192.in-addr.arpa.", change_type="DeleteRecordSet"), - get_change_CNAME_json("201.2.0.192.in-addr.arpa.", ttl=300), + get_change_CNAME_json(f"200.{ip4_zone_name}", change_type="DeleteRecordSet"), + get_change_CNAME_json(f"201.{ip4_zone_name}", change_type="DeleteRecordSet"), + get_change_CNAME_json(f"201.{ip4_zone_name}", ttl=300), # input validation failures get_change_CNAME_json("$invalid.host.name.", change_type="DeleteRecordSet"), @@ -2091,15 +2125,15 @@ def test_cname_recordtype_update_delete_checks(shared_zone_test_context): 
get_change_CNAME_json("zone.discovery.error.", change_type="DeleteRecordSet"), # context validation failures: record does not exist, not authorized, failure on update with multiple adds - get_change_CNAME_json("non-existent-delete.ok.", change_type="DeleteRecordSet"), - get_change_CNAME_json("non-existent-update.ok.", change_type="DeleteRecordSet"), - get_change_CNAME_json("non-existent-update.ok."), - get_change_CNAME_json("delete-unauthorized3.dummy.", change_type="DeleteRecordSet"), - get_change_CNAME_json("update-unauthorized3.dummy.", change_type="DeleteRecordSet"), - get_change_CNAME_json("update-unauthorized3.dummy.", ttl=300), - get_change_CNAME_json("existing-cname2.parent.com.", change_type="DeleteRecordSet"), - get_change_CNAME_json("existing-cname2.parent.com."), - get_change_CNAME_json("existing-cname2.parent.com.", cname="test2.com.") + get_change_CNAME_json(f"non-existent-delete.{ok_zone_name}", change_type="DeleteRecordSet"), + get_change_CNAME_json(f"non-existent-update.{ok_zone_name}", change_type="DeleteRecordSet"), + get_change_CNAME_json(f"non-existent-update.{ok_zone_name}"), + get_change_CNAME_json(f"delete-unauthorized3.{dummy_zone_name}", change_type="DeleteRecordSet"), + get_change_CNAME_json(f"update-unauthorized3.{dummy_zone_name}", change_type="DeleteRecordSet"), + get_change_CNAME_json(f"update-unauthorized3.{dummy_zone_name}", ttl=300), + get_change_CNAME_json(f"existing-cname2.{parent_zone_name}", change_type="DeleteRecordSet"), + get_change_CNAME_json(f"existing-cname2.{parent_zone_name}"), + get_change_CNAME_json(f"existing-cname2.{parent_zone_name}", cname="test2.com.") ] } @@ -2109,33 +2143,33 @@ def test_cname_recordtype_update_delete_checks(shared_zone_test_context): try: for rs in to_create: - if rs['zoneId'] == dummy_zone['id']: + if rs["zoneId"] == dummy_zone["id"]: create_client = dummy_client else: create_client = ok_client create_rs = create_client.create_recordset(rs, status=202) - 
to_delete.append(create_client.wait_until_recordset_change_status(create_rs, 'Complete')) + to_delete.append(create_client.wait_until_recordset_change_status(create_rs, "Complete")) # Confirm that record set doesn't already exist - ok_client.get_recordset(ok_zone['id'], 'non-existent', status=404) + ok_client.get_recordset(ok_zone["id"], "non-existent", status=404) response = ok_client.create_batch_change(batch_change_input, status=400) # valid changes - forward zone - assert_successful_change_in_error_response(response[0], input_name="delete3.ok.", record_type="CNAME", + assert_successful_change_in_error_response(response[0], input_name=f"delete3.{ok_zone_name}", record_type="CNAME", change_type="DeleteRecordSet") - assert_successful_change_in_error_response(response[1], input_name="update3.ok.", record_type="CNAME", + assert_successful_change_in_error_response(response[1], input_name=f"update3.{ok_zone_name}", record_type="CNAME", change_type="DeleteRecordSet") - assert_successful_change_in_error_response(response[2], input_name="update3.ok.", record_type="CNAME", ttl=300, + assert_successful_change_in_error_response(response[2], input_name=f"update3.{ok_zone_name}", record_type="CNAME", ttl=300, record_data="test.com.") # valid changes - reverse zone - assert_successful_change_in_error_response(response[3], input_name="200.2.0.192.in-addr.arpa.", + assert_successful_change_in_error_response(response[3], input_name=f"200.{ip4_zone_name}", record_type="CNAME", change_type="DeleteRecordSet") - assert_successful_change_in_error_response(response[4], input_name="201.2.0.192.in-addr.arpa.", + assert_successful_change_in_error_response(response[4], input_name=f"201.{ip4_zone_name}", record_type="CNAME", change_type="DeleteRecordSet") - assert_successful_change_in_error_response(response[5], input_name="201.2.0.192.in-addr.arpa.", + assert_successful_change_in_error_response(response[5], input_name=f"201.{ip4_zone_name}", record_type="CNAME", ttl=300, 
record_data="test.com.") # ttl, domain name, data @@ -2161,41 +2195,41 @@ def test_cname_recordtype_update_delete_checks(shared_zone_test_context): 'Zone Discovery Failed: zone for "zone.discovery.error." does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS.']) # context validation failures: record does not exist, not authorized - assert_failed_change_in_error_response(response[10], input_name="non-existent-delete.ok.", record_type="CNAME", + assert_failed_change_in_error_response(response[10], input_name=f"non-existent-delete.{ok_zone_name}", record_type="CNAME", change_type="DeleteRecordSet", error_messages=[ - 'Record "non-existent-delete.ok." Does Not Exist: cannot delete a record that does not exist.']) - assert_failed_change_in_error_response(response[11], input_name="non-existent-update.ok.", record_type="CNAME", + f'Record "non-existent-delete.{ok_zone_name}" Does Not Exist: cannot delete a record that does not exist.']) + assert_failed_change_in_error_response(response[11], input_name=f"non-existent-update.{ok_zone_name}", record_type="CNAME", change_type="DeleteRecordSet", error_messages=[ - 'Record "non-existent-update.ok." Does Not Exist: cannot delete a record that does not exist.']) - assert_successful_change_in_error_response(response[12], input_name="non-existent-update.ok.", + f'Record "non-existent-update.{ok_zone_name}" Does Not Exist: cannot delete a record that does not exist.']) + assert_successful_change_in_error_response(response[12], input_name=f"non-existent-update.{ok_zone_name}", record_type="CNAME", record_data="test.com.") - assert_failed_change_in_error_response(response[13], input_name="delete-unauthorized3.dummy.", + assert_failed_change_in_error_response(response[13], input_name=f"delete-unauthorized3.{dummy_zone_name}", record_type="CNAME", change_type="DeleteRecordSet", - error_messages=['User "ok" is not authorized. 
Contact zone owner group: dummy-group at test@test.com to make DNS changes.']) - assert_failed_change_in_error_response(response[14], input_name="update-unauthorized3.dummy.", + error_messages=[f'User "ok" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes.']) + assert_failed_change_in_error_response(response[14], input_name=f"update-unauthorized3.{dummy_zone_name}", record_type="CNAME", change_type="DeleteRecordSet", - error_messages=['User "ok" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes.']) - assert_failed_change_in_error_response(response[15], input_name="update-unauthorized3.dummy.", + error_messages=[f'User "ok" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes.']) + assert_failed_change_in_error_response(response[15], input_name=f"update-unauthorized3.{dummy_zone_name}", record_type="CNAME", ttl=300, record_data="test.com.", - error_messages=['User "ok" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes.']) - assert_successful_change_in_error_response(response[16], input_name="existing-cname2.parent.com.", + error_messages=[f'User "ok" is not authorized. 
Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes.']) + assert_successful_change_in_error_response(response[16], input_name=f"existing-cname2.{parent_zone_name}", record_type="CNAME", change_type="DeleteRecordSet") - assert_failed_change_in_error_response(response[17], input_name="existing-cname2.parent.com.", + assert_failed_change_in_error_response(response[17], input_name=f"existing-cname2.{parent_zone_name}", record_type="CNAME", record_data="test.com.", - error_messages=[ - "Record Name \"existing-cname2.parent.com.\" Not Unique In Batch Change: cannot have multiple \"CNAME\" records with the same name."]) - assert_failed_change_in_error_response(response[18], input_name="existing-cname2.parent.com.", + error_messages=[f"Record Name \"existing-cname2.{parent_zone_name}\" Not Unique In Batch Change: " + f"cannot have multiple \"CNAME\" records with the same name."]) + assert_failed_change_in_error_response(response[18], input_name=f"existing-cname2.{parent_zone_name}", record_type="CNAME", record_data="test2.com.", - error_messages=[ - "Record Name \"existing-cname2.parent.com.\" Not Unique In Batch Change: cannot have multiple \"CNAME\" records with the same name."]) + error_messages=[f"Record Name \"existing-cname2.{parent_zone_name}\" Not Unique In Batch Change: " + f"cannot have multiple \"CNAME\" records with the same name."]) finally: # Clean up updates - dummy_deletes = [rs for rs in to_delete if rs['zone']['id'] == dummy_zone['id']] - ok_deletes = [rs for rs in to_delete if rs['zone']['id'] != dummy_zone['id']] + dummy_deletes = [rs for rs in to_delete if rs["zone"]["id"] == dummy_zone["id"]] + ok_deletes = [rs for rs in to_delete if rs["zone"]["id"] != dummy_zone["id"]] clear_recordset_list(dummy_deletes, dummy_client) clear_recordset_list(ok_deletes, ok_client) @@ -2207,19 +2241,21 @@ def test_ptr_recordtype_auth_checks(shared_zone_test_context): """ dummy_client = shared_zone_test_context.dummy_vinyldns_client 
ok_client = shared_zone_test_context.ok_vinyldns_client + ip4_prefix = shared_zone_test_context.ip4_classless_prefix + ip6_prefix = shared_zone_test_context.ip6_prefix - no_auth_ipv4 = get_recordset_json(shared_zone_test_context.classless_base_zone, "25", "PTR", - [{"ptrdname": "ptrdname.data."}], 200) - no_auth_ipv6 = get_recordset_json(shared_zone_test_context.ip6_16_nibble_zone, "4.3.2.1.0.0.0.0.0.0.0.0.0.0.0.0", - "PTR", [{"ptrdname": "ptrdname.data."}], 200) + no_auth_ipv4 = create_recordset(shared_zone_test_context.classless_base_zone, "25", "PTR", + [{"ptrdname": "ptrdname.data."}], 200) + no_auth_ipv6 = create_recordset(shared_zone_test_context.ip6_16_nibble_zone, "4.3.2.1.0.0.0.0.0.0.0.0.0.0.0.0", + "PTR", [{"ptrdname": "ptrdname.data."}], 200) batch_change_input = { "changes": [ - get_change_PTR_json("192.0.2.5", ptrdname="not.authorized.ipv4.ptr.base."), - get_change_PTR_json("192.0.2.193", ptrdname="not.authorized.ipv4.ptr.classless.delegation."), - get_change_PTR_json("fd69:27cc:fe91:1000::1234", ptrdname="not.authorized.ipv6.ptr."), - get_change_PTR_json("192.0.2.25", change_type="DeleteRecordSet"), - get_change_PTR_json("fd69:27cc:fe91:1000::1234", change_type="DeleteRecordSet") + get_change_PTR_json(f"{ip4_prefix}.5", ptrdname="not.authorized.ipv4.ptr.base."), + get_change_PTR_json(f"{ip4_prefix}.193", ptrdname="not.authorized.ipv4.ptr.classless.delegation."), + get_change_PTR_json(f"{ip6_prefix}:1000::1234", ptrdname="not.authorized.ipv6.ptr."), + get_change_PTR_json(f"{ip4_prefix}.25", change_type="DeleteRecordSet"), + get_change_PTR_json(f"{ip6_prefix}:1000::1234", change_type="DeleteRecordSet") ] } @@ -2229,23 +2265,23 @@ def test_ptr_recordtype_auth_checks(shared_zone_test_context): try: for create_json in to_create: create_result = ok_client.create_recordset(create_json, status=202) - to_delete.append(ok_client.wait_until_recordset_change_status(create_result, 'Complete')) + 
to_delete.append(ok_client.wait_until_recordset_change_status(create_result, "Complete")) errors = dummy_client.create_batch_change(batch_change_input, status=400) - assert_failed_change_in_error_response(errors[0], input_name="192.0.2.5", record_type="PTR", + assert_failed_change_in_error_response(errors[0], input_name=f"{ip4_prefix}.5", record_type="PTR", record_data="not.authorized.ipv4.ptr.base.", error_messages=["User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com to make DNS changes."]) - assert_failed_change_in_error_response(errors[1], input_name="192.0.2.193", record_type="PTR", + assert_failed_change_in_error_response(errors[1], input_name=f"{ip4_prefix}.193", record_type="PTR", record_data="not.authorized.ipv4.ptr.classless.delegation.", error_messages=["User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com to make DNS changes."]) - assert_failed_change_in_error_response(errors[2], input_name="fd69:27cc:fe91:1000::1234", record_type="PTR", + assert_failed_change_in_error_response(errors[2], input_name=f"{ip6_prefix}:1000::1234", record_type="PTR", record_data="not.authorized.ipv6.ptr.", error_messages=["User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com to make DNS changes."]) - assert_failed_change_in_error_response(errors[3], input_name="192.0.2.25", record_type="PTR", record_data=None, + assert_failed_change_in_error_response(errors[3], input_name=f"{ip4_prefix}.25", record_type="PTR", record_data=None, change_type="DeleteRecordSet", error_messages=["User \"dummy\" is not authorized. 
Contact zone owner group: ok-group at test@test.com to make DNS changes."]) - assert_failed_change_in_error_response(errors[4], input_name="fd69:27cc:fe91:1000::1234", record_type="PTR", + assert_failed_change_in_error_response(errors[4], input_name=f"{ip6_prefix}:1000::1234", record_type="PTR", record_data=None, change_type="DeleteRecordSet", error_messages=["User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com to make DNS changes."]) finally: @@ -2258,35 +2294,37 @@ def test_ipv4_ptr_recordtype_add_checks(shared_zone_test_context): Perform all add, non-authorization validations performed on IPv4 PTR records submitted in batch changes """ client = shared_zone_test_context.ok_vinyldns_client + ip4_prefix = shared_zone_test_context.ip4_classless_prefix + ip4_zone_name = shared_zone_test_context.classless_base_zone["name"] - existing_ipv4 = get_recordset_json(shared_zone_test_context.classless_zone_delegation_zone, "193", "PTR", [{"ptrdname": "ptrdname.data."}]) - existing_cname = get_recordset_json(shared_zone_test_context.classless_base_zone, "199", "CNAME", [{"cname": "cname.data. "}], 300) + existing_ipv4 = create_recordset(shared_zone_test_context.classless_zone_delegation_zone, "193", "PTR", [{"ptrdname": "ptrdname.data."}]) + existing_cname = create_recordset(shared_zone_test_context.classless_base_zone, "199", "CNAME", [{"cname": "cname.data. 
"}], 300) batch_change_input = { "changes": [ # valid change - get_change_PTR_json("192.0.2.44", ptrdname="base.vinyldns"), - get_change_PTR_json("192.0.2.198", ptrdname="delegated.vinyldns"), + get_change_PTR_json(f"{ip4_prefix}.44", ptrdname="base.vinyldns"), + get_change_PTR_json(f"{ip4_prefix}.198", ptrdname="delegated.vinyldns"), # input validation failures get_change_PTR_json("invalidip.111."), get_change_PTR_json("4.5.6.7", ttl=29, ptrdname="-1.2.3.4"), # delegated and non-delegated PTR duplicate name checks - get_change_PTR_json("192.0.2.196"), # delegated zone - get_change_CNAME_json("196.2.0.192.in-addr.arpa"), # non-delegated zone - get_change_CNAME_json("196.192/30.2.0.192.in-addr.arpa"), # delegated zone + get_change_PTR_json(f"{ip4_prefix}.196"), # delegated zone + get_change_CNAME_json(f"196.{ip4_zone_name}"), # non-delegated zone + get_change_CNAME_json(f"196.192/30.{ip4_zone_name}"), # delegated zone - get_change_PTR_json("192.0.2.55"), # non-delegated zone - get_change_CNAME_json("55.2.0.192.in-addr.arpa"), # non-delegated zone - get_change_CNAME_json("55.192/30.2.0.192.in-addr.arpa"), # delegated zone + get_change_PTR_json(f"{ip4_prefix}.55"), # non-delegated zone + get_change_CNAME_json(f"55.{ip4_zone_name}"), # non-delegated zone + get_change_CNAME_json(f"55.192/30.{ip4_zone_name}"), # delegated zone # zone discovery failure - get_change_PTR_json("192.0.1.192"), + get_change_PTR_json(f"{ip4_prefix}.192"), # context validation failures - get_change_PTR_json("192.0.2.193", ptrdname="existing-ptr."), - get_change_PTR_json("192.0.2.199", ptrdname="existing-cname.") + get_change_PTR_json(f"{ip4_prefix}.193", ptrdname="existing-ptr."), + get_change_PTR_json(f"{ip4_prefix}.199", ptrdname="existing-cname.") ] } @@ -2294,16 +2332,16 @@ def test_ipv4_ptr_recordtype_add_checks(shared_zone_test_context): to_delete = [] try: # make sure 196 is cleared before continuing - delete_recordset_by_name(shared_zone_test_context.classless_zone_delegation_zone['id'], 
'196', client) + delete_recordset_by_name(shared_zone_test_context.classless_zone_delegation_zone["id"], "196", client) for create_json in to_create: create_result = client.create_recordset(create_json, status=202) - to_delete.append(client.wait_until_recordset_change_status(create_result, 'Complete')) + to_delete.append(client.wait_until_recordset_change_status(create_result, "Complete")) response = client.create_batch_change(batch_change_input, status=400) # successful changes - assert_successful_change_in_error_response(response[0], input_name="192.0.2.44", record_type="PTR", record_data="base.vinyldns.") - assert_successful_change_in_error_response(response[1], input_name="192.0.2.198", record_type="PTR", record_data="delegated.vinyldns.") + assert_successful_change_in_error_response(response[0], input_name=f"{ip4_prefix}.44", record_type="PTR", record_data="base.vinyldns.") + assert_successful_change_in_error_response(response[1], input_name=f"{ip4_prefix}.198", record_type="PTR", record_data="delegated.vinyldns.") # input validation failures: invalid ip, ttl, data assert_failed_change_in_error_response(response[2], input_name="invalidip.111.", record_type="PTR", record_data="test.com.", @@ -2311,27 +2349,28 @@ def test_ipv4_ptr_recordtype_add_checks(shared_zone_test_context): assert_failed_change_in_error_response(response[3], input_name="4.5.6.7", ttl=29, record_type="PTR", record_data="-1.2.3.4.", error_messages=['Invalid TTL: "29", must be a number between 30 and 2147483647.', 'Invalid domain name: "-1.2.3.4.", ' - 'valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) + "valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot."]) # delegated and non-delegated PTR duplicate name checks - assert_successful_change_in_error_response(response[4], input_name="192.0.2.196", record_type="PTR", record_data="test.com.") - 
assert_successful_change_in_error_response(response[5], input_name="196.2.0.192.in-addr.arpa.", record_type="CNAME", record_data="test.com.") - assert_failed_change_in_error_response(response[6], input_name="196.192/30.2.0.192.in-addr.arpa.", record_type="CNAME", record_data="test.com.", - error_messages=['Record Name "196.192/30.2.0.192.in-addr.arpa." Not Unique In Batch Change: cannot have multiple "CNAME" records with the same name.']) - assert_successful_change_in_error_response(response[7], input_name="192.0.2.55", record_type="PTR", record_data="test.com.") - assert_failed_change_in_error_response(response[8], input_name="55.2.0.192.in-addr.arpa.", record_type="CNAME", record_data="test.com.", - error_messages=['Record Name "55.2.0.192.in-addr.arpa." Not Unique In Batch Change: cannot have multiple "CNAME" records with the same name.']) - assert_successful_change_in_error_response(response[9], input_name="55.192/30.2.0.192.in-addr.arpa.", record_type="CNAME", record_data="test.com.") + assert_successful_change_in_error_response(response[4], input_name=f"{ip4_prefix}.196", record_type="PTR", record_data="test.com.") + assert_successful_change_in_error_response(response[5], input_name=f"196.{ip4_zone_name}", record_type="CNAME", record_data="test.com.") + assert_failed_change_in_error_response(response[6], input_name=f"196.192/30.{ip4_zone_name}", record_type="CNAME", record_data="test.com.", + error_messages=[f'Record Name "196.192/30.{ip4_zone_name}" Not Unique In Batch Change: cannot have multiple "CNAME" records with the same name.']) + assert_successful_change_in_error_response(response[7], input_name=f"{ip4_prefix}.55", record_type="PTR", record_data="test.com.") + assert_failed_change_in_error_response(response[8], input_name=f"55.{ip4_zone_name}", record_type="CNAME", record_data="test.com.", + error_messages=[f'Record Name "55.{ip4_zone_name}" Not Unique In Batch Change: cannot have multiple "CNAME" records with the same name.']) + 
assert_successful_change_in_error_response(response[9], input_name=f"55.192/30.{ip4_zone_name}", record_type="CNAME", record_data="test.com.")

         # zone discovery failure
         assert_failed_change_in_error_response(response[10], input_name="192.0.1.192", record_type="PTR", record_data="test.com.",
                                                error_messages=['Zone Discovery Failed: zone for "192.0.1.192" does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS.'])

         # context validations: existing cname recordset
-        assert_failed_change_in_error_response(response[11], input_name="192.0.2.193", record_type="PTR", record_data="existing-ptr.",
-                                               error_messages=['Record "192.0.2.193" Already Exists: cannot add an existing record; to update it, issue a DeleteRecordSet then an Add.'])
-        assert_failed_change_in_error_response(response[12], input_name="192.0.2.199", record_type="PTR", record_data="existing-cname.",
-                                               error_messages=['CNAME Conflict: CNAME record names must be unique. Existing record with name "192.0.2.199" and type "CNAME" conflicts with this record.'])
+        assert_failed_change_in_error_response(response[11], input_name=f"{ip4_prefix}.193", record_type="PTR", record_data="existing-ptr.",
+                                               error_messages=[f'Record "{ip4_prefix}.193" Already Exists: cannot add an existing record; to update it, issue a DeleteRecordSet then an Add.'])
+        assert_failed_change_in_error_response(response[12], input_name=f"{ip4_prefix}.199", record_type="PTR", record_data="existing-cname.",
+                                               error_messages=[
+                                                   f'CNAME Conflict: CNAME record names must be unique.
Existing record with name "{ip4_prefix}.199" and type "CNAME" conflicts with this record.'])
     finally:
         clear_recordset_list(to_delete, client)
@@ -2349,7 +2388,7 @@ def test_ipv4_ptr_recordtype_add_checks(shared_zone_test_context):
 #
 #     batch_change_input = {
 #         "changes": [
-#             get_change_PTR_json("192.0.2.1"),
+#             get_change_PTR_json(f"{ip4_prefix}.1"),
 #             # dummy change with too big TTL so ZD failure wont go to pending if enabled
 #             get_change_A_AAAA_json("this.change.will.fail.", ttl=99999999999, address="1.1.1.1")
 #         ]
@@ -2357,19 +2396,19 @@
 #     }
 #
 #     try:
 #         # delete classless base zone (2.0.192.in-addr.arpa); only remaining zone is delegated zone (192/30.2.0.192.in-addr.arpa)
-#         ok_client.delete_zone(classless_base_zone['id'], status=202)
-#         ok_client.wait_until_zone_deleted(classless_base_zone['id'])
+#         ok_client.delete_zone(classless_base_zone["id"], status=202)
+#         ok_client.wait_until_zone_deleted(classless_base_zone["id"])
 #         response = ok_client.create_batch_change(batch_change_input, status=400)
-#         assert_failed_change_in_error_response(response[0], input_name="192.0.2.1", record_type="PTR",
+#         assert_failed_change_in_error_response(response[0], input_name=f"{ip4_prefix}.1", record_type="PTR",
 #                                                record_data="test.com.",
 #                                                error_messages=[
-#                                                    'Zone Discovery Failed: zone for "192.0.2.1" does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS.'])
+#                                                    f'Zone Discovery Failed: zone for "{ip4_prefix}.1" does not exist in VinylDNS.
If zone exists, then it must be connected to in VinylDNS.']) # # finally: # # re-create classless base zone and update zone info in shared_zone_test_context for use in future tests # zone_create_change = ok_client.create_zone(shared_zone_test_context.classless_base_zone_json, status=202) -# shared_zone_test_context.classless_base_zone = zone_create_change['zone'] -# ok_client.wait_until_zone_active(zone_create_change[u'zone'][u'id']) +# shared_zone_test_context.classless_base_zone = zone_create_change["zone"] +# ok_client.wait_until_zone_active(zone_create_change[u"zone"][u"id"]) @pytest.mark.serial @@ -2380,26 +2419,28 @@ def test_ipv4_ptr_recordtype_update_delete_checks(shared_zone_test_context): ok_client = shared_zone_test_context.ok_vinyldns_client base_zone = shared_zone_test_context.classless_base_zone delegated_zone = shared_zone_test_context.classless_zone_delegation_zone + ip4_prefix = shared_zone_test_context.ip4_classless_prefix + ip4_zone_name = shared_zone_test_context.classless_base_zone["name"] - rs_delete_ipv4 = get_recordset_json(base_zone, "25", "PTR", [{"ptrdname": "delete.ptr."}], 200) - rs_update_ipv4 = get_recordset_json(delegated_zone, "193", "PTR", [{"ptrdname": "update.ptr."}], 200) - rs_replace_cname = get_recordset_json(base_zone, "21", "CNAME", [{"cname": "replace.cname."}], 200) - rs_replace_ptr = get_recordset_json(base_zone, "17", "PTR", [{"ptrdname": "replace.ptr."}], 200) - rs_update_ipv4_fail = get_recordset_json(base_zone, "9", "PTR", [{"ptrdname": "failed-update.ptr."}], 200) + rs_delete_ipv4 = create_recordset(base_zone, "25", "PTR", [{"ptrdname": "delete.ptr."}], 200) + rs_update_ipv4 = create_recordset(delegated_zone, "193", "PTR", [{"ptrdname": "update.ptr."}], 200) + rs_replace_cname = create_recordset(base_zone, "21", "CNAME", [{"cname": "replace.cname."}], 200) + rs_replace_ptr = create_recordset(base_zone, "17", "PTR", [{"ptrdname": "replace.ptr."}], 200) + rs_update_ipv4_fail = create_recordset(base_zone, "9", "PTR", 
[{"ptrdname": "failed-update.ptr."}], 200) batch_change_input = { "comments": "this is optional", "changes": [ # valid changes ipv4 - get_change_PTR_json("192.0.2.25", change_type="DeleteRecordSet"), - get_change_PTR_json("192.0.2.193", ttl=300, ptrdname="has-updated.ptr."), - get_change_PTR_json("192.0.2.193", change_type="DeleteRecordSet"), + get_change_PTR_json(f"{ip4_prefix}.25", change_type="DeleteRecordSet"), + get_change_PTR_json(f"{ip4_prefix}.193", ttl=300, ptrdname="has-updated.ptr."), + get_change_PTR_json(f"{ip4_prefix}.193", change_type="DeleteRecordSet"), # valid changes: delete and add of same record name but different type - get_change_CNAME_json("21.2.0.192.in-addr.arpa", change_type="DeleteRecordSet"), - get_change_PTR_json("192.0.2.21", ptrdname="replace-cname.ptr."), - get_change_CNAME_json("17.2.0.192.in-addr.arpa", cname="replace-ptr.cname."), - get_change_PTR_json("192.0.2.17", change_type="DeleteRecordSet"), + get_change_CNAME_json(f"21.{ip4_zone_name}", change_type="DeleteRecordSet"), + get_change_PTR_json(f"{ip4_prefix}.21", ptrdname="replace-cname.ptr."), + get_change_CNAME_json(f"17.{ip4_zone_name}", cname="replace-ptr.cname."), + get_change_PTR_json(f"{ip4_prefix}.17", change_type="DeleteRecordSet"), # input validations failures get_change_PTR_json("1.1.1", change_type="DeleteRecordSet"), @@ -2410,9 +2451,9 @@ def test_ipv4_ptr_recordtype_update_delete_checks(shared_zone_test_context): get_change_PTR_json("192.0.1.25", change_type="DeleteRecordSet"), # context validation failures - get_change_PTR_json("192.0.2.199", change_type="DeleteRecordSet"), - get_change_PTR_json("192.0.2.200", ttl=300, ptrdname="has-updated.ptr."), - get_change_PTR_json("192.0.2.200", change_type="DeleteRecordSet"), + get_change_PTR_json(f"{ip4_prefix}.199", change_type="DeleteRecordSet"), + get_change_PTR_json(f"{ip4_prefix}.200", ttl=300, ptrdname="has-updated.ptr."), + get_change_PTR_json(f"{ip4_prefix}.200", change_type="DeleteRecordSet"), ] } @@ -2422,26 
+2463,26 @@ def test_ipv4_ptr_recordtype_update_delete_checks(shared_zone_test_context): try: for rs in to_create: create_rs = ok_client.create_recordset(rs, status=202) - to_delete.append(ok_client.wait_until_recordset_change_status(create_rs, 'Complete')) + to_delete.append(ok_client.wait_until_recordset_change_status(create_rs, "Complete")) response = ok_client.create_batch_change(batch_change_input, status=400) # successful changes - assert_successful_change_in_error_response(response[0], input_name="192.0.2.25", record_type="PTR", + assert_successful_change_in_error_response(response[0], input_name=f"{ip4_prefix}.25", record_type="PTR", record_data=None, change_type="DeleteRecordSet") - assert_successful_change_in_error_response(response[1], ttl=300, input_name="192.0.2.193", record_type="PTR", + assert_successful_change_in_error_response(response[1], ttl=300, input_name=f"{ip4_prefix}.193", record_type="PTR", record_data="has-updated.ptr.") - assert_successful_change_in_error_response(response[2], input_name="192.0.2.193", record_type="PTR", + assert_successful_change_in_error_response(response[2], input_name=f"{ip4_prefix}.193", record_type="PTR", record_data=None, change_type="DeleteRecordSet") # successful changes: add and delete of same record name but different type - assert_successful_change_in_error_response(response[3], input_name="21.2.0.192.in-addr.arpa.", + assert_successful_change_in_error_response(response[3], input_name=f"21.{ip4_zone_name}", record_type="CNAME", record_data=None, change_type="DeleteRecordSet") - assert_successful_change_in_error_response(response[4], input_name="192.0.2.21", record_type="PTR", + assert_successful_change_in_error_response(response[4], input_name=f"{ip4_prefix}.21", record_type="PTR", record_data="replace-cname.ptr.") - assert_successful_change_in_error_response(response[5], input_name="17.2.0.192.in-addr.arpa.", + assert_successful_change_in_error_response(response[5], input_name=f"17.{ip4_zone_name}", 
record_type="CNAME", record_data="replace-ptr.cname.") - assert_successful_change_in_error_response(response[6], input_name="192.0.2.17", record_type="PTR", + assert_successful_change_in_error_response(response[6], input_name=f"{ip4_prefix}.17", record_type="PTR", record_data=None, change_type="DeleteRecordSet") # input validations failures: invalid IP, ttl, and record data @@ -2465,13 +2506,13 @@ def test_ipv4_ptr_recordtype_update_delete_checks(shared_zone_test_context): "Zone Discovery Failed: zone for \"192.0.1.25\" does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS."]) # context validation failures: record does not exist - assert_failed_change_in_error_response(response[11], input_name="192.0.2.199", record_type="PTR", + assert_failed_change_in_error_response(response[11], input_name=f"{ip4_prefix}.199", record_type="PTR", record_data=None, change_type="DeleteRecordSet", error_messages=[ "Record \"192.0.2.199\" Does Not Exist: cannot delete a record that does not exist."]) - assert_successful_change_in_error_response(response[12], ttl=300, input_name="192.0.2.200", record_type="PTR", + assert_successful_change_in_error_response(response[12], ttl=300, input_name=f"{ip4_prefix}.200", record_type="PTR", record_data="has-updated.ptr.") - assert_failed_change_in_error_response(response[13], input_name="192.0.2.200", record_type="PTR", + assert_failed_change_in_error_response(response[13], input_name=f"{ip4_prefix}.200", record_type="PTR", record_data=None, change_type="DeleteRecordSet", error_messages=[ "Record \"192.0.2.200\" Does Not Exist: cannot delete a record that does not exist."]) @@ -2485,25 +2526,26 @@ def test_ipv6_ptr_recordtype_add_checks(shared_zone_test_context): Test all add, non-authorization validations performed on IPv6 PTR records submitted in batch changes """ client = shared_zone_test_context.ok_vinyldns_client + ip6_prefix = shared_zone_test_context.ip6_prefix - existing_ptr = 
get_recordset_json(shared_zone_test_context.ip6_16_nibble_zone, "b.b.b.b.0.0.0.0.0.0.0.0.0.0.0.0", - "PTR", [{"ptrdname": "test.com."}], 100) + existing_ptr = create_recordset(shared_zone_test_context.ip6_16_nibble_zone, "b.b.b.b.0.0.0.0.0.0.0.0.0.0.0.0", + "PTR", [{"ptrdname": "test.com."}], 100) batch_change_input = { "changes": [ # valid change - get_change_PTR_json("fd69:27cc:fe91:1000::1234"), + get_change_PTR_json(f"{ip6_prefix}:1000::1234"), # input validation failures - get_change_PTR_json("fd69:27cc:fe91:1000::abe", ttl=29), - get_change_PTR_json("fd69:27cc:fe91:1000::bae", ptrdname="$malformed.hostname."), + get_change_PTR_json(f"{ip6_prefix}:1000::abe", ttl=29), + get_change_PTR_json(f"{ip6_prefix}:1000::bae", ptrdname="$malformed.hostname."), get_change_PTR_json("fd69:27cc:fe91de::ab", ptrdname="malformed.ip.address."), # zone discovery failure get_change_PTR_json("fedc:ba98:7654::abc", ptrdname="zone.discovery.error."), # context validation failures - get_change_PTR_json("fd69:27cc:fe91:1000::bbbb", ptrdname="existing.ptr.") + get_change_PTR_json(f"{ip6_prefix}:1000::bbbb", ptrdname="existing.ptr.") ] } @@ -2512,20 +2554,20 @@ def test_ipv6_ptr_recordtype_add_checks(shared_zone_test_context): try: for create_json in to_create: create_result = client.create_recordset(create_json, status=202) - to_delete.append(client.wait_until_recordset_change_status(create_result, 'Complete')) + to_delete.append(client.wait_until_recordset_change_status(create_result, "Complete")) response = client.create_batch_change(batch_change_input, status=400) # successful changes - assert_successful_change_in_error_response(response[0], input_name="fd69:27cc:fe91:1000::1234", + assert_successful_change_in_error_response(response[0], input_name=f"{ip6_prefix}:1000::1234", record_type="PTR", record_data="test.com.") # independent validations: bad TTL, malformed host name/IP address, duplicate record - assert_failed_change_in_error_response(response[1], 
input_name="fd69:27cc:fe91:1000::abe", ttl=29, + assert_failed_change_in_error_response(response[1], input_name=f"{ip6_prefix}:1000::abe", ttl=29, record_type="PTR", record_data="test.com.", error_messages=[ 'Invalid TTL: "29", must be a number between 30 and 2147483647.']) - assert_failed_change_in_error_response(response[2], input_name="fd69:27cc:fe91:1000::bae", record_type="PTR", + assert_failed_change_in_error_response(response[2], input_name=f"{ip6_prefix}:1000::bae", record_type="PTR", record_data="$malformed.hostname.", error_messages=[ 'Invalid domain name: "$malformed.hostname.", valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) @@ -2540,7 +2582,7 @@ def test_ipv6_ptr_recordtype_add_checks(shared_zone_test_context): "Zone Discovery Failed: zone for \"fedc:ba98:7654::abc\" does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS."]) # context validations: existing record sets pre-request - assert_failed_change_in_error_response(response[5], input_name="fd69:27cc:fe91:1000::bbbb", record_type="PTR", + assert_failed_change_in_error_response(response[5], input_name=f"{ip6_prefix}:1000::bbbb", record_type="PTR", record_data="existing.ptr.", error_messages=[ "Record \"fd69:27cc:fe91:1000::bbbb\" Already Exists: cannot add an existing record; to update it, issue a DeleteRecordSet then an Add."]) @@ -2555,21 +2597,22 @@ def test_ipv6_ptr_recordtype_update_delete_checks(shared_zone_test_context): """ ok_client = shared_zone_test_context.ok_vinyldns_client ip6_reverse_zone = shared_zone_test_context.ip6_16_nibble_zone + ip6_prefix = shared_zone_test_context.ip6_prefix - rs_delete_ipv6 = get_recordset_json(ip6_reverse_zone, "a.a.a.a.0.0.0.0.0.0.0.0.0.0.0.0", "PTR", - [{"ptrdname": "delete.ptr."}], 200) - rs_update_ipv6 = get_recordset_json(ip6_reverse_zone, "2.6.0.0.0.0.0.0.0.0.0.0.0.0.0.0", "PTR", - [{"ptrdname": "update.ptr."}], 200) - rs_update_ipv6_fail = 
get_recordset_json(ip6_reverse_zone, "8.1.0.0.0.0.0.0.0.0.0.0.0.0.0.0", "PTR", - [{"ptrdname": "failed-update.ptr."}], 200) + rs_delete_ipv6 = create_recordset(ip6_reverse_zone, "a.a.a.a.0.0.0.0.0.0.0.0.0.0.0.0", "PTR", + [{"ptrdname": "delete.ptr."}], 200) + rs_update_ipv6 = create_recordset(ip6_reverse_zone, "2.6.0.0.0.0.0.0.0.0.0.0.0.0.0.0", "PTR", + [{"ptrdname": "update.ptr."}], 200) + rs_update_ipv6_fail = create_recordset(ip6_reverse_zone, "8.1.0.0.0.0.0.0.0.0.0.0.0.0.0.0", "PTR", + [{"ptrdname": "failed-update.ptr."}], 200) batch_change_input = { "comments": "this is optional", "changes": [ # valid changes ipv6 - get_change_PTR_json("fd69:27cc:fe91:1000::aaaa", change_type="DeleteRecordSet"), - get_change_PTR_json("fd69:27cc:fe91:1000::62", ttl=300, ptrdname="has-updated.ptr."), - get_change_PTR_json("fd69:27cc:fe91:1000::62", change_type="DeleteRecordSet"), + get_change_PTR_json(f"{ip6_prefix}:1000::aaaa", change_type="DeleteRecordSet"), + get_change_PTR_json(f"{ip6_prefix}:1000::62", ttl=300, ptrdname="has-updated.ptr."), + get_change_PTR_json(f"{ip6_prefix}:1000::62", change_type="DeleteRecordSet"), # input validations failures get_change_PTR_json("fd69:27cc:fe91de::ab", change_type="DeleteRecordSet"), @@ -2580,9 +2623,9 @@ def test_ipv6_ptr_recordtype_update_delete_checks(shared_zone_test_context): get_change_PTR_json("fedc:ba98:7654::abc", change_type="DeleteRecordSet"), # context validation failures - get_change_PTR_json("fd69:27cc:fe91:1000::60", change_type="DeleteRecordSet"), - get_change_PTR_json("fd69:27cc:fe91:1000::65", ttl=300, ptrdname="has-updated.ptr."), - get_change_PTR_json("fd69:27cc:fe91:1000::65", change_type="DeleteRecordSet") + get_change_PTR_json(f"{ip6_prefix}:1000::60", change_type="DeleteRecordSet"), + get_change_PTR_json(f"{ip6_prefix}:1000::65", ttl=300, ptrdname="has-updated.ptr."), + get_change_PTR_json(f"{ip6_prefix}:1000::65", change_type="DeleteRecordSet") ] } @@ -2592,16 +2635,16 @@ def 
test_ipv6_ptr_recordtype_update_delete_checks(shared_zone_test_context): try: for rs in to_create: create_rs = ok_client.create_recordset(rs, status=202) - to_delete.append(ok_client.wait_until_recordset_change_status(create_rs, 'Complete')) + to_delete.append(ok_client.wait_until_recordset_change_status(create_rs, "Complete")) response = ok_client.create_batch_change(batch_change_input, status=400) # successful changes - assert_successful_change_in_error_response(response[0], input_name="fd69:27cc:fe91:1000::aaaa", + assert_successful_change_in_error_response(response[0], input_name=f"{ip6_prefix}:1000::aaaa", record_type="PTR", record_data=None, change_type="DeleteRecordSet") - assert_successful_change_in_error_response(response[1], ttl=300, input_name="fd69:27cc:fe91:1000::62", + assert_successful_change_in_error_response(response[1], ttl=300, input_name=f"{ip6_prefix}:1000::62", record_type="PTR", record_data="has-updated.ptr.") - assert_successful_change_in_error_response(response[2], input_name="fd69:27cc:fe91:1000::62", record_type="PTR", + assert_successful_change_in_error_response(response[2], input_name=f"{ip6_prefix}:1000::62", record_type="PTR", record_data=None, change_type="DeleteRecordSet") # input validations failures: invalid IP, ttl, and record data @@ -2625,13 +2668,13 @@ def test_ipv6_ptr_recordtype_update_delete_checks(shared_zone_test_context): "Zone Discovery Failed: zone for \"fedc:ba98:7654::abc\" does not exist in VinylDNS. 
If zone exists, then it must be connected to in VinylDNS."]) # context validation failures: record does not exist, failure on update with double add - assert_failed_change_in_error_response(response[7], input_name="fd69:27cc:fe91:1000::60", record_type="PTR", + assert_failed_change_in_error_response(response[7], input_name=f"{ip6_prefix}:1000::60", record_type="PTR", record_data=None, change_type="DeleteRecordSet", error_messages=[ "Record \"fd69:27cc:fe91:1000::60\" Does Not Exist: cannot delete a record that does not exist."]) - assert_successful_change_in_error_response(response[8], ttl=300, input_name="fd69:27cc:fe91:1000::65", + assert_successful_change_in_error_response(response[8], ttl=300, input_name=f"{ip6_prefix}:1000::65", record_type="PTR", record_data="has-updated.ptr.") - assert_failed_change_in_error_response(response[9], input_name="fd69:27cc:fe91:1000::65", record_type="PTR", + assert_failed_change_in_error_response(response[9], input_name=f"{ip6_prefix}:1000::65", record_type="PTR", record_data=None, change_type="DeleteRecordSet", error_messages=[ "Record \"fd69:27cc:fe91:1000::65\" Does Not Exist: cannot delete a record that does not exist."]) @@ -2646,15 +2689,18 @@ def test_txt_recordtype_add_checks(shared_zone_test_context): Test all add validations performed on TXT records submitted in batch changes """ client = shared_zone_test_context.ok_vinyldns_client + dummy_zone_name = shared_zone_test_context.dummy_zone["name"] + dummy_group_name = shared_zone_test_context.dummy_group["name"] + ok_zone_name = shared_zone_test_context.ok_zone["name"] existing_txt_name = generate_record_name() - existing_txt_fqdn = existing_txt_name + ".ok." 
- existing_txt = get_recordset_json(shared_zone_test_context.ok_zone, existing_txt_name, "TXT", [{"text": "test"}], 100) + existing_txt_fqdn = existing_txt_name + f".{ok_zone_name}" + existing_txt = create_recordset(shared_zone_test_context.ok_zone, existing_txt_name, "TXT", [{"text": "test"}], 100) existing_cname_name = generate_record_name() - existing_cname_fqdn = existing_cname_name + ".ok." - existing_cname = get_recordset_json(shared_zone_test_context.ok_zone, existing_cname_name, "CNAME", - [{"cname": "test."}], 100) + existing_cname_fqdn = existing_cname_name + f".{ok_zone_name}" + existing_cname = create_recordset(shared_zone_test_context.ok_zone, existing_cname_name, "CNAME", + [{"cname": "test."}], 100) good_record_fqdn = generate_record_name("ok.") batch_change_input = { @@ -2663,17 +2709,17 @@ def test_txt_recordtype_add_checks(shared_zone_test_context): get_change_TXT_json(good_record_fqdn), # input validation failures - get_change_TXT_json("bad-ttl-and-invalid-name$.ok.", ttl=29), + get_change_TXT_json(f"bad-ttl-and-invalid-name$.{ok_zone_name}", ttl=29), # zone discovery failures get_change_TXT_json("no.zone.at.all."), # context validation failures - get_change_CNAME_json("cname-duplicate.ok."), - get_change_TXT_json("cname-duplicate.ok."), + get_change_CNAME_json(f"cname-duplicate.{ok_zone_name}"), + get_change_TXT_json(f"cname-duplicate.{ok_zone_name}"), get_change_TXT_json(existing_txt_fqdn), get_change_TXT_json(existing_cname_fqdn), - get_change_TXT_json("user-add-unauthorized.dummy.") + get_change_TXT_json(f"user-add-unauthorized.{dummy_zone_name}") ] } @@ -2682,7 +2728,7 @@ def test_txt_recordtype_add_checks(shared_zone_test_context): try: for create_json in to_create: create_result = client.create_recordset(create_json, status=202) - to_delete.append(client.wait_until_recordset_change_status(create_result, 'Complete')) + to_delete.append(client.wait_until_recordset_change_status(create_result, "Complete")) response = 
client.create_batch_change(batch_change_input, status=400) @@ -2691,12 +2737,12 @@ def test_txt_recordtype_add_checks(shared_zone_test_context): record_data="test") # ttl, domain name, record data - assert_failed_change_in_error_response(response[1], input_name="bad-ttl-and-invalid-name$.ok.", ttl=29, + assert_failed_change_in_error_response(response[1], input_name=f"bad-ttl-and-invalid-name$.{ok_zone_name}", ttl=29, record_type="TXT", record_data="test", error_messages=[ 'Invalid TTL: "29", must be a number between 30 and 2147483647.', - 'Invalid domain name: "bad-ttl-and-invalid-name$.ok.", ' - 'valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) + f'Invalid domain name: "bad-ttl-and-invalid-name$.{ok_zone_name}", ' + "valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot."]) # zone discovery failure assert_failed_change_in_error_response(response[2], input_name="no.zone.at.all.", record_type="TXT", @@ -2705,7 +2751,7 @@ def test_txt_recordtype_add_checks(shared_zone_test_context): 'Zone Discovery Failed: zone for "no.zone.at.all." does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS.']) # context validations: cname duplicate - assert_failed_change_in_error_response(response[3], input_name="cname-duplicate.ok.", record_type="CNAME", + assert_failed_change_in_error_response(response[3], input_name=f"cname-duplicate.{ok_zone_name}", record_type="CNAME", record_data="test.com.", error_messages=[ "Record Name \"cname-duplicate.ok.\" Not Unique In Batch Change: cannot have multiple \"CNAME\" records with the same name."]) @@ -2719,9 +2765,9 @@ def test_txt_recordtype_add_checks(shared_zone_test_context): record_data="test", error_messages=[ "CNAME Conflict: CNAME record names must be unique. 
Existing record with name \"" + existing_cname_fqdn + "\" and type \"CNAME\" conflicts with this record."]) - assert_failed_change_in_error_response(response[7], input_name="user-add-unauthorized.dummy.", + assert_failed_change_in_error_response(response[7], input_name=f"user-add-unauthorized.{dummy_zone_name}", record_type="TXT", record_data="test", - error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes."]) + error_messages=[f"User \"ok\" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes."]) finally: clear_recordset_list(to_delete, client) @@ -2735,22 +2781,25 @@ def test_txt_recordtype_update_delete_checks(shared_zone_test_context): dummy_client = shared_zone_test_context.dummy_vinyldns_client ok_zone = shared_zone_test_context.ok_zone dummy_zone = shared_zone_test_context.dummy_zone + ok_zone_name = shared_zone_test_context.ok_zone["name"] + dummy_zone_name = shared_zone_test_context.dummy_zone["name"] + dummy_group_name = shared_zone_test_context.dummy_group["name"] rs_delete_name = generate_record_name() - rs_delete_fqdn = rs_delete_name + ".ok." - rs_delete_ok = get_recordset_json(ok_zone, rs_delete_name, "TXT", [{"text": "test"}], 200) + rs_delete_fqdn = rs_delete_name + f".{ok_zone_name}" + rs_delete_ok = create_recordset(ok_zone, rs_delete_name, "TXT", [{"text": "test"}], 200) rs_update_name = generate_record_name() - rs_update_fqdn = rs_update_name + ".ok." - rs_update_ok = get_recordset_json(ok_zone, rs_update_name, "TXT", [{"text": "test"}], 200) + rs_update_fqdn = rs_update_name + f".{ok_zone_name}" + rs_update_ok = create_recordset(ok_zone, rs_update_name, "TXT", [{"text": "test"}], 200) rs_delete_dummy_name = generate_record_name() - rs_delete_dummy_fqdn = rs_delete_dummy_name + ".dummy." 
- rs_delete_dummy = get_recordset_json(dummy_zone, rs_delete_dummy_name, "TXT", [{"text": "test"}], 200) + rs_delete_dummy_fqdn = rs_delete_dummy_name + f".{dummy_zone_name}" + rs_delete_dummy = create_recordset(dummy_zone, rs_delete_dummy_name, "TXT", [{"text": "test"}], 200) rs_update_dummy_name = generate_record_name() - rs_update_dummy_fqdn = rs_update_dummy_name + ".dummy." - rs_update_dummy = get_recordset_json(dummy_zone, rs_update_dummy_name, "TXT", [{"text": "test"}], 200) + rs_update_dummy_fqdn = rs_update_dummy_name + f".{dummy_zone_name}" + rs_update_dummy = create_recordset(dummy_zone, rs_update_dummy_name, "TXT", [{"text": "test"}], 200) batch_change_input = { "comments": "this is optional", @@ -2761,16 +2810,16 @@ def test_txt_recordtype_update_delete_checks(shared_zone_test_context): get_change_TXT_json(rs_update_fqdn, ttl=300), # input validations failures - get_change_TXT_json("invalid-name$.ok.", change_type="DeleteRecordSet"), - get_change_TXT_json("invalid-ttl.ok.", ttl=29, text="bad-ttl"), + get_change_TXT_json(f"invalid-name$.{ok_zone_name}", change_type="DeleteRecordSet"), + get_change_TXT_json(f"invalid-ttl.{ok_zone_name}", ttl=29, text="bad-ttl"), # zone discovery failure get_change_TXT_json("no.zone.at.all.", change_type="DeleteRecordSet"), # context validation failures - get_change_TXT_json("delete-nonexistent.ok.", change_type="DeleteRecordSet"), - get_change_TXT_json("update-nonexistent.ok.", change_type="DeleteRecordSet"), - get_change_TXT_json("update-nonexistent.ok.", text="test"), + get_change_TXT_json(f"delete-nonexistent.{ok_zone_name}", change_type="DeleteRecordSet"), + get_change_TXT_json(f"update-nonexistent.{ok_zone_name}", change_type="DeleteRecordSet"), + get_change_TXT_json(f"update-nonexistent.{ok_zone_name}", text="test"), get_change_TXT_json(rs_delete_dummy_fqdn, change_type="DeleteRecordSet"), get_change_TXT_json(rs_update_dummy_fqdn, text="test"), get_change_TXT_json(rs_update_dummy_fqdn, 
change_type="DeleteRecordSet") @@ -2782,16 +2831,16 @@ def test_txt_recordtype_update_delete_checks(shared_zone_test_context): try: for rs in to_create: - if rs['zoneId'] == dummy_zone['id']: + if rs["zoneId"] == dummy_zone["id"]: create_client = dummy_client else: create_client = ok_client create_rs = create_client.create_recordset(rs, status=202) - to_delete.append(create_client.wait_until_recordset_change_status(create_rs, 'Complete')) + to_delete.append(create_client.wait_until_recordset_change_status(create_rs, "Complete")) # Confirm that record set doesn't already exist - ok_client.get_recordset(ok_zone['id'], 'delete-nonexistent', status=404) + ok_client.get_recordset(ok_zone["id"], "delete-nonexistent", status=404) response = ok_client.create_batch_change(batch_change_input, status=400) @@ -2804,11 +2853,11 @@ def test_txt_recordtype_update_delete_checks(shared_zone_test_context): record_data="test") # input validations failures: invalid input name, reverse zone error, invalid ttl - assert_failed_change_in_error_response(response[3], input_name="invalid-name$.ok.", record_type="TXT", + assert_failed_change_in_error_response(response[3], input_name=f"invalid-name$.{ok_zone_name}", record_type="TXT", record_data="test", change_type="DeleteRecordSet", error_messages=[ - 'Invalid domain name: "invalid-name$.ok.", valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) - assert_failed_change_in_error_response(response[4], input_name="invalid-ttl.ok.", ttl=29, record_type="TXT", + f'Invalid domain name: "invalid-name$.{ok_zone_name}", valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) + assert_failed_change_in_error_response(response[4], input_name=f"invalid-ttl.{ok_zone_name}", ttl=29, record_type="TXT", record_data="bad-ttl", error_messages=[ 'Invalid TTL: "29", must be a number between 30 and 2147483647.']) @@ -2820,30 +2869,30 @@ def 
test_txt_recordtype_update_delete_checks(shared_zone_test_context): "Zone Discovery Failed: zone for \"no.zone.at.all.\" does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS."]) # context validation failures: record does not exist, not authorized - assert_failed_change_in_error_response(response[6], input_name="delete-nonexistent.ok.", record_type="TXT", + assert_failed_change_in_error_response(response[6], input_name=f"delete-nonexistent.{ok_zone_name}", record_type="TXT", record_data=None, change_type="DeleteRecordSet", error_messages=[ "Record \"delete-nonexistent.ok.\" Does Not Exist: cannot delete a record that does not exist."]) - assert_failed_change_in_error_response(response[7], input_name="update-nonexistent.ok.", record_type="TXT", + assert_failed_change_in_error_response(response[7], input_name=f"update-nonexistent.{ok_zone_name}", record_type="TXT", record_data=None, change_type="DeleteRecordSet", error_messages=[ "Record \"update-nonexistent.ok.\" Does Not Exist: cannot delete a record that does not exist."]) - assert_successful_change_in_error_response(response[8], input_name="update-nonexistent.ok.", record_type="TXT", + assert_successful_change_in_error_response(response[8], input_name=f"update-nonexistent.{ok_zone_name}", record_type="TXT", record_data="test") assert_failed_change_in_error_response(response[9], input_name=rs_delete_dummy_fqdn, record_type="TXT", record_data=None, change_type="DeleteRecordSet", - error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes."]) + error_messages=[f"User \"ok\" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes."]) assert_failed_change_in_error_response(response[10], input_name=rs_update_dummy_fqdn, record_type="TXT", record_data="test", - error_messages=["User \"ok\" is not authorized. 
Contact zone owner group: dummy-group at test@test.com to make DNS changes."]) + error_messages=[f"User \"ok\" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes."]) assert_failed_change_in_error_response(response[11], input_name=rs_update_dummy_fqdn, record_type="TXT", record_data=None, change_type="DeleteRecordSet", - error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes."]) + error_messages=[f"User \"ok\" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes."]) finally: # Clean up updates - dummy_deletes = [rs for rs in to_delete if rs['zone']['id'] == dummy_zone['id']] - ok_deletes = [rs for rs in to_delete if rs['zone']['id'] != dummy_zone['id']] + dummy_deletes = [rs for rs in to_delete if rs["zone"]["id"] == dummy_zone["id"]] + ok_deletes = [rs for rs in to_delete if rs["zone"]["id"] != dummy_zone["id"]] clear_recordset_list(dummy_deletes, dummy_client) clear_recordset_list(ok_deletes, ok_client) @@ -2853,16 +2902,20 @@ def test_mx_recordtype_add_checks(shared_zone_test_context): Test all add validations performed on MX records submitted in batch changes """ client = shared_zone_test_context.ok_vinyldns_client + ok_zone_name = shared_zone_test_context.ok_zone["name"] + dummy_zone_name = shared_zone_test_context.dummy_zone["name"] + dummy_group_name = shared_zone_test_context.dummy_group["name"] + ip4_zone_name = shared_zone_test_context.classless_base_zone["name"] existing_mx_name = generate_record_name() - existing_mx_fqdn = existing_mx_name + ".ok." 
- existing_mx = get_recordset_json(shared_zone_test_context.ok_zone, existing_mx_name, "MX", - [{"preference": 1, "exchange": "foo.bar."}], 100) + existing_mx_fqdn = existing_mx_name + f".{ok_zone_name}" + existing_mx = create_recordset(shared_zone_test_context.ok_zone, existing_mx_name, "MX", + [{"preference": 1, "exchange": "foo.bar."}], 100) existing_cname_name = generate_record_name() - existing_cname_fqdn = existing_cname_name + ".ok." - existing_cname = get_recordset_json(shared_zone_test_context.ok_zone, existing_cname_name, "CNAME", - [{"cname": "test."}], 100) + existing_cname_fqdn = existing_cname_name + f".{ok_zone_name}" + existing_cname = create_recordset(shared_zone_test_context.ok_zone, existing_cname_name, "CNAME", + [{"cname": "test."}], 100) good_record_fqdn = generate_record_name("ok.") batch_change_input = { @@ -2871,20 +2924,20 @@ def test_mx_recordtype_add_checks(shared_zone_test_context): get_change_MX_json(good_record_fqdn), # input validation failures - get_change_MX_json("bad-ttl-and-invalid-name$.ok.", ttl=29), - get_change_MX_json("bad-exchange.ok.", exchange="foo$.bar."), - get_change_MX_json("mx.2.0.192.in-addr.arpa."), + get_change_MX_json(f"bad-ttl-and-invalid-name$.{ok_zone_name}", ttl=29), + get_change_MX_json(f"bad-exchange.{ok_zone_name}", exchange="foo$.bar."), + get_change_MX_json(f"mx.{ip4_zone_name}"), # zone discovery failures - get_change_MX_json("no.subzone.ok."), + get_change_MX_json(f"no.subzone.{ok_zone_name}"), get_change_MX_json("no.zone.at.all."), # context validation failures - get_change_CNAME_json("cname-duplicate.ok."), - get_change_MX_json("cname-duplicate.ok."), + get_change_CNAME_json(f"cname-duplicate.{ok_zone_name}"), + get_change_MX_json(f"cname-duplicate.{ok_zone_name}"), get_change_MX_json(existing_mx_fqdn), get_change_MX_json(existing_cname_fqdn), - get_change_MX_json("user-add-unauthorized.dummy.") + get_change_MX_json(f"user-add-unauthorized.{dummy_zone_name}") ] } @@ -2893,7 +2946,7 @@ def 
test_mx_recordtype_add_checks(shared_zone_test_context): try: for create_json in to_create: create_result = client.create_recordset(create_json, status=202) - to_delete.append(client.wait_until_recordset_change_status(create_result, 'Complete')) + to_delete.append(client.wait_until_recordset_change_status(create_result, "Complete")) response = client.create_batch_change(batch_change_input, status=400) @@ -2902,33 +2955,33 @@ def test_mx_recordtype_add_checks(shared_zone_test_context): record_data={"preference": 1, "exchange": "foo.bar."}) # ttl, domain name, record data - assert_failed_change_in_error_response(response[1], input_name="bad-ttl-and-invalid-name$.ok.", ttl=29, + assert_failed_change_in_error_response(response[1], input_name=f"bad-ttl-and-invalid-name$.{ok_zone_name}", ttl=29, record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, error_messages=[ 'Invalid TTL: "29", must be a number between 30 and 2147483647.', - 'Invalid domain name: "bad-ttl-and-invalid-name$.ok.", ' - 'valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) - assert_failed_change_in_error_response(response[2], input_name="bad-exchange.ok.", record_type="MX", + f'Invalid domain name: "bad-ttl-and-invalid-name$.{ok_zone_name}", ' + "valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot."]) + assert_failed_change_in_error_response(response[2], input_name=f"bad-exchange.{ok_zone_name}", record_type="MX", record_data={"preference": 1, "exchange": "foo$.bar."}, error_messages=[ 'Invalid domain name: "foo$.bar.", valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) - assert_failed_change_in_error_response(response[3], input_name="mx.2.0.192.in-addr.arpa.", record_type="MX", + assert_failed_change_in_error_response(response[3], input_name=f"mx.{ip4_zone_name}", record_type="MX", 
record_data={"preference": 1, "exchange": "foo.bar."}, error_messages=[ - 'Invalid Record Type In Reverse Zone: record with name "mx.2.0.192.in-addr.arpa." and type "MX" is not allowed in a reverse zone.']) + f'Invalid Record Type In Reverse Zone: record with name "mx.{ip4_zone_name}" and type "MX" is not allowed in a reverse zone.']) # zone discovery failures - assert_failed_change_in_error_response(response[4], input_name="no.subzone.ok.", record_type="MX", + assert_failed_change_in_error_response(response[4], input_name=f"no.subzone.{ok_zone_name}", record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, error_messages=[ - 'Zone Discovery Failed: zone for "no.subzone.ok." does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS.']) + f'Zone Discovery Failed: zone for "no.subzone.{ok_zone_name}" does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS.']) assert_failed_change_in_error_response(response[5], input_name="no.zone.at.all.", record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, error_messages=[ 'Zone Discovery Failed: zone for "no.zone.at.all." does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS.']) # context validations: cname duplicate - assert_failed_change_in_error_response(response[6], input_name="cname-duplicate.ok.", record_type="CNAME", + assert_failed_change_in_error_response(response[6], input_name=f"cname-duplicate.{ok_zone_name}", record_type="CNAME", record_data="test.com.", error_messages=[ "Record Name \"cname-duplicate.ok.\" Not Unique In Batch Change: cannot have multiple \"CNAME\" records with the same name."]) @@ -2942,9 +2995,9 @@ def test_mx_recordtype_add_checks(shared_zone_test_context): record_data={"preference": 1, "exchange": "foo.bar."}, error_messages=[ "CNAME Conflict: CNAME record names must be unique. 
Existing record with name \"" + existing_cname_fqdn + "\" and type \"CNAME\" conflicts with this record."]) - assert_failed_change_in_error_response(response[10], input_name="user-add-unauthorized.dummy.", + assert_failed_change_in_error_response(response[10], input_name=f"user-add-unauthorized.{dummy_zone_name}", record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, - error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes."]) + error_messages=[f"User \"ok\" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes."]) finally: clear_recordset_list(to_delete, client) @@ -2958,24 +3011,29 @@ def test_mx_recordtype_update_delete_checks(shared_zone_test_context): dummy_client = shared_zone_test_context.dummy_vinyldns_client ok_zone = shared_zone_test_context.ok_zone dummy_zone = shared_zone_test_context.dummy_zone + dummy_zone_name = shared_zone_test_context.dummy_zone["name"] + + dummy_group_name = shared_zone_test_context.dummy_group["name"] + ok_zone_name = shared_zone_test_context.ok_zone["name"] + ip4_zone_name = shared_zone_test_context.classless_base_zone["name"] rs_delete_name = generate_record_name() - rs_delete_fqdn = rs_delete_name + ".ok." - rs_delete_ok = get_recordset_json(ok_zone, rs_delete_name, "MX", [{"preference": 1, "exchange": "foo.bar."}], 200) + rs_delete_fqdn = rs_delete_name + f".{ok_zone_name}" + rs_delete_ok = create_recordset(ok_zone, rs_delete_name, "MX", [{"preference": 1, "exchange": "foo.bar."}], 200) rs_update_name = generate_record_name() - rs_update_fqdn = rs_update_name + ".ok." 
- rs_update_ok = get_recordset_json(ok_zone, rs_update_name, "MX", [{"preference": 1, "exchange": "foo.bar."}], 200) + rs_update_fqdn = rs_update_name + f".{ok_zone_name}" + rs_update_ok = create_recordset(ok_zone, rs_update_name, "MX", [{"preference": 1, "exchange": "foo.bar."}], 200) rs_delete_dummy_name = generate_record_name() - rs_delete_dummy_fqdn = rs_delete_dummy_name + ".dummy." - rs_delete_dummy = get_recordset_json(dummy_zone, rs_delete_dummy_name, "MX", - [{"preference": 1, "exchange": "foo.bar."}], 200) + rs_delete_dummy_fqdn = rs_delete_dummy_name + f".{dummy_zone_name}" + rs_delete_dummy = create_recordset(dummy_zone, rs_delete_dummy_name, "MX", + [{"preference": 1, "exchange": "foo.bar."}], 200) rs_update_dummy_name = generate_record_name() - rs_update_dummy_fqdn = rs_update_dummy_name + ".dummy." - rs_update_dummy = get_recordset_json(dummy_zone, rs_update_dummy_name, "MX", - [{"preference": 1, "exchange": "foo.bar."}], 200) + rs_update_dummy_fqdn = rs_update_dummy_name + f".{dummy_zone_name}" + rs_update_dummy = create_recordset(dummy_zone, rs_update_dummy_name, "MX", + [{"preference": 1, "exchange": "foo.bar."}], 200) batch_change_input = { "comments": "this is optional", @@ -2986,18 +3044,18 @@ def test_mx_recordtype_update_delete_checks(shared_zone_test_context): get_change_MX_json(rs_update_fqdn, ttl=300), # input validations failures - get_change_MX_json("invalid-name$.ok.", change_type="DeleteRecordSet"), - get_change_MX_json("delete.ok.", ttl=29), - get_change_MX_json("bad-exchange.ok.", exchange="foo$.bar."), - get_change_MX_json("mx.2.0.192.in-addr.arpa."), + get_change_MX_json(f"invalid-name$.{ok_zone_name}", change_type="DeleteRecordSet"), + get_change_MX_json(f"delete.{ok_zone_name}", ttl=29), + get_change_MX_json(f"bad-exchange.{ok_zone_name}", exchange="foo$.bar."), + get_change_MX_json(f"mx.{ip4_zone_name}"), # zone discovery failures get_change_MX_json("no.zone.at.all.", change_type="DeleteRecordSet"), # context validation failures 
- get_change_MX_json("delete-nonexistent.ok.", change_type="DeleteRecordSet"), - get_change_MX_json("update-nonexistent.ok.", change_type="DeleteRecordSet"), - get_change_MX_json("update-nonexistent.ok.", preference=1000, exchange="foo.bar."), + get_change_MX_json(f"delete-nonexistent.{ok_zone_name}", change_type="DeleteRecordSet"), + get_change_MX_json(f"update-nonexistent.{ok_zone_name}", change_type="DeleteRecordSet"), + get_change_MX_json(f"update-nonexistent.{ok_zone_name}", preference=1000, exchange="foo.bar."), get_change_MX_json(rs_delete_dummy_fqdn, change_type="DeleteRecordSet"), get_change_MX_json(rs_update_dummy_fqdn, preference=1000, exchange="foo.bar."), get_change_MX_json(rs_update_dummy_fqdn, change_type="DeleteRecordSet") @@ -3009,16 +3067,16 @@ def test_mx_recordtype_update_delete_checks(shared_zone_test_context): try: for rs in to_create: - if rs['zoneId'] == dummy_zone['id']: + if rs["zoneId"] == dummy_zone["id"]: create_client = dummy_client else: create_client = ok_client create_rs = create_client.create_recordset(rs, status=202) - to_delete.append(create_client.wait_until_recordset_change_status(create_rs, 'Complete')) + to_delete.append(create_client.wait_until_recordset_change_status(create_rs, "Complete")) # Confirm that record set doesn't already exist - ok_client.get_recordset(ok_zone['id'], 'delete-nonexistent', status=404) + ok_client.get_recordset(ok_zone["id"], "delete-nonexistent", status=404) response = ok_client.create_batch_change(batch_change_input, status=400) @@ -3031,23 +3089,23 @@ def test_mx_recordtype_update_delete_checks(shared_zone_test_context): record_data={"preference": 1, "exchange": "foo.bar."}) # input validations failures: invalid input name, reverse zone error, invalid ttl - assert_failed_change_in_error_response(response[3], input_name="invalid-name$.ok.", record_type="MX", + assert_failed_change_in_error_response(response[3], input_name=f"invalid-name$.{ok_zone_name}", record_type="MX", 
record_data={"preference": 1, "exchange": "foo.bar."}, change_type="DeleteRecordSet", error_messages=[ - 'Invalid domain name: "invalid-name$.ok.", valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) - assert_failed_change_in_error_response(response[4], input_name="delete.ok.", ttl=29, record_type="MX", + f'Invalid domain name: "invalid-name$.{ok_zone_name}", valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) + assert_failed_change_in_error_response(response[4], input_name=f"delete.{ok_zone_name}", ttl=29, record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, error_messages=[ 'Invalid TTL: "29", must be a number between 30 and 2147483647.']) - assert_failed_change_in_error_response(response[5], input_name="bad-exchange.ok.", record_type="MX", + assert_failed_change_in_error_response(response[5], input_name=f"bad-exchange.{ok_zone_name}", record_type="MX", record_data={"preference": 1, "exchange": "foo$.bar."}, error_messages=[ 'Invalid domain name: "foo$.bar.", valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) - assert_failed_change_in_error_response(response[6], input_name="mx.2.0.192.in-addr.arpa.", record_type="MX", + assert_failed_change_in_error_response(response[6], input_name=f"mx.{ip4_zone_name}", record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, error_messages=[ - 'Invalid Record Type In Reverse Zone: record with name "mx.2.0.192.in-addr.arpa." 
and type "MX" is not allowed in a reverse zone.']) + f'Invalid Record Type In Reverse Zone: record with name "mx.{ip4_zone_name}" and type "MX" is not allowed in a reverse zone.']) # zone discovery failure assert_failed_change_in_error_response(response[7], input_name="no.zone.at.all.", record_type="MX", @@ -3056,30 +3114,30 @@ def test_mx_recordtype_update_delete_checks(shared_zone_test_context): "Zone Discovery Failed: zone for \"no.zone.at.all.\" does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS."]) # context validation failures: record does not exist, not authorized - assert_failed_change_in_error_response(response[8], input_name="delete-nonexistent.ok.", record_type="MX", + assert_failed_change_in_error_response(response[8], input_name=f"delete-nonexistent.{ok_zone_name}", record_type="MX", record_data=None, change_type="DeleteRecordSet", error_messages=[ "Record \"delete-nonexistent.ok.\" Does Not Exist: cannot delete a record that does not exist."]) - assert_failed_change_in_error_response(response[9], input_name="update-nonexistent.ok.", record_type="MX", + assert_failed_change_in_error_response(response[9], input_name=f"update-nonexistent.{ok_zone_name}", record_type="MX", record_data=None, change_type="DeleteRecordSet", error_messages=[ "Record \"update-nonexistent.ok.\" Does Not Exist: cannot delete a record that does not exist."]) - assert_successful_change_in_error_response(response[10], input_name="update-nonexistent.ok.", record_type="MX", + assert_successful_change_in_error_response(response[10], input_name=f"update-nonexistent.{ok_zone_name}", record_type="MX", record_data={"preference": 1000, "exchange": "foo.bar."}) assert_failed_change_in_error_response(response[11], input_name=rs_delete_dummy_fqdn, record_type="MX", record_data=None, change_type="DeleteRecordSet", - error_messages=["User \"ok\" is not authorized. 
Contact zone owner group: dummy-group at test@test.com to make DNS changes."]) + error_messages=[f"User \"ok\" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes."]) assert_failed_change_in_error_response(response[12], input_name=rs_update_dummy_fqdn, record_type="MX", record_data={"preference": 1000, "exchange": "foo.bar."}, - error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes."]) + error_messages=[f"User \"ok\" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes."]) assert_failed_change_in_error_response(response[13], input_name=rs_update_dummy_fqdn, record_type="MX", record_data=None, change_type="DeleteRecordSet", - error_messages=["User \"ok\" is not authorized. Contact zone owner group: dummy-group at test@test.com to make DNS changes."]) + error_messages=[f"User \"ok\" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes."]) finally: # Clean up updates - dummy_deletes = [rs for rs in to_delete if rs['zone']['id'] == dummy_zone['id']] - ok_deletes = [rs for rs in to_delete if rs['zone']['id'] != dummy_zone['id']] + dummy_deletes = [rs for rs in to_delete if rs["zone"]["id"] == dummy_zone["id"]] + ok_deletes = [rs for rs in to_delete if rs["zone"]["id"] != dummy_zone["id"]] clear_recordset_list(dummy_deletes, dummy_client) clear_recordset_list(ok_deletes, ok_client) @@ -3138,7 +3196,7 @@ def test_user_validation_shared(shared_zone_test_context): get_change_A_AAAA_json("update-test-batch.non.test.shared."), get_change_A_AAAA_json("delete-test-batch.non.test.shared.", change_type="DeleteRecordSet") ], - "ownerGroupId": shared_zone_test_context.ok_group['id'] + "ownerGroupId": shared_zone_test_context.ok_group["id"] } response = client.create_batch_change(batch_change_input, status=400) @@ -3163,42 +3221,43 @@ def 
test_create_batch_change_does_not_save_owner_group_id_for_non_shared_zone(sh
     ok_client = shared_zone_test_context.ok_vinyldns_client
     ok_zone = shared_zone_test_context.ok_zone
     ok_group = shared_zone_test_context.ok_group
+    ok_zone_name = shared_zone_test_context.ok_zone["name"]
 
     update_name = generate_record_name()
-    update_fqdn = update_name + ".ok."
-    update_rs = get_recordset_json(ok_zone, update_name, "A", [{"address": "127.0.0.1"}], 300)
+    update_fqdn = update_name + f".{ok_zone_name}"
+    update_rs = create_recordset(ok_zone, update_name, "A", [{"address": "127.0.0.1"}], 300)
 
     batch_change_input = {
         "changes": [
-            get_change_A_AAAA_json("no-owner-group-id.ok.", address="4.3.2.1"),
+            get_change_A_AAAA_json(f"no-owner-group-id.{ok_zone_name}", address="4.3.2.1"),
             get_change_A_AAAA_json(update_fqdn, address="1.2.3.4"),
             get_change_A_AAAA_json(update_fqdn, change_type="DeleteRecordSet")
         ],
-        "ownerGroupId": ok_group['id']
+        "ownerGroupId": ok_group["id"]
     }
     to_delete = []
 
     try:
         create_result = ok_client.create_recordset(update_rs, status=202)
-        to_delete.append(ok_client.wait_until_recordset_change_status(create_result, 'Complete'))
+        to_delete.append(ok_client.wait_until_recordset_change_status(create_result, "Complete"))
 
         result = ok_client.create_batch_change(batch_change_input, status=202)
         completed_batch = ok_client.wait_until_batch_change_completed(result)
 
-        assert_that(completed_batch['ownerGroupId'], is_(batch_change_input['ownerGroupId']))
+        assert_that(completed_batch["ownerGroupId"], is_(batch_change_input["ownerGroupId"]))
 
-        to_delete = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes']]
+        to_delete = [(change["zoneId"], change["recordSetId"]) for change in completed_batch["changes"]]
 
-        assert_change_success_response_values(result['changes'], zone=ok_zone, index=0, record_name="no-owner-group-id",
-                                              input_name="no-owner-group-id.ok.", record_data="4.3.2.1")
-        assert_change_success_response_values(result['changes'], zone=ok_zone, index=1, record_name=update_name,
-                                              input_name=update_fqdn, record_data="1.2.3.4")
-        assert_change_success_response_values(result['changes'], zone=ok_zone, index=2, record_name=update_name,
-                                              input_name=update_fqdn, change_type="DeleteRecordSet", record_data=None)
+        assert_change_success(result["changes"], zone=ok_zone, index=0, record_name="no-owner-group-id",
+                              input_name=f"no-owner-group-id.{ok_zone_name}", record_data="4.3.2.1")
+        assert_change_success(result["changes"], zone=ok_zone, index=1, record_name=update_name,
+                              input_name=update_fqdn, record_data="1.2.3.4")
+        assert_change_success(result["changes"], zone=ok_zone, index=2, record_name=update_name,
+                              input_name=update_fqdn, change_type="DeleteRecordSet", record_data=None)
 
         for (zoneId, recordSetId) in to_delete:
             get_recordset = ok_client.get_recordset(zoneId, recordSetId, status=200)
-            assert_that(get_recordset['recordSet'], is_not(has_key('ownerGroupId')))
+            assert_that(get_recordset["recordSet"], is_not(has_key("ownerGroupId")))
     finally:
         clear_zoneid_rsid_tuple_list(to_delete, ok_client)
 
@@ -3215,13 +3274,13 @@ def test_create_batch_change_for_shared_zone_owner_group_applied_logic(shared_zo
     without_group_name = generate_record_name()
     without_group_fqdn = without_group_name + ".shared."
-    update_rs_without_owner_group = get_recordset_json(shared_zone, without_group_name, "A",
-                                                       [{"address": "127.0.0.1"}], 300)
+    update_rs_without_owner_group = create_recordset(shared_zone, without_group_name, "A",
+                                                     [{"address": "127.0.0.1"}], 300)
 
     with_group_name = generate_record_name()
     with_group_fqdn = with_group_name + ".shared."
-    update_rs_with_owner_group = get_recordset_json(shared_zone, with_group_name, "A",
-                                                    [{"address": "127.0.0.1"}], 300, shared_record_group['id'])
+    update_rs_with_owner_group = create_recordset(shared_zone, with_group_name, "A",
+                                                  [{"address": "127.0.0.1"}], 300, shared_record_group["id"])
 
     create_name = generate_record_name()
     create_fqdn = create_name + ".shared."
@@ -3240,55 +3299,55 @@ def test_create_batch_change_for_shared_zone_owner_group_applied_logic(shared_zo try: # Create first record for updating and verify that owner group ID is not set create_result = shared_client.create_recordset(update_rs_without_owner_group, status=202) - to_delete.append(shared_client.wait_until_recordset_change_status(create_result, 'Complete')) + to_delete.append(shared_client.wait_until_recordset_change_status(create_result, "Complete")) - create_result = shared_client.get_recordset(create_result['recordSet']['zoneId'], - create_result['recordSet']['id'], status=200) - assert_that(create_result['recordSet'], is_not(has_key('ownerGroupId'))) + create_result = shared_client.get_recordset(create_result["recordSet"]["zoneId"], + create_result["recordSet"]["id"], status=200) + assert_that(create_result["recordSet"], is_not(has_key("ownerGroupId"))) # Create second record for updating and verify that owner group ID is set create_result = shared_client.create_recordset(update_rs_with_owner_group, status=202) - to_delete.append(shared_client.wait_until_recordset_change_status(create_result, 'Complete')) + to_delete.append(shared_client.wait_until_recordset_change_status(create_result, "Complete")) - create_result = shared_client.get_recordset(create_result['recordSet']['zoneId'], - create_result['recordSet']['id'], status=200) - assert_that(create_result['recordSet']['ownerGroupId'], is_(shared_record_group['id'])) + create_result = shared_client.get_recordset(create_result["recordSet"]["zoneId"], + create_result["recordSet"]["id"], status=200) + assert_that(create_result["recordSet"]["ownerGroupId"], is_(shared_record_group["id"])) # Create batch result = shared_client.create_batch_change(batch_change_input, status=202) completed_batch = shared_client.wait_until_batch_change_completed(result) - assert_that(completed_batch['ownerGroupId'], is_(batch_change_input['ownerGroupId'])) + assert_that(completed_batch["ownerGroupId"], 
is_(batch_change_input["ownerGroupId"])) - to_delete = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes']] + to_delete = [(change["zoneId"], change["recordSetId"]) for change in completed_batch["changes"]] - assert_that(result['ownerGroupId'], is_('shared-zone-group')) - assert_change_success_response_values(result['changes'], zone=shared_zone, index=0, - record_name=create_name, - input_name=create_fqdn, record_data="4.3.2.1") - assert_change_success_response_values(result['changes'], zone=shared_zone, index=1, - record_name=without_group_name, - input_name=without_group_fqdn, - record_data="1.2.3.4") - assert_change_success_response_values(result['changes'], zone=shared_zone, index=2, - record_name=without_group_name, - input_name=without_group_fqdn, - change_type="DeleteRecordSet", record_data=None) - assert_change_success_response_values(result['changes'], zone=shared_zone, index=3, - record_name=with_group_name, - input_name=with_group_fqdn, - record_data="1.2.3.4") - assert_change_success_response_values(result['changes'], zone=shared_zone, index=4, - record_name=with_group_name, - input_name=with_group_fqdn, - change_type="DeleteRecordSet", record_data=None) + assert_that(result["ownerGroupId"], is_("shared-zone-group")) + assert_change_success(result["changes"], zone=shared_zone, index=0, + record_name=create_name, + input_name=create_fqdn, record_data="4.3.2.1") + assert_change_success(result["changes"], zone=shared_zone, index=1, + record_name=without_group_name, + input_name=without_group_fqdn, + record_data="1.2.3.4") + assert_change_success(result["changes"], zone=shared_zone, index=2, + record_name=without_group_name, + input_name=without_group_fqdn, + change_type="DeleteRecordSet", record_data=None) + assert_change_success(result["changes"], zone=shared_zone, index=3, + record_name=with_group_name, + input_name=with_group_fqdn, + record_data="1.2.3.4") + assert_change_success(result["changes"], zone=shared_zone, 
index=4, + record_name=with_group_name, + input_name=with_group_fqdn, + change_type="DeleteRecordSet", record_data=None) for (zoneId, recordSetId) in to_delete: get_recordset = shared_client.get_recordset(zoneId, recordSetId, status=200) - if get_recordset['recordSet']['name'] == with_group_name: - assert_that(get_recordset['recordSet']['ownerGroupId'], is_(shared_record_group['id'])) + if get_recordset["recordSet"]["name"] == with_group_name: + assert_that(get_recordset["recordSet"]["ownerGroupId"], is_(shared_record_group["id"])) else: - assert_that(get_recordset['recordSet']['ownerGroupId'], is_(batch_change_input['ownerGroupId'])) + assert_that(get_recordset["recordSet"]["ownerGroupId"], is_(batch_change_input["ownerGroupId"])) finally: clear_zoneid_rsid_tuple_list(to_delete, shared_client) @@ -3307,8 +3366,8 @@ def test_create_batch_change_for_shared_zone_with_invalid_owner_group_id_fails(s "ownerGroupId": "non-existent-owner-group-id" } - errors = shared_client.create_batch_change(batch_change_input, status=400)['errors'] - assert_that(errors, contains('Group with ID "non-existent-owner-group-id" was not found')) + errors = shared_client.create_batch_change(batch_change_input, status=400)["errors"] + assert_that(errors, contains_exactly('Group with ID "non-existent-owner-group-id" was not found')) def test_create_batch_change_for_shared_zone_with_unauthorized_owner_group_id_fails(shared_zone_test_context): @@ -3322,12 +3381,12 @@ def test_create_batch_change_for_shared_zone_with_unauthorized_owner_group_id_fa "changes": [ get_change_A_AAAA_json("no-owner-group-id.shared.", address="4.3.2.1") ], - "ownerGroupId": ok_group['id'] + "ownerGroupId": ok_group["id"] } - errors = shared_client.create_batch_change(batch_change_input, status=400)['errors'] - assert_that(errors, contains('User "sharedZoneUser" must be a member of group "' + ok_group[ - 'id'] + '" to apply this group to batch changes.')) + errors = shared_client.create_batch_change(batch_change_input, 
status=400)["errors"] + assert_that(errors, contains_exactly('User "sharedZoneUser" must be a member of group "' + ok_group[ + "id"] + '" to apply this group to batch changes.')) def test_create_batch_change_validation_with_owner_group_id(shared_zone_test_context): @@ -3348,35 +3407,36 @@ def test_create_batch_change_validation_with_owner_group_id(shared_zone_test_con ok_group = shared_zone_test_context.ok_group shared_zone = shared_zone_test_context.shared_zone ok_zone = shared_zone_test_context.ok_zone + ok_zone_name = shared_zone_test_context.ok_zone["name"] # record sets to setup private_update_name = generate_record_name() - private_update_fqdn = private_update_name + ".ok." - private_update = get_recordset_json(ok_zone, private_update_name, "A", [{"address": "1.1.1.1"}], 200) + private_update_fqdn = private_update_name + f".{ok_zone_name}" + private_update = create_recordset(ok_zone, private_update_name, "A", [{"address": "1.1.1.1"}], 200) shared_update_no_group_name = generate_record_name() shared_update_no_group_fqdn = shared_update_no_group_name + ".shared." - shared_update_no_owner_group = get_recordset_json(shared_zone, shared_update_no_group_name, "A", - [{"address": "1.1.1.1"}], 200) + shared_update_no_owner_group = create_recordset(shared_zone, shared_update_no_group_name, "A", + [{"address": "1.1.1.1"}], 200) shared_update_group_name = generate_record_name() shared_update_group_fqdn = shared_update_group_name + ".shared." - shared_update_existing_owner_group = get_recordset_json(shared_zone, shared_update_group_name, "A", - [{"address": "1.1.1.1"}], 200, shared_group['id']) + shared_update_existing_owner_group = create_recordset(shared_zone, shared_update_group_name, "A", + [{"address": "1.1.1.1"}], 200, shared_group["id"]) private_delete_name = generate_record_name() - private_delete_fqdn = private_delete_name + ".ok." 
- private_delete = get_recordset_json(ok_zone, private_delete_name, "A", [{"address": "1.1.1.1"}], 200) + private_delete_fqdn = private_delete_name + f".{ok_zone_name}" + private_delete = create_recordset(ok_zone, private_delete_name, "A", [{"address": "1.1.1.1"}], 200) shared_delete_name = generate_record_name() shared_delete_fqdn = shared_delete_name + ".shared." - shared_delete = get_recordset_json(shared_zone, shared_delete_name, "A", [{"address": "1.1.1.1"}], 200) + shared_delete = create_recordset(shared_zone, shared_delete_name, "A", [{"address": "1.1.1.1"}], 200) to_delete_ok = {} to_delete_shared = {} private_create_name = generate_record_name() - private_create_fqdn = private_create_name + ".ok." + private_create_fqdn = private_create_name + f".{ok_zone_name}" shared_create_name = generate_record_name() shared_create_fqdn = shared_create_name + ".shared." batch_change_input = { @@ -3392,90 +3452,90 @@ def test_create_batch_change_validation_with_owner_group_id(shared_zone_test_con get_change_A_AAAA_json(private_delete_fqdn, change_type="DeleteRecordSet"), get_change_A_AAAA_json(shared_delete_fqdn, change_type="DeleteRecordSet") ], - "ownerGroupId": ok_group['id'] + "ownerGroupId": ok_group["id"] } try: for rs in [private_update, private_delete]: create_rs = ok_client.create_recordset(rs, status=202) - ok_client.wait_until_recordset_change_status(create_rs, 'Complete') + ok_client.wait_until_recordset_change_status(create_rs, "Complete") for rs in [shared_update_no_owner_group, shared_update_existing_owner_group, shared_delete]: create_rs = shared_client.create_recordset(rs, status=202) - shared_client.wait_until_recordset_change_status(create_rs, 'Complete') + shared_client.wait_until_recordset_change_status(create_rs, "Complete") result = ok_client.create_batch_change(batch_change_input, status=202) completed_batch = ok_client.wait_until_batch_change_completed(result) - assert_that(completed_batch['ownerGroupId'], is_(ok_group['id'])) + 
assert_that(completed_batch["ownerGroupId"], is_(ok_group["id"])) # set here because multiple items in the batch combine to one RS - record_set_list = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes'] if - private_delete_name not in change['recordName'] and change['zoneId'] == ok_zone['id']] + record_set_list = [(change["zoneId"], change["recordSetId"]) for change in completed_batch["changes"] if + private_delete_name not in change["recordName"] and change["zoneId"] == ok_zone["id"]] to_delete_ok = set(record_set_list) - record_set_list = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes'] if - shared_delete_name not in change['recordName'] and change['zoneId'] == shared_zone['id']] + record_set_list = [(change["zoneId"], change["recordSetId"]) for change in completed_batch["changes"] if + shared_delete_name not in change["recordName"] and change["zoneId"] == shared_zone["id"]] to_delete_shared = set(record_set_list) - assert_change_success_response_values(completed_batch['changes'], zone=ok_zone, index=0, - record_name=private_create_name, - input_name=private_create_fqdn, record_data="1.1.1.1") - assert_change_success_response_values(completed_batch['changes'], zone=shared_zone, index=1, - record_name=shared_create_name, - input_name=shared_create_fqdn, record_data="1.1.1.1") - assert_change_success_response_values(completed_batch['changes'], zone=ok_zone, index=2, - record_name=private_update_name, - input_name=private_update_fqdn, record_data="1.1.1.1", ttl=300) - assert_change_success_response_values(completed_batch['changes'], zone=ok_zone, index=3, - record_name=private_update_name, - input_name=private_update_fqdn, record_data=None, - change_type="DeleteRecordSet") - assert_change_success_response_values(completed_batch['changes'], zone=shared_zone, index=4, - record_name=shared_update_no_group_name, - input_name=shared_update_no_group_fqdn, record_data="1.1.1.1", - ttl=300) - 
assert_change_success_response_values(completed_batch['changes'], zone=shared_zone, index=5, - record_name=shared_update_no_group_name, - input_name=shared_update_no_group_fqdn, record_data=None, - change_type="DeleteRecordSet") - assert_change_success_response_values(completed_batch['changes'], zone=shared_zone, index=6, - record_name=shared_update_group_name, - input_name=shared_update_group_fqdn, - record_data="1.1.1.1", ttl=300) - assert_change_success_response_values(completed_batch['changes'], zone=shared_zone, index=7, - record_name=shared_update_group_name, - input_name=shared_update_group_fqdn, record_data=None, - change_type="DeleteRecordSet") - assert_change_success_response_values(completed_batch['changes'], zone=ok_zone, index=8, - record_name=private_delete_name, - input_name=private_delete_fqdn, record_data=None, - change_type="DeleteRecordSet") - assert_change_success_response_values(completed_batch['changes'], zone=shared_zone, index=9, - record_name=shared_delete_name, - input_name=shared_delete_fqdn, record_data=None, - change_type="DeleteRecordSet") + assert_change_success(completed_batch["changes"], zone=ok_zone, index=0, + record_name=private_create_name, + input_name=private_create_fqdn, record_data="1.1.1.1") + assert_change_success(completed_batch["changes"], zone=shared_zone, index=1, + record_name=shared_create_name, + input_name=shared_create_fqdn, record_data="1.1.1.1") + assert_change_success(completed_batch["changes"], zone=ok_zone, index=2, + record_name=private_update_name, + input_name=private_update_fqdn, record_data="1.1.1.1", ttl=300) + assert_change_success(completed_batch["changes"], zone=ok_zone, index=3, + record_name=private_update_name, + input_name=private_update_fqdn, record_data=None, + change_type="DeleteRecordSet") + assert_change_success(completed_batch["changes"], zone=shared_zone, index=4, + record_name=shared_update_no_group_name, + input_name=shared_update_no_group_fqdn, record_data="1.1.1.1", + ttl=300) + 
assert_change_success(completed_batch["changes"], zone=shared_zone, index=5, + record_name=shared_update_no_group_name, + input_name=shared_update_no_group_fqdn, record_data=None, + change_type="DeleteRecordSet") + assert_change_success(completed_batch["changes"], zone=shared_zone, index=6, + record_name=shared_update_group_name, + input_name=shared_update_group_fqdn, + record_data="1.1.1.1", ttl=300) + assert_change_success(completed_batch["changes"], zone=shared_zone, index=7, + record_name=shared_update_group_name, + input_name=shared_update_group_fqdn, record_data=None, + change_type="DeleteRecordSet") + assert_change_success(completed_batch["changes"], zone=ok_zone, index=8, + record_name=private_delete_name, + input_name=private_delete_fqdn, record_data=None, + change_type="DeleteRecordSet") + assert_change_success(completed_batch["changes"], zone=shared_zone, index=9, + record_name=shared_delete_name, + input_name=shared_delete_fqdn, record_data=None, + change_type="DeleteRecordSet") # verify record set owner group for result_rs in to_delete_ok: rs_result = ok_client.get_recordset(result_rs[0], result_rs[1], status=200) - assert_that(rs_result['recordSet'], is_not(has_key("ownerGroupId"))) + assert_that(rs_result["recordSet"], is_not(has_key("ownerGroupId"))) for result_rs in to_delete_shared: rs_result = shared_client.get_recordset(result_rs[0], result_rs[1], status=200) - if rs_result['recordSet']['name'] == shared_update_group_name: - assert_that(rs_result['recordSet']['ownerGroupId'], is_(shared_group['id'])) + if rs_result["recordSet"]["name"] == shared_update_group_name: + assert_that(rs_result["recordSet"]["ownerGroupId"], is_(shared_group["id"])) else: - assert_that(rs_result['recordSet']['ownerGroupId'], is_(ok_group['id'])) + assert_that(rs_result["recordSet"]["ownerGroupId"], is_(ok_group["id"])) finally: for tup in to_delete_ok: delete_result = ok_client.delete_recordset(tup[0], tup[1], status=202) - 
ok_client.wait_until_recordset_change_status(delete_result, 'Complete') + ok_client.wait_until_recordset_change_status(delete_result, "Complete") for tup in to_delete_shared: delete_result = shared_client.delete_recordset(tup[0], tup[1], status=202) - shared_client.wait_until_recordset_change_status(delete_result, 'Complete') + shared_client.wait_until_recordset_change_status(delete_result, "Complete") def test_create_batch_change_validation_without_owner_group_id(shared_zone_test_context): @@ -3487,35 +3547,36 @@ def test_create_batch_change_validation_without_owner_group_id(shared_zone_test_ shared_group = shared_zone_test_context.shared_record_group shared_zone = shared_zone_test_context.shared_zone ok_zone = shared_zone_test_context.ok_zone + ok_zone_name = shared_zone_test_context.ok_zone["name"] # record sets to setup private_update_name = generate_record_name() - private_update_fqdn = private_update_name + ".ok." - private_update = get_recordset_json(ok_zone, private_update_name, "A", [{"address": "1.1.1.1"}], 200) + private_update_fqdn = private_update_name + f".{ok_zone_name}" + private_update = create_recordset(ok_zone, private_update_name, "A", [{"address": "1.1.1.1"}], 200) shared_update_no_group_name = generate_record_name() shared_update_no_group_fqdn = shared_update_no_group_name + ".shared." - shared_update_no_owner_group = get_recordset_json(shared_zone, shared_update_no_group_name, "A", - [{"address": "1.1.1.1"}], 200) + shared_update_no_owner_group = create_recordset(shared_zone, shared_update_no_group_name, "A", + [{"address": "1.1.1.1"}], 200) shared_update_group_name = generate_record_name() shared_update_group_fqdn = shared_update_group_name + ".shared." 
- shared_update_existing_owner_group = get_recordset_json(shared_zone, shared_update_group_name, "A", - [{"address": "1.1.1.1"}], 200, shared_group['id']) + shared_update_existing_owner_group = create_recordset(shared_zone, shared_update_group_name, "A", + [{"address": "1.1.1.1"}], 200, shared_group["id"]) private_delete_name = generate_record_name() - private_delete_fqdn = private_delete_name + ".ok." - private_delete = get_recordset_json(ok_zone, private_delete_name, "A", [{"address": "1.1.1.1"}], 200) + private_delete_fqdn = private_delete_name + f".{ok_zone_name}" + private_delete = create_recordset(ok_zone, private_delete_name, "A", [{"address": "1.1.1.1"}], 200) shared_delete_name = generate_record_name() shared_delete_fqdn = shared_delete_name + ".shared." - shared_delete = get_recordset_json(shared_zone, shared_delete_name, "A", [{"address": "1.1.1.1"}], 200) + shared_delete = create_recordset(shared_zone, shared_delete_name, "A", [{"address": "1.1.1.1"}], 200) to_delete_ok = [] to_delete_shared = [] private_create_name = generate_record_name() - private_create_fqdn = private_create_name + ".ok." + private_create_fqdn = private_create_name + f".{ok_zone_name}" shared_create_name = generate_record_name() shared_create_fqdn = shared_create_name + ".shared." 
batch_change_input = { @@ -3536,24 +3597,24 @@ def test_create_batch_change_validation_without_owner_group_id(shared_zone_test_ try: for rs in [private_update, private_delete]: create_rs = ok_client.create_recordset(rs, status=202) - to_delete_ok.append(ok_client.wait_until_recordset_change_status(create_rs, 'Complete')['recordSet']['id']) + to_delete_ok.append(ok_client.wait_until_recordset_change_status(create_rs, "Complete")["recordSet"]["id"]) for rs in [shared_update_no_owner_group, shared_update_existing_owner_group, shared_delete]: create_rs = shared_client.create_recordset(rs, status=202) to_delete_shared.append( - shared_client.wait_until_recordset_change_status(create_rs, 'Complete')['recordSet']['id']) + shared_client.wait_until_recordset_change_status(create_rs, "Complete")["recordSet"]["id"]) response = ok_client.create_batch_change(batch_change_input, status=400) assert_successful_change_in_error_response(response[0], input_name=private_create_fqdn) assert_failed_change_in_error_response(response[1], input_name=shared_create_fqdn, error_messages=[ - 'Zone "shared." is a shared zone, so owner group ID must be specified for record "' + shared_create_name + '".']) + "Zone \"shared.\" is a shared zone, so owner group ID must be specified for record \"" + shared_create_name + "\"."]) assert_successful_change_in_error_response(response[2], input_name=private_update_fqdn, ttl=300) assert_successful_change_in_error_response(response[3], change_type="DeleteRecordSet", input_name=private_update_fqdn) assert_failed_change_in_error_response(response[4], input_name=shared_update_no_group_fqdn, error_messages=[ - 'Zone "shared." 
is a shared zone, so owner group ID must be specified for record "' + shared_update_no_group_name + '".'], + "Zone \"shared.\" is a shared zone, so owner group ID must be specified for record \"" + shared_update_no_group_name + "\"."], ttl=300) assert_successful_change_in_error_response(response[5], change_type="DeleteRecordSet", input_name=shared_update_no_group_fqdn) @@ -3568,12 +3629,12 @@ def test_create_batch_change_validation_without_owner_group_id(shared_zone_test_ finally: for rsId in to_delete_ok: - delete_result = ok_client.delete_recordset(ok_zone['id'], rsId, status=202) - ok_client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = ok_client.delete_recordset(ok_zone["id"], rsId, status=202) + ok_client.wait_until_recordset_change_status(delete_result, "Complete") for rsId in to_delete_shared: - delete_result = shared_client.delete_recordset(shared_zone['id'], rsId, status=202) - shared_client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = shared_client.delete_recordset(shared_zone["id"], rsId, status=202) + shared_client.wait_until_recordset_change_status(delete_result, "Complete") def test_create_batch_delete_recordset_for_unassociated_user_in_owner_group_succeeds(shared_zone_test_context): @@ -3587,8 +3648,8 @@ def test_create_batch_delete_recordset_for_unassociated_user_in_owner_group_succ shared_delete_name = generate_record_name() shared_delete_fqdn = shared_delete_name + ".shared." 
-    shared_delete = get_recordset_json(shared_zone, shared_delete_name, "A", [{"address": "1.1.1.1"}], 200,
-                                       shared_group['id'])
+    shared_delete = create_recordset(shared_zone, shared_delete_name, "A", [{"address": "1.1.1.1"}], 200,
+                                     shared_group["id"])
 
     batch_change_input = {
         "changes": [
             get_change_A_AAAA_json(shared_delete_fqdn, change_type="DeleteRecordSet")
@@ -3596,15 +3657,15 @@ def test_create_batch_delete_recordset_for_unassociated_user_in_owner_group_succ
     }
 
     create_rs = shared_client.create_recordset(shared_delete, status=202)
-    shared_client.wait_until_recordset_change_status(create_rs, 'Complete')
+    shared_client.wait_until_recordset_change_status(create_rs, "Complete")
 
     result = ok_client.create_batch_change(batch_change_input, status=202)
     completed_batch = ok_client.wait_until_batch_change_completed(result)
 
-    assert_change_success_response_values(completed_batch['changes'], zone=shared_zone, index=0,
-                                          record_name=shared_delete_name,
-                                          input_name=shared_delete_fqdn, record_data=None,
-                                          change_type="DeleteRecordSet")
+    assert_change_success(completed_batch["changes"], zone=shared_zone, index=0,
+                          record_name=shared_delete_name,
+                          input_name=shared_delete_fqdn, record_data=None,
+                          change_type="DeleteRecordSet")
 
 
 def test_create_batch_delete_recordset_for_unassociated_user_not_in_owner_group_fails(shared_zone_test_context):
@@ -3619,8 +3680,8 @@ def test_create_batch_delete_recordset_for_unassociated_user_not_in_owner_group_
     shared_delete_name = generate_record_name()
     shared_delete_fqdn = shared_delete_name + ".shared."
-    shared_delete = get_recordset_json(shared_zone, shared_delete_name, "A", [{"address": "1.1.1.1"}], 200,
-                                       shared_group['id'])
+    shared_delete = create_recordset(shared_zone, shared_delete_name, "A", [{"address": "1.1.1.1"}], 200,
+                                     shared_group["id"])
 
     batch_change_input = {
         "changes": [
@@ -3630,7 +3691,7 @@ def test_create_batch_delete_recordset_for_unassociated_user_not_in_owner_group_
     try:
         create_rs = shared_client.create_recordset(shared_delete, status=202)
-        shared_client.wait_until_recordset_change_status(create_rs, 'Complete')
+        shared_client.wait_until_recordset_change_status(create_rs, "Complete")
 
         response = unassociated_client.create_batch_change(batch_change_input, status=400)
 
@@ -3640,8 +3701,8 @@ def test_create_batch_delete_recordset_for_unassociated_user_not_in_owner_group_
     finally:
         if create_rs:
-            delete_rs = shared_client.delete_recordset(shared_zone['id'], create_rs['recordSet']['id'], status=202)
-            shared_client.wait_until_recordset_change_status(delete_rs, 'Complete')
+            delete_rs = shared_client.delete_recordset(shared_zone["id"], create_rs["recordSet"]["id"], status=202)
+            shared_client.wait_until_recordset_change_status(delete_rs, "Complete")
 
 
 def test_create_batch_delete_recordset_for_zone_admin_not_in_owner_group_succeeds(shared_zone_test_context):
@@ -3655,7 +3716,7 @@ def test_create_batch_delete_recordset_for_zone_admin_not_in_owner_group_succeed
     shared_delete_name = generate_record_name()
     shared_delete_fqdn = shared_delete_name + ".shared."
- shared_delete = get_recordset_json(shared_zone, shared_delete_name, "A", [{"address": "1.1.1.1"}], 200, ok_group['id']) + shared_delete = create_recordset(shared_zone, shared_delete_name, "A", [{"address": "1.1.1.1"}], 200, ok_group["id"]) batch_change_input = { "changes": [ @@ -3664,15 +3725,15 @@ def test_create_batch_delete_recordset_for_zone_admin_not_in_owner_group_succeed } create_rs = ok_client.create_recordset(shared_delete, status=202) - shared_client.wait_until_recordset_change_status(create_rs, 'Complete') + shared_client.wait_until_recordset_change_status(create_rs, "Complete") result = shared_client.create_batch_change(batch_change_input, status=202) completed_batch = shared_client.wait_until_batch_change_completed(result) - assert_change_success_response_values(completed_batch['changes'], zone=shared_zone, index=0, - record_name=shared_delete_name, - input_name=shared_delete_fqdn, record_data=None, - change_type="DeleteRecordSet") + assert_change_success(completed_batch["changes"], zone=shared_zone, index=0, + record_name=shared_delete_name, + input_name=shared_delete_fqdn, record_data=None, + change_type="DeleteRecordSet") def test_create_batch_update_record_in_shared_zone_for_unassociated_user_in_owner_group_succeeds( @@ -3688,8 +3749,8 @@ def test_create_batch_update_record_in_shared_zone_for_unassociated_user_in_owne shared_update_name = generate_record_name() shared_update_fqdn = shared_update_name + ".shared." 
- shared_update = get_recordset_json(shared_zone, shared_update_name, "MX", [{"preference": 1, "exchange": "foo.bar."}], 200, - shared_record_group['id']) + shared_update = create_recordset(shared_zone, shared_update_name, "MX", [{"preference": 1, "exchange": "foo.bar."}], 200, + shared_record_group["id"]) batch_change_input = { "changes": [ @@ -3700,23 +3761,23 @@ def test_create_batch_update_record_in_shared_zone_for_unassociated_user_in_owne try: create_rs = shared_client.create_recordset(shared_update, status=202) - shared_client.wait_until_recordset_change_status(create_rs, 'Complete') + shared_client.wait_until_recordset_change_status(create_rs, "Complete") result = ok_client.create_batch_change(batch_change_input, status=202) completed_batch = ok_client.wait_until_batch_change_completed(result) - assert_change_success_response_values(completed_batch['changes'], zone=shared_zone, index=0, record_name=shared_update_name, - ttl=300, - record_type="MX", input_name=shared_update_fqdn, - record_data={'preference': 1, 'exchange': 'foo.bar.'}) - assert_change_success_response_values(completed_batch['changes'], zone=shared_zone, index=1, record_name=shared_update_name, - record_type="MX", input_name=shared_update_fqdn, record_data=None, - change_type="DeleteRecordSet") + assert_change_success(completed_batch["changes"], zone=shared_zone, index=0, record_name=shared_update_name, + ttl=300, + record_type="MX", input_name=shared_update_fqdn, + record_data={"preference": 1, "exchange": "foo.bar."}) + assert_change_success(completed_batch["changes"], zone=shared_zone, index=1, record_name=shared_update_name, + record_type="MX", input_name=shared_update_fqdn, record_data=None, + change_type="DeleteRecordSet") finally: if create_rs: - delete_rs = shared_client.delete_recordset(shared_zone['id'], create_rs['recordSet']['id'], status=202) - shared_client.wait_until_recordset_change_status(delete_rs, 'Complete') + delete_rs = shared_client.delete_recordset(shared_zone["id"], 
create_rs["recordSet"]["id"], status=202) + shared_client.wait_until_recordset_change_status(delete_rs, "Complete") def test_create_batch_with_global_acl_rule_applied_succeeds(shared_zone_test_context): @@ -3730,71 +3791,73 @@ def test_create_batch_with_global_acl_rule_applied_succeeds(shared_zone_test_con classless_base_zone = shared_zone_test_context.classless_base_zone create_a_rs = None create_ptr_rs = None - dummy_group_id = shared_zone_test_context.dummy_group['id'] + dummy_group_id = shared_zone_test_context.dummy_group["id"] + dummy_group_name = shared_zone_test_context.dummy_group["name"] + ip4_prefix = shared_zone_test_context.ip4_classless_prefix a_name = generate_record_name() a_fqdn = a_name + ".shared." - a_record = get_recordset_json(shared_zone, a_name, "A", [{"address": '1.1.1.1'}], 200, "shared-zone-group") + a_record = create_recordset(shared_zone, a_name, "A", [{"address": "1.1.1.1"}], 200, "shared-zone-group") - ptr_record = get_recordset_json(classless_base_zone, "44", "PTR", [{'ptrdname': 'foo.'}], 200, None) + ptr_record = create_recordset(classless_base_zone, "44", "PTR", [{"ptrdname": "foo."}], 200, None) batch_change_input = { "ownerGroupId": dummy_group_id, "changes": [ - get_change_A_AAAA_json(a_fqdn, record_type="A", ttl=200, address="192.0.2.44"), - get_change_PTR_json("192.0.2.44", ptrdname=a_fqdn), + get_change_A_AAAA_json(a_fqdn, record_type="A", ttl=200, address=f"{ip4_prefix}.44"), + get_change_PTR_json(f"{ip4_prefix}.44", ptrdname=a_fqdn), get_change_A_AAAA_json(a_fqdn, record_type="A", address="1.1.1.1", change_type="DeleteRecordSet"), - get_change_PTR_json("192.0.2.44", change_type="DeleteRecordSet") + get_change_PTR_json(f"{ip4_prefix}.44", change_type="DeleteRecordSet") ] } try: create_a_rs = shared_client.create_recordset(a_record, status=202) - shared_client.wait_until_recordset_change_status(create_a_rs, 'Complete') + shared_client.wait_until_recordset_change_status(create_a_rs, "Complete") create_ptr_rs = 
ok_client.create_recordset(ptr_record, status=202) - ok_client.wait_until_recordset_change_status(create_ptr_rs, 'Complete') + ok_client.wait_until_recordset_change_status(create_ptr_rs, "Complete") result = dummy_client.create_batch_change(batch_change_input, status=202) completed_batch = dummy_client.wait_until_batch_change_completed(result) - assert_change_success_response_values(completed_batch['changes'], zone=shared_zone, index=0, - record_name=a_name, ttl=200, - record_type="A", input_name=a_fqdn, record_data="192.0.2.44") - assert_change_success_response_values(completed_batch['changes'], zone=classless_base_zone, index=1, - record_name="44", - record_type="PTR", input_name="192.0.2.44", - record_data=a_fqdn) - assert_change_success_response_values(completed_batch['changes'], zone=shared_zone, index=2, - record_name=a_name, ttl=200, - record_type="A", input_name=a_fqdn, record_data=None, - change_type="DeleteRecordSet") - assert_change_success_response_values(completed_batch['changes'], zone=classless_base_zone, index=3, - record_name="44", - record_type="PTR", input_name="192.0.2.44", record_data=None, - change_type="DeleteRecordSet") + assert_change_success(completed_batch["changes"], zone=shared_zone, index=0, + record_name=a_name, ttl=200, + record_type="A", input_name=a_fqdn, record_data=f"{ip4_prefix}.44") + assert_change_success(completed_batch["changes"], zone=classless_base_zone, index=1, + record_name="44", + record_type="PTR", input_name=f"{ip4_prefix}.44", + record_data=a_fqdn) + assert_change_success(completed_batch["changes"], zone=shared_zone, index=2, + record_name=a_name, ttl=200, + record_type="A", input_name=a_fqdn, record_data=None, + change_type="DeleteRecordSet") + assert_change_success(completed_batch["changes"], zone=classless_base_zone, index=3, + record_name="44", + record_type="PTR", input_name=f"{ip4_prefix}.44", record_data=None, + change_type="DeleteRecordSet") finally: if create_a_rs: - retrieved = 
shared_client.get_recordset(shared_zone['id'], create_a_rs['recordSet']['id']) - retrieved_rs = retrieved['recordSet'] + retrieved = shared_client.get_recordset(shared_zone["id"], create_a_rs["recordSet"]["id"]) + retrieved_rs = retrieved["recordSet"] - assert_that(retrieved_rs['ownerGroupId'], is_('shared-zone-group')) - assert_that(retrieved_rs['ownerGroupName'], is_('testSharedZoneGroup')) + assert_that(retrieved_rs["ownerGroupId"], is_("shared-zone-group")) + assert_that(retrieved_rs["ownerGroupName"], is_("testSharedZoneGroup")) - delete_a_rs = shared_client.delete_recordset(shared_zone['id'], create_a_rs['recordSet']['id'], status=202) - shared_client.wait_until_recordset_change_status(delete_a_rs, 'Complete') + delete_a_rs = shared_client.delete_recordset(shared_zone["id"], create_a_rs["recordSet"]["id"], status=202) + shared_client.wait_until_recordset_change_status(delete_a_rs, "Complete") if create_ptr_rs: - retrieved = dummy_client.get_recordset(shared_zone['id'], create_ptr_rs['recordSet']['id']) - retrieved_rs = retrieved['recordSet'] + retrieved = dummy_client.get_recordset(shared_zone["id"], create_ptr_rs["recordSet"]["id"]) + retrieved_rs = retrieved["recordSet"] - assert_that(retrieved_rs, is_not(has_key('ownerGroupId'))) - assert_that(retrieved_rs, is_not(has_key('dummy-group'))) + assert_that(retrieved_rs, is_not(has_key("ownerGroupId"))) + assert_that(retrieved_rs, is_not(has_key(dummy_group_name))) - delete_ptr_rs = ok_client.delete_recordset(classless_base_zone['id'], create_ptr_rs['recordSet']['id'], + delete_ptr_rs = ok_client.delete_recordset(classless_base_zone["id"], create_ptr_rs["recordSet"]["id"], status=202) - ok_client.wait_until_recordset_change_status(delete_ptr_rs, 'Complete') + ok_client.wait_until_recordset_change_status(delete_ptr_rs, "Complete") def test_create_batch_with_irrelevant_global_acl_rule_applied_fails(shared_zone_test_context): @@ -3804,32 +3867,34 @@ def
test_create_batch_with_irrelevant_global_acl_rule_applied_fails(shared_zone_ test_user_client = shared_zone_test_context.test_user_client shared_client = shared_zone_test_context.shared_zone_vinyldns_client shared_zone = shared_zone_test_context.shared_zone + ip4_prefix = shared_zone_test_context.ip4_classless_prefix + create_a_rs = None a_name = generate_record_name() a_fqdn = a_name + ".shared." - a_record = get_recordset_json(shared_zone, a_name, "A", [{"address": '1.1.1.1'}], 200, "shared-zone-group") + a_record = create_recordset(shared_zone, a_name, "A", [{"address": "1.1.1.1"}], 200, "shared-zone-group") batch_change_input = { "changes": [ - get_change_A_AAAA_json(a_fqdn, record_type="A", address="192.0.2.45"), + get_change_A_AAAA_json(a_fqdn, record_type="A", address=f"{ip4_prefix}.45"), get_change_A_AAAA_json(a_fqdn, record_type="A", change_type="DeleteRecordSet"), ] } try: create_a_rs = shared_client.create_recordset(a_record, status=202) - shared_client.wait_until_recordset_change_status(create_a_rs, 'Complete') + shared_client.wait_until_recordset_change_status(create_a_rs, "Complete") response = test_user_client.create_batch_change(batch_change_input, status=400) assert_failed_change_in_error_response(response[0], input_name=a_fqdn, record_type="A", - change_type="Add", record_data="192.0.2.45", + change_type="Add", record_data=f"{ip4_prefix}.45", error_messages=['User "testuser" is not authorized. 
Contact record owner group: testSharedZoneGroup at email to make DNS changes.']) finally: if create_a_rs: - delete_a_rs = shared_client.delete_recordset(shared_zone['id'], create_a_rs['recordSet']['id'], status=202) - shared_client.wait_until_recordset_change_status(delete_a_rs, 'Complete') + delete_a_rs = shared_client.delete_recordset(shared_zone["id"], create_a_rs["recordSet"]["id"], status=202) + shared_client.wait_until_recordset_change_status(delete_a_rs, "Complete") @pytest.mark.manual_batch_review @@ -3846,39 +3911,41 @@ def test_create_batch_with_zone_name_requiring_manual_review(shared_zone_test_co get_change_A_AAAA_json("update-test-batch.zone.requires.review."), get_change_A_AAAA_json("delete-test-batch.zone.requires.review.", change_type="DeleteRecordSet") ], - "ownerGroupId": shared_zone_test_context.ok_group['id'] + "ownerGroupId": shared_zone_test_context.ok_group["id"] } response = None try: response = client.create_batch_change(batch_change_input, status=202) - get_batch = client.get_batch_change(response['id']) - assert_that(get_batch['status'], is_('PendingReview')) - assert_that(get_batch['approvalStatus'], is_('PendingReview')) - for i in xrange(0, 3): - assert_that(get_batch['changes'][i]['status'], is_('NeedsReview')) - assert_that(get_batch['changes'][i]['validationErrors'][0]['errorType'], is_('RecordRequiresManualReview')) + get_batch = client.get_batch_change(response["id"]) + assert_that(get_batch["status"], is_("PendingReview")) + assert_that(get_batch["approvalStatus"], is_("PendingReview")) + for i in range(0, 3): + assert_that(get_batch["changes"][i]["status"], is_("NeedsReview")) + assert_that(get_batch["changes"][i]["validationErrors"][0]["errorType"], is_("RecordRequiresManualReview")) finally: # Clean up so data doesn't change if response: - rejecter.reject_batch_change(response['id'], status=200) + rejecter.reject_batch_change(response["id"], status=200) + def 
test_create_batch_delete_record_for_invalid_record_data_fails(shared_zone_test_context): """ Test delete record set fails for non-existent record and non-existent record data """ client = shared_zone_test_context.ok_vinyldns_client + ok_zone_name = shared_zone_test_context.ok_zone["name"] a_delete_name = generate_record_name() - a_delete_fqdn = a_delete_name + ".ok." - a_delete = get_recordset_json(shared_zone_test_context.ok_zone, a_delete_fqdn, "A", [{"address": "1.1.1.1"}]) + a_delete_fqdn = a_delete_name + f".{ok_zone_name}" + a_delete = create_recordset(shared_zone_test_context.ok_zone, a_delete_fqdn, "A", [{"address": "1.1.1.1"}]) batch_change_input = { "comments": "test delete record failures", "changes": [ - get_change_A_AAAA_json("delete-non-existent-record.ok.", change_type="DeleteRecordSet"), + get_change_A_AAAA_json(f"delete-non-existent-record.{ok_zone_name}", change_type="DeleteRecordSet"), get_change_A_AAAA_json(a_delete_fqdn, address="4.5.6.7", change_type="DeleteRecordSet") ] } @@ -3887,14 +3954,14 @@ def test_create_batch_delete_record_for_invalid_record_data_fails(shared_zone_te try: create_rs = client.create_recordset(a_delete, status=202) - to_delete.append(client.wait_until_recordset_change_status(create_rs, 'Complete')) + to_delete.append(client.wait_until_recordset_change_status(create_rs, "Complete")) errors = client.create_batch_change(batch_change_input, status=400) - assert_failed_change_in_error_response(errors[0], input_name="delete-non-existent-record.ok.", record_data="1.1.1.1", change_type="DeleteRecordSet", - error_messages=['Record "delete-non-existent-record.ok." 
Does Not Exist: cannot delete a record that does not exist.']) + assert_failed_change_in_error_response(errors[0], input_name=f"delete-non-existent-record.{ok_zone_name}", record_data="1.1.1.1", change_type="DeleteRecordSet", + error_messages=[f'Record "delete-non-existent-record.{ok_zone_name}" Does Not Exist: cannot delete a record that does not exist.']) assert_failed_change_in_error_response(errors[1], input_name=a_delete_fqdn, record_data="4.5.6.7", change_type="DeleteRecordSet", - error_messages=['Record data 4.5.6.7 does not exist for "' + a_delete_fqdn + '".']) + error_messages=["Record data 4.5.6.7 does not exist for \"" + a_delete_fqdn + "\"."]) finally: clear_recordset_list(to_delete, client) @@ -3908,26 +3975,27 @@ def test_create_batch_delete_record_access_checks(shared_zone_test_context): """ Test delete record set checks user access """ ok_client = shared_zone_test_context.ok_vinyldns_client ok_zone = shared_zone_test_context.ok_zone dummy_client = shared_zone_test_context.dummy_vinyldns_client - dummy_group_id = shared_zone_test_context.dummy_group['id'] + dummy_group_id = shared_zone_test_context.dummy_group["id"] + ok_zone_name = shared_zone_test_context.ok_zone["name"] - a_delete_acl = generate_acl_rule('Delete', groupId=dummy_group_id, recordMask='.*', recordTypes=['A']) - txt_write_acl = generate_acl_rule('Write', groupId=dummy_group_id, recordMask='.*', recordTypes=['TXT']) + a_delete_acl = generate_acl_rule("Delete", groupId=dummy_group_id, recordMask=".*", recordTypes=["A"]) + txt_write_acl = generate_acl_rule("Write", groupId=dummy_group_id, recordMask=".*", recordTypes=["TXT"]) a_update_name = generate_record_name() - a_update_fqdn = a_update_name + ".ok." - a_update = get_recordset_json(ok_zone, a_update_name, "A", [{"address": "1.1.1.1"}]) + a_update_fqdn = a_update_name + f".{ok_zone_name}" + a_update = create_recordset(ok_zone, a_update_name, "A", [{"address": "1.1.1.1"}]) a_delete_name = generate_record_name() - a_delete_fqdn = a_delete_name + ".ok."
- a_delete = get_recordset_json(ok_zone, a_delete_name, "A", [{"address": "1.1.1.1"}]) + a_delete_fqdn = a_delete_name + f".{ok_zone_name}" + a_delete = create_recordset(ok_zone, a_delete_name, "A", [{"address": "1.1.1.1"}]) txt_update_name = generate_record_name() - txt_update_fqdn = txt_update_name + ".ok." - txt_update = get_recordset_json(ok_zone, txt_update_name, "TXT", [{"text": "test"}]) + txt_update_fqdn = txt_update_name + f".{ok_zone_name}" + txt_update = create_recordset(ok_zone, txt_update_name, "TXT", [{"text": "test"}]) txt_delete_name = generate_record_name() - txt_delete_fqdn = txt_delete_name + ".ok." - txt_delete = get_recordset_json(ok_zone, txt_delete_name, "TXT", [{"text": "test"}]) + txt_delete_fqdn = txt_delete_name + f".{ok_zone_name}" + txt_delete = create_recordset(ok_zone, txt_delete_name, "TXT", [{"text": "test"}]) batch_change_input = { "comments": "Testing DeleteRecord access levels", @@ -3947,7 +4015,7 @@ def test_create_batch_delete_record_access_checks(shared_zone_test_context): for create_json in [a_update, a_delete, txt_update, txt_delete]: create_result = ok_client.create_recordset(create_json, status=202) - to_delete.append(ok_client.wait_until_recordset_change_status(create_result, 'Complete')) + to_delete.append(ok_client.wait_until_recordset_change_status(create_result, "Complete")) response = dummy_client.create_batch_change(batch_change_input, status=400) @@ -3963,6 +4031,7 @@ def test_create_batch_delete_record_access_checks(shared_zone_test_context): clear_ok_acl_rules(shared_zone_test_context) clear_recordset_list(to_delete, ok_client) + @pytest.mark.skip_production def test_create_batch_multi_record_update_succeeds(shared_zone_test_context): """ @@ -3970,67 +4039,68 @@ def test_create_batch_multi_record_update_succeeds(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client ok_zone = shared_zone_test_context.ok_zone + ok_zone_name = shared_zone_test_context.ok_zone["name"] # record sets to 
setup a_update_record_set_name = generate_record_name() - a_update_record_set_fqdn = a_update_record_set_name + ".ok." - a_update_record_set = get_recordset_json(ok_zone, a_update_record_set_name, "A", [{"address": "1.1.1.1"}, {"address": "1.1.1.2"}], 200) + a_update_record_set_fqdn = a_update_record_set_name + f".{ok_zone_name}" + a_update_record_set = create_recordset(ok_zone, a_update_record_set_name, "A", [{"address": "1.1.1.1"}, {"address": "1.1.1.2"}], 200) txt_update_record_set_name = generate_record_name() - txt_update_record_set_fqdn = txt_update_record_set_name + ".ok." - txt_update_record_set = get_recordset_json(ok_zone, txt_update_record_set_name, "TXT", [{"text": "hello"}, {"text": "again"}], 200) + txt_update_record_set_fqdn = txt_update_record_set_name + f".{ok_zone_name}" + txt_update_record_set = create_recordset(ok_zone, txt_update_record_set_name, "TXT", [{"text": "hello"}, {"text": "again"}], 200) a_update_record_full_name = generate_record_name() - a_update_record_full_fqdn = a_update_record_full_name + ".ok." - a_update_record_full = get_recordset_json(ok_zone, a_update_record_full_name, "A", [{"address": "1.1.1.1"}, {"address": "1.1.1.2"}], 200) + a_update_record_full_fqdn = a_update_record_full_name + f".{ok_zone_name}" + a_update_record_full = create_recordset(ok_zone, a_update_record_full_name, "A", [{"address": "1.1.1.1"}, {"address": "1.1.1.2"}], 200) txt_update_record_full_name = generate_record_name() - txt_update_record_full_fqdn = txt_update_record_full_name + ".ok." - txt_update_record_full = get_recordset_json(ok_zone, txt_update_record_full_name, "TXT", [{"text": "hello"}, {"text": "again"}], 200) + txt_update_record_full_fqdn = txt_update_record_full_name + f".{ok_zone_name}" + txt_update_record_full = create_recordset(ok_zone, txt_update_record_full_name, "TXT", [{"text": "hello"}, {"text": "again"}], 200) a_update_record_name = generate_record_name() - a_update_record_fqdn = a_update_record_name + ".ok." 
- a_update_record = get_recordset_json(ok_zone, a_update_record_name, "A", [{"address": "1.1.1.1"}, {"address": "1.1.1.2"}], 200) + a_update_record_fqdn = a_update_record_name + f".{ok_zone_name}" + a_update_record = create_recordset(ok_zone, a_update_record_name, "A", [{"address": "1.1.1.1"}, {"address": "1.1.1.2"}], 200) txt_update_record_name = generate_record_name() - txt_update_record_fqdn = txt_update_record_name + ".ok." - txt_update_record = get_recordset_json(ok_zone, txt_update_record_name, "TXT", [{"text": "hello"}, {"text": "again"}], 200) + txt_update_record_fqdn = txt_update_record_name + f".{ok_zone_name}" + txt_update_record = create_recordset(ok_zone, txt_update_record_name, "TXT", [{"text": "hello"}, {"text": "again"}], 200) a_update_record_only_name = generate_record_name() - a_update_record_only_fqdn = a_update_record_only_name + ".ok." - a_update_record_only = get_recordset_json(ok_zone, a_update_record_only_name, "A", [{"address": "1.1.1.1"}, {"address": "1.1.1.2"}], 200) + a_update_record_only_fqdn = a_update_record_only_name + f".{ok_zone_name}" + a_update_record_only = create_recordset(ok_zone, a_update_record_only_name, "A", [{"address": "1.1.1.1"}, {"address": "1.1.1.2"}], 200) txt_update_record_only_name = generate_record_name() - txt_update_record_only_fqdn = txt_update_record_only_name + ".ok." - txt_update_record_only = get_recordset_json(ok_zone, txt_update_record_only_name, "TXT", [{"text": "hello"}, {"text": "again"}], 200) + txt_update_record_only_fqdn = txt_update_record_only_name + f".{ok_zone_name}" + txt_update_record_only = create_recordset(ok_zone, txt_update_record_only_name, "TXT", [{"text": "hello"}, {"text": "again"}], 200) a_delete_record_set_name = generate_record_name() - a_delete_record_set_fqdn = a_delete_record_set_name + ".ok." 
- a_delete_record_set = get_recordset_json(ok_zone, a_delete_record_set_name, "A", [{"address": "1.1.1.1"}, {"address": "1.1.1.2"}], 200) + a_delete_record_set_fqdn = a_delete_record_set_name + f".{ok_zone_name}" + a_delete_record_set = create_recordset(ok_zone, a_delete_record_set_name, "A", [{"address": "1.1.1.1"}, {"address": "1.1.1.2"}], 200) txt_delete_record_set_name = generate_record_name() - txt_delete_record_set_fqdn = txt_delete_record_set_name + ".ok." - txt_delete_record_set = get_recordset_json(ok_zone, txt_delete_record_set_name, "TXT", [{"text": "hello"}, {"text": "again"}], 200) + txt_delete_record_set_fqdn = txt_delete_record_set_name + f".{ok_zone_name}" + txt_delete_record_set = create_recordset(ok_zone, txt_delete_record_set_name, "TXT", [{"text": "hello"}, {"text": "again"}], 200) a_delete_record_name = generate_record_name() - a_delete_record_fqdn = a_delete_record_name + ".ok." - a_delete_record = get_recordset_json(ok_zone, a_delete_record_name, "A", [{"address": "1.1.1.1"}, {"address": "1.1.1.2"}], 200) + a_delete_record_fqdn = a_delete_record_name + f".{ok_zone_name}" + a_delete_record = create_recordset(ok_zone, a_delete_record_name, "A", [{"address": "1.1.1.1"}, {"address": "1.1.1.2"}], 200) txt_delete_record_name = generate_record_name() - txt_delete_record_fqdn = txt_delete_record_name + ".ok." - txt_delete_record = get_recordset_json(ok_zone, txt_delete_record_name, "TXT", [{"text": "hello"}, {"text": "again"}], 200) + txt_delete_record_fqdn = txt_delete_record_name + f".{ok_zone_name}" + txt_delete_record = create_recordset(ok_zone, txt_delete_record_name, "TXT", [{"text": "hello"}, {"text": "again"}], 200) cname_delete_record_name = generate_record_name() - cname_delete_record_fqdn = cname_delete_record_name + ".ok." 
- cname_delete_record = get_recordset_json(ok_zone, cname_delete_record_name, "CNAME", [{"cname": "cAsEiNSeNsItIve.cNaMe."}], 200) + cname_delete_record_fqdn = cname_delete_record_name + f".{ok_zone_name}" + cname_delete_record = create_recordset(ok_zone, cname_delete_record_name, "CNAME", [{"cname": "cAsEiNSeNsItIve.cNaMe."}], 200) a_delete_record_and_record_set_name = generate_record_name() - a_delete_record_and_record_set_fqdn = a_delete_record_and_record_set_name + ".ok." - a_delete_record_and_record_set = get_recordset_json(ok_zone, a_delete_record_and_record_set_name, "A", [{"address": "1.1.1.1"}, {"address": "1.1.1.2"}], 200) + a_delete_record_and_record_set_fqdn = a_delete_record_and_record_set_name + f".{ok_zone_name}" + a_delete_record_and_record_set = create_recordset(ok_zone, a_delete_record_and_record_set_name, "A", [{"address": "1.1.1.1"}, {"address": "1.1.1.2"}], 200) txt_delete_record_and_record_set_name = generate_record_name() - txt_delete_record_and_record_set_fqdn = txt_delete_record_and_record_set_name + ".ok." 
- txt_delete_record_and_record_set = get_recordset_json(ok_zone, txt_delete_record_and_record_set_name, "TXT", [{"text": "hello"}, {"text": "again"}], 200) + txt_delete_record_and_record_set_fqdn = txt_delete_record_and_record_set_name + f".{ok_zone_name}" + txt_delete_record_and_record_set = create_recordset(ok_zone, txt_delete_record_and_record_set_name, "TXT", [{"text": "hello"}, {"text": "again"}], 200) batch_change_input = { "comments": "this is optional", @@ -4094,58 +4164,85 @@ def test_create_batch_multi_record_update_succeeds(shared_zone_test_context): for rs in [a_update_record_set, txt_update_record_set, a_update_record_full, txt_update_record_full, a_update_record, txt_update_record, a_update_record_only, txt_update_record_only, a_delete_record_set, txt_delete_record_set, a_delete_record, txt_delete_record, cname_delete_record, a_delete_record_and_record_set, txt_delete_record_and_record_set]: create_rs = client.create_recordset(rs, status=202) - to_delete.append(client.wait_until_recordset_change_status(create_rs, 'Complete')) + to_delete.append(client.wait_until_recordset_change_status(create_rs, "Complete")) initial_result = client.create_batch_change(batch_change_input, status=202) result = client.wait_until_batch_change_completed(initial_result) - assert_that(result['status'], is_('Complete')) + assert_that(result["status"], is_("Complete")) # Check batch change response - assert_change_success_response_values(result['changes'], zone=ok_zone, index=0, input_name=a_update_record_set_fqdn, record_name=a_update_record_set_name, record_data=None, change_type="DeleteRecordSet") - assert_change_success_response_values(result['changes'], zone=ok_zone, index=1, input_name=a_update_record_set_fqdn, record_name=a_update_record_set_name, record_data="1.2.3.4") - assert_change_success_response_values(result['changes'], zone=ok_zone, index=2, input_name=a_update_record_set_fqdn, record_name=a_update_record_set_name, record_data="4.5.6.7") - 
assert_change_success_response_values(result['changes'], zone=ok_zone, index=3, input_name=txt_update_record_set_fqdn, record_name=txt_update_record_set_name, record_type="TXT", record_data=None, change_type="DeleteRecordSet") - assert_change_success_response_values(result['changes'], zone=ok_zone, index=4, input_name=txt_update_record_set_fqdn, record_name=txt_update_record_set_name, record_type="TXT", record_data="some-multi-text") - assert_change_success_response_values(result['changes'], zone=ok_zone, index=5, input_name=txt_update_record_set_fqdn, record_name=txt_update_record_set_name, record_type="TXT", record_data="more-multi-text") + assert_change_success(result["changes"], zone=ok_zone, index=0, input_name=a_update_record_set_fqdn, record_name=a_update_record_set_name, record_data=None, + change_type="DeleteRecordSet") + assert_change_success(result["changes"], zone=ok_zone, index=1, input_name=a_update_record_set_fqdn, record_name=a_update_record_set_name, record_data="1.2.3.4") + assert_change_success(result["changes"], zone=ok_zone, index=2, input_name=a_update_record_set_fqdn, record_name=a_update_record_set_name, record_data="4.5.6.7") + assert_change_success(result["changes"], zone=ok_zone, index=3, input_name=txt_update_record_set_fqdn, record_name=txt_update_record_set_name, record_type="TXT", + record_data=None, change_type="DeleteRecordSet") + assert_change_success(result["changes"], zone=ok_zone, index=4, input_name=txt_update_record_set_fqdn, record_name=txt_update_record_set_name, record_type="TXT", + record_data="some-multi-text") + assert_change_success(result["changes"], zone=ok_zone, index=5, input_name=txt_update_record_set_fqdn, record_name=txt_update_record_set_name, record_type="TXT", + record_data="more-multi-text") - assert_change_success_response_values(result['changes'], zone=ok_zone, index=6, input_name=a_update_record_full_fqdn, record_name=a_update_record_full_name, record_data="1.1.1.1", change_type="DeleteRecordSet") - 
assert_change_success_response_values(result['changes'], zone=ok_zone, index=7, input_name=a_update_record_full_fqdn, record_name=a_update_record_full_name, record_data="1.1.1.2", change_type="DeleteRecordSet") - assert_change_success_response_values(result['changes'], zone=ok_zone, index=8, input_name=a_update_record_full_fqdn, record_name=a_update_record_full_name, record_data="1.2.3.4") - assert_change_success_response_values(result['changes'], zone=ok_zone, index=9, input_name=a_update_record_full_fqdn, record_name=a_update_record_full_name, record_data="4.5.6.7") - assert_change_success_response_values(result['changes'], zone=ok_zone, index=10, input_name=txt_update_record_full_fqdn, record_name=txt_update_record_full_name, record_type="TXT", record_data="hello", change_type="DeleteRecordSet") - assert_change_success_response_values(result['changes'], zone=ok_zone, index=11, input_name=txt_update_record_full_fqdn, record_name=txt_update_record_full_name, record_type="TXT", record_data="again", change_type="DeleteRecordSet") - assert_change_success_response_values(result['changes'], zone=ok_zone, index=12, input_name=txt_update_record_full_fqdn, record_name=txt_update_record_full_name, record_type="TXT", record_data="some-multi-text") - assert_change_success_response_values(result['changes'], zone=ok_zone, index=13, input_name=txt_update_record_full_fqdn, record_name=txt_update_record_full_name, record_type="TXT", record_data="more-multi-text") + assert_change_success(result["changes"], zone=ok_zone, index=6, input_name=a_update_record_full_fqdn, record_name=a_update_record_full_name, record_data="1.1.1.1", + change_type="DeleteRecordSet") + assert_change_success(result["changes"], zone=ok_zone, index=7, input_name=a_update_record_full_fqdn, record_name=a_update_record_full_name, record_data="1.1.1.2", + change_type="DeleteRecordSet") + assert_change_success(result["changes"], zone=ok_zone, index=8, input_name=a_update_record_full_fqdn, 
record_name=a_update_record_full_name, record_data="1.2.3.4") + assert_change_success(result["changes"], zone=ok_zone, index=9, input_name=a_update_record_full_fqdn, record_name=a_update_record_full_name, record_data="4.5.6.7") + assert_change_success(result["changes"], zone=ok_zone, index=10, input_name=txt_update_record_full_fqdn, record_name=txt_update_record_full_name, record_type="TXT", + record_data="hello", change_type="DeleteRecordSet") + assert_change_success(result["changes"], zone=ok_zone, index=11, input_name=txt_update_record_full_fqdn, record_name=txt_update_record_full_name, record_type="TXT", + record_data="again", change_type="DeleteRecordSet") + assert_change_success(result["changes"], zone=ok_zone, index=12, input_name=txt_update_record_full_fqdn, record_name=txt_update_record_full_name, record_type="TXT", + record_data="some-multi-text") + assert_change_success(result["changes"], zone=ok_zone, index=13, input_name=txt_update_record_full_fqdn, record_name=txt_update_record_full_name, record_type="TXT", + record_data="more-multi-text") - assert_change_success_response_values(result['changes'], zone=ok_zone, index=14, input_name=a_update_record_fqdn, record_name=a_update_record_name, record_data="1.1.1.1", change_type="DeleteRecordSet") - assert_change_success_response_values(result['changes'], zone=ok_zone, index=15, input_name=a_update_record_fqdn, record_name=a_update_record_name, record_data="1.2.3.4") - assert_change_success_response_values(result['changes'], zone=ok_zone, index=16, input_name=a_update_record_fqdn, record_name=a_update_record_name, record_data="4.5.6.7") - assert_change_success_response_values(result['changes'], zone=ok_zone, index=17, input_name=txt_update_record_fqdn, record_name=txt_update_record_name, record_type="TXT", record_data="hello", change_type="DeleteRecordSet") - assert_change_success_response_values(result['changes'], zone=ok_zone, index=18, input_name=txt_update_record_fqdn, record_name=txt_update_record_name, 
record_type="TXT", record_data="some-multi-text")
-        assert_change_success_response_values(result['changes'], zone=ok_zone, index=19, input_name=txt_update_record_fqdn, record_name=txt_update_record_name, record_type="TXT", record_data="more-multi-text")
+        assert_change_success(result["changes"], zone=ok_zone, index=14, input_name=a_update_record_fqdn, record_name=a_update_record_name, record_data="1.1.1.1",
+                              change_type="DeleteRecordSet")
+        assert_change_success(result["changes"], zone=ok_zone, index=15, input_name=a_update_record_fqdn, record_name=a_update_record_name, record_data="1.2.3.4")
+        assert_change_success(result["changes"], zone=ok_zone, index=16, input_name=a_update_record_fqdn, record_name=a_update_record_name, record_data="4.5.6.7")
+        assert_change_success(result["changes"], zone=ok_zone, index=17, input_name=txt_update_record_fqdn, record_name=txt_update_record_name, record_type="TXT", record_data="hello",
+                              change_type="DeleteRecordSet")
+        assert_change_success(result["changes"], zone=ok_zone, index=18, input_name=txt_update_record_fqdn, record_name=txt_update_record_name, record_type="TXT",
+                              record_data="some-multi-text")
+        assert_change_success(result["changes"], zone=ok_zone, index=19, input_name=txt_update_record_fqdn, record_name=txt_update_record_name, record_type="TXT",
+                              record_data="more-multi-text")
-        assert_change_success_response_values(result['changes'], zone=ok_zone, index=20, input_name=a_update_record_only_fqdn, record_name=a_update_record_only_name, record_data="1.1.1.1", change_type="DeleteRecordSet")
-        assert_change_success_response_values(result['changes'], zone=ok_zone, index=21, input_name=txt_update_record_only_fqdn, record_name=txt_update_record_only_name, record_type="TXT", record_data="hello", change_type="DeleteRecordSet")
+        assert_change_success(result["changes"], zone=ok_zone, index=20, input_name=a_update_record_only_fqdn, record_name=a_update_record_only_name, record_data="1.1.1.1",
+                              change_type="DeleteRecordSet")
+        assert_change_success(result["changes"], zone=ok_zone, index=21, input_name=txt_update_record_only_fqdn, record_name=txt_update_record_only_name, record_type="TXT",
+                              record_data="hello", change_type="DeleteRecordSet")
-        assert_change_success_response_values(result['changes'], zone=ok_zone, index=22, input_name=a_delete_record_set_fqdn, record_name=a_delete_record_set_name, record_data=None, change_type="DeleteRecordSet")
-        assert_change_success_response_values(result['changes'], zone=ok_zone, index=23, input_name=txt_delete_record_set_fqdn, record_name=txt_delete_record_set_name, record_type="TXT", record_data=None, change_type="DeleteRecordSet")
-        assert_change_success_response_values(result['changes'], zone=ok_zone, index=24, input_name=a_delete_record_fqdn, record_name=a_delete_record_name, record_data="1.1.1.1", change_type="DeleteRecordSet")
-        assert_change_success_response_values(result['changes'], zone=ok_zone, index=25, input_name=a_delete_record_fqdn, record_name=a_delete_record_name, record_data="1.1.1.2", change_type="DeleteRecordSet")
-        assert_change_success_response_values(result['changes'], zone=ok_zone, index=26, input_name=txt_delete_record_fqdn, record_name=txt_delete_record_name, record_type="TXT", record_data="hello", change_type="DeleteRecordSet")
-        assert_change_success_response_values(result['changes'], zone=ok_zone, index=27, input_name=txt_delete_record_fqdn, record_name=txt_delete_record_name, record_type="TXT", record_data="again", change_type="DeleteRecordSet")
-        assert_change_success_response_values(result['changes'], zone=ok_zone, index=28, input_name=cname_delete_record_fqdn, record_name=cname_delete_record_name, record_type="CNAME", record_data="caseinsensitive.cname.", change_type="DeleteRecordSet")
+        assert_change_success(result["changes"], zone=ok_zone, index=22, input_name=a_delete_record_set_fqdn, record_name=a_delete_record_set_name, record_data=None,
+                              change_type="DeleteRecordSet")
+        assert_change_success(result["changes"], zone=ok_zone, index=23, input_name=txt_delete_record_set_fqdn, record_name=txt_delete_record_set_name, record_type="TXT",
+                              record_data=None, change_type="DeleteRecordSet")
+        assert_change_success(result["changes"], zone=ok_zone, index=24, input_name=a_delete_record_fqdn, record_name=a_delete_record_name, record_data="1.1.1.1",
+                              change_type="DeleteRecordSet")
+        assert_change_success(result["changes"], zone=ok_zone, index=25, input_name=a_delete_record_fqdn, record_name=a_delete_record_name, record_data="1.1.1.2",
+                              change_type="DeleteRecordSet")
+        assert_change_success(result["changes"], zone=ok_zone, index=26, input_name=txt_delete_record_fqdn, record_name=txt_delete_record_name, record_type="TXT", record_data="hello",
+                              change_type="DeleteRecordSet")
+        assert_change_success(result["changes"], zone=ok_zone, index=27, input_name=txt_delete_record_fqdn, record_name=txt_delete_record_name, record_type="TXT", record_data="again",
+                              change_type="DeleteRecordSet")
+        assert_change_success(result["changes"], zone=ok_zone, index=28, input_name=cname_delete_record_fqdn, record_name=cname_delete_record_name, record_type="CNAME",
+                              record_data="caseinsensitive.cname.", change_type="DeleteRecordSet")
-        assert_change_success_response_values(result['changes'], zone=ok_zone, index=29, input_name=a_delete_record_and_record_set_fqdn, record_name=a_delete_record_and_record_set_name, record_data="1.1.1.1", change_type="DeleteRecordSet")
-        assert_change_success_response_values(result['changes'], zone=ok_zone, index=30, input_name=a_delete_record_and_record_set_fqdn, record_name=a_delete_record_and_record_set_name, record_data=None, change_type="DeleteRecordSet")
-        assert_change_success_response_values(result['changes'], zone=ok_zone, index=31, input_name=txt_delete_record_and_record_set_fqdn, record_name=txt_delete_record_and_record_set_name, record_type="TXT", record_data="hello", change_type="DeleteRecordSet")
-        assert_change_success_response_values(result['changes'], zone=ok_zone, index=32, input_name=txt_delete_record_and_record_set_fqdn, record_name=txt_delete_record_and_record_set_name, record_type="TXT", record_data=None, change_type="DeleteRecordSet")
+        assert_change_success(result["changes"], zone=ok_zone, index=29, input_name=a_delete_record_and_record_set_fqdn, record_name=a_delete_record_and_record_set_name,
+                              record_data="1.1.1.1", change_type="DeleteRecordSet")
+        assert_change_success(result["changes"], zone=ok_zone, index=30, input_name=a_delete_record_and_record_set_fqdn, record_name=a_delete_record_and_record_set_name,
+                              record_data=None, change_type="DeleteRecordSet")
+        assert_change_success(result["changes"], zone=ok_zone, index=31, input_name=txt_delete_record_and_record_set_fqdn, record_name=txt_delete_record_and_record_set_name,
+                              record_type="TXT", record_data="hello", change_type="DeleteRecordSet")
+        assert_change_success(result["changes"], zone=ok_zone, index=32, input_name=txt_delete_record_and_record_set_fqdn, record_name=txt_delete_record_and_record_set_name,
+                              record_type="TXT", record_data=None, change_type="DeleteRecordSet")
 
         # Perform look up to verify record set data
         for rs in to_delete:
-            rs_name = rs['recordSet']['name']
-            rs_id = rs['recordSet']['id']
-            zone_id = rs['zone']['id']
+            rs_name = rs["recordSet"]["name"]
+            rs_id = rs["recordSet"]["id"]
+            zone_id = rs["zone"]["id"]
 
             # deletes should not exist
             if rs_name in [a_delete_record_set_name, txt_delete_record_set_name, a_delete_record_name,
@@ -4153,32 +4250,33 @@ def test_create_batch_multi_record_update_succeeds(shared_zone_test_context):
                 client.get_recordset(zone_id, rs_id, status=404)
             else:
                 result_rs = client.get_recordset(zone_id, rs_id, status=200)
-                records = result_rs['recordSet']['records']
+                records = result_rs["recordSet"]["records"]
 
                 # full deletes with updates
                 if rs_name in [a_update_record_set_name, a_update_record_full_name]:
-                    assert_that(records, contains({"address": "1.2.3.4"}, {"address": "4.5.6.7"}))
-                    assert_that(records, is_not(contains({"address": "1.1.1.1"}, {"address": "1.1.1.2"})))
+                    assert_that(records, contains_exactly({"address": "1.2.3.4"}, {"address": "4.5.6.7"}))
+                    assert_that(records, is_not(contains_exactly({"address": "1.1.1.1"}, {"address": "1.1.1.2"})))
                 elif rs_name in [txt_update_record_set_name, txt_update_record_full_name]:
-                    assert_that(records, contains({"text": "some-multi-text"}, {"text": "more-multi-text"}))
-                    assert_that(records, is_not(contains({"text": "hello"}, {"text": "again"})))
+                    assert_that(records, contains_exactly({"text": "some-multi-text"}, {"text": "more-multi-text"}))
+                    assert_that(records, is_not(contains_exactly({"text": "hello"}, {"text": "again"})))
 
                 # single entry delete with adds
                 elif rs_name == a_update_record_name:
-                    assert_that(records, contains({"address": "1.1.1.2"}, {"address": "1.2.3.4"}, {"address": "4.5.6.7"}))
-                    assert_that(records, is_not(contains({"address": "1.1.1.1"})))
+                    assert_that(records, contains_exactly({"address": "1.1.1.2"}, {"address": "1.2.3.4"}, {"address": "4.5.6.7"}))
+                    assert_that(records, is_not(contains_exactly({"address": "1.1.1.1"})))
                 elif rs_name == txt_update_record_name:
-                    assert_that(records, contains({"text": "again"}, {"text": "some-multi-text"}, {"text": "more-multi-text"}))
-                    assert_that(records, is_not(contains({"text": "hello"})))
+                    assert_that(records, contains_exactly({"text": "again"}, {"text": "some-multi-text"}, {"text": "more-multi-text"}))
+                    assert_that(records, is_not(contains_exactly({"text": "hello"})))
                 elif rs_name == a_update_record_only_name:
-                    assert_that(records, contains({"address": "1.1.1.2"}))
-                    assert_that(records, is_not(contains({"address": "1.1.1.1"})))
+                    assert_that(records, contains_exactly({"address": "1.1.1.2"}))
+                    assert_that(records, is_not(contains_exactly({"address": "1.1.1.1"})))
                 elif rs_name == txt_update_record_only_name:
-                    assert_that(records, contains({"text": "again"}))
-                    assert_that(records, is_not(contains({"text": "hello"})))
+                    assert_that(records, contains_exactly({"text": "again"}))
+                    assert_that(records, is_not(contains_exactly({"text": "hello"})))
     finally:
         clear_recordset_list(to_delete, client)
 
+
 def test_create_batch_deletes_succeeds(shared_zone_test_context):
     """
     Test creating batch change with DeleteRecordSet with valid record data succeeds
@@ -4186,19 +4284,20 @@ def test_create_batch_deletes_succeeds(shared_zone_test_context):
     client = shared_zone_test_context.ok_vinyldns_client
     ok_zone = shared_zone_test_context.ok_zone
     ok_group = shared_zone_test_context.ok_group
+    ok_zone_name = shared_zone_test_context.ok_zone["name"]
 
     rs_name = generate_record_name()
     rs_name_2 = generate_record_name()
     multi_rs_name = generate_record_name()
     multi_rs_name_2 = generate_record_name()
-    rs_fqdn = rs_name + ".ok."
-    rs_fqdn_2 = rs_name_2 + ".ok."
-    multi_rs_fqdn = multi_rs_name + ".ok."
-    multi_rs_fqdn_2 = multi_rs_name_2 + ".ok."
-    rs_to_create = get_recordset_json(ok_zone, rs_name, "A", [{"address": "1.2.3.4"}], 200, ok_group['id'])
-    rs_to_create_2 = get_recordset_json(ok_zone, rs_name_2, "A", [{"address": "1.2.3.4"}], 200, ok_group['id'])
-    multi_record_rs_to_create = get_recordset_json(ok_zone, multi_rs_name, "A", [{"address": "1.2.3.4"}, {"address": "1.1.1.1"}], 200, ok_group['id'])
-    multi_record_rs_to_create_2 = get_recordset_json(ok_zone, multi_rs_name_2, "A", [{"address": "1.2.3.4"}, {"address": "1.1.1.1"}], 200, ok_group['id'])
+    rs_fqdn = rs_name + f".{ok_zone_name}"
+    rs_fqdn_2 = rs_name_2 + f".{ok_zone_name}"
+    multi_rs_fqdn = multi_rs_name + f".{ok_zone_name}"
+    multi_rs_fqdn_2 = multi_rs_name_2 + f".{ok_zone_name}"
+    rs_to_create = create_recordset(ok_zone, rs_name, "A", [{"address": "1.2.3.4"}], 200, ok_group["id"])
+    rs_to_create_2 = create_recordset(ok_zone, rs_name_2, "A", [{"address": "1.2.3.4"}], 200, ok_group["id"])
+    multi_record_rs_to_create = create_recordset(ok_zone, multi_rs_name, "A", [{"address": "1.2.3.4"}, {"address": "1.1.1.1"}], 200, ok_group["id"])
+    multi_record_rs_to_create_2 = create_recordset(ok_zone, multi_rs_name_2, "A", [{"address": "1.2.3.4"}, {"address": "1.1.1.1"}], 200, ok_group["id"])
 
     batch_change_input = {
         "comments": "this is optional",
@@ -4217,23 +4316,24 @@ def test_create_batch_deletes_succeeds(shared_zone_test_context):
         create_rs_2 = client.create_recordset(rs_to_create_2, status=202)
         create_multi_rs = client.create_recordset(multi_record_rs_to_create, status=202)
         create_multi_rs_2 = client.create_recordset(multi_record_rs_to_create_2, status=202)
-        to_delete.append(client.wait_until_recordset_change_status(create_rs, 'Complete'))
-        to_delete.append(client.wait_until_recordset_change_status(create_rs_2, 'Complete'))
-        to_delete.append(client.wait_until_recordset_change_status(create_multi_rs, 'Complete'))
-        to_delete.append(client.wait_until_recordset_change_status(create_multi_rs_2, 'Complete'))
+        to_delete.append(client.wait_until_recordset_change_status(create_rs, "Complete"))
+        to_delete.append(client.wait_until_recordset_change_status(create_rs_2, "Complete"))
+        to_delete.append(client.wait_until_recordset_change_status(create_multi_rs, "Complete"))
+        to_delete.append(client.wait_until_recordset_change_status(create_multi_rs_2, "Complete"))
 
         result = client.create_batch_change(batch_change_input, status=202)
         client.wait_until_batch_change_completed(result)
 
-        client.get_recordset(create_rs['zone']['id'], create_rs['recordSet']['id'], status=404)
-        client.get_recordset(create_rs_2['zone']['id'], create_rs_2['recordSet']['id'], status=404)
-        updated_rs = client.get_recordset(create_multi_rs['zone']['id'], create_multi_rs['recordSet']['id'], status=200)['recordSet']
-        assert_that(updated_rs['records'], is_([{'address': '1.1.1.1'}]))
-        client.get_recordset(create_multi_rs_2['zone']['id'], create_multi_rs_2['recordSet']['id'], status=404)
+        client.get_recordset(create_rs["zone"]["id"], create_rs["recordSet"]["id"], status=404)
+        client.get_recordset(create_rs_2["zone"]["id"], create_rs_2["recordSet"]["id"], status=404)
+        updated_rs = client.get_recordset(create_multi_rs["zone"]["id"], create_multi_rs["recordSet"]["id"], status=200)["recordSet"]
+        assert_that(updated_rs["records"], is_([{"address": "1.1.1.1"}]))
+        client.get_recordset(create_multi_rs_2["zone"]["id"], create_multi_rs_2["recordSet"]["id"], status=404)
     finally:
         clear_recordset_list(to_delete, client)
 
+
 @pytest.mark.serial
 @pytest.mark.skip_production
 def test_create_batch_change_with_multi_record_adds_with_multi_record_support(shared_zone_test_context):
     """
@@ -4243,43 +4343,45 @@ def test_create_batch_change_with_multi_record_adds_with_multi_record_support(sh
     client = shared_zone_test_context.ok_vinyldns_client
     ok_zone = shared_zone_test_context.ok_zone
     ok_group = shared_zone_test_context.ok_group
+    ok_zone_name = shared_zone_test_context.ok_zone["name"]
+    ip4_prefix = shared_zone_test_context.ip4_classless_prefix
     to_delete = []
 
     rs_name = generate_record_name()
-    rs_fqdn = rs_name + ".ok."
-    rs_to_create = get_recordset_json(ok_zone, rs_name, "A", [{"address": "1.2.3.4"}], 200, ok_group['id'])
+    rs_fqdn = rs_name + f".{ok_zone_name}"
+    rs_to_create = create_recordset(ok_zone, rs_name, "A", [{"address": "1.2.3.4"}], 200, ok_group["id"])
 
     batch_change_input = {
         "comments": "this is optional",
         "changes": [
-            get_change_A_AAAA_json("multi.ok.", address="1.2.3.4"),
-            get_change_A_AAAA_json("multi.ok.", address="4.5.6.7"),
-            get_change_PTR_json("192.0.2.44", ptrdname="multi.test"),
-            get_change_PTR_json("192.0.2.44", ptrdname="multi2.test"),
-            get_change_TXT_json("multi-txt.ok.", text="some-multi-text"),
-            get_change_TXT_json("multi-txt.ok.", text="more-multi-text"),
-            get_change_MX_json("multi-mx.ok.", preference=0),
-            get_change_MX_json("multi-mx.ok.", preference=1000, exchange="bar.foo."),
+            get_change_A_AAAA_json(f"multi.{ok_zone_name}", address="1.2.3.4"),
+            get_change_A_AAAA_json(f"multi.{ok_zone_name}", address="4.5.6.7"),
+            get_change_PTR_json(f"{ip4_prefix}.44", ptrdname="multi.test"),
+            get_change_PTR_json(f"{ip4_prefix}.44", ptrdname="multi2.test"),
+            get_change_TXT_json(f"multi-txt.{ok_zone_name}", text="some-multi-text"),
+            get_change_TXT_json(f"multi-txt.{ok_zone_name}", text="more-multi-text"),
+            get_change_MX_json(f"multi-mx.{ok_zone_name}", preference=0),
+            get_change_MX_json(f"multi-mx.{ok_zone_name}", preference=1000, exchange="bar.foo."),
             get_change_A_AAAA_json(rs_fqdn, address="1.1.1.1")
         ]
     }
 
     try:
         create_rs = client.create_recordset(rs_to_create, status=202)
-        to_delete.append(client.wait_until_recordset_change_status(create_rs, 'Complete'))
+        to_delete.append(client.wait_until_recordset_change_status(create_rs, "Complete"))
 
         response = client.create_batch_change(batch_change_input, status=400)
-        assert_successful_change_in_error_response(response[0], input_name="multi.ok.", record_data="1.2.3.4")
-        assert_successful_change_in_error_response(response[1], input_name="multi.ok.", record_data="4.5.6.7")
-        assert_successful_change_in_error_response(response[2], input_name="192.0.2.44", record_type="PTR", record_data="multi.test.")
-        assert_successful_change_in_error_response(response[3], input_name="192.0.2.44", record_type="PTR", record_data="multi2.test.")
-        assert_successful_change_in_error_response(response[4], input_name="multi-txt.ok.", record_type="TXT", record_data="some-multi-text")
-        assert_successful_change_in_error_response(response[5], input_name="multi-txt.ok.", record_type="TXT", record_data="more-multi-text")
-        assert_successful_change_in_error_response(response[6], input_name="multi-mx.ok.", record_type="MX", record_data={"preference": 0, "exchange": "foo.bar."})
-        assert_successful_change_in_error_response(response[7], input_name="multi-mx.ok.", record_type="MX", record_data={"preference": 1000, "exchange": "bar.foo."})
+        assert_successful_change_in_error_response(response[0], input_name=f"multi.{ok_zone_name}", record_data="1.2.3.4")
+        assert_successful_change_in_error_response(response[1], input_name=f"multi.{ok_zone_name}", record_data="4.5.6.7")
+        assert_successful_change_in_error_response(response[2], input_name=f"{ip4_prefix}.44", record_type="PTR", record_data="multi.test.")
+        assert_successful_change_in_error_response(response[3], input_name=f"{ip4_prefix}.44", record_type="PTR", record_data="multi2.test.")
+        assert_successful_change_in_error_response(response[4], input_name=f"multi-txt.{ok_zone_name}", record_type="TXT", record_data="some-multi-text")
+        assert_successful_change_in_error_response(response[5], input_name=f"multi-txt.{ok_zone_name}", record_type="TXT", record_data="more-multi-text")
+        assert_successful_change_in_error_response(response[6], input_name=f"multi-mx.{ok_zone_name}", record_type="MX", record_data={"preference": 0, "exchange": "foo.bar."})
+        assert_successful_change_in_error_response(response[7], input_name=f"multi-mx.{ok_zone_name}", record_type="MX", record_data={"preference": 1000, "exchange": "bar.foo."})
         assert_failed_change_in_error_response(response[8], input_name=rs_fqdn, record_data="1.1.1.1",
-                                               error_messages=['Record "' + rs_fqdn + '" Already Exists: cannot add an existing record; to update it, issue a DeleteRecordSet then an Add.'])
+                                               error_messages=["Record \"" + rs_fqdn + "\" Already Exists: cannot add an existing record; to update it, issue a DeleteRecordSet then an Add."])
     finally:
         clear_recordset_list(to_delete, client)
diff --git a/modules/api/functional_test/live_tests/batch/get_batch_change_test.py b/modules/api/functional_test/live_tests/batch/get_batch_change_test.py
index 1fb0aa6b1..cf5937d33 100644
--- a/modules/api/functional_test/live_tests/batch/get_batch_change_test.py
+++ b/modules/api/functional_test/live_tests/batch/get_batch_change_test.py
@@ -18,25 +18,25 @@ def test_get_batch_change_success(shared_zone_test_context):
         batch_change = client.create_batch_change(batch_change_input, status=202)
         completed_batch = client.wait_until_batch_change_completed(batch_change)
 
-        record_set_list = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes']]
+        record_set_list = [(change["zoneId"], change["recordSetId"]) for change in completed_batch["changes"]]
         to_delete = set(record_set_list)
 
-        result = client.get_batch_change(batch_change['id'], status=200)
+        result = client.get_batch_change(batch_change["id"], status=200)
         assert_that(result, is_(completed_batch))
 
-        assert_that(result['userId'], is_('ok'))
-        assert_that(result['userName'], is_('ok'))
-        assert_that(result, has_key('createdTimestamp'))
-        assert_that(result['status'], is_('Complete'))
-        assert_that(result['approvalStatus'], is_('AutoApproved'))
-        assert_that(result, is_not(has_key('reviewerId')))
-        assert_that(result, is_not(has_key('reviewerUserName')))
-        assert_that(result, is_not(has_key('reviewComment')))
-        assert_that(result, is_not(has_key('reviewTimestamp')))
+        assert_that(result["userId"], is_("ok"))
+        assert_that(result["userName"], is_("ok"))
+        assert_that(result, has_key("createdTimestamp"))
+        assert_that(result["status"], is_("Complete"))
+        assert_that(result["approvalStatus"], is_("AutoApproved"))
+        assert_that(result, is_not(has_key("reviewerId")))
+        assert_that(result, is_not(has_key("reviewerUserName")))
+        assert_that(result, is_not(has_key("reviewComment")))
+        assert_that(result, is_not(has_key("reviewTimestamp")))
     finally:
         for result_rs in to_delete:
             try:
                 delete_result = client.delete_recordset(result_rs[0], result_rs[1], status=202)
-                client.wait_until_recordset_change_status(delete_result, 'Complete')
+                client.wait_until_recordset_change_status(delete_result, "Complete")
             except:
                 pass
 
@@ -52,25 +52,25 @@ def test_get_batch_change_with_record_owner_group_success(shared_zone_test_conte
         "changes": [
             get_change_A_AAAA_json("testing-get-batch-with-owner-group.shared.", address="1.1.1.1")
         ],
-        "ownerGroupId": group['id']
+        "ownerGroupId": group["id"]
     }
     to_delete = []
 
     try:
         batch_change = client.create_batch_change(batch_change_input, status=202)
         completed_batch = client.wait_until_batch_change_completed(batch_change)
 
-        record_set_list = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes']]
+        record_set_list = [(change["zoneId"], change["recordSetId"]) for change in completed_batch["changes"]]
         to_delete = set(record_set_list)
 
-        result = client.get_batch_change(batch_change['id'], status=200)
+        result = client.get_batch_change(batch_change["id"], status=200)
         assert_that(result, is_(completed_batch))
 
-        assert_that(result['ownerGroupId'], is_(group['id']))
-        assert_that(result['ownerGroupName'], is_(group['name']))
+        assert_that(result["ownerGroupId"], is_(group["id"]))
+        assert_that(result["ownerGroupName"], is_(group["name"]))
     finally:
         for result_rs in to_delete:
             delete_result = client.delete_recordset(result_rs[0], result_rs[1], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            client.wait_until_recordset_change_status(delete_result, "Complete")
 
 
 def test_get_batch_change_with_deleted_record_owner_group_success(shared_zone_test_context):
@@ -80,11 +80,11 @@ def test_get_batch_change_with_deleted_record_owner_group_success(shared_zone_te
     """
     client = shared_zone_test_context.shared_zone_vinyldns_client
     temp_group = {
-        'name': 'test-get-batch-record-owner-group2',
-        'email': 'test@test.com',
-        'description': 'for testing that a get batch change still works when record owner group is deleted',
-        'members': [ { 'id': 'sharedZoneUser'} ],
-        'admins': [ { 'id': 'sharedZoneUser'} ]
+        "name": "test-get-batch-record-owner-group2",
+        "email": "test@test.com",
+        "description": "for testing that a get batch change still works when record owner group is deleted",
+        "members": [ { "id": "sharedZoneUser"} ],
+        "admins": [ { "id": "sharedZoneUser"} ]
     }
 
     rs_name = generate_record_name()
@@ -99,36 +99,36 @@ def test_get_batch_change_with_deleted_record_owner_group_success(shared_zone_te
             "changes": [
                 get_change_A_AAAA_json(rs_fqdn, address="1.1.1.1")
             ],
-            "ownerGroupId": group_to_delete['id']
+            "ownerGroupId": group_to_delete["id"]
         }
 
         batch_change = client.create_batch_change(batch_change_input, status=202)
         completed_batch = client.wait_until_batch_change_completed(batch_change)
-        record_set_list = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes']]
+        record_set_list = [(change["zoneId"], change["recordSetId"]) for change in completed_batch["changes"]]
         record_to_delete = set(record_set_list)
 
         # delete records and owner group
         temp = record_to_delete.copy()
         for result_rs in temp:
             delete_result = client.delete_recordset(result_rs[0], result_rs[1], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            client.wait_until_recordset_change_status(delete_result, "Complete")
             record_to_delete.remove(result_rs)
         temp.clear()
 
-        client.delete_group(group_to_delete['id'], status=200)
-        del completed_batch['ownerGroupName']
+        client.delete_group(group_to_delete["id"], status=200)
+        del completed_batch["ownerGroupName"]
 
         # the batch should not be updated with deleted group data
-        result = client.get_batch_change(batch_change['id'], status=200)
+        result = client.get_batch_change(batch_change["id"], status=200)
         assert_that(result, is_(completed_batch))
 
-        assert_that(result['ownerGroupId'], is_(group_to_delete['id']))
-        assert_that(result, is_not(has_key('ownerGroupName')))
+        assert_that(result["ownerGroupId"], is_(group_to_delete["id"]))
+        assert_that(result, is_not(has_key("ownerGroupName")))
     finally:
         for result_rs in record_to_delete:
             delete_result = client.delete_recordset(result_rs[0], result_rs[1], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            client.wait_until_recordset_change_status(delete_result, "Complete")
 
 
 def test_get_batch_change_failure(shared_zone_test_context):
@@ -160,15 +160,15 @@ def test_get_batch_change_with_unauthorized_user_fails(shared_zone_test_context)
         batch_change = client.create_batch_change(batch_change_input, status=202)
         completed_batch = client.wait_until_batch_change_completed(batch_change)
 
-        record_set_list = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes']]
+        record_set_list = [(change["zoneId"], change["recordSetId"]) for change in completed_batch["changes"]]
         to_delete = set(record_set_list)
 
-        error = dummy_client.get_batch_change(batch_change['id'], status=403)
-        assert_that(error, is_("User does not have access to item " + batch_change['id']))
+        error = dummy_client.get_batch_change(batch_change["id"], status=403)
+        assert_that(error, is_("User does not have access to item " + batch_change["id"]))
     finally:
         for result_rs in to_delete:
             try:
                 delete_result = client.delete_recordset(result_rs[0], result_rs[1], status=202)
-                client.wait_until_recordset_change_status(delete_result, 'Complete')
+                client.wait_until_recordset_change_status(delete_result, "Complete")
             except:
                 pass
diff --git a/modules/api/functional_test/live_tests/batch/list_batch_change_summaries_test.py b/modules/api/functional_test/live_tests/batch/list_batch_change_summaries_test.py
index 4b199fc57..07b603d28 100644
--- a/modules/api/functional_test/live_tests/batch/list_batch_change_summaries_test.py
+++ b/modules/api/functional_test/live_tests/batch/list_batch_change_summaries_test.py
@@ -1,5 +1,5 @@
 import pytest
-from hamcrest import *
+
 from utils import *
 from vinyldns_context import VinylDNSTestContext
 from vinyldns_python import VinylDNSClient
@@ -7,7 +7,10 @@ from vinyldns_python import VinylDNSClient
 
 @pytest.fixture(scope="module")
 def list_fixture(shared_zone_test_context):
-    return shared_zone_test_context.list_batch_summaries_context
+    ctx = shared_zone_test_context.list_batch_summaries_context
+    ctx.setup(shared_zone_test_context)
+    yield ctx
+    ctx.tear_down(shared_zone_test_context)
 
 
 def test_list_batch_change_summaries_success(list_fixture):
@@ -50,9 +53,9 @@ def test_list_batch_change_summaries_with_next_id(list_fixture):
     list_fixture.check_batch_change_summaries_page_accuracy(batch_change_summaries_result, size=1, start_from=1, max_items=1, next_id=2)
 
-    next_page_result = client.list_batch_change_summaries(status=200, start_from=batch_change_summaries_result['nextId'])
+    next_page_result = client.list_batch_change_summaries(status=200, start_from=batch_change_summaries_result["nextId"])
 
-    list_fixture.check_batch_change_summaries_page_accuracy(next_page_result, size=1, start_from=batch_change_summaries_result['nextId'])
+    list_fixture.check_batch_change_summaries_page_accuracy(next_page_result, size=1, start_from=batch_change_summaries_result["nextId"])
 
 
 @pytest.mark.manual_batch_review
@@ -67,7 +70,7 @@ def test_list_batch_change_summaries_with_pending_status(shared_zone_test_contex
         "changes": [
             get_change_A_AAAA_json("listing-batch-with-owner-group.non-existent-zone.", address="1.1.1.1")
         ],
-        "ownerGroupId": group['id']
+        "ownerGroupId": group["id"]
     }
 
     pending_bc = None
@@ -76,21 +79,21 @@ def test_list_batch_change_summaries_with_pending_status(shared_zone_test_contex
         batch_change_summaries_result = client.list_batch_change_summaries(status=200, approval_status="PendingReview")
 
-        for batchChange in batch_change_summaries_result['batchChanges']:
-            assert_that(batchChange['approvalStatus'], is_('PendingReview'))
-            assert_that(batchChange['status'], is_('PendingReview'))
-            assert_that(batchChange['totalChanges'], equal_to(1))
+        for batchChange in batch_change_summaries_result["batchChanges"]:
+            assert_that(batchChange["approvalStatus"], is_("PendingReview"))
+            assert_that(batchChange["status"], is_("PendingReview"))
+            assert_that(batchChange["totalChanges"], equal_to(1))
     finally:
         if pending_bc:
             rejecter = shared_zone_test_context.support_user_client
-            rejecter.reject_batch_change(pending_bc['id'], status=200)
+            rejecter.reject_batch_change(pending_bc["id"], status=200)
 
 
 def test_list_batch_change_summaries_with_list_batch_change_summaries_with_no_changes_passes():
     """
     Test successfully getting an empty list of summaries when user has no batch changes
     """
-    client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'listZeroSummariesAccessKey', 'listZeroSummariesSecretKey')
+    client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "listZeroSummariesAccessKey", "listZeroSummariesSecretKey")
 
     batch_change_summaries_result = client.list_batch_change_summaries(status=200)["batchChanges"]
     assert_that(batch_change_summaries_result, has_length(0))
@@ -103,11 +106,11 @@ def test_list_batch_change_summaries_with_deleted_record_owner_group_passes(shar
     """
     client = shared_zone_test_context.shared_zone_vinyldns_client
     temp_group = {
-        'name': 'test-list-summaries-deleted-owner-group',
-        'email': 'test@test.com',
-        'description': 'for testing that list summaries still works when record owner group is deleted',
-        'members': [{'id': 'sharedZoneUser'}],
-        'admins': [{'id': 'sharedZoneUser'}]
+        "name": "test-list-summaries-deleted-owner-group",
+        "email": "test@test.com",
+        "description": "for testing that list summaries still works when record owner group is deleted",
+        "members": [{"id": "sharedZoneUser"}],
+        "admins": [{"id": "sharedZoneUser"}]
     }
     record_to_delete = []
@@ -120,39 +123,39 @@ def test_list_batch_change_summaries_with_deleted_record_owner_group_passes(shar
             "changes": [
                 get_change_A_AAAA_json("list-batch-with-deleted-owner-group.shared.", address="1.1.1.1")
             ],
-            "ownerGroupId": group_to_delete['id']
+            "ownerGroupId": group_to_delete["id"]
         }
 
         batch_change = client.create_batch_change(batch_change_input, status=202)
         completed_batch = client.wait_until_batch_change_completed(batch_change)
 
-        record_set_list = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes']]
+        record_set_list = [(change["zoneId"], change["recordSetId"]) for change in completed_batch["changes"]]
         record_to_delete = set(record_set_list)
 
         # delete records and owner group
         temp = record_to_delete.copy()
         for result_rs in temp:
             delete_result = client.delete_recordset(result_rs[0], result_rs[1], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            client.wait_until_recordset_change_status(delete_result, "Complete")
             record_to_delete.remove(result_rs)
         temp.clear()
 
         # delete group
-        client.delete_group(group_to_delete['id'], status=200)
+        client.delete_group(group_to_delete["id"], status=200)
 
         batch_change_summaries_result = client.list_batch_change_summaries(status=200)["batchChanges"]
 
-        under_test = [item for item in batch_change_summaries_result if item['id'] == completed_batch['id']]
+        under_test = [item for item in batch_change_summaries_result if item["id"] == completed_batch["id"]]
         assert_that(under_test, has_length(1))
 
         under_test = under_test[0]
-        assert_that(under_test['ownerGroupId'], is_(group_to_delete['id']))
-        assert_that(under_test, is_not(has_key('ownerGroupName')))
+        assert_that(under_test["ownerGroupId"], is_(group_to_delete["id"]))
+        assert_that(under_test, is_not(has_key("ownerGroupName")))
     finally:
         for result_rs in record_to_delete:
             delete_result = client.delete_recordset(result_rs[0], result_rs[1], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            client.wait_until_recordset_change_status(delete_result, "Complete")
 
 
 def test_list_batch_change_summaries_with_ignore_access_true_only_shows_requesting_users_records(shared_zone_test_context):
@@ -169,7 +172,7 @@ def test_list_batch_change_summaries_with_ignore_access_true_only_shows_requesti
         "changes": [
             get_change_A_AAAA_json("ok-batch-with-owner-group.shared.", address="1.1.1.1")
         ],
-        "ownerGroupId": group['id']
+        "ownerGroupId": group["id"]
     }
     ok_record_to_delete = []
@@ -178,18 +181,18 @@ def test_list_batch_change_summaries_with_ignore_access_true_only_shows_requesti
         ok_batch_change = ok_client.create_batch_change(ok_batch_change_input, status=202)
         ok_completed_batch = ok_client.wait_until_batch_change_completed(ok_batch_change)
 
-        ok_record_set_list = [(change['zoneId'], change['recordSetId']) for change in ok_completed_batch['changes']]
+        ok_record_set_list = [(change["zoneId"], change["recordSetId"]) for change in ok_completed_batch["changes"]]
         ok_record_to_delete = set(ok_record_set_list)
 
         ok_batch_change_summaries_result = ok_client.list_batch_change_summaries(ignore_access=True, status=200)["batchChanges"]
 
-        ok_under_test = [item for item in ok_batch_change_summaries_result if (item['id'] == ok_completed_batch['id'])]
+        ok_under_test = [item for item in ok_batch_change_summaries_result if (item["id"] == ok_completed_batch["id"])]
         assert_that(ok_under_test, has_length(1))
     finally:
         for result_rs in ok_record_to_delete:
             delete_result = client.delete_recordset(result_rs[0], result_rs[1], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            client.wait_until_recordset_change_status(delete_result, "Complete")
 
 
 @pytest.mark.skip_production
@@ -205,7 +208,7 @@ def test_list_batch_change_summaries_with_pending_status(shared_zone_test_contex
         "changes": [
             get_change_A_AAAA_json("listing-batch-with-owner-group.non-existent-zone.", address="1.1.1.1")
         ],
-        "ownerGroupId": group['id']
+        "ownerGroupId": group["id"]
     }
 
     pending_bc = None
@@ -214,11 +217,11 @@ def test_list_batch_change_summaries_with_pending_status(shared_zone_test_contex
         batch_change_summaries_result = client.list_batch_change_summaries(status=200, approval_status="PendingReview")
 
-        for batchChange in batch_change_summaries_result['batchChanges']:
-            assert_that(batchChange['approvalStatus'], is_('PendingReview'))
-            assert_that(batchChange['status'], is_('PendingReview'))
-            assert_that(batchChange['totalChanges'], equal_to(1))
+        for batchChange in batch_change_summaries_result["batchChanges"]:
+            assert_that(batchChange["approvalStatus"], is_("PendingReview"))
+            assert_that(batchChange["status"], is_("PendingReview"))
+            assert_that(batchChange["totalChanges"], equal_to(1))
     finally:
         if pending_bc:
             rejecter = shared_zone_test_context.support_user_client
-            rejecter.reject_batch_change(pending_bc['id'], status=200)
+            rejecter.reject_batch_change(pending_bc["id"], status=200)
diff --git a/modules/api/functional_test/live_tests/batch/reject_batch_change_test.py b/modules/api/functional_test/live_tests/batch/reject_batch_change_test.py
index cc75e8d43..8cc2f463d 100644
--- a/modules/api/functional_test/live_tests/batch/reject_batch_change_test.py
+++ b/modules/api/functional_test/live_tests/batch/reject_batch_change_test.py
@@ -1,4 +1,5 @@
-from hamcrest import *
+import pytest
+
 from utils import *
 
 
@@ -13,25 +14,26 @@ def test_reject_pending_batch_change_success(shared_zone_test_context):
         "changes": [
             get_change_A_AAAA_json("zone.discovery.failure.", address="4.3.2.1")
         ],
-        "ownerGroupId": shared_zone_test_context.ok_group['id']
+        "ownerGroupId": shared_zone_test_context.ok_group["id"]
     }
 
     result = client.create_batch_change(batch_change_input, status=202)
-    get_batch = client.get_batch_change(result['id'])
-    assert_that(get_batch['status'], is_('PendingReview'))
-    assert_that(get_batch['approvalStatus'], is_('PendingReview'))
-    assert_that(get_batch['changes'][0]['status'], is_('NeedsReview'))
-    assert_that(get_batch['changes'][0]['validationErrors'][0]['errorType'], is_('ZoneDiscoveryError'))
+    get_batch = client.get_batch_change(result["id"])
+    assert_that(get_batch["status"], is_("PendingReview"))
+    assert_that(get_batch["approvalStatus"], is_("PendingReview"))
+    assert_that(get_batch["changes"][0]["status"], is_("NeedsReview"))
+    assert_that(get_batch["changes"][0]["validationErrors"][0]["errorType"], is_("ZoneDiscoveryError"))
 
-    rejector.reject_batch_change(result['id'], status=200)
-    get_batch = client.get_batch_change(result['id'])
+    rejector.reject_batch_change(result["id"], status=200)
+    get_batch = client.get_batch_change(result["id"])
+
+    assert_that(get_batch["status"], is_("Rejected"))
+    assert_that(get_batch["approvalStatus"], is_("ManuallyRejected"))
+    assert_that(get_batch["reviewerId"], is_("support-user-id"))
+    assert_that(get_batch["reviewerUserName"], is_("support-user"))
+    assert_that(get_batch, has_key("reviewTimestamp"))
+    assert_that(get_batch["changes"][0]["status"], is_("Rejected"))
+    assert_that(get_batch, not (has_key("cancelledTimestamp")))
 
-    assert_that(get_batch['status'], is_('Rejected'))
-    assert_that(get_batch['approvalStatus'], is_('ManuallyRejected'))
-    assert_that(get_batch['reviewerId'], is_('support-user-id'))
-    assert_that(get_batch['reviewerUserName'], is_('support-user'))
-    assert_that(get_batch, has_key('reviewTimestamp'))
-    assert_that(get_batch['changes'][0]['status'], is_('Rejected'))
-    assert_that(get_batch, not(has_key('cancelledTimestamp')))
 
 
 @pytest.mark.manual_batch_review
 def test_reject_batch_change_with_invalid_batch_change_id_fails(shared_zone_test_context):
@@ -44,6 +46,7 @@ def test_reject_batch_change_with_invalid_batch_change_id_fails(shared_zone_test
     error = client.reject_batch_change("some-id", status=404)
     assert_that(error, is_("Batch change with id some-id cannot be found"))
 
+
 @pytest.mark.manual_batch_review
 def test_reject_batch_change_with_comments_exceeding_max_length_fails(shared_zone_test_context):
     """
@@ -52,11 +55,12 @@ def test_reject_batch_change_with_comments_exceeding_max_length_fails(shared_zon
     client = shared_zone_test_context.ok_vinyldns_client
 
     reject_batch_change_input = {
-        "reviewComment": "a"*1025
+        "reviewComment": "a" * 1025
     }
-    errors = client.reject_batch_change("some-id", reject_batch_change_input, status=400)['errors']
+    errors = client.reject_batch_change("some-id", reject_batch_change_input, status=400)["errors"]
     assert_that(errors, contains_inanyorder("Comment length must not exceed 1024 characters."))
 
+
@pytest.mark.manual_batch_review
def test_reject_batch_change_fails_with_forbidden_error_for_non_system_admins(shared_zone_test_context):
    """
@@ -65,7 +69,7 @@ def
test_reject_batch_change_fails_with_forbidden_error_for_non_system_admins(sh client = shared_zone_test_context.ok_vinyldns_client batch_change_input = { "changes": [ - get_change_A_AAAA_json("no-owner-group-id.ok.", address="4.3.2.1") + get_change_A_AAAA_json(f"no-owner-group-id.ok{shared_zone_test_context.partition_id}.", address="4.3.2.1") ] } to_delete = [] @@ -73,12 +77,13 @@ def test_reject_batch_change_fails_with_forbidden_error_for_non_system_admins(sh try: result = client.create_batch_change(batch_change_input, status=202) completed_batch = client.wait_until_batch_change_completed(result) - to_delete = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes']] - error = client.reject_batch_change(completed_batch['id'], status=403) - assert_that(error, is_("User does not have access to item " + completed_batch['id'])) + to_delete = [(change["zoneId"], change["recordSetId"]) for change in completed_batch["changes"]] + error = client.reject_batch_change(completed_batch["id"], status=403) + assert_that(error, is_("User does not have access to item " + completed_batch["id"])) finally: clear_zoneid_rsid_tuple_list(to_delete, client) + @pytest.mark.manual_batch_review def test_reject_batch_change_fails_when_not_pending_approval(shared_zone_test_context): """ @@ -88,7 +93,7 @@ def test_reject_batch_change_fails_when_not_pending_approval(shared_zone_test_co rejector = shared_zone_test_context.support_user_client batch_change_input = { "changes": [ - get_change_A_AAAA_json("reject-completed-change-test.ok.", address="4.3.2.1") + get_change_A_AAAA_json(f"reject-completed-change-test.ok{shared_zone_test_context.partition_id}.", address="4.3.2.1") ] } to_delete = [] @@ -96,11 +101,9 @@ def test_reject_batch_change_fails_when_not_pending_approval(shared_zone_test_co try: result = client.create_batch_change(batch_change_input, status=202) completed_batch = client.wait_until_batch_change_completed(result) - to_delete = [(change['zoneId'], 
change['recordSetId']) for change in completed_batch['changes']] - error = rejector.reject_batch_change(completed_batch['id'], status=400) - assert_that(error, is_("Batch change " + completed_batch['id'] + + to_delete = [(change["zoneId"], change["recordSetId"]) for change in completed_batch["changes"]] + error = rejector.reject_batch_change(completed_batch["id"], status=400) + assert_that(error, is_("Batch change " + completed_batch["id"] + " is not pending review, so it cannot be rejected.")) finally: clear_zoneid_rsid_tuple_list(to_delete, client) - - diff --git a/modules/api/functional_test/live_tests/conftest.py b/modules/api/functional_test/live_tests/conftest.py index 6a0e21448..59c1e140f 100644 --- a/modules/api/functional_test/live_tests/conftest.py +++ b/modules/api/functional_test/live_tests/conftest.py @@ -1,7 +1,37 @@ +import logging +from pathlib import Path +from typing import MutableMapping + import pytest +from shared_zone_test_context import SharedZoneTestContext + +STATE_FILE = Path("testing_state.json") + +logger = logging.getLogger(__name__) + +ctx_cache: MutableMapping[str, SharedZoneTestContext] = {} + @pytest.fixture(scope="session") -def shared_zone_test_context(request): - from shared_zone_test_context import SharedZoneTestContext - return SharedZoneTestContext("tmp.out") +def shared_zone_test_context(tmp_path_factory, worker_id): + if worker_id == "master": + partition_id = "1" + else: + partition_id = str(int(worker_id.replace("gw", "")) + 1) + + if ctx_cache.get(partition_id) is not None: + return ctx_cache[partition_id] + + ctx = ctx_cache[partition_id] = SharedZoneTestContext(partition_id) + ctx.setup() + yield ctx + del ctx_cache[partition_id] + ctx.tear_down() + + +@pytest.hookimpl(tryfirst=True) +def pytest_keyboard_interrupt(): + print("cleaning up state due to interrupt") + for partition_id, context in ctx_cache.items(): + context.tear_down() diff --git a/modules/api/functional_test/live_tests/internal/status_test.py 
b/modules/api/functional_test/live_tests/internal/status_test.py index f42a2ed63..8b67b4489 100644 --- a/modules/api/functional_test/live_tests/internal/status_test.py +++ b/modules/api/functional_test/live_tests/internal/status_test.py @@ -1,3 +1,5 @@ +import copy + import pytest import time @@ -15,10 +17,10 @@ def test_get_status_success(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client result = client.get_status() - assert_that([True, False], has_item(result['processingDisabled'])) - assert_that(["green","blue"], has_item(result['color'])) - assert_that(result['keyName'], not_none()) - assert_that(result['version'], not_none()) + assert_that([True, False], has_item(result["processingDisabled"])) + assert_that(["green","blue"], has_item(result["color"])) + assert_that(result["keyName"], not_none()) + assert_that(result["version"], not_none()) @pytest.mark.serial @@ -29,48 +31,48 @@ def test_toggle_processing(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client - ok_zone = shared_zone_test_context.ok_zone + ok_zone = copy.deepcopy(shared_zone_test_context.ok_zone) # disable processing client.post_status(True) status = client.get_status() - assert_that(status['processingDisabled'], is_(True)) + assert_that(status["processingDisabled"], is_(True)) client.post_status(False) status = client.get_status() - assert_that(status['processingDisabled'], is_(False)) + assert_that(status["processingDisabled"], is_(False)) # Create changes to make sure we can process after the toggle # attempt to perform an update - ok_zone['email'] = 'foo@bar.com' + ok_zone["email"] = "foo@bar.com" zone_change_result = client.update_zone(ok_zone, status=202) # attempt to a create a record new_rs = { - 'zoneId': ok_zone['id'], - 'name': 'test-status-disable-processing', - 'type': 'A', - 'ttl': 100, - 'records': [ + "zoneId": ok_zone["id"], + "name": "test-status-disable-processing", + "type": "A", + "ttl": 100, + "records": [ { - 
'address': '10.1.1.1' + "address": "10.1.1.1" }, { - 'address': '10.2.2.2' + "address": "10.2.2.2" } ] } record_change = client.create_recordset(new_rs, status=202) - assert_that(record_change['status'], is_('Pending')) + assert_that(record_change["status"], is_("Pending")) # Make sure that the changes are processed client.wait_until_zone_change_status_synced(zone_change_result) - client.wait_until_recordset_change_status(record_change, 'Complete') + client.wait_until_recordset_change_status(record_change, "Complete") - recordset_length = len(client.list_recordsets_by_zone(ok_zone['id'])['recordSets']) + recordset_length = len(client.list_recordsets_by_zone(ok_zone["id"])["recordSets"]) - client.delete_recordset(ok_zone['id'], record_change['recordSet']['id'], status=202) - client.wait_until_recordset_deleted(ok_zone['id'], record_change['recordSet']['id']) - assert_that(client.list_recordsets_by_zone(ok_zone['id'])['recordSets'], has_length(recordset_length - 1)) + client.delete_recordset(ok_zone["id"], record_change["recordSet"]["id"], status=202) + client.wait_until_recordset_deleted(ok_zone["id"], record_change["recordSet"]["id"]) + assert_that(client.list_recordsets_by_zone(ok_zone["id"])["recordSets"], has_length(recordset_length - 1)) diff --git a/modules/api/functional_test/live_tests/list_batch_summaries_test_context.py b/modules/api/functional_test/live_tests/list_batch_summaries_test_context.py index e457f191e..441525cdd 100644 --- a/modules/api/functional_test/live_tests/list_batch_summaries_test_context.py +++ b/modules/api/functional_test/live_tests/list_batch_summaries_test_context.py @@ -1,42 +1,45 @@ -import time -from hamcrest import * from utils import * -from vinyldns_context import VinylDNSTestContext from vinyldns_python import VinylDNSClient -class ListBatchChangeSummariesTestContext(): - def __init__(self, shared_zone_test_context): - # Note: this fixture is designed so it will load summaries instead of creating them - self.client = 
VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'listBatchSummariesAccessKey', - 'listBatchSummariesSecretKey') +class ListBatchChangeSummariesTestContext: + to_delete: set = None + completed_changes: list = [] + group: object = None + is_setup: bool = False + + def __init__(self): + self.client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "listBatchSummariesAccessKey", "listBatchSummariesSecretKey") + + def setup(self, shared_zone_test_context): self.completed_changes = [] self.to_delete = None - acl_rule = generate_acl_rule('Write', userId='list-batch-summaries-id') + acl_rule = generate_acl_rule("Write", userId="list-batch-summaries-id") add_ok_acl_rules(shared_zone_test_context, [acl_rule]) initial_db_check = self.client.list_batch_change_summaries(status=200) - self.group = self.client.get_group('list-summaries-group', status=200) + self.group = self.client.get_group("list-summaries-group", status=200) + ok_zone_name = shared_zone_test_context.ok_zone batch_change_input_one = { "comments": "first", "changes": [ - get_change_CNAME_json("test-first.ok.", cname="one.") + get_change_CNAME_json(f"test-first.{ok_zone_name}", cname="one.") ] } batch_change_input_two = { "comments": "second", "changes": [ - get_change_CNAME_json("test-second.ok.", cname="two.") + get_change_CNAME_json(f"test-second.{ok_zone_name}", cname="two.") ] } batch_change_input_three = { "comments": "last", "changes": [ - get_change_CNAME_json("test-last.ok.", cname="three.") + get_change_CNAME_json(f"test-last.{ok_zone_name}", cname="three.") ] } @@ -45,61 +48,56 @@ class ListBatchChangeSummariesTestContext(): record_set_list = [] self.completed_changes = [] - if len(initial_db_check['batchChanges']) == 0: - print "\r\n!!! CREATING NEW SUMMARIES" + if len(initial_db_check["batchChanges"]) == 0: + print("\r\n!!! 
CREATING NEW SUMMARIES") # make some batch changes - for input in batch_change_inputs: - change = self.client.create_batch_change(input, status=202) + for batch_change_input in batch_change_inputs: + change = self.client.create_batch_change(batch_change_input, status=202) - if 'Review' not in change['status']: + if "Review" not in change["status"]: completed = self.client.wait_until_batch_change_completed(change) - assert_that(completed["comments"], equal_to(input["comments"])) - record_set_list += [(change['zoneId'], change['recordSetId']) for change in completed['changes']] + assert_that(completed["comments"], equal_to(batch_change_input["comments"])) + record_set_list += [(change["zoneId"], change["recordSetId"]) for change in completed["changes"]] # sleep for consistent ordering of timestamps, must be at least one second apart time.sleep(1) - self.completed_changes = self.client.list_batch_change_summaries(status=200)['batchChanges'] + self.completed_changes = self.client.list_batch_change_summaries(status=200)["batchChanges"] assert_that(len(self.completed_changes), equal_to(len(batch_change_inputs))) else: - print "\r\n!!! USING EXISTING SUMMARIES" - self.completed_changes = initial_db_check['batchChanges'] + print("\r\n!!! 
USING EXISTING SUMMARIES") + self.completed_changes = initial_db_check["batchChanges"] self.to_delete = set(record_set_list) + self.is_setup = True - def tear_down(self, shared_zone_test_context): - for result_rs in self.to_delete: - delete_result = shared_zone_test_context.ok_vinyldns_client.delete_recordset(result_rs[0], result_rs[1], - status=202) - shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(delete_result, 'Complete') - clear_ok_acl_rules(shared_zone_test_context) + def tear_down(self): + self.client.tear_down() - def check_batch_change_summaries_page_accuracy(self, summaries_page, size, next_id=False, start_from=False, - max_items=100, approval_status=False): + def check_batch_change_summaries_page_accuracy(self, summaries_page, size, next_id=False, start_from=False, max_items=100, approval_status=False): # validate fields if next_id: - assert_that(summaries_page, has_key('nextId')) + assert_that(summaries_page, has_key("nextId")) else: - assert_that(summaries_page, is_not(has_key('nextId'))) + assert_that(summaries_page, is_not(has_key("nextId"))) if start_from: - assert_that(summaries_page['startFrom'], is_(start_from)) + assert_that(summaries_page["startFrom"], is_(start_from)) else: - assert_that(summaries_page, is_not(has_key('startFrom'))) + assert_that(summaries_page, is_not(has_key("startFrom"))) if approval_status: - assert_that(summaries_page, has_key('approvalStatus')) + assert_that(summaries_page, has_key("approvalStatus")) else: - assert_that(summaries_page, is_not(has_key('approvalStatus'))) - assert_that(summaries_page['maxItems'], is_(max_items)) + assert_that(summaries_page, is_not(has_key("approvalStatus"))) + assert_that(summaries_page["maxItems"], is_(max_items)) # validate actual page - list_batch_change_summaries = summaries_page['batchChanges'] + list_batch_change_summaries = summaries_page["batchChanges"] assert_that(list_batch_change_summaries, has_length(size)) for i, summary in 
enumerate(list_batch_change_summaries): assert_that(summary["userId"], equal_to("list-batch-summaries-id")) assert_that(summary["userName"], equal_to("list-batch-summaries-user")) assert_that(summary["comments"], equal_to(self.completed_changes[i + start_from]["comments"])) - assert_that(summary["createdTimestamp"], - equal_to(self.completed_changes[i + start_from]["createdTimestamp"])) + assert_that(summary["createdTimestamp"], equal_to(self.completed_changes[i + start_from]["createdTimestamp"])) assert_that(summary["totalChanges"], equal_to(self.completed_changes[i + start_from]["totalChanges"])) assert_that(summary["status"], equal_to(self.completed_changes[i + start_from]["status"])) assert_that(summary["id"], equal_to(self.completed_changes[i + start_from]["id"])) diff --git a/modules/api/functional_test/live_tests/list_groups_test_context.py b/modules/api/functional_test/live_tests/list_groups_test_context.py index 76a410060..5de19f227 100644 --- a/modules/api/functional_test/live_tests/list_groups_test_context.py +++ b/modules/api/functional_test/live_tests/list_groups_test_context.py @@ -1,35 +1,29 @@ -from hamcrest import * from utils import * -from vinyldns_context import VinylDNSTestContext from vinyldns_python import VinylDNSClient class ListGroupsTestContext(object): - def __init__(self): - self.client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, access_key='listGroupAccessKey', - secret_key='listGroupSecretKey') - self.support_user_client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'supportUserAccessKey', - 'supportUserSecretKey') + def __init__(self, partition_id: str): + self.partition_id = partition_id + self.client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, access_key="listGroupAccessKey", secret_key="listGroupSecretKey") + self.support_user_client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "supportUserAccessKey", "supportUserSecretKey") def build(self): try: for runner in range(0, 50): new_group = { - 'name': 
"test-list-my-groups-{0:0>3}".format(runner), - 'email': 'test@test.com', - 'members': [{'id': 'list-group-user'}], - 'admins': [{'id': 'list-group-user'}] + "name": "test-list-my-groups-{0:0>3}{0}".format(runner, self.partition_id), + "email": "test@test.com", + "members": [{"id": "list-group-user"}], + "admins": [{"id": "list-group-user"}] } self.client.create_group(new_group, status=200) - except: - # teardown if there was any issue in setup - try: - self.tear_down() - except: - pass + self.tear_down() raise def tear_down(self): clear_zones(self.client) clear_groups(self.client) + self.client.tear_down() + self.support_user_client.tear_down() diff --git a/modules/api/functional_test/live_tests/list_recordsets_test_context.py b/modules/api/functional_test/live_tests/list_recordsets_test_context.py index b940127f3..466694356 100644 --- a/modules/api/functional_test/live_tests/list_recordsets_test_context.py +++ b/modules/api/functional_test/live_tests/list_recordsets_test_context.py @@ -1,89 +1,88 @@ -from hamcrest import * from utils import * -from vinyldns_context import VinylDNSTestContext from vinyldns_python import VinylDNSClient class ListRecordSetsTestContext(object): - def __init__(self): - self.client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'listRecordsAccessKey', 'listRecordsSecretKey') + def __init__(self, partition_id: str): + self.partition_id = partition_id + self.client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "listRecordsAccessKey", "listRecordsSecretKey") self.zone = None self.all_records = [] self.group = None - get_zone = self.client.get_zone_by_name('list-records.', status=(200, 404)) - if get_zone and 'zone' in get_zone: - self.zone = get_zone['zone'] - self.all_records = self.client.list_recordsets_by_zone(self.zone['id'])['recordSets'] - my_groups = self.client.list_my_groups(group_name_filter='list-records-group') - if my_groups and 'groups' in my_groups and len(my_groups['groups']) > 0: - self.group = 
my_groups['groups'][0] + get_zone = self.client.get_zone_by_name(f"list-records{partition_id}.", status=(200, 404)) + if get_zone and "zone" in get_zone: + self.zone = get_zone["zone"] + self.all_records = self.client.list_recordsets_by_zone(self.zone["id"])["recordSets"] + my_groups = self.client.list_my_groups(group_name_filter="list-records-group") + if my_groups and "groups" in my_groups and len(my_groups["groups"]) > 0: + self.group = my_groups["groups"][0] def build(self): - # Only call this if the context needs to be built - self.tear_down() + partition_id = self.partition_id group = { - 'name': 'list-records-group', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [{'id': 'list-records-user'}], - 'admins': [{'id': 'list-records-user'}] + "name": f"list-records-group{partition_id}", + "email": "test@test.com", + "description": "this is a description", + "members": [{"id": "list-records-user"}], + "admins": [{"id": "list-records-user"}] } self.group = self.client.create_group(group, status=200) zone_change = self.client.create_zone( { - 'name': 'list-records.', - 'email': 'test@test.com', - 'shared': False, - 'adminGroupId': self.group['id'], - 'isTest': True, - 'backendId': 'func-test-backend' + "name": f"list-records{partition_id}.", + "email": "test@test.com", + "shared": False, + "adminGroupId": self.group["id"], + "isTest": True, + "backendId": "func-test-backend" }, status=202) - self.client.wait_until_zone_active(zone_change[u'zone'][u'id']) - self.zone = zone_change[u'zone'] - self.all_records = self.client.list_recordsets_by_zone(self.zone['id'])['recordSets'] + self.client.wait_until_zone_active(zone_change["zone"]["id"]) + self.zone = zone_change["zone"] + self.all_records = self.client.list_recordsets_by_zone(self.zone["id"])["recordSets"] def tear_down(self): clear_zones(self.client) clear_groups(self.client) + self.client.tear_down() - def check_recordsets_page_accuracy(self, list_results_page, size, offset, 
nextId=False, startFrom=False, maxItems=100, recordTypeFilter=False, nameSort="ASC"): + def check_recordsets_page_accuracy(self, list_results_page, size, offset, next_id=False, start_from=False, max_items=100, record_type_filter=False, name_sort="ASC"): # validate fields - if nextId: - assert_that(list_results_page, has_key('nextId')) + if next_id: + assert_that(list_results_page, has_key("nextId")) else: - assert_that(list_results_page, is_not(has_key('nextId'))) - if startFrom: - assert_that(list_results_page['startFrom'], is_(startFrom)) + assert_that(list_results_page, is_not(has_key("nextId"))) + if start_from: + assert_that(list_results_page["startFrom"], is_(start_from)) else: - assert_that(list_results_page, is_not(has_key('startFrom'))) - if recordTypeFilter: - assert_that(list_results_page, has_key('recordTypeFilter')) + assert_that(list_results_page, is_not(has_key("startFrom"))) + if record_type_filter: + assert_that(list_results_page, has_key("recordTypeFilter")) else: - assert_that(list_results_page, is_not(has_key('recordTypeFilter'))) - assert_that(list_results_page['maxItems'], is_(maxItems)) - assert_that(list_results_page['nameSort'], is_(nameSort)) + assert_that(list_results_page, is_not(has_key("recordTypeFilter"))) + assert_that(list_results_page["maxItems"], is_(max_items)) + assert_that(list_results_page["nameSort"], is_(name_sort)) # validate actual page - list_results_recordsets_page = list_results_page['recordSets'] + list_results_recordsets_page = list_results_page["recordSets"] assert_that(list_results_recordsets_page, has_length(size)) for i in range(len(list_results_recordsets_page)): - assert_that(list_results_recordsets_page[i]['name'], is_(self.all_records[i+offset]['name'])) - verify_recordset(list_results_recordsets_page[i], self.all_records[i+offset]) - assert_that(list_results_recordsets_page[i]['accessLevel'], is_('Delete')) + assert_that(list_results_recordsets_page[i]["name"], is_(self.all_records[i + offset]["name"])) + 
verify_recordset(list_results_recordsets_page[i], self.all_records[i + offset]) + assert_that(list_results_recordsets_page[i]["accessLevel"], is_("Delete")) - def check_recordsets_parameters(self, list_results_page, nextId=False, startFrom=False, maxItems=100, recordTypeFilter=False, nameSort="ASC"): + def check_recordsets_parameters(self, list_results_page, next_id=False, start_from=False, max_items=100, record_type_filter=False, name_sort="ASC"): # validate fields - if nextId: - assert_that(list_results_page, has_key('nextId')) + if next_id: + assert_that(list_results_page, has_key("nextId")) else: - assert_that(list_results_page, is_not(has_key('nextId'))) - if startFrom: - assert_that(list_results_page['startFrom'], is_(startFrom)) + assert_that(list_results_page, is_not(has_key("nextId"))) + if start_from: + assert_that(list_results_page["startFrom"], is_(start_from)) else: - assert_that(list_results_page, is_not(has_key('startFrom'))) - if recordTypeFilter: - assert_that(list_results_page, has_key('recordTypeFilter')) + assert_that(list_results_page, is_not(has_key("startFrom"))) + if record_type_filter: + assert_that(list_results_page, has_key("recordTypeFilter")) else: - assert_that(list_results_page, is_not(has_key('recordTypeFilter'))) - assert_that(list_results_page['maxItems'], is_(maxItems)) - assert_that(list_results_page['nameSort'], is_(nameSort)) + assert_that(list_results_page, is_not(has_key("recordTypeFilter"))) + assert_that(list_results_page["maxItems"], is_(max_items)) + assert_that(list_results_page["nameSort"], is_(name_sort)) diff --git a/modules/api/functional_test/live_tests/list_zones_test_context.py b/modules/api/functional_test/live_tests/list_zones_test_context.py index 9d98d6005..854769c17 100644 --- a/modules/api/functional_test/live_tests/list_zones_test_context.py +++ b/modules/api/functional_test/live_tests/list_zones_test_context.py @@ -1,78 +1,77 @@ -from hamcrest import * from utils import * -from vinyldns_context import 
VinylDNSTestContext from vinyldns_python import VinylDNSClient class ListZonesTestContext(object): - def __init__(self): - self.client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'listZonesAccessKey', 'listZonesSecretKey') + def __init__(self, partition_id): + self.partition_id = partition_id + self.client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "listZonesAccessKey", "listZonesSecretKey") def build(self): - self.tear_down() + partition_id = self.partition_id group = { - 'name': 'list-zones-group', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [{'id': 'list-zones-user'}], - 'admins': [{'id': 'list-zones-user'}] + "name": f"list-zones-group{partition_id}", + "email": "test@test.com", + "description": "this is a description", + "members": [{"id": "list-zones-user"}], + "admins": [{"id": "list-zones-user"}] } list_zones_group = self.client.create_group(group, status=200) search_zone_1_change = self.client.create_zone( { - 'name': 'list-zones-test-searched-1.', - 'email': 'test@test.com', - 'shared': False, - 'adminGroupId': list_zones_group['id'], - 'isTest': True, - 'backendId': 'func-test-backend' + "name": f"list-zones-test-searched-1{partition_id}.", + "email": "test@test.com", + "shared": False, + "adminGroupId": list_zones_group["id"], + "isTest": True, + "backendId": "func-test-backend" }, status=202) search_zone_2_change = self.client.create_zone( { - 'name': 'list-zones-test-searched-2.', - 'email': 'test@test.com', - 'shared': False, - 'adminGroupId': list_zones_group['id'], - 'isTest': True, - 'backendId': 'func-test-backend' + "name": f"list-zones-test-searched-2{partition_id}.", + "email": "test@test.com", + "shared": False, + "adminGroupId": list_zones_group["id"], + "isTest": True, + "backendId": "func-test-backend" }, status=202) search_zone_3_change = self.client.create_zone( { - 'name': 'list-zones-test-searched-3.', - 'email': 'test@test.com', - 'shared': False, - 'adminGroupId': 
list_zones_group['id'], - 'isTest': True, - 'backendId': 'func-test-backend' + "name": f"list-zones-test-searched-3{partition_id}.", + "email": "test@test.com", + "shared": False, + "adminGroupId": list_zones_group["id"], + "isTest": True, + "backendId": "func-test-backend" }, status=202) non_search_zone_1_change = self.client.create_zone( { - 'name': 'list-zones-test-unfiltered-1.', - 'email': 'test@test.com', - 'shared': False, - 'adminGroupId': list_zones_group['id'], - 'isTest': True, - 'backendId': 'func-test-backend' + "name": f"list-zones-test-unfiltered-1{partition_id}.", + "email": "test@test.com", + "shared": False, + "adminGroupId": list_zones_group["id"], + "isTest": True, + "backendId": "func-test-backend" }, status=202) non_search_zone_2_change = self.client.create_zone( { - 'name': 'list-zones-test-unfiltered-2.', - 'email': 'test@test.com', - 'shared': False, - 'adminGroupId': list_zones_group['id'], - 'isTest': True, - 'backendId': 'func-test-backend' + "name": f"list-zones-test-unfiltered-2{partition_id}.", + "email": "test@test.com", + "shared": False, + "adminGroupId": list_zones_group["id"], + "isTest": True, + "backendId": "func-test-backend" }, status=202) - zone_changes = [search_zone_1_change, search_zone_2_change, search_zone_3_change, non_search_zone_1_change, - non_search_zone_2_change] + zone_changes = [search_zone_1_change, search_zone_2_change, search_zone_3_change, non_search_zone_1_change, non_search_zone_2_change] for change in zone_changes: - self.client.wait_until_zone_active(change[u'zone'][u'id']) + self.client.wait_until_zone_active(change["zone"]["id"]) def tear_down(self): clear_zones(self.client) clear_groups(self.client) + self.client.tear_down() diff --git a/modules/api/functional_test/live_tests/membership/create_group_test.py b/modules/api/functional_test/live_tests/membership/create_group_test.py index 4a989489a..660f5c773 100644 --- a/modules/api/functional_test/live_tests/membership/create_group_test.py +++ 
b/modules/api/functional_test/live_tests/membership/create_group_test.py @@ -12,28 +12,28 @@ def test_create_group_success(shared_zone_test_context): try: new_group = { - 'name': 'test-create-group-success', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "name": "test-create-group-success", + "email": "test@test.com", + "description": "this is a description", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } result = client.create_group(new_group, status=200) - assert_that(result['name'], is_(new_group['name'])) - assert_that(result['email'], is_(new_group['email'])) - assert_that(result['description'], is_(new_group['description'])) - assert_that(result['status'], is_('Active')) - assert_that(result['created'], not_none()) - assert_that(result['id'], not_none()) - assert_that(result['members'], has_length(1)) - assert_that(result['members'][0]['id'], is_('ok')) - assert_that(result['admins'], has_length(1)) - assert_that(result['admins'][0]['id'], is_('ok')) + assert_that(result["name"], is_(new_group["name"])) + assert_that(result["email"], is_(new_group["email"])) + assert_that(result["description"], is_(new_group["description"])) + assert_that(result["status"], is_("Active")) + assert_that(result["created"], not_none()) + assert_that(result["id"], not_none()) + assert_that(result["members"], has_length(1)) + assert_that(result["members"][0]["id"], is_("ok")) + assert_that(result["admins"], has_length(1)) + assert_that(result["admins"][0]["id"], is_("ok")) finally: if result: - client.delete_group(result['id'], status=(200, 404)) + client.delete_group(result["id"], status=(200, 404)) def test_creator_is_an_admin(shared_zone_test_context): @@ -45,28 +45,28 @@ def test_creator_is_an_admin(shared_zone_test_context): try: new_group = { - 'name': 'test-create-group-success', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [{'id': 'ok'}], - 
'admins': [] + "name": "test-create-group-success", + "email": "test@test.com", + "description": "this is a description", + "members": [{"id": "ok"}], + "admins": [] } result = client.create_group(new_group, status=200) - assert_that(result['name'], is_(new_group['name'])) - assert_that(result['email'], is_(new_group['email'])) - assert_that(result['description'], is_(new_group['description'])) - assert_that(result['status'], is_('Active')) - assert_that(result['created'], not_none()) - assert_that(result['id'], not_none()) - assert_that(result['members'], has_length(1)) - assert_that(result['members'][0]['id'], is_('ok')) - assert_that(result['admins'], has_length(1)) - assert_that(result['admins'][0]['id'], is_('ok')) + assert_that(result["name"], is_(new_group["name"])) + assert_that(result["email"], is_(new_group["email"])) + assert_that(result["description"], is_(new_group["description"])) + assert_that(result["status"], is_("Active")) + assert_that(result["created"], not_none()) + assert_that(result["id"], not_none()) + assert_that(result["members"], has_length(1)) + assert_that(result["members"][0]["id"], is_("ok")) + assert_that(result["admins"], has_length(1)) + assert_that(result["admins"][0]["id"], is_("ok")) finally: if result: - client.delete_group(result['id'], status=(200, 404)) + client.delete_group(result["id"], status=(200, 404)) def test_create_group_without_name(shared_zone_test_context): @@ -76,12 +76,12 @@ def test_create_group_without_name(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client new_group = { - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "email": "test@test.com", + "description": "this is a description", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } - errors = client.create_group(new_group, status=400)['errors'] + errors = client.create_group(new_group, status=400)["errors"] assert_that(errors[0], is_("Missing 
Group.name")) @@ -92,12 +92,12 @@ def test_create_group_without_email(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client new_group = { - 'name': 'without-email', - 'description': 'this is a description', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "name": "without-email", + "description": "this is a description", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } - errors = client.create_group(new_group, status=400)['errors'] + errors = client.create_group(new_group, status=400)["errors"] assert_that(errors[0], is_("Missing Group.email")) @@ -108,11 +108,11 @@ def test_create_group_without_name_or_email(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client new_group = { - 'description': 'this is a description', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "description": "this is a description", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } - errors = client.create_group(new_group, status=400)['errors'] + errors = client.create_group(new_group, status=400)["errors"] assert_that(errors, has_length(2)) assert_that(errors, contains_inanyorder( "Missing Group.name", @@ -127,11 +127,11 @@ def test_create_group_without_members_or_admins(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client new_group = { - 'name': 'some-group-name', - 'email': 'test@test.com', - 'description': 'this is a description' + "name": "some-group-name", + "email": "test@test.com", + "description": "this is a description" } - errors = client.create_group(new_group, status=400)['errors'] + errors = client.create_group(new_group, status=400)["errors"] assert_that(errors, has_length(2)) assert_that(errors, contains_inanyorder( "Missing Group.members", @@ -148,25 +148,25 @@ def test_create_group_adds_admins_as_members(shared_zone_test_context): try: new_group = { - 'name': 'test-create-group-add-admins-as-members', - 'email': 'test@test.com', - 'description': 'this is a 
description', - 'members': [], - 'admins': [{'id': 'ok'}] + "name": "test-create-group-add-admins-as-members", + "email": "test@test.com", + "description": "this is a description", + "members": [], + "admins": [{"id": "ok"}] } result = client.create_group(new_group, status=200) - assert_that(result['name'], is_(new_group['name'])) - assert_that(result['email'], is_(new_group['email'])) - assert_that(result['description'], is_(new_group['description'])) - assert_that(result['status'], is_('Active')) - assert_that(result['created'], not_none()) - assert_that(result['id'], not_none()) - assert_that(result['members'][0]['id'], is_('ok')) - assert_that(result['admins'][0]['id'], is_('ok')) + assert_that(result["name"], is_(new_group["name"])) + assert_that(result["email"], is_(new_group["email"])) + assert_that(result["description"], is_(new_group["description"])) + assert_that(result["status"], is_("Active")) + assert_that(result["created"], not_none()) + assert_that(result["id"], not_none()) + assert_that(result["members"][0]["id"], is_("ok")) + assert_that(result["admins"][0]["id"], is_("ok")) finally: if result: - client.delete_group(result['id'], status=(200, 404)) + client.delete_group(result["id"], status=(200, 404)) def test_create_group_duplicate(shared_zone_test_context): @@ -177,11 +177,11 @@ def test_create_group_duplicate(shared_zone_test_context): result = None try: new_group = { - 'name': 'test-create-group-duplicate', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "name": "test-create-group-duplicate", + "email": "test@test.com", + "description": "this is a description", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } result = client.create_group(new_group, status=200) @@ -189,7 +189,7 @@ def test_create_group_duplicate(shared_zone_test_context): finally: if result: - client.delete_group(result['id'], status=(200, 404)) + client.delete_group(result["id"], 
status=(200, 404)) def test_create_group_no_members(shared_zone_test_context): @@ -201,19 +201,19 @@ def test_create_group_no_members(shared_zone_test_context): try: new_group = { - 'name': 'test-create-group-no-members', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [], - 'admins': [] + "name": "test-create-group-no-members", + "email": "test@test.com", + "description": "this is a description", + "members": [], + "admins": [] } result = client.create_group(new_group, status=200) - assert_that(result['members'][0]['id'], is_('ok')) - assert_that(result['admins'][0]['id'], is_('ok')) + assert_that(result["members"][0]["id"], is_("ok")) + assert_that(result["admins"][0]["id"], is_("ok")) finally: if result: - client.delete_group(result['id'], status=(200, 404)) + client.delete_group(result["id"], status=(200, 404)) def test_create_group_adds_admins_to_member_list(shared_zone_test_context): @@ -225,16 +225,16 @@ def test_create_group_adds_admins_to_member_list(shared_zone_test_context): try: new_group = { - 'name': 'test-create-group-add-admins-to-members', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'dummy'}] + "name": "test-create-group-add-admins-to-members", + "email": "test@test.com", + "description": "this is a description", + "members": [{"id": "ok"}], + "admins": [{"id": "dummy"}] } result = client.create_group(new_group, status=200) - assert_that(map(lambda x: x['id'], result['members']), contains('ok', 'dummy')) - assert_that(result['admins'][0]['id'], is_('dummy')) + assert_that([x["id"] for x in result["members"]], contains_exactly("ok", "dummy")) + assert_that(result["admins"][0]["id"], is_("dummy")) finally: if result: - client.delete_group(result['id'], status=(200, 404)) + client.delete_group(result["id"], status=(200, 404)) diff --git a/modules/api/functional_test/live_tests/membership/delete_group_test.py 
b/modules/api/functional_test/live_tests/membership/delete_group_test.py index 61609a690..a09f836aa 100644 --- a/modules/api/functional_test/live_tests/membership/delete_group_test.py +++ b/modules/api/functional_test/live_tests/membership/delete_group_test.py @@ -16,18 +16,18 @@ def test_delete_group_success(shared_zone_test_context): saved_group = None try: new_group = { - 'name': 'test-delete-group-success', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [ { 'id': 'ok'} ], - 'admins': [ { 'id': 'ok'} ] + "name": "test-delete-group-success", + "email": "test@test.com", + "description": "this is a description", + "members": [ { "id": "ok"} ], + "admins": [ { "id": "ok"} ] } saved_group = client.create_group(new_group, status=200) - result = client.delete_group(saved_group['id'], status=200) - assert_that(result['status'], is_('Deleted')) + result = client.delete_group(saved_group["id"], status=200) + assert_that(result["status"], is_("Deleted")) finally: if result: - client.delete_group(saved_group['id'], status=(200,404)) + client.delete_group(saved_group["id"], status=(200, 404)) def test_delete_group_not_found(shared_zone_test_context): @@ -35,7 +35,7 @@ def test_delete_group_not_found(shared_zone_test_context): Tests that deleting a group that does not exist returns a 404 """ client = shared_zone_test_context.ok_vinyldns_client - client.delete_group('doesntexist', status=404) + client.delete_group("doesntexist", status=404) def test_delete_group_that_is_already_deleted(shared_zone_test_context): @@ -48,20 +48,20 @@ def test_delete_group_that_is_already_deleted(shared_zone_test_context): try: new_group = { - 'name': 'test-delete-group-already', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [ { 'id': 'ok'} ], - 'admins': [ { 'id': 'ok'} ] + "name": "test-delete-group-already", + "email": "test@test.com", + "description": "this is a description", + "members": [ { "id": "ok"} ], + "admins": [ 
{ "id": "ok"} ] } saved_group = client.create_group(new_group, status=200) - client.delete_group(saved_group['id'], status=200) - client.delete_group(saved_group['id'], status=404) + client.delete_group(saved_group["id"], status=200) + client.delete_group(saved_group["id"], status=404) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200,404)) + client.delete_group(saved_group["id"], status=(200, 404)) def test_delete_admin_group(shared_zone_test_context): @@ -75,52 +75,52 @@ def test_delete_admin_group(shared_zone_test_context): try: #Create group new_group = { - 'name': 'test-delete-group-already', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [ { 'id': 'ok'} ], - 'admins': [ { 'id': 'ok'} ] + "name": "test-delete-group-already", + "email": "test@test.com", + "description": "this is a description", + "members": [ { "id": "ok"} ], + "admins": [ { "id": "ok"} ] } result_group = client.create_group(new_group, status=200) - print result_group + print(result_group) #Create zone with that group ID as admin zone = { - 'name': 'one-time.', - 'email': 'test@test.com', - 'adminGroupId': result_group['id'], - 'connection': { - 'name': 'vinyldns.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "name": "one-time.", + "email": "test@test.com", + "adminGroupId": result_group["id"], + "connection": { + "name": "vinyldns.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip }, - 'transferConnection': { - 'name': 'vinyldns.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "transferConnection": { + "name": "vinyldns.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip } } 
result = client.create_zone(zone, status=202) - result_zone = result['zone'] - client.wait_until_zone_active(result[u'zone'][u'id']) + result_zone = result["zone"] + client.wait_until_zone_active(result["zone"]["id"]) - client.delete_group(result_group['id'], status=400) + client.delete_group(result_group["id"], status=400) #Delete zone - client.delete_zone(result_zone['id'], status=202) - client.wait_until_zone_deleted(result_zone['id']) + client.delete_zone(result_zone["id"], status=202) + client.wait_until_zone_deleted(result_zone["id"]) #Should now be able to delete group - client.delete_group(result_group['id'], status=200) + client.delete_group(result_group["id"], status=200) finally: if result_zone: - client.delete_zone(result_zone['id'], status=(202,404)) + client.delete_zone(result_zone["id"], status=(202, 404)) if result_group: - client.delete_group(result_group['id'], status=(200,404)) + client.delete_group(result_group["id"], status=(200, 404)) def test_delete_group_not_authorized(shared_zone_test_context): """ @@ -130,14 +130,14 @@ def test_delete_group_not_authorized(shared_zone_test_context): not_admin_client = shared_zone_test_context.dummy_vinyldns_client try: new_group = { - 'name': 'test-delete-group-not-authorized', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [ { 'id': 'ok'} ], - 'admins': [ { 'id': 'ok'} ] + "name": "test-delete-group-not-authorized", + "email": "test@test.com", + "description": "this is a description", + "members": [ { "id": "ok"} ], + "admins": [ { "id": "ok"} ] } saved_group = ok_client.create_group(new_group, status=200) - not_admin_client.delete_group(saved_group['id'], status=403) + not_admin_client.delete_group(saved_group["id"], status=403) finally: if saved_group: - ok_client.delete_group(saved_group['id'], status=(200,404)) + ok_client.delete_group(saved_group["id"], status=(200, 404)) diff --git a/modules/api/functional_test/live_tests/membership/get_group_changes_test.py 
b/modules/api/functional_test/live_tests/membership/get_group_changes_test.py index 656df2174..268371d39 100644 --- a/modules/api/functional_test/live_tests/membership/get_group_changes_test.py +++ b/modules/api/functional_test/live_tests/membership/get_group_changes_test.py @@ -6,8 +6,8 @@ from hamcrest import * @pytest.fixture(scope="module") def group_activity_context(request, shared_zone_test_context): return { - 'created_group': shared_zone_test_context.group_activity_created, - 'updated_groups': shared_zone_test_context.group_activity_updated + "created_group": shared_zone_test_context.group_activity_created, + "updated_groups": shared_zone_test_context.group_activity_updated } @@ -18,26 +18,26 @@ def test_list_group_activity_start_from_success(group_activity_context, shared_z import json client = shared_zone_test_context.ok_vinyldns_client - created_group = group_activity_context['created_group'] - updated_groups = group_activity_context['updated_groups'] + created_group = group_activity_context["created_group"] + updated_groups = group_activity_context["updated_groups"] # updated groups holds all the groups just updated, not the original group that has no dummy user # [0] = dummy000; [1] = dummy001; [2] = dummy002; [3] = dummy003, etc. 
# we grab 3 items, which when sorted by most recent will give the 3 most recent items - page_one = client.get_group_changes(created_group['id'], max_items=3, status=200) + page_one = client.get_group_changes(created_group["id"], max_items=3, status=200) # our start from will align with the created on the 3rd change in the list start_from_index = 2 - start_from = page_one['changes'][start_from_index]['created'] # start from a known good timestamp + start_from = page_one["changes"][start_from_index]["created"] # start from a known good timestamp # now, we say give me all changes since the start_from, which should yield 8-7-6-5-4 - result = client.get_group_changes(created_group['id'], start_from=start_from, max_items=5, status=200) + result = client.get_group_changes(created_group["id"], start_from=start_from, max_items=5, status=200) - assert_that(result['changes'], has_length(5)) - assert_that(result['maxItems'], is_(5)) - assert_that(result['startFrom'], is_(start_from)) - assert_that(result['nextId'], is_not(none())) + assert_that(result["changes"], has_length(5)) + assert_that(result["maxItems"], is_(5)) + assert_that(result["startFrom"], is_(start_from)) + assert_that(result["nextId"], is_not(none())) # we should have, in order, changes 8 7 6 5 4 # changes that came in worked off... 
@@ -45,8 +45,8 @@ def test_list_group_activity_start_from_success(group_activity_context, shared_z expected_start = 6 for i in range(0, 5): # The new group should be the later, the group it is replacing should be one back - assert_that(result['changes'][i]['newGroup'], is_(updated_groups[expected_start - i])) - assert_that(result['changes'][i]['oldGroup'], is_(updated_groups[expected_start - i - 1])) + assert_that(result["changes"][i]["newGroup"], is_(updated_groups[expected_start - i])) + assert_that(result["changes"][i]["oldGroup"], is_(updated_groups[expected_start - i - 1])) def test_list_group_activity_start_from_fake_time(group_activity_context, shared_zone_test_context): @@ -55,21 +55,21 @@ def test_list_group_activity_start_from_fake_time(group_activity_context, shared """ client = shared_zone_test_context.ok_vinyldns_client - created_group = group_activity_context['created_group'] - updated_groups = group_activity_context['updated_groups'] - start_from = '9999999999999' # start from a random timestamp far in the future + created_group = group_activity_context["created_group"] + updated_groups = group_activity_context["updated_groups"] + start_from = "9999999999999" # start from a random timestamp far in the future - result = client.get_group_changes(created_group['id'], start_from=start_from, max_items=5, status=200) + result = client.get_group_changes(created_group["id"], start_from=start_from, max_items=5, status=200) # there are 10 updates, proceeded by 1 create - assert_that(result['changes'], has_length(5)) - assert_that(result['maxItems'], is_(5)) - assert_that(result['startFrom'], is_(start_from)) - assert_that(result['nextId'], is_not(none())) + assert_that(result["changes"], has_length(5)) + assert_that(result["maxItems"], is_(5)) + assert_that(result["startFrom"], is_(start_from)) + assert_that(result["nextId"], is_not(none())) for i in range(0, 5): - assert_that(result['changes'][i]['newGroup'], is_(updated_groups[9 - i])) - 
assert_that(result['changes'][i]['oldGroup'], is_(updated_groups[9 - i - 1])) + assert_that(result["changes"][i]["newGroup"], is_(updated_groups[9 - i])) + assert_that(result["changes"][i]["oldGroup"], is_(updated_groups[9 - i - 1])) def test_list_group_activity_max_item_success(group_activity_context, shared_zone_test_context): @@ -78,20 +78,20 @@ def test_list_group_activity_max_item_success(group_activity_context, shared_zon """ client = shared_zone_test_context.ok_vinyldns_client - created_group = group_activity_context['created_group'] - updated_groups = group_activity_context['updated_groups'] + created_group = group_activity_context["created_group"] + updated_groups = group_activity_context["updated_groups"] - result = client.get_group_changes(created_group['id'], max_items=4, status=200) + result = client.get_group_changes(created_group["id"], max_items=4, status=200) # there are 200 updates, and 1 create - assert_that(result['changes'], has_length(4)) - assert_that(result['maxItems'], is_(4)) - assert_that(result, is_not(has_key('startFrom'))) - assert_that(result['nextId'], is_not(none())) + assert_that(result["changes"], has_length(4)) + assert_that(result["maxItems"], is_(4)) + assert_that(result, is_not(has_key("startFrom"))) + assert_that(result["nextId"], is_not(none())) for i in range(0, 4): - assert_that(result['changes'][i]['newGroup'], is_(updated_groups[9 - i])) - assert_that(result['changes'][i]['oldGroup'], is_(updated_groups[9 - i - 1])) + assert_that(result["changes"][i]["newGroup"], is_(updated_groups[9 - i])) + assert_that(result["changes"][i]["oldGroup"], is_(updated_groups[9 - i - 1])) def test_list_group_activity_max_item_zero(group_activity_context, shared_zone_test_context): @@ -100,8 +100,8 @@ def test_list_group_activity_max_item_zero(group_activity_context, shared_zone_t """ client = shared_zone_test_context.ok_vinyldns_client - created_group = group_activity_context['created_group'] - client.get_group_changes(created_group['id'], 
max_items=0, status=400) + created_group = group_activity_context["created_group"] + client.get_group_changes(created_group["id"], max_items=0, status=400) def test_list_group_activity_max_item_over_1000(group_activity_context, shared_zone_test_context): @@ -110,8 +110,8 @@ def test_list_group_activity_max_item_over_1000(group_activity_context, shared_z """ client = shared_zone_test_context.ok_vinyldns_client - created_group = group_activity_context['created_group'] - client.get_group_changes(created_group['id'], max_items=1001, status=400) + created_group = group_activity_context["created_group"] + client.get_group_changes(created_group["id"], max_items=1001, status=400) def test_get_group_changes_paging(group_activity_context, shared_zone_test_context): @@ -120,39 +120,39 @@ def test_get_group_changes_paging(group_activity_context, shared_zone_test_conte """ client = shared_zone_test_context.ok_vinyldns_client - created_group = group_activity_context['created_group'] - updated_groups = group_activity_context['updated_groups'] + created_group = group_activity_context["created_group"] + updated_groups = group_activity_context["updated_groups"] - page_one = client.get_group_changes(created_group['id'], max_items=5, status=200) - page_two = client.get_group_changes(created_group['id'], start_from=page_one['nextId'], max_items=5, status=200) - page_three = client.get_group_changes(created_group['id'], start_from=page_two['nextId'], max_items=5, status=200) + page_one = client.get_group_changes(created_group["id"], max_items=5, status=200) + page_two = client.get_group_changes(created_group["id"], start_from=page_one["nextId"], max_items=5, status=200) + page_three = client.get_group_changes(created_group["id"], start_from=page_two["nextId"], max_items=5, status=200) - assert_that(page_one['changes'], has_length(5)) - assert_that(page_one['maxItems'], is_(5)) - assert_that(page_one, is_not(has_key('startFrom'))) - assert_that(page_one['nextId'], is_not(none())) + 
assert_that(page_one["changes"], has_length(5)) + assert_that(page_one["maxItems"], is_(5)) + assert_that(page_one, is_not(has_key("startFrom"))) + assert_that(page_one["nextId"], is_not(none())) for i in range(0, 5): - assert_that(page_one['changes'][i]['newGroup'], is_(updated_groups[9 - i])) - assert_that(page_one['changes'][i]['oldGroup'], is_(updated_groups[9 - i - 1])) + assert_that(page_one["changes"][i]["newGroup"], is_(updated_groups[9 - i])) + assert_that(page_one["changes"][i]["oldGroup"], is_(updated_groups[9 - i - 1])) - assert_that(page_two['changes'], has_length(5)) - assert_that(page_two['maxItems'], is_(5)) - assert_that(page_two['startFrom'], is_(page_one['nextId'])) - assert_that(page_two['nextId'], is_not(none())) + assert_that(page_two["changes"], has_length(5)) + assert_that(page_two["maxItems"], is_(5)) + assert_that(page_two["startFrom"], is_(page_one["nextId"])) + assert_that(page_two["nextId"], is_not(none())) # Do not compare the last item on the second page, as it is touches the original group for i in range(5, 9): - assert_that(page_two['changes'][i - 5]['newGroup'], is_(updated_groups[9 - i])) - assert_that(page_two['changes'][i - 5]['oldGroup'], is_(updated_groups[9 - i - 1])) + assert_that(page_two["changes"][i - 5]["newGroup"], is_(updated_groups[9 - i])) + assert_that(page_two["changes"][i - 5]["oldGroup"], is_(updated_groups[9 - i - 1])) # Last page should be only the very last change - assert_that(page_three['changes'], has_length(1)) - assert_that(page_three['maxItems'], is_(5)) - assert_that(page_three['startFrom'], is_(page_two['nextId'])) - assert_that(page_three, is_not(has_key('nextId'))) + assert_that(page_three["changes"], has_length(1)) + assert_that(page_three["maxItems"], is_(5)) + assert_that(page_three["startFrom"], is_(page_two["nextId"])) + assert_that(page_three, is_not(has_key("nextId"))) - assert_that(page_three['changes'][0]['newGroup'], is_(created_group)) + assert_that(page_three["changes"][0]["newGroup"], 
is_(created_group)) def test_get_group_changes_unauthed(shared_zone_test_context): @@ -165,15 +165,15 @@ def test_get_group_changes_unauthed(shared_zone_test_context): saved_group = None try: new_group = { - 'name': 'test-list-group-admins-unauthed-2', - 'email': 'test@test.com', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "name": "test-list-group-admins-unauthed-2", + "email": "test@test.com", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } saved_group = client.create_group(new_group, status=200) - dummy_client.get_group_changes(saved_group['id'], status=403) - client.get_group_changes(saved_group['id'], status=200) + dummy_client.get_group_changes(saved_group["id"], status=403) + client.get_group_changes(saved_group["id"], status=200) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200, 404)) + client.delete_group(saved_group["id"], status=(200, 404)) diff --git a/modules/api/functional_test/live_tests/membership/get_group_test.py b/modules/api/functional_test/live_tests/membership/get_group_test.py index 8d51b9073..994d63609 100644 --- a/modules/api/functional_test/live_tests/membership/get_group_test.py +++ b/modules/api/functional_test/live_tests/membership/get_group_test.py @@ -14,25 +14,25 @@ def test_get_group_success(shared_zone_test_context): saved_group = None try: new_group = { - 'name': 'test-get-group-success', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [ { 'id': 'ok'} ], - 'admins': [ { 'id': 'ok'} ] + "name": "test-get-group-success", + "email": "test@test.com", + "description": "this is a description", + "members": [ { "id": "ok"} ], + "admins": [ { "id": "ok"} ] } saved_group = client.create_group(new_group, status=200) - group = client.get_group(saved_group['id'], status=200) + group = client.get_group(saved_group["id"], status=200) - assert_that(group['name'], is_(saved_group['name'])) - assert_that(group['email'], is_(saved_group['email'])) - 
assert_that(group['description'], is_(saved_group['description'])) - assert_that(group['status'], is_(saved_group['status'])) - assert_that(group['created'], is_(saved_group['created'])) - assert_that(group['id'], is_(saved_group['id'])) + assert_that(group["name"], is_(saved_group["name"])) + assert_that(group["email"], is_(saved_group["email"])) + assert_that(group["description"], is_(saved_group["description"])) + assert_that(group["status"], is_(saved_group["status"])) + assert_that(group["created"], is_(saved_group["created"])) + assert_that(group["id"], is_(saved_group["id"])) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200,404)) + client.delete_group(saved_group["id"], status=(200, 404)) def test_get_group_not_found(shared_zone_test_context): @@ -40,7 +40,7 @@ def test_get_group_not_found(shared_zone_test_context): Tests that getting a group that does not exist returns a 404 """ client = shared_zone_test_context.ok_vinyldns_client - client.get_group('doesntexist', status=404) + client.get_group("doesntexist", status=404) def test_get_deleted_group(shared_zone_test_context): @@ -53,19 +53,19 @@ def test_get_deleted_group(shared_zone_test_context): try: new_group = { - 'name': 'test-get-deleted-group', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [ { 'id': 'ok'} ], - 'admins': [ { 'id': 'ok'} ] + "name": "test-get-deleted-group", + "email": "test@test.com", + "description": "this is a description", + "members": [ { "id": "ok"} ], + "admins": [ { "id": "ok"} ] } saved_group = client.create_group(new_group, status=200) - client.delete_group(saved_group['id'], status=200) - client.get_group(saved_group['id'], status=404) + client.delete_group(saved_group["id"], status=200) + client.get_group(saved_group["id"], status=404) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200,404)) + client.delete_group(saved_group["id"], status=(200, 404)) def 
test_get_group_unauthed(shared_zone_test_context): @@ -78,16 +78,16 @@ def test_get_group_unauthed(shared_zone_test_context): saved_group = None try: new_group = { - 'name': 'test-get-group-unauthed', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [ { 'id': 'ok'} ], - 'admins': [ { 'id': 'ok'} ] + "name": "test-get-group-unauthed", + "email": "test@test.com", + "description": "this is a description", + "members": [ { "id": "ok"} ], + "admins": [ { "id": "ok"} ] } saved_group = client.create_group(new_group, status=200) - dummy_client.get_group(saved_group['id'], status=403) - client.get_group(saved_group['id'], status=200) + dummy_client.get_group(saved_group["id"], status=403) + client.get_group(saved_group["id"], status=200) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200,404)) + client.delete_group(saved_group["id"], status=(200, 404)) diff --git a/modules/api/functional_test/live_tests/membership/list_group_admins_test.py b/modules/api/functional_test/live_tests/membership/list_group_admins_test.py index deeb598fb..6aedd9e29 100644 --- a/modules/api/functional_test/live_tests/membership/list_group_admins_test.py +++ b/modules/api/functional_test/live_tests/membership/list_group_admins_test.py @@ -16,35 +16,35 @@ def test_list_group_admins_success(shared_zone_test_context): saved_group = None try: new_group = { - 'name': 'test-list-group-admins-success', - 'email': 'test@test.com', - 'members': [ { 'id': 'ok'} ], - 'admins': [ { 'id': 'ok'}, { 'id': 'dummy'} ] + "name": "test-list-group-admins-success", + "email": "test@test.com", + "members": [ { "id": "ok"} ], + "admins": [ { "id": "ok"}, { "id": "dummy"} ] } saved_group = client.create_group(new_group, status=200) - admin_user_1_id = 'ok' - admin_user_2_id = 'dummy' + admin_user_1_id = "ok" + admin_user_2_id = "dummy" - result = client.get_group(saved_group['id'], status=200) + result = client.get_group(saved_group["id"], status=200) - 
assert_that(result['admins'], has_length(2)) - assert_that([admin_user_1_id, admin_user_2_id], has_item(result['admins'][0]['id'])) - assert_that([admin_user_1_id, admin_user_2_id], has_item(result['admins'][1]['id'])) + assert_that(result["admins"], has_length(2)) + assert_that([admin_user_1_id, admin_user_2_id], has_item(result["admins"][0]["id"])) + assert_that([admin_user_1_id, admin_user_2_id], has_item(result["admins"][1]["id"])) - result = client.list_group_admins(saved_group['id'], status=200) + result = client.list_group_admins(saved_group["id"], status=200) - result = sorted(result['admins'], key=lambda user: user['userName']) + result = sorted(result["admins"], key=lambda user: user["userName"]) assert_that(result, has_length(2)) - assert_that(result[0]['userName'], is_('dummy')) - assert_that(result[0]['id'], is_('dummy')) - assert_that(result[0]['created'], not_none()) - assert_that(result[1]['userName'], is_('ok')) - assert_that(result[1]['id'], is_('ok')) - assert_that(result[1]['created'], not_none()) + assert_that(result[0]["userName"], is_("dummy")) + assert_that(result[0]["id"], is_("dummy")) + assert_that(result[0]["created"], not_none()) + assert_that(result[1]["userName"], is_("ok")) + assert_that(result[1]["id"], is_("ok")) + assert_that(result[1]["created"], not_none()) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200,404)) + client.delete_group(saved_group["id"], status=(200, 404)) def test_list_group_admins_group_not_found(shared_zone_test_context): @@ -53,7 +53,7 @@ def test_list_group_admins_group_not_found(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client - client.list_group_admins('doesntexist', status=404) + client.list_group_admins("doesntexist", status=404) def test_list_group_admins_unauthed(shared_zone_test_context): @@ -66,15 +66,15 @@ def test_list_group_admins_unauthed(shared_zone_test_context): saved_group = None try: new_group = { - 'name': 
'test-list-group-admins-unauthed', - 'email': 'test@test.com', - 'members': [ { 'id': 'ok'} ], - 'admins': [ { 'id': 'ok'} ] + "name": "test-list-group-admins-unauthed", + "email": "test@test.com", + "members": [ { "id": "ok"} ], + "admins": [ { "id": "ok"} ] } saved_group = client.create_group(new_group, status=200) - dummy_client.list_group_admins(saved_group['id'], status=403) - client.list_group_admins(saved_group['id'], status=200) + dummy_client.list_group_admins(saved_group["id"], status=403) + client.list_group_admins(saved_group["id"], status=200) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200,404)) + client.delete_group(saved_group["id"], status=(200, 404)) diff --git a/modules/api/functional_test/live_tests/membership/list_group_members_test.py b/modules/api/functional_test/live_tests/membership/list_group_members_test.py index 59eceac9b..462315271 100644 --- a/modules/api/functional_test/live_tests/membership/list_group_members_test.py +++ b/modules/api/functional_test/live_tests/membership/list_group_members_test.py @@ -1,10 +1,5 @@ -import pytest -import json - from hamcrest import * -from vinyldns_python import VinylDNSClient - def test_list_group_members_success(shared_zone_test_context): """ @@ -16,48 +11,48 @@ def test_list_group_members_success(shared_zone_test_context): try: new_group = { - 'name': 'test-list-group-members-success', - 'email': 'test@test.com', - 'members': [ { 'id': 'ok'}, { 'id': 'dummy' } ], - 'admins': [ { 'id': 'ok'} ] + "name": "test-list-group-members-success", + "email": "test@test.com", + "members": [{"id": "ok"}, {"id": "dummy"}], + "admins": [{"id": "ok"}] } - members = sorted(['dummy', 'ok']) + members = ["dummy", "ok"] saved_group = client.create_group(new_group, status=200) - result = client.get_group(saved_group['id'], status=200) - assert_that(result['members'], has_length(len(members))) + result = client.get_group(saved_group["id"], status=200) + assert_that(result["members"], 
has_length(len(members))) - result_member_ids = map(lambda member: member['id'], result['members']) - for id in members: - assert_that(result_member_ids, has_item(id)) + result_member_ids = [member["id"] for member in result["members"]] + for identifier in members: + assert_that(result_member_ids, has_item(identifier)) - result = client.list_members_group(saved_group['id'], status=200) - result = sorted(result['members'], key=lambda user: user['id']) + result = client.list_members_group(saved_group["id"], status=200) + result = sorted(result["members"], key=lambda user: user["id"]) assert_that(result, has_length(len(members))) dummy = result[0] - assert_that(dummy['id'], is_('dummy')) - assert_that(dummy['userName'], is_('dummy')) - assert_that(dummy['isAdmin'], is_(False)) - assert_that(dummy['lockStatus'], is_("Unlocked")) - assert_that(dummy, is_not(has_key('firstName'))) - assert_that(dummy, is_not(has_key('lastName'))) - assert_that(dummy, is_not(has_key('email'))) - assert_that(dummy['created'], is_not(none())) + assert_that(dummy["id"], is_("dummy")) + assert_that(dummy["userName"], is_("dummy")) + assert_that(dummy["isAdmin"], is_(False)) + assert_that(dummy["lockStatus"], is_("Unlocked")) + assert_that(dummy, is_not(has_key("firstName"))) + assert_that(dummy, is_not(has_key("lastName"))) + assert_that(dummy, is_not(has_key("email"))) + assert_that(dummy["created"], is_not(none())) ok = result[1] - assert_that(ok['id'], is_('ok')) - assert_that(ok['userName'], is_('ok')) - assert_that(ok['isAdmin'], is_(True)) - assert_that(ok['firstName'], is_('ok')) - assert_that(ok['lastName'], is_('ok')) - assert_that(ok['email'], is_('test@test.com')) - assert_that(ok['created'], is_not(none())) - assert_that(ok['lockStatus'], is_("Unlocked")) + assert_that(ok["id"], is_("ok")) + assert_that(ok["userName"], is_("ok")) + assert_that(ok["isAdmin"], is_(True)) + assert_that(ok["firstName"], is_("ok")) + assert_that(ok["lastName"], is_("ok")) + assert_that(ok["email"], 
is_("test@test.com")) + assert_that(ok["created"], is_not(none())) + assert_that(ok["lockStatus"], is_("Unlocked")) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200,404)) + client.delete_group(saved_group["id"], status=(200, 404)) def test_list_group_members_not_found(shared_zone_test_context): @@ -67,7 +62,7 @@ def test_list_group_members_not_found(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client - client.list_members_group('not_found', status=404) + client.list_members_group("not_found", status=404) def test_list_group_members_start_from(shared_zone_test_context): @@ -80,50 +75,49 @@ def test_list_group_members_start_from(shared_zone_test_context): try: members = [] for runner in range(0, 200): - id = "dummy{0:0>3}".format(runner) - members.append({ 'id': id }) - members = sorted(members) + member_id = "dummy{0:0>3}".format(runner) + members.append({"id": member_id}) new_group = { - 'name': 'test-list-group-members-start-from', - 'email': 'test@test.com', - 'members': members, - 'admins': [ { 'id': 'ok'} ] + "name": "test-list-group-members-start-from", + "email": "test@test.com", + "members": members, + "admins": [{"id": "ok"}] } saved_group = client.create_group(new_group, status=200) - result = client.get_group(saved_group['id'], status=200) + result = client.get_group(saved_group["id"], status=200) # members has one more because admins are added as members - assert_that(result['members'], has_length(len(members) + 1)) - assert_that(result['members'], has_item({'id': 'ok'})) - result_member_ids = map(lambda member: member['id'], result['members']) + assert_that(result["members"], has_length(len(members) + 1)) + assert_that(result["members"], has_item({"id": "ok"})) + result_member_ids = [member["id"] for member in result["members"]] for user in members: - assert_that(result_member_ids, has_item(user['id'])) + assert_that(result_member_ids, has_item(user["id"])) - result = 
client.list_members_group(saved_group['id'], start_from='dummy050', status=200) + result = client.list_members_group(saved_group["id"], start_from="dummy050", status=200) - group_members = sorted(result['members'], key=lambda user: user['id']) + group_members = sorted(result["members"], key=lambda user: user["id"]) - assert_that(result['startFrom'], is_('dummy050')) - assert_that(result['nextId'], is_('dummy150')) + assert_that(result["startFrom"], is_("dummy050")) + assert_that(result["nextId"], is_("dummy150")) assert_that(group_members, has_length(100)) - for i in range(0, len(group_members)-1): + for i in range(0, len(group_members) - 1): dummy = group_members[i] - id = "dummy{0:0>3}".format(i+51) #starts from dummy051 - user_name = "name-"+id - assert_that(dummy['id'], is_(id)) - assert_that(dummy['userName'], is_(user_name)) - assert_that(dummy['isAdmin'], is_(False)) - assert_that(dummy, is_not(has_key('firstName'))) - assert_that(dummy, is_not(has_key('lastName'))) - assert_that(dummy, is_not(has_key('email'))) - assert_that(dummy['created'], is_not(none())) - assert_that(dummy['lockStatus'], is_("Unlocked")) + member_id = "dummy{0:0>3}".format(i + 51) # starts from dummy051 + user_name = "name-" + member_id + assert_that(dummy["id"], is_(member_id)) + assert_that(dummy["userName"], is_(user_name)) + assert_that(dummy["isAdmin"], is_(False)) + assert_that(dummy, is_not(has_key("firstName"))) + assert_that(dummy, is_not(has_key("lastName"))) + assert_that(dummy, is_not(has_key("email"))) + assert_that(dummy["created"], is_not(none())) + assert_that(dummy["lockStatus"], is_("Unlocked")) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200,404)) + client.delete_group(saved_group["id"], status=(200, 404)) def test_list_group_members_start_from_non_user(shared_zone_test_context): @@ -136,50 +130,49 @@ def test_list_group_members_start_from_non_user(shared_zone_test_context): try: members = [] for runner in range(0, 200): - id = 
"dummy{0:0>3}".format(runner) - members.append({ 'id': id }) - members = sorted(members) + member_id = "dummy{0:0>3}".format(runner) + members.append({"id": member_id}) new_group = { - 'name': 'test-list-group-members-start-from-nonexistent', - 'email': 'test@test.com', - 'members': members, - 'admins': [ { 'id': 'ok'} ] + "name": "test-list-group-members-start-from-nonexistent", + "email": "test@test.com", + "members": members, + "admins": [{"id": "ok"}] } saved_group = client.create_group(new_group, status=200) - result = client.get_group(saved_group['id'], status=200) + result = client.get_group(saved_group["id"], status=200) # members has one more because admins are added as members - assert_that(result['members'], has_length(len(members) + 1)) - result_member_ids = map(lambda member: member['id'], result['members']) - assert_that(result_member_ids, has_item('ok')) + assert_that(result["members"], has_length(len(members) + 1)) + result_member_ids = [member["id"] for member in result["members"]] + assert_that(result_member_ids, has_item("ok")) for user in members: - assert_that(result_member_ids, has_item(user['id'])) + assert_that(result_member_ids, has_item(user["id"])) - result = client.list_members_group(saved_group['id'], start_from='abc', status=200) + result = client.list_members_group(saved_group["id"], start_from="abc", status=200) - group_members = sorted(result['members'], key=lambda user: user['id']) + group_members = sorted(result["members"], key=lambda user: user["id"]) - assert_that(result['startFrom'], is_('abc')) - assert_that(result['nextId'], is_('dummy099')) + assert_that(result["startFrom"], is_("abc")) + assert_that(result["nextId"], is_("dummy099")) assert_that(group_members, has_length(100)) - for i in range(0, len(group_members)-1): + for i in range(0, len(group_members) - 1): dummy = group_members[i] - id = "dummy{0:0>3}".format(i) - user_name = "name-"+id - assert_that(dummy['id'], is_(id)) - assert_that(dummy['userName'], 
is_(user_name)) - assert_that(dummy['isAdmin'], is_(False)) - assert_that(dummy, is_not(has_key('firstName'))) - assert_that(dummy, is_not(has_key('lastName'))) - assert_that(dummy, is_not(has_key('email'))) - assert_that(dummy['created'], is_not(none())) - assert_that(dummy['lockStatus'], is_("Unlocked")) + member_id = "dummy{0:0>3}".format(i) + user_name = "name-" + member_id + assert_that(dummy["id"], is_(member_id)) + assert_that(dummy["userName"], is_(user_name)) + assert_that(dummy["isAdmin"], is_(False)) + assert_that(dummy, is_not(has_key("firstName"))) + assert_that(dummy, is_not(has_key("lastName"))) + assert_that(dummy, is_not(has_key("email"))) + assert_that(dummy["created"], is_not(none())) + assert_that(dummy["lockStatus"], is_("Unlocked")) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200,404)) + client.delete_group(saved_group["id"], status=(200, 404)) def test_list_group_members_max_item(shared_zone_test_context): @@ -192,50 +185,48 @@ def test_list_group_members_max_item(shared_zone_test_context): try: members = [] for runner in range(0, 200): - id = "dummy{0:0>3}".format(runner) - members.append({ 'id': id }) - members = sorted(members) + members.append({"id": "dummy{0:0>3}".format(runner)}) new_group = { - 'name': 'test-list-group-members-max-items', - 'email': 'test@test.com', - 'members': members, - 'admins': [ { 'id': 'ok'} ] + "name": "test-list-group-members-max-items", + "email": "test@test.com", + "members": members, + "admins": [{"id": "ok"}] } saved_group = client.create_group(new_group, status=200) - result = client.get_group(saved_group['id'], status=200) + result = client.get_group(saved_group["id"], status=200) # members has one more because admins are added as members - assert_that(result['members'], has_length(len(members) + 1)) - result_member_ids = map(lambda member: member['id'], result['members']) - assert_that(result_member_ids, has_item('ok')) + assert_that(result["members"], 
has_length(len(members) + 1)) + result_member_ids = [member["id"] for member in result["members"]] + assert_that(result_member_ids, has_item("ok")) for user in members: - assert_that(result_member_ids, has_item(user['id'])) + assert_that(result_member_ids, has_item(user["id"])) - result = client.list_members_group(saved_group['id'], max_items=10, status=200) + result = client.list_members_group(saved_group["id"], max_items=10, status=200) - group_members = sorted(result['members'], key=lambda user: user['id']) + group_members = sorted(result["members"], key=lambda user: user["id"]) - assert_that(result['nextId'], is_('dummy009')) - assert_that(result['maxItems'], is_(10)) + assert_that(result["nextId"], is_("dummy009")) + assert_that(result["maxItems"], is_(10)) assert_that(group_members, has_length(10)) - for i in range(0, len(group_members)-1): + for i in range(0, len(group_members) - 1): dummy = group_members[i] - id = "dummy{0:0>3}".format(i) - user_name = "name-"+id - assert_that(dummy['id'], is_(id)) - assert_that(dummy['userName'], is_(user_name)) - assert_that(dummy['isAdmin'], is_(False)) - assert_that(dummy, is_not(has_key('firstName'))) - assert_that(dummy, is_not(has_key('lastName'))) - assert_that(dummy, is_not(has_key('email'))) - assert_that(dummy['created'], is_not(none())) - assert_that(dummy['lockStatus'], is_("Unlocked")) + member_id = "dummy{0:0>3}".format(i) + user_name = "name-" + member_id + assert_that(dummy["id"], is_(member_id)) + assert_that(dummy["userName"], is_(user_name)) + assert_that(dummy["isAdmin"], is_(False)) + assert_that(dummy, is_not(has_key("firstName"))) + assert_that(dummy, is_not(has_key("lastName"))) + assert_that(dummy, is_not(has_key("email"))) + assert_that(dummy["created"], is_not(none())) + assert_that(dummy["lockStatus"], is_("Unlocked")) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200,404)) + client.delete_group(saved_group["id"], status=(200, 404)) def 
test_list_group_members_max_item_default(shared_zone_test_context): @@ -248,49 +239,47 @@ def test_list_group_members_max_item_default(shared_zone_test_context): try: members = [] for runner in range(0, 200): - id = "dummy{0:0>3}".format(runner) - members.append({ 'id': id }) - members = sorted(members) + members.append({"id": "dummy{0:0>3}".format(runner)}) new_group = { - 'name': 'test-list-group-members-max-items-default', - 'email': 'test@test.com', - 'members': members, - 'admins': [ { 'id': 'ok'} ] + "name": "test-list-group-members-max-items-default", + "email": "test@test.com", + "members": members, + "admins": [{"id": "ok"}] } saved_group = client.create_group(new_group, status=200) - result = client.get_group(saved_group['id'], status=200) + result = client.get_group(saved_group["id"], status=200) # members has one more because admins are added as members - assert_that(result['members'], has_length(len(members) + 1)) - result_member_ids = map(lambda member: member['id'], result['members']) - assert_that(result_member_ids, has_item('ok')) + assert_that(result["members"], has_length(len(members) + 1)) + result_member_ids = [member["id"] for member in result["members"]] + assert_that(result_member_ids, has_item("ok")) for user in members: - assert_that(result_member_ids, has_item(user['id'])) + assert_that(result_member_ids, has_item(user["id"])) - result = client.list_members_group(saved_group['id'], status=200) + result = client.list_members_group(saved_group["id"], status=200) - group_members = sorted(result['members'], key=lambda user: user['id']) + group_members = sorted(result["members"], key=lambda user: user["id"]) - assert_that(result['nextId'], is_('dummy099')) + assert_that(result["nextId"], is_("dummy099")) assert_that(group_members, has_length(100)) - for i in range(0, len(group_members)-1): + for i in range(0, len(group_members) - 1): dummy = group_members[i] id = "dummy{0:0>3}".format(i) - user_name = "name-"+id - assert_that(dummy['id'], 
is_(id)) - assert_that(dummy['userName'], is_(user_name)) - assert_that(dummy['isAdmin'], is_(False)) - assert_that(dummy, is_not(has_key('firstName'))) - assert_that(dummy, is_not(has_key('lastName'))) - assert_that(dummy, is_not(has_key('email'))) - assert_that(dummy['created'], is_not(none())) - assert_that(dummy['lockStatus'], is_("Unlocked")) + user_name = "name-" + id + assert_that(dummy["id"], is_(id)) + assert_that(dummy["userName"], is_(user_name)) + assert_that(dummy["isAdmin"], is_(False)) + assert_that(dummy, is_not(has_key("firstName"))) + assert_that(dummy, is_not(has_key("lastName"))) + assert_that(dummy, is_not(has_key("email"))) + assert_that(dummy["created"], is_not(none())) + assert_that(dummy["lockStatus"], is_("Unlocked")) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200,404)) + client.delete_group(saved_group["id"], status=(200, 404)) def test_list_group_members_max_item_zero(shared_zone_test_context): @@ -303,31 +292,29 @@ def test_list_group_members_max_item_zero(shared_zone_test_context): try: members = [] for runner in range(0, 200): - id = "dummy{0:0>3}".format(runner) - members.append({ 'id': id }) - members = sorted(members) + members.append({"id": "dummy{0:0>3}".format(runner)}) new_group = { - 'name': 'test-list-group-members-max-items-zero', - 'email': 'test@test.com', - 'members': members, - 'admins': [ { 'id': 'ok'} ] + "name": "test-list-group-members-max-items-zero", + "email": "test@test.com", + "members": members, + "admins": [{"id": "ok"}] } saved_group = client.create_group(new_group, status=200) - result = client.get_group(saved_group['id'], status=200) + result = client.get_group(saved_group["id"], status=200) # members has one more because admins are added as members - assert_that(result['members'], has_length(len(members) + 1)) - result_member_ids = map(lambda member: member['id'], result['members']) - assert_that(result_member_ids, has_item('ok')) + assert_that(result["members"], 
has_length(len(members) + 1)) + result_member_ids = [member["id"] for member in result["members"]] + assert_that(result_member_ids, has_item("ok")) for user in members: - assert_that(result_member_ids, has_item(user['id'])) + assert_that(result_member_ids, has_item(user["id"])) - client.list_members_group(saved_group['id'], max_items=0, status=400) + client.list_members_group(saved_group["id"], max_items=0, status=400) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200,404)) + client.delete_group(saved_group["id"], status=(200, 404)) def test_list_group_members_max_item_over_1000(shared_zone_test_context): @@ -340,31 +327,29 @@ def test_list_group_members_max_item_over_1000(shared_zone_test_context): try: members = [] for runner in range(0, 200): - id = "dummy{0:0>3}".format(runner) - members.append({ 'id': id }) - members = sorted(members) + members.append({"id": "dummy{0:0>3}".format(runner)}) new_group = { - 'name': 'test-list-group-members-max-items-over-limit', - 'email': 'test@test.com', - 'members': members, - 'admins': [ { 'id': 'ok'} ] + "name": "test-list-group-members-max-items-over-limit", + "email": "test@test.com", + "members": members, + "admins": [{"id": "ok"}] } saved_group = client.create_group(new_group, status=200) - result = client.get_group(saved_group['id'], status=200) + result = client.get_group(saved_group["id"], status=200) # members has one more because admins are added as members - assert_that(result['members'], has_length(len(members) + 1)) - result_member_ids = map(lambda member: member['id'], result['members']) - assert_that(result_member_ids, has_item('ok')) + assert_that(result["members"], has_length(len(members) + 1)) + result_member_ids = [member["id"] for member in result["members"]] + assert_that(result_member_ids, has_item("ok")) for user in members: - assert_that(result_member_ids, has_item(user['id'])) + assert_that(result_member_ids, has_item(user["id"])) - 
client.list_members_group(saved_group['id'], max_items=1001, status=400) + client.list_members_group(saved_group["id"], max_items=1001, status=400) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200,404)) + client.delete_group(saved_group["id"], status=(200, 404)) def test_list_group_members_next_id_correct(shared_zone_test_context): @@ -377,49 +362,47 @@ def test_list_group_members_next_id_correct(shared_zone_test_context): try: members = [] for runner in range(0, 200): - id = "dummy{0:0>3}".format(runner) - members.append({ 'id': id }) - members = sorted(members) + members.append({"id": "dummy{0:0>3}".format(runner)}) new_group = { - 'name': 'test-list-group-members-next-id', - 'email': 'test@test.com', - 'members': members, - 'admins': [ { 'id': 'ok'} ] + "name": "test-list-group-members-next-id", + "email": "test@test.com", + "members": members, + "admins": [{"id": "ok"}] } saved_group = client.create_group(new_group, status=200) - result = client.get_group(saved_group['id'], status=200) + result = client.get_group(saved_group["id"], status=200) # members has one more because admins are added as members - assert_that(result['members'], has_length(len(members) + 1)) - result_member_ids = map(lambda member: member['id'], result['members']) - assert_that(result_member_ids, has_item('ok')) + assert_that(result["members"], has_length(len(members) + 1)) + result_member_ids = [member["id"] for member in result["members"]] + assert_that(result_member_ids, has_item("ok")) for user in members: - assert_that(result_member_ids, has_item(user['id'])) + assert_that(result_member_ids, has_item(user["id"])) - result = client.list_members_group(saved_group['id'], status=200) + result = client.list_members_group(saved_group["id"], status=200) - group_members = sorted(result['members'], key=lambda user: user['id']) + group_members = sorted(result["members"], key=lambda user: user["id"]) - assert_that(result['nextId'], is_('dummy099')) + 
assert_that(result["nextId"], is_("dummy099")) assert_that(group_members, has_length(100)) - for i in range(0, len(group_members)-1): + for i in range(0, len(group_members) - 1): dummy = group_members[i] - id = "dummy{0:0>3}".format(i) - user_name = "name-"+id - assert_that(dummy['id'], is_(id)) - assert_that(dummy['userName'], is_(user_name)) - assert_that(dummy['isAdmin'], is_(False)) - assert_that(dummy, is_not(has_key('firstName'))) - assert_that(dummy, is_not(has_key('lastName'))) - assert_that(dummy, is_not(has_key('email'))) - assert_that(dummy['created'], is_not(none())) - assert_that(dummy['lockStatus'], is_("Unlocked")) + member_id = "dummy{0:0>3}".format(i) + user_name = "name-" + member_id + assert_that(dummy["id"], is_(member_id)) + assert_that(dummy["userName"], is_(user_name)) + assert_that(dummy["isAdmin"], is_(False)) + assert_that(dummy, is_not(has_key("firstName"))) + assert_that(dummy, is_not(has_key("lastName"))) + assert_that(dummy, is_not(has_key("email"))) + assert_that(dummy["created"], is_not(none())) + assert_that(dummy["lockStatus"], is_("Unlocked")) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200,404)) + client.delete_group(saved_group["id"], status=(200, 404)) def test_list_group_members_next_id_exhausted(shared_zone_test_context): @@ -432,48 +415,46 @@ def test_list_group_members_next_id_exhausted(shared_zone_test_context): try: members = [] for runner in range(0, 5): - id = "dummy{0:0>3}".format(runner) - members.append({ 'id': id }) - members = sorted(members) + members.append({"id": "dummy{0:0>3}".format(runner)}) new_group = { - 'name': 'test-list-group-members-next-id-exhausted', - 'email': 'test@test.com', - 'members': members, - 'admins': [ { 'id': 'ok'} ] + "name": "test-list-group-members-next-id-exhausted", + "email": "test@test.com", + "members": members, + "admins": [{"id": "ok"}] } saved_group = client.create_group(new_group, status=200) - result = client.get_group(saved_group['id'], 
status=200) + result = client.get_group(saved_group["id"], status=200) # members has one more because admins are added as members - assert_that(result['members'], has_length(len(members) + 1)) - result_member_ids = map(lambda member: member['id'], result['members']) - assert_that(result_member_ids, has_item('ok')) + assert_that(result["members"], has_length(len(members) + 1)) + result_member_ids = [member["id"] for member in result["members"]] + assert_that(result_member_ids, has_item("ok")) for user in members: - assert_that(result_member_ids, has_item(user['id'])) + assert_that(result_member_ids, has_item(user["id"])) - result = client.list_members_group(saved_group['id'], status=200) + result = client.list_members_group(saved_group["id"], status=200) - group_members = sorted(result['members'], key=lambda user: user['id']) + group_members = sorted(result["members"], key=lambda user: user["id"]) - assert_that(result, is_not(has_key('nextId'))) + assert_that(result, is_not(has_key("nextId"))) - assert_that(group_members, has_length(6)) # add one more for the admin - for i in range(0, len(group_members)-1): + assert_that(group_members, has_length(6)) # add one more for the admin + for i in range(0, len(group_members) - 1): dummy = group_members[i] - id = "dummy{0:0>3}".format(i) - user_name = "name-"+id - assert_that(dummy['id'], is_(id)) - assert_that(dummy['userName'], is_(user_name)) - assert_that(dummy, is_not(has_key('firstName'))) - assert_that(dummy, is_not(has_key('lastName'))) - assert_that(dummy, is_not(has_key('email'))) - assert_that(dummy['created'], is_not(none())) - assert_that(dummy['lockStatus'], is_("Unlocked")) + member_id = "dummy{0:0>3}".format(i) + user_name = "name-" + member_id + assert_that(dummy["id"], is_(member_id)) + assert_that(dummy["userName"], is_(user_name)) + assert_that(dummy, is_not(has_key("firstName"))) + assert_that(dummy, is_not(has_key("lastName"))) + assert_that(dummy, is_not(has_key("email"))) + 
assert_that(dummy["created"], is_not(none())) + assert_that(dummy["lockStatus"], is_("Unlocked")) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200,404)) + client.delete_group(saved_group["id"], status=(200, 404)) def test_list_group_members_next_id_exhausted_two_pages(shared_zone_test_context): @@ -486,71 +467,70 @@ def test_list_group_members_next_id_exhausted_two_pages(shared_zone_test_context try: members = [] for runner in range(0, 19): - id = "dummy{0:0>3}".format(runner) - members.append({ 'id': id }) - members = sorted(members) + member_id = "dummy{0:0>3}".format(runner) + members.append({"id": member_id}) new_group = { - 'name': 'test-list-group-members-next-id-exhausted-two-pages', - 'email': 'test@test.com', - 'members': members, - 'admins': [ { 'id': 'ok'} ] + "name": "test-list-group-members-next-id-exhausted-two-pages", + "email": "test@test.com", + "members": members, + "admins": [{"id": "ok"}] } saved_group = client.create_group(new_group, status=200) - result = client.get_group(saved_group['id'], status=200) + result = client.get_group(saved_group["id"], status=200) # members has one more because admins are added as members - assert_that(result['members'], has_length(len(members) + 1)) - result_member_ids = map(lambda member: member['id'], result['members']) - assert_that(result_member_ids, has_item('ok')) + assert_that(result["members"], has_length(len(members) + 1)) + result_member_ids = [member["id"] for member in result["members"]] + assert_that(result_member_ids, has_item("ok")) for user in members: - assert_that(result_member_ids, has_item(user['id'])) + assert_that(result_member_ids, has_item(user["id"])) - first_page = client.list_members_group(saved_group['id'], max_items=10, status=200) + first_page = client.list_members_group(saved_group["id"], max_items=10, status=200) - group_members = sorted(first_page['members'], key=lambda user: user['id']) + group_members = sorted(first_page["members"], key=lambda user: 
user["id"]) - assert_that(first_page['nextId'], is_('dummy009')) - assert_that(first_page['maxItems'], is_(10)) + assert_that(first_page["nextId"], is_("dummy009")) + assert_that(first_page["maxItems"], is_(10)) assert_that(group_members, has_length(10)) - for i in range(0, len(group_members)-1): + for i in range(0, len(group_members) - 1): dummy = group_members[i] - id = "dummy{0:0>3}".format(i) - user_name = "name-"+id - assert_that(dummy['id'], is_(id)) - assert_that(dummy['userName'], is_(user_name)) - assert_that(dummy, is_not(has_key('firstName'))) - assert_that(dummy, is_not(has_key('lastName'))) - assert_that(dummy, is_not(has_key('email'))) - assert_that(dummy['created'], is_not(none())) - assert_that(dummy['lockStatus'], is_("Unlocked")) + member_id = "dummy{0:0>3}".format(i) + user_name = "name-" + member_id + assert_that(dummy["id"], is_(member_id)) + assert_that(dummy["userName"], is_(user_name)) + assert_that(dummy, is_not(has_key("firstName"))) + assert_that(dummy, is_not(has_key("lastName"))) + assert_that(dummy, is_not(has_key("email"))) + assert_that(dummy["created"], is_not(none())) + assert_that(dummy["lockStatus"], is_("Unlocked")) - second_page = client.list_members_group(saved_group['id'], - start_from=first_page['nextId'], + second_page = client.list_members_group(saved_group["id"], + start_from=first_page["nextId"], max_items=10, status=200) - group_members = sorted(second_page['members'], key=lambda user: user['id']) + group_members = sorted(second_page["members"], key=lambda user: user["id"]) - assert_that(second_page, is_not(has_key('nextId'))) - assert_that(second_page['maxItems'], is_(10)) + assert_that(second_page, is_not(has_key("nextId"))) + assert_that(second_page["maxItems"], is_(10)) assert_that(group_members, has_length(10)) - for i in range(0, len(group_members)-1): + for i in range(0, len(group_members) - 1): dummy = group_members[i] - id = "dummy{0:0>3}".format(i+10) - user_name = "name-"+id - assert_that(dummy['id'], 
is_(id)) - assert_that(dummy['userName'], is_(user_name)) - assert_that(dummy, is_not(has_key('firstName'))) - assert_that(dummy, is_not(has_key('lastName'))) - assert_that(dummy, is_not(has_key('email'))) - assert_that(dummy['created'], is_not(none())) + member_id = "dummy{0:0>3}".format(i + 10) + user_name = "name-" + member_id + assert_that(dummy["id"], is_(member_id)) + assert_that(dummy["userName"], is_(user_name)) + assert_that(dummy, is_not(has_key("firstName"))) + assert_that(dummy, is_not(has_key("lastName"))) + assert_that(dummy, is_not(has_key("email"))) + assert_that(dummy["created"], is_not(none())) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200,404)) + client.delete_group(saved_group["id"], status=(200, 404)) def test_list_group_members_unauthed(shared_zone_test_context): @@ -563,15 +543,15 @@ def test_list_group_members_unauthed(shared_zone_test_context): saved_group = None try: new_group = { - 'name': 'test-list-group-members-unauthed', - 'email': 'test@test.com', - 'members': [ { 'id': 'ok'} ], - 'admins': [ { 'id': 'ok'} ] + "name": "test-list-group-members-unauthed", + "email": "test@test.com", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } saved_group = client.create_group(new_group, status=200) - dummy_client.list_members_group(saved_group['id'], status=403) - client.list_members_group(saved_group['id'], status=200) + dummy_client.list_members_group(saved_group["id"], status=403) + client.list_members_group(saved_group["id"], status=200) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200,404)) + client.delete_group(saved_group["id"], status=(200, 404)) diff --git a/modules/api/functional_test/live_tests/membership/list_my_groups_test.py b/modules/api/functional_test/live_tests/membership/list_my_groups_test.py index 9a2cfb354..219455d92 100644 --- a/modules/api/functional_test/live_tests/membership/list_my_groups_test.py +++ 
b/modules/api/functional_test/live_tests/membership/list_my_groups_test.py @@ -17,16 +17,16 @@ def test_list_my_groups_no_parameters(list_my_groups_context): assert_that(results, has_length(3)) # 3 fields - assert_that(results['groups'], has_length(50)) - assert_that(results, is_not(has_key('groupNameFilter'))) - assert_that(results, is_not(has_key('startFrom'))) - assert_that(results, is_not(has_key('nextId'))) - assert_that(results['maxItems'], is_(100)) + assert_that(results["groups"], has_length(50)) + assert_that(results, is_not(has_key("groupNameFilter"))) + assert_that(results, is_not(has_key("startFrom"))) + assert_that(results, is_not(has_key("nextId"))) + assert_that(results["maxItems"], is_(100)) - results['groups'] = sorted(results['groups'], key=lambda x: x['name']) + results["groups"] = sorted(results["groups"], key=lambda x: x["name"]) for i in range(0, 50): - assert_that(results['groups'][i]['name'], is_("test-list-my-groups-{0:0>3}".format(i))) + assert_that(results["groups"][i]["name"], is_("test-list-my-groups-{0:0>3}".format(i))) def test_get_my_groups_using_old_account_auth(list_my_groups_context): @@ -35,10 +35,10 @@ def test_get_my_groups_using_old_account_auth(list_my_groups_context): """ results = list_my_groups_context.client.list_my_groups(status=200) assert_that(results, has_length(3)) - assert_that(results, is_not(has_key('groupNameFilter'))) - assert_that(results, is_not(has_key('startFrom'))) - assert_that(results, is_not(has_key('nextId'))) - assert_that(results['maxItems'], is_(100)) + assert_that(results, is_not(has_key("groupNameFilter"))) + assert_that(results, is_not(has_key("startFrom"))) + assert_that(results, is_not(has_key("nextId"))) + assert_that(results["maxItems"], is_(100)) def test_list_my_groups_max_items(list_my_groups_context): @@ -49,11 +49,11 @@ def test_list_my_groups_max_items(list_my_groups_context): assert_that(results, has_length(4)) # 4 fields - assert_that(results, has_key('groups')) - assert_that(results, 
is_not(has_key('groupNameFilter'))) - assert_that(results, is_not(has_key('startFrom'))) - assert_that(results, has_key('nextId')) - assert_that(results['maxItems'], is_(5)) + assert_that(results, has_key("groups")) + assert_that(results, is_not(has_key("groupNameFilter"))) + assert_that(results, is_not(has_key("startFrom"))) + assert_that(results, has_key("nextId")) + assert_that(results["maxItems"], is_(5)) def test_list_my_groups_paging(list_my_groups_context): @@ -63,31 +63,31 @@ def test_list_my_groups_paging(list_my_groups_context): results = list_my_groups_context.client.list_my_groups(max_items=20, status=200) assert_that(results, has_length(4)) # 4 fields - assert_that(results, has_key('groups')) - assert_that(results, is_not(has_key('groupNameFilter'))) - assert_that(results, is_not(has_key('startFrom'))) - assert_that(results, has_key('nextId')) - assert_that(results['maxItems'], is_(20)) + assert_that(results, has_key("groups")) + assert_that(results, is_not(has_key("groupNameFilter"))) + assert_that(results, is_not(has_key("startFrom"))) + assert_that(results, has_key("nextId")) + assert_that(results["maxItems"], is_(20)) - while 'nextId' in results: + while "nextId" in results: prev = results - results = list_my_groups_context.client.list_my_groups(max_items=20, start_from=results['nextId'], status=200) + results = list_my_groups_context.client.list_my_groups(max_items=20, start_from=results["nextId"], status=200) - if 'nextId' in results: + if "nextId" in results: assert_that(results, has_length(5)) # 5 fields - assert_that(results, has_key('groups')) - assert_that(results, is_not(has_key('groupNameFilter'))) - assert_that(results['startFrom'], is_(prev['nextId'])) - assert_that(results, has_key('nextId')) - assert_that(results['maxItems'], is_(20)) + assert_that(results, has_key("groups")) + assert_that(results, is_not(has_key("groupNameFilter"))) + assert_that(results["startFrom"], is_(prev["nextId"])) + assert_that(results, has_key("nextId")) + 
assert_that(results["maxItems"], is_(20)) else: assert_that(results, has_length(4)) # 4 fields - assert_that(results, has_key('groups')) - assert_that(results, is_not(has_key('groupNameFilter'))) - assert_that(results['startFrom'], is_(prev['nextId'])) - assert_that(results, is_not(has_key('nextId'))) - assert_that(results['maxItems'], is_(20)) + assert_that(results, has_key("groups")) + assert_that(results, is_not(has_key("groupNameFilter"))) + assert_that(results["startFrom"], is_(prev["nextId"])) + assert_that(results, is_not(has_key("nextId"))) + assert_that(results["maxItems"], is_(20)) def test_list_my_groups_filter_matches(list_my_groups_context): @@ -98,16 +98,16 @@ def test_list_my_groups_filter_matches(list_my_groups_context): assert_that(results, has_length(4)) # 4 fields - assert_that(results['groups'], has_length(10)) - assert_that(results['groupNameFilter'], is_('test-list-my-groups-01')) - assert_that(results, is_not(has_key('startFrom'))) - assert_that(results, is_not(has_key('nextId'))) - assert_that(results['maxItems'], is_(100)) + assert_that(results["groups"], has_length(10)) + assert_that(results["groupNameFilter"], is_("test-list-my-groups-01")) + assert_that(results, is_not(has_key("startFrom"))) + assert_that(results, is_not(has_key("nextId"))) + assert_that(results["maxItems"], is_(100)) - results['groups'] = sorted(results['groups'], key=lambda x: x['name']) + results["groups"] = sorted(results["groups"], key=lambda x: x["name"]) for i in range(0, 10): - assert_that(results['groups'][i]['name'], is_("test-list-my-groups-{0:0>3}".format(i + 10))) + assert_that(results["groups"][i]["name"], is_("test-list-my-groups-{0:0>3}".format(i + 10))) def test_list_my_groups_no_deleted(list_my_groups_context): @@ -116,17 +116,17 @@ def test_list_my_groups_no_deleted(list_my_groups_context): """ results = list_my_groups_context.client.list_my_groups(max_items=100, status=200) - assert_that(results, has_key('groups')) - for g in results['groups']: - 
assert_that(g['status'], is_not('Deleted')) + assert_that(results, has_key("groups")) + for g in results["groups"]: + assert_that(g["status"], is_not("Deleted")) - while 'nextId' in results: + while "nextId" in results: results = client.list_my_groups(max_items=20, group_name_filter="test-list-my-groups-", - start_from=results['nextId'], status=200) + start_from=results["nextId"], status=200) - assert_that(results, has_key('groups')) - for g in results['groups']: - assert_that(g['status'], is_not('Deleted')) + assert_that(results, has_key("groups")) + for g in results["groups"]: + assert_that(g["status"], is_not("Deleted")) def test_list_my_groups_with_ignore_access_true(list_my_groups_context): @@ -136,15 +136,15 @@ def test_list_my_groups_with_ignore_access_true(list_my_groups_context): results = list_my_groups_context.client.list_my_groups(ignore_access=True, status=200) - assert_that(len(results['groups']), greater_than(50)) - assert_that(results['maxItems'], is_(100)) - assert_that(results['ignoreAccess'], is_(True)) + assert_that(len(results["groups"]), greater_than(50)) + assert_that(results["maxItems"], is_(100)) + assert_that(results["ignoreAccess"], is_(True)) my_results = list_my_groups_context.client.list_my_groups(status=200) - my_results['groups'] = sorted(my_results['groups'], key=lambda x: x['name']) + my_results["groups"] = sorted(my_results["groups"], key=lambda x: x["name"]) for i in range(0, 50): - assert_that(my_results['groups'][i]['name'], is_("test-list-my-groups-{0:0>3}".format(i))) + assert_that(my_results["groups"][i]["name"], is_("test-list-my-groups-{0:0>3}".format(i))) def test_list_my_groups_as_support_user(list_my_groups_context): @@ -154,9 +154,9 @@ def test_list_my_groups_as_support_user(list_my_groups_context): results = list_my_groups_context.support_user_client.list_my_groups(status=200) - assert_that(len(results['groups']), greater_than(50)) - assert_that(results['maxItems'], is_(100)) - assert_that(results['ignoreAccess'], 
is_(False)) + assert_that(len(results["groups"]), greater_than(50)) + assert_that(results["maxItems"], is_(100)) + assert_that(results["ignoreAccess"], is_(False)) def test_list_my_groups_as_support_user_with_ignore_access_true(list_my_groups_context): @@ -166,6 +166,6 @@ def test_list_my_groups_as_support_user_with_ignore_access_true(list_my_groups_c results = list_my_groups_context.support_user_client.list_my_groups(ignore_access=True, status=200) - assert_that(len(results['groups']), greater_than(50)) - assert_that(results['maxItems'], is_(100)) - assert_that(results['ignoreAccess'], is_(True)) + assert_that(len(results["groups"]), greater_than(50)) + assert_that(results["maxItems"], is_(100)) + assert_that(results["ignoreAccess"], is_(True)) diff --git a/modules/api/functional_test/live_tests/membership/update_group_test.py b/modules/api/functional_test/live_tests/membership/update_group_test.py index 4f6aa649c..795044565 100644 --- a/modules/api/functional_test/live_tests/membership/update_group_test.py +++ b/modules/api/functional_test/live_tests/membership/update_group_test.py @@ -13,46 +13,46 @@ def test_update_group_success(shared_zone_test_context): try: new_group = { - 'name': 'test-update-group-success', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "name": "test-update-group-success", + "email": "test@test.com", + "description": "this is a description", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } saved_group = client.create_group(new_group, status=200) - group = client.get_group(saved_group['id'], status=200) + group = client.get_group(saved_group["id"], status=200) - assert_that(group['name'], is_(saved_group['name'])) - assert_that(group['email'], is_(saved_group['email'])) - assert_that(group['description'], is_(saved_group['description'])) - assert_that(group['status'], is_(saved_group['status'])) - assert_that(group['created'], 
is_(saved_group['created'])) - assert_that(group['id'], is_(saved_group['id'])) + assert_that(group["name"], is_(saved_group["name"])) + assert_that(group["email"], is_(saved_group["email"])) + assert_that(group["description"], is_(saved_group["description"])) + assert_that(group["status"], is_(saved_group["status"])) + assert_that(group["created"], is_(saved_group["created"])) + assert_that(group["id"], is_(saved_group["id"])) time.sleep(1) # sleep to ensure that update doesnt change created time update_group = { - 'id': group['id'], - 'name': 'updated-name', - 'email': 'update@test.com', - 'description': 'this is a new description', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "id": group["id"], + "name": "updated-name", + "email": "update@test.com", + "description": "this is a new description", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } - group = client.update_group(update_group['id'], update_group, status=200) + group = client.update_group(update_group["id"], update_group, status=200) - assert_that(group['name'], is_(update_group['name'])) - assert_that(group['email'], is_(update_group['email'])) - assert_that(group['description'], is_(update_group['description'])) - assert_that(group['status'], is_(saved_group['status'])) - assert_that(group['created'], is_(saved_group['created'])) - assert_that(group['id'], is_(saved_group['id'])) - assert_that(group['members'][0]['id'], is_('ok')) - assert_that(group['admins'][0]['id'], is_('ok')) + assert_that(group["name"], is_(update_group["name"])) + assert_that(group["email"], is_(update_group["email"])) + assert_that(group["description"], is_(update_group["description"])) + assert_that(group["status"], is_(saved_group["status"])) + assert_that(group["created"], is_(saved_group["created"])) + assert_that(group["id"], is_(saved_group["id"])) + assert_that(group["members"][0]["id"], is_("ok")) + assert_that(group["admins"][0]["id"], is_("ok")) finally: if saved_group: - 
client.delete_group(saved_group['id'], status=(200, 404)) + client.delete_group(saved_group["id"], status=(200, 404)) def test_update_group_without_name(shared_zone_test_context): @@ -63,27 +63,27 @@ def test_update_group_without_name(shared_zone_test_context): result = None try: new_group = { - 'name': 'test-update-without-name', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "name": "test-update-without-name", + "email": "test@test.com", + "description": "this is a description", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } result = client.create_group(new_group, status=200) - assert_that(result['name'], is_(new_group['name'])) - assert_that(result['email'], is_(new_group['email'])) + assert_that(result["name"], is_(new_group["name"])) + assert_that(result["email"], is_(new_group["email"])) update_group = { - 'id': result['id'], - 'email': 'update@test.com', - 'description': 'this is a new description' + "id": result["id"], + "email": "update@test.com", + "description": "this is a new description" } - errors = client.update_group(update_group['id'], update_group, status=400)['errors'] + errors = client.update_group(update_group["id"], update_group, status=400)["errors"] assert_that(errors[0], is_("Missing Group.name")) finally: if result: - client.delete_group(result['id'], status=(200, 404)) + client.delete_group(result["id"], status=(200, 404)) def test_update_group_without_email(shared_zone_test_context): @@ -94,28 +94,28 @@ def test_update_group_without_email(shared_zone_test_context): result = None try: new_group = { - 'name': 'test-update-without-email', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "name": "test-update-without-email", + "email": "test@test.com", + "description": "this is a description", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } result = 
client.create_group(new_group, status=200) - assert_that(result['name'], is_(new_group['name'])) - assert_that(result['email'], is_(new_group['email'])) + assert_that(result["name"], is_(new_group["name"])) + assert_that(result["email"], is_(new_group["email"])) update_group = { - 'id': result['id'], - 'name': 'without-email', - 'description': 'this is a description', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "id": result["id"], + "name": "without-email", + "description": "this is a description", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } - errors = client.update_group(update_group['id'], update_group, status=400)['errors'] + errors = client.update_group(update_group["id"], update_group, status=400)["errors"] assert_that(errors[0], is_("Missing Group.email")) finally: if result: - client.delete_group(result['id'], status=(200, 404)) + client.delete_group(result["id"], status=(200, 404)) def test_updating_group_without_name_or_email(shared_zone_test_context): @@ -126,23 +126,23 @@ def test_updating_group_without_name_or_email(shared_zone_test_context): result = None try: new_group = { - 'name': 'test-update-without-name-and-email', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "name": "test-update-without-name-and-email", + "email": "test@test.com", + "description": "this is a description", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } result = client.create_group(new_group, status=200) - assert_that(result['name'], is_(new_group['name'])) - assert_that(result['email'], is_(new_group['email'])) + assert_that(result["name"], is_(new_group["name"])) + assert_that(result["email"], is_(new_group["email"])) update_group = { - 'id': result['id'], - 'description': 'this is a description', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "id": result["id"], + "description": "this is a description", + "members": [{"id": "ok"}], + "admins": 
[{"id": "ok"}] } - errors = client.update_group(update_group['id'], update_group, status=400)['errors'] + errors = client.update_group(update_group["id"], update_group, status=400)["errors"] assert_that(errors, has_length(2)) assert_that(errors, contains_inanyorder( "Missing Group.name", @@ -150,7 +150,7 @@ def test_updating_group_without_name_or_email(shared_zone_test_context): )) finally: if result: - client.delete_group(result['id'], status=(200, 404)) + client.delete_group(result["id"], status=(200, 404)) def test_updating_group_without_members_or_admins(shared_zone_test_context): @@ -162,23 +162,23 @@ def test_updating_group_without_members_or_admins(shared_zone_test_context): try: new_group = { - 'name': 'test-update-without-members', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "name": "test-update-without-members", + "email": "test@test.com", + "description": "this is a description", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } result = client.create_group(new_group, status=200) - assert_that(result['name'], is_(new_group['name'])) - assert_that(result['email'], is_(new_group['email'])) + assert_that(result["name"], is_(new_group["name"])) + assert_that(result["email"], is_(new_group["email"])) update_group = { - 'id': result['id'], - 'name': 'test-update-without-members', - 'email': 'test@test.com', - 'description': 'this is a description', + "id": result["id"], + "name": "test-update-without-members", + "email": "test@test.com", + "description": "this is a description", } - errors = client.update_group(update_group['id'], update_group, status=400)['errors'] + errors = client.update_group(update_group["id"], update_group, status=400)["errors"] assert_that(errors, has_length(2)) assert_that(errors, contains_inanyorder( "Missing Group.members", @@ -186,7 +186,7 @@ def test_updating_group_without_members_or_admins(shared_zone_test_context): )) finally: if result: - 
client.delete_group(result['id'], status=(200, 404)) + client.delete_group(result["id"], status=(200, 404)) def test_update_group_adds_admins_as_members(shared_zone_test_context): @@ -199,42 +199,42 @@ def test_update_group_adds_admins_as_members(shared_zone_test_context): try: new_group = { - 'name': 'test-update-group-admins-as-members', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "name": "test-update-group-admins-as-members", + "email": "test@test.com", + "description": "this is a description", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } saved_group = client.create_group(new_group, status=200) - group = client.get_group(saved_group['id'], status=200) + group = client.get_group(saved_group["id"], status=200) - assert_that(group['name'], is_(saved_group['name'])) - assert_that(group['email'], is_(saved_group['email'])) - assert_that(group['description'], is_(saved_group['description'])) - assert_that(group['status'], is_(saved_group['status'])) - assert_that(group['created'], is_(saved_group['created'])) - assert_that(group['id'], is_(saved_group['id'])) + assert_that(group["name"], is_(saved_group["name"])) + assert_that(group["email"], is_(saved_group["email"])) + assert_that(group["description"], is_(saved_group["description"])) + assert_that(group["status"], is_(saved_group["status"])) + assert_that(group["created"], is_(saved_group["created"])) + assert_that(group["id"], is_(saved_group["id"])) update_group = { - 'id': group['id'], - 'name': 'test-update-group-admins-as-members', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}, {'id': 'dummy'}] + "id": group["id"], + "name": "test-update-group-admins-as-members", + "email": "test@test.com", + "description": "this is a description", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}, {"id": "dummy"}] } - group = 
client.update_group(update_group['id'], update_group, status=200) + group = client.update_group(update_group["id"], update_group, status=200) - assert_that(group['members'], has_length(2)) - assert_that(['ok', 'dummy'], has_item(group['members'][0]['id'])) - assert_that(['ok', 'dummy'], has_item(group['members'][1]['id'])) - assert_that(group['admins'], has_length(2)) - assert_that(['ok', 'dummy'], has_item(group['admins'][0]['id'])) - assert_that(['ok', 'dummy'], has_item(group['admins'][1]['id'])) + assert_that(group["members"], has_length(2)) + assert_that(["ok", "dummy"], has_item(group["members"][0]["id"])) + assert_that(["ok", "dummy"], has_item(group["members"][1]["id"])) + assert_that(group["admins"], has_length(2)) + assert_that(["ok", "dummy"], has_item(group["admins"][0]["id"])) + assert_that(["ok", "dummy"], has_item(group["admins"][1]["id"])) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200, 404)) + client.delete_group(saved_group["id"], status=(200, 404)) def test_update_group_conflict(shared_zone_test_context): @@ -247,40 +247,40 @@ def test_update_group_conflict(shared_zone_test_context): conflict_group = None try: new_group = { - 'name': 'test_update_group_conflict', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "name": "test_update_group_conflict", + "email": "test@test.com", + "description": "this is a description", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } conflict_group = client.create_group(new_group, status=200) - assert_that(conflict_group['name'], is_(new_group['name'])) + assert_that(conflict_group["name"], is_(new_group["name"])) other_group = { - 'name': 'change_me', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "name": "change_me", + "email": "test@test.com", + "description": "this is a description", + "members": [{"id": "ok"}], + 
"admins": [{"id": "ok"}] } result = client.create_group(other_group, status=200) - assert_that(result['name'], is_(other_group['name'])) + assert_that(result["name"], is_(other_group["name"])) # change the name of the other_group to the first group (conflict) update_group = { - 'id': result['id'], - 'name': 'test_update_group_conflict', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "id": result["id"], + "name": "test_update_group_conflict", + "email": "test@test.com", + "description": "this is a description", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } - client.update_group(update_group['id'], update_group, status=409) + client.update_group(update_group["id"], update_group, status=409) finally: if result: - client.delete_group(result['id'], status=(200, 404)) + client.delete_group(result["id"], status=(200, 404)) if conflict_group: - client.delete_group(conflict_group['id'], status=(200, 404)) + client.delete_group(conflict_group["id"], status=(200, 404)) def test_update_group_not_found(shared_zone_test_context): @@ -291,14 +291,14 @@ def test_update_group_not_found(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client update_group = { - 'id': 'test-update-group-not-found', - 'name': 'test-update-group-not-found', - 'email': 'update@test.com', - 'description': 'this is a new description', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "id": "test-update-group-not-found", + "name": "test-update-group-not-found", + "email": "update@test.com", + "description": "this is a new description", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } - client.update_group(update_group['id'], update_group, status=404) + client.update_group(update_group["id"], update_group, status=404) def test_update_group_deleted(shared_zone_test_context): @@ -311,27 +311,27 @@ def test_update_group_deleted(shared_zone_test_context): try: new_group = { - 
'name': 'test-update-group-deleted', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "name": "test-update-group-deleted", + "email": "test@test.com", + "description": "this is a description", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } saved_group = client.create_group(new_group, status=200) - client.delete_group(saved_group['id'], status=200) + client.delete_group(saved_group["id"], status=200) update_group = { - 'id': saved_group['id'], - 'name': 'test-update-group-deleted-updated', - 'email': 'update@test.com', - 'description': 'this is a new description', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "id": saved_group["id"], + "name": "test-update-group-deleted-updated", + "email": "update@test.com", + "description": "this is a new description", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } - client.update_group(update_group['id'], update_group, status=404) + client.update_group(update_group["id"], update_group, status=404) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200, 404)) + client.delete_group(saved_group["id"], status=(200, 404)) def test_add_member_via_update_group_success(shared_zone_test_context): @@ -343,29 +343,29 @@ def test_add_member_via_update_group_success(shared_zone_test_context): saved_group = None try: new_group = { - 'name': 'test-add-member-to-via-update-group-success', - 'email': 'test@test.com', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "name": "test-add-member-to-via-update-group-success", + "email": "test@test.com", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } saved_group = client.create_group(new_group, status=200) updated_group = { - 'id': saved_group['id'], - 'name': 'test-add-member-to-via-update-group-success', - 'email': 'test@test.com', - 'members': [{'id': 'ok'}, {'id': 'dummy'}], - 'admins': [{'id': 'ok'}] + "id": saved_group["id"], + "name": 
"test-add-member-to-via-update-group-success", + "email": "test@test.com", + "members": [{"id": "ok"}, {"id": "dummy"}], + "admins": [{"id": "ok"}] } - saved_group = client.update_group(updated_group['id'], updated_group, status=200) - expected_members = ['ok', 'dummy'] - assert_that(saved_group['members'], has_length(2)) - assert_that(expected_members, has_item(saved_group['members'][0]['id'])) - assert_that(expected_members, has_item(saved_group['members'][1]['id'])) + saved_group = client.update_group(updated_group["id"], updated_group, status=200) + expected_members = ["ok", "dummy"] + assert_that(saved_group["members"], has_length(2)) + assert_that(expected_members, has_item(saved_group["members"][0]["id"])) + assert_that(expected_members, has_item(saved_group["members"][1]["id"])) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200, 404)) + client.delete_group(saved_group["id"], status=(200, 404)) def test_add_member_to_group_twice_via_update_group(shared_zone_test_context): @@ -377,30 +377,30 @@ def test_add_member_to_group_twice_via_update_group(shared_zone_test_context): saved_group = None try: new_group = { - 'name': 'test-add-member-to-group-twice-success-via-update-group', - 'email': 'test@test.com', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "name": "test-add-member-to-group-twice-success-via-update-group", + "email": "test@test.com", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } saved_group = client.create_group(new_group, status=200) updated_group = { - 'id': saved_group['id'], - 'name': 'test-add-member-to-group-twice-success-via-update-group', - 'email': 'test@test.com', - 'members': [{'id': 'ok'}, {'id': 'dummy'}], - 'admins': [{'id': 'ok'}] + "id": saved_group["id"], + "name": "test-add-member-to-group-twice-success-via-update-group", + "email": "test@test.com", + "members": [{"id": "ok"}, {"id": "dummy"}], + "admins": [{"id": "ok"}] } - saved_group = client.update_group(updated_group['id'], 
updated_group, status=200) - saved_group = client.update_group(updated_group['id'], updated_group, status=200) - expected_members = ['ok', 'dummy'] - assert_that(saved_group['members'], has_length(2)) - assert_that(expected_members, has_item(saved_group['members'][0]['id'])) - assert_that(expected_members, has_item(saved_group['members'][1]['id'])) + saved_group = client.update_group(updated_group["id"], updated_group, status=200) + saved_group = client.update_group(updated_group["id"], updated_group, status=200) + expected_members = ["ok", "dummy"] + assert_that(saved_group["members"], has_length(2)) + assert_that(expected_members, has_item(saved_group["members"][0]["id"])) + assert_that(expected_members, has_item(saved_group["members"][1]["id"])) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200, 404)) + client.delete_group(saved_group["id"], status=(200, 404)) def test_add_not_found_member_to_group_via_update_group(shared_zone_test_context): @@ -413,27 +413,27 @@ def test_add_not_found_member_to_group_via_update_group(shared_zone_test_context try: new_group = { - 'name': 'test-add-not-found-member-to-group-via-update-group', - 'email': 'test@test.com', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "name": "test-add-not-found-member-to-group-via-update-group", + "email": "test@test.com", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } saved_group = client.create_group(new_group, status=200) - result = client.get_group(saved_group['id'], status=200) - assert_that(result['members'], has_length(1)) + result = client.get_group(saved_group["id"], status=200) + assert_that(result["members"], has_length(1)) updated_group = { - 'id': saved_group['id'], - 'name': 'test-add-not-found-member-to-group-via-update-group', - 'email': 'test@test.com', - 'members': [{'id': 'ok'}, {'id': 'not_found'}], - 'admins': [{'id': 'ok'}] + "id": saved_group["id"], + "name": "test-add-not-found-member-to-group-via-update-group", + "email": 
"test@test.com", + "members": [{"id": "ok"}, {"id": "not_found"}], + "admins": [{"id": "ok"}] } - client.update_group(updated_group['id'], updated_group, status=404) + client.update_group(updated_group["id"], updated_group, status=404) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200, 404)) + client.delete_group(saved_group["id"], status=(200, 404)) def test_remove_member_via_update_group_success(shared_zone_test_context): @@ -446,28 +446,28 @@ def test_remove_member_via_update_group_success(shared_zone_test_context): try: new_group = { - 'name': 'test-remove-member-via-update-group-success', - 'email': 'test@test.com', - 'members': [{'id': 'ok'}, {'id': 'dummy'}], - 'admins': [{'id': 'ok'}] + "name": "test-remove-member-via-update-group-success", + "email": "test@test.com", + "members": [{"id": "ok"}, {"id": "dummy"}], + "admins": [{"id": "ok"}] } saved_group = client.create_group(new_group, status=200) - assert_that(saved_group['members'], has_length(2)) + assert_that(saved_group["members"], has_length(2)) updated_group = { - 'id': saved_group['id'], - 'name': 'test-remove-member-via-update-group-success', - 'email': 'test@test.com', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "id": saved_group["id"], + "name": "test-remove-member-via-update-group-success", + "email": "test@test.com", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } - saved_group = client.update_group(updated_group['id'], updated_group, status=200) + saved_group = client.update_group(updated_group["id"], updated_group, status=200) - assert_that(saved_group['members'], has_length(1)) - assert_that(saved_group['members'][0]['id'], is_('ok')) + assert_that(saved_group["members"], has_length(1)) + assert_that(saved_group["members"][0]["id"], is_("ok")) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200, 404)) + client.delete_group(saved_group["id"], status=(200, 404)) def 
test_remove_member_and_admin(shared_zone_test_context): @@ -479,30 +479,30 @@ def test_remove_member_and_admin(shared_zone_test_context): try: new_group = { - 'name': 'test-remove-member-and-admin', - 'email': 'test@test.com', - 'members': [{'id': 'ok'}, {'id': 'dummy'}], - 'admins': [{'id': 'ok'}, {'id': 'dummy'}] + "name": "test-remove-member-and-admin", + "email": "test@test.com", + "members": [{"id": "ok"}, {"id": "dummy"}], + "admins": [{"id": "ok"}, {"id": "dummy"}] } saved_group = client.create_group(new_group, status=200) - assert_that(saved_group['members'], has_length(2)) + assert_that(saved_group["members"], has_length(2)) updated_group = { - 'id': saved_group['id'], - 'name': 'test-remove-member-and-admin', - 'email': 'test@test.com', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "id": saved_group["id"], + "name": "test-remove-member-and-admin", + "email": "test@test.com", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } - saved_group = client.update_group(updated_group['id'], updated_group, status=200) + saved_group = client.update_group(updated_group["id"], updated_group, status=200) - assert_that(saved_group['members'], has_length(1)) - assert_that(saved_group['members'][0]['id'], is_('ok')) - assert_that(saved_group['admins'], has_length(1)) - assert_that(saved_group['admins'][0]['id'], is_('ok')) + assert_that(saved_group["members"], has_length(1)) + assert_that(saved_group["members"][0]["id"], is_("ok")) + assert_that(saved_group["admins"], has_length(1)) + assert_that(saved_group["admins"][0]["id"], is_("ok")) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200, 404)) + client.delete_group(saved_group["id"], status=(200, 404)) def test_remove_member_but_not_admin_keeps_member(shared_zone_test_context): @@ -514,32 +514,32 @@ def test_remove_member_but_not_admin_keeps_member(shared_zone_test_context): try: new_group = { - 'name': 'test-remove-member-not-admin-keeps-member', - 'email': 'test@test.com', 
- 'members': [{'id': 'ok'}, {'id': 'dummy'}], - 'admins': [{'id': 'ok'}, {'id': 'dummy'}] + "name": "test-remove-member-not-admin-keeps-member", + "email": "test@test.com", + "members": [{"id": "ok"}, {"id": "dummy"}], + "admins": [{"id": "ok"}, {"id": "dummy"}] } saved_group = client.create_group(new_group, status=200) - assert_that(saved_group['members'], has_length(2)) + assert_that(saved_group["members"], has_length(2)) updated_group = { - 'id': saved_group['id'], - 'name': 'test-remove-member-not-admin-keeps-member', - 'email': 'test@test.com', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}, {'id': 'dummy'}] + "id": saved_group["id"], + "name": "test-remove-member-not-admin-keeps-member", + "email": "test@test.com", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}, {"id": "dummy"}] } - saved_group = client.update_group(updated_group['id'], updated_group, status=200) + saved_group = client.update_group(updated_group["id"], updated_group, status=200) - expected_members = ['ok', 'dummy'] - assert_that(saved_group['members'], has_length(2)) - assert_that(expected_members, has_item(saved_group['members'][0]['id'])) - assert_that(expected_members, has_item(saved_group['members'][1]['id'])) - assert_that(expected_members, has_item(saved_group['admins'][0]['id'])) - assert_that(expected_members, has_item(saved_group['admins'][1]['id'])) + expected_members = ["ok", "dummy"] + assert_that(saved_group["members"], has_length(2)) + assert_that(expected_members, has_item(saved_group["members"][0]["id"])) + assert_that(expected_members, has_item(saved_group["members"][1]["id"])) + assert_that(expected_members, has_item(saved_group["admins"][0]["id"])) + assert_that(expected_members, has_item(saved_group["admins"][1]["id"])) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200, 404)) + client.delete_group(saved_group["id"], status=(200, 404)) def test_remove_admin_keeps_member(shared_zone_test_context): @@ -551,33 +551,33 @@ def 
test_remove_admin_keeps_member(shared_zone_test_context): try: new_group = { - 'name': 'test-remove-admin-keeps-member', - 'email': 'test@test.com', - 'members': [{'id': 'ok'}, {'id': 'dummy'}], - 'admins': [{'id': 'ok'}, {'id': 'dummy'}] + "name": "test-remove-admin-keeps-member", + "email": "test@test.com", + "members": [{"id": "ok"}, {"id": "dummy"}], + "admins": [{"id": "ok"}, {"id": "dummy"}] } saved_group = client.create_group(new_group, status=200) - assert_that(saved_group['members'], has_length(2)) + assert_that(saved_group["members"], has_length(2)) updated_group = { - 'id': saved_group['id'], - 'name': 'test-remove-admin-keeps-member', - 'email': 'test@test.com', - 'members': [{'id': 'ok'}, {'id': 'dummy'}], - 'admins': [{'id': 'ok'}] + "id": saved_group["id"], + "name": "test-remove-admin-keeps-member", + "email": "test@test.com", + "members": [{"id": "ok"}, {"id": "dummy"}], + "admins": [{"id": "ok"}] } - saved_group = client.update_group(updated_group['id'], updated_group, status=200) + saved_group = client.update_group(updated_group["id"], updated_group, status=200) - expected_members = ['ok', 'dummy'] - assert_that(saved_group['members'], has_length(2)) - assert_that(expected_members, has_item(saved_group['members'][0]['id'])) - assert_that(expected_members, has_item(saved_group['members'][1]['id'])) + expected_members = ["ok", "dummy"] + assert_that(saved_group["members"], has_length(2)) + assert_that(expected_members, has_item(saved_group["members"][0]["id"])) + assert_that(expected_members, has_item(saved_group["members"][1]["id"])) - assert_that(saved_group['admins'], has_length(1)) - assert_that(saved_group['admins'][0]['id'], is_('ok')) + assert_that(saved_group["admins"], has_length(1)) + assert_that(saved_group["admins"][0]["id"], is_("ok")) finally: if saved_group: - client.delete_group(saved_group['id'], status=(200, 404)) + client.delete_group(saved_group["id"], status=(200, 404)) def 
test_update_group_not_authorized(shared_zone_test_context): @@ -588,26 +588,26 @@ def test_update_group_not_authorized(shared_zone_test_context): not_admin_client = shared_zone_test_context.dummy_vinyldns_client try: new_group = { - 'name': 'test-update-group-not-authorized', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "name": "test-update-group-not-authorized", + "email": "test@test.com", + "description": "this is a description", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } saved_group = ok_client.create_group(new_group, status=200) update_group = { - 'id': saved_group['id'], - 'name': 'updated-name', - 'email': 'update@test.com', - 'description': 'this is a new description', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "id": saved_group["id"], + "name": "updated-name", + "email": "update@test.com", + "description": "this is a new description", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } - not_admin_client.update_group(update_group['id'], update_group, status=403) + not_admin_client.update_group(update_group["id"], update_group, status=403) finally: if saved_group: - ok_client.delete_group(saved_group['id'], status=(200, 404)) + ok_client.delete_group(saved_group["id"], status=(200, 404)) def test_update_group_adds_admins_to_member_list(shared_zone_test_context): @@ -620,20 +620,20 @@ def test_update_group_adds_admins_to_member_list(shared_zone_test_context): try: new_group = { - 'name': 'test-update-group-add-admins-to-members', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [{'id': 'ok'}], - 'admins': [{'id': 'ok'}] + "name": "test-update-group-add-admins-to-members", + "email": "test@test.com", + "description": "this is a description", + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } saved_group = ok_client.create_group(new_group, status=200) - saved_group['admins'] = [{'id': 'dummy'}] - result 
= ok_client.update_group(saved_group['id'], saved_group, status=200) + saved_group["admins"] = [{"id": "dummy"}] + result = ok_client.update_group(saved_group["id"], saved_group, status=200) - assert_that(map(lambda x: x['id'], result['members']), contains('ok', 'dummy')) - assert_that(result['admins'][0]['id'], is_('dummy')) + assert_that([x["id"] for x in result["members"]], contains_exactly("ok", "dummy")) + assert_that(result["admins"][0]["id"], is_("dummy")) finally: if result: - dummy_client.delete_group(result['id'], status=(200, 404)) + dummy_client.delete_group(result["id"], status=(200, 404)) diff --git a/modules/api/functional_test/live_tests/production_verify_test.py b/modules/api/functional_test/live_tests/production_verify_test.py index 87cd57a64..0b0df124b 100644 --- a/modules/api/functional_test/live_tests/production_verify_test.py +++ b/modules/api/functional_test/live_tests/production_verify_test.py @@ -1,14 +1,4 @@ -import pytest -import sys -import dns.query -import dns.tsigkeyring -import dns.update - from utils import * -from hamcrest import * -from vinyldns_python import VinylDNSClient -from test_data import TestData -from dns.resolver import * def test_verify_production(shared_zone_test_context): @@ -20,51 +10,51 @@ def test_verify_production(shared_zone_test_context): result_rs = None try: new_rs = { - 'zoneId': shared_zone_test_context.ok_zone['id'], - 'name': 'test_create_recordset_with_dns_verify', - 'type': 'A', - 'ttl': 100, - 'records': [ + "zoneId": shared_zone_test_context.ok_zone["id"], + "name": "test_create_recordset_with_dns_verify", + "type": "A", + "ttl": 100, + "records": [ { - 'address': '10.1.1.1' + "address": "10.1.1.1" }, { - 'address': '10.2.2.2' + "address": "10.2.2.2" } ] } - print "\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n" + print("\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n") result = client.create_recordset(new_rs, status=202) - print 
str(result) + print(str(result)) - assert_that(result['changeType'], is_('Create')) - assert_that(result['status'], is_('Pending')) - assert_that(result['created'], is_not(none())) - assert_that(result['userId'], is_not(none())) + assert_that(result["changeType"], is_("Create")) + assert_that(result["status"], is_("Pending")) + assert_that(result["created"], is_not(none())) + assert_that(result["userId"], is_not(none())) - result_rs = result['recordSet'] - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] - print "\r\n\r\n!!!recordset is active! Verifying..." + result_rs = result["recordSet"] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] + print("\r\n\r\n!!!recordset is active! Verifying...") verify_recordset(result_rs, new_rs) - print "\r\n\r\n!!!recordset verified..." + print("\r\n\r\n!!!recordset verified...") - records = [x['address'] for x in result_rs['records']] + records = [x["address"] for x in result_rs["records"]] assert_that(records, has_length(2)) - assert_that('10.1.1.1', is_in(records)) - assert_that('10.2.2.2', is_in(records)) + assert_that("10.1.1.1", is_in(records)) + assert_that("10.2.2.2", is_in(records)) - print "\r\n\r\n!!!verifying recordset in dns backend" - answers = dns_resolve(shared_zone_test_context.ok_zone, result_rs['name'], result_rs['type']) + print("\r\n\r\n!!!verifying recordset in dns backend") + answers = dns_resolve(shared_zone_test_context.ok_zone, result_rs["name"], result_rs["type"]) rdata_strings = rdata(answers) assert_that(answers, has_length(2)) - assert_that('10.1.1.1', is_in(rdata_strings)) - assert_that('10.2.2.2', is_in(rdata_strings)) + assert_that("10.1.1.1", is_in(rdata_strings)) + assert_that("10.2.2.2", is_in(rdata_strings)) finally: if result_rs: try: - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_deleted(delete_result['zoneId'], delete_result['id']) + 
delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_deleted(delete_result["zoneId"], delete_result["id"]) except: - pass \ No newline at end of file + pass diff --git a/modules/api/functional_test/live_tests/recordsets/create_recordset_test.py b/modules/api/functional_test/live_tests/recordsets/create_recordset_test.py index 621b74e07..9be902265 100644 --- a/modules/api/functional_test/live_tests/recordsets/create_recordset_test.py +++ b/modules/api/functional_test/live_tests/recordsets/create_recordset_test.py @@ -1,9 +1,7 @@ import pytest -from dns.resolver import * -from hamcrest import * -from utils import * -from test_data import TestData +from live_tests.test_data import TestData +from utils import * def test_create_recordset_with_dns_verify(shared_zone_test_context): @@ -14,52 +12,52 @@ def test_create_recordset_with_dns_verify(shared_zone_test_context): result_rs = None try: new_rs = { - 'zoneId': shared_zone_test_context.ok_zone['id'], - 'name': 'test_create_recordset_with_dns_verify', - 'type': 'A', - 'ttl': 100, - 'records': [ + "zoneId": shared_zone_test_context.ok_zone["id"], + "name": "test_create_recordset_with_dns_verify", + "type": "A", + "ttl": 100, + "records": [ { - 'address': '10.1.1.1' + "address": "10.1.1.1" }, { - 'address': '10.2.2.2' + "address": "10.2.2.2" } ] } - print "\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n" + print("\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n") result = client.create_recordset(new_rs, status=202) - print str(result) + print(str(result)) - assert_that(result['changeType'], is_('Create')) - assert_that(result['status'], is_('Pending')) - assert_that(result['created'], is_not(none())) - assert_that(result['userId'], is_not(none())) + assert_that(result["changeType"], is_("Create")) + assert_that(result["status"], is_("Pending")) + assert_that(result["created"], is_not(none())) 
+ assert_that(result["userId"], is_not(none())) - result_rs = result['recordSet'] - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] - print "\r\n\r\n!!!recordset is active! Verifying..." + result_rs = result["recordSet"] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] + print("\r\n\r\n!!!recordset is active! Verifying...") verify_recordset(result_rs, new_rs) - print "\r\n\r\n!!!recordset verified..." + print("\r\n\r\n!!!recordset verified...") - records = [x['address'] for x in result_rs['records']] + records = [x["address"] for x in result_rs["records"]] assert_that(records, has_length(2)) - assert_that('10.1.1.1', is_in(records)) - assert_that('10.2.2.2', is_in(records)) + assert_that("10.1.1.1", is_in(records)) + assert_that("10.2.2.2", is_in(records)) - print "\r\n\r\n!!!verifying recordset in dns backend" - answers = dns_resolve(shared_zone_test_context.ok_zone, result_rs['name'], result_rs['type']) + print("\r\n\r\n!!!verifying recordset in dns backend") + answers = dns_resolve(shared_zone_test_context.ok_zone, result_rs["name"], result_rs["type"]) rdata_strings = rdata(answers) assert_that(answers, has_length(2)) - assert_that('10.1.1.1', is_in(rdata_strings)) - assert_that('10.2.2.2', is_in(rdata_strings)) + assert_that("10.1.1.1", is_in(rdata_strings)) + assert_that("10.2.2.2", is_in(rdata_strings)) finally: if result_rs: try: - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") except: pass @@ -72,35 +70,35 @@ def test_create_naptr_origin_record(shared_zone_test_context): result_rs = None try: new_rs = { - 'zoneId': shared_zone_test_context.ok_zone['id'], - 'name': 'ok.', - 'type': 'NAPTR', - 'ttl': 100, - 
'records': [ + "zoneId": shared_zone_test_context.ok_zone["id"], + "name": "ok.", + "type": "NAPTR", + "ttl": 100, + "records": [ { - 'order': 10, - 'preference': 100, - 'flags': 'S', - 'service': 'SIP+D2T', - 'regexp': '', - 'replacement': '_sip._udp.ok.' + "order": 10, + "preference": 100, + "flags": "S", + "service": "SIP+D2T", + "regexp": '', + "replacement": "_sip._udp.ok." } ] } result = client.create_recordset(new_rs, status=202) - assert_that(result['changeType'], is_('Create')) - assert_that(result['status'], is_('Pending')) - assert_that(result['created'], is_not(none())) - assert_that(result['userId'], is_not(none())) + assert_that(result["changeType"], is_("Create")) + assert_that(result["status"], is_("Pending")) + assert_that(result["created"], is_not(none())) + assert_that(result["userId"], is_not(none())) - result_rs = result['recordSet'] - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + result_rs = result["recordSet"] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] verify_recordset(result_rs, new_rs) finally: if result_rs: - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) def test_create_naptr_non_origin_record(shared_zone_test_context): @@ -111,35 +109,35 @@ def test_create_naptr_non_origin_record(shared_zone_test_context): result_rs = None try: new_rs = { - 'zoneId': shared_zone_test_context.ok_zone['id'], - 'name': 'testnaptr', - 'type': 'NAPTR', - 'ttl': 100, - 'records': [ + "zoneId": shared_zone_test_context.ok_zone["id"], + "name": "testnaptr", + "type": "NAPTR", + "ttl": 100, + "records": [ { - 'order': 10, - 'preference': 100, - 'flags': 'S', - 'service': 'SIP+D2T', - 'regexp': '', - 'replacement': '_sip._udp.ok.' 
+ "order": 10, + "preference": 100, + "flags": "S", + "service": "SIP+D2T", + "regexp": '', + "replacement": "_sip._udp.ok." } ] } result = client.create_recordset(new_rs, status=202) - assert_that(result['changeType'], is_('Create')) - assert_that(result['status'], is_('Pending')) - assert_that(result['created'], is_not(none())) - assert_that(result['userId'], is_not(none())) + assert_that(result["changeType"], is_("Create")) + assert_that(result["status"], is_("Pending")) + assert_that(result["created"], is_not(none())) + assert_that(result["userId"], is_not(none())) - result_rs = result['recordSet'] - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + result_rs = result["recordSet"] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] verify_recordset(result_rs, new_rs) finally: if result_rs: - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) def test_create_srv_recordset_with_service_and_protocol(shared_zone_test_context): @@ -150,82 +148,39 @@ def test_create_srv_recordset_with_service_and_protocol(shared_zone_test_context result_rs = None try: new_rs = { - 'zoneId': shared_zone_test_context.ok_zone['id'], - 'name': '_sip._tcp._test-create-srv-ok', - 'type': 'SRV', - 'ttl': 100, - 'records': [ + "zoneId": shared_zone_test_context.ok_zone["id"], + "name": "_sip._tcp._test-create-srv-ok", + "type": "SRV", + "ttl": 100, + "records": [ { - 'priority': 1, - 'weight': 2, - 'port': 8000, - 'target': 'srv.' + "priority": 1, + "weight": 2, + "port": 8000, + "target": "srv." 
} ] } - print "\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n" + print("\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n") result = client.create_recordset(new_rs, status=202) - print str(result) + print(str(result)) - assert_that(result['changeType'], is_('Create')) - assert_that(result['status'], is_('Pending')) - assert_that(result['created'], is_not(none())) - assert_that(result['userId'], is_not(none())) + assert_that(result["changeType"], is_("Create")) + assert_that(result["status"], is_("Pending")) + assert_that(result["created"], is_not(none())) + assert_that(result["userId"], is_not(none())) - result_rs = result['recordSet'] - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] - print "\r\n\r\n!!!recordset is active! Verifying..." + result_rs = result["recordSet"] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] + print("\r\n\r\n!!!recordset is active! Verifying...") verify_recordset(result_rs, new_rs) - print "\r\n\r\n!!!recordset verified..." + print("\r\n\r\n!!!recordset verified...") finally: if result_rs: - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') - - -def test_create_srv_recordset_with_service_and_protocol(shared_zone_test_context): - """ - Test creating a new srv record set with service and protocol works - """ - client = shared_zone_test_context.ok_vinyldns_client - result_rs = None - try: - new_rs = { - 'zoneId': shared_zone_test_context.ok_zone['id'], - 'name': '_sip._tcp._test-create-srv-ok', - 'type': 'SRV', - 'ttl': 100, - 'records': [ - { - 'priority': 1, - 'weight': 2, - 'port': 8000, - 'target': 'srv.' 
- } - ] - } - print "\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n" - result = client.create_recordset(new_rs, status=202) - print str(result) - - assert_that(result['changeType'], is_('Create')) - assert_that(result['status'], is_('Pending')) - assert_that(result['created'], is_not(none())) - assert_that(result['userId'], is_not(none())) - - result_rs = result['recordSet'] - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] - print "\r\n\r\n!!!recordset is active! Verifying..." - - verify_recordset(result_rs, new_rs) - print "\r\n\r\n!!!recordset verified..." - - finally: - if result_rs: - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") def test_create_aaaa_recordset_with_shorthand_record(shared_zone_test_context): @@ -236,36 +191,36 @@ def test_create_aaaa_recordset_with_shorthand_record(shared_zone_test_context): result_rs = None try: new_rs = { - 'zoneId': shared_zone_test_context.ok_zone['id'], - 'name': 'testAAAA', - 'type': 'AAAA', - 'ttl': 100, - 'records': [ + "zoneId": shared_zone_test_context.ok_zone["id"], + "name": "testAAAA", + "type": "AAAA", + "ttl": 100, + "records": [ { - 'address': '1::2' + "address": "1::2" } ] } - print "\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n" + print("\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n") result = client.create_recordset(new_rs, status=202) - print str(result) + print(str(result)) - assert_that(result['changeType'], is_('Create')) - assert_that(result['status'], is_('Pending')) - assert_that(result['created'], is_not(none())) - assert_that(result['userId'], is_not(none())) + 
assert_that(result["changeType"], is_("Create")) + assert_that(result["status"], is_("Pending")) + assert_that(result["created"], is_not(none())) + assert_that(result["userId"], is_not(none())) - result_rs = result['recordSet'] - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] - print "\r\n\r\n!!!recordset is active! Verifying..." + result_rs = result["recordSet"] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] + print("\r\n\r\n!!!recordset is active! Verifying...") verify_recordset(result_rs, new_rs) - print "\r\n\r\n!!!recordset verified..." + print("\r\n\r\n!!!recordset verified...") finally: if result_rs: - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") def test_create_aaaa_recordset_with_normal_record(shared_zone_test_context): @@ -276,36 +231,36 @@ def test_create_aaaa_recordset_with_normal_record(shared_zone_test_context): result_rs = None try: new_rs = { - 'zoneId': shared_zone_test_context.ok_zone['id'], - 'name': 'test-create-aaaa-recordset-with-normal-record', - 'type': 'AAAA', - 'ttl': 100, - 'records': [ + "zoneId": shared_zone_test_context.ok_zone["id"], + "name": "test-create-aaaa-recordset-with-normal-record", + "type": "AAAA", + "ttl": 100, + "records": [ { - 'address': '1:2:3:4:5:6:7:8' + "address": "1:2:3:4:5:6:7:8" } ] } - print "\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n" + print("\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n") result = client.create_recordset(new_rs, status=202) - print str(result) + print(str(result)) - assert_that(result['changeType'], is_('Create')) - assert_that(result['status'], is_('Pending')) - 
assert_that(result['created'], is_not(none())) - assert_that(result['userId'], is_not(none())) + assert_that(result["changeType"], is_("Create")) + assert_that(result["status"], is_("Pending")) + assert_that(result["created"], is_not(none())) + assert_that(result["userId"], is_not(none())) - result_rs = result['recordSet'] - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] - print "\r\n\r\n!!!recordset is active! Verifying..." + result_rs = result["recordSet"] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] + print("\r\n\r\n!!!recordset is active! Verifying...") verify_recordset(result_rs, new_rs) - print "\r\n\r\n!!!recordset verified..." + print("\r\n\r\n!!!recordset verified...") finally: if result_rs: - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") def test_create_recordset_conflict(shared_zone_test_context): @@ -314,16 +269,16 @@ def test_create_recordset_conflict(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client new_rs = { - 'zoneId': shared_zone_test_context.ok_zone['id'], - 'name': 'test-create-recordset-conflict', - 'type': 'A', - 'ttl': 100, - 'records': [ + "zoneId": shared_zone_test_context.ok_zone["id"], + "name": "test-create-recordset-conflict", + "type": "A", + "ttl": 100, + "records": [ { - 'address': '10.1.1.1' + "address": "10.1.1.1" }, { - 'address': '10.2.2.2' + "address": "10.2.2.2" } ] } @@ -332,13 +287,13 @@ def test_create_recordset_conflict(shared_zone_test_context): try: result = client.create_recordset(new_rs, status=202) - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + result_rs = 
client.wait_until_recordset_change_status(result, "Complete")["recordSet"] client.create_recordset(new_rs, status=409) finally: if result_rs: - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") def test_create_recordset_conflict_with_case_insensitive_name(shared_zone_test_context): @@ -347,16 +302,16 @@ def test_create_recordset_conflict_with_case_insensitive_name(shared_zone_test_c """ client = shared_zone_test_context.ok_vinyldns_client first_rs = { - 'zoneId': shared_zone_test_context.ok_zone['id'], - 'name': 'test-create-recordset-conflict-with-case-insensitive-name', - 'type': 'A', - 'ttl': 100, - 'records': [ + "zoneId": shared_zone_test_context.ok_zone["id"], + "name": "test-create-recordset-conflict-with-case-insensitive-name", + "type": "A", + "ttl": 100, + "records": [ { - 'address': '10.1.1.1' + "address": "10.1.1.1" }, { - 'address': '10.2.2.2' + "address": "10.2.2.2" } ] } @@ -365,14 +320,14 @@ def test_create_recordset_conflict_with_case_insensitive_name(shared_zone_test_c try: result = client.create_recordset(first_rs, status=202) - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] - first_rs['name'] = 'test-create-recordset-conflict-with-case-insensitive-NAME' + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] + first_rs["name"] = "test-create-recordset-conflict-with-case-insensitive-NAME" client.create_recordset(first_rs, status=409) finally: if result_rs: - result_rs = client.wait_until_recordset_change_status(result, 
'Complete')['recordSet'] - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") def test_create_recordset_conflict_with_trailing_dot_insensitive_name(shared_zone_test_context): @@ -380,34 +335,33 @@ def test_create_recordset_conflict_with_trailing_dot_insensitive_name(shared_zon Test creating a record set with the same name (but without a trailing dot) and type of an existing one returns a 409 """ client = shared_zone_test_context.ok_vinyldns_client - zone = shared_zone_test_context.parent_zone rs_name = generate_record_name() first_rs = { - 'zoneId': zone['id'], - 'name': rs_name, - 'type': 'A', - 'ttl': 100, - 'records': [ + "zoneId": shared_zone_test_context.parent_zone["id"], + "name": rs_name, + "type": "A", + "ttl": 100, + "records": [ { - 'address': '10.1.1.1' + "address": "10.1.1.1" }, { - 'address': '10.2.2.2' + "address": "10.2.2.2" } ] } result_rs = None try: result = client.create_recordset(first_rs, status=202) - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] - first_rs['name'] = rs_name + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] + first_rs["name"] = rs_name client.create_recordset(first_rs, status=409) finally: if result_rs: - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] + delete_result = 
client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") def test_create_recordset_conflict_with_dns(shared_zone_test_context): @@ -417,13 +371,13 @@ def test_create_recordset_conflict_with_dns(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client new_rs = { - 'zoneId': shared_zone_test_context.ok_zone['id'], - 'name': 'backend-conflict', - 'type': 'A', - 'ttl': 38400, - 'records': [ + "zoneId": shared_zone_test_context.ok_zone["id"], + "name": "backend-conflict", + "type": "A", + "ttl": 38400, + "records": [ { - 'address': '7.7.7.7' # records with different data should fail, these live in the dns hosts + "address": "7.7.7.7" # records with different data should fail, these live in the dns hosts } ] } @@ -431,7 +385,7 @@ def test_create_recordset_conflict_with_dns(shared_zone_test_context): try: dns_add(shared_zone_test_context.ok_zone, "backend-conflict", 200, "A", "1.2.3.4") result = client.create_recordset(new_rs, status=202) - client.wait_until_recordset_change_status(result, 'Failed') + client.wait_until_recordset_change_status(result, "Failed") finally: dns_delete(shared_zone_test_context.ok_zone, "backend-conflict", "A") @@ -446,46 +400,46 @@ def test_create_recordset_conflict_with_dns_different_type(shared_zone_test_cont result_rs = None try: new_rs = { - 'zoneId': shared_zone_test_context.ok_zone['id'], - 'name': 'already-exists', - 'type': 'TXT', - 'ttl': 100, - 'records': [ + "zoneId": shared_zone_test_context.ok_zone["id"], + "name": "already-exists", + "type": "TXT", + "ttl": 100, + "records": [ { - 'text': 'should succeed' + "text": "should succeed" } ] } - print "\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n" + print("\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n") result = client.create_recordset(new_rs, status=202) - print str(result) + print(str(result)) - 
assert_that(result['changeType'], is_('Create'))
-        assert_that(result['status'], is_('Pending'))
-        assert_that(result['created'], is_not(none()))
-        assert_that(result['userId'], is_not(none()))
+        assert_that(result["changeType"], is_("Create"))
+        assert_that(result["status"], is_("Pending"))
+        assert_that(result["created"], is_not(none()))
+        assert_that(result["userId"], is_not(none()))
 
-        result_rs = result['recordSet']
-        result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet']
-        print "\r\n\r\n!!!recordset is active! Verifying..."
+        result_rs = result["recordSet"]
+        result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]
+        print("\r\n\r\n!!!recordset is active! Verifying...")
 
         verify_recordset(result_rs, new_rs)
-        print "\r\n\r\n!!!recordset verified..."
+        print("\r\n\r\n!!!recordset verified...")
 
-        text = [x['text'] for x in result_rs['records']]
+        text = [x["text"] for x in result_rs["records"]]
         assert_that(text, has_length(1))
-        assert_that('should succeed', is_in(text))
+        assert_that("should succeed", is_in(text))
 
-        print "\r\n\r\n!!!verifying recordset in dns backend"
-        answers = dns_resolve(shared_zone_test_context.ok_zone, result_rs['name'], result_rs['type'])
+        print("\r\n\r\n!!!verifying recordset in dns backend")
+        answers = dns_resolve(shared_zone_test_context.ok_zone, result_rs["name"], result_rs["type"])
         rdata_strings = rdata(answers)
         assert_that(rdata_strings, has_length(1))
         assert_that('"should succeed"', is_in(rdata_strings))
     finally:
         if result_rs:
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")
 
 
 def test_create_recordset_zone_not_found(shared_zone_test_context):
@@ -494,16 +448,16 @@ def test_create_recordset_zone_not_found(shared_zone_test_context):
     """
     client = shared_zone_test_context.ok_vinyldns_client
     new_rs = {
-        'zoneId': '1234',
-        'name': 'test_create_recordset_zone_not_found',
-        'type': 'A',
-        'ttl': 100,
-        'records': [
+        "zoneId": "1234",
+        "name": "test_create_recordset_zone_not_found",
+        "type": "A",
+        "ttl": 100,
+        "records": [
             {
-                'address': '10.1.1.1'
+                "address": "10.1.1.1"
             },
             {
-                'address': '10.2.2.2'
+                "address": "10.2.2.2"
             }
         ]
     }
@@ -516,9 +470,9 @@ def test_create_missing_record_data(shared_zone_test_context):
     """
     client = shared_zone_test_context.ok_vinyldns_client
 
-    new_rs = dict({"no": "data"}, zoneId=shared_zone_test_context.system_test_zone['id'])
+    new_rs = dict({"no": "data"}, zoneId=shared_zone_test_context.system_test_zone["id"])
 
-    errors = client.create_recordset(new_rs, status=400)['errors']
+    errors = client.create_recordset(new_rs, status=400)["errors"]
     assert_that(errors, contains_inanyorder(
         "Missing RecordSet.name",
         "Missing RecordSet.type",
@@ -533,21 +487,21 @@ def test_create_invalid_record_type(shared_zone_test_context):
     client = shared_zone_test_context.ok_vinyldns_client
 
     new_rs = {
-        'zoneId': shared_zone_test_context.system_test_zone['id'],
-        'name': 'test_create_invalid_record_type',
-        'type': 'invalid type',
-        'ttl': 100,
-        'records': [
+        "zoneId": shared_zone_test_context.system_test_zone["id"],
+        "name": "test_create_invalid_record_type",
+        "type": "invalid type",
+        "ttl": 100,
+        "records": [
             {
-                'address': '10.1.1.1'
+                "address": "10.1.1.1"
             },
             {
-                'address': '10.2.2.2'
+                "address": "10.2.2.2"
             }
         ]
     }
 
-    errors = client.create_recordset(new_rs, status=400)['errors']
+    errors = client.create_recordset(new_rs, status=400)["errors"]
     assert_that(errors, contains_inanyorder("Invalid RecordType"))
@@ -558,24 +512,24 @@ def test_create_invalid_record_data(shared_zone_test_context):
     client = shared_zone_test_context.ok_vinyldns_client
 
     new_rs = {
-        'zoneId': shared_zone_test_context.system_test_zone['id'],
-        'name': 'test_create_invalid_record.data',
-        'type': 'A',
-        'ttl': 5,
-        'records': [
+        "zoneId": shared_zone_test_context.system_test_zone["id"],
+        "name": "test_create_invalid_record.data",
+        "type": "A",
+        "ttl": 5,
+        "records": [
             {
-                'address': '10.1.1.1'
+                "address": "10.1.1.1"
             },
             {
-                'address': 'not.ipv4'
+                "address": "not.ipv4"
             },
             {  # Currently, list validation is fail-fast, so the "Missing A.address" that should happen here never does
-                'nonsense': 'gibberish'
+                "nonsense": "gibberish"
             }
         ]
     }
 
-    errors = client.create_recordset(new_rs, status=400)['errors']
+    errors = client.create_recordset(new_rs, status=400)["errors"]
 
     assert_that(errors, contains_inanyorder(
         "A must be a valid IPv4 Address",
@@ -587,22 +541,21 @@ def test_create_dotted_a_record_not_apex_fails(shared_zone_test_context):
     """
     Test that creating a dotted host name A record set fails.
     """
-    client = shared_zone_test_context.ok_vinyldns_client
-    zone = shared_zone_test_context.parent_zone
+    client = shared_zone_test_context.ok_vinyldns_client
     dotted_host_a_record = {
-        'zoneId': zone['id'],
-        'name': 'hello.world',
-        'type': 'A',
-        'ttl': 500,
-        'records': [{'address': '127.0.0.1'}]
+        "zoneId": shared_zone_test_context.parent_zone["id"],
+        "name": "hello.world",
+        "type": "A",
+        "ttl": 500,
+        "records": [{"address": "127.0.0.1"}]
     }
 
+    zone_name = shared_zone_test_context.parent_zone["name"]
     error = client.create_recordset(dotted_host_a_record, status=422)
-    assert_that(error, is_("Record with name " + dotted_host_a_record['name'] + " and type A is a dotted host which "
-                           "is not allowed in zone " + zone[
-        'name']))
+    assert_that(error, is_("Record with name " + dotted_host_a_record["name"] + " and type A is a dotted host which "
+                           "is not allowed in zone " + zone_name))
 
 
 def test_create_dotted_a_record_apex_succeeds(shared_zone_test_context):
@@ -611,25 +563,26 @@ def test_create_dotted_a_record_apex_succeeds(shared_zone_test_context):
     """
     client = shared_zone_test_context.ok_vinyldns_client
-    zone = shared_zone_test_context.parent_zone
+    zone_id = shared_zone_test_context.parent_zone["id"]
+    zone_name = shared_zone_test_context.parent_zone["name"]
 
     apex_a_record = {
-        'zoneId': zone['id'],
-        'name': zone['name'].rstrip('.'),
-        'type': 'A',
-        'ttl': 500,
-        'records': [{'address': '127.0.0.1'}]
+        "zoneId": zone_id,
+        "name": zone_name.rstrip("."),
+        "type": "A",
+        "ttl": 500,
+        "records": [{"address": "127.0.0.1"}]
     }
     apex_a_rs = None
     try:
         apex_a_response = client.create_recordset(apex_a_record, status=202)
-        apex_a_rs = client.wait_until_recordset_change_status(apex_a_response, 'Complete')['recordSet']
-        assert_that(apex_a_rs['name'], is_(apex_a_record['name'] + '.'))
+        apex_a_rs = client.wait_until_recordset_change_status(apex_a_response, "Complete")["recordSet"]
+        assert_that(apex_a_rs["name"], is_(apex_a_record["name"] + "."))
     finally:
         if apex_a_rs:
-            delete_result = client.delete_recordset(apex_a_rs['zoneId'], apex_a_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(apex_a_rs["zoneId"], apex_a_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")
 
 
 @pytest.mark.serial
@@ -639,25 +592,26 @@ def test_create_dotted_a_record_apex_with_trailing_dot_succeeds(shared_zone_test
     """
     client = shared_zone_test_context.ok_vinyldns_client
-    zone = shared_zone_test_context.parent_zone
+    zone_id = shared_zone_test_context.parent_zone["id"]
+    zone_name = shared_zone_test_context.parent_zone["name"]
     apex_a_record = {
-        'zoneId': zone['id'],
-        'name': zone['name'],
-        'type': 'A',
-        'ttl': 500,
-        'records': [{'address': '127.0.0.1'}]
+        "zoneId": zone_id,
+        "name": zone_name,
+        "type": "A",
+        "ttl": 500,
+        "records": [{"address": "127.0.0.1"}]
     }
     apex_a_rs = None
     try:
         apex_a_response = client.create_recordset(apex_a_record, status=202)
-        apex_a_rs = client.wait_until_recordset_change_status(apex_a_response, 'Complete')['recordSet']
-        assert_that(apex_a_rs['name'], is_(apex_a_record['name']))
+        apex_a_rs = client.wait_until_recordset_change_status(apex_a_response, "Complete")["recordSet"]
+        assert_that(apex_a_rs["name"], is_(apex_a_record["name"]))
     finally:
         if apex_a_rs:
-            delete_result = client.delete_recordset(apex_a_rs['zoneId'], apex_a_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(apex_a_rs["zoneId"], apex_a_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")
 
 
 def test_create_dotted_cname_record_fails(shared_zone_test_context):
@@ -665,14 +619,13 @@ def test_create_dotted_cname_record_fails(shared_zone_test_context):
     Test that creating a CNAME record set with dotted host record name returns an error.
     """
     client = shared_zone_test_context.ok_vinyldns_client
-    zone = shared_zone_test_context.parent_zone
 
     apex_cname_rs = {
-        'zoneId': zone['id'],
-        'name': 'dot.ted',
-        'type': 'CNAME',
-        'ttl': 500,
-        'records': [{'cname': 'foo.bar.'}]
+        "zoneId": shared_zone_test_context.parent_zone["id"],
+        "name": "dot.ted",
+        "type": "CNAME",
+        "ttl": 500,
+        "records": [{"cname": "foo.bar."}]
     }
 
     error = client.create_recordset(apex_cname_rs, status=422)
@@ -687,21 +640,21 @@ def test_create_cname_with_multiple_records(shared_zone_test_context):
     client = shared_zone_test_context.ok_vinyldns_client
 
     new_rs = {
-        'zoneId': shared_zone_test_context.system_test_zone['id'],
-        'name': 'test_create_cname_with_multiple_records',
-        'type': 'CNAME',
-        'ttl': 500,
-        'records': [
+        "zoneId": shared_zone_test_context.system_test_zone["id"],
+        "name": "test_create_cname_with_multiple_records",
+        "type": "CNAME",
+        "ttl": 500,
+        "records": [
             {
-                'cname': 'cname1.com'
+                "cname": "cname1.com"
             },
             {
-                'cname': 'cname2.com'
+                "cname": "cname2.com"
             }
         ]
     }
 
-    errors = client.create_recordset(new_rs, status=400)['errors']
+    errors = client.create_recordset(new_rs, status=400)["errors"]
     assert_that(errors[0], is_("CNAME record sets cannot contain multiple records"))
@@ -710,14 +663,14 @@ def test_create_cname_record_apex_fails(shared_zone_test_context):
     Test that creating a CNAME record set with record name matching zone name returns an error.
     """
     client = shared_zone_test_context.ok_vinyldns_client
-    zone = shared_zone_test_context.parent_zone
-
+    zone_id = shared_zone_test_context.parent_zone["id"]
+    zone_name = shared_zone_test_context.parent_zone["name"]
     apex_cname_rs = {
-        'zoneId': zone['id'],
-        'name': zone['name'].rstrip('.'),
-        'type': 'CNAME',
-        'ttl': 500,
-        'records': [{'cname': 'foo.bar.'}]
+        "zoneId": zone_id,
+        "name": zone_name.rstrip("."),
+        "type": "CNAME",
+        "ttl": 500,
+        "records": [{"cname": "foo.bar."}]
     }
 
     error = client.create_recordset(apex_cname_rs, status=422)
@@ -726,18 +679,18 @@ def test_create_cname_record_apex_fails(shared_zone_test_context):
 
 def test_create_cname_pointing_to_origin_symbol_fails(shared_zone_test_context):
     """
-    Test that creating a CNAME record set with name '@' fails
+    Test that creating a CNAME record set with name "@" fails
     """
     client = shared_zone_test_context.ok_vinyldns_client
 
     new_rs = {
-        'zoneId': shared_zone_test_context.system_test_zone['id'],
-        'name': '@',
-        'type': 'CNAME',
-        'ttl': 500,
-        'records': [
+        "zoneId": shared_zone_test_context.system_test_zone["id"],
+        "name": "@",
+        "type": "CNAME",
+        "ttl": 500,
+        "records": [
             {
-                'cname': 'cname.'
+                "cname": "cname."
             }
         ]
     }
@@ -753,25 +706,25 @@ def test_create_cname_with_existing_record_with_name_fails(shared_zone_test_cont
     client = shared_zone_test_context.ok_vinyldns_client
 
     a_rs = {
-        'zoneId': shared_zone_test_context.system_test_zone['id'],
-        'name': 'duplicate-test-name',
-        'type': 'A',
-        'ttl': 500,
-        'records': [
+        "zoneId": shared_zone_test_context.system_test_zone["id"],
+        "name": "duplicate-test-name",
+        "type": "A",
+        "ttl": 500,
+        "records": [
             {
-                'address': '10.1.1.1'
+                "address": "10.1.1.1"
             }
         ]
     }
 
     cname_rs = {
-        'zoneId': shared_zone_test_context.system_test_zone['id'],
-        'name': 'duplicate-test-name',
-        'type': 'CNAME',
-        'ttl': 500,
-        'records': [
+        "zoneId": shared_zone_test_context.system_test_zone["id"],
+        "name": "duplicate-test-name",
+        "type": "CNAME",
+        "ttl": 500,
+        "records": [
             {
-                'cname': 'cname1.com'
+                "cname": "cname1.com"
             }
        ]
    }
@@ -779,16 +732,16 @@ def test_create_cname_with_existing_record_with_name_fails(shared_zone_test_cont
     a_record = None
     try:
         a_create = client.create_recordset(a_rs, status=202)
-        a_record = client.wait_until_recordset_change_status(a_create, 'Complete')['recordSet']
+        a_record = client.wait_until_recordset_change_status(a_create, "Complete")["recordSet"]
 
         error = client.create_recordset(cname_rs, status=409)
         assert_that(error, is_(
-            'RecordSet with name duplicate-test-name already exists in zone system-test., CNAME record cannot use duplicate name'))
+            "RecordSet with name duplicate-test-name already exists in zone system-test., CNAME record cannot use duplicate name"))
 
     finally:
         if a_record:
-            delete_result = client.delete_recordset(a_record['zoneId'], a_record['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(a_record["zoneId"], a_record["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")
 
 
 def test_create_record_with_existing_cname_fails(shared_zone_test_context):
@@ -798,25 +751,25 @@ def test_create_record_with_existing_cname_fails(shared_zone_test_context):
     client = shared_zone_test_context.ok_vinyldns_client
 
     cname_rs = {
-        'zoneId': shared_zone_test_context.system_test_zone['id'],
-        'name': 'duplicate-test-name',
-        'type': 'CNAME',
-        'ttl': 500,
-        'records': [
+        "zoneId": shared_zone_test_context.system_test_zone["id"],
+        "name": "duplicate-test-name",
+        "type": "CNAME",
+        "ttl": 500,
+        "records": [
             {
-                'cname': 'cname1.com'
+                "cname": "cname1.com"
             }
         ]
     }
 
     a_rs = {
-        'zoneId': shared_zone_test_context.system_test_zone['id'],
-        'name': 'duplicate-test-name',
-        'type': 'A',
-        'ttl': 500,
-        'records': [
+        "zoneId": shared_zone_test_context.system_test_zone["id"],
+        "name": "duplicate-test-name",
+        "type": "A",
+        "ttl": 500,
+        "records": [
             {
-                'address': '10.1.1.1'
+                "address": "10.1.1.1"
             }
        ]
    }
@@ -824,16 +777,16 @@ def test_create_record_with_existing_cname_fails(shared_zone_test_context):
     cname_record = None
     try:
         cname_create = client.create_recordset(cname_rs, status=202)
-        cname_record = client.wait_until_recordset_change_status(cname_create, 'Complete')['recordSet']
+        cname_record = client.wait_until_recordset_change_status(cname_create, "Complete")["recordSet"]
 
         error = client.create_recordset(a_rs, status=409)
         assert_that(error,
-                    is_('RecordSet with name duplicate-test-name and type CNAME already exists in zone system-test.'))
+                    is_("RecordSet with name duplicate-test-name and type CNAME already exists in zone system-test."))
 
     finally:
         if cname_record:
-            delete_result = client.delete_recordset(cname_record['zoneId'], cname_record['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(cname_record["zoneId"], cname_record["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")
 
 
 def test_create_cname_forces_record_to_be_absolute(shared_zone_test_context):
@@ -843,13 +796,13 @@ def test_create_cname_forces_record_to_be_absolute(shared_zone_test_context):
     client = shared_zone_test_context.ok_vinyldns_client
 
     new_rs = {
-        'zoneId': shared_zone_test_context.system_test_zone['id'],
-        'name': 'test_create_cname_with_multiple_records',
-        'type': 'CNAME',
-        'ttl': 500,
-        'records': [
+        "zoneId": shared_zone_test_context.system_test_zone["id"],
+        "name": "test_create_cname_with_multiple_records",
+        "type": "CNAME",
+        "ttl": 500,
+        "records": [
             {
-                'cname': 'cname1.com'
+                "cname": "cname1.com"
             }
         ]
     }
@@ -857,13 +810,13 @@ def test_create_cname_forces_record_to_be_absolute(shared_zone_test_context):
     result_rs = None
     try:
         result = client.create_recordset(new_rs, status=202)
-        result_rs = result['recordSet']
-        assert_that(result_rs['records'], is_([{'cname': 'cname1.com.'}]))
+        result_rs = result["recordSet"]
+        assert_that(result_rs["records"], is_([{"cname": "cname1.com."}]))
     finally:
         if result_rs:
-            result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet']
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")
 
 
 def test_create_cname_relative_fails(shared_zone_test_context):
@@ -873,13 +826,13 @@ def test_create_cname_relative_fails(shared_zone_test_context):
     client = shared_zone_test_context.ok_vinyldns_client
 
     new_rs = {
-        'zoneId': shared_zone_test_context.system_test_zone['id'],
-        'name': 'test_create_cname_relative',
-        'type': 'CNAME',
-        'ttl': 500,
-        'records': [
+        "zoneId": shared_zone_test_context.system_test_zone["id"],
+        "name": "test_create_cname_relative",
+        "type": "CNAME",
+        "ttl": 500,
+        "records": [
             {
-                'cname': 'relative'
+                "cname": "relative"
            }
        ]
    }
@@ -894,13 +847,13 @@ def test_create_cname_does_not_change_absolute_record(shared_zone_test_context):
     client = shared_zone_test_context.ok_vinyldns_client
 
     new_rs = {
-        'zoneId': shared_zone_test_context.system_test_zone['id'],
-        'name': 'test_create_cname_with_multiple_records',
-        'type': 'CNAME',
-        'ttl': 500,
-        'records': [
+        "zoneId": shared_zone_test_context.system_test_zone["id"],
+        "name": "test_create_cname_with_multiple_records",
+        "type": "CNAME",
+        "ttl": 500,
+        "records": [
             {
-                'cname': 'cname1.'
+                "cname": "cname1."
            }
        ]
    }
@@ -908,13 +861,13 @@ def test_create_cname_does_not_change_absolute_record(shared_zone_test_context):
 
     try:
         result = client.create_recordset(new_rs, status=202)
-        result_rs = result['recordSet']
-        assert_that(result_rs['records'], is_([{'cname': 'cname1.'}]))
+        result_rs = result["recordSet"]
+        assert_that(result_rs["records"], is_([{"cname": "cname1."}]))
     finally:
         if result_rs:
-            result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet']
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")
 
 
 def test_create_mx_forces_record_to_be_absolute(shared_zone_test_context):
@@ -924,27 +877,27 @@ def test_create_mx_forces_record_to_be_absolute(shared_zone_test_context):
     client = shared_zone_test_context.ok_vinyldns_client
 
     new_rs = {
-        'zoneId': shared_zone_test_context.system_test_zone['id'],
-        'name': 'mx_not_absolute',
-        'type': 'MX',
-        'ttl': 500,
-        'records': [
+        "zoneId": shared_zone_test_context.system_test_zone["id"],
+        "name": "mx_not_absolute",
+        "type": "MX",
+        "ttl": 500,
+        "records": [
             {
-                'preference': 1,
-                'exchange': 'foo'
+                "preference": 1,
+                "exchange": "foo"
            }
        ]
    }
 
     try:
         result = client.create_recordset(new_rs, status=202)
-        result_rs = result['recordSet']
-        assert_that(result_rs['records'], is_([{'preference': 1, 'exchange': 'foo.'}]))
+        result_rs = result["recordSet"]
+        assert_that(result_rs["records"], is_([{"preference": 1, "exchange": "foo."}]))
     finally:
         if result_rs:
-            result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet']
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")
 
 
 def test_create_mx_does_not_change_if_absolute(shared_zone_test_context):
@@ -954,27 +907,27 @@ def test_create_mx_does_not_change_if_absolute(shared_zone_test_context):
     client = shared_zone_test_context.ok_vinyldns_client
 
     new_rs = {
-        'zoneId': shared_zone_test_context.system_test_zone['id'],
-        'name': 'mx_absolute',
-        'type': 'MX',
-        'ttl': 500,
-        'records': [
+        "zoneId": shared_zone_test_context.system_test_zone["id"],
+        "name": "mx_absolute",
+        "type": "MX",
+        "ttl": 500,
+        "records": [
             {
-                'preference': 1,
-                'exchange': 'foo.'
+                "preference": 1,
+                "exchange": "foo."
            }
        ]
    }
 
     try:
         result = client.create_recordset(new_rs, status=202)
-        result_rs = result['recordSet']
-        assert_that(result_rs['records'], is_([{'preference': 1, 'exchange': 'foo.'}]))
+        result_rs = result["recordSet"]
+        assert_that(result_rs["records"], is_([{"preference": 1, "exchange": "foo."}]))
     finally:
         if result_rs:
-            result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet']
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")
 
 
 def test_create_ptr_forces_record_to_be_absolute(shared_zone_test_context):
@@ -982,29 +935,27 @@ def test_create_ptr_forces_record_to_be_absolute(shared_zone_test_context):
     Test that ptr record data is made absolute after being created
     """
     client = shared_zone_test_context.ok_vinyldns_client
-    reverse4_zone = shared_zone_test_context.ip4_reverse_zone
-
     new_rs = {
-        'zoneId': reverse4_zone['id'],
-        'name': '30.30',
-        'type': 'PTR',
-        'ttl': 500,
-        'records': [
+        "zoneId": shared_zone_test_context.ip4_reverse_zone["id"],
+        "name": "30.30",
+        "type": "PTR",
+        "ttl": 500,
+        "records": [
             {
-                'ptrdname': 'foo'
+                "ptrdname": "foo"
            }
        ]
    }
 
     try:
         result = client.create_recordset(new_rs, status=202)
-        result_rs = result['recordSet']
-        assert_that(result_rs['records'], is_([{'ptrdname': 'foo.'}]))
+        result_rs = result["recordSet"]
+        assert_that(result_rs["records"], is_([{"ptrdname": "foo."}]))
     finally:
         if result_rs:
-            result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet']
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")
 
 
 def test_create_ptr_does_not_change_if_absolute(shared_zone_test_context):
@@ -1012,29 +963,28 @@ def test_create_ptr_does_not_change_if_absolute(shared_zone_test_context):
     Test that ptr record data is unchanged if already absolute
     """
     client = shared_zone_test_context.ok_vinyldns_client
-    reverse4_zone = shared_zone_test_context.ip4_reverse_zone
 
     new_rs = {
-        'zoneId': reverse4_zone['id'],
-        'name': '30.30',
-        'type': 'PTR',
-        'ttl': 500,
-        'records': [
+        "zoneId": shared_zone_test_context.ip4_reverse_zone["id"],
+        "name": "30.30",
+        "type": "PTR",
+        "ttl": 500,
+        "records": [
             {
-                'ptrdname': 'foo.'
+                "ptrdname": "foo."
            }
        ]
    }
 
     try:
         result = client.create_recordset(new_rs, status=202)
-        result_rs = result['recordSet']
-        assert_that(result_rs['records'], is_([{'ptrdname': 'foo.'}]))
+        result_rs = result["recordSet"]
+        assert_that(result_rs["records"], is_([{"ptrdname": "foo."}]))
     finally:
         if result_rs:
-            result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet']
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")
 
 
 def test_create_srv_forces_record_to_be_absolute(shared_zone_test_context):
@@ -1044,29 +994,29 @@ def test_create_srv_forces_record_to_be_absolute(shared_zone_test_context):
     client = shared_zone_test_context.ok_vinyldns_client
 
     new_rs = {
-        'zoneId': shared_zone_test_context.system_test_zone['id'],
-        'name': 'srv_not_absolute',
-        'type': 'SRV',
-        'ttl': 500,
-        'records': [
+        "zoneId": shared_zone_test_context.system_test_zone["id"],
+        "name": "srv_not_absolute",
+        "type": "SRV",
+        "ttl": 500,
+        "records": [
             {
-                'priority': 1,
-                'weight': 1,
-                'port': 1,
-                'target': 'foo'
+                "priority": 1,
+                "weight": 1,
+                "port": 1,
+                "target": "foo"
            }
        ]
    }
 
     try:
         result = client.create_recordset(new_rs, status=202)
-        result_rs = result['recordSet']
-        assert_that(result_rs['records'], is_([{'priority': 1, 'weight': 1, 'port': 1, 'target': 'foo.'}]))
+        result_rs = result["recordSet"]
+        assert_that(result_rs["records"], is_([{"priority": 1, "weight": 1, "port": 1, "target": "foo."}]))
     finally:
         if result_rs:
-            result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet']
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")
 
 
 def test_create_srv_does_not_change_if_absolute(shared_zone_test_context):
@@ -1076,32 +1026,32 @@ def test_create_srv_does_not_change_if_absolute(shared_zone_test_context):
     client = shared_zone_test_context.ok_vinyldns_client
 
     new_rs = {
-        'zoneId': shared_zone_test_context.system_test_zone['id'],
-        'name': 'srv_absolute',
-        'type': 'SRV',
-        'ttl': 500,
-        'records': [
+        "zoneId": shared_zone_test_context.system_test_zone["id"],
+        "name": "srv_absolute",
+        "type": "SRV",
+        "ttl": 500,
+        "records": [
             {
-                'priority': 1,
-                'weight': 1,
-                'port': 1,
-                'target': 'foo.'
+                "priority": 1,
+                "weight": 1,
+                "port": 1,
+                "target": "foo."
            }
        ]
    }
 
     try:
         result = client.create_recordset(new_rs, status=202)
-        result_rs = result['recordSet']
-        assert_that(result_rs['records'], is_([{'priority': 1, 'weight': 1, 'port': 1, 'target': 'foo.'}]))
+        result_rs = result["recordSet"]
+        assert_that(result_rs["records"], is_([{"priority": 1, "weight": 1, "port": 1, "target": "foo."}]))
     finally:
         if result_rs:
-            result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet']
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")
 
 
-@pytest.mark.parametrize('record_name,test_rs', TestData.FORWARD_RECORDS)
+@pytest.mark.parametrize("record_name,test_rs", TestData.FORWARD_RECORDS)
 def test_create_recordset_forward_record_types(shared_zone_test_context, record_name, test_rs):
     """
     Test creating a new record set in an existing zone
@@ -1110,31 +1060,31 @@ def test_create_recordset_forward_record_types(shared_zone_test_context, record_
     result_rs = None
 
     try:
-        new_rs = dict(test_rs, zoneId=shared_zone_test_context.system_test_zone['id'])
-        new_rs['name'] = generate_record_name() + test_rs['type']
+        new_rs = dict(test_rs, zoneId=shared_zone_test_context.system_test_zone["id"])
+        new_rs["name"] = generate_record_name() + test_rs["type"]
 
         result = client.create_recordset(new_rs, status=202)
-        assert_that(result['status'], is_('Pending'))
-        print str(result)
+        assert_that(result["status"], is_("Pending"))
+        print(str(result))
 
-        result_rs = result['recordSet']
+        result_rs = result["recordSet"]
         verify_recordset(result_rs, new_rs)
 
-        records = result_rs['records']
+        records = result_rs["records"]
 
-        for record in new_rs['records']:
+        for record in new_rs["records"]:
             assert_that(records, has_item(has_entries(record)))
 
-        result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet']
+        result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]
     finally:
         if result_rs:
-            result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404))
+            result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=(202, 404))
             if result:
-                client.wait_until_recordset_change_status(result, 'Complete')
+                client.wait_until_recordset_change_status(result, "Complete")
 
 
 @pytest.mark.serial
-@pytest.mark.parametrize('record_name,test_rs', TestData.REVERSE_RECORDS)
+@pytest.mark.parametrize("record_name,test_rs", TestData.REVERSE_RECORDS)
 def test_reverse_create_recordset_reverse_record_types(shared_zone_test_context, record_name, test_rs):
     """
     Test creating a new record set in an existing reverse zone
@@ -1143,26 +1093,26 @@ def test_reverse_create_recordset_reverse_record_types(shared_zone_test_context,
     result_rs = None
 
     try:
-        new_rs = dict(test_rs, zoneId=shared_zone_test_context.ip4_reverse_zone['id'])
+        new_rs = dict(test_rs, zoneId=shared_zone_test_context.ip4_reverse_zone["id"])
 
         result = client.create_recordset(new_rs, status=202)
-        assert_that(result['status'], is_('Pending'))
-        print str(result)
+        assert_that(result["status"], is_("Pending"))
+        print(str(result))
 
-        result_rs = result['recordSet']
+        result_rs = result["recordSet"]
         verify_recordset(result_rs, new_rs)
 
-        records = result_rs['records']
+        records = result_rs["records"]
 
-        for record in new_rs['records']:
+        for record in new_rs["records"]:
             assert_that(records, has_item(has_entries(record)))
 
-        result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet']
+        result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]
     finally:
         if result_rs:
-            result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404))
+            result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=(202, 404))
             if result:
-                client.wait_until_recordset_change_status(result, 'Complete')
+                client.wait_until_recordset_change_status(result, "Complete")
 
 
 def test_create_invalid_length_recordset_name(shared_zone_test_context):
@@ -1172,13 +1122,13 @@ def test_create_invalid_length_recordset_name(shared_zone_test_context):
     client = shared_zone_test_context.ok_vinyldns_client
 
     new_rs = {
-        'zoneId': shared_zone_test_context.system_test_zone['id'],
-        'name': 'a' * 256,
-        'type': 'A',
-        'ttl': 100,
-        'records': [
+        "zoneId": shared_zone_test_context.system_test_zone["id"],
+        "name": "a" * 256,
+        "type": "A",
+        "ttl": 100,
+        "records": [
             {
-                'address': '10.1.1.1'
+                "address": "10.1.1.1"
            }
        ]
    }
@@ -1192,13 +1142,13 @@ def test_create_recordset_name_with_spaces(shared_zone_test_context):
     client = shared_zone_test_context.ok_vinyldns_client
 
     new_rs = {
-        'zoneId': shared_zone_test_context.system_test_zone['id'],
-        'name': 'a a',
-        'type': 'A',
-        'ttl': 100,
-        'records': [
+        "zoneId": shared_zone_test_context.system_test_zone["id"],
+        "name": "a a",
+        "type": "A",
+        "ttl": 100,
+        "records": [
             {
-                'address': '10.1.1.1'
+                "address": "10.1.1.1"
            }
        ]
    }
@@ -1211,13 +1161,13 @@ def test_user_cannot_create_record_in_unowned_zone(shared_zone_test_context):
     """
     client = shared_zone_test_context.ok_vinyldns_client
     new_record_set = {
-        'zoneId': shared_zone_test_context.dummy_zone['id'],
-        'name': 'test_user_cannot_create_record_in_unowned_zone',
-        'type': 'A',
-        'ttl': 100,
-        'records': [
+        "zoneId": shared_zone_test_context.dummy_zone["id"],
+        "name": "test_user_cannot_create_record_in_unowned_zone",
+        "type": "A",
+        "ttl": 100,
+        "records": [
             {
-                'address': '10.10.10.10'
+                "address": "10.10.10.10"
            }
        ]
    }
@@ -1230,16 +1180,16 @@ def test_create_recordset_no_authorization(shared_zone_test_context):
     """
     client = shared_zone_test_context.ok_vinyldns_client
     new_rs = {
-        'zoneId': shared_zone_test_context.ok_zone['id'],
-        'name': 'test_create_recordset_no_authorization',
-        'type': 'A',
-        'ttl': 100,
-        'records': [
+        "zoneId": shared_zone_test_context.ok_zone["id"],
+        "name": "test_create_recordset_no_authorization",
+        "type": "A",
+        "ttl": 100,
+        "records": [
             {
-                'address': '10.1.1.1'
+                "address": "10.1.1.1"
             },
             {
-                'address': '10.2.2.2'
+                "address": "10.2.2.2"
            }
        ]
    }
@@ -1251,50 +1201,50 @@ def test_create_ipv4_ptr_recordset_with_verify(shared_zone_test_context):
     Test creating a new IPv4 PTR recordset in an existing IPv4 reverse lookup zone
     """
     client = shared_zone_test_context.ok_vinyldns_client
-    reverse4_zone = shared_zone_test_context.ip4_reverse_zone
+    reverse4_zone_id = shared_zone_test_context.ip4_reverse_zone["id"]
     result_rs = None
     try:
         new_rs = {
-            'zoneId': reverse4_zone['id'],
-            'name': '30.0',
-            'type': 'PTR',
-            'ttl': 100,
-            'records': [
+            "zoneId": reverse4_zone_id,
+            "name": "30.0",
+            "type": "PTR",
+            "ttl": 100,
+            "records": [
                 {
-                    'ptrdname': 'ftp.vinyldns.'
+                    "ptrdname": "ftp.vinyldns."
                }
            ]
        }
-        print "\r\nCreating recordset in zone " + str(reverse4_zone) + "\r\n"
+        print("\r\nCreating recordset in zone ip4_reverse_zone\r\n")
         result = client.create_recordset(new_rs, status=202)
-        print str(result)
+        print(str(result))
 
-        assert_that(result['changeType'], is_('Create'))
-        assert_that(result['status'], is_('Pending'))
-        assert_that(result['created'], is_not(none()))
-        assert_that(result['userId'], is_not(none()))
+        assert_that(result["changeType"], is_("Create"))
+        assert_that(result["status"], is_("Pending"))
+        assert_that(result["created"], is_not(none()))
+        assert_that(result["userId"], is_not(none()))
 
-        result_rs = result['recordSet']
-        result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet']
-        print "\r\n\r\n!!!recordset is active! Verifying..."
+        result_rs = result["recordSet"]
+        result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]
+        print("\r\n\r\n!!!recordset is active! Verifying...")
 
         verify_recordset(result_rs, new_rs)
-        print "\r\n\r\n!!!recordset verified..."
+        print("\r\n\r\n!!!recordset verified...")
 
-        records = result_rs['records']
-        assert_that(records[0]['ptrdname'], is_('ftp.vinyldns.'))
+        records = result_rs["records"]
+        assert_that(records[0]["ptrdname"], is_("ftp.vinyldns."))
 
-        print "\r\n\r\n!!!verifying recordset in dns backend"
+        print("\r\n\r\n!!!verifying recordset in dns backend")
         # verify that the record exists in the backend dns server
-        answers = dns_resolve(reverse4_zone, result_rs['name'], result_rs['type'])
+        answers = dns_resolve(shared_zone_test_context.ip4_reverse_zone, result_rs["name"], result_rs["type"])
         rdata_strings = rdata(answers)
         assert_that(answers, has_length(1))
-        assert_that(rdata_strings[0], is_('ftp.vinyldns.'))
+        assert_that(rdata_strings[0], is_("ftp.vinyldns."))
     finally:
         if result_rs:
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")
 
 
 def test_create_ipv4_ptr_recordset_in_forward_zone_fails(shared_zone_test_context):
@@ -1303,13 +1253,13 @@ def test_create_ipv4_ptr_recordset_in_forward_zone_fails(shared_zone_test_contex
     """
     client = shared_zone_test_context.ok_vinyldns_client
     new_rs = {
-        'zoneId': shared_zone_test_context.ok_zone['id'],
-        'name': '35.0',
-        'type': 'PTR',
-        'ttl': 100,
-        'records': [
+        "zoneId": shared_zone_test_context.ok_zone["id"],
+        "name": "35.0",
+        "type": "PTR",
+        "ttl": 100,
+        "records": [
             {
-                'ptrdname': 'ftp.vinyldns.'
+                "ptrdname": "ftp.vinyldns."
            }
        ]
    }
@@ -1322,16 +1272,16 @@ def test_create_address_recordset_in_ipv4_reverse_zone_fails(shared_zone_test_co
     """
     client = shared_zone_test_context.ok_vinyldns_client
     new_rs = {
-        'zoneId': shared_zone_test_context.ip4_reverse_zone['id'],
-        'name': 'test_create_address_recordset_in_ipv4_reverse_zone_fails',
-        'type': 'A',
-        'ttl': 100,
-        'records': [
+        "zoneId": shared_zone_test_context.ip4_reverse_zone["id"],
+        "name": "test_create_address_recordset_in_ipv4_reverse_zone_fails",
+        "type": "A",
+        "ttl": 100,
+        "records": [
             {
-                'address': '10.1.1.1'
+                "address": "10.1.1.1"
             },
             {
-                'address': '10.2.2.2'
+                "address": "10.2.2.2"
           }
       ]
   }
@@ -1343,47 +1293,46 @@ def test_create_ipv6_ptr_recordset(shared_zone_test_context):
     Test creating a new PTR record set in an existing IPv6 reverse lookup zone
     """
     client = shared_zone_test_context.ok_vinyldns_client
-    reverse6_zone = shared_zone_test_context.ip6_reverse_zone
     result_rs = None
     try:
         new_rs = {
-            'zoneId': reverse6_zone['id'],
-            'name': '0.6.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0',
-            'type': 'PTR',
-            'ttl': 100,
-            'records': [
+            "zoneId": shared_zone_test_context.ip6_reverse_zone["id"],
+            "name": "0.6.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0",
+            "type": "PTR",
+            "ttl": 100,
+            "records": [
                 {
-                    'ptrdname': 'ftp.vinyldns.'
+                    "ptrdname": "ftp.vinyldns."
               }
          ]
      }
         result = client.create_recordset(new_rs, status=202)
-        print str(result)
+        print(str(result))
 
-        assert_that(result['changeType'], is_('Create'))
-        assert_that(result['status'], is_('Pending'))
-        assert_that(result['created'], is_not(none()))
-        assert_that(result['userId'], is_not(none()))
+        assert_that(result["changeType"], is_("Create"))
+        assert_that(result["status"], is_("Pending"))
+        assert_that(result["created"], is_not(none()))
+        assert_that(result["userId"], is_not(none()))
 
-        result_rs = result['recordSet']
-        result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet']
-        print "\r\n\r\n!!!recordset is active! Verifying..."
+        result_rs = result["recordSet"]
+        result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]
+        print("\r\n\r\n!!!recordset is active! Verifying...")
 
         verify_recordset(result_rs, new_rs)
-        print "\r\n\r\n!!!recordset verified..."
+        print("\r\n\r\n!!!recordset verified...")
 
-        records = result_rs['records']
-        assert_that(records[0]['ptrdname'], is_('ftp.vinyldns.'))
+        records = result_rs["records"]
+        assert_that(records[0]["ptrdname"], is_("ftp.vinyldns."))
 
-        print "\r\n\r\n!!!verifying recordset in dns backend"
-        answers = dns_resolve(reverse6_zone, result_rs['name'], result_rs['type'])
+        print("\r\n\r\n!!!verifying recordset in dns backend")
+        answers = dns_resolve(shared_zone_test_context.ip6_reverse_zone, result_rs["name"], result_rs["type"])
         rdata_strings = rdata(answers)
         assert_that(answers, has_length(1))
-        assert_that(rdata_strings[0], is_('ftp.vinyldns.'))
+        assert_that(rdata_strings[0], is_("ftp.vinyldns."))
     finally:
         if result_rs:
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")
 
 
 def test_create_ipv6_ptr_recordset_in_forward_zone_fails(shared_zone_test_context):
@@ -1392,13 +1341,13 @@ def test_create_ipv6_ptr_recordset_in_forward_zone_fails(shared_zone_test_contex
     """
     client = shared_zone_test_context.ok_vinyldns_client
     new_rs = {
-        'zoneId': shared_zone_test_context.ok_zone['id'],
-        'name': '3.6.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0',
-        'type': 'PTR',
-        'ttl': 100,
-        'records': [
+        "zoneId": shared_zone_test_context.ok_zone["id"],
+        "name": "3.6.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0",
+        "type": "PTR",
+        "ttl": 100,
+        "records": [
             {
-                'ptrdname': 'ftp.vinyldns.'
+                "ptrdname": "ftp.vinyldns."
} ] } @@ -1411,16 +1360,16 @@ def test_create_address_recordset_in_ipv6_reverse_zone_fails(shared_zone_test_co """ client = shared_zone_test_context.ok_vinyldns_client new_rs = { - 'zoneId': shared_zone_test_context.ip6_reverse_zone['id'], - 'name': 'test_create_address_recordset_in_ipv6_reverse_zone_fails', - 'type': 'AAAA', - 'ttl': 100, - 'records': [ + "zoneId": shared_zone_test_context.ip6_reverse_zone["id"], + "name": "test_create_address_recordset_in_ipv6_reverse_zone_fails", + "type": "AAAA", + "ttl": 100, + "records": [ { - 'address': 'fd69:27cc:fe91::60' + "address": "fd69:27cc:fe91::60" }, { - 'address': 'fd69:27cc:fe91:1:2:3:4:61' + "address": "fd69:27cc:fe91:1:2:3:4:61" } ] } @@ -1433,13 +1382,13 @@ def test_create_invalid_ipv6_ptr_recordset(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client new_rs = { - 'zoneId': shared_zone_test_context.ip6_reverse_zone['id'], - 'name': '0.6.0.0', - 'type': 'PTR', - 'ttl': 100, - 'records': [ + "zoneId": shared_zone_test_context.ip6_reverse_zone["id"], + "name": "0.6.0.0", + "type": "PTR", + "ttl": 100, + "records": [ { - 'ptrdname': 'ftp.vinyldns.' + "ptrdname": "ftp.vinyldns." 
} ] } @@ -1451,111 +1400,112 @@ def test_at_create_recordset(shared_zone_test_context): Test creating a new record set with name @ in an existing zone """ client = shared_zone_test_context.ok_vinyldns_client - ok_zone = shared_zone_test_context.ok_zone + ok_zone_id = shared_zone_test_context.ok_zone["id"] + ok_zone_name = shared_zone_test_context.ok_zone["name"] result_rs = None try: new_rs = { - 'zoneId': ok_zone['id'], - 'name': '@', - 'type': 'TXT', - 'ttl': 100, - 'records': [ + "zoneId": ok_zone_id, + "name": "@", + "type": "TXT", + "ttl": 100, + "records": [ { - 'text': 'someText' + "text": "someText" } ] } - print "\r\nCreating recordset in zone " + str(ok_zone) + "\r\n" + print("\r\nCreating recordset in zone 'ok'\r\n") result = client.create_recordset(new_rs, status=202) - print str(result) + print(str(result)) - assert_that(result['changeType'], is_('Create')) - assert_that(result['status'], is_('Pending')) - assert_that(result['created'], is_not(none())) - assert_that(result['userId'], is_not(none())) + assert_that(result["changeType"], is_("Create")) + assert_that(result["status"], is_("Pending")) + assert_that(result["created"], is_not(none())) + assert_that(result["userId"], is_not(none())) - result_rs = result['recordSet'] - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] - print "\r\n\r\n!!!recordset is active! Verifying..." + result_rs = result["recordSet"] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] + print("\r\n\r\n!!!recordset is active! Verifying...") expected_rs = new_rs - expected_rs['name'] = ok_zone['name'] + expected_rs["name"] = ok_zone_name verify_recordset(result_rs, expected_rs) - print "\r\n\r\n!!!recordset verified..."
+ print("\r\n\r\n!!!recordset verified...") - records = result_rs['records'] + records = result_rs["records"] assert_that(records, has_length(1)) - assert_that(records[0]['text'], is_('someText')) + assert_that(records[0]["text"], is_("someText")) - print "\r\n\r\n!!!verifying recordset in dns backend" + print("\r\n\r\n!!!verifying recordset in dns backend") # verify that the record exists in the backend dns server - answers = dns_resolve(ok_zone, ok_zone['name'], result_rs['type']) + answers = dns_resolve(shared_zone_test_context.ok_zone, ok_zone_name, result_rs["type"]) rdata_strings = rdata(answers) assert_that(rdata_strings, has_length(1)) assert_that('"someText"', is_in(rdata_strings)) finally: if result_rs: - client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) - client.wait_until_recordset_deleted(result_rs['zoneId'], result_rs['id']) + client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=(202, 404)) + client.wait_until_recordset_deleted(result_rs["zoneId"], result_rs["id"]) def test_create_record_with_escape_characters_in_record_data_succeeds(shared_zone_test_context): """ - Test creating a new record set with escape characters (i.e. "" and \) in the record data + Test creating a new record set with escape characters (i.e. 
"" and \\) in the record data """ client = shared_zone_test_context.ok_vinyldns_client - ok_zone = shared_zone_test_context.ok_zone + ok_zone_id = shared_zone_test_context.ok_zone["id"] result_rs = None try: new_rs = { - 'zoneId': ok_zone['id'], - 'name': 'testing', - 'type': 'TXT', - 'ttl': 100, - 'records': [ + "zoneId": ok_zone_id, + "name": "testing", + "type": "TXT", + "ttl": 100, + "records": [ { - 'text': 'escaped\char"act"ers' + "text": 'escaped\\char"act"ers' } ] } - print "\r\nCreating recordset in zone " + str(ok_zone) + "\r\n" + print("\r\nCreating recordset in zone 'ok'\r\n") result = client.create_recordset(new_rs, status=202) - print str(result) + print(str(result)) - assert_that(result['changeType'], is_('Create')) - assert_that(result['status'], is_('Pending')) - assert_that(result['created'], is_not(none())) - assert_that(result['userId'], is_not(none())) + assert_that(result["changeType"], is_("Create")) + assert_that(result["status"], is_("Pending")) + assert_that(result["created"], is_not(none())) + assert_that(result["userId"], is_not(none())) - result_rs = result['recordSet'] - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] - print "\r\n\r\n!!!recordset is active! Verifying..." + result_rs = result["recordSet"] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] + print("\r\n\r\n!!!recordset is active! Verifying...") expected_rs = new_rs - expected_rs['name'] = 'testing' + expected_rs["name"] = "testing" verify_recordset(result_rs, expected_rs) - print "\r\n\r\n!!!recordset verified..."
+ print("\r\n\r\n!!!recordset verified...") - records = result_rs['records'] + records = result_rs["records"] assert_that(records, has_length(1)) - assert_that(records[0]['text'], is_('escaped\\char\"act\"ers')) + assert_that(records[0]["text"], is_('escaped\\char\"act\"ers')) - print "\r\n\r\n!!!verifying recordset in dns backend" + print("\r\n\r\n!!!verifying recordset in dns backend") # verify that the record exists in the backend dns server - answers = dns_resolve(ok_zone, 'testing', result_rs['type']) + answers = dns_resolve(shared_zone_test_context.ok_zone, "testing", result_rs["type"]) rdata_strings = rdata(answers) assert_that(rdata_strings, has_length(1)) assert_that('\"escapedchar\\"act\\"ers\"', is_in(rdata_strings)) finally: if result_rs: - client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) - client.wait_until_recordset_deleted(result_rs['zoneId'], result_rs['id']) + client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=(202, 404)) + client.wait_until_recordset_deleted(result_rs["zoneId"], result_rs["id"]) @pytest.mark.serial @@ -1566,45 +1516,45 @@ def test_create_record_with_existing_wildcard_succeeds(shared_zone_test_context) client = shared_zone_test_context.ok_vinyldns_client wildcard_rs = { - 'zoneId': shared_zone_test_context.system_test_zone['id'], - 'name': '*', - 'type': 'TXT', - 'ttl': 500, - 'records': [ + "zoneId": shared_zone_test_context.system_test_zone["id"], + "name": "*", + "type": "TXT", + "ttl": 500, + "records": [ { - 'text': 'wildcard func test 1' + "text": "wildcard func test 1" } ] } test_rs = { - 'zoneId': shared_zone_test_context.system_test_zone['id'], - 'name': 'create-record-with-existing-wildcard-succeeds', - 'type': 'TXT', - 'ttl': 500, - 'records': [ + "zoneId": shared_zone_test_context.system_test_zone["id"], + "name": "create-record-with-existing-wildcard-succeeds", + "type": "TXT", + "ttl": 500, + "records": [ { - 'text': 'wildcard this should be ok' + "text": "wildcard 
this should be ok" } ] } try: wildcard_create = client.create_recordset(wildcard_rs, status=202) - wildcard_rs = client.wait_until_recordset_change_status(wildcard_create, 'Complete')['recordSet'] + wildcard_rs = client.wait_until_recordset_change_status(wildcard_create, "Complete")["recordSet"] test_create = client.create_recordset(test_rs, status=202) - test_rs = client.wait_until_recordset_change_status(test_create, 'Complete')['recordSet'] + test_rs = client.wait_until_recordset_change_status(test_create, "Complete")["recordSet"] finally: try: - if 'id' in wildcard_rs: - delete_result = client.delete_recordset(wildcard_rs['zoneId'], wildcard_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + if "id" in wildcard_rs: + delete_result = client.delete_recordset(wildcard_rs["zoneId"], wildcard_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") finally: try: - if 'id' in test_rs: - delete_result = client.delete_recordset(test_rs['zoneId'], test_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + if "id" in test_rs: + delete_result = client.delete_recordset(test_rs["zoneId"], test_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") except: pass @@ -1616,26 +1566,26 @@ def test_create_record_with_existing_cname_wildcard_succeed(shared_zone_test_con """ client = shared_zone_test_context.ok_vinyldns_client - zone = shared_zone_test_context.system_test_zone + zone = shared_zone_test_context.system_test_zone - wildcard_rs = get_recordset_json(zone, '*', 'CNAME', [{'cname': 'cname2.'}]) + wildcard_rs = create_recordset(zone, "*", "CNAME", [{"cname": "cname2."}]) - test_rs = get_recordset_json(zone, 'new_record', 'A', [{'address': '10.1.1.1'}]) + test_rs = create_recordset(zone, "new_record", "A", [{"address": "10.1.1.1"}]) try: wildcard_create = client.create_recordset(wildcard_rs, status=202) -
wildcard_rs = client.wait_until_recordset_change_status(wildcard_create, 'Complete')['recordSet'] + wildcard_rs = client.wait_until_recordset_change_status(wildcard_create, "Complete")["recordSet"] test_create = client.create_recordset(test_rs, status=202) - test_rs = client.wait_until_recordset_change_status(test_create, 'Complete')['recordSet'] + test_rs = client.wait_until_recordset_change_status(test_create, "Complete")["recordSet"] finally: try: - delete_result = client.delete_recordset(wildcard_rs['zoneId'], wildcard_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(wildcard_rs["zoneId"], wildcard_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") finally: try: - delete_result = client.delete_recordset(test_rs['zoneId'], test_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(test_rs["zoneId"], test_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") except: pass @@ -1644,17 +1594,20 @@ def test_create_long_txt_record_succeeds(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.system_test_zone - record_data = 'a' * 64761 - long_txt_rs = get_recordset_json(zone, 'long-txt-record', 'TXT', [{'text': record_data}]) + + # Anything larger than 255 will test the limits of TXT, 4000 is the value used by R53 + # (https://aws.amazon.com/premiumsupport/knowledge-center/route-53-configure-long-spf-txt-records/) + record_data = "a" * 4000 + long_txt_rs = create_recordset(zone, "long-txt-record", "TXT", [{"text": record_data}]) try: rs_create = client.create_recordset(long_txt_rs, status=202) - rs = client.wait_until_recordset_change_status(rs_create, 'Complete')['recordSet'] - assert_that(rs['records'][0]['text'], is_(record_data)) + rs = 
client.wait_until_recordset_change_status(rs_create, "Complete")["recordSet"] + assert_that(rs["records"][0]["text"], is_(record_data)) finally: try: - delete_result = client.delete_recordset(rs['zoneId'], rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(rs["zoneId"], rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") except: pass @@ -1665,13 +1618,13 @@ def test_txt_dotted_host_create_succeeds(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client new_rs = { - 'zoneId': shared_zone_test_context.ok_zone['id'], - 'name': 'record-with.dot', - 'type': 'TXT', - 'ttl': 100, - 'records': [ + "zoneId": shared_zone_test_context.ok_zone["id"], + "name": "record-with.dot", + "type": "TXT", + "ttl": 100, + "records": [ { - 'text': 'should pass' + "text": "should pass" } ] } @@ -1679,12 +1632,12 @@ def test_txt_dotted_host_create_succeeds(shared_zone_test_context): try: rs_create = client.create_recordset(new_rs, status=202) - rs_result = client.wait_until_recordset_change_status(rs_create, 'Complete')['recordSet'] + rs_result = client.wait_until_recordset_change_status(rs_create, "Complete")["recordSet"] finally: if rs_result: - delete_result = client.delete_recordset(rs_result['zoneId'], rs_result['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(rs_result["zoneId"], rs_result["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") def test_ns_create_for_admin_group_succeeds(shared_zone_test_context): @@ -1697,23 +1650,23 @@ def test_ns_create_for_admin_group_succeeds(shared_zone_test_context): try: new_rs = { - 'zoneId': zone['id'], - 'name': 'someNS', - 'type': 'NS', - 'ttl': 38400, - 'records': [ + "zoneId": zone["id"], + "name": "someNS", + "type": "NS", + "ttl": 38400, + "records": [ { - 'nsdname': 
'ns1.parent.com.' + "nsdname": "ns1.parent.com." } ] } result = client.create_recordset(new_rs, status=202) - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] finally: if result_rs: - client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) - client.wait_until_recordset_deleted(result_rs['zoneId'], result_rs['id']) + client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=(202, 404)) + client.wait_until_recordset_deleted(result_rs["zoneId"], result_rs["id"]) def test_ns_create_for_unapproved_server_fails(shared_zone_test_context): @@ -1724,16 +1677,16 @@ def test_ns_create_for_unapproved_server_fails(shared_zone_test_context): zone = shared_zone_test_context.parent_zone new_rs = { - 'zoneId': zone['id'], - 'name': 'someNS', - 'type': 'NS', - 'ttl': 38400, - 'records': [ + "zoneId": zone["id"], + "name": "someNS", + "type": "NS", + "ttl": 38400, + "records": [ { - 'nsdname': 'ns1.parent.com.' + "nsdname": "ns1.parent.com." }, { - 'nsdname': 'this.is.bad.' + "nsdname": "this.is.bad." } ] } @@ -1748,13 +1701,13 @@ def test_ns_create_for_origin_fails(shared_zone_test_context): zone = shared_zone_test_context.parent_zone new_rs = { - 'zoneId': zone['id'], - 'name': '@', - 'type': 'NS', - 'ttl': 38400, - 'records': [ + "zoneId": zone["id"], + "name": "@", + "type": "NS", + "ttl": 38400, + "records": [ { - 'nsdname': 'ns1.parent.com.' + "nsdname": "ns1.parent.com." } ] } @@ -1771,46 +1724,46 @@ def test_create_ipv4_ptr_recordset_with_verify_in_classless(shared_zone_test_con try: new_rs = { - 'zoneId': reverse4_zone['id'], - 'name': '196', - 'type': 'PTR', - 'ttl': 100, - 'records': [ + "zoneId": reverse4_zone["id"], + "name": "196", + "type": "PTR", + "ttl": 100, + "records": [ { - 'ptrdname': 'ftp.vinyldns.' + "ptrdname": "ftp.vinyldns." 
} ] } - print "\r\nCreating recordset in zone " + str(reverse4_zone) + "\r\n" + print("\r\nCreating recordset in zone " + str(reverse4_zone) + "\r\n") result = client.create_recordset(new_rs, status=202) - print str(result) + print(str(result)) - assert_that(result['changeType'], is_('Create')) - assert_that(result['status'], is_('Pending')) - assert_that(result['created'], is_not(none())) - assert_that(result['userId'], is_not(none())) + assert_that(result["changeType"], is_("Create")) + assert_that(result["status"], is_("Pending")) + assert_that(result["created"], is_not(none())) + assert_that(result["userId"], is_not(none())) - result_rs = result['recordSet'] - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] - print "\r\n\r\n!!!recordset is active! Verifying..." + result_rs = result["recordSet"] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] + print("\r\n\r\n!!!recordset is active! Verifying...") verify_recordset(result_rs, new_rs) - print "\r\n\r\n!!!recordset verified..." 
+ print("\r\n\r\n!!!recordset verified...") - records = result_rs['records'] - assert_that(records[0]['ptrdname'], is_('ftp.vinyldns.')) + records = result_rs["records"] + assert_that(records[0]["ptrdname"], is_("ftp.vinyldns.")) - print "\r\n\r\n!!!verifying recordset in dns backend" + print("\r\n\r\n!!!verifying recordset in dns backend") # verify that the record exists in the backend dns server - answers = dns_resolve(reverse4_zone, result_rs['name'], result_rs['type']) + answers = dns_resolve(reverse4_zone, result_rs["name"], result_rs["type"]) rdata_strings = rdata(answers) assert_that(answers, has_length(1)) - assert_that(rdata_strings[0], is_('ftp.vinyldns.')) + assert_that(rdata_strings[0], is_("ftp.vinyldns.")) finally: if result_rs: - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") def test_create_ipv4_ptr_recordset_in_classless_outside_cidr(shared_zone_test_context): @@ -1821,19 +1774,19 @@ def test_create_ipv4_ptr_recordset_in_classless_outside_cidr(shared_zone_test_co reverse4_zone = shared_zone_test_context.classless_zone_delegation_zone new_rs = { - 'zoneId': reverse4_zone['id'], - 'name': '190', - 'type': 'PTR', - 'ttl': 100, - 'records': [ + "zoneId": reverse4_zone["id"], + "name": "190", + "type": "PTR", + "ttl": 100, + "records": [ { - 'ptrdname': 'ftp.vinyldns.' + "ptrdname": "ftp.vinyldns." 
} ] } error = client.create_recordset(new_rs, status=422) - assert_that(error, is_('RecordSet 190 does not specify a valid IP address in zone 192/30.2.0.192.in-addr.arpa.')) + assert_that(error, is_("RecordSet 190 does not specify a valid IP address in zone 192/30.2.0.192.in-addr.arpa.")) def test_create_high_value_domain_fails(shared_zone_test_context): @@ -1844,13 +1797,13 @@ def test_create_high_value_domain_fails(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ok_zone new_rs = { - 'zoneId': zone['id'], - 'name': 'high-value-domain', - 'type': 'A', - 'ttl': 100, - 'records': [ + "zoneId": zone["id"], + "name": "high-value-domain", + "type": "A", + "ttl": 100, + "records": [ { - 'address': '1.1.1.1' + "address": "1.1.1.1" } ] } @@ -1868,13 +1821,13 @@ def test_create_high_value_domain_fails_case_insensitive(shared_zone_test_contex client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ok_zone new_rs = { - 'zoneId': zone['id'], - 'name': 'hIgH-vAlUe-dOmAiN', - 'type': 'A', - 'ttl': 100, - 'records': [ + "zoneId": zone["id"], + "name": "hIgH-vAlUe-dOmAiN", + "type": "A", + "ttl": 100, + "records": [ { - 'address': '1.1.1.1' + "address": "1.1.1.1" } ] } @@ -1891,13 +1844,13 @@ def test_create_high_value_domain_fails_for_ip4_ptr(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client ptr = { - 'zoneId': shared_zone_test_context.classless_base_zone['id'], - 'name': '252', - 'type': 'PTR', - 'ttl': 100, - 'records': [ + "zoneId": shared_zone_test_context.classless_base_zone["id"], + "name": "252", + "type": "PTR", + "ttl": 100, + "records": [ { - 'ptrdname': 'test.foo.' + "ptrdname": "test.foo." 
} ] } @@ -1914,13 +1867,13 @@ def test_create_high_value_domain_fails_for_ip6_ptr(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client ptr = { - 'zoneId': shared_zone_test_context.ip6_reverse_zone['id'], - 'name': 'f.f.f.f.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0', - 'type': 'PTR', - 'ttl': 100, - 'records': [ + "zoneId": shared_zone_test_context.ip6_reverse_zone["id"], + "name": "f.f.f.f.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0", + "type": "PTR", + "ttl": 100, + "records": [ { - 'ptrdname': 'test.foo.' + "ptrdname": "test.foo." } ] } @@ -1937,7 +1890,7 @@ def test_no_add_access_non_test_zone(shared_zone_test_context): client = shared_zone_test_context.shared_zone_vinyldns_client zone = shared_zone_test_context.non_test_shared_zone - record = get_recordset_json(zone, 'non-test-zone-A', 'A', [{'address': '1.2.3.4'}]) + record = create_recordset(zone, "non-test-zone-A", "A", [{"address": "1.2.3.4"}]) client.create_recordset(record, status=403) @@ -1952,16 +1905,16 @@ def test_create_with_owner_group_in_private_zone_by_admin_passes(shared_zone_tes create_rs = None try: - record_json = get_recordset_json(zone, 'test_shared_owner_group_success', 'A', [{'address': '1.1.1.1'}]) - record_json['ownerGroupId'] = group['id'] + record_json = create_recordset(zone, "test_shared_owner_group_success", "A", [{"address": "1.1.1.1"}]) + record_json["ownerGroupId"] = group["id"] create_response = client.create_recordset(record_json, status=202) - create_rs = client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet'] - assert_that(create_rs['ownerGroupId'], is_(group['id'])) + create_rs = client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] + assert_that(create_rs["ownerGroupId"], is_(group["id"])) finally: if create_rs: - delete_result = client.delete_recordset(zone['id'], create_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = 
client.delete_recordset(zone["id"], create_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") def test_create_with_owner_group_in_shared_zone_by_admin_passes(shared_zone_test_context): @@ -1975,16 +1928,16 @@ def test_create_with_owner_group_in_shared_zone_by_admin_passes(shared_zone_test create_rs = None try: - record_json = get_recordset_json(zone, 'test_shared_admin_success', 'A', [{'address': '1.1.1.1'}]) - record_json['ownerGroupId'] = group['id'] + record_json = create_recordset(zone, "test_shared_admin_success", "A", [{"address": "1.1.1.1"}]) + record_json["ownerGroupId"] = group["id"] create_response = client.create_recordset(record_json, status=202) - create_rs = client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet'] - assert_that(create_rs['ownerGroupId'], is_(group['id'])) + create_rs = client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] + assert_that(create_rs["ownerGroupId"], is_(group["id"])) finally: if create_rs: - delete_result = client.delete_recordset(zone['id'], create_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(zone["id"], create_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") @pytest.mark.serial @@ -1994,7 +1947,7 @@ def test_create_with_owner_group_in_private_zone_by_acl_passes(shared_zone_test_ """ client = shared_zone_test_context.dummy_vinyldns_client - acl_rule = generate_acl_rule('Write', userId='dummy') + acl_rule = generate_acl_rule("Write", userId="dummy") zone = shared_zone_test_context.ok_zone group = shared_zone_test_context.dummy_group create_rs = None @@ -2002,18 +1955,18 @@ def test_create_with_owner_group_in_private_zone_by_acl_passes(shared_zone_test_ try: add_ok_acl_rules(shared_zone_test_context, [acl_rule]) - record_json = get_recordset_json(zone, 'test_ownergroup_success-acl', 'A', 
[{'address': '1.1.1.1'}]) - record_json['ownerGroupId'] = group['id'] + record_json = create_recordset(zone, "test_ownergroup_success-acl", "A", [{"address": "1.1.1.1"}]) + record_json["ownerGroupId"] = group["id"] create_response = client.create_recordset(record_json, status=202) - create_rs = client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet'] - assert_that(create_rs['ownerGroupId'], is_(group['id'])) + create_rs = client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] + assert_that(create_rs["ownerGroupId"], is_(group["id"])) finally: clear_ok_acl_rules(shared_zone_test_context) if create_rs: - delete_result = shared_zone_test_context.ok_vinyldns_client.delete_recordset(zone['id'], create_rs['id'], - status=202) - shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = shared_zone_test_context.ok_vinyldns_client.delete_recordset(zone["id"], create_rs["id"], + status=202) + shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(delete_result, "Complete") @pytest.mark.serial @@ -2023,7 +1976,7 @@ def test_create_with_owner_group_in_shared_zone_by_acl_passes(shared_zone_test_c """ client = shared_zone_test_context.dummy_vinyldns_client - acl_rule = generate_acl_rule('Write', userId='dummy') + acl_rule = generate_acl_rule("Write", userId="dummy") zone = shared_zone_test_context.shared_zone group = shared_zone_test_context.dummy_group create_rs = None @@ -2031,20 +1984,20 @@ def test_create_with_owner_group_in_shared_zone_by_acl_passes(shared_zone_test_c try: add_shared_zone_acl_rules(shared_zone_test_context, [acl_rule]) - record_json = get_recordset_json(zone, 'test_shared_success_acl', 'A', [{'address': '1.1.1.1'}]) - record_json['ownerGroupId'] = group['id'] + record_json = create_recordset(zone, "test_shared_success_acl", "A", [{"address": "1.1.1.1"}]) + record_json["ownerGroupId"] = group["id"] create_response 
= client.create_recordset(record_json, status=202) - create_rs = client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet'] - assert_that(create_rs['ownerGroupId'], is_(group['id'])) + create_rs = client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] + assert_that(create_rs["ownerGroupId"], is_(group["id"])) finally: clear_shared_zone_acl_rules(shared_zone_test_context) if create_rs: - delete_result = shared_zone_test_context.shared_zone_vinyldns_client.delete_recordset(zone['id'], - create_rs['id'], - status=202) + delete_result = shared_zone_test_context.shared_zone_vinyldns_client.delete_recordset(zone["id"], + create_rs["id"], + status=202) shared_zone_test_context.shared_zone_vinyldns_client.wait_until_recordset_change_status(delete_result, - 'Complete') + "Complete") def test_create_in_shared_zone_without_owner_group_id_succeeds(shared_zone_test_context): @@ -2056,17 +2009,17 @@ def test_create_in_shared_zone_without_owner_group_id_succeeds(shared_zone_test_ zone = shared_zone_test_context.shared_zone create_rs = None - record_json = get_recordset_json(zone, 'test_shared_no_owner_group', 'A', [{'address': '1.1.1.1'}]) + record_json = create_recordset(zone, "test_shared_no_owner_group", "A", [{"address": "1.1.1.1"}]) try: create_response = dummy_client.create_recordset(record_json, status=202) - create_rs = shared_client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet'] - assert_that(create_rs, is_not(has_key('ownerGroupId'))) + create_rs = shared_client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] + assert_that(create_rs, is_not(has_key("ownerGroupId"))) finally: if create_rs: - delete_result = dummy_client.delete_recordset(create_rs['zoneId'], create_rs['id'], status=202) - shared_client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = dummy_client.delete_recordset(create_rs["zoneId"], create_rs["id"], status=202) + 
shared_client.wait_until_recordset_change_status(delete_result, "Complete") def test_create_in_shared_zone_by_unassociated_user_succeeds_if_record_type_is_approved(shared_zone_test_context): @@ -2078,20 +2031,20 @@ def test_create_in_shared_zone_by_unassociated_user_succeeds_if_record_type_is_a zone = shared_zone_test_context.shared_zone group = shared_zone_test_context.dummy_group - record_json = get_recordset_json(zone, generate_record_name(), 'A', [{'address': '1.1.1.1'}]) - record_json['ownerGroupId'] = group['id'] + record_json = create_recordset(zone, generate_record_name(), "A", [{"address": "1.1.1.1"}]) + record_json["ownerGroupId"] = group["id"] create_rs = None try: create_response = client.create_recordset(record_json, status=202) - create_rs = client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet'] - assert_that(create_rs['ownerGroupId'], is_(group['id'])) + create_rs = client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] + assert_that(create_rs["ownerGroupId"], is_(group["id"])) finally: if create_rs: - delete_result = client.delete_recordset(zone['id'], create_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(zone["id"], create_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") def test_create_in_shared_zone_by_unassociated_user_fails_if_record_type_is_not_approved(shared_zone_test_context): @@ -2103,11 +2056,11 @@ def test_create_in_shared_zone_by_unassociated_user_fails_if_record_type_is_not_ zone = shared_zone_test_context.shared_zone group = shared_zone_test_context.dummy_group - record_json = get_recordset_json(zone, 'test_shared_not_approved_record_type', 'MX', - [{'preference': 3, 'exchange': 'mx'}]) - record_json['ownerGroupId'] = group['id'] + record_json = create_recordset(zone, "test_shared_not_approved_record_type", "MX", + [{"preference": 3, "exchange": 
"mx"}]) + record_json["ownerGroupId"] = group["id"] error = client.create_recordset(record_json, status=403) - assert_that(error, is_('User dummy does not have access to create test-shared-not-approved-record-type.shared.')) + assert_that(error, is_("User dummy does not have access to create test-shared-not-approved-record-type.shared.")) def test_create_with_not_found_owner_group_fails(shared_zone_test_context): @@ -2118,8 +2071,8 @@ def test_create_with_not_found_owner_group_fails(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ok_zone - record_json = get_recordset_json(zone, 'test_shared_bad_owner', 'A', [{'address': '1.1.1.1'}]) - record_json['ownerGroupId'] = 'no-existo' + record_json = create_recordset(zone, "test_shared_bad_owner", "A", [{"address": "1.1.1.1"}]) + record_json["ownerGroupId"] = "no-existo" error = client.create_recordset(record_json, status=422) assert_that(error, is_('Record owner group with id "no-existo" not found')) @@ -2133,10 +2086,10 @@ def test_create_with_owner_group_when_not_member_fails(shared_zone_test_context) zone = shared_zone_test_context.ok_zone group = shared_zone_test_context.dummy_group - record_json = get_recordset_json(zone, 'test_shared_not_group_member', 'A', [{'address': '1.1.1.1'}]) - record_json['ownerGroupId'] = group['id'] + record_json = create_recordset(zone, "test_shared_not_group_member", "A", [{"address": "1.1.1.1"}]) + record_json["ownerGroupId"] = group["id"] error = client.create_recordset(record_json, status=422) - assert_that(error, is_('User not in record owner group with id "' + group['id'] + '"')) + assert_that(error, is_("User not in record owner group with id \"" + group["id"] + "\"")) @pytest.mark.serial @@ -2148,30 +2101,30 @@ def test_create_ds_success(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ds_zone record_data = [ - {'keytag': 60485, 'algorithm': 5, 
'digesttype': 1, 'digest': '2BB183AF5F22588179A53B0A98631FAD1A292118'}, - {'keytag': 60485, 'algorithm': 5, 'digesttype': 2, - 'digest': 'D4B7D520E7BB5F0F67674A0CCEB1E3E0614B93C4F9E99B8383F6A1E4469DA50A'} + {"keytag": 60485, "algorithm": 5, "digesttype": 1, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"}, + {"keytag": 60485, "algorithm": 5, "digesttype": 2, + "digest": "D4B7D520E7BB5F0F67674A0CCEB1E3E0614B93C4F9E99B8383F6A1E4469DA50A"} ] - record_json = get_recordset_json(zone, 'dskey', 'DS', record_data, ttl=3600) + record_json = create_recordset(zone, "dskey", "DS", record_data, ttl=3600) result_rs = None try: result = client.create_recordset(record_json, status=202) - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] # get result - get_result = client.get_recordset(result_rs['zoneId'], result_rs['id'])['recordSet'] + get_result = client.get_recordset(result_rs["zoneId"], result_rs["id"])["recordSet"] verify_recordset(get_result, record_json) # verifying recordset in dns backend - answers = dns_resolve(zone, result_rs['name'], result_rs['type']) + answers = dns_resolve(zone, result_rs["name"], result_rs["type"]) assert_that(answers, has_length(2)) rdata_strings = [x.upper() for x in rdata(answers)] - assert_that('60485 5 1 2BB183AF5F22588179A53B0A98631FAD1A292118', is_in(rdata_strings)) - assert_that('60485 5 2 D4B7D520E7BB5F0F67674A0CCEB1E3E0614B93C4F9E99B8383F6A1E4469DA50A', is_in(rdata_strings)) + assert_that("60485 5 1 2BB183AF5F22588179A53B0A98631FAD1A292118", is_in(rdata_strings)) + assert_that("60485 5 2 D4B7D520E7BB5F0F67674A0CCEB1E3E0614B93C4F9E99B8383F6A1E4469DA50A", is_in(rdata_strings)) finally: if result_rs: - client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) - client.wait_until_recordset_deleted(result_rs['zoneId'], result_rs['id']) + client.delete_recordset(result_rs["zoneId"], 
result_rs["id"], status=(202, 404)) + client.wait_until_recordset_deleted(result_rs["zoneId"], result_rs["id"]) def test_create_ds_non_hex_digest(shared_zone_test_context): @@ -2181,9 +2134,9 @@ def test_create_ds_non_hex_digest(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ds_zone - record_data = [{'keytag': 60485, 'algorithm': 5, 'digesttype': 1, 'digest': '2BB183AF5F22588179A53G'}] - record_json = get_recordset_json(zone, 'dskey', 'DS', record_data) - errors = client.create_recordset(record_json, status=400)['errors'] + record_data = [{"keytag": 60485, "algorithm": 5, "digesttype": 1, "digest": "2BB183AF5F22588179A53G"}] + record_json = create_recordset(zone, "dskey", "DS", record_data) + errors = client.create_recordset(record_json, status=400)["errors"] assert_that(errors, contains_inanyorder("Could not convert digest to valid hex")) @@ -2195,9 +2148,9 @@ def test_create_ds_unknown_algorithm(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ds_zone record_data = [ - {'keytag': 60485, 'algorithm': 0, 'digesttype': 1, 'digest': '2BB183AF5F22588179A53B0A98631FAD1A292118'}] - record_json = get_recordset_json(zone, 'dskey', 'DS', record_data) - errors = client.create_recordset(record_json, status=400)['errors'] + {"keytag": 60485, "algorithm": 0, "digesttype": 1, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"}] + record_json = create_recordset(zone, "dskey", "DS", record_data) + errors = client.create_recordset(record_json, status=400)["errors"] assert_that(errors, contains_inanyorder("Algorithm 0 is not a supported DNSSEC algorithm")) @@ -2209,9 +2162,9 @@ def test_create_ds_unknown_digest_type(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ds_zone record_data = [ - {'keytag': 60485, 'algorithm': 5, 'digesttype': 0, 'digest': 
'2BB183AF5F22588179A53B0A98631FAD1A292118'}] - record_json = get_recordset_json(zone, 'dskey', 'DS', record_data) - errors = client.create_recordset(record_json, status=400)['errors'] + {"keytag": 60485, "algorithm": 5, "digesttype": 0, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"}] + record_json = create_recordset(zone, "dskey", "DS", record_data) + errors = client.create_recordset(record_json, status=400)["errors"] assert_that(errors, contains_inanyorder("Digest Type 0 is not a supported DS record digest type")) @@ -2223,8 +2176,8 @@ def test_create_ds_bad_ttl_fails(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ds_zone record_data = [ - {'keytag': 60485, 'algorithm': 5, 'digesttype': 1, 'digest': '2BB183AF5F22588179A53B0A98631FAD1A292118'}] - record_json = get_recordset_json(zone, 'dskey', 'DS', record_data, ttl=100) + {"keytag": 60485, "algorithm": 5, "digesttype": 1, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"}] + record_json = create_recordset(zone, "dskey", "DS", record_data, ttl=100) error = client.create_recordset(record_json, status=422) assert_that(error, is_("DS record [dskey] must have TTL matching its linked NS (3600)")) @@ -2237,8 +2190,8 @@ def test_create_ds_no_ns_fails(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ds_zone record_data = [ - {'keytag': 60485, 'algorithm': 5, 'digesttype': 1, 'digest': '2BB183AF5F22588179A53B0A98631FAD1A292118'}] - record_json = get_recordset_json(zone, 'no-ns-exists', 'DS', record_data, ttl=3600) + {"keytag": 60485, "algorithm": 5, "digesttype": 1, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"}] + record_json = create_recordset(zone, "no-ns-exists", "DS", record_data, ttl=3600) error = client.create_recordset(record_json, status=422) assert_that(error, is_( @@ -2253,8 +2206,8 @@ def test_create_apex_ds_fails(shared_zone_test_context): client = 
shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ds_zone record_data = [ - {'keytag': 60485, 'algorithm': 5, 'digesttype': 1, 'digest': '2BB183AF5F22588179A53B0A98631FAD1A292118'}] - record_json = get_recordset_json(zone, '@', 'DS', record_data, ttl=100) + {"keytag": 60485, "algorithm": 5, "digesttype": 1, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"}] + record_json = create_recordset(zone, "@", "DS", record_data, ttl=100) error = client.create_recordset(record_json, status=422) assert_that(error, is_("Record with name [example.com.] is an DS record at apex and cannot be added")) @@ -2267,8 +2220,8 @@ def test_create_dotted_ds_fails(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ds_zone record_data = [ - {'keytag': 60485, 'algorithm': 5, 'digesttype': 1, 'digest': '2BB183AF5F22588179A53B0A98631FAD1A292118'}] - record_json = get_recordset_json(zone, 'dotted.ds', 'DS', record_data, ttl=100) + {"keytag": 60485, "algorithm": 5, "digesttype": 1, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"}] + record_json = create_recordset(zone, "dotted.ds", "DS", record_data, ttl=100) error = client.create_recordset(record_json, status=422) assert_that(error, is_( "Record with name dotted.ds and type DS is a dotted host which is not allowed in zone example.com.")) diff --git a/modules/api/functional_test/live_tests/recordsets/delete_recordset_test.py b/modules/api/functional_test/live_tests/recordsets/delete_recordset_test.py index a2add0951..2121c905f 100644 --- a/modules/api/functional_test/live_tests/recordsets/delete_recordset_test.py +++ b/modules/api/functional_test/live_tests/recordsets/delete_recordset_test.py @@ -8,7 +8,7 @@ from test_data import TestData import time -@pytest.mark.parametrize('record_name,test_rs', TestData.FORWARD_RECORDS) +@pytest.mark.parametrize("record_name,test_rs", TestData.FORWARD_RECORDS) def 
test_delete_recordset_forward_record_types(shared_zone_test_context, record_name, test_rs): """ Test deleting a recordset for forward record types @@ -17,43 +17,43 @@ def test_delete_recordset_forward_record_types(shared_zone_test_context, record_ result_rs = None try: - new_rs = dict(test_rs, zoneId=shared_zone_test_context.system_test_zone['id']) - new_rs['name'] = generate_record_name() + new_rs['type'] + new_rs = dict(test_rs, zoneId=shared_zone_test_context.system_test_zone["id"]) + new_rs["name"] = generate_record_name() + new_rs["type"] result = client.create_recordset(new_rs, status=202) - assert_that(result['status'], is_('Pending')) - print str(result) + assert_that(result["status"], is_("Pending")) + print(str(result)) - result_rs = result['recordSet'] + result_rs = result["recordSet"] verify_recordset(result_rs, new_rs) - records = result_rs['records'] + records = result_rs["records"] - for record in new_rs['records']: + for record in new_rs["records"]: assert_that(records, has_item(has_entries(record))) - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] # now delete delete_rs = result_rs - result = client.delete_recordset(delete_rs['zoneId'], delete_rs['id'], status=202) - assert_that(result['status'], is_('Pending')) - result_rs = result['recordSet'] - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + result = client.delete_recordset(delete_rs["zoneId"], delete_rs["id"], status=202) + assert_that(result["status"], is_("Pending")) + result_rs = result["recordSet"] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] # retry until the recordset is not found - client.get_recordset(result_rs['zoneId'], result_rs['id'], retries=20, status=404) + client.get_recordset(result_rs["zoneId"], result_rs["id"], retries=20, status=404) result_rs = None finally: if 
result_rs: - result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) - if result and 'status' in result: - client.wait_until_recordset_change_status(result, 'Complete') + result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=(202, 404)) + if result and "status" in result: + client.wait_until_recordset_change_status(result, "Complete") @pytest.mark.serial -@pytest.mark.parametrize('record_name,test_rs', TestData.REVERSE_RECORDS) +@pytest.mark.parametrize("record_name,test_rs", TestData.REVERSE_RECORDS) def test_delete_recordset_reverse_record_types(shared_zone_test_context, record_name, test_rs): """ Test deleting a recordset for reverse record types @@ -62,38 +62,38 @@ def test_delete_recordset_reverse_record_types(shared_zone_test_context, record_ result_rs = None try: - new_rs = dict(test_rs, zoneId=shared_zone_test_context.ip4_reverse_zone['id']) + new_rs = dict(test_rs, zoneId=shared_zone_test_context.ip4_reverse_zone["id"]) result = client.create_recordset(new_rs, status=202) - assert_that(result['status'], is_('Pending')) - print str(result) + assert_that(result["status"], is_("Pending")) + print(str(result)) - result_rs = result['recordSet'] + result_rs = result["recordSet"] verify_recordset(result_rs, new_rs) - records = result_rs['records'] + records = result_rs["records"] - for record in new_rs['records']: + for record in new_rs["records"]: assert_that(records, has_item(has_entries(record))) - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] # now delete delete_rs = result_rs - result = client.delete_recordset(delete_rs['zoneId'], delete_rs['id'], status=202) - assert_that(result['status'], is_('Pending')) - result_rs = result['recordSet'] - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + result = 
client.delete_recordset(delete_rs["zoneId"], delete_rs["id"], status=202) + assert_that(result["status"], is_("Pending")) + result_rs = result["recordSet"] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] # retry until the recordset is not found - client.get_recordset(result_rs['zoneId'], result_rs['id'], retries=20, status=404) + client.get_recordset(result_rs["zoneId"], result_rs["id"], retries=20, status=404) result_rs = None finally: if result_rs: - result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) + result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=(202, 404)) if result: - client.wait_until_recordset_change_status(result, 'Complete') + client.wait_until_recordset_change_status(result, "Complete") def test_delete_recordset_with_verify(shared_zone_test_context): @@ -104,53 +104,53 @@ def test_delete_recordset_with_verify(shared_zone_test_context): result_rs = None try: new_rs = { - 'zoneId': shared_zone_test_context.ok_zone['id'], - 'name': 'test_delete_recordset_with_verify', - 'type': 'A', - 'ttl': 100, - 'records': [ + "zoneId": shared_zone_test_context.ok_zone["id"], + "name": "test_delete_recordset_with_verify", + "type": "A", + "ttl": 100, + "records": [ { - 'address': '10.1.1.1' + "address": "10.1.1.1" }, { - 'address': '10.2.2.2' + "address": "10.2.2.2" } ] } - print "\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n" + print("\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n") result = client.create_recordset(new_rs, status=202) - print str(result) + print(str(result)) - assert_that(result['changeType'], is_('Create')) - assert_that(result['status'], is_('Pending')) - assert_that(result['created'], is_not(none())) - assert_that(result['userId'], is_not(none())) + assert_that(result["changeType"], is_("Create")) + assert_that(result["status"], is_("Pending")) + 
assert_that(result["created"], is_not(none())) + assert_that(result["userId"], is_not(none())) - result_rs = result['recordSet'] - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] - print "\r\n\r\n!!!recordset is active! Verifying..." + result_rs = result["recordSet"] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] + print("\r\n\r\n!!!recordset is active! Verifying...") verify_recordset(result_rs, new_rs) - print "\r\n\r\n!!!recordset verified..." + print("\r\n\r\n!!!recordset verified...") - records = [x['address'] for x in result_rs['records']] + records = [x["address"] for x in result_rs["records"]] assert_that(records, has_length(2)) - assert_that('10.1.1.1', is_in(records)) - assert_that('10.2.2.2', is_in(records)) + assert_that("10.1.1.1", is_in(records)) + assert_that("10.2.2.2", is_in(records)) - print "\r\n\r\n!!!verifying recordset in dns backend" + print("\r\n\r\n!!!verifying recordset in dns backend") # verify that the record exists in the backend dns server - answers = dns_resolve(shared_zone_test_context.ok_zone, result_rs['name'], result_rs['type']) + answers = dns_resolve(shared_zone_test_context.ok_zone, result_rs["name"], result_rs["type"]) rdata_strings = rdata(answers) assert_that(rdata_strings, has_length(2)) - assert_that('10.1.1.1', is_in(rdata_strings)) - assert_that('10.2.2.2', is_in(rdata_strings)) + assert_that("10.1.1.1", is_in(rdata_strings)) + assert_that("10.2.2.2", is_in(rdata_strings)) # Delete the record set and verify that it is removed - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") - answers = dns_resolve(shared_zone_test_context.ok_zone, result_rs['name'], result_rs['type']) + 
answers = dns_resolve(shared_zone_test_context.ok_zone, result_rs["name"], result_rs["type"]) not_found = len(answers) == 0 assert_that(not_found, is_(True)) @@ -158,8 +158,8 @@ def test_delete_recordset_with_verify(shared_zone_test_context): result_rs = None finally: if result_rs: - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") def test_user_can_delete_record_in_owned_zone(shared_zone_test_context): @@ -172,26 +172,26 @@ def test_user_can_delete_record_in_owned_zone(shared_zone_test_context): try: rs = client.create_recordset( { - 'zoneId': shared_zone_test_context.ok_zone['id'], - 'name': 'test_user_can_delete_record_in_owned_zone', - 'type': 'A', - 'ttl': 100, - 'records': [ + "zoneId": shared_zone_test_context.ok_zone["id"], + "name": "test_user_can_delete_record_in_owned_zone", + "type": "A", + "ttl": 100, + "records": [ { - 'address': '10.10.10.10' + "address": "10.10.10.10" } ] - }, status=202)['recordSet'] - client.wait_until_recordset_exists(rs['zoneId'], rs['id']) + }, status=202)["recordSet"] + client.wait_until_recordset_exists(rs["zoneId"], rs["id"]) - client.delete_recordset(rs['zoneId'], rs['id'], status=202) - client.wait_until_recordset_deleted(rs['zoneId'], rs['id']) + client.delete_recordset(rs["zoneId"], rs["id"], status=202) + client.wait_until_recordset_deleted(rs["zoneId"], rs["id"]) rs = None finally: if rs: try: - client.delete_recordset(rs['zoneId'], rs['id'], status=(202, 404)) - client.wait_until_recordset_deleted(rs['zoneId'], rs['id']) + client.delete_recordset(rs["zoneId"], rs["id"], status=(202, 404)) + client.wait_until_recordset_deleted(rs["zoneId"], rs["id"]) finally: pass @@ -207,24 +207,24 @@ def 
test_user_cannot_delete_record_in_unowned_zone(shared_zone_test_context): try: rs = client.create_recordset( { - 'zoneId': shared_zone_test_context.dummy_zone['id'], - 'name': 'test-user-cannot-delete-record-in-unowned-zone', - 'type': 'A', - 'ttl': 100, - 'records': [ + "zoneId": shared_zone_test_context.dummy_zone["id"], + "name": "test-user-cannot-delete-record-in-unowned-zone", + "type": "A", + "ttl": 100, + "records": [ { - 'address': '10.10.10.10' + "address": "10.10.10.10" } ] - }, status=202)['recordSet'] + }, status=202)["recordSet"] - client.wait_until_recordset_exists(rs['zoneId'], rs['id']) - unauthorized_client.delete_recordset(rs['zoneId'], rs['id'], status=403) + client.wait_until_recordset_exists(rs["zoneId"], rs["id"]) + unauthorized_client.delete_recordset(rs["zoneId"], rs["id"], status=403) finally: if rs: try: - client.delete_recordset(rs['zoneId'], rs['id'], status=(202, 404)) - client.wait_until_recordset_deleted(rs['zoneId'], rs['id']) + client.delete_recordset(rs["zoneId"], rs["id"], status=(202, 404)) + client.wait_until_recordset_deleted(rs["zoneId"], rs["id"]) finally: pass @@ -234,7 +234,7 @@ def test_delete_recordset_no_authorization(shared_zone_test_context): Test delete a recordset without authorization """ client = shared_zone_test_context.dummy_vinyldns_client - client.delete_recordset(shared_zone_test_context.ok_zone['id'], '1234', sign_request=False, status=401) + client.delete_recordset(shared_zone_test_context.ok_zone["id"], "1234", sign_request=False, status=401) @pytest.mark.serial @@ -248,27 +248,27 @@ def test_delete_ipv4_ptr_recordset(shared_zone_test_context): try: orig_rs = { - 'zoneId': reverse4_zone['id'], - 'name': '30.0', - 'type': 'PTR', - 'ttl': 100, - 'records': [ + "zoneId": reverse4_zone["id"], + "name": "30.0", + "type": "PTR", + "ttl": 100, + "records": [ { - 'ptrdname': 'ftp.vinyldns.' + "ptrdname": "ftp.vinyldns." 
} ] } result = client.create_recordset(orig_rs, status=202) - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] - print "\r\n\r\n!!!recordset is active! Deleting..." + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] + print("\r\n\r\n!!!recordset is active! Deleting...") - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") result_rs = None finally: if result_rs: - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=(202, 404)) + client.wait_until_recordset_change_status(delete_result, "Complete") def test_delete_ipv4_ptr_recordset_does_not_exist_fails(shared_zone_test_context): @@ -276,7 +276,7 @@ def test_delete_ipv4_ptr_recordset_does_not_exist_fails(shared_zone_test_context Test deleting a nonexistant IPv4 PTR recordset returns not found """ client =shared_zone_test_context.ok_vinyldns_client - client.delete_recordset(shared_zone_test_context.ip4_reverse_zone['id'], '4444', status=404) + client.delete_recordset(shared_zone_test_context.ip4_reverse_zone["id"], "4444", status=404) def test_delete_ipv6_ptr_recordset(shared_zone_test_context): @@ -287,27 +287,27 @@ def test_delete_ipv6_ptr_recordset(shared_zone_test_context): result_rs = None try: orig_rs = { - 'zoneId': shared_zone_test_context.ip6_reverse_zone['id'], - 'name': '0.7.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0', - 'type': 'PTR', - 'ttl': 100, - 'records': [ + "zoneId": shared_zone_test_context.ip6_reverse_zone["id"], + "name": 
"0.7.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0", + "type": "PTR", + "ttl": 100, + "records": [ { - 'ptrdname': 'ftp.vinyldns.' + "ptrdname": "ftp.vinyldns." } ] } result = client.create_recordset(orig_rs, status=202) - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] - print "\r\n\r\n!!!recordset is active! Deleting..." + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] + print("\r\n\r\n!!!recordset is active! Deleting...") - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") result_rs = None finally: if result_rs: - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=(202, 404)) + client.wait_until_recordset_change_status(delete_result, "Complete") @@ -316,7 +316,7 @@ def test_delete_ipv6_ptr_recordset_does_not_exist_fails(shared_zone_test_context Test deleting a nonexistant IPv6 PTR recordset returns not found """ client = shared_zone_test_context.ok_vinyldns_client - client.delete_recordset(shared_zone_test_context.ip6_reverse_zone['id'], '6666', status=404) + client.delete_recordset(shared_zone_test_context.ip6_reverse_zone["id"], "6666", status=404) def test_delete_recordset_zone_not_found(shared_zone_test_context): @@ -324,7 +324,7 @@ def test_delete_recordset_zone_not_found(shared_zone_test_context): Test deleting a recordset in a zone that doesn't exist should return a 404 """ client = shared_zone_test_context.ok_vinyldns_client - client.delete_recordset('1234', '4567', status=404) + client.delete_recordset("1234", 
"4567", status=404) def test_delete_recordset_not_found(shared_zone_test_context): @@ -332,7 +332,7 @@ def test_delete_recordset_not_found(shared_zone_test_context): Test deleting a recordset that doesn't exist should return a 404 """ client = shared_zone_test_context.ok_vinyldns_client - client.delete_recordset(shared_zone_test_context.ok_zone['id'], '1234', status=404) + client.delete_recordset(shared_zone_test_context.ok_zone["id"], "1234", status=404) @pytest.mark.serial @@ -344,46 +344,46 @@ def test_at_delete_recordset(shared_zone_test_context): ok_zone = shared_zone_test_context.ok_zone result_rs = None new_rs = { - 'zoneId': ok_zone['id'], - 'name': '@', - 'type': 'TXT', - 'ttl': 100, - 'records': [ + "zoneId": ok_zone["id"], + "name": "@", + "type": "TXT", + "ttl": 100, + "records": [ { - 'text': 'someText' + "text": "someText" } ] } - print "\r\nCreating recordset in zone " + str(ok_zone) + "\r\n" + print("\r\nCreating recordset in zone " + str(ok_zone) + "\r\n") result = client.create_recordset(new_rs, status=202) - print json.dumps(result, indent=3) + print(json.dumps(result, indent=3)) - assert_that(result['changeType'], is_('Create')) - assert_that(result['status'], is_('Pending')) - assert_that(result['created'], is_not(none())) - assert_that(result['userId'], is_not(none())) + assert_that(result["changeType"], is_("Create")) + assert_that(result["status"], is_("Pending")) + assert_that(result["created"], is_not(none())) + assert_that(result["userId"], is_not(none())) - result_rs = result['recordSet'] - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] - print "\r\n\r\n!!!recordset is active! Verifying..." + result_rs = result["recordSet"] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] + print("\r\n\r\n!!!recordset is active! 
Verifying...") expected_rs = new_rs - expected_rs['name'] = ok_zone['name'] + expected_rs["name"] = ok_zone["name"] verify_recordset(result_rs, expected_rs) - print "\r\n\r\n!!!recordset verified..." + print("\r\n\r\n!!!recordset verified...") - records = result_rs['records'] + records = result_rs["records"] assert_that(records, has_length(1)) - assert_that(records[0]['text'], is_('someText')) + assert_that(records[0]["text"], is_("someText")) - print "\r\n\r\n!!!deleting recordset in dns backend" - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + print("\r\n\r\n!!!deleting recordset in dns backend") + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") # verify that the record does not exist in the backend dns server - answers = dns_resolve(ok_zone, ok_zone['name'], result_rs['type']) + answers = dns_resolve(ok_zone, ok_zone["name"], result_rs["type"]) not_found = len(answers) == 0 assert_that(not_found) @@ -399,50 +399,50 @@ def test_delete_recordset_with_different_dns_data(shared_zone_test_context): try: new_rs = { - 'zoneId': ok_zone['id'], - 'name': 'test_delete_recordset_with_different_dns_data', - 'type': 'A', - 'ttl': 100, - 'records': [ + "zoneId": ok_zone["id"], + "name": "test_delete_recordset_with_different_dns_data", + "type": "A", + "ttl": 100, + "records": [ { - 'address': '10.1.1.1' + "address": "10.1.1.1" } ] } - print "\r\nCreating recordset in zone " + str(ok_zone) + "\r\n" + print("\r\nCreating recordset in zone " + str(ok_zone) + "\r\n") result = client.create_recordset(new_rs, status=202) - print str(result) + print(str(result)) - result_rs = result['recordSet'] - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] - print "\r\n\r\n!!!recordset is active! Verifying..." 
+ result_rs = result["recordSet"] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] + print("\r\n\r\n!!!recordset is active! Verifying...") verify_recordset(result_rs, new_rs) - print "\r\n\r\n!!!recordset verified..." + print("\r\n\r\n!!!recordset verified...") - result_rs['records'][0]['address'] = "10.8.8.8" + result_rs["records"][0]["address"] = "10.8.8.8" result = client.update_recordset(result_rs, status=202) - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] - print "\r\n\r\n!!!verifying recordset in dns backend" - answers = dns_resolve(ok_zone, result_rs['name'], result_rs['type']) + print("\r\n\r\n!!!verifying recordset in dns backend") + answers = dns_resolve(ok_zone, result_rs["name"], result_rs["type"]) assert_that(answers, has_length(1)) - response = dns_update(ok_zone, result_rs['name'], 300, result_rs['type'], '10.9.9.9') - print "\nSuccessfully updated the record, record is now out of sync\n" - print str(response) + response = dns_update(ok_zone, result_rs["name"], 300, result_rs["type"], "10.9.9.9") + print("\nSuccessfully updated the record, record is now out of sync\n") + print(str(response)) # check you can delete - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") result_rs = None finally: if result_rs: try: - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=(202, 404)) if delete_result: - client.wait_until_recordset_change_status(delete_result, 'Complete') + 
client.wait_until_recordset_change_status(delete_result, "Complete") except: pass @@ -456,25 +456,25 @@ def test_user_can_delete_record_via_user_acl_rule(shared_zone_test_context): ok_zone = shared_zone_test_context.ok_zone client = shared_zone_test_context.ok_vinyldns_client try: - acl_rule = generate_acl_rule('Delete', userId='dummy') + acl_rule = generate_acl_rule("Delete", userId="dummy") result_rs = seed_text_recordset(client, "test_user_can_delete_record_via_user_acl_rule", ok_zone) #Dummy user cannot delete record in zone - shared_zone_test_context.dummy_vinyldns_client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=403, retries=3) + shared_zone_test_context.dummy_vinyldns_client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=403, retries=3) #add rule add_ok_acl_rules(shared_zone_test_context, [acl_rule]) #Dummy user can delete record - shared_zone_test_context.dummy_vinyldns_client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_deleted(result_rs['zoneId'], result_rs['id']) + shared_zone_test_context.dummy_vinyldns_client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_deleted(result_rs["zoneId"], result_rs["id"]) result_rs = None finally: clear_ok_acl_rules(shared_zone_test_context) if result_rs: - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") @pytest.mark.serial @@ -487,30 +487,30 @@ def test_user_cannot_delete_record_with_write_txt_read_all(shared_zone_test_cont ok_zone = shared_zone_test_context.ok_zone created_rs = None try: - acl_rule1 = generate_acl_rule('Read', userId='dummy', 
recordMask='www-*') - acl_rule2 = generate_acl_rule('Write', userId='dummy', recordMask='www-user-cant-delete', recordTypes=['TXT']) + acl_rule1 = generate_acl_rule("Read", userId="dummy", recordMask="www-*") + acl_rule2 = generate_acl_rule("Write", userId="dummy", recordMask="www-user-cant-delete", recordTypes=["TXT"]) add_ok_acl_rules(shared_zone_test_context, [acl_rule1, acl_rule2]) # verify dummy can see ok_zone - dummy_view = dummy_client.list_zones()['zones'] - zone_ids = [zone['id'] for zone in dummy_view] - assert_that(zone_ids, has_item(ok_zone['id'])) + dummy_view = dummy_client.list_zones()["zones"] + zone_ids = [zone["id"] for zone in dummy_view] + assert_that(zone_ids, has_item(ok_zone["id"])) # dummy should be able to add the RS - new_rs = get_recordset_json(ok_zone, "www-user-cant-delete", "TXT", [{'text':'should-work'}]) + new_rs = create_recordset(ok_zone, "www-user-cant-delete", "TXT", [{"text": "should-work"}]) rs_change = dummy_client.create_recordset(new_rs, status=202) - created_rs = dummy_client.wait_until_recordset_change_status(rs_change, 'Complete')['recordSet'] + created_rs = dummy_client.wait_until_recordset_change_status(rs_change, "Complete")["recordSet"] verify_recordset(created_rs, new_rs) #dummy cannot delete the RS - dummy_client.delete_recordset(ok_zone['id'], created_rs['id'], status=403) + dummy_client.delete_recordset(ok_zone["id"], created_rs["id"], status=403) finally: clear_ok_acl_rules(shared_zone_test_context) if created_rs: - delete_result = client.delete_recordset(created_rs['zoneId'], created_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(created_rs["zoneId"], created_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") @pytest.mark.serial @@ -522,25 +522,25 @@ def test_user_can_delete_record_via_group_acl_rule(shared_zone_test_context): ok_zone = shared_zone_test_context.ok_zone client = 
shared_zone_test_context.ok_vinyldns_client try: - acl_rule = generate_acl_rule('Delete', groupId=shared_zone_test_context.dummy_group['id']) + acl_rule = generate_acl_rule("Delete", groupId=shared_zone_test_context.dummy_group["id"]) result_rs = seed_text_recordset(client, "test_user_can_delete_record_via_group_acl_rule", ok_zone) #Dummy user cannot delete record in zone - shared_zone_test_context.dummy_vinyldns_client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=403) + shared_zone_test_context.dummy_vinyldns_client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=403) #add rule add_ok_acl_rules(shared_zone_test_context, [acl_rule]) #Dummy user can delete record - shared_zone_test_context.dummy_vinyldns_client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_deleted(result_rs['zoneId'], result_rs['id']) + shared_zone_test_context.dummy_vinyldns_client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_deleted(result_rs["zoneId"], result_rs["id"]) result_rs = None finally: clear_ok_acl_rules(shared_zone_test_context) if result_rs: - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") def test_ns_delete_for_admin_group_passes(shared_zone_test_context): @@ -553,28 +553,28 @@ def test_ns_delete_for_admin_group_passes(shared_zone_test_context): try: new_rs = { - 'zoneId': zone['id'], - 'name': generate_record_name(), - 'type': 'NS', - 'ttl': 38400, - 'records': [ + "zoneId": zone["id"], + "name": generate_record_name(), + "type": "NS", + "ttl": 38400, + "records": [ { - 'nsdname': 'ns1.parent.com.' 
+ "nsdname": "ns1.parent.com." } ] } result = client.create_recordset(new_rs, status=202) - ns_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + ns_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] - delete_result = client.delete_recordset(ns_rs['zoneId'], ns_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(ns_rs["zoneId"], ns_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") ns_rs = None finally: if ns_rs: - client.delete_recordset(ns_rs['zoneId'], ns_rs['id'], status=(202,404)) - client.wait_until_recordset_deleted(ns_rs['zoneId'], ns_rs['id']) + client.delete_recordset(ns_rs["zoneId"], ns_rs["id"], status=(202, 404)) + client.wait_until_recordset_deleted(ns_rs["zoneId"], ns_rs["id"]) def test_ns_delete_existing_ns_origin_fails(shared_zone_test_context): @@ -584,11 +584,11 @@ def test_ns_delete_existing_ns_origin_fails(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.parent_zone - list_results_page = client.list_recordsets_by_zone(zone['id'], status=200)['recordSets'] + list_results_page = client.list_recordsets_by_zone(zone["id"], status=200)["recordSets"] - apex_ns = [item for item in list_results_page if item['type'] == 'NS' and item['name'] in zone['name']][0] + apex_ns = [item for item in list_results_page if item["type"] == "NS" and item["name"] in zone["name"]][0] - client.delete_recordset(apex_ns['zoneId'], apex_ns['id'], status=422) + client.delete_recordset(apex_ns["zoneId"], apex_ns["id"], status=422) def test_delete_dotted_a_record_apex_succeeds(shared_zone_test_context): @@ -600,20 +600,20 @@ def test_delete_dotted_a_record_apex_succeeds(shared_zone_test_context): zone = shared_zone_test_context.parent_zone apex_a_record = { - 'zoneId': zone['id'], - 'name': zone['name'].rstrip('.'), - 'type': 'A', - 
'ttl': 500, - 'records': [{'address': '127.0.0.1'}] + "zoneId": zone["id"], + "name": zone["name"].rstrip("."), + "type": "A", + "ttl": 500, + "records": [{"address": "127.0.0.1"}] } try: apex_a_response = client.create_recordset(apex_a_record, status=202) - apex_a_rs = client.wait_until_recordset_change_status(apex_a_response, 'Complete')['recordSet'] - assert_that(apex_a_rs['name'],is_(apex_a_record['name'] + '.')) + apex_a_rs = client.wait_until_recordset_change_status(apex_a_response, "Complete")["recordSet"] + assert_that(apex_a_rs["name"],is_(apex_a_record["name"] + ".")) finally: - delete_result = client.delete_recordset(apex_a_rs['zoneId'], apex_a_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(apex_a_rs["zoneId"], apex_a_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") def test_delete_high_value_domain_fails(shared_zone_test_context): @@ -623,10 +623,10 @@ def test_delete_high_value_domain_fails(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client zone_system = shared_zone_test_context.system_test_zone - list_results_page_system = client.list_recordsets_by_zone(zone_system['id'], status=200)['recordSets'] - record_system = [item for item in list_results_page_system if item['name'] == 'high-value-domain'][0] + list_results_page_system = client.list_recordsets_by_zone(zone_system["id"], status=200)["recordSets"] + record_system = [item for item in list_results_page_system if item["name"] == "high-value-domain"][0] - errors_system = client.delete_recordset(record_system['zoneId'], record_system['id'], status=422) + errors_system = client.delete_recordset(record_system["zoneId"], record_system["id"], status=422) assert_that(errors_system, is_('Record name "high-value-domain.system-test." 
is configured as a High Value Domain, so it cannot be modified.')) @@ -636,10 +636,10 @@ def test_delete_high_value_domain_fails_ip4_ptr(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client zone_ip4 = shared_zone_test_context.classless_base_zone - list_results_page_ip4 = client.list_recordsets_by_zone(zone_ip4['id'], status=200)['recordSets'] - record_ip4 = [item for item in list_results_page_ip4 if item['name'] == '253'][0] + list_results_page_ip4 = client.list_recordsets_by_zone(zone_ip4["id"], status=200)["recordSets"] + record_ip4 = [item for item in list_results_page_ip4 if item["name"] == "253"][0] - errors_ip4 = client.delete_recordset(record_ip4['zoneId'], record_ip4['id'], status=422) + errors_ip4 = client.delete_recordset(record_ip4["zoneId"], record_ip4["id"], status=422) assert_that(errors_ip4, is_('Record name "192.0.2.253" is configured as a High Value Domain, so it cannot be modified.')) @@ -650,10 +650,10 @@ def test_delete_high_value_domain_fails_ip6_ptr(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client zone_ip6 = shared_zone_test_context.ip6_reverse_zone - list_results_page_ip6 = client.list_recordsets_by_zone(zone_ip6['id'], status=200)['recordSets'] - record_ip6 = [item for item in list_results_page_ip6 if item['name'] == '0.0.0.0.f.f.f.f.0.0.0.0.0.0.0.0.0.0.0.0'][0] + list_results_page_ip6 = client.list_recordsets_by_zone(zone_ip6["id"], status=200)["recordSets"] + record_ip6 = [item for item in list_results_page_ip6 if item["name"] == "0.0.0.0.f.f.f.f.0.0.0.0.0.0.0.0.0.0.0.0"][0] - errors_ip6 = client.delete_recordset(record_ip6['zoneId'], record_ip6['id'], status=422) + errors_ip6 = client.delete_recordset(record_ip6["zoneId"], record_ip6["id"], status=422) assert_that(errors_ip6, is_('Record name "fd69:27cc:fe91:0000:0000:0000:ffff:0000" is configured as a High Value Domain, so it cannot be modified.')) @@ -663,12 +663,12 @@ def 
test_no_delete_access_non_test_zone(shared_zone_test_context): """ client = shared_zone_test_context.shared_zone_vinyldns_client - zone_id = shared_zone_test_context.non_test_shared_zone['id'] + zone_id = shared_zone_test_context.non_test_shared_zone["id"] - list_results = client.list_recordsets_by_zone(zone_id, status=200)['recordSets'] - record_delete = [item for item in list_results if item['name'] == 'delete-test'][0] + list_results = client.list_recordsets_by_zone(zone_id, status=200)["recordSets"] + record_delete = [item for item in list_results if item["name"] == "delete-test"][0] - client.delete_recordset(zone_id, record_delete['id'], status=403) + client.delete_recordset(zone_id, record_delete["id"], status=403) def test_delete_for_user_in_record_owner_group_in_shared_zone_succeeds(shared_zone_test_context): """ @@ -679,13 +679,13 @@ def test_delete_for_user_in_record_owner_group_in_shared_zone_succeeds(shared_zo shared_zone = shared_zone_test_context.shared_zone shared_group = shared_zone_test_context.shared_record_group - record_json = get_recordset_json(shared_zone, 'test_shared_del_og', 'A', [{'address': '1.1.1.1'}], ownergroup_id = shared_group['id']) + record_json = create_recordset(shared_zone, "test_shared_del_og", "A", [{"address": "1.1.1.1"}], ownergroup_id = shared_group["id"]) create_rs = shared_client.create_recordset(record_json, status=202) - result_rs = shared_client.wait_until_recordset_change_status(create_rs, 'Complete')['recordSet'] + result_rs = shared_client.wait_until_recordset_change_status(create_rs, "Complete")["recordSet"] - delete_rs = ok_client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - ok_client.wait_until_recordset_change_status(delete_rs, 'Complete') + delete_rs = ok_client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + ok_client.wait_until_recordset_change_status(delete_rs, "Complete") def test_delete_for_zone_admin_in_shared_zone_succeeds(shared_zone_test_context): """ @@ 
-694,13 +694,13 @@ def test_delete_for_zone_admin_in_shared_zone_succeeds(shared_zone_test_context) shared_client = shared_zone_test_context.shared_zone_vinyldns_client shared_zone = shared_zone_test_context.shared_zone - record_json = get_recordset_json(shared_zone, 'test_shared_del_admin', 'A', [{'address': '1.1.1.1'}], ownergroup_id = shared_zone_test_context.shared_record_group['id']) + record_json = create_recordset(shared_zone, "test_shared_del_admin", "A", [{"address": "1.1.1.1"}], ownergroup_id = shared_zone_test_context.shared_record_group["id"]) create_rs = shared_client.create_recordset(record_json, status=202) - result_rs = shared_client.wait_until_recordset_change_status(create_rs, 'Complete')['recordSet'] + result_rs = shared_client.wait_until_recordset_change_status(create_rs, "Complete")["recordSet"] - delete_rs = shared_client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - shared_client.wait_until_recordset_change_status(delete_rs, 'Complete') + delete_rs = shared_client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + shared_client.wait_until_recordset_change_status(delete_rs, "Complete") def test_delete_for_unowned_record_with_approved_record_type_in_shared_zone_succeeds(shared_zone_test_context): """ @@ -710,13 +710,13 @@ def test_delete_for_unowned_record_with_approved_record_type_in_shared_zone_succ shared_zone = shared_zone_test_context.shared_zone ok_client = shared_zone_test_context.ok_vinyldns_client - record_json = get_recordset_json(shared_zone, 'test_shared_approved_record_type', 'A', [{'address': '1.1.1.1'}]) + record_json = create_recordset(shared_zone, "test_shared_approved_record_type", "A", [{"address": "1.1.1.1"}]) create_rs = shared_client.create_recordset(record_json, status=202) - result_rs = shared_client.wait_until_recordset_change_status(create_rs, 'Complete')['recordSet'] + result_rs = shared_client.wait_until_recordset_change_status(create_rs, "Complete")["recordSet"] - delete_rs = 
ok_client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - ok_client.wait_until_recordset_change_status(delete_rs, 'Complete') + delete_rs = ok_client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + ok_client.wait_until_recordset_change_status(delete_rs, "Complete") def test_delete_for_user_not_in_record_owner_group_in_shared_zone_fails(shared_zone_test_context): """ @@ -728,19 +728,19 @@ def test_delete_for_user_not_in_record_owner_group_in_shared_zone_fails(shared_z shared_zone = shared_zone_test_context.shared_zone result_rs = None - record_json = get_recordset_json(shared_zone, 'test_shared_del_nonog', 'A', [{'address': '1.1.1.1'}], ownergroup_id = shared_zone_test_context.shared_record_group['id']) + record_json = create_recordset(shared_zone, "test_shared_del_nonog", "A", [{"address": "1.1.1.1"}], ownergroup_id = shared_zone_test_context.shared_record_group["id"]) try: create_rs = shared_client.create_recordset(record_json, status=202) - result_rs = shared_client.wait_until_recordset_change_status(create_rs, 'Complete')['recordSet'] + result_rs = shared_client.wait_until_recordset_change_status(create_rs, "Complete")["recordSet"] - error = dummy_client.delete_recordset(shared_zone['id'], result_rs['id'], status=403) - assert_that(error, is_('User dummy does not have access to delete test-shared-del-nonog.shared.')) + error = dummy_client.delete_recordset(shared_zone["id"], result_rs["id"], status=403) + assert_that(error, is_("User dummy does not have access to delete test-shared-del-nonog.shared.")) finally: if result_rs: - delete_rs = shared_client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - shared_client.wait_until_recordset_change_status(delete_rs, 'Complete') + delete_rs = shared_client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + shared_client.wait_until_recordset_change_status(delete_rs, "Complete") def 
test_delete_for_user_not_in_unowned_record_in_shared_zone_fails_if_record_type_is_not_approved(shared_zone_test_context): """ @@ -752,19 +752,19 @@ def test_delete_for_user_not_in_unowned_record_in_shared_zone_fails_if_record_ty shared_zone = shared_zone_test_context.shared_zone result_rs = None - record_json = get_recordset_json(shared_zone, 'test_shared_del_not_approved_record_type', 'MX', [{'preference': 3, 'exchange': 'mx'}]) + record_json = create_recordset(shared_zone, "test_shared_del_not_approved_record_type", "MX", [{"preference": 3, "exchange": "mx"}]) try: create_rs = shared_client.create_recordset(record_json, status=202) - result_rs = shared_client.wait_until_recordset_change_status(create_rs, 'Complete')['recordSet'] + result_rs = shared_client.wait_until_recordset_change_status(create_rs, "Complete")["recordSet"] - error = dummy_client.delete_recordset(shared_zone['id'], result_rs['id'], status=403) - assert_that(error, is_('User dummy does not have access to delete test-shared-del-not-approved-record-type.shared.')) + error = dummy_client.delete_recordset(shared_zone["id"], result_rs["id"], status=403) + assert_that(error, is_("User dummy does not have access to delete test-shared-del-not-approved-record-type.shared.")) finally: if result_rs: - delete_rs = shared_client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - shared_client.wait_until_recordset_change_status(delete_rs, 'Complete') + delete_rs = shared_client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + shared_client.wait_until_recordset_change_status(delete_rs, "Complete") def test_delete_for_user_in_record_owner_group_in_non_shared_zone_fails(shared_zone_test_context): """ @@ -775,16 +775,16 @@ def test_delete_for_user_in_record_owner_group_in_non_shared_zone_fails(shared_z ok_zone = shared_zone_test_context.ok_zone result_rs = None - record_json = get_recordset_json(ok_zone, 'test_non_shared_del_og', 'A', [{'address': '1.1.1.1'}], ownergroup_id = 
shared_zone_test_context.shared_record_group['id']) + record_json = create_recordset(ok_zone, "test_non_shared_del_og", "A", [{"address": "1.1.1.1"}], ownergroup_id = shared_zone_test_context.shared_record_group["id"]) try: create_rs = ok_client.create_recordset(record_json, status=202) - result_rs = ok_client.wait_until_recordset_change_status(create_rs, 'Complete')['recordSet'] + result_rs = ok_client.wait_until_recordset_change_status(create_rs, "Complete")["recordSet"] - error = shared_client.delete_recordset(ok_zone['id'], result_rs['id'], status=403) - assert_that(error, is_('User sharedZoneUser does not have access to delete test-non-shared-del-og.ok.')) + error = shared_client.delete_recordset(ok_zone["id"], result_rs["id"], status=403) + assert_that(error, is_("User sharedZoneUser does not have access to delete test-non-shared-del-og.ok.")) finally: if result_rs: - delete_rs = ok_client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - ok_client.wait_until_recordset_change_status(delete_rs, 'Complete') + delete_rs = ok_client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + ok_client.wait_until_recordset_change_status(delete_rs, "Complete") diff --git a/modules/api/functional_test/live_tests/recordsets/get_recordset_test.py b/modules/api/functional_test/live_tests/recordsets/get_recordset_test.py index afe61f47a..63b993f52 100644 --- a/modules/api/functional_test/live_tests/recordsets/get_recordset_test.py +++ b/modules/api/functional_test/live_tests/recordsets/get_recordset_test.py @@ -10,7 +10,7 @@ def test_get_recordset_no_authorization(shared_zone_test_context): Test getting a recordset without authorization """ client = shared_zone_test_context.ok_vinyldns_client - client.get_recordset(shared_zone_test_context.ok_zone['id'], '12345', sign_request=False, status=401) + client.get_recordset(shared_zone_test_context.ok_zone["id"], "12345", sign_request=False, status=401) def 
test_get_recordset(shared_zone_test_context): @@ -21,35 +21,35 @@ def test_get_recordset(shared_zone_test_context): result_rs = None try: new_rs = { - 'zoneId': shared_zone_test_context.ok_zone['id'], - 'name': 'test_get_recordset', - 'type': 'A', - 'ttl': 100, - 'records': [ + "zoneId": shared_zone_test_context.ok_zone["id"], + "name": "test_get_recordset", + "type": "A", + "ttl": 100, + "records": [ { - 'address': '10.1.1.1' + "address": "10.1.1.1" }, { - 'address': '10.2.2.2' + "address": "10.2.2.2" } ] } result = client.create_recordset(new_rs, status=202) - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] # Get the recordset we just made and verify - result = client.get_recordset(result_rs['zoneId'], result_rs['id']) - result_rs = result['recordSet'] + result = client.get_recordset(result_rs["zoneId"], result_rs["id"]) + result_rs = result["recordSet"] verify_recordset(result_rs, new_rs) - records = [x['address'] for x in result_rs['records']] + records = [x["address"] for x in result_rs["records"]] assert_that(records, has_length(2)) - assert_that('10.1.1.1', is_in(records)) - assert_that('10.2.2.2', is_in(records)) + assert_that("10.1.1.1", is_in(records)) + assert_that("10.2.2.2", is_in(records)) finally: if result_rs: - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") def test_get_recordset_zone_doesnt_exist(shared_zone_test_context): @@ -58,28 +58,28 @@ def test_get_recordset_zone_doesnt_exist(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client new_rs = { - 'zoneId': shared_zone_test_context.ok_zone['id'], - 'name': 
'test_get_recordset_zone_doesnt_exist', - 'type': 'A', - 'ttl': 100, - 'records': [ + "zoneId": shared_zone_test_context.ok_zone["id"], + "name": "test_get_recordset_zone_doesnt_exist", + "type": "A", + "ttl": 100, + "records": [ { - 'address': '10.1.1.1' + "address": "10.1.1.1" }, { - 'address': '10.2.2.2' + "address": "10.2.2.2" } ] } result_rs = None try: result = client.create_recordset(new_rs, status=202) - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] - client.get_recordset('5678', result_rs['id'], status=404) + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] + client.get_recordset("5678", result_rs["id"], status=404) finally: if result_rs: - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") def test_get_recordset_doesnt_exist(shared_zone_test_context): @@ -87,7 +87,7 @@ def test_get_recordset_doesnt_exist(shared_zone_test_context): Test getting a new recordset that doesn't exist should return a 404 """ client = shared_zone_test_context.ok_vinyldns_client - client.get_recordset(shared_zone_test_context.ok_zone['id'], '123', status=404) + client.get_recordset(shared_zone_test_context.ok_zone["id"], "123", status=404) @pytest.mark.serial @@ -100,35 +100,35 @@ def test_at_get_recordset(shared_zone_test_context): result_rs = None try: new_rs = { - 'zoneId': ok_zone['id'], - 'name': '@', - 'type': 'TXT', - 'ttl': 100, - 'records': [ + "zoneId": ok_zone["id"], + "name": "@", + "type": "TXT", + "ttl": 100, + "records": [ { - 'text': 'someText' + "text": "someText" } ] } result = client.create_recordset(new_rs, status=202) - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + 
result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] # Get the recordset we just made and verify - result = client.get_recordset(result_rs['zoneId'], result_rs['id']) - result_rs = result['recordSet'] + result = client.get_recordset(result_rs["zoneId"], result_rs["id"]) + result_rs = result["recordSet"] expected_rs = new_rs - expected_rs['name'] = ok_zone['name'] + expected_rs["name"] = ok_zone["name"] verify_recordset(result_rs, expected_rs) - records = result_rs['records'] + records = result_rs["records"] assert_that(records, has_length(1)) - assert_that(records[0]['text'], is_('someText')) + assert_that(records[0]["text"], is_("someText")) finally: if result_rs: - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") def test_get_recordset_from_shared_zone(shared_zone_test_context): """ @@ -137,27 +137,27 @@ def test_get_recordset_from_shared_zone(shared_zone_test_context): client = shared_zone_test_context.shared_zone_vinyldns_client retrieved_rs = None try: - new_rs = get_recordset_json(shared_zone_test_context.shared_zone, - "test_get_recordset", "TXT", [{'text':'should-work'}], - 100, - shared_zone_test_context.shared_record_group['id']) + new_rs = create_recordset(shared_zone_test_context.shared_zone, + "test_get_recordset", "TXT", [{"text":"should-work"}], + 100, + shared_zone_test_context.shared_record_group["id"]) result = client.create_recordset(new_rs, status=202) - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] # Get the recordset we just made and verify ok_client = shared_zone_test_context.ok_vinyldns_client - retrieved = 
ok_client.get_recordset(result_rs['zoneId'], result_rs['id']) - retrieved_rs = retrieved['recordSet'] + retrieved = ok_client.get_recordset(result_rs["zoneId"], result_rs["id"]) + retrieved_rs = retrieved["recordSet"] verify_recordset(retrieved_rs, new_rs) - assert_that(retrieved_rs['ownerGroupId'], is_(shared_zone_test_context.shared_record_group['id'])) - assert_that(retrieved_rs['ownerGroupName'], is_('record-ownergroup')) + assert_that(retrieved_rs["ownerGroupId"], is_(shared_zone_test_context.shared_record_group["id"])) + assert_that(retrieved_rs["ownerGroupName"], is_("record-ownergroup")) finally: if retrieved_rs: - delete_result = client.delete_recordset(retrieved_rs['zoneId'], retrieved_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(retrieved_rs["zoneId"], retrieved_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") def test_get_unowned_recordset_from_shared_zone_succeeds_if_record_type_approved(shared_zone_test_context): """ @@ -167,21 +167,21 @@ def test_get_unowned_recordset_from_shared_zone_succeeds_if_record_type_approved ok_client = shared_zone_test_context.ok_vinyldns_client result_rs = None try: - new_rs = get_recordset_json(shared_zone_test_context.shared_zone, + new_rs = create_recordset(shared_zone_test_context.shared_zone, "test_get_unowned_recordset_approved_type", "A", [{"address": "1.2.3.4"}]) result = client.create_recordset(new_rs, status=202) - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] # Get the recordset we just made and verify - retrieved = ok_client.get_recordset(result_rs['zoneId'], result_rs['id'], status=200) - retrieved_rs = retrieved['recordSet'] + retrieved = ok_client.get_recordset(result_rs["zoneId"], result_rs["id"], status=200) + retrieved_rs = retrieved["recordSet"] 
verify_recordset(retrieved_rs, new_rs) finally: if result_rs: - delete_result = ok_client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - ok_client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = ok_client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + ok_client.wait_until_recordset_change_status(delete_result, "Complete") def test_get_unowned_recordset_from_shared_zone_fails_if_record_type_not_approved(shared_zone_test_context): """ @@ -190,21 +190,21 @@ def test_get_unowned_recordset_from_shared_zone_fails_if_record_type_not_approve client = shared_zone_test_context.shared_zone_vinyldns_client result_rs = None try: - new_rs = get_recordset_json(shared_zone_test_context.shared_zone, - "test_get_unowned_recordset", "MX", [{'preference': 3, 'exchange': 'mx'}]) + new_rs = create_recordset(shared_zone_test_context.shared_zone, + "test_get_unowned_recordset", "MX", [{"preference": 3, "exchange": "mx"}]) result = client.create_recordset(new_rs, status=202) - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] # Get the recordset we just made and verify ok_client = shared_zone_test_context.ok_vinyldns_client - error = ok_client.get_recordset(result_rs['zoneId'], result_rs['id'], status=403) + error = ok_client.get_recordset(result_rs["zoneId"], result_rs["id"], status=403) assert_that(error, is_("User ok does not have access to view test-get-unowned-recordset.shared.")) finally: if result_rs: - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") def 
test_get_owned_recordset_from_not_shared_zone(shared_zone_test_context):
     """
@@ -213,18 +213,18 @@ def test_get_owned_recordset_from_not_shared_zone(shared_zone_test_context):
     client = shared_zone_test_context.ok_vinyldns_client
     result_rs = None
     try:
-        new_rs = get_recordset_json(shared_zone_test_context.ok_zone,
-                                    "test_cant_get_owned_recordset", "TXT", [{'text':'should-work'}],
-                                    100,
-                                    shared_zone_test_context.shared_record_group['id'])
+        new_rs = create_recordset(shared_zone_test_context.ok_zone,
+                                  "test_cant_get_owned_recordset", "TXT", [{"text":"should-work"}],
+                                  100,
+                                  shared_zone_test_context.shared_record_group["id"])
         result = client.create_recordset(new_rs, status=202)
-        result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet']
+        result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]

         # Get the recordset we just made and verify
         shared_client = shared_zone_test_context.shared_zone_vinyldns_client
-        shared_client.get_recordset(result_rs['zoneId'], result_rs['id'], status=403)
+        shared_client.get_recordset(result_rs["zoneId"], result_rs["id"], status=403)
     finally:
         if result_rs:
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")
diff --git a/modules/api/functional_test/live_tests/recordsets/list_recordset_changes_test.py b/modules/api/functional_test/live_tests/recordsets/list_recordset_changes_test.py
index f28c8c788..1aa9ba62b 100644
--- a/modules/api/functional_test/live_tests/recordsets/list_recordset_changes_test.py
+++ b/modules/api/functional_test/live_tests/recordsets/list_recordset_changes_test.py
@@ -1,34 +1,34 @@
-from hamcrest import *
+import pytest
+
 from utils import *
-from vinyldns_python import VinylDNSClient


 def check_changes_response(response, recordChanges=False, nextId=False, startFrom=False, maxItems=100):
     """
     :param response: return value of list_recordset_changes()
     :param recordChanges: true if not empty or False if empty, cannot check exact values because don't have access to all attributes
-    :param nextId: true if exists, false if doesn't, wouldn't be able to check exact value
+    :param nextId: true if exists, false if doesn"t, wouldn"t be able to check exact value
     :param startFrom: the string for startFrom or false if doesnt exist
     :param maxItems: maxItems is defined as an Int by default so will always return an Int
     """
-    assert_that(response, has_key('zoneId')) #always defined as random string
+    assert_that(response, has_key("zoneId"))  # always defined as random string

     if recordChanges:
-        assert_that(response['recordSetChanges'], is_not(has_length(0)))
+        assert_that(response["recordSetChanges"], is_not(has_length(0)))
     else:
-        assert_that(response['recordSetChanges'], has_length(0))
+        assert_that(response["recordSetChanges"], has_length(0))
     if nextId:
-        assert_that(response, has_key('nextId'))
+        assert_that(response, has_key("nextId"))
     else:
-        assert_that(response, is_not(has_key('nextId')))
+        assert_that(response, is_not(has_key("nextId")))
     if startFrom:
-        assert_that(response['startFrom'], is_(startFrom))
+        assert_that(response["startFrom"], is_(startFrom))
     else:
-        assert_that(response, is_not(has_key('startFrom')))
-    assert_that(response['maxItems'], is_(maxItems))
+        assert_that(response, is_not(has_key("startFrom")))
+    assert_that(response["maxItems"], is_(maxItems))

-    for change in response['recordSetChanges']:
-        assert_that(change['userName'], is_('history-user'))
+    for change in response["recordSetChanges"]:
+        assert_that(change["userName"], is_("history-user"))


 def test_list_recordset_changes_no_authorization(shared_zone_test_context):
@@ -37,7 +37,7 @@ def test_list_recordset_changes_no_authorization(shared_zone_test_context):
     """
     client = shared_zone_test_context.ok_vinyldns_client
-    client.list_recordset_changes('12345', sign_request=False, status=401)
+    client.list_recordset_changes("12345", sign_request=False, status=401)


 def test_list_recordset_changes_member_auth_success(shared_zone_test_context):
@@ -46,7 +46,7 @@
     """
     client = shared_zone_test_context.ok_vinyldns_client
     zone = shared_zone_test_context.ok_zone
-    client.list_recordset_changes(zone['id'], status=200)
+    client.list_recordset_changes(zone["id"], status=200)


 def test_list_recordset_changes_member_auth_no_access(shared_zone_test_context):
@@ -55,7 +55,7 @@
     """
     client = shared_zone_test_context.dummy_vinyldns_client
     zone = shared_zone_test_context.ok_zone
-    client.list_recordset_changes(zone['id'], status=403)
+    client.list_recordset_changes(zone["id"], status=403)


 @pytest.mark.serial
@@ -64,13 +64,13 @@ def test_list_recordset_changes_member_auth_with_acl(shared_zone_test_context):
     Test recordset changes succeeds for user with acl rules
     """
     zone = shared_zone_test_context.ok_zone
-    acl_rule = generate_acl_rule('Write', userId='dummy')
+    acl_rule = generate_acl_rule("Write", userId="dummy")
     try:
         client = shared_zone_test_context.dummy_vinyldns_client
-        client.list_recordset_changes(zone['id'], status=403)
+        client.list_recordset_changes(zone["id"], status=403)
         add_ok_acl_rules(shared_zone_test_context, [acl_rule])
-        client.list_recordset_changes(zone['id'], status=200)
+        client.list_recordset_changes(zone["id"], status=200)
     finally:
         clear_ok_acl_rules(shared_zone_test_context)

@@ -81,19 +81,19 @@ def test_list_recordset_changes_no_start(shared_zone_test_context):
     """
     client = shared_zone_test_context.history_client
     original_zone = shared_zone_test_context.history_zone
-    response = client.list_recordset_changes(original_zone['id'], start_from=None, max_items=None)
+    response = client.list_recordset_changes(original_zone["id"], start_from=None, max_items=None)
     check_changes_response(response, recordChanges=True, startFrom=False, nextId=False)

-    deleteChanges = response['recordSetChanges'][0:3]
-    updateChanges = response['recordSetChanges'][3:6]
-    createChanges = response['recordSetChanges'][6:9]
+    deleteChanges = response["recordSetChanges"][0:3]
+    updateChanges = response["recordSetChanges"][3:6]
+    createChanges = response["recordSetChanges"][6:9]

     for change in deleteChanges:
-        assert_that(change['changeType'], is_('Delete'))
+        assert_that(change["changeType"], is_("Delete"))
     for change in updateChanges:
-        assert_that(change['changeType'], is_('Update'))
+        assert_that(change["changeType"], is_("Update"))
     for change in createChanges:
-        assert_that(change['changeType'], is_('Create'))
+        assert_that(change["changeType"], is_("Create"))


 def test_list_recordset_changes_paging(shared_zone_test_context):
@@ -103,22 +103,22 @@
     client = shared_zone_test_context.history_client
     original_zone = shared_zone_test_context.history_zone

-    response_1 = client.list_recordset_changes(original_zone['id'], start_from=None, max_items=3)
-    response_2 = client.list_recordset_changes(original_zone['id'], start_from=response_1['nextId'], max_items=3)
+    response_1 = client.list_recordset_changes(original_zone["id"], start_from=None, max_items=3)
+    response_2 = client.list_recordset_changes(original_zone["id"], start_from=response_1["nextId"], max_items=3)
     # nextId differs local/in dev where we get exactly the last item
     # Requesting one over the total in the local in memory dynamo will force consistent behavior.
-    response_3 = client.list_recordset_changes(original_zone['id'], start_from=response_2['nextId'], max_items=11)
+    response_3 = client.list_recordset_changes(original_zone["id"], start_from=response_2["nextId"], max_items=11)

     check_changes_response(response_1, recordChanges=True, nextId=True, startFrom=False, maxItems=3)
-    check_changes_response(response_2, recordChanges=True, nextId=True, startFrom=response_1['nextId'], maxItems=3)
-    check_changes_response(response_3, recordChanges=True, nextId=False, startFrom=response_2['nextId'], maxItems=11)
+    check_changes_response(response_2, recordChanges=True, nextId=True, startFrom=response_1["nextId"], maxItems=3)
+    check_changes_response(response_3, recordChanges=True, nextId=False, startFrom=response_2["nextId"], maxItems=11)

-    for change in response_1['recordSetChanges']:
-        assert_that(change['changeType'], is_('Delete'))
-    for change in response_2['recordSetChanges']:
-        assert_that(change['changeType'], is_('Update'))
-    for change in response_3['recordSetChanges']:
-        assert_that(change['changeType'], is_('Create'))
+    for change in response_1["recordSetChanges"]:
+        assert_that(change["changeType"], is_("Delete"))
+    for change in response_2["recordSetChanges"]:
+        assert_that(change["changeType"], is_("Update"))
+    for change in response_3["recordSetChanges"]:
+        assert_that(change["changeType"], is_("Create"))


 def test_list_recordset_changes_exhausted(shared_zone_test_context):
@@ -127,19 +127,19 @@
     """
     client = shared_zone_test_context.history_client
     original_zone = shared_zone_test_context.history_zone
-    response = client.list_recordset_changes(original_zone['id'], start_from=None, max_items=17)
+    response = client.list_recordset_changes(original_zone["id"], start_from=None, max_items=17)
     check_changes_response(response, recordChanges=True, startFrom=False, nextId=False, maxItems=17)

-    deleteChanges = response['recordSetChanges'][0:3]
-    updateChanges = response['recordSetChanges'][3:6]
-    createChanges = response['recordSetChanges'][6:9]
+    deleteChanges = response["recordSetChanges"][0:3]
+    updateChanges = response["recordSetChanges"][3:6]
+    createChanges = response["recordSetChanges"][6:9]

     for change in deleteChanges:
-        assert_that(change['changeType'], is_('Delete'))
+        assert_that(change["changeType"], is_("Delete"))
     for change in updateChanges:
-        assert_that(change['changeType'], is_('Update'))
+        assert_that(change["changeType"], is_("Update"))
     for change in createChanges:
-        assert_that(change['changeType'], is_('Create'))
+        assert_that(change["changeType"], is_("Create"))


 def test_list_recordset_returning_no_changes(shared_zone_test_context):
@@ -148,8 +148,8 @@
     """
     client = shared_zone_test_context.history_client
     original_zone = shared_zone_test_context.history_zone
-    response = client.list_recordset_changes(original_zone['id'], start_from='0', max_items=None)
-    check_changes_response(response, recordChanges=False, startFrom='0', nextId=False)
+    response = client.list_recordset_changes(original_zone["id"], start_from="0", max_items=None)
+    check_changes_response(response, recordChanges=False, startFrom="0", nextId=False)


 def test_list_recordset_changes_default_max_items(shared_zone_test_context):
@@ -159,7 +159,7 @@
     client = shared_zone_test_context.history_client
     original_zone = shared_zone_test_context.history_zone

-    response = client.list_recordset_changes(original_zone['id'], start_from=None, max_items=None)
+    response = client.list_recordset_changes(original_zone["id"], start_from=None, max_items=None)
     check_changes_response(response, recordChanges=True, startFrom=False, nextId=False, maxItems=100)

@@ -170,8 +170,8 @@ def test_list_recordset_changes_max_items_boundaries(shared_zone_test_context):
     client = shared_zone_test_context.history_client
     original_zone = shared_zone_test_context.history_zone

-    too_large = client.list_recordset_changes(original_zone['id'], start_from=None, max_items=101, status=400)
-    too_small = client.list_recordset_changes(original_zone['id'], start_from=None, max_items=0, status=400)
+    too_large = client.list_recordset_changes(original_zone["id"], start_from=None, max_items=101, status=400)
+    too_small = client.list_recordset_changes(original_zone["id"], start_from=None, max_items=0, status=400)

     assert_that(too_large, is_("maxItems was 101, maxItems must be between 0 exclusive and 100 inclusive"))
     assert_that(too_small, is_("maxItems was 0, maxItems must be between 0 exclusive and 100 inclusive"))
diff --git a/modules/api/functional_test/live_tests/recordsets/list_recordsets_test.py b/modules/api/functional_test/live_tests/recordsets/list_recordsets_test.py
index bf6bb7e68..97c61a78c 100644
--- a/modules/api/functional_test/live_tests/recordsets/list_recordsets_test.py
+++ b/modules/api/functional_test/live_tests/recordsets/list_recordsets_test.py
@@ -19,7 +19,7 @@ def test_list_recordsets_no_start(rs_fixture):
     client = rs_fixture.client
     rs_zone = rs_fixture.zone

-    list_results = client.list_recordsets_by_zone(rs_zone['id'], status=200)
+    list_results = client.list_recordsets_by_zone(rs_zone["id"], status=200)
     rs_fixture.check_recordsets_page_accuracy(list_results, size=22, offset=0)

@@ -34,28 +34,28 @@ def test_list_recordsets_with_owner_group_id_and_owner_group_name(rs_fixture):
     result_rs = None
     try:
         # create a record in the zone with an owner group ID
-        new_rs = get_recordset_json(rs_zone,
-                                    "test-owned-recordset", "TXT", [{'text':'should-work'}],
-                                    100,
-                                    shared_group['id'])
+        new_rs = create_recordset(rs_zone,
+                                  "test-owned-recordset", "TXT", [{"text":"should-work"}],
+                                  100,
+                                  shared_group["id"])
         result = client.create_recordset(new_rs, status=202)
-        result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet']
+        result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]

-        list_results = client.list_recordsets_by_zone(rs_zone['id'], status=200)
-        assert_that(list_results['recordSets'], has_length(23))
+        list_results = client.list_recordsets_by_zone(rs_zone["id"], status=200)
+        assert_that(list_results["recordSets"], has_length(23))

         # confirm the created recordset is in the list of recordsets
-        rs_from_list = (r for r in list_results['recordSets'] if r['id'] == result_rs['id']).next()
-        assert_that(rs_from_list['name'], is_("test-owned-recordset"))
-        assert_that(rs_from_list['ownerGroupId'], is_(shared_group['id']))
-        assert_that(rs_from_list['ownerGroupName'], is_(shared_group['name']))
+        rs_from_list = next((r for r in list_results["recordSets"] if r["id"] == result_rs["id"]))
+        assert_that(rs_from_list["name"], is_("test-owned-recordset"))
+        assert_that(rs_from_list["ownerGroupId"], is_(shared_group["id"]))
+        assert_that(rs_from_list["ownerGroupName"], is_(shared_group["name"]))
     finally:
         if result_rs:
-            delete_result = client.delete_recordset(rs_zone['id'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
-            list_results = client.list_recordsets_by_zone(rs_zone['id'], status=200)
+            delete_result = client.delete_recordset(rs_zone["id"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")
+            list_results = client.list_recordsets_by_zone(rs_zone["id"], status=200)
             rs_fixture.check_recordsets_page_accuracy(list_results, size=22, offset=0)

@@ -67,18 +67,18 @@ def test_list_recordsets_multiple_pages(rs_fixture):
     rs_zone = rs_fixture.zone

     # first page of 2 items
-    list_results_page = client.list_recordsets_by_zone(rs_zone['id'], max_items=2, status=200)
-    rs_fixture.check_recordsets_page_accuracy(list_results_page, size=2, offset=0, nextId=True, maxItems=2)
+    list_results_page = client.list_recordsets_by_zone(rs_zone["id"], max_items=2, status=200)
+    rs_fixture.check_recordsets_page_accuracy(list_results_page, size=2, offset=0, next_id=True, max_items=2)

     # second page of 5 items
-    start = list_results_page['nextId']
-    list_results_page = client.list_recordsets_by_zone(rs_zone['id'], start_from=start, max_items=5, status=200)
-    rs_fixture.check_recordsets_page_accuracy(list_results_page, size=5, offset=2, nextId=True, startFrom=start, maxItems=5)
+    start = list_results_page["nextId"]
+    list_results_page = client.list_recordsets_by_zone(rs_zone["id"], start_from=start, max_items=5, status=200)
+    rs_fixture.check_recordsets_page_accuracy(list_results_page, size=5, offset=2, next_id=True, start_from=start, max_items=5)

     # third page of 6 items
-    start = list_results_page['nextId']
-    list_results_page = client.list_recordsets_by_zone(rs_zone['id'], start_from=start, max_items=16, status=200)
-    rs_fixture.check_recordsets_page_accuracy(list_results_page, size=15, offset=7, nextId=False, startFrom=start, maxItems=16)
+    start = list_results_page["nextId"]
+    list_results_page = client.list_recordsets_by_zone(rs_zone["id"], start_from=start, max_items=16, status=200)
+    rs_fixture.check_recordsets_page_accuracy(list_results_page, size=15, offset=7, next_id=False, start_from=start, max_items=16)


 def test_list_recordsets_excess_page_size(rs_fixture):
@@ -89,8 +89,8 @@ def test_list_recordsets_excess_page_size(rs_fixture):
     rs_zone = rs_fixture.zone

     #page of 22 items
-    list_results_page = client.list_recordsets_by_zone(rs_zone['id'], max_items=23, status=200)
-    rs_fixture.check_recordsets_page_accuracy(list_results_page, size=22, offset=0, maxItems=23, nextId=False)
+    list_results_page = client.list_recordsets_by_zone(rs_zone["id"], max_items=23, status=200)
+    rs_fixture.check_recordsets_page_accuracy(list_results_page, size=22, offset=0, max_items=23, next_id=False)


 def test_list_recordsets_fails_max_items_too_large(rs_fixture):
@@ -100,7 +100,7 @@ def test_list_recordsets_fails_max_items_too_large(rs_fixture):
     client = rs_fixture.client
     rs_zone = rs_fixture.zone

-    client.list_recordsets_by_zone(rs_zone['id'], max_items=200, status=400)
+    client.list_recordsets_by_zone(rs_zone["id"], max_items=200, status=400)


 def test_list_recordsets_fails_max_items_too_small(rs_fixture):
@@ -110,7 +110,7 @@ def test_list_recordsets_fails_max_items_too_small(rs_fixture):
     client = rs_fixture.client
     rs_zone = rs_fixture.zone

-    client.list_recordsets_by_zone(rs_zone['id'], max_items=0, status=400)
+    client.list_recordsets_by_zone(rs_zone["id"], max_items=0, status=400)


 def test_list_recordsets_default_size_is_100(rs_fixture):
@@ -120,8 +120,8 @@ def test_list_recordsets_default_size_is_100(rs_fixture):
     client = rs_fixture.client
     rs_zone = rs_fixture.zone

-    list_results = client.list_recordsets_by_zone(rs_zone['id'], status=200)
-    rs_fixture.check_recordsets_page_accuracy(list_results, size=22, offset=0, maxItems=100)
+    list_results = client.list_recordsets_by_zone(rs_zone["id"], status=200)
+    rs_fixture.check_recordsets_page_accuracy(list_results, size=22, offset=0, max_items=100)


 @pytest.mark.serial
@@ -135,107 +135,107 @@ def test_list_recordsets_duplicate_names(rs_fixture):
     created = []
     try:
-        record_data_a = [{'address': '1.1.1.1'}]
-        record_data_txt = [{'text': 'some=value'}]
+        record_data_a = [{"address": "1.1.1.1"}]
+        record_data_txt = [{"text": "some=value"}]

-        record_json_a = get_recordset_json(rs_zone, '0', 'A', record_data_a, ttl=100)
-        record_json_txt = get_recordset_json(rs_zone, '0', 'TXT', record_data_txt, ttl=100)
+        record_json_a = create_recordset(rs_zone, "0", "A", record_data_a, ttl=100)
+        record_json_txt = create_recordset(rs_zone, "0", "TXT", record_data_txt, ttl=100)

         create_response = client.create_recordset(record_json_a, status=202)
-        created.append(client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet']['id'])
+        created.append(client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"]["id"])

         create_response = client.create_recordset(record_json_txt, status=202)
-        created.append(client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet']['id'])
+        created.append(client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"]["id"])

-        list_results = client.list_recordsets_by_zone(rs_zone['id'], status=200, start_from=None, max_items=1)
-        assert_that(list_results['recordSets'][0]['id'], is_(created[0]))
+        list_results = client.list_recordsets_by_zone(rs_zone["id"], status=200, start_from=None, max_items=1)
+        assert_that(list_results["recordSets"][0]["id"], is_(created[0]))

-        list_results = client.list_recordsets_by_zone(rs_zone['id'], status=200, start_from=list_results['nextId'], max_items=1)
-        assert_that(list_results['recordSets'][0]['id'], is_(created[1]))
+        list_results = client.list_recordsets_by_zone(rs_zone["id"], status=200, start_from=list_results["nextId"], max_items=1)
+        assert_that(list_results["recordSets"][0]["id"], is_(created[1]))
     finally:
-        for id in created:
-            client.delete_recordset(rs_zone['id'], id, status=202)
-            client.wait_until_recordset_deleted(rs_zone['id'], id)
+        for recordset_id in created:
+            client.delete_recordset(rs_zone["id"], recordset_id, status=202)
+            client.wait_until_recordset_deleted(rs_zone["id"], recordset_id)


 def test_list_recordsets_with_record_name_filter_all(rs_fixture):
     """
-    Test listing all recordsets whose name contains a substring, all recordsets have substring 'list' in name
+    Test listing all recordsets whose name contains a substring, all recordsets have substring "list" in name
     """
     client = rs_fixture.client
     rs_zone = rs_fixture.zone

-    list_results = client.list_recordsets_by_zone(rs_zone['id'], record_name_filter="*list*", status=200)
+    list_results = client.list_recordsets_by_zone(rs_zone["id"], record_name_filter="*list*", status=200)
     rs_fixture.check_recordsets_page_accuracy(list_results, size=22, offset=0)


 def test_list_recordsets_with_record_name_filter_and_page_size(rs_fixture):
     """
-    First Listing 4 out of 5 recordsets with substring 'CNAME' in name
-    Second Listing 10 out of 10 recordsets with substring 'CNAME' in name with an excess page size of 12
+    First Listing 4 out of 5 recordsets with substring "CNAME" in name
+    Second Listing 10 out of 10 recordsets with substring "CNAME" in name with an excess page size of 12
     """
     client = rs_fixture.client
     rs_zone = rs_fixture.zone

     # page of 4 items
-    list_results = client.list_recordsets_by_zone(rs_zone['id'], max_items=4, record_name_filter="*CNAME*", status=200)
-    assert_that(list_results['recordSets'], has_length(4))
+    list_results = client.list_recordsets_by_zone(rs_zone["id"], max_items=4, record_name_filter="*CNAME*", status=200)
+    assert_that(list_results["recordSets"], has_length(4))

-    list_results_records = list_results['recordSets'];
+    list_results_records = list_results["recordSets"]
     for i in range(len(list_results_records)):
-        assert_that(list_results_records[i]['name'], contains_string('CNAME'))
+        assert_that(list_results_records[i]["name"], contains_string("CNAME"))

     # page of 5 items but excess max items
-    list_results = client.list_recordsets_by_zone(rs_zone['id'], max_items=12, record_name_filter="*CNAME*", status=200)
-    assert_that(list_results['recordSets'], has_length(10))
+    list_results = client.list_recordsets_by_zone(rs_zone["id"], max_items=12, record_name_filter="*CNAME*", status=200)
+    assert_that(list_results["recordSets"], has_length(10))

-    list_results_records = list_results['recordSets'];
+    list_results_records = list_results["recordSets"]
     for i in range(len(list_results_records)):
-        assert_that(list_results_records[i]['name'], contains_string('CNAME'))
+        assert_that(list_results_records[i]["name"], contains_string("CNAME"))


 def test_list_recordsets_with_record_name_filter_and_chaining_pages_with_nextId(rs_fixture):
     """
-    First Listing 2 out 10 recordsets with substring 'CNAME' in name, then using next Id of
+    First Listing 2 out 10 recordsets with substring "CNAME" in name, then using next Id of
     previous page to be the start key of next page
     """
     client = rs_fixture.client
     rs_zone = rs_fixture.zone

     # page of 2 items
-    list_results = client.list_recordsets_by_zone(rs_zone['id'], max_items=2, record_name_filter="*CNAME*", status=200)
-    assert_that(list_results['recordSets'], has_length(2))
-    start_key = list_results['nextId']
+    list_results = client.list_recordsets_by_zone(rs_zone["id"], max_items=2, record_name_filter="*CNAME*", status=200)
+    assert_that(list_results["recordSets"], has_length(2))
+    start_key = list_results["nextId"]

     # page of 2 items
-    list_results = client.list_recordsets_by_zone(rs_zone['id'], start_from=start_key, max_items=2, record_name_filter="*CNAME*", status=200)
-    assert_that(list_results['recordSets'], has_length(2))
+    list_results = client.list_recordsets_by_zone(rs_zone["id"], start_from=start_key, max_items=2, record_name_filter="*CNAME*", status=200)
+    assert_that(list_results["recordSets"], has_length(2))

-    list_results_records = list_results['recordSets']
-    assert_that(list_results_records[0]['name'], contains_string('2-CNAME'))
-    assert_that(list_results_records[1]['name'], contains_string('3-CNAME'))
+    list_results_records = list_results["recordSets"]
+    assert_that(list_results_records[0]["name"], contains_string("2-CNAME"))
+    assert_that(list_results_records[1]["name"], contains_string("3-CNAME"))


 def test_list_recordsets_with_record_name_filter_one(rs_fixture):
     """
-    Test listing all recordsets whose name contains a substring, only one record set has substring '8' in name
+    Test listing all recordsets whose name contains a substring, only one record set has substring "8" in name
     """
     client = rs_fixture.client
     rs_zone = rs_fixture.zone

-    list_results = client.list_recordsets_by_zone(rs_zone['id'], record_name_filter="*1-A*", status=200)
+    list_results = client.list_recordsets_by_zone(rs_zone["id"], record_name_filter="*1-A*", status=200)
     rs_fixture.check_recordsets_page_accuracy(list_results, size=1, offset=2)


 def test_list_recordsets_with_record_name_filter_none(rs_fixture):
     """
-    Test listing all recordsets whose name contains a substring, no record set has substring 'Dummy' in name
+    Test listing all recordsets whose name contains a substring, no record set has substring "Dummy" in name
     """
     client = rs_fixture.client
     rs_zone = rs_fixture.zone

-    list_results = client.list_recordsets_by_zone(rs_zone['id'], record_name_filter="*Dummy*", status=200)
+    list_results = client.list_recordsets_by_zone(rs_zone["id"], record_name_filter="*Dummy*", status=200)
     rs_fixture.check_recordsets_page_accuracy(list_results, size=0, offset=0)

@@ -246,12 +246,12 @@ def test_list_recordsets_with_record_type_filter_single_type(rs_fixture):
     client = rs_fixture.client
     rs_zone = rs_fixture.zone

-    list_results = client.list_recordsets_by_zone(rs_zone['id'], record_type_filter="NS", status=200)
-    rs_fixture.check_recordsets_parameters(list_results, recordTypeFilter="NS")
+    list_results = client.list_recordsets_by_zone(rs_zone["id"], record_type_filter="NS", status=200)
+    rs_fixture.check_recordsets_parameters(list_results, record_type_filter="NS")

-    list_results_records = list_results['recordSets']
-    assert_that(list_results_records[0]['type'], contains_string('NS'))
-    assert_that(list_results_records[0]['name'], contains_string('list-records'))
+    list_results_records = list_results["recordSets"]
+    assert_that(list_results_records[0]["type"], contains_string("NS"))
+    assert_that(list_results_records[0]["name"], contains_string("list-records"))


 def test_list_recordsets_with_record_type_filter_multiple_types(rs_fixture):
@@ -261,14 +261,14 @@ def test_list_recordsets_with_record_type_filter_multiple_types(rs_fixture):
     client = rs_fixture.client
     rs_zone = rs_fixture.zone

-    list_results = client.list_recordsets_by_zone(rs_zone['id'], record_type_filter="NS,CNAME", status=200)
-    rs_fixture.check_recordsets_parameters(list_results, recordTypeFilter="NS,CNAME")
+    list_results = client.list_recordsets_by_zone(rs_zone["id"], record_type_filter="NS,CNAME", status=200)
+    rs_fixture.check_recordsets_parameters(list_results, record_type_filter="NS,CNAME")

-    list_results_records = list_results['recordSets']
-    assert_that(list_results_records[0]['type'], contains_string('CNAME'))
-    assert_that(list_results_records[0]['name'], contains_string('0-CNAME'))
-    assert_that(list_results_records[10]['type'], contains_string('NS'))
-    assert_that(list_results_records[10]['name'], contains_string('list-records'))
+    list_results_records = list_results["recordSets"]
+    assert_that(list_results_records[0]["type"], contains_string("CNAME"))
+    assert_that(list_results_records[0]["name"], contains_string("0-CNAME"))
+    assert_that(list_results_records[10]["type"], contains_string("NS"))
+    assert_that(list_results_records[10]["name"], contains_string("list-records"))


 def test_list_recordsets_with_record_type_filter_valid_and_invalid_type(rs_fixture):
@@ -278,13 +278,13 @@ def test_list_recordsets_with_record_type_filter_valid_and_invalid_type(rs_fixtu
     client = rs_fixture.client
     rs_zone = rs_fixture.zone

-    list_results = client.list_recordsets_by_zone(rs_zone['id'], record_type_filter="FAKE,SOA", status=200)
-    rs_fixture.check_recordsets_parameters(list_results, recordTypeFilter="SOA")
+    list_results = client.list_recordsets_by_zone(rs_zone["id"], record_type_filter="FAKE,SOA", status=200)
+    rs_fixture.check_recordsets_parameters(list_results, record_type_filter="SOA")

-    list_results_records = list_results['recordSets']
+    list_results_records = list_results["recordSets"]
     assert_that(list_results_records, has_length(1))
-    assert_that(list_results_records[0]['type'], contains_string('SOA'))
-    assert_that(list_results_records[0]['name'], contains_string('list-records.'))
+    assert_that(list_results_records[0]["type"], contains_string("SOA"))
+    assert_that(list_results_records[0]["name"], contains_string("list-records."))


 def test_list_recordsets_with_record_type_filter_invalid_type(rs_fixture):
     """
@@ -293,10 +293,10 @@ def test_list_recordsets_with_record_type_filter_invalid_type(rs_fixture):
     client = rs_fixture.client
     rs_zone = rs_fixture.zone

-    list_results = client.list_recordsets_by_zone(rs_zone['id'], record_type_filter="FAKE", status=200)
+    list_results = client.list_recordsets_by_zone(rs_zone["id"], record_type_filter="FAKE", status=200)
     rs_fixture.check_recordsets_parameters(list_results)

-    assert_that(list_results['recordSets'], has_length(22))
+    assert_that(list_results["recordSets"], has_length(22))


 def test_list_recordsets_with_sort_descending(rs_fixture):
@@ -306,14 +306,14 @@
     client = rs_fixture.client
     rs_zone = rs_fixture.zone

-    list_results = client.list_recordsets_by_zone(rs_zone['id'], name_sort="DESC", status=200)
-    rs_fixture.check_recordsets_parameters(list_results, nameSort="DESC")
+    list_results = client.list_recordsets_by_zone(rs_zone["id"], name_sort="DESC", status=200)
+    rs_fixture.check_recordsets_parameters(list_results, name_sort="DESC")

-    list_results_records = list_results['recordSets']
-    assert_that(list_results_records[0]['type'], contains_string('NS'))
-    assert_that(list_results_records[0]['name'], contains_string('list-records.'))
-    assert_that(list_results_records[21]['type'], contains_string('A'))
-    assert_that(list_results_records[21]['name'], contains_string('0-A'))
+    list_results_records = list_results["recordSets"]
+    assert_that(list_results_records[0]["type"], contains_string("NS"))
+    assert_that(list_results_records[0]["name"], contains_string("list-records."))
+    assert_that(list_results_records[21]["type"], contains_string("A"))
+    assert_that(list_results_records[21]["name"], contains_string("0-A"))


 def test_list_recordsets_with_invalid_sort(rs_fixture):
@@ -323,14 +323,14 @@ def test_list_recordsets_with_invalid_sort(rs_fixture):
     client = rs_fixture.client
     rs_zone = rs_fixture.zone

-    list_results = client.list_recordsets_by_zone(rs_zone['id'], name_sort="Nothing", status=200)
-    rs_fixture.check_recordsets_parameters(list_results, nameSort="ASC")
+    list_results = client.list_recordsets_by_zone(rs_zone["id"], name_sort="Nothing", status=200)
+    rs_fixture.check_recordsets_parameters(list_results, name_sort="ASC")

-    list_results_records = list_results['recordSets']
-    assert_that(list_results_records[0]['type'], contains_string('A'))
-    assert_that(list_results_records[0]['name'], contains_string('0-A'))
-    assert_that(list_results_records[21]['type'], contains_string('SOA'))
-    assert_that(list_results_records[21]['name'], contains_string('list-records.'))
+    list_results_records = list_results["recordSets"]
+    assert_that(list_results_records[0]["type"], contains_string("A"))
+    assert_that(list_results_records[0]["name"], contains_string("0-A"))
+    assert_that(list_results_records[21]["type"], contains_string("SOA"))
+    assert_that(list_results_records[21]["name"], contains_string("list-records."))


 def test_list_recordsets_no_authorization(rs_fixture):
@@ -339,7 +339,7 @@ def test_list_recordsets_no_authorization(rs_fixture):
     """
     client = rs_fixture.client
     rs_zone = rs_fixture.zone
-    client.list_recordsets_by_zone(rs_zone['id'], sign_request=False, status=401)
+    client.list_recordsets_by_zone(rs_zone["id"], sign_request=False, status=401)


 @pytest.mark.serial
@@ -352,8 +352,8 @@ def test_list_recordsets_with_acl(shared_zone_test_context):
     new_rs = []

     try:
-        acl_rule1 = generate_acl_rule('Read', groupId=shared_zone_test_context.dummy_group['id'], recordMask='test.*')
-        acl_rule2 = generate_acl_rule('Write', userId='dummy', recordMask='test-list-recordsets-with-acl1')
+        acl_rule1 = generate_acl_rule("Read", groupId=shared_zone_test_context.dummy_group["id"], recordMask="test.*")
+        acl_rule2 = generate_acl_rule("Write", userId="dummy", recordMask="test-list-recordsets-with-acl1")

         rec1 = seed_text_recordset(client, "test-list-recordsets-with-acl1", rs_zone)
         rec2 = seed_text_recordset(client, "test-list-recordsets-with-acl2", rs_zone)
@@ -363,23 +363,23 @@ def test_list_recordsets_with_acl(shared_zone_test_context):

         add_ok_acl_rules(shared_zone_test_context, [acl_rule1, acl_rule2])

-        result = shared_zone_test_context.dummy_vinyldns_client.list_recordsets_by_zone(rs_zone['id'], status=200)
-        result = result['recordSets']
+        result = shared_zone_test_context.dummy_vinyldns_client.list_recordsets_by_zone(rs_zone["id"], status=200)
+        result = result["recordSets"]

         for rs in result:
-            if rs['name'] == rec1['name']:
+            if rs["name"] == rec1["name"]:
                 verify_recordset(rs, rec1)
-                assert_that(rs['accessLevel'], is_('Write'))
-            elif rs['name'] == rec2['name']:
+                assert_that(rs["accessLevel"], is_("Write"))
+            elif rs["name"] == rec2["name"]:
                 verify_recordset(rs, rec2)
-                assert_that(rs['accessLevel'], is_('Read'))
-            elif rs['name'] == rec3['name']:
+                assert_that(rs["accessLevel"], is_("Read"))
+            elif rs["name"] == rec3["name"]:
                 verify_recordset(rs, rec3)
-                assert_that(rs['accessLevel'], is_('NoAccess'))
+                assert_that(rs["accessLevel"], is_("NoAccess"))
     finally:
         clear_ok_acl_rules(shared_zone_test_context)
         for rs in new_rs:
-            client.delete_recordset(rs['zoneId'], rs['id'], status=202)
+            client.delete_recordset(rs["zoneId"], rs["id"], status=202)
         for rs in new_rs:
-            client.wait_until_recordset_deleted(rs['zoneId'], rs['id'])
+            client.wait_until_recordset_deleted(rs["zoneId"], rs["id"])
diff --git a/modules/api/functional_test/live_tests/recordsets/update_recordset_test.py b/modules/api/functional_test/live_tests/recordsets/update_recordset_test.py
index 42e923055..a45c89586 100644
--- a/modules/api/functional_test/live_tests/recordsets/update_recordset_test.py
+++ b/modules/api/functional_test/live_tests/recordsets/update_recordset_test.py
@@ -16,30 +16,30 @@ def test_update_recordset_name_fails(shared_zone_test_context):
     result_rs = None
     try:
         new_rs = {
-            'zoneId': shared_zone_test_context.system_test_zone['id'],
-            'name': 'test-update-change-name-success-1',
-            'type': 'A',
-            'ttl': 500,
-            'records': [
+            "zoneId": shared_zone_test_context.system_test_zone["id"],
+            "name": "test-update-change-name-success-1",
+            "type": "A",
+            "ttl": 500,
+            "records": [
                 {
-                    'address': '1.1.1.1'
+                    "address": "1.1.1.1"
                 },
                 {
-                    'address': '1.1.1.2'
+                    "address": "1.1.1.2"
                 }
             ]
         }
         result = client.create_recordset(new_rs, status=202)
-        result_rs = result['recordSet']
-        result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet']
+        result_rs = result["recordSet"]
+        result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]

         # update the record set, changing the name
         updated_rs = copy.deepcopy(result_rs)
-        updated_rs['name'] = 'test-update-change-name-success-2'
-        updated_rs['ttl'] = 600
-        updated_rs['records'] = [
+        updated_rs["name"] = "test-update-change-name-success-2"
+        updated_rs["ttl"] = 600
+        updated_rs["records"] = [
             {
-                'address': '2.2.2.2'
+                "address": "2.2.2.2"
             }
         ]
@@ -48,9 +48,9 @@ def test_update_recordset_name_fails(shared_zone_test_context):

     finally:
         if result_rs:
-            result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404))
+            result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=(202, 404))
             if result:
-                client.wait_until_recordset_change_status(result, 'Complete')
+                client.wait_until_recordset_change_status(result, "Complete")


 def test_update_recordset_type_fails(shared_zone_test_context):
@@ -61,29 +61,29 @@
     result_rs = None
     try:
         new_rs = {
-            'zoneId': shared_zone_test_context.system_test_zone['id'],
-            'name': 'test-update-change-name-success-1',
-            'type': 'A',
-            'ttl': 500,
-            'records': [
+            "zoneId": shared_zone_test_context.system_test_zone["id"],
+            "name": "test-update-change-name-success-1",
+            "type": "A",
+            "ttl": 500,
+            "records": [
                 {
-                    'address': '1.1.1.1'
+                    "address": "1.1.1.1"
                 },
                 {
-                    'address': '1.1.1.2'
+                    "address": "1.1.1.2"
                 }
             ]
         }
         result = client.create_recordset(new_rs, status=202)
-        result_rs = result['recordSet']
-        result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet']
+        result_rs = result["recordSet"]
+        result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]

         # update the record set, changing the name
         updated_rs = copy.deepcopy(result_rs)
-        updated_rs['type'] = 'AAAA'
-        updated_rs['records'] = [
+        updated_rs["type"] = "AAAA"
+        updated_rs["records"] = [
             {
-                'address': '1::1'
+                "address": "1::1"
             }
         ]
@@ -92,9 +92,9 @@ def test_update_recordset_type_fails(shared_zone_test_context):

     finally:
         if result_rs:
-            result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404))
+            result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=(202, 404))
             if result:
-                client.wait_until_recordset_change_status(result, 'Complete')
+                client.wait_until_recordset_change_status(result, "Complete")


 def test_update_cname_with_multiple_records(shared_zone_test_context):
@@ -105,40 +105,40 @@
     result_rs = None
     try:
         new_rs = {
-            'zoneId': shared_zone_test_context.system_test_zone['id'],
-            'name': 'test_update_cname_with_multiple_records',
-            'type': 'CNAME',
-            'ttl': 500,
-            'records': [
+            "zoneId": shared_zone_test_context.system_test_zone["id"],
+            "name": "test_update_cname_with_multiple_records",
+            "type": "CNAME",
+            "ttl": 500,
+            "records": [
                 {
-                    'cname': 'cname1.'
+                    "cname": "cname1."
} ] } result = client.create_recordset(new_rs, status=202) - result_rs = result['recordSet'] - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + result_rs = result["recordSet"] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] # update the record set, adding another cname record so there are multiple updated_rs = copy.deepcopy(result_rs) - updated_rs['records'] = [ + updated_rs["records"] = [ { - 'cname': 'cname1.' + "cname": "cname1." }, { - 'cname': 'cname2.' + "cname": "cname2." } ] - errors = client.update_recordset(updated_rs, status=400)['errors'] + errors = client.update_recordset(updated_rs, status=400)["errors"] assert_that(errors[0], is_("CNAME record sets cannot contain multiple records")) finally: if result_rs: - r = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(r, 'Complete') + r = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_change_status(r, "Complete") -@pytest.mark.parametrize('record_name,test_rs', TestData.FORWARD_RECORDS) +@pytest.mark.parametrize("record_name,test_rs", TestData.FORWARD_RECORDS) def test_update_recordset_forward_record_types(shared_zone_test_context, record_name, test_rs): """ Test updating a record set in a forward zone @@ -147,42 +147,42 @@ def test_update_recordset_forward_record_types(shared_zone_test_context, record_ result_rs = None try: - new_rs = dict(test_rs, zoneId=shared_zone_test_context.system_test_zone['id']) - new_rs['name'] = generate_record_name() + test_rs['type'] + new_rs = dict(test_rs, zoneId=shared_zone_test_context.system_test_zone["id"]) + new_rs["name"] = generate_record_name() + test_rs["type"] result = client.create_recordset(new_rs, status=202) - assert_that(result['status'], is_('Pending')) - print str(result) + assert_that(result["status"], is_("Pending")) + print(str(result)) - result_rs = 
result['recordSet'] + result_rs = result["recordSet"] verify_recordset(result_rs, new_rs) - records = result_rs['records'] + records = result_rs["records"] - for record in new_rs['records']: + for record in new_rs["records"]: assert_that(records, has_item(has_entries(record))) - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] # now update update_rs = result_rs - update_rs['ttl'] = 1000 + update_rs["ttl"] = 1000 result = client.update_recordset(update_rs, status=202) - assert_that(result['status'], is_('Pending')) - result_rs = result['recordSet'] + assert_that(result["status"], is_("Pending")) + result_rs = result["recordSet"] - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] - assert_that(result_rs['ttl'], is_(1000)) + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] + assert_that(result_rs["ttl"], is_(1000)) finally: if result_rs: - result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) + result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=(202, 404)) if result: - client.wait_until_recordset_change_status(result, 'Complete') + client.wait_until_recordset_change_status(result, "Complete") @pytest.mark.serial -@pytest.mark.parametrize('record_name,test_rs', TestData.REVERSE_RECORDS) +@pytest.mark.parametrize("record_name,test_rs", TestData.REVERSE_RECORDS) def test_update_reverse_record_types(shared_zone_test_context, record_name, test_rs): """ Test updating a record set in a reverse zone @@ -192,38 +192,38 @@ def test_update_reverse_record_types(shared_zone_test_context, record_name, test result_rs = None try: - new_rs = dict(test_rs, zoneId=shared_zone_test_context.ip4_reverse_zone['id']) + new_rs = dict(test_rs, zoneId=shared_zone_test_context.ip4_reverse_zone["id"]) result = 
client.create_recordset(new_rs, status=202) - assert_that(result['status'], is_('Pending')) - print str(result) + assert_that(result["status"], is_("Pending")) + print(str(result)) - result_rs = result['recordSet'] + result_rs = result["recordSet"] verify_recordset(result_rs, new_rs) - records = result_rs['records'] + records = result_rs["records"] - for record in new_rs['records']: + for record in new_rs["records"]: assert_that(records, has_item(has_entries(record))) - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] # now update update_rs = result_rs - update_rs['ttl'] = 1000 + update_rs["ttl"] = 1000 result = client.update_recordset(update_rs, status=202) - assert_that(result['status'], is_('Pending')) - result_rs = result['recordSet'] + assert_that(result["status"], is_("Pending")) + result_rs = result["recordSet"] - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] - assert_that(result_rs['ttl'], is_(1000)) + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] + assert_that(result_rs["ttl"], is_(1000)) finally: if result_rs: - result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) + result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=(202, 404)) if result: - client.wait_until_recordset_change_status(result, 'Complete') + client.wait_until_recordset_change_status(result, "Complete") def test_update_record_in_zone_user_owns(shared_zone_test_context): @@ -235,29 +235,29 @@ def test_update_record_in_zone_user_owns(shared_zone_test_context): try: rs = client.create_recordset( { - 'zoneId': shared_zone_test_context.ok_zone['id'], - 'name': 'test_user_can_update_record_in_zone_it_owns', - 'type': 'A', - 'ttl': 100, - 'records': [ + "zoneId": shared_zone_test_context.ok_zone["id"], + "name": 
"test_user_can_update_record_in_zone_it_owns", + "type": "A", + "ttl": 100, + "records": [ { - 'address': '10.1.1.1' + "address": "10.1.1.1" } ] }, status=202 - )['recordSet'] - client.wait_until_recordset_exists(rs['zoneId'], rs['id']) + )["recordSet"] + client.wait_until_recordset_exists(rs["zoneId"], rs["id"]) - rs['ttl'] = rs['ttl'] + 1000 + rs["ttl"] = rs["ttl"] + 1000 result = client.update_recordset(rs, status=202, retries=3) - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] - assert_that(result_rs['ttl'], is_(rs['ttl'])) + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] + assert_that(result_rs["ttl"], is_(rs["ttl"])) finally: if rs: try: - client.delete_recordset(rs['zoneId'], rs['id'], status=(202, 404)) - client.wait_until_recordset_deleted(rs['zoneId'], rs['id']) + client.delete_recordset(rs["zoneId"], rs["id"], status=(202, 404)) + client.wait_until_recordset_deleted(rs["zoneId"], rs["id"]) finally: pass @@ -268,17 +268,17 @@ def test_update_recordset_no_authorization(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client rs = { - 'id': '12345', - 'zoneId': shared_zone_test_context.ok_zone['id'], - 'name': 'test_update_recordset_no_authorization', - 'type': 'A', - 'ttl': 100, - 'records': [ + "id": "12345", + "zoneId": shared_zone_test_context.ok_zone["id"], + "name": "test_update_recordset_no_authorization", + "type": "A", + "ttl": 100, + "records": [ { - 'address': '10.1.1.1' + "address": "10.1.1.1" }, { - 'address': '10.2.2.2' + "address": "10.2.2.2" } ] } @@ -295,70 +295,70 @@ def test_update_recordset_replace_2_records_with_1_different_record(shared_zone_ result_rs = None try: new_rs = { - 'zoneId': ok_zone['id'], - 'name': 'test_update_recordset_replace_2_records_with_1_different_record', - 'type': 'A', - 'ttl': 100, - 'records': [ + "zoneId": ok_zone["id"], + "name": "test_update_recordset_replace_2_records_with_1_different_record", + 
"type": "A", + "ttl": 100, + "records": [ { - 'address': '10.1.1.1' + "address": "10.1.1.1" }, { - 'address': '10.2.2.2' + "address": "10.2.2.2" } ] } result = client.create_recordset(new_rs, status=202) - print str(result) + print(str(result)) - assert_that(result['changeType'], is_('Create')) - assert_that(result['status'], is_('Pending')) - assert_that(result['created'], is_not(none())) - assert_that(result['userId'], is_not(none())) + assert_that(result["changeType"], is_("Create")) + assert_that(result["status"], is_("Pending")) + assert_that(result["created"], is_not(none())) + assert_that(result["userId"], is_not(none())) - result_rs = result['recordSet'] - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + result_rs = result["recordSet"] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] verify_recordset(result_rs, new_rs) - records = [x['address'] for x in result_rs['records']] + records = [x["address"] for x in result_rs["records"]] assert_that(records, has_length(2)) - assert_that('10.1.1.1', is_in(records)) - assert_that('10.2.2.2', is_in(records)) + assert_that("10.1.1.1", is_in(records)) + assert_that("10.2.2.2", is_in(records)) - result_rs['ttl'] = 200 + result_rs["ttl"] = 200 modified_records = [ { - 'address': '1.1.1.1' + "address": "1.1.1.1" } ] - result_rs['records'] = modified_records + result_rs["records"] = modified_records result = client.update_recordset(result_rs, status=202) - assert_that(result['status'], is_('Pending')) - result = client.wait_until_recordset_change_status(result, 'Complete') + assert_that(result["status"], is_("Pending")) + result = client.wait_until_recordset_change_status(result, "Complete") - assert_that(result['changeType'], is_('Update')) - assert_that(result['status'], is_('Complete')) - assert_that(result['created'], is_not(none())) - assert_that(result['userId'], is_not(none())) + assert_that(result["changeType"], is_("Update")) + 
assert_that(result["status"], is_("Complete")) + assert_that(result["created"], is_not(none())) + assert_that(result["userId"], is_not(none())) # make sure the update was applied - result_rs = result['recordSet'] - records = [x['address'] for x in result_rs['records']] + result_rs = result["recordSet"] + records = [x["address"] for x in result_rs["records"]] assert_that(records, has_length(1)) - assert_that(records[0], is_('1.1.1.1')) + assert_that(records[0], is_("1.1.1.1")) # verify that the record exists in the backend dns server - answers = dns_resolve(ok_zone, result_rs['name'], result_rs['type']) + answers = dns_resolve(ok_zone, result_rs["name"], result_rs["type"]) rdata_strings = rdata(answers) assert_that(rdata_strings, has_length(1)) - assert_that('1.1.1.1', is_in(rdata_strings)) + assert_that("1.1.1.1", is_in(rdata_strings)) finally: if result_rs: - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") def test_update_existing_record_set_add_record(shared_zone_test_context): @@ -370,76 +370,76 @@ def test_update_existing_record_set_add_record(shared_zone_test_context): result_rs = None try: new_rs = { - 'zoneId': ok_zone['id'], - 'name': 'test_update_existing_record_set_add_record', - 'type': 'A', - 'ttl': 100, - 'records': [ + "zoneId": ok_zone["id"], + "name": "test_update_existing_record_set_add_record", + "type": "A", + "ttl": 100, + "records": [ { - 'address': '10.2.2.2' + "address": "10.2.2.2" } ] } result = client.create_recordset(new_rs, status=202) - print str(result) + print(str(result)) - assert_that(result['changeType'], is_('Create')) - assert_that(result['status'], is_('Pending')) - assert_that(result['created'], is_not(none())) - assert_that(result['userId'], is_not(none())) 
+ assert_that(result["changeType"], is_("Create")) + assert_that(result["status"], is_("Pending")) + assert_that(result["created"], is_not(none())) + assert_that(result["userId"], is_not(none())) - result_rs = result['recordSet'] - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + result_rs = result["recordSet"] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] verify_recordset(result_rs, new_rs) - records = [x['address'] for x in result_rs['records']] + records = [x["address"] for x in result_rs["records"]] assert_that(records, has_length(1)) - assert_that(records[0], is_('10.2.2.2')) + assert_that(records[0], is_("10.2.2.2")) - answers = dns_resolve(ok_zone, result_rs['name'], result_rs['type']) + answers = dns_resolve(ok_zone, result_rs["name"], result_rs["type"]) rdata_strings = rdata(answers) - print "GOT ANSWERS BACK FOR INITIAL CREATE:" - print str(rdata_strings) + print("GOT ANSWERS BACK FOR INITIAL CREATE:") + print(str(rdata_strings)) # Update the record set, adding a new record to the existing one modified_records = [ { - 'address': '4.4.4.8' + "address": "4.4.4.8" }, { - 'address': '10.2.2.2' + "address": "10.2.2.2" } ] - result_rs['records'] = modified_records + result_rs["records"] = modified_records result = client.update_recordset(result_rs, status=202) - assert_that(result['status'], is_('Pending')) - result = client.wait_until_recordset_change_status(result, 'Complete') + assert_that(result["status"], is_("Pending")) + result = client.wait_until_recordset_change_status(result, "Complete") - assert_that(result['changeType'], is_('Update')) - assert_that(result['status'], is_('Complete')) - assert_that(result['created'], is_not(none())) - assert_that(result['userId'], is_not(none())) + assert_that(result["changeType"], is_("Update")) + assert_that(result["status"], is_("Complete")) + assert_that(result["created"], is_not(none())) + assert_that(result["userId"], 
is_not(none())) # make sure the update was applied - result_rs = result['recordSet'] - records = [x['address'] for x in result_rs['records']] + result_rs = result["recordSet"] + records = [x["address"] for x in result_rs["records"]] assert_that(records, has_length(2)) - assert_that('10.2.2.2', is_in(records)) - assert_that('4.4.4.8', is_in(records)) + assert_that("10.2.2.2", is_in(records)) + assert_that("4.4.4.8", is_in(records)) - answers = dns_resolve(ok_zone, result_rs['name'], result_rs['type']) + answers = dns_resolve(ok_zone, result_rs["name"], result_rs["type"]) rdata_strings = rdata(answers) - print "GOT BACK ANSWERS FOR UPDATE" - print str(rdata_strings) + print("GOT BACK ANSWERS FOR UPDATE") + print(str(rdata_strings)) assert_that(rdata_strings, has_length(2)) - assert_that('10.2.2.2', is_in(rdata_strings)) - assert_that('4.4.4.8', is_in(rdata_strings)) + assert_that("10.2.2.2", is_in(rdata_strings)) + assert_that("4.4.4.8", is_in(rdata_strings)) finally: if result_rs: - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") def test_update_existing_record_set_delete_record(shared_zone_test_context): @@ -451,74 +451,74 @@ def test_update_existing_record_set_delete_record(shared_zone_test_context): result_rs = None try: new_rs = { - 'zoneId': ok_zone['id'], - 'name': 'test_update_existing_record_set_delete_record', - 'type': 'A', - 'ttl': 100, - 'records': [ + "zoneId": ok_zone["id"], + "name": "test_update_existing_record_set_delete_record", + "type": "A", + "ttl": 100, + "records": [ { - 'address': '10.1.1.1' + "address": "10.1.1.1" }, { - 'address': '10.2.2.2' + "address": "10.2.2.2" }, { - 'address': '10.3.3.3' + "address": "10.3.3.3" }, { - 'address': '10.4.4.4' + "address": "10.4.4.4" 
} ] } result = client.create_recordset(new_rs, status=202) - assert_that(result['changeType'], is_('Create')) - assert_that(result['status'], is_('Pending')) - assert_that(result['created'], is_not(none())) - assert_that(result['userId'], is_not(none())) + assert_that(result["changeType"], is_("Create")) + assert_that(result["status"], is_("Pending")) + assert_that(result["created"], is_not(none())) + assert_that(result["userId"], is_not(none())) - result_rs = result['recordSet'] - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + result_rs = result["recordSet"] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] verify_recordset(result_rs, new_rs) - records = [x['address'] for x in result_rs['records']] + records = [x["address"] for x in result_rs["records"]] assert_that(records, has_length(4)) - assert_that(records[0], is_('10.1.1.1')) - assert_that(records[1], is_('10.2.2.2')) - assert_that(records[2], is_('10.3.3.3')) - assert_that(records[3], is_('10.4.4.4')) + assert_that(records[0], is_("10.1.1.1")) + assert_that(records[1], is_("10.2.2.2")) + assert_that(records[2], is_("10.3.3.3")) + assert_that(records[3], is_("10.4.4.4")) - answers = dns_resolve(ok_zone, result_rs['name'], result_rs['type']) + answers = dns_resolve(ok_zone, result_rs["name"], result_rs["type"]) rdata_strings = rdata(answers) assert_that(rdata_strings, has_length(4)) # Update the record set, delete three records and leave one modified_records = [ { - 'address': '10.2.2.2' + "address": "10.2.2.2" } ] - result_rs['records'] = modified_records + result_rs["records"] = modified_records result = client.update_recordset(result_rs, status=202) - result = client.wait_until_recordset_change_status(result, 'Complete') + result = client.wait_until_recordset_change_status(result, "Complete") # make sure the update was applied - result_rs = result['recordSet'] - records = [x['address'] for x in result_rs['records']] + 
result_rs = result["recordSet"] + records = [x["address"] for x in result_rs["records"]] assert_that(records, has_length(1)) - assert_that('10.2.2.2', is_in(records)) + assert_that("10.2.2.2", is_in(records)) # do a DNS query - answers = dns_resolve(ok_zone, result_rs['name'], result_rs['type']) + answers = dns_resolve(ok_zone, result_rs["name"], result_rs["type"]) rdata_strings = rdata(answers) assert_that(rdata_strings, has_length(1)) - assert_that('10.2.2.2', is_in(rdata_strings)) + assert_that("10.2.2.2", is_in(rdata_strings)) finally: if result_rs: - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") def test_update_ipv4_ptr_recordset_with_verify(shared_zone_test_context): @@ -530,51 +530,51 @@ def test_update_ipv4_ptr_recordset_with_verify(shared_zone_test_context): result_rs = None try: orig_rs = { - 'zoneId': reverse4_zone['id'], - 'name': '30.0', - 'type': 'PTR', - 'ttl': 100, - 'records': [ + "zoneId": reverse4_zone["id"], + "name": "30.0", + "type": "PTR", + "ttl": 100, + "records": [ { - 'ptrdname': 'ftp.vinyldns.' + "ptrdname": "ftp.vinyldns." } ] } result = client.create_recordset(orig_rs, status=202) - result_rs = result['recordSet'] - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] - print "\r\n\r\n!!!recordset is active! Updating..." + result_rs = result["recordSet"] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] + print("\r\n\r\n!!!recordset is active! Updating...") - new_ptr_target = 'www.vinyldns.' + new_ptr_target = "www.vinyldns." 
new_rs = result_rs - print new_rs - new_rs['records'][0]['ptrdname'] = new_ptr_target - print new_rs + print(new_rs) + new_rs["records"][0]["ptrdname"] = new_ptr_target + print(new_rs) result = client.update_recordset(new_rs, status=202) - result_rs = result['recordSet'] - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] - print "\r\n\r\n!!!updated recordset is active! Verifying..." + result_rs = result["recordSet"] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] + print("\r\n\r\n!!!updated recordset is active! Verifying...") verify_recordset(result_rs, new_rs) - print "\r\n\r\n!!!recordset verified..." + print("\r\n\r\n!!!recordset verified...") - print result_rs - records = result_rs['records'] - assert_that(records[0]['ptrdname'], is_(new_ptr_target)) + print(result_rs) + records = result_rs["records"] + assert_that(records[0]["ptrdname"], is_(new_ptr_target)) - print "\r\n\r\n!!!verifying recordset in dns backend" + print("\r\n\r\n!!!verifying recordset in dns backend") # verify that the record exists in the backend dns server - answers = dns_resolve(reverse4_zone, result_rs['name'], result_rs['type']) + answers = dns_resolve(reverse4_zone, result_rs["name"], result_rs["type"]) rdata_strings = rdata(answers) assert_that(rdata_strings, has_length(1)) assert_that(rdata_strings[0], is_(new_ptr_target)) finally: if result_rs: - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") def test_update_ipv6_ptr_recordset(shared_zone_test_context): @@ -586,49 +586,49 @@ def test_update_ipv6_ptr_recordset(shared_zone_test_context): result_rs = None try: orig_rs = { - 'zoneId': reverse6_zone['id'], - 'name': 
'0.6.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0', - 'type': 'PTR', - 'ttl': 100, - 'records': [ + "zoneId": reverse6_zone["id"], + "name": "0.6.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0", + "type": "PTR", + "ttl": 100, + "records": [ { - 'ptrdname': 'ftp.vinyldns.' + "ptrdname": "ftp.vinyldns." } ] } result = client.create_recordset(orig_rs, status=202) - result_rs = result['recordSet'] - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] - print "\r\n\r\n!!!recordset is active! Updating..." + result_rs = result["recordSet"] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] + print("\r\n\r\n!!!recordset is active! Updating...") - new_ptr_target = 'www.vinyldns.' + new_ptr_target = "www.vinyldns." new_rs = result_rs - print new_rs - new_rs['records'][0]['ptrdname'] = new_ptr_target - print new_rs + print(new_rs) + new_rs["records"][0]["ptrdname"] = new_ptr_target + print(new_rs) result = client.update_recordset(new_rs, status=202) - result_rs = result['recordSet'] - result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] - print "\r\n\r\n!!!updated recordset is active! Verifying..." + result_rs = result["recordSet"] + result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] + print("\r\n\r\n!!!updated recordset is active! Verifying...") verify_recordset(result_rs, new_rs) - print "\r\n\r\n!!!recordset verified..." 
+ print("\r\n\r\n!!!recordset verified...") - print result_rs - records = result_rs['records'] - assert_that(records[0]['ptrdname'], is_(new_ptr_target)) + print(result_rs) + records = result_rs["records"] + assert_that(records[0]["ptrdname"], is_(new_ptr_target)) - print "\r\n\r\n!!!verifying recordset in dns backend" - answers = dns_resolve(reverse6_zone, result_rs['name'], result_rs['type']) + print("\r\n\r\n!!!verifying recordset in dns backend") + answers = dns_resolve(reverse6_zone, result_rs["name"], result_rs["type"]) rdata_strings = rdata(answers) assert_that(rdata_strings, has_length(1)) assert_that(rdata_strings[0], is_(new_ptr_target)) finally: if result_rs: - delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") def test_update_recordset_zone_not_found(shared_zone_test_context): @@ -640,29 +640,29 @@ def test_update_recordset_zone_not_found(shared_zone_test_context): try: new_rs = { - 'zoneId': shared_zone_test_context.ok_zone['id'], - 'name': 'test_update_recordset_zone_not_found', - 'type': 'A', - 'ttl': 100, - 'records': [ + "zoneId": shared_zone_test_context.ok_zone["id"], + "name": "test_update_recordset_zone_not_found", + "type": "A", + "ttl": 100, + "records": [ { - 'address': '10.1.1.1' + "address": "10.1.1.1" }, { - 'address': '10.2.2.2' + "address": "10.2.2.2" } ] } result = client.create_recordset(new_rs, status=202) - new_rs = result['recordSet'] - client.wait_until_recordset_exists(new_rs['zoneId'], new_rs['id']) - new_rs['zoneId'] = '1234' + new_rs = result["recordSet"] + client.wait_until_recordset_exists(new_rs["zoneId"], new_rs["id"]) + new_rs["zoneId"] = "1234" client.update_recordset(new_rs, status=404) finally: if new_rs: try: - 
-        client.delete_recordset(shared_zone_test_context.ok_zone['id'], new_rs['id'], status=(202, 404))
-        client.wait_until_recordset_deleted(shared_zone_test_context.ok_zone['id'], new_rs['id'])
+        client.delete_recordset(shared_zone_test_context.ok_zone["id"], new_rs["id"], status=(202, 404))
+        client.wait_until_recordset_deleted(shared_zone_test_context.ok_zone["id"], new_rs["id"])
     finally:
         pass
@@ -673,17 +673,17 @@ def test_update_recordset_not_found(shared_zone_test_context):
     """
     client = shared_zone_test_context.ok_vinyldns_client
     new_rs = {
-        'id': 'nothere',
-        'zoneId': shared_zone_test_context.ok_zone['id'],
-        'name': 'test_update_recordset_not_found',
-        'type': 'A',
-        'ttl': 100,
-        'records': [
+        "id": "nothere",
+        "zoneId": shared_zone_test_context.ok_zone["id"],
+        "name": "test_update_recordset_not_found",
+        "type": "A",
+        "ttl": 100,
+        "records": [
             {
-                'address': '10.1.1.1'
+                "address": "10.1.1.1"
             },
             {
-                'address': '10.2.2.2'
+                "address": "10.2.2.2"
             }
         ]
     }
@@ -699,57 +699,57 @@ def test_at_update_recordset(shared_zone_test_context):
     result_rs = None
     try:
         new_rs = {
-            'zoneId': ok_zone['id'],
-            'name': '@',
-            'type': 'TXT',
-            'ttl': 100,
-            'records': [
+            "zoneId": ok_zone["id"],
+            "name": "@",
+            "type": "TXT",
+            "ttl": 100,
+            "records": [
                 {
-                    'text': 'someText'
+                    "text": "someText"
                 }
             ]
         }
         result = client.create_recordset(new_rs, status=202)
-        print str(result)
+        print(str(result))

-        assert_that(result['changeType'], is_('Create'))
-        assert_that(result['status'], is_('Pending'))
-        assert_that(result['created'], is_not(none()))
-        assert_that(result['userId'], is_not(none()))
+        assert_that(result["changeType"], is_("Create"))
+        assert_that(result["status"], is_("Pending"))
+        assert_that(result["created"], is_not(none()))
+        assert_that(result["userId"], is_not(none()))

-        result_rs = result['recordSet']
-        result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet']
+        result_rs = result["recordSet"]
+        result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]

         expected_rs = new_rs
-        expected_rs['name'] = ok_zone['name']
+        expected_rs["name"] = ok_zone["name"]
         verify_recordset(result_rs, expected_rs)

-        records = result_rs['records']
+        records = result_rs["records"]
         assert_that(records, has_length(1))
-        assert_that(records[0]['text'], is_('someText'))
+        assert_that(records[0]["text"], is_("someText"))

-        result_rs['ttl'] = 200
-        result_rs['records'][0]['text'] = 'differentText'
+        result_rs["ttl"] = 200
+        result_rs["records"][0]["text"] = "differentText"
         result = client.update_recordset(result_rs, status=202)
-        assert_that(result['status'], is_('Pending'))
-        result = client.wait_until_recordset_change_status(result, 'Complete')
+        assert_that(result["status"], is_("Pending"))
+        result = client.wait_until_recordset_change_status(result, "Complete")

-        assert_that(result['changeType'], is_('Update'))
-        assert_that(result['status'], is_('Complete'))
-        assert_that(result['created'], is_not(none()))
-        assert_that(result['userId'], is_not(none()))
+        assert_that(result["changeType"], is_("Update"))
+        assert_that(result["status"], is_("Complete"))
+        assert_that(result["created"], is_not(none()))
+        assert_that(result["userId"], is_not(none()))

         # make sure the update was applied
-        result_rs = result['recordSet']
-        records = result_rs['records']
+        result_rs = result["recordSet"]
+        records = result_rs["records"]
         assert_that(records, has_length(1))
-        assert_that(records[0]['text'], is_('differentText'))
+        assert_that(records[0]["text"], is_("differentText"))
     finally:
         if result_rs:
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")


 @pytest.mark.serial
@@ -761,12 +761,12 @@ def test_user_can_update_record_via_user_acl_rule(shared_zone_test_context):
     ok_zone = shared_zone_test_context.ok_zone
     client = shared_zone_test_context.ok_vinyldns_client
     try:
-        acl_rule = generate_acl_rule('Write', userId='dummy')
+        acl_rule = generate_acl_rule("Write", userId="dummy")

         result_rs = seed_text_recordset(client, "test_user_can_update_record_via_user_acl_rule", ok_zone)
-        expected_ttl = result_rs['ttl'] + 1000
-        result_rs['ttl'] = result_rs['ttl'] + 1000
+        expected_ttl = result_rs["ttl"] + 1000
+        result_rs["ttl"] = result_rs["ttl"] + 1000

         # Dummy user cannot update record in zone
         shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403, retries=3)
@@ -776,14 +776,14 @@ def test_user_can_update_record_via_user_acl_rule(shared_zone_test_context):

         # Dummy user can update record
         result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202)
-        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, 'Complete')[
-            'recordSet']
-        assert_that(result_rs['ttl'], is_(expected_ttl))
+        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, "Complete")[
+            "recordSet"]
+        assert_that(result_rs["ttl"], is_(expected_ttl))
     finally:
         clear_ok_acl_rules(shared_zone_test_context)
         if result_rs:
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")


 @pytest.mark.serial
@@ -794,12 +794,12 @@ def test_user_can_update_record_via_group_acl_rule(shared_zone_test_context):
     result_rs = None
     ok_zone = shared_zone_test_context.ok_zone
     client = shared_zone_test_context.ok_vinyldns_client
-    acl_rule = generate_acl_rule('Write', groupId=shared_zone_test_context.dummy_group['id'])
+    acl_rule = generate_acl_rule("Write", groupId=shared_zone_test_context.dummy_group["id"])
     try:
         result_rs = seed_text_recordset(client, "test_user_can_update_record_via_group_acl_rule", ok_zone)
-        expected_ttl = result_rs['ttl'] + 1000
-        result_rs['ttl'] = result_rs['ttl'] + 1000
+        expected_ttl = result_rs["ttl"] + 1000
+        result_rs["ttl"] = result_rs["ttl"] + 1000

         # Dummy user cannot update record in zone
         shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403)
@@ -809,14 +809,14 @@ def test_user_can_update_record_via_group_acl_rule(shared_zone_test_context):

         # Dummy user can update record
         result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202)
-        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, 'Complete')[
-            'recordSet']
-        assert_that(result_rs['ttl'], is_(expected_ttl))
+        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, "Complete")[
+            "recordSet"]
+        assert_that(result_rs["ttl"], is_(expected_ttl))
     finally:
         clear_ok_acl_rules(shared_zone_test_context)
         if result_rs:
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")


 @pytest.mark.serial
@@ -828,27 +828,27 @@ def test_user_rule_priority_over_group_acl_rule(shared_zone_test_context):
     ok_zone = shared_zone_test_context.ok_zone
     client = shared_zone_test_context.ok_vinyldns_client
     try:
-        group_acl_rule = generate_acl_rule('Read', groupId=shared_zone_test_context.dummy_group['id'])
-        user_acl_rule = generate_acl_rule('Write', userId='dummy')
+        group_acl_rule = generate_acl_rule("Read", groupId=shared_zone_test_context.dummy_group["id"])
+        user_acl_rule = generate_acl_rule("Write", userId="dummy")

         result_rs = seed_text_recordset(client, "test_user_rule_priority_over_group_acl_rule", ok_zone)
-        expected_ttl = result_rs['ttl'] + 1000
-        result_rs['ttl'] = result_rs['ttl'] + 1000
+        expected_ttl = result_rs["ttl"] + 1000
+        result_rs["ttl"] = result_rs["ttl"] + 1000

         # add rules
         add_ok_acl_rules(shared_zone_test_context, [group_acl_rule, user_acl_rule])

         # Dummy user can update record
         result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202)
-        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, 'Complete')[
-            'recordSet']
-        assert_that(result_rs['ttl'], is_(expected_ttl))
+        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, "Complete")[
+            "recordSet"]
+        assert_that(result_rs["ttl"], is_(expected_ttl))
     finally:
         clear_ok_acl_rules(shared_zone_test_context)
         if result_rs:
-            client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404))
-            client.wait_until_recordset_deleted(result_rs['zoneId'], result_rs['id'])
+            client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=(202, 404))
+            client.wait_until_recordset_deleted(result_rs["zoneId"], result_rs["id"])


 @pytest.mark.serial
@@ -860,11 +860,11 @@ def test_more_restrictive_acl_rule_priority(shared_zone_test_context):
     client = shared_zone_test_context.ok_vinyldns_client
     result_rs = None
     try:
-        read_rule = generate_acl_rule('Read', userId='dummy')
-        write_rule = generate_acl_rule('Write', userId='dummy')
+        read_rule = generate_acl_rule("Read", userId="dummy")
+        write_rule = generate_acl_rule("Write", userId="dummy")

         result_rs = seed_text_recordset(client, "test_more_restrictive_acl_rule_priority", ok_zone)
-        result_rs['ttl'] = result_rs['ttl'] + 1000
+        result_rs["ttl"] = result_rs["ttl"] + 1000

         # add rules
         add_ok_acl_rules(shared_zone_test_context, [read_rule, write_rule])
@@ -874,8 +874,8 @@ def test_more_restrictive_acl_rule_priority(shared_zone_test_context):
     finally:
         clear_ok_acl_rules(shared_zone_test_context)
         if result_rs:
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")


 @pytest.mark.serial
@@ -887,14 +887,14 @@ def test_acl_rule_with_record_type_success(shared_zone_test_context):
     ok_zone = shared_zone_test_context.ok_zone
     client = shared_zone_test_context.ok_vinyldns_client
     try:
-        acl_rule = generate_acl_rule('Write', userId='dummy', recordTypes=['TXT'])
+        acl_rule = generate_acl_rule("Write", userId="dummy", recordTypes=["TXT"])

         result_rs = seed_text_recordset(client, "test_acl_rule_with_record_type_success", ok_zone)
-        expected_ttl = result_rs['ttl'] + 1000
-        result_rs['ttl'] = result_rs['ttl'] + 1000
+        expected_ttl = result_rs["ttl"] + 1000
+        result_rs["ttl"] = result_rs["ttl"] + 1000

-        z = client.get_zone(ok_zone['id'])
+        z = client.get_zone(ok_zone["id"])

         # Dummy user cannot update record in zone
         shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403, retries=3)
@@ -904,14 +904,14 @@ def test_acl_rule_with_record_type_success(shared_zone_test_context):

         # Dummy user can update record
         result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202)
-        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, 'Complete')[
-            'recordSet']
-        assert_that(result_rs['ttl'], is_(expected_ttl))
+        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, "Complete")[
+            "recordSet"]
+        assert_that(result_rs["ttl"], is_(expected_ttl))
     finally:
         clear_ok_acl_rules(shared_zone_test_context)
         if result_rs:
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")


 @pytest.mark.serial
@@ -923,12 +923,12 @@ def test_acl_rule_with_cidr_ip4_success(shared_zone_test_context):
     ip4_zone = shared_zone_test_context.ip4_reverse_zone
     client = shared_zone_test_context.ok_vinyldns_client
     try:
-        acl_rule = generate_acl_rule('Write', userId='dummy', recordTypes=['PTR'], recordMask="10.10.0.0/32")
+        acl_rule = generate_acl_rule("Write", userId="dummy", recordTypes=["PTR"], recordMask="10.10.0.0/32")

         result_rs = seed_ptr_recordset(client, "0.0", ip4_zone)
-        expected_ttl = result_rs['ttl'] + 1000
-        result_rs['ttl'] = result_rs['ttl'] + 1000
+        expected_ttl = result_rs["ttl"] + 1000
+        result_rs["ttl"] = result_rs["ttl"] + 1000

         # Dummy user cannot update record in zone
         shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403, retries=3)
@@ -938,14 +938,14 @@ def test_acl_rule_with_cidr_ip4_success(shared_zone_test_context):

         # Dummy user can update record
         result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202)
-        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, 'Complete')[
-            'recordSet']
-        assert_that(result_rs['ttl'], is_(expected_ttl))
+        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, "Complete")[
+            "recordSet"]
+        assert_that(result_rs["ttl"], is_(expected_ttl))
     finally:
         clear_ip4_acl_rules(shared_zone_test_context)
         if result_rs:
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")


 @pytest.mark.serial
@@ -957,7 +957,7 @@ def test_acl_rule_with_cidr_ip4_failure(shared_zone_test_context):
     ip4_zone = shared_zone_test_context.ip4_reverse_zone
     client = shared_zone_test_context.ok_vinyldns_client
     try:
-        acl_rule = generate_acl_rule('Write', userId='dummy', recordTypes=['PTR'], recordMask="172.30.0.0/32")
+        acl_rule = generate_acl_rule("Write", userId="dummy", recordTypes=["PTR"], recordMask="172.30.0.0/32")

         result_rs = seed_ptr_recordset(client, "0.1", ip4_zone)
@@ -972,8 +972,8 @@ def test_acl_rule_with_cidr_ip4_failure(shared_zone_test_context):
     finally:
         clear_ip4_acl_rules(shared_zone_test_context)
         if result_rs:
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")


 @pytest.mark.serial
@@ -985,13 +985,13 @@ def test_acl_rule_with_cidr_ip6_success(shared_zone_test_context):
     ip6_zone = shared_zone_test_context.ip6_reverse_zone
     client = shared_zone_test_context.ok_vinyldns_client
     try:
-        acl_rule = generate_acl_rule('Write', userId='dummy', recordTypes=['PTR'],
+        acl_rule = generate_acl_rule("Write", userId="dummy", recordTypes=["PTR"],
                                      recordMask="fd69:27cc:fe91:0000:0000:0000:0000:0000/127")

         result_rs = seed_ptr_recordset(client, "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0", ip6_zone)
-        expected_ttl = result_rs['ttl'] + 1000
-        result_rs['ttl'] = result_rs['ttl'] + 1000
+        expected_ttl = result_rs["ttl"] + 1000
+        result_rs["ttl"] = result_rs["ttl"] + 1000

         # Dummy user cannot update record in zone
         shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403, retries=3)
@@ -1001,14 +1001,14 @@ def test_acl_rule_with_cidr_ip6_success(shared_zone_test_context):

         # Dummy user can update record
         result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202)
-        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, 'Complete')[
-            'recordSet']
-        assert_that(result_rs['ttl'], is_(expected_ttl))
+        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, "Complete")[
+            "recordSet"]
+        assert_that(result_rs["ttl"], is_(expected_ttl))
     finally:
         clear_ip6_acl_rules(shared_zone_test_context)
         if result_rs:
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")


 @pytest.mark.serial
@@ -1020,7 +1020,7 @@ def test_acl_rule_with_cidr_ip6_failure(shared_zone_test_context):
     ip6_zone = shared_zone_test_context.ip6_reverse_zone
     client = shared_zone_test_context.ok_vinyldns_client
     try:
-        acl_rule = generate_acl_rule('Write', userId='dummy', recordTypes=['PTR'],
+        acl_rule = generate_acl_rule("Write", userId="dummy", recordTypes=["PTR"],
                                      recordMask="fd69:27cc:fe91:0000:0000:0000:0000:0000/127")

         result_rs = seed_ptr_recordset(client, "0.0.0.0.0.0.0.0.0.0.0.0.0.0.5.0.0.0.0.0", ip6_zone)
@@ -1036,8 +1036,8 @@ def test_acl_rule_with_cidr_ip6_failure(shared_zone_test_context):
     finally:
         clear_ip6_acl_rules(shared_zone_test_context)
         if result_rs:
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")


 @pytest.mark.serial
@@ -1049,11 +1049,11 @@ def test_more_restrictive_cidr_ip4_rule_priority(shared_zone_test_context):
     client = shared_zone_test_context.ok_vinyldns_client
     result_rs = None
     try:
-        slash16_rule = generate_acl_rule('Read', userId='dummy', recordTypes=['PTR'], recordMask="10.10.0.0/16")
-        slash32_rule = generate_acl_rule('Write', userId='dummy', recordTypes=['PTR'], recordMask="10.10.0.0/32")
+        slash16_rule = generate_acl_rule("Read", userId="dummy", recordTypes=["PTR"], recordMask="10.10.0.0/16")
+        slash32_rule = generate_acl_rule("Write", userId="dummy", recordTypes=["PTR"], recordMask="10.10.0.0/32")

         result_rs = seed_ptr_recordset(client, "0.0", ip4_zone)
-        result_rs['ttl'] = result_rs['ttl'] + 1000
+        result_rs["ttl"] = result_rs["ttl"] + 1000

         # add rules
         add_ip4_acl_rules(shared_zone_test_context, [slash16_rule, slash32_rule])
@@ -1063,8 +1063,8 @@ def test_more_restrictive_cidr_ip4_rule_priority(shared_zone_test_context):
     finally:
         clear_ip4_acl_rules(shared_zone_test_context)
         if result_rs:
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")


 @pytest.mark.serial
@@ -1076,13 +1076,13 @@ def test_more_restrictive_cidr_ip6_rule_priority(shared_zone_test_context):
     client = shared_zone_test_context.ok_vinyldns_client
     result_rs = None
     try:
-        slash50_rule = generate_acl_rule('Read', userId='dummy', recordTypes=['PTR'],
+        slash50_rule = generate_acl_rule("Read", userId="dummy", recordTypes=["PTR"],
                                          recordMask="fd69:27cc:fe91:0000:0000:0000:0000:0000/50")
-        slash100_rule = generate_acl_rule('Write', userId='dummy', recordTypes=['PTR'],
+        slash100_rule = generate_acl_rule("Write", userId="dummy", recordTypes=["PTR"],
                                           recordMask="fd69:27cc:fe91:0000:0000:0000:0000:0000/100")

         result_rs = seed_ptr_recordset(client, "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0", ip6_zone)
-        result_rs['ttl'] = result_rs['ttl'] + 1000
+        result_rs["ttl"] = result_rs["ttl"] + 1000

         # add rules
         add_ip6_acl_rules(shared_zone_test_context, [slash50_rule, slash100_rule])
@@ -1092,8 +1092,8 @@ def test_more_restrictive_cidr_ip6_rule_priority(shared_zone_test_context):
     finally:
         clear_ip6_acl_rules(shared_zone_test_context)
         if result_rs:
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")


 @pytest.mark.serial
@@ -1109,18 +1109,18 @@ def test_mix_of_cidr_ip6_and_acl_rules_priority(shared_zone_test_context):
     result_rs_AAAA = None

     try:
-        mixed_type_rule_no_mask = generate_acl_rule('Read', userId='dummy', recordTypes=['PTR', 'AAAA', 'A'])
-        ptr_rule_with_mask = generate_acl_rule('Write', userId='dummy', recordTypes=['PTR'],
+        mixed_type_rule_no_mask = generate_acl_rule("Read", userId="dummy", recordTypes=["PTR", "AAAA", "A"])
+        ptr_rule_with_mask = generate_acl_rule("Write", userId="dummy", recordTypes=["PTR"],
                                                recordMask="fd69:27cc:fe91:0000:0000:0000:0000:0000/50")

         result_rs_PTR = seed_ptr_recordset(client, "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0", ip6_zone)
-        result_rs_PTR['ttl'] = result_rs_PTR['ttl'] + 1000
+        result_rs_PTR["ttl"] = result_rs_PTR["ttl"] + 1000

         result_rs_A = seed_text_recordset(client, "test_more_restrictive_acl_rule_priority_1", ok_zone)
-        result_rs_A['ttl'] = result_rs_A['ttl'] + 1000
+        result_rs_A["ttl"] = result_rs_A["ttl"] + 1000

         result_rs_AAAA = seed_text_recordset(client, "test_more_restrictive_acl_rule_priority_2", ok_zone)
-        result_rs_AAAA['ttl'] = result_rs_AAAA['ttl'] + 1000
+        result_rs_AAAA["ttl"] = result_rs_AAAA["ttl"] + 1000

         # add rules
         add_ip6_acl_rules(shared_zone_test_context, [mixed_type_rule_no_mask, ptr_rule_with_mask])
@@ -1134,14 +1134,14 @@ def test_mix_of_cidr_ip6_and_acl_rules_priority(shared_zone_test_context):
         clear_ip6_acl_rules(shared_zone_test_context)
         clear_ok_acl_rules(shared_zone_test_context)
         if result_rs_A:
-            delete_result = client.delete_recordset(result_rs_A['zoneId'], result_rs_A['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs_A["zoneId"], result_rs_A["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")
         if result_rs_AAAA:
-            delete_result = client.delete_recordset(result_rs_AAAA['zoneId'], result_rs_AAAA['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs_AAAA["zoneId"], result_rs_AAAA["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")
         if result_rs_PTR:
-            delete_result = client.delete_recordset(result_rs_PTR['zoneId'], result_rs_PTR['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs_PTR["zoneId"], result_rs_PTR["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")


 @pytest.mark.serial
@@ -1153,10 +1153,10 @@ def test_acl_rule_with_wrong_record_type(shared_zone_test_context):
     ok_zone = shared_zone_test_context.ok_zone
     client = shared_zone_test_context.ok_vinyldns_client
     try:
-        acl_rule = generate_acl_rule('Write', userId='dummy', recordTypes=['CNAME'])
+        acl_rule = generate_acl_rule("Write", userId="dummy", recordTypes=["CNAME"])

         result_rs = seed_text_recordset(client, "test_acl_rule_with_wrong_record_type", ok_zone)
-        result_rs['ttl'] = result_rs['ttl'] + 1000
+        result_rs["ttl"] = result_rs["ttl"] + 1000

         # Dummy user cannot update record in zone
         shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403, retries=3)
@@ -1169,8 +1169,8 @@ def test_acl_rule_with_wrong_record_type(shared_zone_test_context):
     finally:
         clear_ok_acl_rules(shared_zone_test_context)
         if result_rs:
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")


 @pytest.mark.serial
@@ -1183,11 +1183,11 @@ def test_empty_acl_record_type_applies_to_all(shared_zone_test_context):
     ok_zone = shared_zone_test_context.ok_zone
     client = shared_zone_test_context.ok_vinyldns_client
     try:
-        acl_rule = generate_acl_rule('Write', userId='dummy', recordTypes=[])
+        acl_rule = generate_acl_rule("Write", userId="dummy", recordTypes=[])

         result_rs = seed_text_recordset(client, "test_empty_acl_record_type_applies_to_all", ok_zone)
-        expected_ttl = result_rs['ttl'] + 1000
-        result_rs['ttl'] = expected_ttl
+        expected_ttl = result_rs["ttl"] + 1000
+        result_rs["ttl"] = expected_ttl

         # Dummy user cannot update record in zone
         shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403, retries=3)
@@ -1197,14 +1197,14 @@ def test_empty_acl_record_type_applies_to_all(shared_zone_test_context):

         # Dummy user can update record
         result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202)
-        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, 'Complete')[
-            'recordSet']
-        assert_that(result_rs['ttl'], is_(expected_ttl))
+        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, "Complete")[
+            "recordSet"]
+        assert_that(result_rs["ttl"], is_(expected_ttl))
     finally:
         clear_ok_acl_rules(shared_zone_test_context)
         if result_rs:
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")


 @pytest.mark.serial
@@ -1217,31 +1217,31 @@ def test_acl_rule_with_fewer_record_types_prioritized(shared_zone_test_context):
     ok_zone = shared_zone_test_context.ok_zone
     client = shared_zone_test_context.ok_vinyldns_client
     try:
-        acl_rule_base = generate_acl_rule('Write', userId='dummy')
-        acl_rule1 = generate_acl_rule('Write', userId='dummy', recordTypes=['TXT', 'CNAME'])
-        acl_rule2 = generate_acl_rule('Read', userId='dummy', recordTypes=['TXT'])
+        acl_rule_base = generate_acl_rule("Write", userId="dummy")
+        acl_rule1 = generate_acl_rule("Write", userId="dummy", recordTypes=["TXT", "CNAME"])
+        acl_rule2 = generate_acl_rule("Read", userId="dummy", recordTypes=["TXT"])

         result_rs = seed_text_recordset(client, "test_acl_rule_with_fewer_record_types_prioritized", ok_zone)
-        result_rs['ttl'] = result_rs['ttl'] + 1000
+        result_rs["ttl"] = result_rs["ttl"] + 1000

         add_ok_acl_rules(shared_zone_test_context, [acl_rule_base])

         # Dummy user can update record in zone with base rule
         result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202)
-        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, 'Complete')[
-            'recordSet']
+        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, "Complete")[
+            "recordSet"]

         # add rule
         add_ok_acl_rules(shared_zone_test_context, [acl_rule1, acl_rule2])

         # Dummy user cannot update record
-        result_rs['ttl'] = result_rs['ttl'] + 1000
+        result_rs["ttl"] = result_rs["ttl"] + 1000
         shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403)
     finally:
         clear_ok_acl_rules(shared_zone_test_context)
         if result_rs:
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")


 @pytest.mark.serial
@@ -1253,31 +1253,31 @@ def test_acl_rule_user_over_record_type_priority(shared_zone_test_context):
     ok_zone = shared_zone_test_context.ok_zone
     client = shared_zone_test_context.ok_vinyldns_client
     try:
-        acl_rule_base = generate_acl_rule('Write', userId='dummy')
-        acl_rule1 = generate_acl_rule('Write', groupId=shared_zone_test_context.dummy_group['id'], recordTypes=['TXT'])
-        acl_rule2 = generate_acl_rule('Read', userId='dummy', recordTypes=['TXT', 'CNAME'])
+        acl_rule_base = generate_acl_rule("Write", userId="dummy")
+        acl_rule1 = generate_acl_rule("Write", groupId=shared_zone_test_context.dummy_group["id"], recordTypes=["TXT"])
+        acl_rule2 = generate_acl_rule("Read", userId="dummy", recordTypes=["TXT", "CNAME"])

         result_rs = seed_text_recordset(client, "test_acl_rule_user_over_record_type_priority", ok_zone)
-        result_rs['ttl'] = result_rs['ttl'] + 1000
+        result_rs["ttl"] = result_rs["ttl"] + 1000

         add_ok_acl_rules(shared_zone_test_context, [acl_rule_base])

         # Dummy user can update record in zone with base rule
         result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202)
-        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, 'Complete')[
-            'recordSet']
+        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, "Complete")[
+            "recordSet"]

         # add rule
         add_ok_acl_rules(shared_zone_test_context, [acl_rule1, acl_rule2])

         # Dummy user cannot update record
-        result_rs['ttl'] = result_rs['ttl'] + 1000
+        result_rs["ttl"] = result_rs["ttl"] + 1000
         shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403)
     finally:
         clear_ok_acl_rules(shared_zone_test_context)
         if result_rs:
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")


 @pytest.mark.serial
@@ -1290,11 +1290,11 @@ def test_acl_rule_with_record_mask_success(shared_zone_test_context):
     ok_zone = shared_zone_test_context.ok_zone
     client = shared_zone_test_context.ok_vinyldns_client
     try:
-        acl_rule = generate_acl_rule('Write', groupId=shared_zone_test_context.dummy_group['id'], recordMask='test.*')
+        acl_rule = generate_acl_rule("Write", groupId=shared_zone_test_context.dummy_group["id"], recordMask="test.*")

         result_rs = seed_text_recordset(client, "test_acl_rule_with_record_mask_success", ok_zone)
-        expected_ttl = result_rs['ttl'] + 1000
-        result_rs['ttl'] = expected_ttl
+        expected_ttl = result_rs["ttl"] + 1000
+        result_rs["ttl"] = expected_ttl

         # Dummy user cannot update record in zone
         shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403)
@@ -1304,14 +1304,14 @@ def test_acl_rule_with_record_mask_success(shared_zone_test_context):

         # Dummy user can update record
         result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202)
-        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, 'Complete')[
-            'recordSet']
-        assert_that(result_rs['ttl'], is_(expected_ttl))
+        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, "Complete")[
+            "recordSet"]
+        assert_that(result_rs["ttl"], is_(expected_ttl))
     finally:
         clear_ok_acl_rules(shared_zone_test_context)
         if result_rs:
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")


 @pytest.mark.serial
@@ -1324,10 +1324,10 @@ def test_acl_rule_with_record_mask_failure(shared_zone_test_context):
     ok_zone = shared_zone_test_context.ok_zone
     client = shared_zone_test_context.ok_vinyldns_client
     try:
-        acl_rule = generate_acl_rule('Write', groupId=shared_zone_test_context.dummy_group['id'], recordMask='bad.*')
+        acl_rule = generate_acl_rule("Write", groupId=shared_zone_test_context.dummy_group["id"], recordMask="bad.*")

         result_rs = seed_text_recordset(client, "test_acl_rule_with_record_mask_failure", ok_zone)
-        result_rs['ttl'] = result_rs['ttl'] + 1000
+        result_rs["ttl"] = result_rs["ttl"] + 1000

         # add rule
         add_ok_acl_rules(shared_zone_test_context, [acl_rule])
@@ -1337,8 +1337,8 @@ def test_acl_rule_with_record_mask_failure(shared_zone_test_context):
     finally:
         clear_ok_acl_rules(shared_zone_test_context)
         if result_rs:
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")


 @pytest.mark.serial
@@ -1351,31 +1351,31 @@ def test_acl_rule_with_defined_mask_prioritized(shared_zone_test_context):
     ok_zone = shared_zone_test_context.ok_zone
     client = shared_zone_test_context.ok_vinyldns_client
     try:
-        acl_rule_base = generate_acl_rule('Write', userId='dummy')
-        acl_rule1 = generate_acl_rule('Write', userId='dummy', recordMask='.*')
-        acl_rule2 = generate_acl_rule('Read', userId='dummy', recordMask='test.*')
+        acl_rule_base = generate_acl_rule("Write", userId="dummy")
+        acl_rule1 = generate_acl_rule("Write", userId="dummy", recordMask=".*")
+        acl_rule2 = generate_acl_rule("Read", userId="dummy", recordMask="test.*")

         result_rs = seed_text_recordset(client, "test_acl_rule_with_defined_mask_prioritized", ok_zone)
-        result_rs['ttl'] = result_rs['ttl'] + 1000
+        result_rs["ttl"] = result_rs["ttl"] + 1000

         add_ok_acl_rules(shared_zone_test_context, [acl_rule_base])

         # Dummy user can update record in zone with base rule
         result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202)
-        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, 'Complete')[
-            'recordSet']
+        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, "Complete")[
+            "recordSet"]

         # add rule
         add_ok_acl_rules(shared_zone_test_context, [acl_rule1, acl_rule2])

         # Dummy user cannot update record
-        result_rs['ttl'] = result_rs['ttl'] + 1000
+        result_rs["ttl"] = result_rs["ttl"] + 1000
         shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403)
     finally:
         clear_ok_acl_rules(shared_zone_test_context)
         if result_rs:
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")


 @pytest.mark.serial
@@ -1388,31 +1388,31 @@ def test_user_rule_over_mask_prioritized(shared_zone_test_context):
     ok_zone = shared_zone_test_context.ok_zone
     client = shared_zone_test_context.ok_vinyldns_client
     try:
-        acl_rule_base = generate_acl_rule('Write', userId='dummy')
-        acl_rule1 = generate_acl_rule('Write', groupId=shared_zone_test_context.dummy_group['id'], recordMask='test.*')
-        acl_rule2 = generate_acl_rule('Read', userId='dummy', recordMask='.*')
+        acl_rule_base = generate_acl_rule("Write", userId="dummy")
+        acl_rule1 = generate_acl_rule("Write", groupId=shared_zone_test_context.dummy_group["id"], recordMask="test.*")
+        acl_rule2 = generate_acl_rule("Read", userId="dummy", recordMask=".*")

         result_rs = seed_text_recordset(client, "test_user_rule_over_mask_prioritized", ok_zone)
-        result_rs['ttl'] = result_rs['ttl'] + 1000
+        result_rs["ttl"] = result_rs["ttl"] + 1000

         add_ok_acl_rules(shared_zone_test_context, [acl_rule_base])

         # Dummy user can update record in zone with base rule
         result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202)
-        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, 'Complete')[
-            'recordSet']
+        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, "Complete")[
+            "recordSet"]

         # add rule
         add_ok_acl_rules(shared_zone_test_context, [acl_rule1, acl_rule2])

         # Dummy user cannot update record
-        result_rs['ttl'] = result_rs['ttl'] + 1000
+        result_rs["ttl"] = result_rs["ttl"] + 1000
         shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403)
     finally:
         clear_ok_acl_rules(shared_zone_test_context)
         if result_rs:
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")


 def test_ns_update_passes(shared_zone_test_context):
@@ -1425,29 +1425,29 @@ def test_ns_update_passes(shared_zone_test_context):

     try:
         new_rs = {
-            'zoneId': zone['id'],
-            'name': 'someNS',
-            'type': 'NS',
-            'ttl': 38400,
-            'records': [
+            "zoneId": zone["id"],
+            "name": "someNS",
+            "type": "NS",
+            "ttl": 38400,
+            "records": [
                 {
-                    'nsdname': 'ns1.parent.com.'
+                    "nsdname": "ns1.parent.com."
                 }
             ]
         }

         result = client.create_recordset(new_rs, status=202)
-        ns_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet']
+        ns_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]

         changed_rs = ns_rs
-        changed_rs['ttl'] = changed_rs['ttl'] + 100
+        changed_rs["ttl"] = changed_rs["ttl"] + 100

         change_result = client.update_recordset(changed_rs, status=202)
-        client.wait_until_recordset_change_status(change_result, 'Complete')
+        client.wait_until_recordset_change_status(change_result, "Complete")
     finally:
         if ns_rs:
-            client.delete_recordset(ns_rs['zoneId'], ns_rs['id'], status=(202, 404))
-            client.wait_until_recordset_deleted(ns_rs['zoneId'], ns_rs['id'])
+            client.delete_recordset(ns_rs["zoneId"], ns_rs["id"], status=(202, 404))
+            client.wait_until_recordset_deleted(ns_rs["zoneId"], ns_rs["id"])


 def test_ns_update_for_unapproved_server_fails(shared_zone_test_context):
@@ -1460,36 +1460,36 @@ def test_ns_update_for_unapproved_server_fails(shared_zone_test_context):

     try:
         new_rs = {
-            'zoneId': zone['id'],
-            'name': 'badNSupdate',
-            'type': 'NS',
-            'ttl': 38400,
-            'records': [
+            "zoneId": zone["id"],
+            "name": "badNSupdate",
+            "type": "NS",
+            "ttl": 38400,
+            "records": [
                 {
-                    'nsdname': 'ns1.parent.com.'
+                    "nsdname": "ns1.parent.com."
                 }
             ]
         }

         result = client.create_recordset(new_rs, status=202)
-        ns_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet']
+        ns_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]

         changed_rs = ns_rs

         bad_records = [
             {
-                'nsdname': 'ns1.parent.com.'
+                "nsdname": "ns1.parent.com."
             },
             {
-                'nsdname': 'this.is.bad.'
+                "nsdname": "this.is.bad."
             }
         ]
-        changed_rs['records'] = bad_records
+        changed_rs["records"] = bad_records

         client.update_recordset(changed_rs, status=422)
     finally:
         if ns_rs:
-            client.delete_recordset(ns_rs['zoneId'], ns_rs['id'], status=(202, 404))
-            client.wait_until_recordset_deleted(ns_rs['zoneId'], ns_rs['id'])
+            client.delete_recordset(ns_rs["zoneId"], ns_rs["id"], status=(202, 404))
+            client.wait_until_recordset_deleted(ns_rs["zoneId"], ns_rs["id"])


 def test_update_to_txt_dotted_host_succeeds(shared_zone_test_context):
@@ -1502,15 +1502,15 @@ def test_update_to_txt_dotted_host_succeeds(shared_zone_test_context):

     try:
         result_rs = seed_text_recordset(client, "update_with.dots", ok_zone)
-        result_rs['ttl'] = 333
+        result_rs["ttl"] = 333

         update_rs = client.update_recordset(result_rs, status=202)
-        result_rs = client.wait_until_recordset_change_status(update_rs, 'Complete')['recordSet']
+        result_rs = client.wait_until_recordset_change_status(update_rs, "Complete")["recordSet"]
     finally:
         if result_rs:
-            delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202)
-            client.wait_until_recordset_change_status(delete_result, 'Complete')
+            delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
+            client.wait_until_recordset_change_status(delete_result, "Complete")


 def test_ns_update_existing_ns_origin_fails(shared_zone_test_context):
@@ -1520,11 +1520,11 @@ def test_ns_update_existing_ns_origin_fails(shared_zone_test_context):
     client = shared_zone_test_context.ok_vinyldns_client
     zone = shared_zone_test_context.parent_zone

-    list_results_page = client.list_recordsets_by_zone(zone['id'], status=200)['recordSets']
+    list_results_page = client.list_recordsets_by_zone(zone["id"], status=200)["recordSets"]

-    apex_ns = [item for item in list_results_page if item['type'] == 'NS' and item['name'] in zone['name']][0]
+    apex_ns = [item for item in list_results_page if item["type"] == "NS" and item["name"] in zone["name"]][0]

-    apex_ns['ttl'] =
apex_ns['ttl'] + 100 + apex_ns["ttl"] = apex_ns["ttl"] + 100 client.update_recordset(apex_ns, status=422) @@ -1537,21 +1537,21 @@ def test_update_existing_dotted_a_record_succeeds(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ok_zone - recordsets = client.list_recordsets_by_zone(zone['id'], record_name_filter="dotted.a", status=200)['recordSets'] + recordsets = client.list_recordsets_by_zone(zone["id"], record_name_filter="dotted.a", status=200)["recordSets"] update_rs = recordsets[0] - update_rs['records'] = [{'address': '1.1.1.1'}] + update_rs["records"] = [{"address": "1.1.1.1"}] try: update_response = client.update_recordset(update_rs, status=202) - updated_rs = client.wait_until_recordset_change_status(update_response, 'Complete')['recordSet'] - assert_that(updated_rs['records'], is_([{'address': '1.1.1.1'}])) + updated_rs = client.wait_until_recordset_change_status(update_response, "Complete")["recordSet"] + assert_that(updated_rs["records"], is_([{"address": "1.1.1.1"}])) finally: - update_rs['records'] = [{'address': '7.7.7.7'}] + update_rs["records"] = [{"address": "7.7.7.7"}] revert_rs_update = client.update_recordset(update_rs, status=202) - client.wait_until_recordset_change_status(revert_rs_update, 'Complete') + client.wait_until_recordset_change_status(revert_rs_update, "Complete") def test_update_existing_dotted_cname_record_succeeds(shared_zone_test_context): @@ -1562,18 +1562,18 @@ def test_update_existing_dotted_cname_record_succeeds(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ok_zone - recordsets = client.list_recordsets_by_zone(zone['id'], record_name_filter="dottedc.name", status=200)['recordSets'] + recordsets = client.list_recordsets_by_zone(zone["id"], record_name_filter="dottedc.name", status=200)["recordSets"] update_rs = recordsets[0] - update_rs['records'] = [{'cname': 'got.reference'}] + 
update_rs["records"] = [{"cname": "got.reference"}] try: update_response = client.update_recordset(update_rs, status=202) - updated_rs = client.wait_until_recordset_change_status(update_response, 'Complete')['recordSet'] - assert_that(updated_rs['records'], is_([{'cname': 'got.reference.'}])) + updated_rs = client.wait_until_recordset_change_status(update_response, "Complete")["recordSet"] + assert_that(updated_rs["records"], is_([{"cname": "got.reference."}])) finally: - update_rs['records'] = [{'cname': 'test.example.com'}] + update_rs["records"] = [{"cname": "test.example.com"}] revert_rs_update = client.update_recordset(update_rs, status=202) - client.wait_until_recordset_change_status(revert_rs_update, 'Complete') + client.wait_until_recordset_change_status(revert_rs_update, "Complete") def test_update_succeeds_for_applied_unsynced_record_change(shared_zone_test_context): @@ -1584,35 +1584,35 @@ def test_update_succeeds_for_applied_unsynced_record_change(shared_zone_test_con client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.parent_zone - a_rs = get_recordset_json(zone, 'already-applied-unsynced-update', 'A', - [{'address': '1.1.1.1'}, {'address': '2.2.2.2'}]) + a_rs = create_recordset(zone, "already-applied-unsynced-update", "A", + [{"address": "1.1.1.1"}, {"address": "2.2.2.2"}]) create_rs = {} try: create_response = client.create_recordset(a_rs, status=202) - create_rs = client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet'] + create_rs = client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] - dns_update(zone, 'already-applied-unsynced-update', 550, 'A', '8.8.8.8') + dns_update(zone, "already-applied-unsynced-update", 550, "A", "8.8.8.8") updates = create_rs - updates['ttl'] = 550 - updates['records'] = [ + updates["ttl"] = 550 + updates["records"] = [ { - 'address': '8.8.8.8' + "address": "8.8.8.8" } ] update_response = client.update_recordset(updates, status=202) 
- update_rs = client.wait_until_recordset_change_status(update_response, 'Complete')['recordSet'] + update_rs = client.wait_until_recordset_change_status(update_response, "Complete")["recordSet"] - retrieved_rs = client.get_recordset(zone['id'], update_rs['id'])['recordSet'] + retrieved_rs = client.get_recordset(zone["id"], update_rs["id"])["recordSet"] verify_recordset(retrieved_rs, updates) finally: try: - delete_result = client.delete_recordset(zone['id'], create_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(zone["id"], create_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") except: pass @@ -1625,32 +1625,32 @@ def test_update_fails_for_unapplied_unsynced_record_change(shared_zone_test_cont client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.parent_zone - a_rs = get_recordset_json(zone, 'unapplied-unsynced-update', 'A', [{'address': '1.1.1.1'}, {'address': '2.2.2.2'}]) + a_rs = create_recordset(zone, "unapplied-unsynced-update", "A", [{"address": "1.1.1.1"}, {"address": "2.2.2.2"}]) create_rs = {} try: create_response = client.create_recordset(a_rs, status=202) - create_rs = client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet'] + create_rs = client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] - dns_update(zone, 'unapplied-unsynced-update', 550, 'A', '8.8.8.8') + dns_update(zone, "unapplied-unsynced-update", 550, "A", "8.8.8.8") update_rs = create_rs - update_rs['records'] = [ + update_rs["records"] = [ { - 'address': '5.5.5.5' + "address": "5.5.5.5" } ] update_response = client.update_recordset(update_rs, status=202) - response = client.wait_until_recordset_change_status(update_response, 'Failed') - assert_that(response['systemMessage'], is_("Failed validating update to DNS for change " + response['id'] + + response = 
client.wait_until_recordset_change_status(update_response, "Failed") + assert_that(response["systemMessage"], is_("Failed validating update to DNS for change " + response["id"] + ":" + a_rs[ - 'name'] + ": This record set is out of sync with the DNS backend; sync this zone before attempting to update this record set.")) + "name"] + ": This record set is out of sync with the DNS backend; sync this zone before attempting to update this record set.")) finally: try: - delete_result = client.delete_recordset(zone['id'], create_rs['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(zone["id"], create_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") except: pass @@ -1662,9 +1662,9 @@ def test_update_high_value_domain_fails(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client zone_system = shared_zone_test_context.system_test_zone - list_results_page_system = client.list_recordsets_by_zone(zone_system['id'], status=200)['recordSets'] - record_system = [item for item in list_results_page_system if item['name'] == 'high-value-domain'][0] - record_system['ttl'] = record_system['ttl'] + 100 + list_results_page_system = client.list_recordsets_by_zone(zone_system["id"], status=200)["recordSets"] + record_system = [item for item in list_results_page_system if item["name"] == "high-value-domain"][0] + record_system["ttl"] = record_system["ttl"] + 100 errors_system = client.update_recordset(record_system, status=422) assert_that(errors_system, is_( @@ -1678,9 +1678,9 @@ def test_update_high_value_domain_fails_case_insensitive(shared_zone_test_contex client = shared_zone_test_context.ok_vinyldns_client zone_system = shared_zone_test_context.system_test_zone - list_results_page_system = client.list_recordsets_by_zone(zone_system['id'], status=200)['recordSets'] - record_system = [item for item in list_results_page_system if 
item['name'] == 'high-VALUE-domain-UPPER-CASE'][0] - record_system['ttl'] = record_system['ttl'] + 100 + list_results_page_system = client.list_recordsets_by_zone(zone_system["id"], status=200)["recordSets"] + record_system = [item for item in list_results_page_system if item["name"] == "high-VALUE-domain-UPPER-CASE"][0] + record_system["ttl"] = record_system["ttl"] + 100 errors_system = client.update_recordset(record_system, status=422) assert_that(errors_system, is_( @@ -1693,9 +1693,9 @@ def test_update_high_value_domain_fails_ip4_ptr(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client zone_ip4 = shared_zone_test_context.classless_base_zone - list_results_page_ip4 = client.list_recordsets_by_zone(zone_ip4['id'], status=200)['recordSets'] - record_ip4 = [item for item in list_results_page_ip4 if item['name'] == '253'][0] - record_ip4['ttl'] = record_ip4['ttl'] + 100 + list_results_page_ip4 = client.list_recordsets_by_zone(zone_ip4["id"], status=200)["recordSets"] + record_ip4 = [item for item in list_results_page_ip4 if item["name"] == "253"][0] + record_ip4["ttl"] = record_ip4["ttl"] + 100 errors_ip4 = client.update_recordset(record_ip4, status=422) assert_that(errors_ip4, @@ -1709,10 +1709,10 @@ def test_update_high_value_domain_fails_ip6_ptr(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client zone_ip6 = shared_zone_test_context.ip6_reverse_zone - list_results_page_ip6 = client.list_recordsets_by_zone(zone_ip6['id'], status=200)['recordSets'] - record_ip6 = [item for item in list_results_page_ip6 if item['name'] == '0.0.0.0.f.f.f.f.0.0.0.0.0.0.0.0.0.0.0.0'][ + list_results_page_ip6 = client.list_recordsets_by_zone(zone_ip6["id"], status=200)["recordSets"] + record_ip6 = [item for item in list_results_page_ip6 if item["name"] == "0.0.0.0.f.f.f.f.0.0.0.0.0.0.0.0.0.0.0.0"][ 0] - record_ip6['ttl'] = record_ip6['ttl'] + 100 + record_ip6["ttl"] = record_ip6["ttl"] + 100 errors_ip6 = 
client.update_recordset(record_ip6, status=422) assert_that(errors_ip6, is_( @@ -1725,11 +1725,11 @@ def test_no_update_access_non_test_zone(shared_zone_test_context): """ client = shared_zone_test_context.shared_zone_vinyldns_client - zone_id = shared_zone_test_context.non_test_shared_zone['id'] + zone_id = shared_zone_test_context.non_test_shared_zone["id"] - list_results = client.list_recordsets_by_zone(zone_id, status=200)['recordSets'] - record_update = [item for item in list_results if item['name'] == 'update-test'][0] - record_update['ttl'] = record_update['ttl'] + 100 + list_results = client.list_recordsets_by_zone(zone_id, status=200)["recordSets"] + record_update = [item for item in list_results if item["name"] == "update-test"][0] + record_update["ttl"] = record_update["ttl"] + 100 client.update_recordset(record_update, status=403) @@ -1746,21 +1746,21 @@ def test_update_from_user_in_record_owner_group_for_private_zone_fails(shared_zo create_rs = None try: - record_json = get_recordset_json(zone, 'test_shared_failure', 'A', [{'address': '1.1.1.1'}]) - record_json['ownerGroupId'] = shared_record_group['id'] + record_json = create_recordset(zone, "test_shared_failure", "A", [{"address": "1.1.1.1"}]) + record_json["ownerGroupId"] = shared_record_group["id"] create_response = ok_client.create_recordset(record_json, status=202) - create_rs = ok_client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet'] - assert_that(create_rs['ownerGroupId'], is_(shared_record_group['id'])) + create_rs = ok_client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] + assert_that(create_rs["ownerGroupId"], is_(shared_record_group["id"])) update = create_rs - update['ttl'] = update['ttl'] + 100 + update["ttl"] = update["ttl"] + 100 error = shared_zone_client.update_recordset(update, status=403) - assert_that(error, is_('User sharedZoneUser does not have access to update test-shared-failure.ok.')) + assert_that(error, is_("User 
sharedZoneUser does not have access to update test-shared-failure.ok.")) finally: if create_rs: - delete_result = ok_client.delete_recordset(zone['id'], create_rs['id'], status=202) - ok_client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = ok_client.delete_recordset(zone["id"], create_rs["id"], status=202) + ok_client.wait_until_recordset_change_status(delete_result, "Complete") def test_update_owner_group_from_user_in_record_owner_group_for_shared_zone_passes(shared_zone_test_context): @@ -1775,22 +1775,22 @@ def test_update_owner_group_from_user_in_record_owner_group_for_shared_zone_pass update_rs = None try: - record_json = get_recordset_json(shared_zone, 'test_shared_success', 'A', [{'address': '1.1.1.1'}]) - record_json['ownerGroupId'] = shared_record_group['id'] + record_json = create_recordset(shared_zone, "test_shared_success", "A", [{"address": "1.1.1.1"}]) + record_json["ownerGroupId"] = shared_record_group["id"] create_response = shared_client.create_recordset(record_json, status=202) - update = shared_client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet'] - assert_that(update['ownerGroupId'], is_(shared_record_group['id'])) + update = shared_client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] + assert_that(update["ownerGroupId"], is_(shared_record_group["id"])) - update['ttl'] = update['ttl'] + 100 + update["ttl"] = update["ttl"] + 100 update_response = ok_client.update_recordset(update, status=202) - update_rs = shared_client.wait_until_recordset_change_status(update_response, 'Complete')['recordSet'] - assert_that(update_rs['ownerGroupId'], is_(shared_record_group['id'])) + update_rs = shared_client.wait_until_recordset_change_status(update_response, "Complete")["recordSet"] + assert_that(update_rs["ownerGroupId"], is_(shared_record_group["id"])) finally: if update_rs: - delete_result = shared_client.delete_recordset(shared_zone['id'], update_rs['id'], 
status=202) - shared_client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = shared_client.delete_recordset(shared_zone["id"], update_rs["id"], status=202) + shared_client.wait_until_recordset_change_status(delete_result, "Complete") def test_update_owner_group_from_admin_in_shared_zone_passes(shared_zone_test_context): @@ -1804,22 +1804,22 @@ def test_update_owner_group_from_admin_in_shared_zone_passes(shared_zone_test_co update_rs = None try: - record_json = get_recordset_json(zone, 'test_shared_admin_update_success', 'A', [{'address': '1.1.1.1'}]) + record_json = create_recordset(zone, "test_shared_admin_update_success", "A", [{"address": "1.1.1.1"}]) create_response = shared_client.create_recordset(record_json, status=202) - update = shared_client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet'] - assert_that(update, is_not(has_key('ownerGroupId'))) + update = shared_client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] + assert_that(update, is_not(has_key("ownerGroupId"))) - update['ownerGroupId'] = group['id'] - update['ttl'] = update['ttl'] + 100 + update["ownerGroupId"] = group["id"] + update["ttl"] = update["ttl"] + 100 update_response = shared_client.update_recordset(update, status=202) - update_rs = shared_client.wait_until_recordset_change_status(update_response, 'Complete')['recordSet'] - assert_that(update_rs['ownerGroupId'], is_(group['id'])) + update_rs = shared_client.wait_until_recordset_change_status(update_response, "Complete")["recordSet"] + assert_that(update_rs["ownerGroupId"], is_(group["id"])) finally: if update_rs: - delete_result = shared_client.delete_recordset(zone['id'], update_rs['id'], status=202) - shared_client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = shared_client.delete_recordset(zone["id"], update_rs["id"], status=202) + shared_client.wait_until_recordset_change_status(delete_result, "Complete") def 
test_update_from_unassociated_user_in_shared_zone_passes_when_record_type_is_approved(shared_zone_test_context): @@ -1833,20 +1833,20 @@ def test_update_from_unassociated_user_in_shared_zone_passes_when_record_type_is update_rs = None try: - record_json = get_recordset_json(zone, 'test_shared_approved_record_type', 'A', [{'address': '1.1.1.1'}]) + record_json = create_recordset(zone, "test_shared_approved_record_type", "A", [{"address": "1.1.1.1"}]) create_response = shared_client.create_recordset(record_json, status=202) - create_rs = shared_client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet'] - assert_that(create_rs, is_not(has_key('ownerGroupId'))) + create_rs = shared_client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] + assert_that(create_rs, is_not(has_key("ownerGroupId"))) update = create_rs - update['ttl'] = update['ttl'] + 100 + update["ttl"] = update["ttl"] + 100 update_response = ok_client.update_recordset(update, status=202) - update_rs = shared_client.wait_until_recordset_change_status(update_response, 'Complete')['recordSet'] + update_rs = shared_client.wait_until_recordset_change_status(update_response, "Complete")["recordSet"] finally: if update_rs: - delete_result = shared_client.delete_recordset(zone['id'], update_rs['id'], status=202) - shared_client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = shared_client.delete_recordset(zone["id"], update_rs["id"], status=202) + shared_client.wait_until_recordset_change_status(delete_result, "Complete") def test_update_from_unassociated_user_in_shared_zone_fails(shared_zone_test_context): @@ -1860,21 +1860,21 @@ def test_update_from_unassociated_user_in_shared_zone_fails(shared_zone_test_con create_rs = None try: - record_json = get_recordset_json(zone, 'test_shared_unapproved_record_type', 'MX', - [{'preference': 3, 'exchange': 'mx'}]) + record_json = create_recordset(zone, "test_shared_unapproved_record_type", 
"MX", + [{"preference": 3, "exchange": "mx"}]) create_response = shared_client.create_recordset(record_json, status=202) - create_rs = shared_client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet'] - assert_that(create_rs, is_not(has_key('ownerGroupId'))) + create_rs = shared_client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] + assert_that(create_rs, is_not(has_key("ownerGroupId"))) update = create_rs - update['ttl'] = update['ttl'] + 100 + update["ttl"] = update["ttl"] + 100 error = ok_client.update_recordset(update, status=403) - assert_that(error, is_('User ok does not have access to update test-shared-unapproved-record-type.shared.')) + assert_that(error, is_("User ok does not have access to update test-shared-unapproved-record-type.shared.")) finally: if create_rs: - delete_result = shared_client.delete_recordset(zone['id'], create_rs['id'], status=202) - shared_client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = shared_client.delete_recordset(zone["id"], create_rs["id"], status=202) + shared_client.wait_until_recordset_change_status(delete_result, "Complete") @pytest.mark.serial @@ -1885,29 +1885,29 @@ def test_update_from_acl_for_shared_zone_passes(shared_zone_test_context): dummy_client = shared_zone_test_context.dummy_vinyldns_client shared_client = shared_zone_test_context.shared_zone_vinyldns_client - acl_rule = generate_acl_rule('Write', userId='dummy') + acl_rule = generate_acl_rule("Write", userId="dummy") zone = shared_zone_test_context.shared_zone update_rs = None try: add_shared_zone_acl_rules(shared_zone_test_context, [acl_rule]) - record_json = get_recordset_json(zone, 'test_shared_acl', 'A', [{'address': '1.1.1.1'}]) + record_json = create_recordset(zone, "test_shared_acl", "A", [{"address": "1.1.1.1"}]) create_response = shared_client.create_recordset(record_json, status=202) - update = 
shared_client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet'] - assert_that(update, is_not(has_key('ownerGroupId'))) + update = shared_client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] + assert_that(update, is_not(has_key("ownerGroupId"))) - update['ttl'] = update['ttl'] + 100 + update["ttl"] = update["ttl"] + 100 update_response = dummy_client.update_recordset(update, status=202) - update_rs = dummy_client.wait_until_recordset_change_status(update_response, 'Complete')['recordSet'] - assert_that(update, is_not(has_key('ownerGroupId'))) + update_rs = dummy_client.wait_until_recordset_change_status(update_response, "Complete")["recordSet"] + assert_that(update, is_not(has_key("ownerGroupId"))) finally: clear_shared_zone_acl_rules(shared_zone_test_context) if update_rs: - delete_result = shared_client.delete_recordset(zone['id'], update_rs['id'], status=202) - shared_client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = shared_client.delete_recordset(zone["id"], update_rs["id"], status=202) + shared_client.wait_until_recordset_change_status(delete_result, "Complete") def test_update_to_no_group_owner_passes(shared_zone_test_context): @@ -1921,21 +1921,21 @@ def test_update_to_no_group_owner_passes(shared_zone_test_context): update_rs = None try: - record_json = get_recordset_json(zone, 'test_shared_success_no_owner', 'A', [{'address': '1.1.1.1'}]) - record_json['ownerGroupId'] = shared_record_group['id'] + record_json = create_recordset(zone, "test_shared_success_no_owner", "A", [{"address": "1.1.1.1"}]) + record_json["ownerGroupId"] = shared_record_group["id"] create_response = shared_client.create_recordset(record_json, status=202) - update = shared_client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet'] - assert_that(update['ownerGroupId'], is_(shared_record_group['id'])) + update = 
shared_client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] + assert_that(update["ownerGroupId"], is_(shared_record_group["id"])) - update['ownerGroupId'] = None + update["ownerGroupId"] = None update_response = shared_client.update_recordset(update, status=202) - update_rs = shared_client.wait_until_recordset_change_status(update_response, 'Complete')['recordSet'] - assert_that(update_rs, is_not(has_key('ownerGroupId'))) + update_rs = shared_client.wait_until_recordset_change_status(update_response, "Complete")["recordSet"] + assert_that(update_rs, is_not(has_key("ownerGroupId"))) finally: if update_rs: - delete_result = shared_client.delete_recordset(zone['id'], update_rs['id'], status=202) - shared_client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = shared_client.delete_recordset(zone["id"], update_rs["id"], status=202) + shared_client.wait_until_recordset_change_status(delete_result, "Complete") def test_update_to_invalid_record_owner_group_fails(shared_zone_test_context): @@ -1949,20 +1949,20 @@ def test_update_to_invalid_record_owner_group_fails(shared_zone_test_context): create_rs = None try: - record_json = get_recordset_json(zone, 'test_shared_fail_no_owner2', 'A', [{'address': '1.1.1.1'}]) - record_json['ownerGroupId'] = shared_record_group['id'] + record_json = create_recordset(zone, "test_shared_fail_no_owner2", "A", [{"address": "1.1.1.1"}]) + record_json["ownerGroupId"] = shared_record_group["id"] create_response = shared_client.create_recordset(record_json, status=202) - create_rs = shared_client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet'] + create_rs = shared_client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] update = create_rs - update['ownerGroupId'] = 'no-existo' + update["ownerGroupId"] = "no-existo" error = shared_client.update_recordset(update, status=422) assert_that(error, is_('Record owner group with id 
"no-existo" not found')) finally: if create_rs: - delete_result = shared_client.delete_recordset(zone['id'], create_rs['id'], status=202) - shared_client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = shared_client.delete_recordset(zone["id"], create_rs["id"], status=202) + shared_client.wait_until_recordset_change_status(delete_result, "Complete") def test_update_to_group_a_user_is_not_in_fails(shared_zone_test_context): @@ -1976,19 +1976,19 @@ def test_update_to_group_a_user_is_not_in_fails(shared_zone_test_context): create_rs = None try: - record_json = get_recordset_json(zone, 'test_shared_fail_no_owner1', 'A', [{'address': '1.1.1.1'}]) + record_json = create_recordset(zone, "test_shared_fail_no_owner1", "A", [{"address": "1.1.1.1"}]) create_response = shared_client.create_recordset(record_json, status=202) - create_rs = shared_client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet'] + create_rs = shared_client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] update = create_rs - update['ownerGroupId'] = dummy_group['id'] + update["ownerGroupId"] = dummy_group["id"] error = shared_client.update_recordset(update, status=422) - assert_that(error, is_('User not in record owner group with id "' + dummy_group['id'] + '"')) + assert_that(error, is_(f"User not in record owner group with id \"{dummy_group['id']}\"")) finally: if create_rs: - delete_result = shared_client.delete_recordset(zone['id'], create_rs['id'], status=202) - shared_client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = shared_client.delete_recordset(zone["id"], create_rs["id"], status=202) + shared_client.wait_until_recordset_change_status(delete_result, "Complete") @pytest.mark.serial @@ -2002,20 +2002,20 @@ def test_update_with_global_acl_rule_only_fails(shared_zone_test_context): create_rs = None try: - record_json = get_recordset_json(zone, 'test-global-acl', 'A', 
[{'address': '1.1.1.1'}], 200, - 'shared-zone-group') + record_json = create_recordset(zone, "test-global-acl", "A", [{"address": "1.1.1.1"}], 200, + "shared-zone-group") create_response = shared_client.create_recordset(record_json, status=202) - create_rs = shared_client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet'] + create_rs = shared_client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] update = create_rs - update['ttl'] = 400 + update["ttl"] = 400 error = dummy_client.update_recordset(update, status=403) - assert_that(error, is_('User dummy does not have access to update test-global-acl.shared.')) + assert_that(error, is_("User dummy does not have access to update test-global-acl.shared.")) finally: if create_rs: - delete_result = shared_client.delete_recordset(zone['id'], create_rs['id'], status=202) - shared_client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = shared_client.delete_recordset(zone["id"], create_rs["id"], status=202) + shared_client.wait_until_recordset_change_status(delete_result, "Complete") @pytest.mark.serial @@ -2027,31 +2027,31 @@ def test_update_ds_success(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ds_zone record_data_create = [ - {'keytag': 60485, 'algorithm': 5, 'digesttype': 1, 'digest': '2BB183AF5F22588179A53B0A98631FAD1A292118'} + {"keytag": 60485, "algorithm": 5, "digesttype": 1, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"} ] record_data_update = [ - {'keytag': 60485, 'algorithm': 5, 'digesttype': 1, 'digest': '2BB183AF5F22588179A53B0A98631FAD1A292118'}, - {'keytag': 60485, 'algorithm': 5, 'digesttype': 2, - 'digest': 'D4B7D520E7BB5F0F67674A0CCEB1E3E0614B93C4F9E99B8383F6A1E4469DA50A'} + {"keytag": 60485, "algorithm": 5, "digesttype": 1, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"}, + {"keytag": 60485, "algorithm": 5, "digesttype": 2, + "digest": 
"D4B7D520E7BB5F0F67674A0CCEB1E3E0614B93C4F9E99B8383F6A1E4469DA50A"} ] - record_json = get_recordset_json(zone, 'dskey', 'DS', record_data_create, ttl=3600) + record_json = create_recordset(zone, "dskey", "DS", record_data_create, ttl=3600) result_rs = None try: create_call = client.create_recordset(record_json, status=202) - result_rs = client.wait_until_recordset_change_status(create_call, 'Complete')['recordSet'] + result_rs = client.wait_until_recordset_change_status(create_call, "Complete")["recordSet"] update_json = result_rs - update_json['records'] = record_data_update + update_json["records"] = record_data_update update_call = client.update_recordset(update_json, status=202) - result_rs = client.wait_until_recordset_change_status(update_call, 'Complete')['recordSet'] + result_rs = client.wait_until_recordset_change_status(update_call, "Complete")["recordSet"] # get result - get_result = client.get_recordset(result_rs['zoneId'], result_rs['id'])['recordSet'] + get_result = client.get_recordset(result_rs["zoneId"], result_rs["id"])["recordSet"] verify_recordset(get_result, update_json) finally: if result_rs: - client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) - client.wait_until_recordset_deleted(result_rs['zoneId'], result_rs['id']) + client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=(202, 404)) + client.wait_until_recordset_deleted(result_rs["zoneId"], result_rs["id"]) @pytest.mark.serial @@ -2063,38 +2063,38 @@ def test_update_ds_data_failures(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ds_zone record_data_create = [ - {'keytag': 60485, 'algorithm': 5, 'digesttype': 1, 'digest': '2BB183AF5F22588179A53B0A98631FAD1A292118'} + {"keytag": 60485, "algorithm": 5, "digesttype": 1, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"} ] - record_json = get_recordset_json(zone, 'dskey', 'DS', record_data_create, ttl=3600) + record_json = 
create_recordset(zone, "dskey", "DS", record_data_create, ttl=3600) result_rs = None try: create_call = client.create_recordset(record_json, status=202) - result_rs = client.wait_until_recordset_change_status(create_call, 'Complete')['recordSet'] + result_rs = client.wait_until_recordset_change_status(create_call, "Complete")["recordSet"] update_json_bad_hex = result_rs record_data_update = [ - {'keytag': 60485, 'algorithm': 5, 'digesttype': 1, 'digest': 'BADWWW'} + {"keytag": 60485, "algorithm": 5, "digesttype": 1, "digest": "BADWWW"} ] - update_json_bad_hex['records'] = record_data_update + update_json_bad_hex["records"] = record_data_update client.update_recordset(update_json_bad_hex, status=400) update_json_bad_alg = result_rs record_data_update = [ - {'keytag': 60485, 'algorithm': 0, 'digesttype': 1, 'digest': '2BB183AF5F22588179A53B0A98631FAD1A292118'} + {"keytag": 60485, "algorithm": 0, "digesttype": 1, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"} ] - update_json_bad_alg['records'] = record_data_update + update_json_bad_alg["records"] = record_data_update client.update_recordset(update_json_bad_alg, status=400) update_json_bad_dig = result_rs record_data_update = [ - {'keytag': 60485, 'algorithm': 5, 'digesttype': 0, 'digest': '2BB183AF5F22588179A53B0A98631FAD1A292118'} + {"keytag": 60485, "algorithm": 5, "digesttype": 0, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"} ] - update_json_bad_dig['records'] = record_data_update + update_json_bad_dig["records"] = record_data_update client.update_recordset(update_json_bad_dig, status=400) finally: if result_rs: - client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) - client.wait_until_recordset_deleted(result_rs['zoneId'], result_rs['id']) + client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=(202, 404)) + client.wait_until_recordset_deleted(result_rs["zoneId"], result_rs["id"]) @pytest.mark.serial @@ -2106,21 +2106,21 @@ def 
test_update_ds_bad_ttl(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ds_zone record_data_create = [ - {'keytag': 60485, 'algorithm': 5, 'digesttype': 1, 'digest': '2BB183AF5F22588179A53B0A98631FAD1A292118'} + {"keytag": 60485, "algorithm": 5, "digesttype": 1, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"} ] - record_json = get_recordset_json(zone, 'dskey', 'DS', record_data_create, ttl=3600) + record_json = create_recordset(zone, "dskey", "DS", record_data_create, ttl=3600) result_rs = None try: create_call = client.create_recordset(record_json, status=202) - result_rs = client.wait_until_recordset_change_status(create_call, 'Complete')['recordSet'] + result_rs = client.wait_until_recordset_change_status(create_call, "Complete")["recordSet"] update_json = result_rs - update_json['ttl'] = 100 + update_json["ttl"] = 100 client.update_recordset(update_json, status=422) finally: if result_rs: - client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) - client.wait_until_recordset_deleted(result_rs['zoneId'], result_rs['id']) + client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=(202, 404)) + client.wait_until_recordset_deleted(result_rs["zoneId"], result_rs["id"]) def test_update_fails_when_payload_and_route_zone_id_does_not_match(shared_zone_test_context): @@ -2134,24 +2134,24 @@ def test_update_fails_when_payload_and_route_zone_id_does_not_match(shared_zone_ created = None try: - record_json = get_recordset_json(zone, 'test_update_zone_id1', 'A', [{'address': '1.1.1.1'}]) + record_json = create_recordset(zone, "test_update_zone_id1", "A", [{"address": "1.1.1.1"}]) create_response = client.create_recordset(record_json, status=202) - created = client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet'] + created = client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] update = created - update['ttl'] = 
update['ttl'] + 100 - update['zoneId'] = shared_zone_test_context.dummy_zone['id'] + update["ttl"] = update["ttl"] + 100 + update["zoneId"] = shared_zone_test_context.dummy_zone["id"] - url = urljoin(client.index_url, u'/zones/{0}/recordsets/{1}'.format(zone[u'id'], update[u'id'])) - response, error = client.make_request(url, u'PUT', client.headers, json.dumps(update), not_found_ok=True, + url = urljoin(client.index_url, "/zones/{0}/recordsets/{1}".format(zone["id"], update["id"])) + response, error = client.make_request(url, "PUT", client.headers, json.dumps(update), not_found_ok=True, status=422) assert_that(error, is_("Cannot update RecordSet's zoneId attribute")) finally: if created: - delete_result = client.delete_recordset(zone['id'], created['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(zone["id"], created["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") def test_update_fails_when_payload_and_actual_zone_id_do_not_match(shared_zone_test_context): @@ -2165,12 +2165,12 @@ def test_update_fails_when_payload_and_actual_zone_id_do_not_match(shared_zone_t created = None try: - record_json = get_recordset_json(zone, 'test_update_zone_id', 'A', [{'address': '1.1.1.1'}]) + record_json = create_recordset(zone, "test_update_zone_id", "A", [{"address": "1.1.1.1"}]) create_response = client.create_recordset(record_json, status=202) - created = client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet'] + created = client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] update = created - update['zoneId'] = shared_zone_test_context.dummy_zone['id'] + update["zoneId"] = shared_zone_test_context.dummy_zone["id"] error = client.update_recordset(update, status=422) @@ -2178,5 +2178,5 @@ def test_update_fails_when_payload_and_actual_zone_id_do_not_match(shared_zone_t finally: if created: - 
delete_result = client.delete_recordset(zone['id'], created['id'], status=202) - client.wait_until_recordset_change_status(delete_result, 'Complete') + delete_result = client.delete_recordset(zone["id"], created["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") diff --git a/modules/api/functional_test/live_tests/shared_zone_test_context.py b/modules/api/functional_test/live_tests/shared_zone_test_context.py index a69c0da18..5422f897c 100644 --- a/modules/api/functional_test/live_tests/shared_zone_test_context.py +++ b/modules/api/functional_test/live_tests/shared_zone_test_context.py @@ -1,37 +1,96 @@ -import time -from hamcrest import * +import copy +import inspect +import logging +from typing import MutableMapping, Mapping + +from live_tests.list_batch_summaries_test_context import ListBatchChangeSummariesTestContext +from live_tests.list_groups_test_context import ListGroupsTestContext +from live_tests.list_recordsets_test_context import ListRecordSetsTestContext +from live_tests.list_zones_test_context import ListZonesTestContext +from live_tests.test_data import TestData from utils import * -from vinyldns_context import VinylDNSTestContext from vinyldns_python import VinylDNSClient -from list_batch_summaries_test_context import ListBatchChangeSummariesTestContext -from list_groups_test_context import ListGroupsTestContext -from list_recordsets_test_context import ListRecordSetsTestContext -from list_zones_test_context import ListZonesTestContext +logger = logging.getLogger(__name__) class SharedZoneTestContext(object): """ Creates multiple zones to test authorization / access to shared zones across users """ + _data_cache: MutableMapping[str, MutableMapping[str, Mapping]] = {} - def __init__(self, fixture_file=None): - self.ok_vinyldns_client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'okAccessKey', 'okSecretKey') - self.dummy_vinyldns_client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'dummyAccessKey', - 
'dummySecretKey') - self.shared_zone_vinyldns_client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'sharedZoneUserAccessKey', - 'sharedZoneUserSecretKey') - self.support_user_client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'supportUserAccessKey', - 'supportUserSecretKey') - self.unassociated_client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'listGroupAccessKey', - 'listGroupSecretKey') - self.test_user_client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'testUserAccessKey', - 'testUserSecretKey') - self.history_client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'history-key', 'history-secret') - self.list_zones = ListZonesTestContext() + @property + def ok_zone(self) -> Mapping: + return self.attempt_retrieve_value("_ok_zone") + + @property + def shared_zone(self) -> Mapping: + return self.attempt_retrieve_value("_shared_zone") + + @property + def history_zone(self) -> Mapping: + return self.attempt_retrieve_value("_history_zone") + + @property + def dummy_zone(self) -> Mapping: + return self.attempt_retrieve_value("_dummy_zone") + + @property + def ip6_reverse_zone(self) -> Mapping: + return self.attempt_retrieve_value("_ip6_reverse_zone") + + @property + def ip6_16_nibble_zone(self) -> Mapping: + return self.attempt_retrieve_value("_ip6_16_nibble_zone") + + @property + def ip4_reverse_zone(self) -> Mapping: + return self.attempt_retrieve_value("_ip4_reverse_zone") + + @property + def classless_base_zone(self) -> Mapping: + return self.attempt_retrieve_value("_classless_base_zone") + + @property + def classless_zone_delegation_zone(self) -> Mapping: + return self.attempt_retrieve_value("_classless_zone_delegation_zone") + + @property + def system_test_zone(self) -> Mapping: + return self.attempt_retrieve_value("_system_test_zone") + + @property + def parent_zone(self) -> Mapping: + return self.attempt_retrieve_value("_parent_zone") + + @property + def ds_zone(self) -> Mapping: + return self.attempt_retrieve_value("_ds_zone") + + 
@property + def requires_review_zone(self) -> Mapping: + return self.attempt_retrieve_value("_requires_review_zone") + + @property + def non_test_shared_zone(self) -> Mapping: + return self._non_test_shared_zone + + def __init__(self, partition_id: str): + self.partition_id = partition_id + self.ok_vinyldns_client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "okAccessKey", "okSecretKey") + self.dummy_vinyldns_client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "dummyAccessKey", "dummySecretKey") + self.shared_zone_vinyldns_client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "sharedZoneUserAccessKey", "sharedZoneUserSecretKey") + self.support_user_client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "supportUserAccessKey", "supportUserSecretKey") + self.unassociated_client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "listGroupAccessKey", "listGroupSecretKey") + self.test_user_client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "testUserAccessKey", "testUserSecretKey") + self.history_client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "history-key", "history-secret") + self.clients = [self.ok_vinyldns_client, self.dummy_vinyldns_client, self.shared_zone_vinyldns_client, self.support_user_client, + self.unassociated_client, self.test_user_client, self.history_client] + self.list_zones = ListZonesTestContext(partition_id) self.list_zones_client = self.list_zones.client - self.list_records_context = ListRecordSetsTestContext() - self.list_groups_context = ListGroupsTestContext() + self.list_records_context = ListRecordSetsTestContext(partition_id) + self.list_groups_context = ListGroupsTestContext(partition_id) self.list_batch_summaries_context = None self.dummy_group = None @@ -41,467 +100,524 @@ class SharedZoneTestContext(object): self.group_activity_created = None self.group_activity_updated = None - # if we are using an existing fixture, load the fixture file and pull all of our data from there - if fixture_file: - print 
"\r\n!!! FIXTURE FILE IS SET !!!" - self.load_fixture_file(fixture_file) - else: - print "\r\n!!! FIXTURE FILE NOT SET, BUILDING FIXTURE !!!" - # No fixture file, so we have to build everything ourselves - self.tear_down() # ensures that the environment is clean before starting - try: - ok_group = { - 'name': 'ok-group', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [{'id': 'ok'}, {'id': 'support-user-id'}], - 'admins': [{'id': 'ok'}] - } + self._history_zone = None + self._ok_zone = None + self._dummy_zone = None + self._ip6_reverse_zone = None + self._ip6_16_nibble_zone = None + self._ip4_reverse_zone = None + self._classless_base_zone = None + self._classless_zone_delegation_zone = None + self._system_test_zone = None + self._parent_zone = None + self._ds_zone = None + self._requires_review_zone = None + self._shared_zone = None + self._non_test_shared_zone = None - self.ok_group = self.ok_vinyldns_client.create_group(ok_group, status=200) - # in theory this shouldn't be needed, but getting 'user is not in group' errors on zone creation - self.confirm_member_in_group(self.ok_vinyldns_client, self.ok_group) + self.ip4_10_prefix = None + self.ip4_classless_prefix = None + self.ip6_prefix = None - dummy_group = { - 'name': 'dummy-group', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [{'id': 'dummy'}], - 'admins': [{'id': 'dummy'}] - } - self.dummy_group = self.dummy_vinyldns_client.create_group(dummy_group, status=200) - # in theory this shouldn't be needed, but getting 'user is not in group' errors on zone creation - self.confirm_member_in_group(self.dummy_vinyldns_client, self.dummy_group) + def setup(self): + partition_id = self.partition_id + try: + ok_group = { + "name": f"ok-group{partition_id}", + "email": "test@test.com", + "description": "this is a description", + "members": [{"id": "ok"}, {"id": "support-user-id"}], + "admins": [{"id": "ok"}] + } - shared_record_group = { - 'name': 
'record-ownergroup',
- 'email': 'test@test.com',
- 'description': 'this is a description',
- 'members': [{'id': 'sharedZoneUser'}, {'id': 'ok'}],
- 'admins': [{'id': 'sharedZoneUser'}, {'id': 'ok'}]
- }
- self.shared_record_group = self.ok_vinyldns_client.create_group(shared_record_group, status=200) + self.ok_group = self.ok_vinyldns_client.create_group(ok_group, status=200)
+ # in theory this shouldn't be needed, but getting "user is not in group" errors on zone creation
+ self.confirm_member_in_group(self.ok_vinyldns_client, self.ok_group)
- history_group = {
- 'name': 'history-group',
- 'email': 'test@test.com',
- 'description': 'this is a description',
- 'members': [{'id': 'history-id'}],
- 'admins': [{'id': 'history-id'}]
- }
- self.history_group = self.history_client.create_group(history_group, status=200)
- self.confirm_member_in_group(self.history_client, self.history_group) + dummy_group = {
+ "name": f"dummy-group{partition_id}",
+ "email": "test@test.com",
+ "description": "this is a description",
+ "members": [{"id": "dummy"}],
+ "admins": [{"id": "dummy"}]
+ }
+ self.dummy_group = self.dummy_vinyldns_client.create_group(dummy_group, status=200)
+ # in theory this shouldn't be needed, but getting "user is not in group" errors on zone creation
+ self.confirm_member_in_group(self.dummy_vinyldns_client, self.dummy_group)
- history_zone_change = self.history_client.create_zone(
- {
- 'name': 'system-test-history.',
- 'email': 'i.changed.this.1.times@history-test.com',
- 'shared': False,
- 'adminGroupId': self.history_group['id'],
- 'isTest': True,
- 'connection': {
- 'name': 'vinyldns.',
- 'keyName': VinylDNSTestContext.dns_key_name,
- 'key': VinylDNSTestContext.dns_key,
- 'primaryServer': VinylDNSTestContext.dns_ip
- },
- 'transferConnection': {
- 'name': 'vinyldns.',
- 'keyName': VinylDNSTestContext.dns_key_name,
- 'key': VinylDNSTestContext.dns_key,
- 'primaryServer': VinylDNSTestContext.dns_ip
- }
- }, status=202)
- self.history_zone = 
history_zone_change['zone'] + shared_record_group = { + "name": f"record-ownergroup{partition_id}", + "email": "test@test.com", + "description": "this is a description", + "members": [{"id": "sharedZoneUser"}, {"id": "ok"}, {"id": "support-user-id"}], + "admins": [{"id": "sharedZoneUser"}, {"id": "ok"}] + } + self.shared_record_group = self.ok_vinyldns_client.create_group(shared_record_group, status=200) - ok_zone_change = self.ok_vinyldns_client.create_zone( - { - 'name': 'ok.', - 'email': 'test@test.com', - 'shared': False, - 'adminGroupId': self.ok_group['id'], - 'isTest': True, - 'connection': { - 'name': 'ok.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip - }, - 'transferConnection': { - 'name': 'ok.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip - } - }, status=202) - self.ok_zone = ok_zone_change['zone'] + history_group = { + "name": f"history-group{partition_id}", + "email": "test@test.com", + "description": "this is a description", + "members": [{"id": "history-id"}], + "admins": [{"id": "history-id"}] + } + self.history_group = self.history_client.create_group(history_group, status=200) + self.confirm_member_in_group(self.history_client, self.history_group) - dummy_zone_change = self.dummy_vinyldns_client.create_zone( - { - 'name': 'dummy.', - 'email': 'test@test.com', - 'shared': False, - 'adminGroupId': self.dummy_group['id'], - 'isTest': True, - 'connection': { - 'name': 'dummy.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip - }, - 'transferConnection': { - 'name': 'dummy.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip - } - }, status=202) - self.dummy_zone = dummy_zone_change['zone'] - - ip6_reverse_zone_change = 
self.ok_vinyldns_client.create_zone( - { - 'name': '1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa.', - 'email': 'test@test.com', - 'shared': False, - 'adminGroupId': self.ok_group['id'], - 'isTest': True, - 'connection': { - 'name': 'ip6.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip - }, - 'transferConnection': { - 'name': 'ip6.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip - } - }, status=202 - ) - self.ip6_reverse_zone = ip6_reverse_zone_change['zone'] - - ip6_16_nibble_zone_change = self.ok_vinyldns_client.create_zone( - { - 'name': '0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa.', - 'email': 'test@test.com', - 'shared': False, - 'adminGroupId': self.ok_group['id'], - 'isTest': True, - 'backendId': 'func-test-backend' - }, status=202 - ) - self.ip6_16_nibble_zone = ip6_16_nibble_zone_change['zone'] - - ip4_reverse_zone_change = self.ok_vinyldns_client.create_zone( - { - 'name': '10.10.in-addr.arpa.', - 'email': 'test@test.com', - 'shared': False, - 'adminGroupId': self.ok_group['id'], - 'isTest': True, - 'connection': { - 'name': 'ip4.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip - }, - 'transferConnection': { - 'name': 'ip4.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip - } - }, status=202 - ) - self.ip4_reverse_zone = ip4_reverse_zone_change['zone'] - - self.classless_base_zone_json = { - 'name': '2.0.192.in-addr.arpa.', - 'email': 'test@test.com', - 'shared': False, - 'adminGroupId': self.ok_group['id'], - 'isTest': True, - 'connection': { - 'name': 'classless-base.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + history_zone_change = 
self.history_client.create_zone( + { + "name": f"system-test-history{partition_id}.", + "email": "i.changed.this.1.times@history-test.com", + "shared": False, + "adminGroupId": self.history_group["id"], + "isTest": True, + "connection": { + "name": "vinyldns.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "algorithm": VinylDNSTestContext.dns_key_algo, + "primaryServer": VinylDNSTestContext.name_server_ip }, - 'transferConnection': { - 'name': 'classless-base.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "transferConnection": { + "name": "vinyldns.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "algorithm": VinylDNSTestContext.dns_key_algo, + "primaryServer": VinylDNSTestContext.name_server_ip } - } + }, status=202) + self._history_zone = history_zone_change["zone"] - classless_base_zone_change = self.ok_vinyldns_client.create_zone( - self.classless_base_zone_json, status=202 - ) - self.classless_base_zone = classless_base_zone_change['zone'] + ok_zone_change = self.ok_vinyldns_client.create_zone( + { + "name": f"ok{partition_id}.", + "email": "test@test.com", + "shared": False, + "adminGroupId": self.ok_group["id"], + "isTest": True, + "connection": { + "name": "ok.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "algorithm": VinylDNSTestContext.dns_key_algo, + "primaryServer": VinylDNSTestContext.name_server_ip + }, + "transferConnection": { + "name": "ok.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "algorithm": VinylDNSTestContext.dns_key_algo, + "primaryServer": VinylDNSTestContext.name_server_ip + } + }, status=202) + self._ok_zone = ok_zone_change["zone"] - classless_zone_delegation_change = self.ok_vinyldns_client.create_zone( - { - 'name': '192/30.2.0.192.in-addr.arpa.', - 'email': 'test@test.com', - 
'shared': False, - 'adminGroupId': self.ok_group['id'], - 'isTest': True, - 'connection': { - 'name': 'classless.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip - }, - 'transferConnection': { - 'name': 'classless.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip - } - }, status=202 - ) - self.classless_zone_delegation_zone = classless_zone_delegation_change['zone'] + dummy_zone_change = self.dummy_vinyldns_client.create_zone( + { + "name": f"dummy{partition_id}.", + "email": "test@test.com", + "shared": False, + "adminGroupId": self.dummy_group["id"], + "isTest": True, + "connection": { + "name": "dummy.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "algorithm": VinylDNSTestContext.dns_key_algo, + "primaryServer": VinylDNSTestContext.name_server_ip + }, + "transferConnection": { + "name": "dummy.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "algorithm": VinylDNSTestContext.dns_key_algo, + "primaryServer": VinylDNSTestContext.name_server_ip + } + }, status=202) + self._dummy_zone = dummy_zone_change["zone"] - system_test_zone_change = self.ok_vinyldns_client.create_zone( - { - 'name': 'system-test.', - 'email': 'test@test.com', - 'shared': False, - 'adminGroupId': self.ok_group['id'], - 'isTest': True, - 'connection': { - 'name': 'system-test.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip - }, - 'transferConnection': { - 'name': 'system-test.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip - } - }, status=202 - ) - self.system_test_zone = system_test_zone_change['zone'] + self.ip6_prefix = f"fd69:27cc:fe9{partition_id}" + ip6_reverse_zone_change 
= self.ok_vinyldns_client.create_zone( + { + "name": f"{partition_id}.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa.", + "email": "test@test.com", + "shared": False, + "adminGroupId": self.ok_group["id"], + "isTest": True, + "connection": { + "name": "ip6.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "algorithm": VinylDNSTestContext.dns_key_algo, + "primaryServer": VinylDNSTestContext.name_server_ip + }, + "transferConnection": { + "name": "ip6.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "algorithm": VinylDNSTestContext.dns_key_algo, + "primaryServer": VinylDNSTestContext.name_server_ip + } + }, status=202 + ) + self._ip6_reverse_zone = ip6_reverse_zone_change["zone"] - # parent zone gives access to the dummy user, dummy user cannot manage ns records - parent_zone_change = self.ok_vinyldns_client.create_zone( - { - 'name': 'parent.com.', - 'email': 'test@test.com', - 'shared': False, - 'adminGroupId': self.ok_group['id'], - 'isTest': True, - 'acl': { - 'rules': [ - { - 'accessLevel': 'Delete', - 'description': 'some_test_rule', - 'userId': 'dummy' - } - ] - }, - 'connection': { - 'name': 'parent.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip - }, - 'transferConnection': { - 'name': 'parent.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip - } - }, status=202) - self.parent_zone = parent_zone_change['zone'] + ip6_16_nibble_zone_change = self.ok_vinyldns_client.create_zone( + { + "name": f"0.0.0.1.{partition_id}.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa.", + "email": "test@test.com", + "shared": False, + "adminGroupId": self.ok_group["id"], + "isTest": True, + "backendId": "func-test-backend" + }, status=202 + ) + self._ip6_16_nibble_zone = ip6_16_nibble_zone_change["zone"] - # mimicking the spec example - ds_zone_change = 
self.ok_vinyldns_client.create_zone( - { - 'name': 'example.com.', - 'email': 'test@test.com', - 'shared': False, - 'adminGroupId': self.ok_group['id'], - 'isTest': True, - 'connection': { - 'name': 'example.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip - }, - 'transferConnection': { - 'name': 'example.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip - } - }, status=202) - self.ds_zone = ds_zone_change['zone'] + self.ip4_10_prefix = f"10.{partition_id}" + ip4_reverse_zone_change = self.ok_vinyldns_client.create_zone( + { + "name": f"{partition_id}.10.in-addr.arpa.", + "email": "test@test.com", + "shared": False, + "adminGroupId": self.ok_group["id"], + "isTest": True, + "connection": { + "name": "ip4.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "algorithm": VinylDNSTestContext.dns_key_algo, + "primaryServer": VinylDNSTestContext.name_server_ip + }, + "transferConnection": { + "name": "ip4.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "algorithm": VinylDNSTestContext.dns_key_algo, + "primaryServer": VinylDNSTestContext.name_server_ip + } + }, status=202 + ) + self._ip4_reverse_zone = ip4_reverse_zone_change["zone"] - # zone with name configured for manual review - requires_review_zone_change = self.ok_vinyldns_client.create_zone( - { - 'name': 'zone.requires.review.', - 'email': 'test@test.com', - 'shared': False, - 'adminGroupId': self.ok_group['id'], - 'isTest': True, - 'backendId': 'func-test-backend' - }, status=202) - self.requires_review_zone = requires_review_zone_change['zone'] + self.ip4_classless_prefix = f"192.0.{partition_id}" + classless_base_zone_change = self.ok_vinyldns_client.create_zone( + { + "name": f"{partition_id}.0.192.in-addr.arpa.", + "email": "test@test.com", + "shared": False, + 
"adminGroupId": self.ok_group["id"], + "isTest": True, + "connection": { + "name": "classless-base.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "algorithm": VinylDNSTestContext.dns_key_algo, + "primaryServer": VinylDNSTestContext.name_server_ip + }, + "transferConnection": { + "name": "classless-base.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "algorithm": VinylDNSTestContext.dns_key_algo, + "primaryServer": VinylDNSTestContext.name_server_ip + } + }, status=202 + ) + self._classless_base_zone = classless_base_zone_change["zone"] - get_shared_zones = self.shared_zone_vinyldns_client.list_zones(status=200)['zones'] - shared_zone = [zone for zone in get_shared_zones if zone['name'] == "shared."] - non_test_shared_zone = [zone for zone in get_shared_zones if zone['name'] == "non.test.shared."] + classless_zone_delegation_change = self.ok_vinyldns_client.create_zone( + { + "name": f"192/30.{partition_id}.0.192.in-addr.arpa.", + "email": "test@test.com", + "shared": False, + "adminGroupId": self.ok_group["id"], + "isTest": True, + "connection": { + "name": "classless.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "algorithm": VinylDNSTestContext.dns_key_algo, + "primaryServer": VinylDNSTestContext.name_server_ip + }, + "transferConnection": { + "name": "classless.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "algorithm": VinylDNSTestContext.dns_key_algo, + "primaryServer": VinylDNSTestContext.name_server_ip + } + }, status=202 + ) + self._classless_zone_delegation_zone = classless_zone_delegation_change["zone"] - shared_zone_change = self.set_up_shared_zone(shared_zone[0]['id']) - self.shared_zone = shared_zone_change['zone'] + system_test_zone_change = self.ok_vinyldns_client.create_zone( + { + "name": f"system-test{partition_id}.", + "email": "test@test.com", + "shared": False, + 
"adminGroupId": self.ok_group["id"], + "isTest": True, + "connection": { + "name": "system-test.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "algorithm": VinylDNSTestContext.dns_key_algo, + "primaryServer": VinylDNSTestContext.name_server_ip + }, + "transferConnection": { + "name": "system-test.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "algorithm": VinylDNSTestContext.dns_key_algo, + "primaryServer": VinylDNSTestContext.name_server_ip + } + }, status=202 + ) + self._system_test_zone = system_test_zone_change["zone"] - non_test_shared_zone_change = self.set_up_shared_zone(non_test_shared_zone[0]['id']) - self.non_test_shared_zone = non_test_shared_zone_change['zone'] + # parent zone gives access to the dummy user, dummy user cannot manage ns records + parent_zone_change = self.ok_vinyldns_client.create_zone( + { + "name": f"parent.com{partition_id}.", + "email": "test@test.com", + "shared": False, + "adminGroupId": self.ok_group["id"], + "isTest": True, + "acl": { + "rules": [ + { + "accessLevel": "Delete", + "description": "some_test_rule", + "userId": "dummy" + } + ] + }, + "connection": { + "name": "parent.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "algorithm": VinylDNSTestContext.dns_key_algo, + "primaryServer": VinylDNSTestContext.name_server_ip + }, + "transferConnection": { + "name": "parent.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "algorithm": VinylDNSTestContext.dns_key_algo, + "primaryServer": VinylDNSTestContext.name_server_ip + } + }, status=202) + self._parent_zone = parent_zone_change["zone"] - # wait until our zones are created - self.ok_vinyldns_client.wait_until_zone_active(system_test_zone_change[u'zone'][u'id']) - self.ok_vinyldns_client.wait_until_zone_active(ok_zone_change[u'zone'][u'id']) - 
self.dummy_vinyldns_client.wait_until_zone_active(dummy_zone_change[u'zone'][u'id']) - self.ok_vinyldns_client.wait_until_zone_active(ip6_reverse_zone_change[u'zone'][u'id']) - self.ok_vinyldns_client.wait_until_zone_active(ip6_16_nibble_zone_change[u'zone'][u'id']) - self.ok_vinyldns_client.wait_until_zone_active(ip4_reverse_zone_change[u'zone'][u'id']) - self.ok_vinyldns_client.wait_until_zone_active(classless_base_zone_change[u'zone'][u'id']) - self.ok_vinyldns_client.wait_until_zone_active(classless_zone_delegation_change[u'zone'][u'id']) - self.ok_vinyldns_client.wait_until_zone_active(system_test_zone_change[u'zone'][u'id']) - self.ok_vinyldns_client.wait_until_zone_active(parent_zone_change[u'zone'][u'id']) - self.ok_vinyldns_client.wait_until_zone_active(ds_zone_change[u'zone'][u'id']) - self.ok_vinyldns_client.wait_until_zone_active(requires_review_zone_change[u'zone'][u'id']) - self.history_client.wait_until_zone_active(history_zone_change[u'zone'][u'id']) - self.shared_zone_vinyldns_client.wait_until_zone_change_status_synced(shared_zone_change) + # mimicking the spec example + ds_zone_change = self.ok_vinyldns_client.create_zone( + { + "name": f"example.com{partition_id}.", + "email": "test@test.com", + "shared": False, + "adminGroupId": self.ok_group["id"], + "isTest": True, + "connection": { + "name": "example.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "algorithm": VinylDNSTestContext.dns_key_algo, + "primaryServer": VinylDNSTestContext.name_server_ip + }, + "transferConnection": { + "name": "example.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "algorithm": VinylDNSTestContext.dns_key_algo, + "primaryServer": VinylDNSTestContext.name_server_ip + } + }, status=202) + self._ds_zone = ds_zone_change["zone"] - shared_sync_change = self.shared_zone_vinyldns_client.sync_zone(self.shared_zone['id']) - 
self.shared_zone_vinyldns_client.wait_until_zone_change_status_synced(non_test_shared_zone_change) - non_test_shared_sync_change = self.shared_zone_vinyldns_client.sync_zone( - self.non_test_shared_zone['id']) + # zone with name configured for manual review + requires_review_zone_change = self.ok_vinyldns_client.create_zone( + { + "name": f"zone.requires.review{partition_id}.", + "email": "test@test.com", + "shared": False, + "adminGroupId": self.ok_group["id"], + "isTest": True, + "backendId": "func-test-backend" + }, status=202) + self._requires_review_zone = requires_review_zone_change["zone"] - self.shared_zone_vinyldns_client.wait_until_zone_change_status_synced(shared_sync_change) - self.shared_zone_vinyldns_client.wait_until_zone_change_status_synced(non_test_shared_sync_change) + # Shared zone + shared_zone_change = self.support_user_client.create_zone( + { + "name": f"shared{partition_id}.", + "email": "test@test.com", + "shared": True, + "adminGroupId": self.shared_record_group["id"], + "isTest": True, + "connection": { + "name": "shared.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "algorithm": VinylDNSTestContext.dns_key_algo, + "primaryServer": VinylDNSTestContext.name_server_ip + }, + "transferConnection": { + "name": "shared.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "algorithm": VinylDNSTestContext.dns_key_algo, + "primaryServer": VinylDNSTestContext.name_server_ip + } + }, status=202) + self._shared_zone = shared_zone_change["zone"] - # validate all in there - zones = self.dummy_vinyldns_client.list_zones()['zones'] - assert_that(len(zones), is_(2)) - zones = self.ok_vinyldns_client.list_zones()['zones'] - assert_that(len(zones), is_(10)) - zones = self.shared_zone_vinyldns_client.list_zones()['zones'] - assert_that(len(zones), is_(2)) + # Non-test shared zone + non_test_shared_zone_change = self.support_user_client.create_zone( + { + "name": 
f"non.test.shared{partition_id}.", + "email": "test@test.com", + "shared": True, + "adminGroupId": self.shared_record_group["id"], + "isTest": False, + "connection": { + "name": "shared.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "algorithm": VinylDNSTestContext.dns_key_algo, + "primaryServer": VinylDNSTestContext.name_server_ip + }, + "transferConnection": { + "name": "shared.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "algorithm": VinylDNSTestContext.dns_key_algo, + "primaryServer": VinylDNSTestContext.name_server_ip + } + }, status=202) + self._non_test_shared_zone = non_test_shared_zone_change["zone"] - # initialize history - self.init_history() + # wait until our zones are created + self.ok_vinyldns_client.wait_until_zone_active(system_test_zone_change["zone"]["id"]) + self.ok_vinyldns_client.wait_until_zone_active(ok_zone_change["zone"]["id"]) + self.dummy_vinyldns_client.wait_until_zone_active(dummy_zone_change["zone"]["id"]) + self.ok_vinyldns_client.wait_until_zone_active(ip6_reverse_zone_change["zone"]["id"]) + self.ok_vinyldns_client.wait_until_zone_active(ip6_16_nibble_zone_change["zone"]["id"]) + self.ok_vinyldns_client.wait_until_zone_active(ip4_reverse_zone_change["zone"]["id"]) + self.ok_vinyldns_client.wait_until_zone_active(classless_base_zone_change["zone"]["id"]) + self.ok_vinyldns_client.wait_until_zone_active(classless_zone_delegation_change["zone"]["id"]) + self.ok_vinyldns_client.wait_until_zone_active(system_test_zone_change["zone"]["id"]) + self.ok_vinyldns_client.wait_until_zone_active(parent_zone_change["zone"]["id"]) + self.ok_vinyldns_client.wait_until_zone_active(ds_zone_change["zone"]["id"]) + self.ok_vinyldns_client.wait_until_zone_active(requires_review_zone_change["zone"]["id"]) + self.history_client.wait_until_zone_active(history_zone_change["zone"]["id"]) + 
self.shared_zone_vinyldns_client.wait_until_zone_active(shared_zone_change["zone"]["id"]) + self.shared_zone_vinyldns_client.wait_until_zone_active(non_test_shared_zone_change["zone"]["id"]) - # initalize group activity - self.init_group_activity() + # validate all in there + zones = self.dummy_vinyldns_client.list_zones()["zones"] + assert_that(len(zones), is_(2)) + zones = self.ok_vinyldns_client.list_zones()["zones"] + assert_that(len(zones), is_(12)) - # initialize list zones, only do this when constructing the whole! - self.list_zones.build() + # initialize history + self.init_history() - # note: there are no state to load, the tests only need the client - self.list_zones_client = self.list_zones.client + # initialize group activity + self.init_group_activity() - # build the list of records; note: we do need to save the test records - self.list_records_context.build() + # initialize list zones, only do this when constructing the whole! + self.list_zones.build() - # build the list of groups - self.list_groups_context.build() + # note: there is no state to load, the tests only need the client + self.list_zones_client = self.list_zones.client - except: - # teardown if there was any issue in setup - try: - self.tear_down() - except: - pass - raise + # build the list of records; note: we do need to save the test records + self.list_records_context.build() - # We need to load somethings AFTER we are all initialized, do that here - self.list_batch_summaries_context = ListBatchChangeSummariesTestContext(self) + # build the list of groups + self.list_groups_context.build() + + self.list_batch_summaries_context = ListBatchChangeSummariesTestContext() + except Exception as e: + # Cleanup if setup fails + self.tear_down() + raise def init_history(self): - from test_data import TestData - import copy # Initialize the zone history # change the zone nine times so we have update events in zone change history, # ten total changes including creation for i in range(2, 11): - 
zone_update = copy.deepcopy(self.history_zone) - zone_update['connection']['key'] = VinylDNSTestContext.dns_key - zone_update['transferConnection']['key'] = VinylDNSTestContext.dns_key - zone_update['email'] = 'i.changed.this.{0}.times@history-test.com'.format(i) - zone_update = self.history_client.update_zone(zone_update, status=202)['zone'] + zone_update = copy.deepcopy(self._history_zone) + zone_update["connection"]["key"] = VinylDNSTestContext.dns_key + zone_update["transferConnection"]["key"] = VinylDNSTestContext.dns_key + zone_update["email"] = "i.changed.this.{0}.times@history-test.com".format(i) + self.history_client.update_zone(zone_update, status=202) # create some record sets test_a = TestData.A.copy() - test_a['zoneId'] = self.history_zone['id'] + test_a["zoneId"] = self._history_zone["id"] test_aaaa = TestData.AAAA.copy() - test_aaaa['zoneId'] = self.history_zone['id'] + test_aaaa["zoneId"] = self._history_zone["id"] test_cname = TestData.CNAME.copy() - test_cname['zoneId'] = self.history_zone['id'] + test_cname["zoneId"] = self._history_zone["id"] - a_record = self.history_client.create_recordset(test_a, status=202)['recordSet'] - aaaa_record = self.history_client.create_recordset(test_aaaa, status=202)['recordSet'] - cname_record = self.history_client.create_recordset(test_cname, status=202)['recordSet'] + a_record = self.history_client.create_recordset(test_a, status=202)["recordSet"] + aaaa_record = self.history_client.create_recordset(test_aaaa, status=202)["recordSet"] + cname_record = self.history_client.create_recordset(test_cname, status=202)["recordSet"] # wait here for all the record sets to be created - self.history_client.wait_until_recordset_exists(a_record['zoneId'], a_record['id']) - self.history_client.wait_until_recordset_exists(aaaa_record['zoneId'], aaaa_record['id']) - self.history_client.wait_until_recordset_exists(cname_record['zoneId'], cname_record['id']) + self.history_client.wait_until_recordset_exists(a_record["zoneId"], 
a_record["id"]) + self.history_client.wait_until_recordset_exists(aaaa_record["zoneId"], aaaa_record["id"]) + self.history_client.wait_until_recordset_exists(cname_record["zoneId"], cname_record["id"]) # update the record sets a_record_update = copy.deepcopy(a_record) - a_record_update['ttl'] += 100 - a_record_update['records'][0]['address'] = '9.9.9.9' + a_record_update["ttl"] += 100 + a_record_update["records"][0]["address"] = "9.9.9.9" a_change = self.history_client.update_recordset(a_record_update, status=202) aaaa_record_update = copy.deepcopy(aaaa_record) - aaaa_record_update['ttl'] += 100 - aaaa_record_update['records'][0]['address'] = '2003:db8:0:0:0:0:0:4' + aaaa_record_update["ttl"] += 100 + aaaa_record_update["records"][0]["address"] = "2003:db8:0:0:0:0:0:4" aaaa_change = self.history_client.update_recordset(aaaa_record_update, status=202) cname_record_update = copy.deepcopy(cname_record) - cname_record_update['ttl'] += 100 - cname_record_update['records'][0]['cname'] = 'changed-cname.' + cname_record_update["ttl"] += 100 + cname_record_update["records"][0]["cname"] = "changed-cname." 
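The record set updates in this hunk all follow one pattern: deep-copy the record returned by the API, mutate the copy, then submit it and wait for the change to complete. A minimal stdlib-only sketch of the copy-then-mutate step (the dict shape mirrors `TestData.A`; no VinylDNS client is involved):

```python
import copy

def build_recordset_update(record, ttl_delta=100, new_address="9.9.9.9"):
    """Return an updated copy of a record set without mutating the original."""
    # deepcopy so the nested "records" list is not shared with the fixture
    update = copy.deepcopy(record)
    update["ttl"] += ttl_delta
    update["records"][0]["address"] = new_address
    return update

a_record = {"name": "test-create-a-ok", "ttl": 100,
            "records": [{"address": "10.1.1.1"}, {"address": "10.2.2.2"}]}
updated = build_recordset_update(a_record)

assert a_record["ttl"] == 100          # original fixture is untouched
assert updated["ttl"] == 200
assert updated["records"][0]["address"] == "9.9.9.9"
```

A shallow `dict.copy()` would not be enough here, since the inner record dicts would still be shared between the original and the update.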
cname_change = self.history_client.update_recordset(cname_record_update, status=202) - self.history_client.wait_until_recordset_change_status(a_change, 'Complete') - self.history_client.wait_until_recordset_change_status(aaaa_change, 'Complete') - self.history_client.wait_until_recordset_change_status(cname_change, 'Complete') + self.history_client.wait_until_recordset_change_status(a_change, "Complete") + self.history_client.wait_until_recordset_change_status(aaaa_change, "Complete") + self.history_client.wait_until_recordset_change_status(cname_change, "Complete") # delete the recordsets - self.history_client.delete_recordset(a_record['zoneId'], a_record['id']) - self.history_client.delete_recordset(aaaa_record['zoneId'], aaaa_record['id']) - self.history_client.delete_recordset(cname_record['zoneId'], cname_record['id']) + self.history_client.delete_recordset(a_record["zoneId"], a_record["id"]) + self.history_client.delete_recordset(aaaa_record["zoneId"], aaaa_record["id"]) + self.history_client.delete_recordset(cname_record["zoneId"], cname_record["id"]) - self.history_client.wait_until_recordset_deleted(a_record['zoneId'], a_record['id']) - self.history_client.wait_until_recordset_deleted(aaaa_record['zoneId'], aaaa_record['id']) - self.history_client.wait_until_recordset_deleted(cname_record['zoneId'], cname_record['id']) + self.history_client.wait_until_recordset_deleted(a_record["zoneId"], a_record["id"]) + self.history_client.wait_until_recordset_deleted(aaaa_record["zoneId"], aaaa_record["id"]) + self.history_client.wait_until_recordset_deleted(cname_record["zoneId"], cname_record["id"]) def init_group_activity(self): client = self.ok_vinyldns_client - created_group = None - group_name = 'test-list-group-activity-max-item-success' + group_name = "test-list-group-activity-max-item-success" # cleanup existing group if it's already in there groups = client.list_all_my_groups() - existing = [grp for grp in groups if grp['name'] == group_name] + existing = 
[grp for grp in groups if grp["name"] == group_name] for grp in existing: - client.delete_group(grp['id'], status=200) + client.delete_group(grp["id"], status=200) - members = [{'id': 'ok'}] + members = [{"id": "ok"}] new_group = { - 'name': group_name, - 'email': 'test@test.com', - 'members': members, - 'admins': [{'id': 'ok'}] + "name": group_name, + "email": "test@test.com", + "members": members, + "admins": [{"id": "ok"}] } created_group = client.create_group(new_group, status=200) @@ -509,81 +625,19 @@ class SharedZoneTestContext(object): updated_groups = [] # each update changes the member for runner in range(0, 10): - id = "dummy{0:0>3}".format(runner) - members = [{'id': id}] + members = [{"id": "dummy{0:0>3}".format(runner)}] update_groups.append({ - 'id': created_group['id'], - 'name': group_name, - 'email': 'test@test.com', - 'members': members, - 'admins': [{'id': 'ok'}] + "id": created_group["id"], + "name": group_name, + "email": "test@test.com", + "members": members, + "admins": [{"id": "ok"}] }) - updated_groups.append(client.update_group(update_groups[runner]['id'], update_groups[runner], status=200)) + updated_groups.append(client.update_group(update_groups[runner]["id"], update_groups[runner], status=200)) self.group_activity_created = created_group self.group_activity_updated = updated_groups - def load_fixture_file(self, fixture_file): - # The fixture file contains all of the groups and zones, - # The format is simply json where groups = [] and zones = [] - import json - with open(fixture_file) as json_file: - data = json.load(json_file) - self.ok_group = data['ok_group'] - self.ok_zone = data['ok_zone'] - self.dummy_group = data['dummy_group'] - self.shared_record_group = data['shared_record_group'] - self.dummy_zone = data['dummy_zone'] - self.ip6_reverse_zone = data['ip6_reverse_zone'] - self.ip6_16_nibble_zone = data['ip6_16_nibble_zone'] - self.ip4_reverse_zone = data['ip4_reverse_zone'] - self.classless_base_zone = 
data['classless_base_zone'] - self.classless_zone_delegation_zone = data['classless_zone_delegation_zone'] - self.system_test_zone = data['system_test_zone'] - self.parent_zone = data['parent_zone'] - self.ds_zone = data['ds_zone'] - self.requires_review_zone = data['requires_review_zone'] - self.shared_zone = data['shared_zone'] - self.non_test_shared_zone = data['non_test_shared_zone'] - self.history_zone = data['history_zone'] - self.history_group = data['history_group'] - self.group_activity_created = data['group_activity_created'] - self.group_activity_updated = data['group_activity_updated'] - - def out_fixture_file(self, fixture_file): - print "\r\n!!! PRINTING OUT FIXTURE FILE !!!" - import json - # output the fixture file, be sure to be in sync with the load_fixture_file - data = {'ok_group': self.ok_group, 'ok_zone': self.ok_zone, 'dummy_group': self.dummy_group, - 'shared_record_group': self.shared_record_group, 'dummy_zone': self.dummy_zone, - 'ip6_reverse_zone': self.ip6_reverse_zone, 'ip6_16_nibble_zone': self.ip6_16_nibble_zone, - 'ip4_reverse_zone': self.ip4_reverse_zone, 'classless_base_zone': self.classless_base_zone, - 'classless_zone_delegation_zone': self.classless_zone_delegation_zone, - 'system_test_zone': self.system_test_zone, 'parent_zone': self.parent_zone, 'ds_zone': self.ds_zone, - 'requires_review_zone': self.requires_review_zone, 'shared_zone': self.shared_zone, - 'non_test_shared_zone': self.non_test_shared_zone, 'history_zone': self.history_zone, - 'history_group': self.history_group, 'group_activity_created': self.group_activity_created, - 'group_activity_updated': self.group_activity_updated} - with open(fixture_file, 'w') as out_file: - json.dump(data, out_file) - - def set_up_shared_zone(self, zone_id): - # shared zones are created through test data loader, but needs connection info added here to use - get_shared_zone = self.shared_zone_vinyldns_client.get_zone(zone_id) - shared_zone = get_shared_zone['zone'] - - connection_info 
= { - 'name': 'shared.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip - } - - shared_zone['connection'] = connection_info - shared_zone['transferConnection'] = connection_info - - return self.shared_zone_vinyldns_client.update_zone(shared_zone, status=202) - def tear_down(self): """ The ok_vinyldns_client is a zone admin on _all_ the zones. @@ -591,21 +645,29 @@ class SharedZoneTestContext(object): We shouldn't have to do any checks now, as zone admins have full rights to all zones, including deleting all records (even in the old shared model) """ - self.list_zones.tear_down() - self.list_records_context.tear_down() + try: + self.list_zones.tear_down() + self.list_records_context.tear_down() - if self.list_batch_summaries_context: - self.list_batch_summaries_context.tear_down(self) + if self.list_batch_summaries_context: + self.list_batch_summaries_context.tear_down() - if self.list_groups_context: - self.list_groups_context.tear_down() + if self.list_groups_context: + self.list_groups_context.tear_down() - clear_zones(self.dummy_vinyldns_client) - clear_zones(self.ok_vinyldns_client) - clear_zones(self.history_client) - clear_groups(self.dummy_vinyldns_client, "global-acl-group-id") - clear_groups(self.ok_vinyldns_client, "global-acl-group-id") - clear_groups(self.history_client) + clear_zones(self.dummy_vinyldns_client) + clear_zones(self.ok_vinyldns_client) + clear_zones(self.history_client) + clear_groups(self.dummy_vinyldns_client, "global-acl-group-id") + clear_groups(self.ok_vinyldns_client, "global-acl-group-id") + clear_groups(self.history_client) + + # Close all clients + for client in self.clients: + client.tear_down() + + except Exception as e: + raise @staticmethod def confirm_member_in_group(client, group): @@ -616,3 +678,29 @@ class SharedZoneTestContext(object): time.sleep(.05) retries -= 1 assert_that(success, is_(True)) + + def attempt_retrieve_value(self, 
attribute_name: str) -> Mapping: + """ + Attempts to retrieve the data for the attribute given by `attribute_name` + :param attribute_name: The name of the attribute for which to attempt to retrieve the value + :return: The value of the attribute given by `attribute_name` + """ + if not VinylDNSTestContext.enable_safety_check: + # Just return the real data + return getattr(self, attribute_name) + + # Get the real data, stored on this instance + real_data = getattr(self, attribute_name) + + # If we don't have a cache of the original value, make a copy and cache it + if self._data_cache.get(attribute_name) is None: + self._data_cache[attribute_name] = {"caller": "", "data": copy.deepcopy(real_data)} + else: + print("last caller: " + str(self._data_cache[attribute_name]["caller"])) + assert_that(real_data, has_entries(self._data_cache[attribute_name]["data"])) + + # Set last known caller to print if our assertion fails + self._data_cache[attribute_name]["caller"] = inspect.stack()[2][3] + + # Return the data + return self._data_cache[attribute_name]["data"] diff --git a/modules/api/functional_test/live_tests/test_data.py b/modules/api/functional_test/live_tests/test_data.py index 3069d9f2c..0163f4027 100644 --- a/modules/api/functional_test/live_tests/test_data.py +++ b/modules/api/functional_test/live_tests/test_data.py @@ -1,124 +1,124 @@ class TestData: A = { - 'zoneId': None, - 'name': 'test-create-a-ok', - 'type': 'A', - 'ttl': 100, - 'account': 'foo', - 'records': [ + "zoneId": None, + "name": "test-create-a-ok", + "type": "A", + "ttl": 100, + "account": "foo", + "records": [ { - 'address': '10.1.1.1' + "address": "10.1.1.1" }, { - 'address': '10.2.2.2' + "address": "10.2.2.2" } ] } AAAA = { - 'zoneId': None, - 'name': 'test-create-aaaa-ok', - 'type': 'AAAA', - 'ttl': 100, - 'account': 'foo', - 'records': [ + "zoneId": None, + "name": "test-create-aaaa-ok", + "type": "AAAA", + "ttl": 100, + "account": "foo", + "records": [ { - 'address': '2001:db8:0:0:0:0:0:3' + 
"address": "2001:db8:0:0:0:0:0:3" }, { - 'address': '2002:db8:0:0:0:0:0:3' + "address": "2002:db8:0:0:0:0:0:3" } ] } CNAME = { - 'zoneId': None, - 'name': 'test-create-cname-ok', - 'type': 'CNAME', - 'ttl': 100, - 'account': 'foo', - 'records': [ + "zoneId": None, + "name": "test-create-cname-ok", + "type": "CNAME", + "ttl": 100, + "account": "foo", + "records": [ { - 'cname': 'cname.' + "cname": "cname." } ] } MX = { - 'zoneId': None, - 'name': 'test-create-mx-ok', - 'type': 'MX', - 'ttl': 100, - 'account': 'foo', - 'records': [ + "zoneId": None, + "name": "test-create-mx-ok", + "type": "MX", + "ttl": 100, + "account": "foo", + "records": [ { - 'preference': 100, - 'exchange': 'exchange.' + "preference": 100, + "exchange": "exchange." } ] } PTR = { - 'zoneId': None, - 'name': '10.20', - 'type': 'PTR', - 'ttl': 100, - 'account': 'foo', - 'records': [ + "zoneId": None, + "name": "10.20", + "type": "PTR", + "ttl": 100, + "account": "foo", + "records": [ { - 'ptrdname': 'ptr.' + "ptrdname": "ptr." } ] } SPF = { - 'zoneId': None, - 'name': 'test-create-spf-ok', - 'type': 'SPF', - 'ttl': 100, - 'account': 'foo', - 'records': [ + "zoneId": None, + "name": "test-create-spf-ok", + "type": "SPF", + "ttl": 100, + "account": "foo", + "records": [ { - 'text': 'spf.' + "text": "spf." } ] } SRV = { - 'zoneId': None, - 'name': 'test-create-srv-ok', - 'type': 'SRV', - 'ttl': 100, - 'account': 'foo', - 'records': [ + "zoneId": None, + "name": "test-create-srv-ok", + "type": "SRV", + "ttl": 100, + "account": "foo", + "records": [ { - 'priority': 1, - 'weight': 2, - 'port': 8000, - 'target': 'srv.' + "priority": 1, + "weight": 2, + "port": 8000, + "target": "srv." 
} ] } SSHFP = { - 'zoneId': None, - 'name': 'test-create-sshfp-ok', - 'type': 'SSHFP', - 'ttl': 100, - 'account': 'foo', - 'records': [ + "zoneId": None, + "name": "test-create-sshfp-ok", + "type": "SSHFP", + "ttl": 100, + "account": "foo", + "records": [ { - 'algorithm': 1, - 'type': 2, - 'fingerprint': 'fp' + "algorithm": 1, + "type": 2, + "fingerprint": "fp" } ] } TXT = { - 'zoneId': None, - 'name': 'test-create-txt-ok', - 'type': 'TXT', - 'ttl': 100, - 'account': 'foo', - 'records': [ + "zoneId": None, + "name": "test-create-txt-ok", + "type": "TXT", + "ttl": 100, + "account": "foo", + "records": [ { - 'text': 'some text' + "text": "some text" } ] } - RECORDS = [('A', A), ('AAAA', AAAA), ('CNAME', CNAME), ('MX', MX), ('PTR', PTR), ('SPF', SPF), ('SRV', SRV), ('SSHFP', SSHFP), ('TXT', TXT)] - FORWARD_RECORDS = [('A', A), ('AAAA', AAAA), ('CNAME', CNAME), ('MX', MX), ('SPF', SPF), ('SRV', SRV), ('SSHFP', SSHFP), ('TXT', TXT)] - REVERSE_RECORDS = [('CNAME', CNAME), ('PTR', PTR), ('TXT', TXT)] + RECORDS = [("A", A), ("AAAA", AAAA), ("CNAME", CNAME), ("MX", MX), ("PTR", PTR), ("SPF", SPF), ("SRV", SRV), ("SSHFP", SSHFP), ("TXT", TXT)] + FORWARD_RECORDS = [("A", A), ("AAAA", AAAA), ("CNAME", CNAME), ("MX", MX), ("SPF", SPF), ("SRV", SRV), ("SSHFP", SSHFP), ("TXT", TXT)] + REVERSE_RECORDS = [("CNAME", CNAME), ("PTR", PTR), ("TXT", TXT)] diff --git a/modules/api/functional_test/live_tests/zones/create_zone_test.py b/modules/api/functional_test/live_tests/zones/create_zone_test.py index cf0c888b1..51602bcd5 100644 --- a/modules/api/functional_test/live_tests/zones/create_zone_test.py +++ b/modules/api/functional_test/live_tests/zones/create_zone_test.py @@ -1,85 +1,84 @@ import copy -import pytest -import uuid -from hamcrest import * -from vinyldns_python import VinylDNSClient -from vinyldns_context import VinylDNSTestContext +import pytest + from utils import * records_in_dns = [ - {'name': 'one-time.', - 'type': 'SOA', - 'records': [{u'mname': u'172.17.42.1.', - 
u'rname': u'admin.test.com.', - u'retry': 3600, - u'refresh': 10800, - u'minimum': 38400, - u'expire': 604800, - u'serial': 1439234395}]}, - {'name': u'one-time.', - 'type': u'NS', - 'records': [{u'nsdname': u'172.17.42.1.'}]}, - {'name': u'jenkins', - 'type': u'A', - 'records': [{u'address': u'10.1.1.1'}]}, - {'name': u'foo', - 'type': u'A', - 'records': [{u'address': u'2.2.2.2'}]}, - {'name': u'test', - 'type': u'A', - 'records': [{u'address': u'3.3.3.3'}, {u'address': u'4.4.4.4'}]}, - {'name': u'one-time.', - 'type': u'A', - 'records': [{u'address': u'5.5.5.5'}]}, - {'name': u'already-exists', - 'type': u'A', - 'records': [{u'address': u'6.6.6.6'}]}] - + {"name": "one-time.", + "type": "SOA", + "records": [{"mname": "172.17.42.1.", + "rname": "admin.test.com.", + "retry": 3600, + "refresh": 10800, + "minimum": 38400, + "expire": 604800, + "serial": 1439234395}]}, + {"name": "one-time.", + "type": "NS", + "records": [{"nsdname": "172.17.42.1."}]}, + {"name": "jenkins", + "type": "A", + "records": [{"address": "10.1.1.1"}]}, + {"name": "foo", + "type": "A", + "records": [{"address": "2.2.2.2"}]}, + {"name": "test", + "type": "A", + "records": [{"address": "3.3.3.3"}, {"address": "4.4.4.4"}]}, + {"name": "one-time.", + "type": "A", + "records": [{"address": "5.5.5.5"}]}, + {"name": "already-exists", + "type": "A", + "records": [{"address": "6.6.6.6"}]}] # Defined in docker bind9 conf file TSIG_KEYS = [ - ('vinyldns-sha1.', '0nIhR1zS/nHUg2n0AIIUyJwXUyQ=', 'HMAC-SHA1'), - ('vinyldns-sha224.', 'yud/F666YjcnfqPSulHaYXrNObNnS1Jv+rX61A==', 'HMAC-SHA224'), - ('vinyldns-sha256.', 'wzLsDGgPRxFaC6z/9Bc0n1W4KrnmaUdFCgCn2+7zbPU=', 'HMAC-SHA256'), - ('vinyldns-sha384.', 'ne9jSUJ7PBGveM37aOX+ZmBXQgz1EqkbYBO1s5l/LNpjEno4OfYvGo1Lv1rnw3pE', 'HMAC-SHA384'), - ('vinyldns-sha512.', 'xfKA0DYb88tiUGND+cWddwUg3/SugYSsdvCfBOJ1jr8MEdgbVRyrlVDEXLsfTUGorQ3ShENdymw2yw+rTr+lwA==', 'HMAC-SHA512'), + ("vinyldns-sha1.", "0nIhR1zS/nHUg2n0AIIUyJwXUyQ=", "HMAC-SHA1"), + ("vinyldns-sha224.", 
"yud/F666YjcnfqPSulHaYXrNObNnS1Jv+rX61A==", "HMAC-SHA224"), + ("vinyldns-sha256.", "wzLsDGgPRxFaC6z/9Bc0n1W4KrnmaUdFCgCn2+7zbPU=", "HMAC-SHA256"), + ("vinyldns-sha384.", "ne9jSUJ7PBGveM37aOX+ZmBXQgz1EqkbYBO1s5l/LNpjEno4OfYvGo1Lv1rnw3pE", "HMAC-SHA384"), + ("vinyldns-sha512.", "xfKA0DYb88tiUGND+cWddwUg3/SugYSsdvCfBOJ1jr8MEdgbVRyrlVDEXLsfTUGorQ3ShENdymw2yw+rTr+lwA==", "HMAC-SHA512"), ] + + @pytest.mark.serial -@pytest.mark.parametrize('key_name,key_secret,key_alg', TSIG_KEYS) +@pytest.mark.parametrize("key_name,key_secret,key_alg", TSIG_KEYS) def test_create_zone_with_tsigs(shared_zone_test_context, key_name, key_secret, key_alg): client = shared_zone_test_context.ok_vinyldns_client - zone_name = 'one-time' + zone_name = "one-time" zone = { - 'name': zone_name, - 'email': 'test@test.com', - 'adminGroupId': shared_zone_test_context.ok_group['id'], - 'connection': { - 'name': key_name, - 'keyName': key_name, - 'key': key_secret, - 'primaryServer': VinylDNSTestContext.dns_ip, - 'algorithm': key_alg + "name": zone_name, + "email": "test@test.com", + "adminGroupId": shared_zone_test_context.ok_group["id"], + "connection": { + "name": key_name, + "keyName": key_name, + "key": key_secret, + "primaryServer": VinylDNSTestContext.name_server_ip, + "algorithm": key_alg } } try: zone_change = client.create_zone(zone, status=202) - zone = zone_change['zone'] - client.wait_until_zone_active(zone_change[u'zone'][u'id']) + zone = zone_change["zone"] + client.wait_until_zone_active(zone_change["zone"]["id"]) # Check that it was internally stored correctly using GET - zone_get = client.get_zone(zone['id'])['zone'] - assert_that(zone_get['name'], is_(zone_name+'.')) - assert_that('connection' in zone_get) - assert_that(zone_get['connection']['keyName'], is_(key_name)) - assert_that(zone_get['connection']['algorithm'], is_(key_alg)) + zone_get = client.get_zone(zone["id"])["zone"] + assert_that(zone_get["name"], is_(zone_name + ".")) + assert_that("connection" in zone_get) + 
assert_that(zone_get["connection"]["keyName"], is_(key_name)) + assert_that(zone_get["connection"]["algorithm"], is_(key_alg)) finally: - if 'id' in zone: - client.abandon_zones([zone['id']], status=202) + if "id" in zone: + client.abandon_zones([zone["id"]], status=202) + @pytest.mark.serial def test_create_zone_success(shared_zone_test_context): @@ -89,40 +88,40 @@ def test_create_zone_success(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client result_zone = None try: - zone_name = 'one-time ' + zone_name = "one-time " zone = { - 'name': zone_name, - 'email': 'test@test.com', - 'adminGroupId': shared_zone_test_context.ok_group['id'], - 'backendId': 'func-test-backend' + "name": zone_name, + "email": "test@test.com", + "adminGroupId": shared_zone_test_context.ok_group["id"], + "backendId": "func-test-backend" } result = client.create_zone(zone, status=202) - result_zone = result['zone'] - client.wait_until_zone_active(result_zone['id']) + result_zone = result["zone"] + client.wait_until_zone_active(result_zone["id"]) - get_result = client.get_zone(result_zone['id']) + get_result = client.get_zone(result_zone["id"]) - get_zone = get_result['zone'] - assert_that(get_zone['name'], is_(zone['name'].strip()+'.')) - assert_that(get_zone['email'], is_(zone['email'])) - assert_that(get_zone['adminGroupId'], is_(zone['adminGroupId'])) - assert_that(get_zone['latestSync'], is_not(none())) - assert_that(get_zone['status'], is_('Active')) - assert_that(get_zone['backendId'], is_('func-test-backend')) + get_zone = get_result["zone"] + assert_that(get_zone["name"], is_(zone["name"].strip() + ".")) + assert_that(get_zone["email"], is_(zone["email"])) + assert_that(get_zone["adminGroupId"], is_(zone["adminGroupId"])) + assert_that(get_zone["latestSync"], is_not(none())) + assert_that(get_zone["status"], is_("Active")) + assert_that(get_zone["backendId"], is_("func-test-backend")) # confirm that the recordsets in DNS have been saved in vinyldns - 
recordsets = client.list_recordsets_by_zone(result_zone['id'])['recordSets'] + recordsets = client.list_recordsets_by_zone(result_zone["id"])["recordSets"] assert_that(len(recordsets), is_(7)) for rs in recordsets: - small_rs = dict((k, rs[k]) for k in ['name', 'type', 'records']) - small_rs['records'] = sorted(small_rs['records']) + small_rs = dict((k, rs[k]) for k in ["name", "type", "records"]) + small_rs["records"] = small_rs["records"] assert_that(records_in_dns, has_item(small_rs)) finally: if result_zone: - client.abandon_zones([result_zone['id']], status=202) + client.abandon_zones([result_zone["id"]], status=202) @pytest.mark.skip_production @@ -133,34 +132,34 @@ def test_create_zone_without_transfer_connection_leaves_it_empty(shared_zone_tes client = shared_zone_test_context.ok_vinyldns_client result_zone = None try: - zone_name = 'one-time' + zone_name = "one-time" zone = { - 'name': zone_name, - 'email': 'test@test.com', - 'adminGroupId': shared_zone_test_context.ok_group['id'], - 'connection': { - 'name': 'vinyldns.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "name": zone_name, + "email": "test@test.com", + "adminGroupId": shared_zone_test_context.ok_group["id"], + "connection": { + "name": "vinyldns.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip } } result = client.create_zone(zone, status=202) - result_zone = result['zone'] - client.wait_until_zone_active(result[u'zone'][u'id']) + result_zone = result["zone"] + client.wait_until_zone_active(result["zone"]["id"]) - get_result = client.get_zone(result_zone['id']) + get_result = client.get_zone(result_zone["id"]) - get_zone = get_result['zone'] - assert_that(get_zone['name'], is_(zone['name']+'.')) - assert_that(get_zone['email'], is_(zone['email'])) - assert_that(get_zone['adminGroupId'], is_(zone['adminGroupId'])) + 
get_zone = get_result["zone"] + assert_that(get_zone["name"], is_(zone["name"] + ".")) + assert_that(get_zone["email"], is_(zone["email"])) + assert_that(get_zone["adminGroupId"], is_(zone["adminGroupId"])) - assert_that(get_zone, is_not(has_key('transferConnection'))) + assert_that(get_zone, is_not(has_key("transferConnection"))) finally: if result_zone: - client.abandon_zones([result_zone['id']], status=202) + client.abandon_zones([result_zone["id"]], status=202) def test_create_zone_fails_no_authorization(shared_zone_test_context): @@ -170,8 +169,8 @@ def test_create_zone_fails_no_authorization(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client zone = { - 'name': str(uuid.uuid4()), - 'email': 'test@test.com', + "name": str(uuid.uuid4()), + "email": "test@test.com", } client.create_zone(zone, sign_request=False, status=401) @@ -183,12 +182,12 @@ def test_create_missing_zone_data(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client zone = { - 'random_key': 'some_value', - 'another_key': 'meaningless_data' + "random_key": "some_value", + "another_key": "meaningless_data" } - errors = client.create_zone(zone, status=400)['errors'] - assert_that(errors, contains_inanyorder('Missing Zone.name', 'Missing Zone.email', 'Missing Zone.adminGroupId')) + errors = client.create_zone(zone, status=400)["errors"] + assert_that(errors, contains_inanyorder("Missing Zone.name", "Missing Zone.email", "Missing Zone.adminGroupId")) def test_create_invalid_zone_data(shared_zone_test_context): @@ -197,17 +196,17 @@ def test_create_invalid_zone_data(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client - zone_name = 'test.zone.invalid.' + zone_name = "test.zone.invalid." 
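The validation tests above assert on the `errors` array with hamcrest's `contains_inanyorder`, which passes only when every expected message is present and nothing extra is returned, regardless of ordering. A stdlib-only stand-in for that matcher, for illustration (the real tests use PyHamcrest):

```python
def errors_match(actual, expected):
    """Order-insensitive equivalent of contains_inanyorder for flat string lists."""
    return sorted(actual) == sorted(expected)

# Hypothetical API response for a zone missing its required fields
errors = ["Missing Zone.email", "Missing Zone.name", "Missing Zone.adminGroupId"]
expected = ["Missing Zone.name", "Missing Zone.email", "Missing Zone.adminGroupId"]

assert errors_match(errors, expected)          # same messages, different order
assert not errors_match(errors, expected[:2])  # missing or extra messages fail
```

Order insensitivity matters because the API gives no guarantee about the sequence in which validation failures are reported.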
zone = { - 'name': zone_name, - 'email': 'test@test.com', - 'shared': 'invalid_value', - 'adminGroupId': 'admin-group-id' + "name": zone_name, + "email": "test@test.com", + "shared": "invalid_value", + "adminGroupId": "admin-group-id" } - errors = client.create_zone(zone, status=400)['errors'] - assert_that(errors, contains_inanyorder('Do not know how to convert JString(invalid_value) into boolean')) + errors = client.create_zone(zone, status=400)["errors"] + assert_that(errors, contains_inanyorder("Do not know how to convert JString(invalid_value) into boolean")) @pytest.mark.serial @@ -217,15 +216,15 @@ def test_create_zone_with_connection_failure(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client - zone_name = 'one-time.' + zone_name = "one-time." zone = { - 'name': zone_name, - 'email': 'test@test.com', - 'connection': { - 'name': zone_name, - 'keyName': zone_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "name": zone_name, + "email": "test@test.com", + "connection": { + "name": zone_name, + "keyName": zone_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip } } client.create_zone(zone, status=400) @@ -236,8 +235,8 @@ def test_create_zone_returns_409_if_already_exists(shared_zone_test_context): Test creating a zone returns a 409 Conflict if the zone name already exists """ create_conflict = dict(shared_zone_test_context.ok_zone) - create_conflict['connection']['key'] = VinylDNSTestContext.dns_key # necessary because we encrypt the key - create_conflict['transferConnection']['key'] = VinylDNSTestContext.dns_key + create_conflict["connection"]["key"] = VinylDNSTestContext.dns_key # necessary because we encrypt the key + create_conflict["transferConnection"]["key"] = VinylDNSTestContext.dns_key shared_zone_test_context.ok_vinyldns_client.create_zone(create_conflict, status=409) @@ -249,8 +248,8 @@ def 
test_create_zone_returns_400_for_invalid_data(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client zone = { - 'jim': 'bob', - 'hey': 'you' + "jim": "bob", + "hey": "you" } client.create_zone(zone, status=400) @@ -258,116 +257,113 @@ def test_create_zone_returns_400_for_invalid_data(shared_zone_test_context): @pytest.mark.skip_production @pytest.mark.serial def test_create_zone_no_connection_uses_defaults(shared_zone_test_context): - client = shared_zone_test_context.ok_vinyldns_client - zone_name = 'one-time' + zone_name = "one-time" zone = { - 'name': zone_name, - 'email': 'test@test.com', - 'adminGroupId': shared_zone_test_context.ok_group['id'] + "name": zone_name, + "email": "test@test.com", + "adminGroupId": shared_zone_test_context.ok_group["id"] } try: zone_change = client.create_zone(zone, status=202) - zone = zone_change['zone'] - client.wait_until_zone_active(zone_change[u'zone'][u'id']) + zone = zone_change["zone"] + client.wait_until_zone_active(zone_change["zone"]["id"]) # Check response from create - assert_that(zone['name'], is_(zone_name+'.')) - print "'connection' not in zone = " + 'connection' not in zone + assert_that(zone["name"], is_(zone_name + ".")) + print("`connection` not in zone = " + "connection" not in zone) - assert_that('connection' not in zone) - assert_that('transferConnection' not in zone) + assert_that("connection" not in zone) + assert_that("transferConnection" not in zone) # Check that it was internally stored correctly using GET - zone_get = client.get_zone(zone['id'])['zone'] - assert_that(zone_get['name'], is_(zone_name+'.')) - assert_that('connection' not in zone_get) - assert_that('transferConnection' not in zone_get) + zone_get = client.get_zone(zone["id"])["zone"] + assert_that(zone_get["name"], is_(zone_name + ".")) + assert_that("connection" not in zone_get) + assert_that("transferConnection" not in zone_get) finally: - if 'id' in zone: - client.abandon_zones([zone['id']], status=202) + if 
"id" in zone: + client.abandon_zones([zone["id"]], status=202) @pytest.mark.serial def test_zone_connection_only(shared_zone_test_context): - client = shared_zone_test_context.ok_vinyldns_client - zone_name = 'one-time' + zone_name = "one-time" zone = { - 'name': zone_name, - 'email': 'test@test.com', - 'adminGroupId': shared_zone_test_context.ok_group['id'], - 'connection': { - 'name': 'vinyldns.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "name": zone_name, + "email": "test@test.com", + "adminGroupId": shared_zone_test_context.ok_group["id"], + "connection": { + "name": "vinyldns.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip }, - 'transferConnection': { - 'name': 'vinyldns.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "transferConnection": { + "name": "vinyldns.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip } } expected_connection = { - 'name': 'vinyldns.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "name": "vinyldns.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip } try: zone_change = client.create_zone(zone, status=202) - zone = zone_change['zone'] - client.wait_until_zone_active(zone_change[u'zone'][u'id']) + zone = zone_change["zone"] + client.wait_until_zone_active(zone_change["zone"]["id"]) # Check response from create - assert_that(zone['name'], is_(zone_name+'.')) - assert_that(zone['connection']['name'], is_(expected_connection['name'])) - assert_that(zone['connection']['keyName'], 
is_(expected_connection['keyName'])) - assert_that(zone['connection']['primaryServer'], is_(expected_connection['primaryServer'])) - assert_that(zone['transferConnection']['name'], is_(expected_connection['name'])) - assert_that(zone['transferConnection']['keyName'], is_(expected_connection['keyName'])) - assert_that(zone['transferConnection']['primaryServer'], is_(expected_connection['primaryServer'])) + assert_that(zone["name"], is_(zone_name + ".")) + assert_that(zone["connection"]["name"], is_(expected_connection["name"])) + assert_that(zone["connection"]["keyName"], is_(expected_connection["keyName"])) + assert_that(zone["connection"]["primaryServer"], is_(expected_connection["primaryServer"])) + assert_that(zone["transferConnection"]["name"], is_(expected_connection["name"])) + assert_that(zone["transferConnection"]["keyName"], is_(expected_connection["keyName"])) + assert_that(zone["transferConnection"]["primaryServer"], is_(expected_connection["primaryServer"])) # Check that it was internally stored correctly using GET - zone_get = client.get_zone(zone['id'])['zone'] - assert_that(zone_get['name'], is_(zone_name+'.')) - assert_that(zone['connection']['name'], is_(expected_connection['name'])) - assert_that(zone['connection']['keyName'], is_(expected_connection['keyName'])) - assert_that(zone['connection']['primaryServer'], is_(expected_connection['primaryServer'])) - assert_that(zone['transferConnection']['name'], is_(expected_connection['name'])) - assert_that(zone['transferConnection']['keyName'], is_(expected_connection['keyName'])) - assert_that(zone['transferConnection']['primaryServer'], is_(expected_connection['primaryServer'])) + zone_get = client.get_zone(zone["id"])["zone"] + assert_that(zone_get["name"], is_(zone_name + ".")) + assert_that(zone["connection"]["name"], is_(expected_connection["name"])) + assert_that(zone["connection"]["keyName"], is_(expected_connection["keyName"])) + assert_that(zone["connection"]["primaryServer"], 
is_(expected_connection["primaryServer"])) + assert_that(zone["transferConnection"]["name"], is_(expected_connection["name"])) + assert_that(zone["transferConnection"]["keyName"], is_(expected_connection["keyName"])) + assert_that(zone["transferConnection"]["primaryServer"], is_(expected_connection["primaryServer"])) finally: - if 'id' in zone: - client.abandon_zones([zone['id']], status=202) + if "id" in zone: + client.abandon_zones([zone["id"]], status=202) @pytest.mark.serial def test_zone_bad_connection(shared_zone_test_context): - client = shared_zone_test_context.ok_vinyldns_client - zone_name = 'one-time' + zone_name = "one-time" zone = { - 'name': zone_name, - 'email': 'test@test.com', - 'connection': { - 'name': zone_name, - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': 'somebadkey', - 'primaryServer': VinylDNSTestContext.dns_ip + "name": zone_name, + "email": "test@test.com", + "connection": { + "name": zone_name, + "keyName": VinylDNSTestContext.dns_key_name, + "key": "somebadkey", + "primaryServer": VinylDNSTestContext.name_server_ip } } @@ -376,25 +372,24 @@ def test_zone_bad_connection(shared_zone_test_context): @pytest.mark.serial def test_zone_bad_transfer_connection(shared_zone_test_context): - client = shared_zone_test_context.ok_vinyldns_client - zone_name = 'one-time' + zone_name = "one-time" zone = { - 'name': zone_name, - 'email': 'test@test.com', - 'connection': { - 'name': zone_name, - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "name": zone_name, + "email": "test@test.com", + "connection": { + "name": zone_name, + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip }, - 'transferConnection': { - 'name': zone_name, - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': "bad", - 'primaryServer': VinylDNSTestContext.dns_ip + "transferConnection": { + "name": 
zone_name, + "keyName": VinylDNSTestContext.dns_key_name, + "key": "bad", + "primaryServer": VinylDNSTestContext.name_server_ip } } @@ -403,63 +398,62 @@ def test_zone_bad_transfer_connection(shared_zone_test_context): @pytest.mark.serial def test_zone_transfer_connection(shared_zone_test_context): - client = shared_zone_test_context.ok_vinyldns_client - zone_name = 'one-time' + zone_name = "one-time" zone = { - 'name': zone_name, - 'email': 'test@test.com', - 'adminGroupId': shared_zone_test_context.ok_group['id'], - 'connection': { - 'name': zone_name, - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "name": zone_name, + "email": "test@test.com", + "adminGroupId": shared_zone_test_context.ok_group["id"], + "connection": { + "name": zone_name, + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip }, - 'transferConnection': { - 'name': zone_name, - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "transferConnection": { + "name": zone_name, + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip } } expected_connection = { - 'name': zone_name, - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "name": zone_name, + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip } try: zone_change = client.create_zone(zone, status=202) - zone = zone_change['zone'] - client.wait_until_zone_active(zone_change[u'zone'][u'id']) + zone = zone_change["zone"] + client.wait_until_zone_active(zone_change["zone"]["id"]) # Check response from create - assert_that(zone['name'], is_(zone_name+'.')) - 
assert_that(zone['connection']['name'], is_(expected_connection['name'])) - assert_that(zone['connection']['keyName'], is_(expected_connection['keyName'])) - assert_that(zone['connection']['primaryServer'], is_(expected_connection['primaryServer'])) - assert_that(zone['transferConnection']['name'], is_(expected_connection['name'])) - assert_that(zone['transferConnection']['keyName'], is_(expected_connection['keyName'])) - assert_that(zone['transferConnection']['primaryServer'], is_(expected_connection['primaryServer'])) + assert_that(zone["name"], is_(zone_name + ".")) + assert_that(zone["connection"]["name"], is_(expected_connection["name"])) + assert_that(zone["connection"]["keyName"], is_(expected_connection["keyName"])) + assert_that(zone["connection"]["primaryServer"], is_(expected_connection["primaryServer"])) + assert_that(zone["transferConnection"]["name"], is_(expected_connection["name"])) + assert_that(zone["transferConnection"]["keyName"], is_(expected_connection["keyName"])) + assert_that(zone["transferConnection"]["primaryServer"], is_(expected_connection["primaryServer"])) # Check that it was internally stored correctly using GET - zone_get = client.get_zone(zone['id'])['zone'] - assert_that(zone_get['name'], is_(zone_name+'.')) - assert_that(zone['connection']['name'], is_(expected_connection['name'])) - assert_that(zone['connection']['keyName'], is_(expected_connection['keyName'])) - assert_that(zone['connection']['primaryServer'], is_(expected_connection['primaryServer'])) - assert_that(zone['transferConnection']['name'], is_(expected_connection['name'])) - assert_that(zone['transferConnection']['keyName'], is_(expected_connection['keyName'])) - assert_that(zone['transferConnection']['primaryServer'], is_(expected_connection['primaryServer'])) + zone_get = client.get_zone(zone["id"])["zone"] + assert_that(zone_get["name"], is_(zone_name + ".")) + assert_that(zone["connection"]["name"], is_(expected_connection["name"])) + 
assert_that(zone["connection"]["keyName"], is_(expected_connection["keyName"])) + assert_that(zone["connection"]["primaryServer"], is_(expected_connection["primaryServer"])) + assert_that(zone["transferConnection"]["name"], is_(expected_connection["name"])) + assert_that(zone["transferConnection"]["keyName"], is_(expected_connection["keyName"])) + assert_that(zone["transferConnection"]["primaryServer"], is_(expected_connection["primaryServer"])) finally: - if 'id' in zone: - client.abandon_zones([zone['id']], status=202) + if "id" in zone: + client.abandon_zones([zone["id"]], status=202) @pytest.mark.serial @@ -468,20 +462,20 @@ def test_user_cannot_create_zone_with_nonmember_admin_group(shared_zone_test_con Test user cannot create a zone with an admin group they are not a member of """ zone = { - 'name': 'one-time.', - 'email': 'test@test.com', - 'adminGroupId': shared_zone_test_context.dummy_group['id'], - 'connection': { - 'name': 'vinyldns.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "name": "one-time.", + "email": "test@test.com", + "adminGroupId": shared_zone_test_context.dummy_group["id"], + "connection": { + "name": "vinyldns.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip }, - 'transferConnection': { - 'name': 'vinyldns.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "transferConnection": { + "name": "vinyldns.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip } } @@ -493,46 +487,48 @@ def test_user_cannot_create_zone_with_failed_validations(shared_zone_test_contex Test that a user cannot create a zone that has invalid zone data """ zone = { - 'name': 'invalid-zone.', - 'email': 'test@test.com', - 
'adminGroupId': shared_zone_test_context.ok_group['id'], - 'connection': { - 'name': 'vinyldns.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "name": "invalid-zone.", + "email": "test@test.com", + "adminGroupId": shared_zone_test_context.ok_group["id"], + "connection": { + "name": "vinyldns.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip }, - 'transferConnection': { - 'name': 'vinyldns.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "transferConnection": { + "name": "vinyldns.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip } } result = shared_zone_test_context.ok_vinyldns_client.create_zone(zone, status=400) - assert_that(result['errors'], contains_inanyorder( + assert_that(result["errors"], contains_inanyorder( contains_string("not-approved.thing.com. 
is not an approved name server") )) + def test_normal_user_cannot_create_shared_zone(shared_zone_test_context): """ Test that a normal user cannot create a shared zone """ super_zone = copy.deepcopy(shared_zone_test_context.ok_zone) - super_zone['shared'] = True + super_zone["shared"] = True shared_zone_test_context.ok_vinyldns_client.create_zone(super_zone, status=403) + def test_create_zone_bad_backend_id(shared_zone_test_context): """ Test that a user cannot create a zone with a backendId that is not in config """ zone = { - 'name': "test-create-zone-bad-backend-id", - 'email': 'test@test.com', - 'adminGroupId': shared_zone_test_context.ok_group['id'], - 'backendId': "does-not-exist-id" + "name": "test-create-zone-bad-backend-id", + "email": "test@test.com", + "adminGroupId": shared_zone_test_context.ok_group["id"], + "backendId": "does-not-exist-id" } result = shared_zone_test_context.ok_vinyldns_client.create_zone(zone, status=400) assert_that(result, contains_string("Invalid backendId")) diff --git a/modules/api/functional_test/live_tests/zones/delete_zone_test.py b/modules/api/functional_test/live_tests/zones/delete_zone_test.py index 5957b1db0..a888c20a7 100644 --- a/modules/api/functional_test/live_tests/zones/delete_zone_test.py +++ b/modules/api/functional_test/live_tests/zones/delete_zone_test.py @@ -15,38 +15,38 @@ def test_delete_zone_success(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client result_zone = None try: - zone_name = 'one-time' + zone_name = "one-time" zone = { - 'name': zone_name, - 'email': 'test@test.com', - 'adminGroupId': shared_zone_test_context.ok_group['id'], - 'connection': { - 'name': 'vinyldns.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "name": zone_name, + "email": "test@test.com", + "adminGroupId": shared_zone_test_context.ok_group["id"], + "connection": { + "name": "vinyldns.", + "keyName": 
VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip }, - 'transferConnection': { - 'name': 'vinyldns.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "transferConnection": { + "name": "vinyldns.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip } } result = client.create_zone(zone, status=202) - result_zone = result['zone'] - client.wait_until_zone_active(result_zone['id']) + result_zone = result["zone"] + client.wait_until_zone_active(result_zone["id"]) - client.delete_zone(result_zone['id'], status=202) - client.wait_until_zone_deleted(result_zone['id']) + client.delete_zone(result_zone["id"], status=202) + client.wait_until_zone_deleted(result_zone["id"]) - client.get_zone(result_zone['id'], status=404) + client.get_zone(result_zone["id"], status=404) result_zone = None finally: if result_zone: - client.abandon_zones([result_zone['id']], status=202) + client.abandon_zones([result_zone["id"]], status=202) @pytest.mark.serial @@ -57,38 +57,38 @@ def test_delete_zone_twice(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client result_zone = None try: - zone_name = 'one-time' + zone_name = "one-time" zone = { - 'name': zone_name, - 'email': 'test@test.com', - 'adminGroupId': shared_zone_test_context.ok_group['id'], - 'connection': { - 'name': 'vinyldns.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "name": zone_name, + "email": "test@test.com", + "adminGroupId": shared_zone_test_context.ok_group["id"], + "connection": { + "name": "vinyldns.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip }, - 'transferConnection': { 
- 'name': 'vinyldns.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "transferConnection": { + "name": "vinyldns.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip } } result = client.create_zone(zone, status=202) - result_zone = result['zone'] - client.wait_until_zone_active(result_zone['id']) + result_zone = result["zone"] + client.wait_until_zone_active(result_zone["id"]) - client.delete_zone(result_zone['id'], status=202) - client.wait_until_zone_deleted(result_zone['id']) + client.delete_zone(result_zone["id"], status=202) + client.wait_until_zone_deleted(result_zone["id"]) - client.delete_zone(result_zone['id'], status=404) + client.delete_zone(result_zone["id"], status=404) result_zone = None finally: if result_zone: - client.abandon_zones([result_zone['id']], status=202) + client.abandon_zones([result_zone["id"]], status=202) def test_delete_zone_returns_404_if_zone_not_found(shared_zone_test_context): @@ -96,7 +96,7 @@ def test_delete_zone_returns_404_if_zone_not_found(shared_zone_test_context): Test deleting a zone returns a 404 if the zone was not found """ client = shared_zone_test_context.ok_vinyldns_client - client.delete_zone('nothere', status=404) + client.delete_zone("nothere", status=404) def test_delete_zone_no_authorization(shared_zone_test_context): @@ -105,4 +105,4 @@ def test_delete_zone_no_authorization(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client - client.delete_zone('1234', sign_request=False, status=401) + client.delete_zone("1234", sign_request=False, status=401) diff --git a/modules/api/functional_test/live_tests/zones/get_zone_test.py b/modules/api/functional_test/live_tests/zones/get_zone_test.py index 9cac5e9fa..7c2f2a246 100644 --- a/modules/api/functional_test/live_tests/zones/get_zone_test.py +++ 
b/modules/api/functional_test/live_tests/zones/get_zone_test.py @@ -1,6 +1,5 @@ -import uuid +import pytest -from hamcrest import * from utils import * @@ -10,12 +9,12 @@ def test_get_zone_by_id(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client - result = client.get_zone(shared_zone_test_context.system_test_zone['id'], status=200) - retrieved = result['zone'] + result = client.get_zone(shared_zone_test_context.system_test_zone["id"], status=200) + retrieved = result["zone"] - assert_that(retrieved['id'], is_(shared_zone_test_context.system_test_zone['id'])) - assert_that(retrieved['adminGroupName'], is_(shared_zone_test_context.ok_group['name'])) - assert_that(retrieved['accessLevel'], is_('Delete')) + assert_that(retrieved["id"], is_(shared_zone_test_context.system_test_zone["id"])) + assert_that(retrieved["adminGroupName"], is_(shared_zone_test_context.ok_group["name"])) + assert_that(retrieved["accessLevel"], is_("Delete")) def test_get_zone_shared_by_id_as_owner(shared_zone_test_context): @@ -24,13 +23,13 @@ def test_get_zone_shared_by_id_as_owner(shared_zone_test_context): """ client = shared_zone_test_context.shared_zone_vinyldns_client - result = client.get_zone(shared_zone_test_context.shared_zone['id'], status=200) - retrieved = result['zone'] + result = client.get_zone(shared_zone_test_context.shared_zone["id"], status=200) + retrieved = result["zone"] - assert_that(retrieved['id'], is_(shared_zone_test_context.shared_zone['id'])) - assert_that(retrieved['adminGroupName'], is_('testSharedZoneGroup')) - assert_that(retrieved['shared'], is_(True)) - assert_that(retrieved['accessLevel'], is_('Delete')) + assert_that(retrieved["id"], is_(shared_zone_test_context.shared_zone["id"])) + assert_that(retrieved["adminGroupName"], is_("testSharedZoneGroup")) + assert_that(retrieved["shared"], is_(True)) + assert_that(retrieved["accessLevel"], is_("Delete")) def test_get_zone_shared_by_id_non_owner(shared_zone_test_context): @@ -39,7 
+38,7 @@ def test_get_zone_shared_by_id_non_owner(shared_zone_test_context): """ client = shared_zone_test_context.dummy_vinyldns_client - client.get_zone(shared_zone_test_context.shared_zone['id'], status=403) + client.get_zone(shared_zone_test_context.shared_zone["id"], status=403) def test_get_zone_private_by_id_fails_without_access(shared_zone_test_context): @@ -48,7 +47,7 @@ def test_get_zone_private_by_id_fails_without_access(shared_zone_test_context): """ client = shared_zone_test_context.dummy_vinyldns_client - client.get_zone(shared_zone_test_context.ok_zone['id'], status=403) + client.get_zone(shared_zone_test_context.ok_zone["id"], status=403) def test_get_zone_by_id_returns_404_when_not_found(shared_zone_test_context): @@ -65,7 +64,7 @@ def test_get_zone_by_id_no_authorization(shared_zone_test_context): Test get an existing zone by id without authorization """ client = shared_zone_test_context.ok_vinyldns_client - client.get_zone('123456', sign_request=False, status=401) + client.get_zone("123456", sign_request=False, status=401) @pytest.mark.serial @@ -76,24 +75,24 @@ def test_get_zone_by_id_includes_acl_display_name(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client - user_acl_rule = generate_acl_rule('Write', userId='ok', recordTypes=[]) - group_acl_rule = generate_acl_rule('Write', groupId=shared_zone_test_context.ok_group['id'], recordTypes=[]) - bad_acl_rule = generate_acl_rule('Write', userId='badId', recordTypes=[]) + user_acl_rule = generate_acl_rule("Write", userId="ok", recordTypes=[]) + group_acl_rule = generate_acl_rule("Write", groupId=shared_zone_test_context.ok_group["id"], recordTypes=[]) + bad_acl_rule = generate_acl_rule("Write", userId="badId", recordTypes=[]) - client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], user_acl_rule, status=202) - client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], group_acl_rule, status=202) - 
client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], bad_acl_rule, status=202) + client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone["id"], user_acl_rule, status=202) + client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone["id"], group_acl_rule, status=202) + client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone["id"], bad_acl_rule, status=202) - result = client.get_zone(shared_zone_test_context.system_test_zone['id'], status=200) - retrieved = result['zone'] + result = client.get_zone(shared_zone_test_context.system_test_zone["id"], status=200) + retrieved = result["zone"] - assert_that(retrieved['id'], is_(shared_zone_test_context.system_test_zone['id'])) - assert_that(retrieved['adminGroupName'], is_(shared_zone_test_context.ok_group['name'])) + assert_that(retrieved["id"], is_(shared_zone_test_context.system_test_zone["id"])) + assert_that(retrieved["adminGroupName"], is_(shared_zone_test_context.ok_group["name"])) - acl = retrieved['acl']['rules'] + acl = retrieved["acl"]["rules"] - user_acl_rule['displayName'] = 'ok' - group_acl_rule['displayName'] = shared_zone_test_context.ok_group['name'] + user_acl_rule["displayName"] = "ok" + group_acl_rule["displayName"] = shared_zone_test_context.ok_group["name"] assert_that(acl, has_item(user_acl_rule)) assert_that(acl, has_item(group_acl_rule)) @@ -106,12 +105,12 @@ def test_get_zone_by_name(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client - result = client.get_zone_by_name(shared_zone_test_context.system_test_zone['name'], status=200)['zone'] + result = client.get_zone_by_name(shared_zone_test_context.system_test_zone["name"], status=200)["zone"] - assert_that(result['id'], is_(shared_zone_test_context.system_test_zone['id'])) - assert_that(result['name'], is_(shared_zone_test_context.system_test_zone['name'])) - assert_that(result['adminGroupName'], 
is_(shared_zone_test_context.ok_group['name'])) - assert_that(result['accessLevel'], is_("Delete")) + assert_that(result["id"], is_(shared_zone_test_context.system_test_zone["id"])) + assert_that(result["name"], is_(shared_zone_test_context.system_test_zone["name"])) + assert_that(result["adminGroupName"], is_(shared_zone_test_context.ok_group["name"])) + assert_that(result["accessLevel"], is_("Delete")) def test_get_zone_by_name_without_trailing_dot_succeeds(shared_zone_test_context): @@ -120,12 +119,12 @@ def test_get_zone_by_name_without_trailing_dot_succeeds(shared_zone_test_context """ client = shared_zone_test_context.ok_vinyldns_client - result = client.get_zone_by_name("system-test", status=200)['zone'] + result = client.get_zone_by_name("system-test", status=200)["zone"] - assert_that(result['id'], is_(shared_zone_test_context.system_test_zone['id'])) - assert_that(result['name'], is_(shared_zone_test_context.system_test_zone['name'])) - assert_that(result['adminGroupName'], is_(shared_zone_test_context.ok_group['name'])) - assert_that(result['accessLevel'], is_("Delete")) + assert_that(result["id"], is_(shared_zone_test_context.system_test_zone["id"])) + assert_that(result["name"], is_(shared_zone_test_context.system_test_zone["name"])) + assert_that(result["adminGroupName"], is_(shared_zone_test_context.ok_group["name"])) + assert_that(result["accessLevel"], is_("Delete")) def test_get_zone_by_name_shared_zone_succeeds(shared_zone_test_context): @@ -134,11 +133,11 @@ def test_get_zone_by_name_shared_zone_succeeds(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client - result = client.get_zone_by_name(shared_zone_test_context.shared_zone['name'], status=200)['zone'] - assert_that(result['id'], is_(shared_zone_test_context.shared_zone['id'])) - assert_that(result['name'], is_(shared_zone_test_context.shared_zone['name'])) - assert_that(result['adminGroupName'], is_("testSharedZoneGroup")) - assert_that(result['accessLevel'], 
is_("NoAccess")) + result = client.get_zone_by_name(shared_zone_test_context.shared_zone["name"], status=200)["zone"] + assert_that(result["id"], is_(shared_zone_test_context.shared_zone["id"])) + assert_that(result["name"], is_(shared_zone_test_context.shared_zone["name"])) + assert_that(result["adminGroupName"], is_("testSharedZoneGroup")) + assert_that(result["accessLevel"], is_("NoAccess")) def test_get_zone_by_name_succeeds_without_access(shared_zone_test_context): @@ -147,11 +146,12 @@ def test_get_zone_by_name_succeeds_without_access(shared_zone_test_context): """ client = shared_zone_test_context.dummy_vinyldns_client - result = client.get_zone_by_name("system-test", status=200)['zone'] - assert_that(result['id'], is_(shared_zone_test_context.system_test_zone['id'])) - assert_that(result['name'], is_(shared_zone_test_context.system_test_zone['name'])) - assert_that(result['adminGroupName'], is_(shared_zone_test_context.ok_group['name'])) - assert_that(result['accessLevel'], is_("NoAccess")) + result = client.get_zone_by_name("system-test", status=200)["zone"] + assert_that(result["id"], is_(shared_zone_test_context.system_test_zone["id"])) + assert_that(result["name"], is_(shared_zone_test_context.system_test_zone["name"])) + assert_that(result["adminGroupName"], is_(shared_zone_test_context.ok_group["name"])) + assert_that(result["accessLevel"], is_("NoAccess")) + def test_get_zone_by_name_returns_404_when_not_found(shared_zone_test_context): """ @@ -168,4 +168,4 @@ def test_get_zone_backend_ids(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client response = client.get_backend_ids(status=200) - assert_that(response, has_item(u'func-test-backend')) + assert_that(response, has_item("func-test-backend")) diff --git a/modules/api/functional_test/live_tests/zones/list_zone_changes_test.py b/modules/api/functional_test/live_tests/zones/list_zone_changes_test.py index 8ece8ec8d..a55ba6282 100644 --- 
a/modules/api/functional_test/live_tests/zones/list_zone_changes_test.py +++ b/modules/api/functional_test/live_tests/zones/list_zone_changes_test.py @@ -1,3 +1,4 @@ +import pytest from hamcrest import * from utils import * @@ -6,19 +7,19 @@ def check_zone_changes_page_accuracy(results, expected_first_change, expected_nu assert_that(len(results), is_(expected_num_results)) change_num = expected_first_change for change in results: - change_email = 'i.changed.this.{0}.times@history-test.com'.format(change_num) - assert_that(change['zone']['email'], is_(change_email)) + change_email = "i.changed.this.{0}.times@history-test.com".format(change_num) + assert_that(change["zone"]["email"], is_(change_email)) # should return changes in reverse order (most recent 1st) change_num -= 1 def check_zone_changes_responses(response, zoneId=True, zoneChanges=True, nextId=True, startFrom=True, maxItems=True): - assert_that(response, has_key('zoneId')) if zoneId else assert_that(response, is_not(has_key('zoneId'))) - assert_that(response, has_key('zoneChanges')) if zoneChanges else assert_that(response, - is_not(has_key('zoneChanges'))) - assert_that(response, has_key('nextId')) if nextId else assert_that(response, is_not(has_key('nextId'))) - assert_that(response, has_key('startFrom')) if startFrom else assert_that(response, is_not(has_key('startFrom'))) - assert_that(response, has_key('maxItems')) if maxItems else assert_that(response, is_not(has_key('maxItems'))) + assert_that(response, has_key("zoneId")) if zoneId else assert_that(response, is_not(has_key("zoneId"))) + assert_that(response, has_key("zoneChanges")) if zoneChanges else assert_that(response, + is_not(has_key("zoneChanges"))) + assert_that(response, has_key("nextId")) if nextId else assert_that(response, is_not(has_key("nextId"))) + assert_that(response, has_key("startFrom")) if startFrom else assert_that(response, is_not(has_key("startFrom"))) + assert_that(response, has_key("maxItems")) if maxItems else 
assert_that(response, is_not(has_key("maxItems"))) def test_list_zone_changes_no_authorization(shared_zone_test_context): @@ -27,7 +28,7 @@ def test_list_zone_changes_no_authorization(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client - client.list_zone_changes('12345', sign_request=False, status=401) + client.list_zone_changes("12345", sign_request=False, status=401) def test_list_zone_changes_member_auth_success(shared_zone_test_context): @@ -36,7 +37,7 @@ def test_list_zone_changes_member_auth_success(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ok_zone - client.list_zone_changes(zone['id'], status=200) + client.list_zone_changes(zone["id"], status=200) @pytest.mark.serial @@ -46,7 +47,7 @@ def test_list_zone_changes_member_auth_no_access(shared_zone_test_context): """ client = shared_zone_test_context.dummy_vinyldns_client zone = shared_zone_test_context.ok_zone - client.list_zone_changes(zone['id'], status=403) + client.list_zone_changes(zone["id"], status=403) @pytest.mark.serial @@ -55,13 +56,13 @@ def test_list_zone_changes_member_auth_with_acl(shared_zone_test_context): Test list zone changes succeeds for user with acl rules """ zone = shared_zone_test_context.ok_zone - acl_rule = generate_acl_rule('Write', userId='dummy') + acl_rule = generate_acl_rule("Write", userId="dummy") try: client = shared_zone_test_context.dummy_vinyldns_client - client.list_zone_changes(zone['id'], status=403) + client.list_zone_changes(zone["id"], status=403) add_ok_acl_rules(shared_zone_test_context, [acl_rule]) - client.list_zone_changes(zone['id'], status=200) + client.list_zone_changes(zone["id"], status=200) finally: clear_ok_acl_rules(shared_zone_test_context) @@ -72,9 +73,9 @@ def test_list_zone_changes_no_start(shared_zone_test_context): """ client = shared_zone_test_context.history_client original_zone = shared_zone_test_context.history_zone - response = 
client.list_zone_changes(original_zone['id'], start_from=None) + response = client.list_zone_changes(original_zone["id"], start_from=None) - check_zone_changes_page_accuracy(response['zoneChanges'], expected_first_change=10, expected_num_results=10) + check_zone_changes_page_accuracy(response["zoneChanges"], expected_first_change=10, expected_num_results=10) check_zone_changes_responses(response, startFrom=False, nextId=False) @@ -85,13 +86,13 @@ def test_list_zone_changes_paging(shared_zone_test_context): client = shared_zone_test_context.history_client original_zone = shared_zone_test_context.history_zone - response_1 = client.list_zone_changes(original_zone['id'], start_from=None, max_items=3) - response_2 = client.list_zone_changes(original_zone['id'], start_from=response_1['nextId'], max_items=3) - response_3 = client.list_zone_changes(original_zone['id'], start_from=response_2['nextId'], max_items=3) + response_1 = client.list_zone_changes(original_zone["id"], start_from=None, max_items=3) + response_2 = client.list_zone_changes(original_zone["id"], start_from=response_1["nextId"], max_items=3) + response_3 = client.list_zone_changes(original_zone["id"], start_from=response_2["nextId"], max_items=3) - check_zone_changes_page_accuracy(response_1['zoneChanges'], expected_first_change=10, expected_num_results=3) - check_zone_changes_page_accuracy(response_2['zoneChanges'], expected_first_change=7, expected_num_results=3) - check_zone_changes_page_accuracy(response_3['zoneChanges'], expected_first_change=4, expected_num_results=3) + check_zone_changes_page_accuracy(response_1["zoneChanges"], expected_first_change=10, expected_num_results=3) + check_zone_changes_page_accuracy(response_2["zoneChanges"], expected_first_change=7, expected_num_results=3) + check_zone_changes_page_accuracy(response_3["zoneChanges"], expected_first_change=4, expected_num_results=3) check_zone_changes_responses(response_1, startFrom=False) check_zone_changes_responses(response_2) @@ 
-105,8 +106,8 @@ def test_list_zone_changes_exhausted(shared_zone_test_context): client = shared_zone_test_context.history_client original_zone = shared_zone_test_context.history_zone - response = client.list_zone_changes(original_zone['id'], start_from=None, max_items=11) - check_zone_changes_page_accuracy(response['zoneChanges'], expected_first_change=10, expected_num_results=10) + response = client.list_zone_changes(original_zone["id"], start_from=None, max_items=11) + check_zone_changes_page_accuracy(response["zoneChanges"], expected_first_change=10, expected_num_results=10) check_zone_changes_responses(response, startFrom=False, nextId=False) @@ -117,8 +118,8 @@ def test_list_zone_changes_default_max_items(shared_zone_test_context): client = shared_zone_test_context.history_client original_zone = shared_zone_test_context.history_zone - response = client.list_zone_changes(original_zone['id'], start_from=None, max_items=None) - assert_that(response['maxItems'], is_(100)) + response = client.list_zone_changes(original_zone["id"], start_from=None, max_items=None) + assert_that(response["maxItems"], is_(100)) check_zone_changes_responses(response, startFrom=None, nextId=None) @@ -129,8 +130,8 @@ def test_list_zone_changes_max_items_boundaries(shared_zone_test_context): client = shared_zone_test_context.history_client original_zone = shared_zone_test_context.history_zone - too_large = client.list_zone_changes(original_zone['id'], start_from=None, max_items=101, status=400) - too_small = client.list_zone_changes(original_zone['id'], start_from=None, max_items=0, status=400) + too_large = client.list_zone_changes(original_zone["id"], start_from=None, max_items=101, status=400) + too_small = client.list_zone_changes(original_zone["id"], start_from=None, max_items=0, status=400) assert_that(too_large, is_("maxItems was 101, maxItems must be between 0 exclusive and 100 inclusive")) assert_that(too_small, is_("maxItems was 0, maxItems must be between 0 exclusive and 100 
inclusive")) diff --git a/modules/api/functional_test/live_tests/zones/list_zones_test.py b/modules/api/functional_test/live_tests/zones/list_zones_test.py index a758ff301..048bda293 100644 --- a/modules/api/functional_test/live_tests/zones/list_zones_test.py +++ b/modules/api/functional_test/live_tests/zones/list_zones_test.py @@ -7,12 +7,12 @@ def test_list_zones_success(shared_zone_test_context): Test that we can retrieve a list of the user's zones """ result = shared_zone_test_context.list_zones_client.list_zones(status=200) - retrieved = result['zones'] + retrieved = result["zones"] assert_that(retrieved, has_length(5)) - assert_that(retrieved, has_item(has_entry('name', 'list-zones-test-searched-1.'))) - assert_that(retrieved, has_item(has_entry('adminGroupName', 'list-zones-group'))) - assert_that(retrieved, has_item(has_entry('backendId', 'func-test-backend'))) + assert_that(retrieved, has_item(has_entry("name", "list-zones-test-searched-1."))) + assert_that(retrieved, has_item(has_entry("adminGroupName", "list-zones-group"))) + assert_that(retrieved, has_item(has_entry("backendId", "func-test-backend"))) def test_list_zones_max_items_100(shared_zone_test_context): @@ -20,14 +20,14 @@ def test_list_zones_max_items_100(shared_zone_test_context): Test that the default max items for a list zones request is 100 """ result = shared_zone_test_context.list_zones_client.list_zones(status=200) - assert_that(result['maxItems'], is_(100)) + assert_that(result["maxItems"], is_(100)) def test_list_zones_ignore_access_default_false(shared_zone_test_context): """ Test that the default ignore access value for a list zones request is false """ result = shared_zone_test_context.list_zones_client.list_zones(status=200) - assert_that(result['ignoreAccess'], is_(False)) + assert_that(result["ignoreAccess"], is_(False)) def test_list_zones_invalid_max_items_fails(shared_zone_test_context): """ @@ -49,17 +49,17 @@ def 
test_list_zones_no_search_first_page(shared_zone_test_context): Test that the first page of listing zones returns correctly when no name filter is provided """ result = shared_zone_test_context.list_zones_client.list_zones(max_items=3) - zones = result['zones'] + zones = result["zones"] assert_that(zones, has_length(3)) - assert_that(zones[0]['name'], is_('list-zones-test-searched-1.')) - assert_that(zones[1]['name'], is_('list-zones-test-searched-2.')) - assert_that(zones[2]['name'], is_('list-zones-test-searched-3.')) + assert_that(zones[0]["name"], is_("list-zones-test-searched-1.")) + assert_that(zones[1]["name"], is_("list-zones-test-searched-2.")) + assert_that(zones[2]["name"], is_("list-zones-test-searched-3.")) - assert_that(result['nextId'], is_('list-zones-test-searched-3.')) - assert_that(result['maxItems'], is_(3)) - assert_that(result, is_not(has_key('startFrom'))) - assert_that(result, is_not(has_key('nameFilter'))) + assert_that(result["nextId"], is_("list-zones-test-searched-3.")) + assert_that(result["maxItems"], is_(3)) + assert_that(result, is_not(has_key("startFrom"))) + assert_that(result, is_not(has_key("nameFilter"))) def test_list_zones_no_search_second_page(shared_zone_test_context): @@ -67,16 +67,16 @@ def test_list_zones_no_search_second_page(shared_zone_test_context): Test that the second page of listing zones returns correctly when no name filter is provided """ result = shared_zone_test_context.list_zones_client.list_zones(start_from="list-zones-test-searched-2.", max_items=2, status=200) - zones = result['zones'] + zones = result["zones"] assert_that(zones, has_length(2)) - assert_that(zones[0]['name'], is_('list-zones-test-searched-3.')) - assert_that(zones[1]['name'], is_('list-zones-test-unfiltered-1.')) + assert_that(zones[0]["name"], is_("list-zones-test-searched-3.")) + assert_that(zones[1]["name"], is_("list-zones-test-unfiltered-1.")) - assert_that(result['nextId'], is_("list-zones-test-unfiltered-1.")) - 
assert_that(result['maxItems'], is_(2)) - assert_that(result['startFrom'], is_("list-zones-test-searched-2.")) - assert_that(result, is_not(has_key('nameFilter'))) + assert_that(result["nextId"], is_("list-zones-test-unfiltered-1.")) + assert_that(result["maxItems"], is_(2)) + assert_that(result["startFrom"], is_("list-zones-test-searched-2.")) + assert_that(result, is_not(has_key("nameFilter"))) def test_list_zones_no_search_last_page(shared_zone_test_context): @@ -84,73 +84,73 @@ def test_list_zones_no_search_last_page(shared_zone_test_context): Test that the last page of listing zones returns correctly when no name filter is provided """ result = shared_zone_test_context.list_zones_client.list_zones(start_from="list-zones-test-searched-3.", max_items=4, status=200) - zones = result['zones'] + zones = result["zones"] assert_that(zones, has_length(2)) - assert_that(zones[0]['name'], is_('list-zones-test-unfiltered-1.')) - assert_that(zones[1]['name'], is_('list-zones-test-unfiltered-2.')) + assert_that(zones[0]["name"], is_("list-zones-test-unfiltered-1.")) + assert_that(zones[1]["name"], is_("list-zones-test-unfiltered-2.")) - assert_that(result, is_not(has_key('nextId'))) - assert_that(result['maxItems'], is_(4)) - assert_that(result['startFrom'], is_('list-zones-test-searched-3.')) - assert_that(result, is_not(has_key('nameFilter'))) + assert_that(result, is_not(has_key("nextId"))) + assert_that(result["maxItems"], is_(4)) + assert_that(result["startFrom"], is_("list-zones-test-searched-3.")) + assert_that(result, is_not(has_key("nameFilter"))) def test_list_zones_with_search_first_page(shared_zone_test_context): """ Test that the first page of listing zones returns correctly when a name filter is provided """ - result = shared_zone_test_context.list_zones_client.list_zones(name_filter='*searched*', max_items=2, status=200) - zones = result['zones'] + result = shared_zone_test_context.list_zones_client.list_zones(name_filter="*searched*", max_items=2, 
status=200) + zones = result["zones"] assert_that(zones, has_length(2)) - assert_that(zones[0]['name'], is_('list-zones-test-searched-1.')) - assert_that(zones[1]['name'], is_('list-zones-test-searched-2.')) + assert_that(zones[0]["name"], is_("list-zones-test-searched-1.")) + assert_that(zones[1]["name"], is_("list-zones-test-searched-2.")) - assert_that(result['nextId'], is_('list-zones-test-searched-2.')) - assert_that(result['maxItems'], is_(2)) - assert_that(result['nameFilter'], is_('*searched*')) - assert_that(result, is_not(has_key('startFrom'))) + assert_that(result["nextId"], is_("list-zones-test-searched-2.")) + assert_that(result["maxItems"], is_(2)) + assert_that(result["nameFilter"], is_("*searched*")) + assert_that(result, is_not(has_key("startFrom"))) def test_list_zones_with_no_results(shared_zone_test_context): """ Test that the response is formed correctly when no results are found """ - result = shared_zone_test_context.list_zones_client.list_zones(name_filter='this-wont-be-found', max_items=2, status=200) - zones = result['zones'] + result = shared_zone_test_context.list_zones_client.list_zones(name_filter="this-wont-be-found", max_items=2, status=200) + zones = result["zones"] assert_that(zones, has_length(0)) - assert_that(result['maxItems'], is_(2)) - assert_that(result['nameFilter'], is_('this-wont-be-found')) - assert_that(result, is_not(has_key('startFrom'))) - assert_that(result, is_not(has_key('nextId'))) + assert_that(result["maxItems"], is_(2)) + assert_that(result["nameFilter"], is_("this-wont-be-found")) + assert_that(result, is_not(has_key("startFrom"))) + assert_that(result, is_not(has_key("nextId"))) def test_list_zones_with_search_last_page(shared_zone_test_context): """ Test that the second page of listing zones returns correctly when a name filter is provided """ - result = shared_zone_test_context.list_zones_client.list_zones(name_filter='*test-searched-3', start_from="list-zones-test-searched-2.", max_items=2, status=200) - 
zones = result['zones'] + result = shared_zone_test_context.list_zones_client.list_zones(name_filter="*test-searched-3", start_from="list-zones-test-searched-2.", max_items=2, status=200) + zones = result["zones"] assert_that(zones, has_length(1)) - assert_that(zones[0]['name'], is_('list-zones-test-searched-3.')) + assert_that(zones[0]["name"], is_("list-zones-test-searched-3.")) - assert_that(result, is_not(has_key('nextId'))) - assert_that(result['maxItems'], is_(2)) - assert_that(result['nameFilter'], is_('*test-searched-3')) - assert_that(result['startFrom'], is_('list-zones-test-searched-2.')) + assert_that(result, is_not(has_key("nextId"))) + assert_that(result["maxItems"], is_(2)) + assert_that(result["nameFilter"], is_("*test-searched-3")) + assert_that(result["startFrom"], is_("list-zones-test-searched-2.")) def test_list_zones_ignore_access_success(shared_zone_test_context): """ Test that we can retrieve a list of zones regardless of zone access """ result = shared_zone_test_context.list_zones_client.list_zones(ignore_access=True, status=200) - retrieved = result['zones'] + retrieved = result["zones"] - assert_that(result['ignoreAccess'], is_(True)) + assert_that(result["ignoreAccess"], is_(True)) assert_that(len(retrieved), greater_than(5)) @@ -158,9 +158,9 @@ def test_list_zones_ignore_access_success_with_name_filter(shared_zone_test_cont """ Test that we can retrieve a list of all zones with a name filter """ - result = shared_zone_test_context.list_zones_client.list_zones(name_filter='shared', ignore_access=True, status=200) - retrieved = result['zones'] + result = shared_zone_test_context.list_zones_client.list_zones(name_filter="shared", ignore_access=True, status=200) + retrieved = result["zones"] - assert_that(result['ignoreAccess'], is_(True)) - assert_that(retrieved, has_item(has_entry('name', 'shared.'))) - assert_that(retrieved, has_item(has_entry('accessLevel', 'NoAccess'))) + assert_that(result["ignoreAccess"], is_(True)) + 
assert_that(retrieved, has_item(has_entry("name", "shared."))) + assert_that(retrieved, has_item(has_entry("accessLevel", "NoAccess"))) diff --git a/modules/api/functional_test/live_tests/zones/sync_zone_test.py b/modules/api/functional_test/live_tests/zones/sync_zone_test.py index 2e64b610b..4cda1fadb 100644 --- a/modules/api/functional_test/live_tests/zones/sync_zone_test.py +++ b/modules/api/functional_test/live_tests/zones/sync_zone_test.py @@ -1,93 +1,94 @@ -from hamcrest import * -from vinyldns_python import VinylDNSClient -from vinyldns_context import VinylDNSTestContext -from utils import * -import time +import pytest +import pytz +from utils import * + +# This is set in the API service's configuration +API_SYNC_DELAY = 10 MAX_RETRIES = 30 RETRY_WAIT = 0.05 records_in_dns = [ - {'name': 'sync-test.', - 'type': 'SOA', - 'records': [{u'mname': u'172.17.42.1.', - u'rname': u'admin.test.com.', - u'retry': 3600, - u'refresh': 10800, - u'minimum': 38400, - u'expire': 604800, - u'serial': 1439234395}]}, - {'name': u'sync-test.', - 'type': u'NS', - 'records': [{u'nsdname': u'172.17.42.1.'}]}, - {'name': u'jenkins', - 'type': u'A', - 'records': [{u'address': u'10.1.1.1'}]}, - {'name': u'foo', - 'type': u'A', - 'records': [{u'address': u'2.2.2.2'}]}, - {'name': u'test', - 'type': u'A', - 'records': [{u'address': u'3.3.3.3'}, {u'address': u'4.4.4.4'}]}, - {'name': u'sync-test.', - 'type': u'A', - 'records': [{u'address': u'5.5.5.5'}]}, - {'name': u'already-exists', - 'type': u'A', - 'records': [{u'address': u'6.6.6.6'}]}, - {'name': u'fqdn', - 'type': u'A', - 'records': [{u'address': u'7.7.7.7'}]}, - {'name': u'_sip._tcp', - 'type': u'SRV', - 'records': [{u'priority': 10, u'weight': 60, u'port': 5060, u'target': u'foo.sync-test.'}]}, - {'name': u'existing.dotted', - 'type': u'A', - 'records': [{u'address': u'9.9.9.9'}]}] + {"name": "sync-test.", + "type": "SOA", + "records": [{"mname": "172.17.42.1.", + "rname": "admin.test.com.", + "retry": 3600, + "refresh": 10800, 
+ "minimum": 38400, + "expire": 604800, + "serial": 1439234395}]}, + {"name": "sync-test.", + "type": "NS", + "records": [{"nsdname": "172.17.42.1."}]}, + {"name": "jenkins", + "type": "A", + "records": [{"address": "10.1.1.1"}]}, + {"name": "foo", + "type": "A", + "records": [{"address": "2.2.2.2"}]}, + {"name": "test", + "type": "A", + "records": [{"address": "3.3.3.3"}, {"address": "4.4.4.4"}]}, + {"name": "sync-test.", + "type": "A", + "records": [{"address": "5.5.5.5"}]}, + {"name": "already-exists", + "type": "A", + "records": [{"address": "6.6.6.6"}]}, + {"name": "fqdn", + "type": "A", + "records": [{"address": "7.7.7.7"}]}, + {"name": "_sip._tcp", + "type": "SRV", + "records": [{"priority": 10, "weight": 60, "port": 5060, "target": "foo.sync-test."}]}, + {"name": "existing.dotted", + "type": "A", + "records": [{"address": "9.9.9.9"}]}] records_post_update = [ - {'name': 'sync-test.', - 'type': 'SOA', - 'records': [{u'mname': u'172.17.42.1.', - u'rname': u'admin.test.com.', - u'retry': 3600, - u'refresh': 10800, - u'minimum': 38400, - u'expire': 604800, - u'serial': 0}]}, - {'name': u'sync-test.', - 'type': u'NS', - 'records': [{u'nsdname': u'172.17.42.1.'}]}, - {'name': u'foo', - 'type': u'A', - 'records': [{u'address': u'1.2.3.4'}]}, - {'name': u'test', - 'type': u'A', - 'records': [{u'address': u'3.3.3.3'}, {u'address': u'4.4.4.4'}]}, - {'name': u'sync-test.', - 'type': u'A', - 'records': [{u'address': u'5.5.5.5'}]}, - {'name': u'already-exists', - 'type': u'A', - 'records': [{u'address': u'6.6.6.6'}]}, - {'name': u'newrs', - 'type': u'A', - 'records': [{u'address': u'2.3.4.5'}]}, - {'name': u'fqdn', - 'type': u'A', - 'records': [{u'address': u'7.7.7.7'}]}, - {'name': u'_sip._tcp', - 'type': u'SRV', - 'records': [{u'priority': 10, u'weight': 60, u'port': 5060, u'target': u'foo.sync-test.'}]}, - {'name': u'existing.dotted', - 'type': u'A', - 'records': [{u'address': u'9.9.9.9'}]}, - {'name': u'dott.ed', - 'type': u'A', - 'records': [{u'address': 
u'6.7.8.9'}]}, - {'name': u'dott.ed-two', - 'type': u'A', - 'records': [{u'address': u'6.7.8.9'}]}] + {"name": "sync-test.", + "type": "SOA", + "records": [{"mname": "172.17.42.1.", + "rname": "admin.test.com.", + "retry": 3600, + "refresh": 10800, + "minimum": 38400, + "expire": 604800, + "serial": 0}]}, + {"name": "sync-test.", + "type": "NS", + "records": [{"nsdname": "172.17.42.1."}]}, + {"name": "foo", + "type": "A", + "records": [{"address": "1.2.3.4"}]}, + {"name": "test", + "type": "A", + "records": [{"address": "3.3.3.3"}, {"address": "4.4.4.4"}]}, + {"name": "sync-test.", + "type": "A", + "records": [{"address": "5.5.5.5"}]}, + {"name": "already-exists", + "type": "A", + "records": [{"address": "6.6.6.6"}]}, + {"name": "newrs", + "type": "A", + "records": [{"address": "2.3.4.5"}]}, + {"name": "fqdn", + "type": "A", + "records": [{"address": "7.7.7.7"}]}, + {"name": "_sip._tcp", + "type": "SRV", + "records": [{"priority": 10, "weight": 60, "port": 5060, "target": "foo.sync-test."}]}, + {"name": "existing.dotted", + "type": "A", + "records": [{"address": "9.9.9.9"}]}, + {"name": "dott.ed", + "type": "A", + "records": [{"address": "6.7.8.9"}]}, + {"name": "dott.ed-two", + "type": "A", + "records": [{"address": "6.7.8.9"}]}] @pytest.mark.skip_production @@ -96,151 +97,149 @@ def test_sync_zone_success(shared_zone_test_context): Test syncing a zone """ client = shared_zone_test_context.ok_vinyldns_client - zone_name = 'sync-test' + zone_name = "sync-test" updated_rs_id = None check_rs = None zone = { - 'name': zone_name, - 'email': 'test@test.com', - 'adminGroupId': shared_zone_test_context.ok_group['id'], - 'isTest': True, - 'connection': { - 'name': 'vinyldns.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "name": zone_name, + "email": "test@test.com", + "adminGroupId": shared_zone_test_context.ok_group["id"], + "isTest": True, + "connection": { + "name": "vinyldns.", + 
"keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip }, - 'transferConnection': { - 'name': 'vinyldns.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "transferConnection": { + "name": "vinyldns.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip } } try: zone_change = client.create_zone(zone, status=202) - zone = zone_change['zone'] - client.wait_until_zone_active(zone['id']) + zone = zone_change["zone"] + client.wait_until_zone_active(zone["id"]) - time.sleep(.5) - - # confirm zone has been synced - get_result = client.get_zone(zone['id']) - synced_zone = get_result['zone'] - latest_sync = synced_zone['latestSync'] + # Confirm zone has been synced + get_result = client.get_zone(zone["id"]) + synced_zone = get_result["zone"] + latest_sync = synced_zone["latestSync"] assert_that(latest_sync, is_not(none())) - # confirm that the recordsets in DNS have been saved in vinyldns - recordsets = client.list_recordsets_by_zone(zone['id'])['recordSets'] + # Confirm that the recordsets in DNS have been saved in vinyldns + recordsets = client.list_recordsets_by_zone(zone["id"])["recordSets"] assert_that(len(recordsets), is_(10)) for rs in recordsets: - if rs['name'] == 'foo': - # get the ID for recordset with name 'foo' - updated_rs_id = rs['id'] - small_rs = dict((k, rs[k]) for k in ['name', 'type', 'records']) - small_rs['records'] = sorted(small_rs['records']) - if small_rs['type'] == 'SOA': - assert_that(small_rs['name'], is_('sync-test.')) + if rs["name"] == "foo": + # get the ID for recordset with name "foo" + updated_rs_id = rs["id"] + small_rs = dict((k, rs[k]) for k in ["name", "type", "records"]) + if small_rs["type"] == "SOA": + assert_that(small_rs["name"], is_("sync-test.")) else: assert_that(records_in_dns, 
has_item(small_rs)) - # give the 'foo' record an ownerGroupID to confirm it's still present after the zone sync - foo_rs = client.get_recordset(zone['id'], updated_rs_id)['recordSet'] - foo_rs['ownerGroupId'] = shared_zone_test_context.ok_group['id'] + # Give the "foo" record an ownerGroupID to confirm it's still present after the zone sync + foo_rs = client.get_recordset(zone["id"], updated_rs_id)["recordSet"] + foo_rs["ownerGroupId"] = shared_zone_test_context.ok_group["id"] update_response = client.update_recordset(foo_rs, status=202) - foo_rs_change = client.wait_until_recordset_change_status(update_response, 'Complete') - assert_that(foo_rs_change['recordSet']['ownerGroupId'], is_(shared_zone_test_context.ok_group['id'])) + foo_rs_change = client.wait_until_recordset_change_status(update_response, "Complete") + assert_that(foo_rs_change["recordSet"]["ownerGroupId"], is_(shared_zone_test_context.ok_group["id"])) - # make changes to the dns backend - dns_update(zone, 'foo', 38400, 'A', '1.2.3.4') - dns_add(zone, 'newrs', 38400, 'A', '2.3.4.5') - dns_delete(zone, 'jenkins', 'A') + # Make changes to the DNS backend + dns_update(zone, "foo", 38400, "A", "1.2.3.4") + dns_add(zone, "newrs", 38400, "A", "2.3.4.5") + dns_delete(zone, "jenkins", "A") - # add unknown this should not be synced - dns_add(zone, 'dnametest', 38400, 'DNAME', 'test.com.') + # Add an unknown record type; this should not be synced + dns_add(zone, "dnametest", 38400, "DNAME", "test.com.") - # add dotted hosts, this should be synced, so we will have 10 records ( +2 ) - dns_add(zone, 'dott.ed', 38400, 'A', '6.7.8.9') - dns_add(zone, 'dott.ed-two', 38400, 'A', '6.7.8.9') + # Add dotted hosts; these should be synced, so we will have 12 records (10 + 2) + dns_add(zone, "dott.ed", 38400, "A", "6.7.8.9") + dns_add(zone, "dott.ed-two", 38400, "A", "6.7.8.9") - # wait for next sync - time.sleep(10) + # Wait until we can safely sync again (after the initial sync caused by creating/importing the zone) + 
time.sleep(API_SYNC_DELAY) - # sync again - change = client.sync_zone(zone['id'], status=202) + # Perform the sync + change = client.sync_zone(zone["id"], status=202) client.wait_until_zone_change_status_synced(change) - # confirm cannot again sync without waiting - client.sync_zone(zone['id'], status=403) + # Confirm that we cannot sync again without waiting + client.sync_zone(zone["id"], status=403) - # validate zone - get_result = client.get_zone(zone['id']) - synced_zone = get_result['zone'] - assert_that(synced_zone['latestSync'], is_not(latest_sync)) - assert_that(synced_zone['status'], is_('Active')) - assert_that(synced_zone['updated'], is_not(none())) + # Validate zone + get_result = client.get_zone(zone["id"]) + synced_zone = get_result["zone"] + assert_that(synced_zone["latestSync"], is_not(latest_sync)) + assert_that(synced_zone["status"], is_("Active")) + assert_that(synced_zone["updated"], is_not(none())) # confirm that the updated recordsets in DNS have been saved in vinyldns - recordsets = client.list_recordsets_by_zone(zone['id'])['recordSets'] + recordsets = client.list_recordsets_by_zone(zone["id"])["recordSets"] assert_that(len(recordsets), is_(12)) for rs in recordsets: - small_rs = dict((k, rs[k]) for k in ['name', 'type', 'records']) - small_rs['records'] = sorted(small_rs['records']) - if small_rs['type'] == 'SOA': - small_rs['records'][0]['serial'] = 0 + small_rs = dict((k, rs[k]) for k in ["name", "type", "records"]) + small_rs["records"] = small_rs["records"] + if small_rs["type"] == "SOA": + small_rs["records"][0]["serial"] = 0 # records_post_update does not contain dnametest assert_that(records_post_update, has_item(small_rs)) - changes = client.list_recordset_changes(zone['id']) - for c in changes['recordSetChanges']: - if c['id'] != foo_rs_change['id']: - assert_that(c['systemMessage'], is_('Change applied via zone sync')) + changes = client.list_recordset_changes(zone["id"]) + for c in changes["recordSetChanges"]: + if c["id"] != 
foo_rs_change["id"]: + assert_that(c["systemMessage"], is_("Change applied via zone sync")) - check_rs = client.get_recordset(zone['id'], updated_rs_id)['recordSet'] - assert_that(check_rs['ownerGroupId'], is_(shared_zone_test_context.ok_group['id'])) + check_rs = client.get_recordset(zone["id"], updated_rs_id)["recordSet"] + assert_that(check_rs["ownerGroupId"], is_(shared_zone_test_context.ok_group["id"])) for rs in recordsets: # confirm that we can update the dotted host if the name is the same - if rs['name'] == 'dott.ed': + if rs["name"] == "dott.ed": attempt_update = rs - attempt_update['ttl'] = attempt_update['ttl'] + 100 + attempt_update["ttl"] = attempt_update["ttl"] + 100 change = client.update_recordset(attempt_update, status=202) - client.wait_until_recordset_change_status(change, 'Complete') + client.wait_until_recordset_change_status(change, "Complete") # we should be able to delete the record - client.delete_recordset(rs['zoneId'], rs['id'], status=202) - client.wait_until_recordset_deleted(rs['zoneId'], rs['id']) + client.delete_recordset(rs["zoneId"], rs["id"], status=202) + client.wait_until_recordset_deleted(rs["zoneId"], rs["id"]) # confirm that we cannot update the dotted host if the name changes - if rs['name'] == 'dott.ed-two': + if rs["name"] == "dott.ed-two": attempt_update = rs - attempt_update['name'] = 'new.dotted' + attempt_update["name"] = "new.dotted" errors = client.update_recordset(attempt_update, status=422) assert_that(errors, is_("Cannot update RecordSet's name.")) - # we should be able to delete the record - client.delete_recordset(rs['zoneId'], rs['id'], status=202) - client.wait_until_recordset_deleted(rs['zoneId'], rs['id']) + client.delete_recordset(rs["zoneId"], rs["id"], status=202) + client.wait_until_recordset_deleted(rs["zoneId"], rs["id"]) - if rs['name'] == "example.dotted": + if rs["name"] == "example.dotted": # confirm that we can modify the example dotted good_update = rs - good_update['name'] = "example-dotted" + 
good_update["name"] = "example-dotted" change = client.update_recordset(good_update, status=202) - client.wait_until_recordset_change_status(change, 'Complete') + client.wait_until_recordset_change_status(change, "Complete") finally: # reset the ownerGroupId for foo record if check_rs: - check_rs['ownerGroupId'] = None + check_rs["ownerGroupId"] = None update_response = client.update_recordset(check_rs, status=202) - client.wait_until_recordset_change_status(update_response, 'Complete') - if 'id' in zone: - dns_update(zone, 'foo', 38400, 'A', '2.2.2.2') - dns_delete(zone, 'newrs', 'A') - dns_add(zone, 'jenkins', 38400, 'A', '10.1.1.1') - dns_delete(zone, 'example-dotted', 'A') - client.abandon_zones([zone['id']], status=202) + client.wait_until_recordset_change_status(update_response, "Complete") + if "id" in zone: + dns_update(zone, "foo", 38400, "A", "2.2.2.2") + dns_delete(zone, "newrs", "A") + dns_add(zone, "jenkins", 38400, "A", "10.1.1.1") + dns_delete(zone, "example-dotted", "A") + dns_delete(zone, "dott.ed", "A") + dns_delete(zone, "dott.ed-two", "A") + client.abandon_zones([zone["id"]], status=202) diff --git a/modules/api/functional_test/live_tests/zones/update_zone_test.py b/modules/api/functional_test/live_tests/zones/update_zone_test.py index 2b1aceed7..e762506bb 100644 --- a/modules/api/functional_test/live_tests/zones/update_zone_test.py +++ b/modules/api/functional_test/live_tests/zones/update_zone_test.py @@ -1,6 +1,7 @@ +import copy + import pytest -import uuid -from hamcrest import * + from utils import * from vinyldns_context import VinylDNSTestContext @@ -13,58 +14,58 @@ def test_update_zone_success(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client result_zone = None try: - zone_name = 'one-time' + zone_name = "one-time" acl_rule = { - 'accessLevel': 'Read', - 'description': 'test-acl-updated-by-updatezn', - 'userId': 'ok', - 'recordMask': 'www-*', - 'recordTypes': ['A', 'AAAA', 'CNAME'] + "accessLevel": "Read", + 
"description": "test-acl-updated-by-updatezn", + "userId": "ok", + "recordMask": "www-*", + "recordTypes": ["A", "AAAA", "CNAME"] } zone = { - 'name': zone_name, - 'email': 'test@test.com', - 'adminGroupId': shared_zone_test_context.ok_group['id'], - 'connection': { - 'name': 'vinyldns.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "name": zone_name, + "email": "test@test.com", + "adminGroupId": shared_zone_test_context.ok_group["id"], + "connection": { + "name": "vinyldns.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip }, - 'transferConnection': { - 'name': 'vinyldns.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "transferConnection": { + "name": "vinyldns.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip } } result = client.create_zone(zone, status=202) - result_zone = result['zone'] - client.wait_until_zone_active(result_zone['id']) + result_zone = result["zone"] + client.wait_until_zone_active(result_zone["id"]) - result_zone['email'] = 'foo@bar.com' - result_zone['acl']['rules'] = [acl_rule] + result_zone["email"] = "foo@bar.com" + result_zone["acl"]["rules"] = [acl_rule] update_result = client.update_zone(result_zone, status=202) client.wait_until_zone_change_status_synced(update_result) - assert_that(update_result['changeType'], is_('Update')) - assert_that(update_result['userId'], is_('ok')) - assert_that(update_result, has_key('created')) + assert_that(update_result["changeType"], is_("Update")) + assert_that(update_result["userId"], is_("ok")) + assert_that(update_result, has_key("created")) - get_result = client.get_zone(result_zone['id']) + get_result = client.get_zone(result_zone["id"]) - uz = 
get_result['zone'] - assert_that(uz['email'], is_('foo@bar.com')) - assert_that(uz['updated'], is_not(none())) + uz = get_result["zone"] + assert_that(uz["email"], is_("foo@bar.com")) + assert_that(uz["updated"], is_not(none())) - acl = uz['acl'] + acl = uz["acl"] verify_acl_rule_is_present_once(acl_rule, acl) finally: if result_zone: - client.abandon_zones([result_zone['id']], status=202) + client.abandon_zones([result_zone["id"]], status=202) def test_update_bad_acl_fails(shared_zone_test_context): @@ -72,17 +73,17 @@ def test_update_bad_acl_fails(shared_zone_test_context): Test that updating a zone with a bad ACL rule fails """ client = shared_zone_test_context.ok_vinyldns_client - zone = shared_zone_test_context.ok_zone + zone = copy.deepcopy(shared_zone_test_context.ok_zone) acl_bad_regex = { - 'accessLevel': 'Read', - 'description': 'test-acl-updated-by-updatezn-bad', - 'userId': 'ok', - 'recordMask': '*', - 'recordTypes': ['A', 'AAAA', 'CNAME'] + "accessLevel": "Read", + "description": "test-acl-updated-by-updatezn-bad", + "userId": "ok", + "recordMask": "*", + "recordTypes": ["A", "AAAA", "CNAME"] } - zone['acl']['rules'] = [acl_bad_regex] + zone["acl"]["rules"] = [acl_bad_regex] client.update_zone(zone, status=400) @@ -92,16 +93,16 @@ def test_update_acl_no_group_or_user_fails(shared_zone_test_context): Test that updating a zone with an ACL with no user/group fails """ client = shared_zone_test_context.ok_vinyldns_client - zone = shared_zone_test_context.ok_zone + zone = copy.deepcopy(shared_zone_test_context.ok_zone) bad_acl = { - 'accessLevel': 'Read', - 'description': 'test-acl-updated-by-updatezn-bad-ids', - 'recordMask': 'www-*', - 'recordTypes': ['A', 'AAAA', 'CNAME'] + "accessLevel": "Read", + "description": "test-acl-updated-by-updatezn-bad-ids", + "recordMask": "www-*", + "recordTypes": ["A", "AAAA", "CNAME"] } - zone['acl']['rules'] = [bad_acl] + zone["acl"]["rules"] = [bad_acl] client.update_zone(zone, status=400) @@ -115,47 +116,47 @@ def 
test_update_missing_zone_data(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client result_zone = None try: - zone_name = 'one-time.' + zone_name = "one-time." zone = { - 'name': zone_name, - 'email': 'test@test.com', - 'adminGroupId': shared_zone_test_context.ok_group['id'], - 'connection': { - 'name': 'vinyldns.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "name": zone_name, + "email": "test@test.com", + "adminGroupId": shared_zone_test_context.ok_group["id"], + "connection": { + "name": "vinyldns.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip }, - 'transferConnection': { - 'name': 'vinyldns.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "transferConnection": { + "name": "vinyldns.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip } } result = client.create_zone(zone, status=202) - result_zone = result['zone'] - client.wait_until_zone_active(result[u'zone'][u'id']) + result_zone = result["zone"] + client.wait_until_zone_active(result["zone"]["id"]) update_zone = { - 'id': result_zone['id'], - 'name': result_zone['name'], - 'random_key': 'some_value', - 'another_key': 'meaningless_data', - 'adminGroupId': zone['adminGroupId'] + "id": result_zone["id"], + "name": result_zone["name"], + "random_key": "some_value", + "another_key": "meaningless_data", + "adminGroupId": zone["adminGroupId"] } - errors = client.update_zone(update_zone, status=400)['errors'] - assert_that(errors, contains_inanyorder('Missing Zone.email')) + errors = client.update_zone(update_zone, status=400)["errors"] + assert_that(errors, contains_inanyorder("Missing Zone.email")) # Check that the failed update didn't 
go through - zone_get = client.get_zone(result_zone['id'])['zone'] - assert_that(zone_get['name'], is_(zone_name)) + zone_get = client.get_zone(result_zone["id"])["zone"] + assert_that(zone_get["name"], is_(zone_name)) finally: if result_zone: - client.abandon_zones([result_zone['id']], status=202) + client.abandon_zones([result_zone["id"]], status=202) @pytest.mark.serial @@ -166,46 +167,46 @@ def test_update_invalid_zone_data(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client result_zone = None try: - zone_name = 'one-time.' + zone_name = "one-time." zone = { - 'name': zone_name, - 'email': 'test@test.com', - 'adminGroupId': shared_zone_test_context.ok_group['id'], - 'connection': { - 'name': 'vinyldns.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "name": zone_name, + "email": "test@test.com", + "adminGroupId": shared_zone_test_context.ok_group["id"], + "connection": { + "name": "vinyldns.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip }, - 'transferConnection': { - 'name': 'vinyldns.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "transferConnection": { + "name": "vinyldns.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip } } result = client.create_zone(zone, status=202) - result_zone = result['zone'] - client.wait_until_zone_active(result[u'zone'][u'id']) + result_zone = result["zone"] + client.wait_until_zone_active(result["zone"]["id"]) update_zone = { - 'id': result_zone['id'], - 'name': result_zone['name'], - 'email': 'test@test.com', - 'adminGroupId': True + "id": result_zone["id"], + "name": result_zone["name"], + "email": "test@test.com", + "adminGroupId": True } - 
errors = client.update_zone(update_zone, status=400)['errors'] - assert_that(errors, contains_inanyorder('Do not know how to convert JBool(true) into class java.lang.String')) + errors = client.update_zone(update_zone, status=400)["errors"] + assert_that(errors, contains_inanyorder("Do not know how to convert JBool(true) into class java.lang.String")) # Check that the failed update didn't go through - zone_get = client.get_zone(result_zone['id'])['zone'] - assert_that(zone_get['name'], is_(zone_name)) + zone_get = client.get_zone(result_zone["id"])["zone"] + assert_that(zone_get["name"], is_(zone_name)) finally: if result_zone: - client.abandon_zones([result_zone['id']], status=202) + client.abandon_zones([result_zone["id"]], status=202) @pytest.mark.serial @@ -215,22 +216,22 @@ def test_update_zone_returns_404_if_zone_not_found(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client zone = { - 'name': 'one-time.', - 'email': 'test@test.com', - 'id': 'nothere', - 'connection': { - 'name': 'old-shared.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "name": "one-time.", + "email": "test@test.com", + "id": "nothere", + "connection": { + "name": "old-shared.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip }, - 'transferConnection': { - 'name': 'old-shared.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "transferConnection": { + "name": "old-shared.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip }, - 'adminGroupId': shared_zone_test_context.ok_group['id'] + "adminGroupId": shared_zone_test_context.ok_group["id"] } client.update_zone(zone, status=404) @@ -243,23 +244,23 @@ def 
test_create_acl_group_rule_success(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client acl_rule = { - 'accessLevel': 'Read', - 'description': 'test-acl-group-id', - 'groupId': shared_zone_test_context.ok_group['id'], - 'recordMask': 'www-*', - 'recordTypes': ['A', 'AAAA', 'CNAME'] + "accessLevel": "Read", + "description": "test-acl-group-id", + "groupId": shared_zone_test_context.ok_group["id"], + "recordMask": "www-*", + "recordTypes": ["A", "AAAA", "CNAME"] } - result = client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, status=202) + result = client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone["id"], acl_rule, status=202) # This is async, we get a zone change back - acl = result['zone']['acl'] + acl = result["zone"]["acl"] verify_acl_rule_is_present_once(acl_rule, acl) # make sure that our acl rule appears on the zone - zone = client.get_zone(result['zone']['id'])['zone'] + zone = client.get_zone(result["zone"]["id"])["zone"] - acl = zone['acl'] + acl = zone["acl"] verify_acl_rule_is_present_once(acl_rule, acl) @@ -272,23 +273,23 @@ def test_create_acl_user_rule_success(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client acl_rule = { - 'accessLevel': 'Read', - 'description': 'test-acl-user-id', - 'userId': 'ok', - 'recordMask': 'www-*', - 'recordTypes': ['A', 'AAAA', 'CNAME'] + "accessLevel": "Read", + "description": "test-acl-user-id", + "userId": "ok", + "recordMask": "www-*", + "recordTypes": ["A", "AAAA", "CNAME"] } - result = client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, status=202) + result = client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone["id"], acl_rule, status=202) # This is async, we get a zone change back - acl = result['zone']['acl'] + acl = result["zone"]["acl"] verify_acl_rule_is_present_once(acl_rule, acl) # make sure that our acl rule appears on the 
zone - zone = client.get_zone(result['zone']['id'])['zone'] + zone = client.get_zone(result["zone"]["id"])["zone"] - acl = zone['acl'] + acl = zone["acl"] verify_acl_rule_is_present_once(acl_rule, acl) @@ -300,14 +301,14 @@ def test_create_acl_user_rule_invalid_regex_failure(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client acl_rule = { - 'accessLevel': 'Read', - 'description': 'test-acl-user-id', - 'userId': '789', - 'recordMask': 'x{5,-3}', - 'recordTypes': ['A', 'AAAA', 'CNAME'] + "accessLevel": "Read", + "description": "test-acl-user-id", + "userId": "789", + "recordMask": "x{5,-3}", + "recordTypes": ["A", "AAAA", "CNAME"] } - errors = client.add_zone_acl_rule(shared_zone_test_context.system_test_zone['id'], acl_rule, status=400) + errors = client.add_zone_acl_rule(shared_zone_test_context.system_test_zone["id"], acl_rule, status=400) assert_that(errors, contains_string("record mask x{5,-3} is an invalid regex")) @@ -318,16 +319,15 @@ def test_create_acl_user_rule_invalid_cidr_failure(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client acl_rule = { - 'accessLevel': 'Read', - 'description': 'test-acl-user-id', - 'userId': '789', - 'recordMask': '10.0.0.0/50', - 'recordTypes': ['PTR'] + "accessLevel": "Read", + "description": "test-acl-user-id", + "userId": "789", + "recordMask": "10.0.0.0/50", + "recordTypes": ["PTR"] } - errors = client.add_zone_acl_rule(shared_zone_test_context.ip4_reverse_zone['id'], acl_rule, status=400) - assert_that(errors, - contains_string("PTR types must have no mask or a valid CIDR mask: IPv4 mask must be between 0 and 32")) + errors = client.add_zone_acl_rule(shared_zone_test_context.ip4_reverse_zone["id"], acl_rule, status=400) + assert_that(errors, contains_string("PTR types must have no mask or a valid CIDR mask: IPv4 mask must be between 0 and 32")) @pytest.mark.serial @@ -338,24 +338,24 @@ def test_create_acl_user_rule_valid_cidr_success(shared_zone_test_context): client = 
shared_zone_test_context.ok_vinyldns_client acl_rule = { - 'accessLevel': 'Read', - 'description': 'test-acl-user-id', - 'userId': 'ok', - 'recordMask': '10.0.0.0/20', - 'recordTypes': ['PTR'] + "accessLevel": "Read", + "description": "test-acl-user-id", + "userId": "ok", + "recordMask": "10.0.0.0/20", + "recordTypes": ["PTR"] } - result = client.add_zone_acl_rule_with_wait(shared_zone_test_context.ip4_reverse_zone['id'], acl_rule, status=202) + result = client.add_zone_acl_rule_with_wait(shared_zone_test_context.ip4_reverse_zone["id"], acl_rule, status=202) # This is async, we get a zone change back - acl = result['zone']['acl'] + acl = result["zone"]["acl"] verify_acl_rule_is_present_once(acl_rule, acl) # make sure that our acl rule appears on the zone - zone = client.get_zone(result['zone']['id'])['zone'] + zone = client.get_zone(result["zone"]["id"])["zone"] - acl = zone['acl'] + acl = zone["acl"] verify_acl_rule_is_present_once(acl_rule, acl) @@ -367,14 +367,14 @@ def test_create_acl_user_rule_multiple_cidr_failure(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client acl_rule = { - 'accessLevel': 'Read', - 'description': 'test-acl-user-id', - 'userId': '789', - 'recordMask': '10.0.0.0/20', - 'recordTypes': ['PTR', 'A', 'AAAA'] + "accessLevel": "Read", + "description": "test-acl-user-id", + "userId": "789", + "recordMask": "10.0.0.0/20", + "recordTypes": ["PTR", "A", "AAAA"] } - errors = client.add_zone_acl_rule(shared_zone_test_context.ip4_reverse_zone['id'], acl_rule, status=400) + errors = client.add_zone_acl_rule(shared_zone_test_context.ip4_reverse_zone["id"], acl_rule, status=400) assert_that(errors, contains_string("Multiple record types including PTR must have no mask")) @@ -386,23 +386,23 @@ def test_create_acl_user_rule_multiple_none_success(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client acl_rule = { - 'accessLevel': 'Read', - 'description': 'test-acl-user-id', - 'userId': 'ok', - 
'recordTypes': ['PTR', 'A', 'AAAA'] + "accessLevel": "Read", + "description": "test-acl-user-id", + "userId": "ok", + "recordTypes": ["PTR", "A", "AAAA"] } - result = client.add_zone_acl_rule_with_wait(shared_zone_test_context.ip4_reverse_zone['id'], acl_rule, status=202) + result = client.add_zone_acl_rule_with_wait(shared_zone_test_context.ip4_reverse_zone["id"], acl_rule, status=202) # This is async, we get a zone change back - acl = result['zone']['acl'] + acl = result["zone"]["acl"] verify_acl_rule_is_present_once(acl_rule, acl) # make sure that our acl rule appears on the zone - zone = client.get_zone(result['zone']['id'])['zone'] + zone = client.get_zone(result["zone"]["id"])["zone"] - acl = zone['acl'] + acl = zone["acl"] verify_acl_rule_is_present_once(acl_rule, acl) @@ -414,14 +414,14 @@ def test_create_acl_user_rule_multiple_non_cidr_failure(shared_zone_test_context client = shared_zone_test_context.ok_vinyldns_client acl_rule = { - 'accessLevel': 'Read', - 'description': 'test-acl-user-id', - 'userId': '789', - 'recordMask': 'www-*', - 'recordTypes': ['PTR', 'A', 'AAAA'] + "accessLevel": "Read", + "description": "test-acl-user-id", + "userId": "789", + "recordMask": "www-*", + "recordTypes": ["PTR", "A", "AAAA"] } - errors = client.add_zone_acl_rule(shared_zone_test_context.ip4_reverse_zone['id'], acl_rule, status=400) + errors = client.add_zone_acl_rule(shared_zone_test_context.ip4_reverse_zone["id"], acl_rule, status=400) assert_that(errors, contains_string("Multiple record types including PTR must have no mask")) @@ -433,19 +433,20 @@ def test_create_acl_idempotent(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client acl_rule = { - 'accessLevel': 'Write', - 'description': 'test-acl-idempotent', - 'userId': 'ok', - 'recordMask': 'www-*', - 'recordTypes': ['A', 'AAAA', 'CNAME'] + "accessLevel": "Write", + "description": "test-acl-idempotent", + "userId": "ok", + "recordMask": "www-*", + "recordTypes": ["A", "AAAA", "CNAME"] } 
- result1 = client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, status=202) - result2 = client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, status=202) - result3 = client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, status=202) + test_zone_id = shared_zone_test_context.system_test_zone["id"] + client.add_zone_acl_rule_with_wait(test_zone_id, acl_rule, status=202) + client.add_zone_acl_rule_with_wait(test_zone_id, acl_rule, status=202) + client.add_zone_acl_rule_with_wait(test_zone_id, acl_rule, status=202) - zone = client.get_zone(shared_zone_test_context.system_test_zone['id'])['zone'] + zone = client.get_zone(test_zone_id)["zone"] - acl = zone['acl'] + acl = zone["acl"] # we should only have one rule that we created verify_acl_rule_is_present_once(acl_rule, acl) @@ -459,29 +460,29 @@ def test_delete_acl_group_rule_success(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client acl_rule = { - 'accessLevel': 'Read', - 'description': 'test-acl-delete-group-id', - 'groupId': shared_zone_test_context.ok_group['id'], - 'recordMask': 'www-*', - 'recordTypes': ['A', 'AAAA', 'CNAME'] + "accessLevel": "Read", + "description": "test-acl-delete-group-id", + "groupId": shared_zone_test_context.ok_group["id"], + "recordMask": "www-*", + "recordTypes": ["A", "AAAA", "CNAME"] } - result = client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, status=202) + result = client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone["id"], acl_rule, status=202) # make sure that our acl rule appears on the zone - zone = client.get_zone(result['zone']['id'])['zone'] + zone = client.get_zone(result["zone"]["id"])["zone"] - acl = zone['acl'] + acl = zone["acl"] verify_acl_rule_is_present_once(acl_rule, acl) # delete the rule - result = 
client.delete_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, - status=202) + result = client.delete_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone["id"], acl_rule, + status=202) # make sure that our acl is not on the zone - zone = client.get_zone(result['zone']['id'])['zone'] + zone = client.get_zone(result["zone"]["id"])["zone"] - verify_acl_rule_is_not_present(acl_rule, zone['acl']) + verify_acl_rule_is_not_present(acl_rule, zone["acl"]) @pytest.mark.serial @@ -492,29 +493,29 @@ def test_delete_acl_user_rule_success(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client acl_rule = { - 'accessLevel': 'Read', - 'description': 'test-acl-delete-user-id', - 'userId': 'ok', - 'recordMask': 'www-*', - 'recordTypes': ['A', 'AAAA', 'CNAME'] + "accessLevel": "Read", + "description": "test-acl-delete-user-id", + "userId": "ok", + "recordMask": "www-*", + "recordTypes": ["A", "AAAA", "CNAME"] } - result = client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, status=202) + result = client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone["id"], acl_rule, status=202) # make sure that our acl rule appears on the zone - zone = client.get_zone(result['zone']['id'])['zone'] + zone = client.get_zone(result["zone"]["id"])["zone"] - acl = zone['acl'] + acl = zone["acl"] verify_acl_rule_is_present_once(acl_rule, acl) # delete the rule - result = client.delete_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, - status=202) + result = client.delete_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone["id"], acl_rule, + status=202) # make sure that our acl is not on the zone - zone = client.get_zone(result['zone']['id'])['zone'] + zone = client.get_zone(result["zone"]["id"])["zone"] - verify_acl_rule_is_not_present(acl_rule, zone['acl']) + verify_acl_rule_is_not_present(acl_rule, zone["acl"]) def 
test_delete_non_existent_acl_rule_success(shared_zone_test_context): @@ -524,55 +525,52 @@ def test_delete_non_existent_acl_rule_success(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client acl_rule = { - 'accessLevel': 'Read', - 'description': 'test-acl-delete-non-existent-user-id', - 'userId': '789', - 'recordMask': 'www-*', - 'recordTypes': ['A', 'AAAA', 'CNAME'] + "accessLevel": "Read", + "description": "test-acl-delete-non-existent-user-id", + "userId": "789", + "recordMask": "www-*", + "recordTypes": ["A", "AAAA", "CNAME"] } # delete the rule - result = client.delete_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, - status=202) + result = client.delete_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone["id"], acl_rule, + status=202) # make sure that our acl is not on the zone - zone = client.get_zone(result['zone']['id'])['zone'] + zone = client.get_zone(result["zone"]["id"])["zone"] - verify_acl_rule_is_not_present(acl_rule, zone['acl']) + verify_acl_rule_is_not_present(acl_rule, zone["acl"]) @pytest.mark.serial def test_delete_acl_idempotent(shared_zone_test_context): """ - Test deleting the same acl rule multiple times results in only one rule remomved + Test deleting the same acl rule multiple times results in only one rule removed """ client = shared_zone_test_context.ok_vinyldns_client acl_rule = { - 'accessLevel': 'Write', - 'description': 'test-delete-acl-idempotent', - 'userId': 'ok', - 'recordMask': 'www-*', - 'recordTypes': ['A', 'AAAA', 'CNAME'] + "accessLevel": "Write", + "description": "test-delete-acl-idempotent", + "userId": "ok", + "recordMask": "www-*", + "recordTypes": ["A", "AAAA", "CNAME"] } - result = client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, status=202) + system_test_zone_id = shared_zone_test_context.system_test_zone["id"] + client.add_zone_acl_rule_with_wait(system_test_zone_id, acl_rule, status=202) - zone 
= client.get_zone(shared_zone_test_context.system_test_zone['id'])['zone'] - - acl = zone['acl'] + zone = client.get_zone(system_test_zone_id)["zone"] + acl = zone["acl"] # we should only have one rule that we created verify_acl_rule_is_present_once(acl_rule, acl) - result1 = client.delete_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, - status=202) - result2 = client.delete_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, - status=202) - result3 = client.delete_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, - status=202) + client.delete_zone_acl_rule_with_wait(system_test_zone_id, acl_rule, status=202) + client.delete_zone_acl_rule_with_wait(system_test_zone_id, acl_rule, status=202) + client.delete_zone_acl_rule_with_wait(system_test_zone_id, acl_rule, status=202) - zone = client.get_zone(result['zone']['id'])['zone'] + zone = client.get_zone(system_test_zone_id)["zone"] - verify_acl_rule_is_not_present(acl_rule, zone['acl']) + verify_acl_rule_is_not_present(acl_rule, zone["acl"]) @pytest.mark.serial @@ -584,45 +582,45 @@ def test_delete_acl_removes_permissions(shared_zone_test_context): ok_client = shared_zone_test_context.ok_vinyldns_client # ok adds and deletes acl rule dummy_client = shared_zone_test_context.dummy_vinyldns_client # dummy should not be able to see ok_zone once acl rule is deleted - ok_zone = ok_client.get_zone(shared_zone_test_context.ok_zone['id'])['zone'] + ok_zone = ok_client.get_zone(shared_zone_test_context.ok_zone["id"])["zone"] - ok_view = ok_client.list_zones()['zones'] + ok_view = ok_client.list_zones()["zones"] assert_that(ok_view, has_item(ok_zone)) # ok can see ok_zone # verify dummy cannot see ok_zone - dummy_view = dummy_client.list_zones()['zones'] + dummy_view = dummy_client.list_zones()["zones"] assert_that(dummy_view, is_not(has_item(ok_zone))) # cannot view zone # add acl rule acl_rule = { - 'accessLevel': 'Read', - 
'description': 'test_delete_acl_removes_permissions', - 'userId': 'dummy', # give dummy permission to see ok_zone - 'recordMask': 'www-*', - 'recordTypes': ['A', 'AAAA', 'CNAME'] + "accessLevel": "Read", + "description": "test_delete_acl_removes_permissions", + "userId": "dummy", # give dummy permission to see ok_zone + "recordMask": "www-*", + "recordTypes": ["A", "AAAA", "CNAME"] } - result = ok_client.add_zone_acl_rule_with_wait(shared_zone_test_context.ok_zone['id'], acl_rule, status=202) - ok_zone = ok_client.get_zone(shared_zone_test_context.ok_zone['id'])['zone'] - verify_acl_rule_is_present_once(acl_rule, ok_zone['acl']) + ok_client.add_zone_acl_rule_with_wait(shared_zone_test_context.ok_zone["id"], acl_rule, status=202) + ok_zone = ok_client.get_zone(shared_zone_test_context.ok_zone["id"])["zone"] + verify_acl_rule_is_present_once(acl_rule, ok_zone["acl"]) - ok_view = ok_client.list_zones()['zones'] + ok_view = ok_client.list_zones()["zones"] assert_that(ok_view, has_item(ok_zone)) # ok can still see ok_zone # verify dummy can see ok_zone - dummy_view = dummy_client.list_zones()['zones'] - ok_zone_dummy_view = dummy_client.list_zones(name_filter=ok_zone['name'])['zones'][0] + dummy_view = dummy_client.list_zones()["zones"] + ok_zone_dummy_view = dummy_client.list_zones(name_filter=ok_zone["name"])["zones"][0] assert_that(dummy_view, has_item(has_entries(ok_zone_dummy_view))) # can view zone # delete acl rule - result = ok_client.delete_zone_acl_rule_with_wait(shared_zone_test_context.ok_zone['id'], acl_rule, status=202) - ok_zone = ok_client.get_zone(shared_zone_test_context.ok_zone['id'])['zone'] - verify_acl_rule_is_not_present(acl_rule, ok_zone['acl']) + result = ok_client.delete_zone_acl_rule_with_wait(shared_zone_test_context.ok_zone["id"], acl_rule, status=202) + ok_zone = ok_client.get_zone(shared_zone_test_context.ok_zone["id"])["zone"] + verify_acl_rule_is_not_present(acl_rule, ok_zone["acl"]) - ok_view = ok_client.list_zones()['zones'] + ok_view 
= ok_client.list_zones()["zones"] assert_that(ok_view, has_item(ok_zone)) # ok can still see ok_zone # verify dummy can not see ok_zone - dummy_view = dummy_client.list_zones()['zones'] + dummy_view = dummy_client.list_zones()["zones"] assert_that(dummy_view, is_not(has_item(ok_zone_dummy_view))) # can still view zone @@ -632,21 +630,21 @@ def test_update_reverse_v4_zone(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client - zone = shared_zone_test_context.ip4_reverse_zone - zone['email'] = 'update-test@bar.com' + zone = copy.deepcopy(shared_zone_test_context.ip4_reverse_zone) + zone["email"] = "update-test@bar.com" update_result = client.update_zone(zone, status=202) client.wait_until_zone_change_status_synced(update_result) - assert_that(update_result['changeType'], is_('Update')) - assert_that(update_result['userId'], is_('ok')) - assert_that(update_result, has_key('created')) + assert_that(update_result["changeType"], is_("Update")) + assert_that(update_result["userId"], is_("ok")) + assert_that(update_result, has_key("created")) - get_result = client.get_zone(zone['id']) + get_result = client.get_zone(zone["id"]) - uz = get_result['zone'] - assert_that(uz['email'], is_('update-test@bar.com')) - assert_that(uz['updated'], is_not(none())) + uz = get_result["zone"] + assert_that(uz["email"], is_("update-test@bar.com")) + assert_that(uz["updated"], is_not(none())) def test_update_reverse_v6_zone(shared_zone_test_context): @@ -655,21 +653,21 @@ def test_update_reverse_v6_zone(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client - zone = shared_zone_test_context.ip6_reverse_zone - zone['email'] = 'update-test@bar.com' + zone = copy.deepcopy(shared_zone_test_context.ip6_reverse_zone) + zone["email"] = "update-test@bar.com" update_result = client.update_zone(zone, status=202) client.wait_until_zone_change_status_synced(update_result) - assert_that(update_result['changeType'], is_('Update')) - 
assert_that(update_result['userId'], is_('ok')) - assert_that(update_result, has_key('created')) + assert_that(update_result["changeType"], is_("Update")) + assert_that(update_result["userId"], is_("ok")) + assert_that(update_result, has_key("created")) - get_result = client.get_zone(zone['id']) + get_result = client.get_zone(zone["id"]) - uz = get_result['zone'] - assert_that(uz['email'], is_('update-test@bar.com')) - assert_that(uz['updated'], is_not(none())) + uz = get_result["zone"] + assert_that(uz["email"], is_("update-test@bar.com")) + assert_that(uz["updated"], is_not(none())) def test_activate_reverse_v4_zone_with_bad_key_fails(shared_zone_test_context): @@ -678,8 +676,8 @@ def test_activate_reverse_v4_zone_with_bad_key_fails(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client - update = dict(shared_zone_test_context.ip4_reverse_zone) - update['connection']['key'] = 'f00sn+4G2ldMn0q1CV3vsg==' + update = copy.deepcopy(shared_zone_test_context.ip4_reverse_zone) + update["connection"]["key"] = "f00sn+4G2ldMn0q1CV3vsg==" client.update_zone(update, status=400) @@ -689,8 +687,8 @@ def test_activate_reverse_v6_zone_with_bad_key_fails(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client - update = dict(shared_zone_test_context.ip6_reverse_zone) - update['connection']['key'] = 'f00sn+4G2ldMn0q1CV3vsg==' + update = copy.deepcopy(shared_zone_test_context.ip6_reverse_zone) + update["connection"]["key"] = "f00sn+4G2ldMn0q1CV3vsg==" client.update_zone(update, status=400) @@ -698,10 +696,9 @@ def test_user_cannot_update_zone_to_nonexisting_admin_group(shared_zone_test_con """ Test user cannot update a zone adminGroupId to a group that does not exist """ - - zone_update = shared_zone_test_context.ok_zone - zone_update['adminGroupId'] = "some-bad-id" - zone_update['connection']['key'] = VinylDNSTestContext.dns_key + zone_update = copy.deepcopy(shared_zone_test_context.ok_zone) + zone_update["adminGroupId"] = 
"some-bad-id" + zone_update["connection"]["key"] = VinylDNSTestContext.dns_key shared_zone_test_context.ok_vinyldns_client.update_zone(zone_update, status=400) @@ -711,58 +708,57 @@ def test_user_can_update_zone_to_another_admin_group(shared_zone_test_context): """ Test user can update a zone with an admin group they are a member of """ - client = shared_zone_test_context.dummy_vinyldns_client group = None - + zone = None try: result = client.create_zone( { - 'name': 'one-time.', - 'email': 'test@test.com', - 'adminGroupId': shared_zone_test_context.dummy_group['id'], - 'connection': { - 'name': 'vinyldns.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "name": "one-time.", + "email": "test@test.com", + "adminGroupId": shared_zone_test_context.dummy_group["id"], + "connection": { + "name": "vinyldns.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip }, - 'transferConnection': { - 'name': 'vinyldns.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + "transferConnection": { + "name": "vinyldns.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip } }, status=202 ) - zone = result['zone'] - client.wait_until_zone_active(result[u'zone'][u'id']) + zone = result["zone"] + client.wait_until_zone_active(result["zone"]["id"]) import json - print json.dumps(zone, indent=3) + print(json.dumps(zone, indent=3)) new_joint_group = { - 'name': 'new-ok-group', - 'email': 'test@test.com', - 'description': 'this is a description', - 'members': [{'id': 'ok', 'id': 'dummy'}], - 'admins': [{'id': 'ok'}] + "name": "new-ok-group", + "email": "test@test.com", + "description": "this is a description", + "members": [{"id": "ok"}, {"id": "dummy"}], 
+ "admins": [{"id": "ok"}] } group = client.create_group(new_joint_group, status=200) # changing the zone zone_update = dict(zone) - zone_update['adminGroupId'] = group['id'] + zone_update["adminGroupId"] = group["id"] result = client.update_zone(zone_update, status=202) client.wait_until_zone_change_status_synced(result) finally: if zone: - client.delete_zone(zone['id'], status=202) - client.wait_until_zone_deleted(zone['id']) + client.delete_zone(zone["id"], status=202) + client.wait_until_zone_deleted(zone["id"]) if group: - shared_zone_test_context.ok_vinyldns_client.delete_group(group['id'], status=(200, 404)) + shared_zone_test_context.ok_vinyldns_client.delete_group(group["id"], status=(200, 404)) @pytest.mark.serial @@ -773,8 +769,8 @@ def test_user_cannot_update_zone_to_nonmember_admin_group(shared_zone_test_conte # TODO: I don't know why this consistently fails but marking serial # TODO: STRANGE! When doing ALL serially it returns 400, when separating PAR from SER it returns a 403 # TODO: somehow changing the order of when this run changes the status code! Who is messing with the ok_zone? 
- zone_update = shared_zone_test_context.ok_zone - zone_update['adminGroupId'] = shared_zone_test_context.history_group['id'] + zone_update = copy.deepcopy(shared_zone_test_context.ok_zone) + zone_update["adminGroupId"] = shared_zone_test_context.history_group["id"] shared_zone_test_context.ok_vinyldns_client.update_zone(zone_update, status=403) @@ -786,14 +782,14 @@ def test_acl_rule_missing_access_level(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client acl_rule = { - 'description': 'test-acl-no-access-level', - 'groupId': '456', - 'recordMask': 'www-*', - 'recordTypes': ['A', 'AAAA', 'CNAME'] + "description": "test-acl-no-access-level", + "groupId": "456", + "recordMask": "www-*", + "recordTypes": ["A", "AAAA", "CNAME"] } - errors = client.add_zone_acl_rule(shared_zone_test_context.system_test_zone['id'], acl_rule, status=400)['errors'] + errors = client.add_zone_acl_rule(shared_zone_test_context.system_test_zone["id"], acl_rule, status=400)["errors"] assert_that(errors, has_length(1)) - assert_that(errors, contains_inanyorder('Missing ACLRule.accessLevel')) + assert_that(errors, contains_inanyorder("Missing ACLRule.accessLevel")) @pytest.mark.serial @@ -803,16 +799,16 @@ def test_acl_rule_both_user_and_group(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client acl_rule = { - 'accessLevel': 'Read', - 'userId': '789', - 'groupId': '456', - 'description': 'test-acl-no-user-or-group-level', - 'recordMask': 'www-*', - 'recordTypes': ['A', 'AAAA', 'CNAME'] + "accessLevel": "Read", + "userId": "789", + "groupId": "456", + "description": "test-acl-no-user-or-group-level", + "recordMask": "www-*", + "recordTypes": ["A", "AAAA", "CNAME"] } - errors = client.add_zone_acl_rule(shared_zone_test_context.system_test_zone['id'], acl_rule, status=400)['errors'] + errors = client.add_zone_acl_rule(shared_zone_test_context.system_test_zone["id"], acl_rule, status=400)["errors"] assert_that(errors, has_length(1)) - 
assert_that(errors, contains_inanyorder('Cannot specify both a userId and a groupId')) + assert_that(errors, contains_inanyorder("Cannot specify both a userId and a groupId")) def test_update_zone_no_authorization(shared_zone_test_context): @@ -822,9 +818,9 @@ def test_update_zone_no_authorization(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client zone = { - 'id': '12345', - 'name': str(uuid.uuid4()), - 'email': 'test@test.com', + "id": "12345", + "name": str(uuid.uuid4()), + "email": "test@test.com", } client.update_zone(zone, sign_request=False, status=401) @@ -836,12 +832,12 @@ def test_normal_user_cannot_update_shared_zone_flag(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client - result = client.get_zone(shared_zone_test_context.ok_zone['id'], status=200) - zone_update = result['zone'] - zone_update['shared'] = True + result = client.get_zone(shared_zone_test_context.ok_zone["id"], status=200) + zone_update = result["zone"] + zone_update["shared"] = True error = shared_zone_test_context.ok_vinyldns_client.update_zone(zone_update, status=403) - assert_that(error, contains_string('Not authorized to update zone shared status from false to true.')) + assert_that(error, contains_string("Not authorized to update zone shared status from false to true.")) def test_toggle_test_flag(shared_zone_test_context): @@ -849,13 +845,13 @@ def test_toggle_test_flag(shared_zone_test_context): Test the isTest flag is ignored in update requests """ client = shared_zone_test_context.shared_zone_vinyldns_client - zone_update = shared_zone_test_context.non_test_shared_zone - zone_update['isTest'] = True + zone_update = copy.deepcopy(shared_zone_test_context.non_test_shared_zone) + zone_update["isTest"] = True change = client.update_zone(zone_update, status=202) client.wait_until_zone_change_status_synced(change) - assert_that(change['zone']['isTest'], is_(False)) + assert_that(change["zone"]["isTest"], is_(False)) 
@pytest.mark.serial @@ -867,36 +863,36 @@ def test_update_connection_info_success(shared_zone_test_context): zone = shared_zone_test_context.system_test_zone # validating current zone state - to_update = client.get_zone(zone['id'])['zone'] - assert_that(to_update, has_key('connection')) - assert_that(to_update, has_key('transferConnection')) + to_update = client.get_zone(zone["id"])["zone"] + assert_that(to_update, has_key("connection")) + assert_that(to_update, has_key("transferConnection")) - to_update.pop('connection') - to_update.pop('transferConnection') - to_update['backendId'] = 'func-test-backend' + to_update.pop("connection") + to_update.pop("transferConnection") + to_update["backendId"] = "func-test-backend" test_rs = None try: change = client.update_zone(to_update, status=202) client.wait_until_zone_change_status_synced(change) - new_zone = change['zone'] + new_zone = change["zone"] - assert_that(new_zone, is_not(has_key('connection'))) - assert_that(new_zone, is_not(has_key('transferConnection'))) - assert_that(new_zone['backendId'], is_('func-test-backend')) + assert_that(new_zone, is_not(has_key("connection"))) + assert_that(new_zone, is_not(has_key("transferConnection"))) + assert_that(new_zone["backendId"], is_("func-test-backend")) # test adding a recordset - validates the key - new_rs = get_recordset_json(new_zone, 'test-update-connection-info-success', 'CNAME', - [{'cname': 'test-cname.'}]) + new_rs = create_recordset(new_zone, "test-update-connection-info-success", "CNAME", + [{"cname": "test-cname."}]) create_rs = client.create_recordset(new_rs, status=202) - test_rs = client.wait_until_recordset_change_status(create_rs, 'Complete')['recordSet'] + test_rs = client.wait_until_recordset_change_status(create_rs, "Complete")["recordSet"] finally: revert = client.update_zone(zone, status=202) client.wait_until_zone_change_status_synced(revert) if test_rs: - delete_result = client.delete_recordset(test_rs['zoneId'], test_rs['id'], status=202) - 
client.wait_until_recordset_change_status(delete_result, 'Complete') - + delete_result = client.delete_recordset(test_rs["zoneId"], test_rs["id"], status=202) + client.wait_until_recordset_change_status(delete_result, "Complete") +@pytest.mark.serial def test_update_connection_info_invalid_backendid(shared_zone_test_context): """ Test user can update zone to bad backendId fails @@ -904,10 +900,10 @@ def test_update_connection_info_invalid_backendid(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ok_zone - to_update = client.get_zone(zone['id'])['zone'] - to_update.pop('connection') - to_update.pop('transferConnection') - to_update['backendId'] = 'bad-backend-id' + to_update = client.get_zone(zone["id"])["zone"] + to_update.pop("connection") + to_update.pop("transferConnection") + to_update["backendId"] = "bad-backend-id" result = client.update_zone(to_update, status=400) assert_that(result, contains_string("Invalid backendId")) diff --git a/modules/api/functional_test/perf_tests/uat_sync_test.py b/modules/api/functional_test/perf_tests/uat_sync_test.py index 3bb6fe863..446685d73 100644 --- a/modules/api/functional_test/perf_tests/uat_sync_test.py +++ b/modules/api/functional_test/perf_tests/uat_sync_test.py @@ -1,63 +1,64 @@ -from hamcrest import * -from vinyldns_client import VinylDNSClient -from vinyldns_context import VinylDNSTestContext import time +from hamcrest import * + +from vinyldns_context import VinylDNSTestContext +from vinyldns_python import VinylDNSClient + + def test_sync_zone_success(): """ Test syncing a zone """ - zone_name = 'small' - client = VinylDNSClient() + with VinylDNSClient(VinylDNSTestContext.vinyldns_url, "okAccessKey", "okSecretKey") as client: + zone_name = "small" + zones = client.list_zones()["zones"] + zone = [z for z in zones if z["name"] == zone_name + "."] - zones = client.list_zones()['zones'] - zone = [z for z in zones if z['name'] == zone_name + "."] - - 
lastLatestSync = [] - new = True - if zone: - zone = zone[0] - lastLatestSync = zone['latestSync'] - new = False - - else: - # create zone if it doesnt exist - zone = { - 'name': zone_name, - 'email': 'test@test.com', - 'connection': { - 'name': 'vinyldns.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip - }, - 'transferConnection': { - 'name': 'vinyldns.', - 'keyName': VinylDNSTestContext.dns_key_name, - 'key': VinylDNSTestContext.dns_key, - 'primaryServer': VinylDNSTestContext.dns_ip + last_latest_sync = [] + new = True + if zone: + zone = zone[0] + last_latest_sync = zone["latestSync"] + new = False + else: + # create zone if it doesnt exist + zone = { + "name": zone_name, + "email": "test@test.com", + "connection": { + "name": "vinyldns.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip + }, + "transferConnection": { + "name": "vinyldns.", + "keyName": VinylDNSTestContext.dns_key_name, + "key": VinylDNSTestContext.dns_key, + "primaryServer": VinylDNSTestContext.name_server_ip + } } - } - zone_change = client.create_zone(zone, status=202) - zone = zone_change['zone'] - client.wait_until_zone_active(zone_change[u'zone'][u'id']) + zone_change = client.create_zone(zone, status=202) + zone = zone_change["zone"] + client.wait_until_zone_active(zone_change["zone"]["id"]) - zone_id = zone['id'] + zone_id = zone["id"] - # run sync - change = client.sync_zone(zone_id, status=202) + # run sync + client.sync_zone(zone_id, status=202) - # brief wait for zone status change. Can't use getZoneHistory here to check on the changeset itself, - # the action times out (presumably also querying the same record change table that the sync itself - # is interacting with) - time.sleep(0.5) - client.wait_until_zone_status(zone_id, 'Active') + # brief wait for zone status change. 
Can't use getZoneHistory here to check on the changeset itself, + # the action times out (presumably also querying the same record change table that the sync itself + # is interacting with) + time.sleep(0.5) + client.wait_until_zone_status(zone_id, "Active") - # confirm zone has been updated - get_result = client.get_zone(zone_id) - synced_zone = get_result['zone'] - latestSync = synced_zone['latestSync'] - assert_that(synced_zone['updated'], is_not(none())) - assert_that(latestSync, is_not(none())) - if not new: - assert_that(latestSync, is_not(lastLatestSync)) + # confirm zone has been updated + get_result = client.get_zone(zone_id) + synced_zone = get_result["zone"] + latest_sync = synced_zone["latestSync"] + assert_that(synced_zone["updated"], is_not(none())) + assert_that(latest_sync, is_not(none())) + if not new: + assert_that(latest_sync, is_not(last_latest_sync)) diff --git a/modules/api/functional_test/pytest.ini b/modules/api/functional_test/pytest.ini index 3e7692550..4e186a482 100644 --- a/modules/api/functional_test/pytest.ini +++ b/modules/api/functional_test/pytest.ini @@ -1,3 +1,4 @@ [pytest] -norecursedirs=.virtualenv eggs +norecursedirs=.virtualenv eggs .venv_win addopts = -rfesxX --capture=sys --junitxml=../target/pytest_reports/pytest.xml --durations=30 + diff --git a/modules/api/functional_test/pytest.sh b/modules/api/functional_test/pytest.sh new file mode 100644 index 000000000..7272c01e0 --- /dev/null +++ b/modules/api/functional_test/pytest.sh @@ -0,0 +1,32 @@ +#!/usr/bin/env bash +set -euo pipefail + +clean_up() { + echo "Cleaning up.." + if [ -d "./.virtualenv" ]; then + rm -rf ./.virtualenv + fi + exit 1 +} + +if [ ! -d "./.virtualenv" ]; then + # If we're interrupted during this process, make sure we cleanup + trap clean_up INT TERM + echo -n "Creating virtualenv..." 
+  python3 -m venv --clear ./.virtualenv
+  echo "done"
+  source ./.virtualenv/bin/activate
+  pip3 install -r requirements.txt
+else
+  # Try to activate; on failure, clean up
+  source ./.virtualenv/bin/activate || clean_up
+
+  # We can pass --update as the first parameter to rerun the pip install
+  # Use ${1:-} so 'set -u' doesn't abort when no arguments are passed
+  if [ "${1:-}" == "--update" ]; then
+    echo "Updating dependencies..."
+    pip3 install -r requirements.txt
+    shift
+  fi
+fi
+
+PYTHONPATH=. pytest "$@"
diff --git a/modules/api/functional_test/requirements.txt b/modules/api/functional_test/requirements.txt
index ee3576feb..716727e4b 100644
--- a/modules/api/functional_test/requirements.txt
+++ b/modules/api/functional_test/requirements.txt
@@ -1,15 +1,12 @@
-# requirements.txt v1.0
-# ---------------------
-# Add project specific python requirements to this file.
-# Do not commit them in the project!
-# Make sure they exist on our corporate PyPi server.
-
-pyhamcrest==1.8.0
+pyhamcrest==2.0.2
 pytz>=2014
-pytest==4.4.1
-mock==1.0.1
-dnspython==1.14.0
-boto==2.48.0
-future==0.17.0
-requests==2.20.0
-pytest-xdist==1.29.0
+pytest==6.2.5
+mock==4.0.3
+dnspython==2.1.0
+boto3==1.18.47
+botocore==1.21.47
+requests==2.26.0
+pytest-xdist==2.4.0
+python-dateutil==2.8.2
+filelock==3.0.12
+pytest-custom_exit_code==0.3.0
\ No newline at end of file
diff --git a/modules/api/functional_test/run.py b/modules/api/functional_test/run.py
deleted file mode 100755
index 10b4ed671..000000000
--- a/modules/api/functional_test/run.py
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/usr/bin/env python
-import os
-import sys
-
-basedir = os.path.dirname(os.path.realpath(__file__))
-vedir = os.path.join(basedir, '.virtualenv')
-os.system('./bootstrap.sh')
-
-activate_virtualenv = os.path.join(vedir, 'bin', 'activate_this.py')
-print('Activating virtualenv at ' + activate_virtualenv)
-
-report_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), '../target/pytest_reports')
-if not os.path.exists(report_dir):
-    os.system('mkdir -p ' + report_dir)
-
-execfile(activate_virtualenv, dict(__file__=activate_virtualenv)) - -import pytest -result = pytest.main(list(sys.argv[1:])) - -sys.exit(result) - - diff --git a/modules/api/functional_test/run.sh b/modules/api/functional_test/run.sh new file mode 100644 index 000000000..0616da2a3 --- /dev/null +++ b/modules/api/functional_test/run.sh @@ -0,0 +1,13 @@ +#!/usr/bin/env bash + +set -euo pipefail + +UPDATE_DEPS="" +if [ "$1" == "--update" ]; then + UPDATE_DEPS="$1" + shift +fi + +PARAMS=("$@") +./pytest.sh "${UPDATE_DEPS}" --suppress-no-test-exit-code -v live_tests -m "serial" --teardown=False "${PARAMS[@]}" +./pytest.sh --suppress-no-test-exit-code -v live_tests -n 2 -m "not serial" --teardown=True "${PARAMS[@]}" diff --git a/modules/api/functional_test/utils.py b/modules/api/functional_test/utils.py index eae7d56b7..4e7f747ad 100644 --- a/modules/api/functional_test/utils.py +++ b/modules/api/functional_test/utils.py @@ -1,33 +1,28 @@ -import sys -import pytest -import uuid import json +import uuid + import dns.query import dns.tsigkeyring import dns.update - -from utils import * -from hamcrest import * -from vinyldns_python import VinylDNSClient -from vinyldns_context import VinylDNSTestContext -from test_data import TestData from dns.resolver import * -import copy +from hamcrest import * + +from vinyldns_context import VinylDNSTestContext def verify_recordset(actual, expected): """ Runs basic assertions on the recordset to ensure that actual matches the expected """ - assert_that(actual['name'], is_(expected['name'])) - assert_that(actual['zoneId'], is_(expected['zoneId'])) - assert_that(actual['type'], is_(expected['type'])) - assert_that(actual['ttl'], is_(expected['ttl'])) - assert_that(actual, has_key('created')) - assert_that(actual['status'], is_not(none())) - assert_that(actual['id'], is_not(none())) - actual_records = [json.dumps(x) for x in actual['records']] - expected_records = [json.dumps(x) for x in expected['records']] + assert_that(actual["name"], 
is_(expected["name"])) + assert_that(actual["zoneId"], is_(expected["zoneId"])) + assert_that(actual["type"], is_(expected["type"])) + assert_that(actual["ttl"], is_(expected["ttl"])) + assert_that(actual, has_key("created")) + assert_that(actual["status"], is_not(none())) + assert_that(actual["id"], is_not(none())) + actual_records = [json.dumps(x) for x in actual["records"]] + expected_records = [json.dumps(x) for x in expected["records"]] for expected_record in expected_records: assert_that(actual_records, has_item(expected_record)) @@ -37,28 +32,28 @@ def gen_zone(): Generates a random zone """ return { - 'name': str(uuid.uuid4()) + '.', - 'email': 'test@test.com', - 'adminGroupId': 'test-group-id' + "name": str(uuid.uuid4()) + ".", + "email": "test@test.com", + "adminGroupId": "test-group-id" } def verify_acl_rule_is_present_once(rule, acl): def match(acl_rule): # remove displayName if it exists (allows for aclRule and aclRuleInfo comparison) - acl_rule.pop('displayName', None) + acl_rule.pop("displayName", None) return acl_rule == rule - matches = filter(match, acl['rules']) - assert_that(matches, has_length(1), 'Did not find exactly one match for acl rule') + matches = list(filter(match, acl["rules"])) + assert_that(matches, has_length(1), "Did not find exactly one match for acl rule") def verify_acl_rule_is_not_present(rule, acl): def match(acl_rule): return acl_rule != rule - matches = filter(match, acl['rules']) - assert_that(matches, has_length(len(acl['rules'])), 'ACL Rule was found but should not have been present') + matches = list(filter(match, acl["rules"])) + assert_that(matches, has_length(len(acl["rules"])), "ACL Rule was found but should not have been present") def rdata(dns_answers): @@ -69,7 +64,7 @@ def rdata(dns_answers): """ rdata_strings = [] if dns_answers: - rdata_strings = [x['rdata'] for x in dns_answers] + rdata_strings = [x["rdata"] for x in dns_answers] return rdata_strings @@ -80,10 +75,13 @@ def dns_server_port(zone): :param zone: 
a populated zone model :return: a tuple (host, port), port is an int """ - name_server = zone['connection']['primaryServer'] + name_server = zone["connection"]["primaryServer"] name_server_port = 53 - if ':' in name_server: - parts = name_server.split(':') + if VinylDNSTestContext.resolver_ip is not None: + name_server = VinylDNSTestContext.resolver_ip + + if ":" in name_server: + parts = name_server.split(":") name_server = parts[0] name_server_port = int(parts[1]) @@ -94,22 +92,22 @@ def dns_do_command(zone, record_name, record_type, command, ttl=0, rdata=""): """ Helper for dns add, update, delete """ + # Get the algorithm name from the DNS library (vinylDNS uses "-" in the name and dnspython uses "_") + algo_name = getattr(dns.tsig, VinylDNSTestContext.dns_key_algo.replace("-", "_")) keyring = dns.tsigkeyring.from_text({ - zone['connection']['keyName']: VinylDNSTestContext.dns_key + zone["connection"]["keyName"]: (algo_name, VinylDNSTestContext.dns_key) }) - name_server, name_server_port = dns_server_port(zone) + (name_server, name_server_port) = dns_server_port(zone) + fqdn = record_name + "." + zone["name"] + print("updating " + fqdn + " to have data " + rdata) + update = dns.update.Update(zone["name"], keyring=keyring) - fqdn = record_name + "." 
+ zone['name'] - - print "updating " + fqdn + " to have data " + rdata - - update = dns.update.Update(zone['name'], keyring=keyring) - if (command == 'add'): + if command == "add": update.add(fqdn, ttl, record_type, rdata) - elif (command == 'update'): + elif command == "update": update.replace(fqdn, ttl, record_type, rdata) - elif (command == 'delete'): + elif command == "delete": update.delete(fqdn, record_type) response = dns.query.udp(update, name_server, port=name_server_port, ignore_unexpected=True) @@ -167,42 +165,40 @@ def dns_resolve(zone, record_name, record_type): vinyldns_resolver.nameservers = [name_server] vinyldns_resolver.port = name_server_port - vinyldns_resolver.domain = zone['name'] + vinyldns_resolver.domain = zone["name"] - fqdn = record_name + '.' + vinyldns_resolver.domain + fqdn = record_name + "." + vinyldns_resolver.domain if record_name == vinyldns_resolver.domain: # assert that we are looking up the zone name / @ symbol fqdn = vinyldns_resolver.domain - print "looking up " + fqdn - try: - answers = vinyldns_resolver.query(fqdn, record_type) + answers = vinyldns_resolver.resolve(fqdn, record_type) except NXDOMAIN: - print "query returned NXDOMAIN" + print("query returned NXDOMAIN") answers = [] except dns.resolver.NoAnswer: - print "query returned NoAnswer" + print("query returned NoAnswer") answers = [] if answers: # dns python is goofy, looks like we have to parse text # each record in the rrset is delimited by a \n - records = str(answers.rrset).split('\n') + records = str(answers.rrset).split("\n") # for each record, we have exactly 4 fields in order: 1 record name; 2 TTL; 3 DCLASS; 4 TYPE; 5 RDATA # construct a simple dictionary based on that split - return map(lambda x: parse_record(x), records) + return [parse_record(x) for x in records] else: return [] def parse_record(record_string): # for each record, we have exactly 4 fields in order: 1 record name; 2 TTL; 3 DCLASS; 4 TYPE; 5 RDATA - parts = record_string.split(' ') + parts = 
record_string.split(" ") - print "record parts" - print str(parts) + print("record parts") + print(str(parts)) # any parts over 4 have to be kept together offset = record_string.find(parts[3]) + len(parts[3]) + 1 @@ -210,31 +206,31 @@ def parse_record(record_string): record_data = record_string[offset:offset + length] record = { - 'name': parts[0], - 'ttl': int(str(parts[1])), - 'dclass': parts[2], - 'type': parts[3], - 'rdata': record_data + "name": parts[0], + "ttl": int(str(parts[1])), + "dclass": parts[2], + "type": parts[3], + "rdata": record_data } - print "parsed record:" - print str(record) + print("parsed record:") + print(str(record)) return record def generate_acl_rule(access_level, **kw): acl_rule = { - 'accessLevel': access_level, - 'description': 'some_test_rule' + "accessLevel": access_level, + "description": "some_test_rule" } - if ('userId' in kw): - acl_rule['userId'] = kw['userId'] - if ('groupId' in kw): - acl_rule['groupId'] = kw['groupId'] - if ('recordTypes' in kw): - acl_rule['recordTypes'] = kw['recordTypes'] - if ('recordMask' in kw): - acl_rule['recordMask'] = kw['recordMask'] + if "userId" in kw: + acl_rule["userId"] = kw["userId"] + if "groupId" in kw: + acl_rule["groupId"] = kw["groupId"] + if "recordTypes" in kw: + acl_rule["recordTypes"] = kw["recordTypes"] + if "recordMask" in kw: + acl_rule["recordMask"] = kw["recordMask"] return acl_rule @@ -243,10 +239,10 @@ def add_rules_to_zone(zone, new_rules): import copy updated_zone = copy.deepcopy(zone) - updated_rules = updated_zone['acl']['rules'] - rules_to_add = filter(lambda x: x not in updated_rules, new_rules) + updated_rules = updated_zone["acl"]["rules"] + rules_to_add = [x for x in new_rules if x not in updated_rules] updated_rules.extend(rules_to_add) - updated_zone['acl']['rules'] = updated_rules + updated_zone["acl"]["rules"] = updated_rules return updated_zone @@ -254,9 +250,9 @@ def remove_rules_from_zone(zone, deleted_rules): import copy updated_zone = copy.deepcopy(zone) - 
existing_rules = updated_zone['acl']['rules'] - trimmed_rules = filter(lambda x: x in existing_rules, deleted_rules) - updated_zone['acl']['rules'] = trimmed_rules + existing_rules = updated_zone["acl"]["rules"] + trimmed_rules = [x for x in deleted_rules if x in existing_rules] + updated_zone["acl"]["rules"] = trimmed_rules return updated_zone @@ -292,28 +288,28 @@ def add_classless_acl_rules(test_context, rules): def remove_ok_acl_rules(test_context, rules): - zone = test_context.ok_vinyldns_client.get_zone(test_context.ok_zone['id'])['zone'] + zone = test_context.ok_vinyldns_client.get_zone(test_context.ok_zone["id"])["zone"] updated_zone = remove_rules_from_zone(zone, rules) update_change = test_context.ok_vinyldns_client.update_zone(updated_zone, status=202) test_context.ok_vinyldns_client.wait_until_zone_change_status_synced(update_change) def remove_ip4_acl_rules(test_context, rules): - zone = test_context.ok_vinyldns_client.get_zone(test_context.ip4_reverse_zone['id'])['zone'] + zone = test_context.ok_vinyldns_client.get_zone(test_context.ip4_reverse_zone["id"])["zone"] updated_zone = remove_rules_from_zone(zone, rules) update_change = test_context.ok_vinyldns_client.update_zone(updated_zone, status=202) test_context.ok_vinyldns_client.wait_until_zone_change_status_synced(update_change) def remove_ip6_acl_rules(test_context, rules): - zone = test_context.ok_vinyldns_client.get_zone(test_context.ip6_reverse_zone['id'])['zone'] + zone = test_context.ok_vinyldns_client.get_zone(test_context.ip6_reverse_zone["id"])["zone"] updated_zone = remove_rules_from_zone(zone, rules) update_change = test_context.ok_vinyldns_client.update_zone(updated_zone, status=202) test_context.ok_vinyldns_client.wait_until_zone_change_status_synced(update_change) def remove_classless_acl_rules(test_context, rules): - zone = test_context.ok_vinyldns_client.get_zone(test_context.classless_zone_delegation_zone['id'])['zone'] + zone = 
test_context.ok_vinyldns_client.get_zone(test_context.classless_zone_delegation_zone["id"])["zone"] updated_zone = remove_rules_from_zone(zone, rules) update_change = test_context.ok_vinyldns_client.update_zone(updated_zone, status=202) test_context.ok_vinyldns_client.wait_until_zone_change_status_synced(update_change) @@ -321,71 +317,71 @@ def remove_classless_acl_rules(test_context, rules): def clear_ok_acl_rules(test_context): zone = test_context.ok_zone - zone['acl']['rules'] = [] + zone["acl"]["rules"] = [] update_change = test_context.ok_vinyldns_client.update_zone(zone, status=202) test_context.ok_vinyldns_client.wait_until_zone_change_status_synced(update_change) def clear_shared_zone_acl_rules(test_context): zone = test_context.shared_zone - zone['acl']['rules'] = [] + zone["acl"]["rules"] = [] update_change = test_context.shared_zone_vinyldns_client.update_zone(zone, status=202) test_context.shared_zone_vinyldns_client.wait_until_zone_change_status_synced(update_change) def clear_ip4_acl_rules(test_context): zone = test_context.ip4_reverse_zone - zone['acl']['rules'] = [] + zone["acl"]["rules"] = [] update_change = test_context.ok_vinyldns_client.update_zone(zone, status=202) test_context.ok_vinyldns_client.wait_until_zone_change_status_synced(update_change) def clear_ip6_acl_rules(test_context): zone = test_context.ip6_reverse_zone - zone['acl']['rules'] = [] + zone["acl"]["rules"] = [] update_change = test_context.ok_vinyldns_client.update_zone(zone, status=202) test_context.ok_vinyldns_client.wait_until_zone_change_status_synced(update_change) def clear_classless_acl_rules(test_context): zone = test_context.classless_zone_delegation_zone - zone['acl']['rules'] = [] + zone["acl"]["rules"] = [] update_change = test_context.ok_vinyldns_client.update_zone(zone, status=202) test_context.ok_vinyldns_client.wait_until_zone_change_status_synced(update_change) -def seed_text_recordset(client, record_name, zone, records=[{'text': 'someText'}]): +def 
seed_text_recordset(client, record_name, zone, records=[{"text": "someText"}]): new_rs = { - 'zoneId': zone['id'], - 'name': record_name, - 'type': 'TXT', - 'ttl': 100, - 'records': records + "zoneId": zone["id"], + "name": record_name, + "type": "TXT", + "ttl": 100, + "records": records } result = client.create_recordset(new_rs, status=202) - result_rs = result['recordSet'] - if client.wait_until_recordset_exists(result_rs['zoneId'], result_rs['id']): - print "\r\n!!! record set exists !!!" + result_rs = result["recordSet"] + if client.wait_until_recordset_exists(result_rs["zoneId"], result_rs["id"]): + print("\r\n!!! record set exists !!!") else: - print "\r\n!!! record set does not exist !!!" + print("\r\n!!! record set does not exist !!!") return result_rs -def seed_ptr_recordset(client, record_name, zone, records=[{'ptrdname': 'foo.com.'}]): +def seed_ptr_recordset(client, record_name, zone, records=[{"ptrdname": "foo.com."}]): new_rs = { - 'zoneId': zone['id'], - 'name': record_name, - 'type': 'PTR', - 'ttl': 100, - 'records': records + "zoneId": zone["id"], + "name": record_name, + "type": "PTR", + "ttl": 100, + "records": records } result = client.create_recordset(new_rs, status=202) - result_rs = result['recordSet'] - if client.wait_until_recordset_exists(result_rs['zoneId'], result_rs['id']): - print "\r\n!!! record set exists !!!" + result_rs = result["recordSet"] + if client.wait_until_recordset_exists(result_rs["zoneId"], result_rs["id"]): + print("\r\n!!! record set exists !!!") else: - print "\r\n!!! record set does not exist !!!" + print("\r\n!!! 
record set does not exist !!!")
     return result_rs
@@ -393,23 +389,23 @@ def seed_ptr_recordset(client, record_name, zone, records=[{'ptrdname': 'foo.com
 def clear_zones(client):
     # Get the groups for the ok user
     groups = client.list_all_my_groups()
-    group_ids = map(lambda x: x['id'], groups)
+    group_ids = [x["id"] for x in groups]
 
-    zones = client.list_zones()['zones']
-
-    # we only want to delete zones that the ok user "owns"
-    zones_to_delete = filter(lambda x: (x['adminGroupId'] in group_ids) or (x['account'] in group_ids), zones)
-    zoneids_to_delete = map(lambda x: x['id'], zones_to_delete)
-
-    client.abandon_zones(zoneids_to_delete)
+    zones = client.list_zones()["zones"]
+    if len(zones) > 0:
+        # we only want to delete zones that the ok user "owns"
+        zones_to_delete = [x for x in zones if (x["adminGroupId"] in group_ids or x["account"] in group_ids) and x["accessLevel"] == "Delete"]
+        zone_ids_to_delete = [x["id"] for x in zones_to_delete]
+        if len(zone_ids_to_delete) > 0:
+            client.abandon_zones(zone_ids_to_delete)
 
 
 def clear_groups(client, exclude=[]):
     groups = client.list_all_my_groups()
-    group_ids = map(lambda x: x['id'], groups)
+    group_ids = [x["id"] for x in groups]
 
     for group_id in group_ids:
-        if not group_id in exclude:
+        if group_id not in exclude:
             client.delete_group(group_id, status=200)
@@ -519,31 +515,31 @@ def get_change_MX_json(input_name, ttl=200, preference=None, exchange=None, chan
     return json
 
 
-def get_recordset_json(zone, rname, type, rdata_list, ttl=200, ownergroup_id=None):
-    json = {
-        "zoneId": zone['id'],
+def create_recordset(zone, rname, recordset_type, rdata_list, ttl=200, ownergroup_id=None):
+    recordset_data = {
+        "zoneId": zone["id"],
         "name": rname,
-        "type": type,
+        "type": recordset_type,
         "ttl": ttl,
         "records": rdata_list
     }
 
     if ownergroup_id is not None:
-        json["ownerGroupId"] = ownergroup_id
+        recordset_data["ownerGroupId"] = ownergroup_id
 
-    return json
+    return recordset_data
 
 
 def clear_recordset_list(to_delete, client):
     delete_changes = []
     for result_rs in to_delete:
         try:
-            delete_result = client.delete_recordset(result_rs['zone']['id'], result_rs['recordSet']['id'], status=202)
+            delete_result = client.delete_recordset(result_rs["zone"]["id"], result_rs["recordSet"]["id"], status=202)
             delete_changes.append(delete_result)
         except:
             pass
     for change in delete_changes:
         try:
-            client.wait_until_recordset_change_status(change, 'Complete')
+            client.wait_until_recordset_change_status(change, "Complete")
         except:
             pass
@@ -558,19 +554,19 @@ def clear_zoneid_rsid_tuple_list(to_delete, client):
             pass
     for change in delete_changes:
         try:
-            client.wait_until_recordset_change_status(change, 'Complete')
+            client.wait_until_recordset_change_status(change, "Complete")
         except:
             pass
 
 
-def get_group_json(group_name, email="test@test.com", description="this is a description", members=[{'id': 'ok'}],
-                   admins=[{'id': 'ok'}]):
+def get_group_json(group_name, email="test@test.com", description="this is a description", members=[{"id": "ok"}],
+                   admins=[{"id": "ok"}]):
     return {
-        'name': group_name,
-        'email': email,
-        'description': description,
-        'members': members,
-        'admins': admins
+        "name": group_name,
+        "email": email,
+        "description": description,
+        "members": members,
+        "admins": admins
     }
@@ -579,15 +575,15 @@ def generate_record_name(zone_name=None):
     previous_frame = inspect.currentframe().f_back
     (filename, line_number, function_name, lines, index) = inspect.getframeinfo(previous_frame)
     if zone_name:
-        return '{0}-{1}.{2}'.format(function_name[:58], line_number, zone_name).replace('_', '-')
+        return "{0}-{1}.{2}".format(function_name[:58], line_number, zone_name).replace("_", "-")
     else:
-        return '{0}-{1}'.format(function_name[:58], line_number).replace('_', '-')
+        return "{0}-{1}".format(function_name[:58], line_number).replace("_", "-")
 
 
 def find_recordset_by_name(zone_id, rs_name, client):
     r = client.list_recordsets_by_zone(zone_id, record_name_filter=rs_name, status=200)
-    if r and 'recordSets' in r and len(r['recordSets']) > 0:
-        return r['recordSets'][0]
+    if r and "recordSets" in r and len(r["recordSets"]) > 0:
+        return r["recordSets"][0]
     else:
         return None
@@ -595,8 +591,8 @@ def find_recordset_by_name(zone_id, rs_name, client):
 def delete_recordset_by_name(zone_id, rs_name, client):
     rs = find_recordset_by_name(zone_id, rs_name, client)
     if rs:
-        client.delete_recordset(rs['zoneId'], rs['id'])
-        client.wait_until_recordset_deleted(rs['zoneId'], rs['id'])
+        client.delete_recordset(rs["zoneId"], rs["id"])
+        client.wait_until_recordset_deleted(rs["zoneId"], rs["id"])
         return rs
     else:
         return None
diff --git a/modules/api/functional_test/vinyldns_context.py b/modules/api/functional_test/vinyldns_context.py
index b9308add9..fa518462e 100644
--- a/modules/api/functional_test/vinyldns_context.py
+++ b/modules/api/functional_test/vinyldns_context.py
@@ -1,18 +1,22 @@
 class VinylDNSTestContext:
-    dns_ip = 'localhost'
-    dns_zone_name = 'vinyldns.'
-    dns_rev_v4_zone_name = '10.10.in-addr.arpa.'
-    dns_rev_v6_zone_name = '1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa.'
-    dns_key_name = 'vinyldns.'
-    dns_key = 'nzisn+4G2ldMn0q1CV3vsg=='
-    vinyldns_url = 'http://localhost:9000'
-    teardown = True
+    name_server_ip: str = None
+    resolver_ip: str = None
+    dns_zone_name: str = None
+    dns_key_name: str = None
+    dns_key: str = None
+    dns_key_algo: str = None
+    vinyldns_url: str = None
+    teardown: bool = False
+    enable_safety_check: bool = False
 
     @staticmethod
-    def configure(ip, zone, key_name, key, url, teardown):
-        VinylDNSTestContext.dns_ip = ip
+    def configure(name_server_ip: str, resolver_ip: str, zone: str, key_name: str, key: str, key_algo: str, url: str, teardown: bool, enable_safety_check: bool = False) -> None:
+        VinylDNSTestContext.name_server_ip = name_server_ip
+        VinylDNSTestContext.resolver_ip = resolver_ip
         VinylDNSTestContext.dns_zone_name = zone
         VinylDNSTestContext.dns_key_name = key_name
         VinylDNSTestContext.dns_key = key
+        VinylDNSTestContext.dns_key_algo = key_algo
         VinylDNSTestContext.vinyldns_url = url
-        VinylDNSTestContext.teardown = teardown.lower() == 'true'
+        VinylDNSTestContext.teardown = teardown
+        VinylDNSTestContext.enable_safety_check = enable_safety_check
diff --git a/modules/api/functional_test/vinyldns_python.py b/modules/api/functional_test/vinyldns_python.py
index 45489b971..0faf8fd6e 100644
--- a/modules/api/functional_test/vinyldns_python.py
+++ b/modules/api/functional_test/vinyldns_python.py
@@ -1,55 +1,47 @@
 import json
-import time
 import logging
-import collections
+import time
+from typing import Iterable
+from urllib.parse import urlparse, urlsplit, parse_qs, urljoin
 
 import requests
-from requests.adapters import HTTPAdapter
-from requests.packages.urllib3.util.retry import Retry
 from hamcrest import *
+from requests.adapters import HTTPAdapter, Retry
 
-# TODO: Didn't like this boto request signer, fix when moving back
-from boto_request_signer import BotoRequestSigner
-
-# Python 2/3 compatibility
-from requests.compat import urljoin, urlparse, urlsplit
-from builtins import str
-from future.utils import iteritems
-from future.moves.urllib.parse import parse_qs
-
-try:
-    basestring
-except NameError:
-    basestring = str
+from aws_request_signer import AwsSigV4RequestSigner
 
 logger = logging.getLogger(__name__)
 
-__all__ = [u'VinylDNSClient', u'MAX_RETRIES', u'RETRY_WAIT']
+__all__ = ["VinylDNSClient", "MAX_RETRIES", "RETRY_WAIT"]
 
 MAX_RETRIES = 40
 RETRY_WAIT = 0.05
 
+
 class VinylDNSClient(object):
 
     def __init__(self, url, access_key, secret_key):
         self.index_url = url
         self.headers = {
-            u'Accept': u'application/json, text/plain',
-            u'Content-Type': u'application/json'
+            "Accept": "application/json, text/plain",
+            "Content-Type": "application/json"
         }
 
-        self.signer = BotoRequestSigner(self.index_url,
-                                        access_key, secret_key)
-
+        self.signer = AwsSigV4RequestSigner(self.index_url, access_key, secret_key)
         self.session = self.requests_retry_session()
         self.session_not_found_ok = self.requests_retry_not_found_ok_session()
 
-    def requests_retry_not_found_ok_session(self,
-                                            retries=5,
-                                            backoff_factor=0.4,
-                                            status_forcelist=(500, 502, 504),
-                                            session=None,
-                                            ):
+    def __enter__(self):
+        return self
+
+    def __exit__(self, exc_type, exc_val, exc_tb):
+        self.tear_down()
+
+    def tear_down(self):
+        self.session.close()
+        self.session_not_found_ok.close()
+
+    def requests_retry_not_found_ok_session(self, retries=5, backoff_factor=0.4, status_forcelist=(500, 502, 504), session=None):
         session = session or requests.Session()
         retry = Retry(
             total=retries,
@@ -59,16 +51,11 @@ class VinylDNSClient(object):
             status_forcelist=status_forcelist,
         )
         adapter = HTTPAdapter(max_retries=retry)
-        session.mount(u'http://', adapter)
-        session.mount(u'https://', adapter)
+        session.mount("http://", adapter)
+        session.mount("https://", adapter)
 
         return session
 
-    def requests_retry_session(self,
-                               retries=5,
-                               backoff_factor=0.4,
-                               status_forcelist=(500, 502, 504),
-                               session=None,
-                               ):
+    def requests_retry_session(self, retries=5, backoff_factor=0.4, status_forcelist=(500, 502, 504), session=None):
         session = session or requests.Session()
         retry = Retry(
             total=retries,
@@ -77,18 +64,18 @@ class VinylDNSClient(object):
             backoff_factor=backoff_factor,
             status_forcelist=status_forcelist,
         )
-        adapter = HTTPAdapter(max_retries=retry)
-        session.mount(u'http://', adapter)
-        session.mount(u'https://', adapter)
+        adapter = HTTPAdapter(max_retries=retry, pool_connections=100, pool_maxsize=100)
+        session.mount("http://", adapter)
+        session.mount("https://", adapter)
 
         return session
 
-    def make_request(self, url, method=u'GET', headers=None, body_string=None, sign_request=True, not_found_ok=False, **kwargs):
+    def make_request(self, url, method="GET", headers=None, body_string=None, sign_request=True, not_found_ok=False, **kwargs):
         # pull out status or None
-        status_code = kwargs.pop(u'status', None)
+        status_code = kwargs.pop("status", None)
 
         # remove retries arg if provided
-        kwargs.pop(u'retries', None)
+        kwargs.pop("retries", None)
 
         path = urlparse(url).path
@@ -99,12 +86,11 @@ class VinylDNSClient(object):
         if query:
             # the problem with parse_qs is that it will return a list for ALL params, even if they are a single value
             # we need to essentially flatten the params if a param has only one value
-            query = dict((k, v if len(v)>1 else v[0])
-                         for k, v in iteritems(query))
+            query = dict((k, v if len(v) > 1 else v[0])
+                         for k, v in query.items())
 
         if sign_request:
-            signed_headers, signed_body = self.build_vinyldns_request(method, path, body_string, query,
-                                                                      with_headers=headers or {}, **kwargs)
+            signed_headers, signed_body = self.sign_request(method, path, body_string, query, with_headers=headers or {}, **kwargs)
         else:
             signed_headers = headers or {}
             signed_body = body_string
@@ -115,13 +101,13 @@ class VinylDNSClient(object):
             response = self.session.request(method, url, data=signed_body, headers=signed_headers, **kwargs)
 
         if status_code is not None:
-            if isinstance(status_code, collections.Iterable):
-                if not response.status_code in status_code:
-                    print response.text
+            if isinstance(status_code, Iterable):
+                if response.status_code not in status_code:
+                    print(response.text)
                 assert_that(response.status_code, is_in(status_code))
             else:
                 if response.status_code != status_code:
-                    print response.text
+                    print(response.text)
                 assert_that(response.status_code, is_(status_code))
 
         try:
@@ -134,7 +120,7 @@ class VinylDNSClient(object):
         Simple ping request
         :return: the content of the response, which should be PONG
         """
-        url = urljoin(self.index_url, '/ping')
+        url = urljoin(self.index_url, "/ping")
         response, data = self.make_request(url)
 
         return data
@@ -144,7 +130,7 @@ class VinylDNSClient(object):
         Gets processing status
        :return: the content of the response
         """
-        url = urljoin(self.index_url, '/status')
+        url = urljoin(self.index_url, "/status")
 
         response, data = self.make_request(url)
 
@@ -155,8 +141,8 @@ class VinylDNSClient(object):
         Update processing status
         :return: the content of the response
         """
-        url = urljoin(self.index_url, '/status?processingDisabled={}'.format(status))
-        response, data = self.make_request(url, 'POST', self.headers)
+        url = urljoin(self.index_url, "/status?processingDisabled={}".format(status))
+        response, data = self.make_request(url, "POST", self.headers)
 
         return data
@@ -165,7 +151,7 @@ class VinylDNSClient(object):
         Gets the current color for the application
         :return: the content of the response, which should be "blue" or "green"
         """
-        url = urljoin(self.index_url, '/color')
+        url = urljoin(self.index_url, "/color")
         response, data = self.make_request(url)
 
         return data
@@ -174,7 +160,7 @@ class VinylDNSClient(object):
         Checks the health of the app, asserts that a 200 should be returned, otherwise
         this will fail
         """
-        url = urljoin(self.index_url, '/health')
+        url = urljoin(self.index_url, "/health")
         self.make_request(url, sign_request=False)
 
     def create_group(self, group, **kwargs):
@@ -184,8 +170,8 @@ class VinylDNSClient(object):
         :return: the content of the response, which should be a group json
         """
 
-        url = urljoin(self.index_url, u'/groups')
-        response, data = self.make_request(url, u'POST', self.headers, json.dumps(group), **kwargs)
+        url = urljoin(self.index_url, "/groups")
+        response, data = self.make_request(url, "POST", self.headers, json.dumps(group), **kwargs)
 
         return data
 
@@ -196,8 +182,8 @@ class VinylDNSClient(object):
         :return: the group json
         """
 
-        url = urljoin(self.index_url, u'/groups/' + group_id)
-        response, data = self.make_request(url, u'GET', self.headers, **kwargs)
+        url = urljoin(self.index_url, "/groups/" + group_id)
+        response, data = self.make_request(url, "GET", self.headers, **kwargs)
 
         return data
 
@@ -207,9 +193,8 @@ class VinylDNSClient(object):
         :param group_id: Id of the group to delete
         :return: the group json
         """
-
-        url = urljoin(self.index_url, u'/groups/' + group_id)
-        response, data = self.make_request(url, u'DELETE', self.headers, not_found_ok=True, **kwargs)
+        url = urljoin(self.index_url, "/groups/" + group_id)
+        response, data = self.make_request(url, "DELETE", self.headers, not_found_ok=True, **kwargs)
 
         return data
 
@@ -221,8 +206,8 @@ class VinylDNSClient(object):
         :return: the content of the response, which should be a group json
         """
 
-        url = urljoin(self.index_url, u'/groups/{0}'.format(group_id))
-        response, data = self.make_request(url, u'PUT', self.headers, json.dumps(group), not_found_ok=True, **kwargs)
+        url = urljoin(self.index_url, "/groups/{0}".format(group_id))
+        response, data = self.make_request(url, "PUT", self.headers, json.dumps(group), not_found_ok=True, **kwargs)
 
         return data
 
@@ -238,16 +223,16 @@ class VinylDNSClient(object):
         args = []
         if group_name_filter:
-            args.append(u'groupNameFilter={0}'.format(group_name_filter))
+            args.append("groupNameFilter={0}".format(group_name_filter))
         if start_from:
-            args.append(u'startFrom={0}'.format(start_from))
+            args.append("startFrom={0}".format(start_from))
         if max_items is not None:
-            args.append(u'maxItems={0}'.format(max_items))
+            args.append("maxItems={0}".format(max_items))
         if ignore_access is not False:
-            args.append(u'ignoreAccess={0}'.format(ignore_access))
+            args.append("ignoreAccess={0}".format(ignore_access))
 
-        url = urljoin(self.index_url, u'/groups') + u'?' + u'&'.join(args)
-        response, data = self.make_request(url, u'GET', self.headers, **kwargs)
+        url = urljoin(self.index_url, "/groups") + "?" + "&".join(args)
+        response, data = self.make_request(url, "GET", self.headers, **kwargs)
 
         return data
 
@@ -261,23 +246,23 @@ class VinylDNSClient(object):
         groups = []
         args = []
         if group_name_filter:
-            args.append(u'groupNameFilter={0}'.format(group_name_filter))
+            args.append("groupNameFilter={0}".format(group_name_filter))
 
-        url = urljoin(self.index_url, u'/groups') + u'?' + u'&'.join(args)
-        response, data = self.make_request(url, u'GET', self.headers, **kwargs)
+        url = urljoin(self.index_url, "/groups") + "?" + "&".join(args)
+        response, data = self.make_request(url, "GET", self.headers, **kwargs)
 
-        groups.extend(data[u'groups'])
+        groups.extend(data["groups"])
 
-        while u'nextId' in data:
+        while "nextId" in data:
             args = []
 
             if group_name_filter:
-                args.append(u'groupNameFilter={0}'.format(group_name_filter))
-            if u'nextId' in data:
-                args.append(u'startFrom={0}'.format(data[u'nextId']))
+                args.append("groupNameFilter={0}".format(group_name_filter))
+            if "nextId" in data:
+                args.append("startFrom={0}".format(data["nextId"]))
 
-            response, data = self.make_request(url, u'GET', self.headers, **kwargs)
-            groups.extend(data[u'groups'])
+            response, data = self.make_request(url, "GET", self.headers, **kwargs)
+            groups.extend(data["groups"])
 
         return groups
 
@@ -290,17 +275,17 @@ class VinylDNSClient(object):
         :return: the json of the members
         """
         if start_from is None and max_items is None:
-            url = urljoin(self.index_url, u'/groups/{0}/members'.format(group_id))
+            url = urljoin(self.index_url, "/groups/{0}/members".format(group_id))
         elif start_from is None and max_items is not None:
-            url = urljoin(self.index_url, u'/groups/{0}/members?maxItems={1}'.format(group_id, max_items))
+            url = urljoin(self.index_url, "/groups/{0}/members?maxItems={1}".format(group_id, max_items))
         elif start_from is not None and max_items is None:
-            url = urljoin(self.index_url, u'/groups/{0}/members?startFrom={1}'.format(group_id, start_from))
+            url = urljoin(self.index_url, "/groups/{0}/members?startFrom={1}".format(group_id, start_from))
         elif start_from is not None and max_items is not None:
-            url = urljoin(self.index_url, u'/groups/{0}/members?startFrom={1}&maxItems={2}'.format(group_id,
-                                                                                                   start_from,
-                                                                                                   max_items))
+            url = urljoin(self.index_url, "/groups/{0}/members?startFrom={1}&maxItems={2}".format(group_id,
+                                                                                                  start_from,
+                                                                                                  max_items))
 
-        response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, **kwargs)
+        response, data = self.make_request(url, "GET", self.headers, not_found_ok=True, **kwargs)
 
         return data
 
@@ -310,8 +295,8 @@ class VinylDNSClient(object):
         :param group_id: the Id of the group
         :return: the user info of the admins
         """
-        url = urljoin(self.index_url, u'/groups/{0}/admins'.format(group_id))
-        response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, **kwargs)
+        url = urljoin(self.index_url, "/groups/{0}/admins".format(group_id))
+        response, data = self.make_request(url, "GET", self.headers, not_found_ok=True, **kwargs)
 
         return data
 
@@ -324,17 +309,17 @@ class VinylDNSClient(object):
         :return: the json of the members
         """
         if start_from is None and max_items is None:
-            url = urljoin(self.index_url, u'/groups/{0}/activity'.format(group_id))
+            url = urljoin(self.index_url, "/groups/{0}/activity".format(group_id))
         elif start_from is None and max_items is not None:
-            url = urljoin(self.index_url, u'/groups/{0}/activity?maxItems={1}'.format(group_id, max_items))
+            url = urljoin(self.index_url, "/groups/{0}/activity?maxItems={1}".format(group_id, max_items))
         elif start_from is not None and max_items is None:
-            url = urljoin(self.index_url, u'/groups/{0}/activity?startFrom={1}'.format(group_id, start_from))
+            url = urljoin(self.index_url, "/groups/{0}/activity?startFrom={1}".format(group_id, start_from))
         elif start_from is not None and max_items is not None:
-            url = urljoin(self.index_url, u'/groups/{0}/activity?startFrom={1}&maxItems={2}'.format(group_id,
-                                                                                                    start_from,
-                                                                                                    max_items))
+            url = urljoin(self.index_url, "/groups/{0}/activity?startFrom={1}&maxItems={2}".format(group_id,
                                                                                                    start_from,
+                                                                                                   max_items))
 
-        response, data = self.make_request(url, u'GET', self.headers, **kwargs)
+        response, data = self.make_request(url, "GET", self.headers, **kwargs)
 
         return data
 
@@ -344,8 +329,10 @@ class VinylDNSClient(object):
         :param zone: the zone to be created
         :return: the content of the response
         """
-        url = urljoin(self.index_url, u'/zones')
-        response, data = self.make_request(url, u'POST', self.headers, json.dumps(zone), **kwargs)
+
+        url = urljoin(self.index_url, "/zones")
+        response, data = self.make_request(url, "POST", self.headers, json.dumps(zone), **kwargs)
+
         return data
 
     def update_zone(self, zone, **kwargs):
@@ -354,18 +341,19 @@ class VinylDNSClient(object):
         :param zone: the zone to be created
         :return: the content of the response
         """
-        url = urljoin(self.index_url, u'/zones/{0}'.format(zone[u'id']))
-        response, data = self.make_request(url, u'PUT', self.headers, json.dumps(zone), not_found_ok=True, **kwargs)
+        url = urljoin(self.index_url, "/zones/{0}".format(zone["id"]))
+        response, data = self.make_request(url, "PUT", self.headers, json.dumps(zone), not_found_ok=True, **kwargs)
+
         return data
 
     def sync_zone(self, zone_id, **kwargs):
         """
         Syncs a zone
-        :param zone: the zone to be updated
+        :param zone_id: the id of the zone to be updated
         :return: the content of the response
         """
-        url = urljoin(self.index_url, u'/zones/{0}/sync'.format(zone_id))
-        response, data = self.make_request(url, u'POST', self.headers, not_found_ok=True, **kwargs)
+        url = urljoin(self.index_url, "/zones/{0}/sync".format(zone_id))
+        response, data = self.make_request(url, "POST", self.headers, not_found_ok=True, **kwargs)
 
         return data
 
@@ -374,8 +362,8 @@ class VinylDNSClient(object):
         :param zone_id: the id of the zone to be deleted
         :return: nothing, will fail if the status code was not expected
         """
-        url = urljoin(self.index_url, u'/zones/{0}'.format(zone_id))
-        response, data = self.make_request(url, u'DELETE', self.headers, not_found_ok=True, **kwargs)
+        url = urljoin(self.index_url, "/zones/{0}".format(zone_id))
+        response, data = self.make_request(url, "DELETE", self.headers, not_found_ok=True, **kwargs)
 
         return data
 
@@ -385,8 +373,8 @@ class VinylDNSClient(object):
         :param zone_id: the id of the zone to retrieve
         :return: the zone, or will 404 if not found
         """
-        url = urljoin(self.index_url, u'/zones/{0}'.format(zone_id))
-        response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, **kwargs)
+        url = urljoin(self.index_url, "/zones/{0}".format(zone_id))
+        response, data = self.make_request(url, "GET", self.headers, not_found_ok=True, **kwargs)
 
         return data
 
@@ -396,8 +384,8 @@ class VinylDNSClient(object):
         :param zone_name: the name of the zone to retrieve
         :return: the zone, or will 404 if not found
         """
-        url = urljoin(self.index_url, u'/zones/name/{0}'.format(zone_name))
-        response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, **kwargs)
+        url = urljoin(self.index_url, "/zones/name/{0}".format(zone_name))
+        response, data = self.make_request(url, "GET", self.headers, not_found_ok=True, **kwargs)
 
         return data
 
@@ -406,8 +394,8 @@ class VinylDNSClient(object):
         Gets list of configured backend ids
         :return: list of strings
         """
-        url = urljoin(self.index_url, u'/zones/backendids')
-        response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, **kwargs)
+        url = urljoin(self.index_url, "/zones/backendids")
+        response, data = self.make_request(url, "GET", self.headers, not_found_ok=True, **kwargs)
 
         return data
 
@@ -421,12 +409,12 @@ class VinylDNSClient(object):
         """
         args = []
         if start_from:
-            args.append(u'startFrom={0}'.format(start_from))
+            args.append("startFrom={0}".format(start_from))
         if max_items is not None:
-            args.append(u'maxItems={0}'.format(max_items))
-        url = urljoin(self.index_url, u'/zones/{0}/changes'.format(zone_id)) + u'?' + u'&'.join(args)
+            args.append("maxItems={0}".format(max_items))
+        url = urljoin(self.index_url, "/zones/{0}/changes".format(zone_id)) + "?" + "&".join(args)
 
-        response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, **kwargs)
+        response, data = self.make_request(url, "GET", self.headers, not_found_ok=True, **kwargs)
         return data
 
     def list_recordset_changes(self, zone_id, start_from=None, max_items=None, **kwargs):
@@ -439,12 +427,12 @@ class VinylDNSClient(object):
         """
         args = []
         if start_from:
-            args.append(u'startFrom={0}'.format(start_from))
+            args.append("startFrom={0}".format(start_from))
         if max_items is not None:
-            args.append(u'maxItems={0}'.format(max_items))
-        url = urljoin(self.index_url, u'/zones/{0}/recordsetchanges'.format(zone_id)) + u'?' + u'&'.join(args)
+            args.append("maxItems={0}".format(max_items))
+        url = urljoin(self.index_url, "/zones/{0}/recordsetchanges".format(zone_id)) + "?" + "&".join(args)
 
-        response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, **kwargs)
+        response, data = self.make_request(url, "GET", self.headers, not_found_ok=True, **kwargs)
         return data
 
     def list_zones(self, name_filter=None, start_from=None, max_items=None, ignore_access=False, **kwargs):
@@ -452,25 +440,25 @@ class VinylDNSClient(object):
         Gets a list of zones that currently exist
         :return: a list of zones
         """
-        url = urljoin(self.index_url, u'/zones')
+        url = urljoin(self.index_url, "/zones")
 
         query = []
         if name_filter:
-            query.append(u'nameFilter=' + name_filter)
+            query.append("nameFilter=" + name_filter)
 
         if start_from:
-            query.append(u'startFrom=' + str(start_from))
+            query.append("startFrom=" + str(start_from))
 
         if max_items:
-            query.append(u'maxItems=' + str(max_items))
+            query.append("maxItems=" + str(max_items))
 
        if ignore_access:
-            query.append(u'ignoreAccess=' + str(ignore_access))
+            query.append("ignoreAccess=" + str(ignore_access))
 
         if query:
-            url = url + u'?' + u'&'.join(query)
+            url = url + "?" + "&".join(query)
 
-        response, data = self.make_request(url, u'GET', self.headers, **kwargs)
+        response, data = self.make_request(url, "GET", self.headers, **kwargs)
         return data
 
     def create_recordset(self, recordset, **kwargs):
@@ -479,11 +467,12 @@ class VinylDNSClient(object):
         :param recordset: the recordset to be created
         :return: the content of the response
         """
-        if recordset and u'name' in recordset:
-            recordset[u'name'] = recordset[u'name'].replace(u'_', u'-')
+        if recordset and "name" in recordset:
+            recordset["name"] = recordset["name"].replace("_", "-")
+
+        url = urljoin(self.index_url, "/zones/{0}/recordsets".format(recordset["zoneId"]))
+        response, data = self.make_request(url, "POST", self.headers, json.dumps(recordset), **kwargs)
 
-        url = urljoin(self.index_url, u'/zones/{0}/recordsets'.format(recordset[u'zoneId']))
-        response, data = self.make_request(url, u'POST', self.headers, json.dumps(recordset), **kwargs)
         return data
 
     def delete_recordset(self, zone_id, rs_id, **kwargs):
@@ -493,9 +482,9 @@ class VinylDNSClient(object):
         :param rs_id: the id of the recordset to be deleted
         :return: the content of the response
         """
-        url = urljoin(self.index_url, u'/zones/{0}/recordsets/{1}'.format(zone_id, rs_id))
+        url = urljoin(self.index_url, "/zones/{0}/recordsets/{1}".format(zone_id, rs_id))
 
-        response, data = self.make_request(url, u'DELETE', self.headers, not_found_ok=True, **kwargs)
+        response, data = self.make_request(url, "DELETE", self.headers, not_found_ok=True, **kwargs)
         return data
 
     def update_recordset(self, recordset, **kwargs):
@@ -504,9 +493,9 @@ class VinylDNSClient(object):
         :param recordset: the recordset to be updated
         :return: the content of the response
         """
-        url = urljoin(self.index_url, u'/zones/{0}/recordsets/{1}'.format(recordset[u'zoneId'], recordset[u'id']))
+        url = urljoin(self.index_url, "/zones/{0}/recordsets/{1}".format(recordset["zoneId"], recordset["id"]))
 
-        response, data = self.make_request(url, u'PUT', self.headers, json.dumps(recordset), not_found_ok=True, **kwargs)
+        response, data = self.make_request(url, "PUT", self.headers, json.dumps(recordset), not_found_ok=True, **kwargs)
         return data
 
     def get_recordset(self, zone_id, rs_id, **kwargs):
@@ -516,9 +505,9 @@ class VinylDNSClient(object):
         :param rs_id: the id of the recordset to be retrieved
         :return: the content of the response
         """
-        url = urljoin(self.index_url, u'/zones/{0}/recordsets/{1}'.format(zone_id, rs_id))
+        url = urljoin(self.index_url, "/zones/{0}/recordsets/{1}".format(zone_id, rs_id))
 
-        response, data = self.make_request(url, u'GET', self.headers, None, not_found_ok=True, **kwargs)
+        response, data = self.make_request(url, "GET", self.headers, None, not_found_ok=True, **kwargs)
         return data
 
     def get_recordset_change(self, zone_id, rs_id, change_id, **kwargs):
@@ -529,9 +518,9 @@ class VinylDNSClient(object):
         :param change_id: the id of the change to be retrieved
         :return: the content of the response
         """
-        url = urljoin(self.index_url, u'/zones/{0}/recordsets/{1}/changes/{2}'.format(zone_id, rs_id, change_id))
+        url = urljoin(self.index_url, "/zones/{0}/recordsets/{1}/changes/{2}".format(zone_id, rs_id, change_id))
 
-        response, data = self.make_request(url, u'GET', self.headers, None, not_found_ok=True, **kwargs)
+        response, data = self.make_request(url, "GET", self.headers, None, not_found_ok=True, **kwargs)
         return data
 
     def list_recordsets_by_zone(self, zone_id, start_from=None, max_items=None, record_name_filter=None, record_type_filter=None, name_sort=None, **kwargs):
@@ -547,19 +536,19 @@ class VinylDNSClient(object):
         """
         args = []
         if start_from:
-            args.append(u'startFrom={0}'.format(start_from))
+            args.append("startFrom={0}".format(start_from))
         if max_items is not None:
-            args.append(u'maxItems={0}'.format(max_items))
+            args.append("maxItems={0}".format(max_items))
         if record_name_filter:
-            args.append(u'recordNameFilter={0}'.format(record_name_filter))
+            args.append("recordNameFilter={0}".format(record_name_filter))
         if record_type_filter:
-            args.append(u'recordTypeFilter={0}'.format(record_type_filter))
+            args.append("recordTypeFilter={0}".format(record_type_filter))
         if name_sort:
-            args.append(u'nameSort={0}'.format(name_sort))
+            args.append("nameSort={0}".format(name_sort))
 
-        url = urljoin(self.index_url, u'/zones/{0}/recordsets'.format(zone_id)) + u'?' + u'&'.join(args)
+        url = urljoin(self.index_url, "/zones/{0}/recordsets".format(zone_id)) + "?" + "&".join(args)
 
-        response, data = self.make_request(url, u'GET', self.headers, **kwargs)
+        response, data = self.make_request(url, "GET", self.headers, **kwargs)
         return data
 
     def create_batch_change(self, batch_change_input, allow_manual_review=True, **kwargs):
@@ -569,10 +558,10 @@ class VinylDNSClient(object):
         :param allow_manual_review: if true and manual review is enabled soft failures are treated as hard failures
         :return: the content of the response
         """
-        url = urljoin(self.index_url, u'/zones/batchrecordchanges')
+        url = urljoin(self.index_url, "/zones/batchrecordchanges")
         if allow_manual_review is not None:
-            url = url + (u'?' + u'allowManualReview={0}'.format(allow_manual_review))
-        response, data = self.make_request(url, u'POST', self.headers, json.dumps(batch_change_input), **kwargs)
+            url = url + ("?" + "allowManualReview={0}".format(allow_manual_review))
+        response, data = self.make_request(url, "POST", self.headers, json.dumps(batch_change_input), **kwargs)
         return data
 
     def get_batch_change(self, batch_change_id, **kwargs):
@@ -581,8 +570,8 @@ class VinylDNSClient(object):
         :param batch_change_id: the unique identifier of the batchchange
         :return: the content of the response
         """
-        url = urljoin(self.index_url, u'/zones/batchrecordchanges/{0}'.format(batch_change_id))
-        response, data = self.make_request(url, u'GET', self.headers, None, not_found_ok=True, **kwargs)
+        url = urljoin(self.index_url, "/zones/batchrecordchanges/{0}".format(batch_change_id))
+        response, data = self.make_request(url, "GET", self.headers, None, not_found_ok=True, **kwargs)
         return data
 
     def reject_batch_change(self, batch_change_id, reject_batch_change_input=None, **kwargs):
@@ -592,8 +581,8 @@ class VinylDNSClient(object):
         :param reject_batch_change_input: optional body for reject batch change request
         :return: the content of the response
         """
-        url = urljoin(self.index_url, u'/zones/batchrecordchanges/{0}/reject'.format(batch_change_id))
-        _, data = self.make_request(url, u'POST', self.headers, json.dumps(reject_batch_change_input), **kwargs)
+        url = urljoin(self.index_url, "/zones/batchrecordchanges/{0}/reject".format(batch_change_id))
+        _, data = self.make_request(url, "POST", self.headers, json.dumps(reject_batch_change_input), **kwargs)
         return data
 
     def approve_batch_change(self, batch_change_id, approve_batch_change_input=None, **kwargs):
@@ -603,8 +592,8 @@ class VinylDNSClient(object):
         :param approve_batch_change_input: optional body for approve batch change request
         :return: the content of the response
         """
-        url = urljoin(self.index_url, u'/zones/batchrecordchanges/{0}/approve'.format(batch_change_id))
-        _, data = self.make_request(url, u'POST', self.headers, json.dumps(approve_batch_change_input), **kwargs)
+        url = urljoin(self.index_url, "/zones/batchrecordchanges/{0}/approve".format(batch_change_id))
+        _, data = self.make_request(url, "POST", self.headers, json.dumps(approve_batch_change_input), **kwargs)
        return data
 
     def cancel_batch_change(self, batch_change_id, **kwargs):
@@ -613,8 +602,8 @@ class VinylDNSClient(object):
         :param batch_change_id: ID of the batch change to cancel
         :return: the content of the response
         """
-        url = urljoin(self.index_url, u'/zones/batchrecordchanges/{0}/cancel'.format(batch_change_id))
-        _, data = self.make_request(url, u'POST', self.headers, **kwargs)
+        url = urljoin(self.index_url, "/zones/batchrecordchanges/{0}/cancel".format(batch_change_id))
+        _, data = self.make_request(url, "POST", self.headers, **kwargs)
         return data
 
     def list_batch_change_summaries(self, start_from=None, max_items=None, ignore_access=False, approval_status=None, **kwargs):
@@ -624,60 +613,19 @@ class VinylDNSClient(object):
         :return: the content of the response
         """
         args = []
         if start_from:
-            args.append(u'startFrom={0}'.format(start_from))
+            args.append("startFrom={0}".format(start_from))
         if max_items is not None:
-            args.append(u'maxItems={0}'.format(max_items))
+            args.append("maxItems={0}".format(max_items))
         if ignore_access:
-            args.append(u'ignoreAccess={0}'.format(ignore_access))
+            args.append("ignoreAccess={0}".format(ignore_access))
         if approval_status:
-            args.append(u'approvalStatus={0}'.format(approval_status))
+            args.append("approvalStatus={0}".format(approval_status))
 
-        url = urljoin(self.index_url, u'/zones/batchrecordchanges') + u'?' + u'&'.join(args)
+        url = urljoin(self.index_url, "/zones/batchrecordchanges") + "?" + "&".join(args)
 
-        response, data = self.make_request(url, u'GET', self.headers, **kwargs)
+        response, data = self.make_request(url, "GET", self.headers, **kwargs)
         return data
 
-    def build_vinyldns_request(self, method, path, body_data, params=None, **kwargs):
-
-        if isinstance(body_data, basestring):
-            body_string = body_data
-        else:
-            body_string = json.dumps(body_data)
-
-        new_headers = {u'X-Amz-Target': u'VinylDNS'}
-        new_headers.update(kwargs.get(u'with_headers', dict()))
-
-        suppress_headers = kwargs.get(u'suppress_headers', list())
-
-        headers = self.build_headers(new_headers, suppress_headers)
-
-        auth_header = self.signer.build_auth_header(method, path, headers, body_string, params)
-        headers[u'Authorization'] = auth_header
-
-        return headers, body_string
-
-    @staticmethod
-    def build_headers(new_headers, suppressed_keys):
-        """Construct HTTP headers for a request."""
-
-        def canonical_header_name(field_name):
-            return u'-'.join(word.capitalize() for word in field_name.split(u'-'))
-
-        import datetime
-        now = datetime.datetime.utcnow()
-        headers = {u'Content-Type': u'application/x-amz-json-1.0',
-                   u'Date': now.strftime(u'%a, %d %b %Y %H:%M:%S GMT'),
-                   u'X-Amz-Date': now.strftime(u'%Y%m%dT%H%M%SZ')}
-
-        for k, v in iteritems(new_headers):
-            headers[canonical_header_name(k)] = v
-
-        for k in map(canonical_header_name, suppressed_keys):
-            if k in headers:
-                del headers[k]
-
-        return headers
-
     def add_zone_acl_rule_with_wait(self, zone_id, acl_rule, sign_request=True, **kwargs):
         """
         Puts an acl rule on the zone and waits for success
@@ -699,8 +647,8 @@ class VinylDNSClient(object):
         :param sign_request: An indicator if we should sign the request; useful for testing auth
         :return: the content of the response
         """
-        url = urljoin(self.index_url, '/zones/{0}/acl/rules'.format(zone_id))
-        response, data = self.make_request(url, 'PUT', self.headers, json.dumps(acl_rule), sign_request=sign_request, **kwargs)
+        url = urljoin(self.index_url, "/zones/{0}/acl/rules".format(zone_id))
+        response, data = self.make_request(url, "PUT", self.headers, json.dumps(acl_rule), sign_request=sign_request, **kwargs)
 
         return data
 
@@ -725,18 +673,19 @@ class VinylDNSClient(object):
         :param sign_request: An indicator if we should sign the request; useful for testing auth
         :return: the content of the response
         """
-        url = urljoin(self.index_url, '/zones/{0}/acl/rules'.format(zone_id))
-        response, data = self.make_request(url, 'DELETE', self.headers, json.dumps(acl_rule), sign_request=sign_request, **kwargs)
+        url = urljoin(self.index_url, "/zones/{0}/acl/rules".format(zone_id))
+        response, data = self.make_request(url, "DELETE", self.headers, json.dumps(acl_rule), sign_request=sign_request,
+                                           **kwargs)
 
         return data
 
     def wait_until_recordset_deleted(self, zone_id, record_set_id, **kwargs):
         retries = MAX_RETRIES
-        url = urljoin(self.index_url, u'/zones/{0}/recordsets/{1}'.format(zone_id, record_set_id))
-        response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, status=(200, 404), **kwargs)
+        url = urljoin(self.index_url, "/zones/{0}/recordsets/{1}".format(zone_id, record_set_id))
+        response, data = self.make_request(url, "GET", self.headers, not_found_ok=True, status=(200, 404), **kwargs)
         while response != 404 and retries > 0:
-            url = urljoin(self.index_url, u'/zones/{0}/recordsets/{1}'.format(zone_id, record_set_id))
-            response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, status=(200, 404), **kwargs)
+            url = urljoin(self.index_url, "/zones/{0}/recordsets/{1}".format(zone_id, record_set_id))
+            response, data = self.make_request(url, "GET", self.headers, not_found_ok=True, status=(200, 404), **kwargs)
             retries -= 1
             time.sleep(RETRY_WAIT)
@@ -749,16 +698,16 @@ class VinylDNSClient(object):
         latest_change = zone_change
         retries = MAX_RETRIES
 
-        while latest_change[u'status'] != 'Synced' and latest_change[u'status'] != 'Failed' and retries > 0:
-            changes = self.list_zone_changes(zone_change['zone']['id'])
-            if u'zoneChanges' in changes:
-                matching_changes = filter(lambda change: change[u'id'] == zone_change[u'id'], changes[u'zoneChanges'])
+        while latest_change["status"] != "Synced" and latest_change["status"] != "Failed" and retries > 0:
+            changes = self.list_zone_changes(zone_change["zone"]["id"])
+            if "zoneChanges" in changes:
+                matching_changes = [change for change in changes["zoneChanges"] if change["id"] == zone_change["id"]]
                 if len(matching_changes) > 0:
                     latest_change = matching_changes[0]
             time.sleep(RETRY_WAIT)
            retries -= 1
 
-        assert_that(latest_change[u'status'], is_('Synced'))
+        assert_that(latest_change["status"], is_("Synced"))
 
     def wait_until_zone_deleted(self, zone_id, **kwargs):
         """
@@ -769,11 +718,11 @@ class VinylDNSClient(object):
        :return: True when the zone deletion is complete
         False if the timeout expires
         """
         retries = MAX_RETRIES
-        url = urljoin(self.index_url, u'/zones/{0}'.format(zone_id))
-        response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, status=(200, 404), **kwargs)
+        url = urljoin(self.index_url, "/zones/{0}".format(zone_id))
+        response, data = self.make_request(url, "GET", self.headers, not_found_ok=True, status=(200, 404), **kwargs)
         while response != 404 and retries > 0:
-            url = urljoin(self.index_url, u'/zones/{0}'.format(zone_id))
-            response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, status=(200, 404), **kwargs)
+            url = urljoin(self.index_url, "/zones/{0}".format(zone_id))
+            response, data = self.make_request(url, "GET", self.headers, not_found_ok=True, status=(200, 404), **kwargs)
             retries -= 1
             time.sleep(RETRY_WAIT)
@@ -788,12 +737,12 @@ class VinylDNSClient(object):
         retries = MAX_RETRIES
         zone_request = self.get_zone(zone_id)
 
-        while (u'zone' not in zone_request or zone_request[u'zone'][u'status'] != 'Active') and retries > 0:
-            zone_request = self.get_zone(zone_id)
+        while ("zone" not in zone_request or
zone_request["zone"]["status"] != "Active") and retries > 0: time.sleep(RETRY_WAIT) retries -= 1 + zone_request = self.get_zone(zone_id) - assert_that(zone_request[u'zone'][u'status'], is_('Active')) + assert_that(zone_request["zone"]["status"], is_("Active")) def wait_until_recordset_exists(self, zone_id, record_set_id, **kwargs): """ @@ -805,12 +754,12 @@ class VinylDNSClient(object): :return: True when the recordset creation is complete False if the timeout expires """ retries = MAX_RETRIES - url = urljoin(self.index_url, u'/zones/{0}/recordsets/{1}'.format(zone_id, record_set_id)) - response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, status=(200, 404), **kwargs) + url = urljoin(self.index_url, "/zones/{0}/recordsets/{1}".format(zone_id, record_set_id)) + response, data = self.make_request(url, "GET", self.headers, not_found_ok=True, status=(200, 404), **kwargs) while response != 200 and retries > 0: - response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, status=(200, 404), **kwargs) retries -= 1 time.sleep(RETRY_WAIT) + response, data = self.make_request(url, "GET", self.headers, not_found_ok=True, status=(200, 404), **kwargs) if response == 200: return data @@ -818,7 +767,7 @@ class VinylDNSClient(object): return response == 200 def abandon_zones(self, zone_ids, **kwargs): - #delete each zone + # delete each zone for zone_id in zone_ids: self.delete_zone(zone_id, status=(202, 404)) @@ -835,28 +784,27 @@ class VinylDNSClient(object): """ change = rs_change retries = MAX_RETRIES - while change['status'] != expected_status and retries > 0: - latest_change = self.get_recordset_change(change['recordSet']['zoneId'], change['recordSet']['id'], - change['id'], status=(200,404)) + while change["status"] != expected_status and retries > 0: + time.sleep(RETRY_WAIT) + retries -= 1 + latest_change = self.get_recordset_change(change["recordSet"]["zoneId"], change["recordSet"]["id"], + change["id"], status=(200, 404)) 
if "Unable to find record set change" in latest_change: change = change else: change = latest_change - time.sleep(RETRY_WAIT) - retries -= 1 + if change["status"] != expected_status: + print("Failed waiting for record change status") + print(json.dumps(change, indent=3)) + if "systemMessage" in change: + print("systemMessage is " + change["systemMessage"]) - if change['status'] != expected_status: - print 'Failed waiting for record change status' - print json.dumps(change, indent=3) - if 'systemMessage' in change: - print 'systemMessage is ' + change['systemMessage'] - - assert_that(change['status'], is_(expected_status)) + assert_that(change["status"], is_(expected_status)) return change def batch_is_completed(self, batch_change): - return batch_change['status'] in ['Complete', 'Failed', 'PartialFailure'] + return batch_change["status"] in ["Complete", "Failed", "PartialFailure"] def wait_until_batch_change_completed(self, batch_change): """ @@ -867,20 +815,35 @@ class VinylDNSClient(object): """ change = batch_change retries = MAX_RETRIES - while not self.batch_is_completed(change) and retries > 0: - latest_change = self.get_batch_change(change['id'], status=(200,404)) + time.sleep(RETRY_WAIT) + retries -= 1 + latest_change = self.get_batch_change(change["id"], status=(200, 404)) if "cannot be found" in latest_change: change = change else: change = latest_change - time.sleep(RETRY_WAIT) - retries -= 1 - if not self.batch_is_completed(change): - print 'Failed waiting for record change status' - print change + print("Failed waiting for record change status") + print(change) assert_that(self.batch_is_completed(change), is_(True)) return change + + def sign_request(self, method, path, body_data, params=None, **kwargs): + if isinstance(body_data, str): + body_string = body_data + else: + body_string = json.dumps(body_data) + + # We need to add the X-Amz-Date header so that we get a date in a format expected by the API + from datetime import datetime + request_headers 
= { + "X-Amz-Date": datetime.utcnow().strftime("%Y%m%dT%H%M%SZ") + } + request_headers.update(kwargs.get("with_headers", dict())) + + headers = self.signer.sign_request_headers(method, path, request_headers, body_string, params) + + return headers, body_string diff --git a/modules/api/functional_test/zone_inject.py b/modules/api/functional_test/zone_inject.py deleted file mode 100644 index 111058e17..000000000 --- a/modules/api/functional_test/zone_inject.py +++ /dev/null @@ -1,48 +0,0 @@ -import requests -import json - -newzone = "http://localhost:9000/zones" - - -names = ["cap", "video", "aae", "papi", "dns-ops", "ios", "home", "android", "games", "viper", "headwaters", "xtv", "consec", "media", "accounts"]; - -records = ["10.25.3.2","155.65.10.3", "10.1.1.1", "168.82.76.5", "192.168.99.88", "FE80:0000:0000:0000:0202:B3FF:FE1E:8329", "GF77:0000:0000:0000:0411:B3DF:FE2E:4444", "CC42:0000:0000:0000:0509:B3FF:FE3E:6543", "BG50:0000:0000:0000:0203:C2EE:G3F4:9823","AA90:0000:0000:0000:0608:C2EE:FE4E:1234", "staging", "test", "admin", "assets", "admin"]; - -for x in range(0, 15): - zonename = names[x] - zoneemail = 'testuser'+ str(x) +'@example.com' - payload = {"name": zonename, "origin": "vinyldns", "email": zoneemail} - headers = {'Content-type': 'application/json'} - r = requests.post(newzone, data=json.dumps(payload),headers=headers) - print(r.text) - - -zones = requests.get(newzone) -zone_data = zones.json() - -z=0 -for i in zone_data['zones']: - if z<5: - z=z+1 - recurl = newzone + '/' + str(i['id']) + '/recordsets' - print recurl - payload = {"zoneId":i['id'],"name":"record."+i['name'],"type":"A","ttl":300,"records":[{"address":records[z-1]}]} - headers = {'Content-type': 'application/json'} - r = requests.post(recurl, data=json.dumps(payload),headers=headers) - print(r.text) - elif 4 Date: Tue, 28 Sep 2021 12:46:07 -0400 Subject: [PATCH 11/82] Update file permissions --- build/docker/test/run.sh | 0 docker/admin/update-support-user.py | 0 docker/api/run.sh | 0 
docker/functest/run.sh | 0 modules/api/functional_test/__init__.py | 0 modules/api/functional_test/pytest.sh | 0 modules/api/functional_test/run.sh | 0 7 files changed, 0 insertions(+), 0 deletions(-) mode change 100644 => 100755 build/docker/test/run.sh mode change 100755 => 100644 docker/admin/update-support-user.py mode change 100644 => 100755 docker/api/run.sh mode change 100644 => 100755 docker/functest/run.sh mode change 100755 => 100644 modules/api/functional_test/__init__.py mode change 100644 => 100755 modules/api/functional_test/pytest.sh mode change 100644 => 100755 modules/api/functional_test/run.sh diff --git a/build/docker/test/run.sh b/build/docker/test/run.sh old mode 100644 new mode 100755 diff --git a/docker/admin/update-support-user.py b/docker/admin/update-support-user.py old mode 100755 new mode 100644 diff --git a/docker/api/run.sh b/docker/api/run.sh old mode 100644 new mode 100755 diff --git a/docker/functest/run.sh b/docker/functest/run.sh old mode 100644 new mode 100755 diff --git a/modules/api/functional_test/__init__.py b/modules/api/functional_test/__init__.py old mode 100755 new mode 100644 diff --git a/modules/api/functional_test/pytest.sh b/modules/api/functional_test/pytest.sh old mode 100644 new mode 100755 diff --git a/modules/api/functional_test/run.sh b/modules/api/functional_test/run.sh old mode 100644 new mode 100755 From 0b3824ad6c9db6ecf454c3a6fa7ee83214696243 Mon Sep 17 00:00:00 2001 From: "Emerle, Ryan" Date: Wed, 29 Sep 2021 14:07:57 -0400 Subject: [PATCH 12/82] WIP - Functional Test Updates - Add custom network to `docker-compose-func-test.yml` for deterministic IP addresses - Update tests to remove hard-coded zone names - Fix various issues with cleanup --- docker/api/docker.conf | 4 +- .../bind9/etc/_template/named.partition.conf | 2 +- docker/bind9/etc/named.conf.local | 6 +- docker/bind9/etc/named.partition2.conf | 2 +- docker/bind9/etc/named.partition3.conf | 2 +- docker/bind9/etc/named.partition4.conf | 2 +- 
docker/bind9/zones/_template/sync-test.hosts | 2 +- docker/bind9/zones/partition1/sync-test.hosts | 2 +- docker/bind9/zones/partition2/sync-test.hosts | 2 +- docker/bind9/zones/partition3/sync-test.hosts | 2 +- docker/bind9/zones/partition4/sync-test.hosts | 2 +- docker/docker-compose-func-test.yml | 97 ++- modules/api/functional_test/conftest.py | 4 +- .../live_tests/authentication_test.py | 8 +- .../batch/approve_batch_change_test.py | 2 - .../batch/create_batch_change_test.py | 770 +++++++----------- .../live_tests/batch/get_batch_change_test.py | 31 +- .../batch/list_batch_change_summaries_test.py | 9 +- .../batch/reject_batch_change_test.py | 2 - .../functional_test/live_tests/conftest.py | 2 +- .../live_tests/internal/color_test.py | 3 - .../live_tests/internal/health_test.py | 7 - .../live_tests/internal/ping_test.py | 3 - .../live_tests/internal/status_test.py | 8 +- .../list_batch_summaries_test_context.py | 15 +- .../live_tests/list_groups_test_context.py | 10 +- .../list_recordsets_test_context.py | 3 +- .../live_tests/list_zones_test_context.py | 24 +- .../membership/create_group_test.py | 5 - .../membership/delete_group_test.py | 36 +- .../membership/get_group_changes_test.py | 7 - .../live_tests/membership/get_group_test.py | 19 +- .../membership/list_group_admins_test.py | 17 +- .../membership/list_group_members_test.py | 15 +- .../membership/list_my_groups_test.py | 25 +- .../membership/update_group_test.py | 9 - .../live_tests/production_verify_test.py | 3 +- .../recordsets/create_recordset_test.py | 206 ++--- .../recordsets/delete_recordset_test.py | 98 +-- .../recordsets/get_recordset_test.py | 41 +- .../recordsets/list_recordset_changes_test.py | 1 - .../recordsets/list_recordsets_test.py | 22 +- .../recordsets/update_recordset_test.py | 213 ++--- .../live_tests/shared_zone_test_context.py | 43 +- .../live_tests/zones/create_zone_test.py | 99 +-- .../live_tests/zones/delete_zone_test.py | 10 +- .../live_tests/zones/get_zone_test.py | 13 +- 
.../zones/list_zone_changes_test.py | 2 +- .../live_tests/zones/list_zones_test.py | 74 +- .../live_tests/zones/sync_zone_test.py | 181 ++-- .../live_tests/zones/update_zone_test.py | 37 +- .../perf_tests/uat_sync_test.py | 64 -- modules/api/functional_test/run.sh | 3 +- modules/api/functional_test/utils.py | 24 +- .../api/functional_test/vinyldns_python.py | 24 +- 55 files changed, 872 insertions(+), 1445 deletions(-) delete mode 100644 modules/api/functional_test/perf_tests/uat_sync_test.py diff --git a/docker/api/docker.conf b/docker/api/docker.conf index 9a851577a..35ca95bfe 100644 --- a/docker/api/docker.conf +++ b/docker/api/docker.conf @@ -296,11 +296,11 @@ vinyldns { global-acl-rules = [ { group-ids: ["global-acl-group-id"], - fqdn-regex-list: [".*shared."] + fqdn-regex-list: [".*shared[0-9]{1}."] }, { group-ids: ["another-global-acl-group"], - fqdn-regex-list: [".*ok."] + fqdn-regex-list: [".*ok[0-9]{1}."] } ] } diff --git a/docker/bind9/etc/_template/named.partition.conf b/docker/bind9/etc/_template/named.partition.conf index bc957bed4..2743a5b4f 100644 --- a/docker/bind9/etc/_template/named.partition.conf +++ b/docker/bind9/etc/_template/named.partition.conf @@ -76,7 +76,7 @@ zone "{partition}.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" { allow-update { key "vinyldns."; }; }; -zone "0.0.0.{partition}.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" { +zone "0.0.0.1.{partition}.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" { type master; file "/var/bind/partition{partition}/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa"; allow-update { key "vinyldns."; }; diff --git a/docker/bind9/etc/named.conf.local b/docker/bind9/etc/named.conf.local index 008d4a9be..37dab1f8f 100755 --- a/docker/bind9/etc/named.conf.local +++ b/docker/bind9/etc/named.conf.local @@ -33,6 +33,6 @@ key "vinyldns-sha512." 
{ //include "/etc/bind/zones.rfc1918"; include "/var/cache/bind/config/named.partition1.conf"; -//include "/var/cache/bind/config/named.partition2.conf"; -//include "/var/cache/bind/config/named.partition3.conf"; -//include "/var/cache/bind/config/named.partition4.conf"; +include "/var/cache/bind/config/named.partition2.conf"; +include "/var/cache/bind/config/named.partition3.conf"; +include "/var/cache/bind/config/named.partition4.conf"; diff --git a/docker/bind9/etc/named.partition2.conf b/docker/bind9/etc/named.partition2.conf index d297d4e4a..51e77a3be 100644 --- a/docker/bind9/etc/named.partition2.conf +++ b/docker/bind9/etc/named.partition2.conf @@ -76,7 +76,7 @@ zone "2.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" { allow-update { key "vinyldns."; }; }; -zone "0.0.0.2.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" { +zone "0.0.0.1.2.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" { type master; file "/var/bind/partition2/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa"; allow-update { key "vinyldns."; }; diff --git a/docker/bind9/etc/named.partition3.conf b/docker/bind9/etc/named.partition3.conf index 308d61cca..c2699cf8a 100644 --- a/docker/bind9/etc/named.partition3.conf +++ b/docker/bind9/etc/named.partition3.conf @@ -76,7 +76,7 @@ zone "3.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" { allow-update { key "vinyldns."; }; }; -zone "0.0.0.3.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" { +zone "0.0.0.1.3.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" { type master; file "/var/bind/partition3/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa"; allow-update { key "vinyldns."; }; diff --git a/docker/bind9/etc/named.partition4.conf b/docker/bind9/etc/named.partition4.conf index b69d4a4a4..c3538c284 100644 --- a/docker/bind9/etc/named.partition4.conf +++ b/docker/bind9/etc/named.partition4.conf @@ -76,7 +76,7 @@ zone "4.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" { allow-update { key "vinyldns."; }; }; -zone "0.0.0.4.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" { +zone "0.0.0.1.4.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" { type master; file 
"/var/bind/partition4/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa"; allow-update { key "vinyldns."; }; diff --git a/docker/bind9/zones/_template/sync-test.hosts b/docker/bind9/zones/_template/sync-test.hosts index 8866d3d60..ae95585af 100644 --- a/docker/bind9/zones/_template/sync-test.hosts +++ b/docker/bind9/zones/_template/sync-test.hosts @@ -12,6 +12,6 @@ test IN A 3.3.3.3 test IN A 4.4.4.4 @ IN A 5.5.5.5 already-exists IN A 6.6.6.6 -fqdn.sync-test. IN A 7.7.7.7 +fqdn.sync-test{partition}. IN A 7.7.7.7 _sip._tcp IN SRV 10 60 5060 foo.sync-test. existing.dotted IN A 9.9.9.9 diff --git a/docker/bind9/zones/partition1/sync-test.hosts b/docker/bind9/zones/partition1/sync-test.hosts index 54e597099..365b097a7 100644 --- a/docker/bind9/zones/partition1/sync-test.hosts +++ b/docker/bind9/zones/partition1/sync-test.hosts @@ -12,6 +12,6 @@ test IN A 3.3.3.3 test IN A 4.4.4.4 @ IN A 5.5.5.5 already-exists IN A 6.6.6.6 -fqdn.sync-test. IN A 7.7.7.7 +fqdn.sync-test1. IN A 7.7.7.7 _sip._tcp IN SRV 10 60 5060 foo.sync-test. existing.dotted IN A 9.9.9.9 diff --git a/docker/bind9/zones/partition2/sync-test.hosts b/docker/bind9/zones/partition2/sync-test.hosts index 01622aaee..01260c520 100644 --- a/docker/bind9/zones/partition2/sync-test.hosts +++ b/docker/bind9/zones/partition2/sync-test.hosts @@ -12,6 +12,6 @@ test IN A 3.3.3.3 test IN A 4.4.4.4 @ IN A 5.5.5.5 already-exists IN A 6.6.6.6 -fqdn.sync-test. IN A 7.7.7.7 +fqdn.sync-test2. IN A 7.7.7.7 _sip._tcp IN SRV 10 60 5060 foo.sync-test. existing.dotted IN A 9.9.9.9 diff --git a/docker/bind9/zones/partition3/sync-test.hosts b/docker/bind9/zones/partition3/sync-test.hosts index 0fc832f5d..988d2d0f0 100644 --- a/docker/bind9/zones/partition3/sync-test.hosts +++ b/docker/bind9/zones/partition3/sync-test.hosts @@ -12,6 +12,6 @@ test IN A 3.3.3.3 test IN A 4.4.4.4 @ IN A 5.5.5.5 already-exists IN A 6.6.6.6 -fqdn.sync-test. IN A 7.7.7.7 +fqdn.sync-test3. IN A 7.7.7.7 _sip._tcp IN SRV 10 60 5060 foo.sync-test. 
existing.dotted IN A 9.9.9.9 diff --git a/docker/bind9/zones/partition4/sync-test.hosts b/docker/bind9/zones/partition4/sync-test.hosts index 535849f62..a014733fe 100644 --- a/docker/bind9/zones/partition4/sync-test.hosts +++ b/docker/bind9/zones/partition4/sync-test.hosts @@ -12,6 +12,6 @@ test IN A 3.3.3.3 test IN A 4.4.4.4 @ IN A 5.5.5.5 already-exists IN A 6.6.6.6 -fqdn.sync-test. IN A 7.7.7.7 +fqdn.sync-test4. IN A 7.7.7.7 _sip._tcp IN SRV 10 60 5060 foo.sync-test. existing.dotted IN A 9.9.9.9 diff --git a/docker/docker-compose-func-test.yml b/docker/docker-compose-func-test.yml index 515a55c1f..d24194acc 100644 --- a/docker/docker-compose-func-test.yml +++ b/docker/docker-compose-func-test.yml @@ -1,40 +1,6 @@ -version: "3.0" +version: "3.5" + services: - mysql: - image: "mysql:5.7" - env_file: - .env - container_name: "vinyldns-mysql" - ports: - - "19002:3306" - logging: - driver: none - - bind9: - image: "vinyldns/bind9:0.0.4" - env_file: - .env - container_name: "vinyldns-bind9" - volumes: - - ./bind9/etc:/var/cache/bind/config - - ./bind9/zones:/var/cache/bind/zones - ports: - - "19001:53/tcp" - - "19001:53/udp" - logging: - driver: none - - localstack: - image: localstack/localstack:0.10.4 - container_name: "vinyldns-localstack" - ports: - - "19006:19006" - - "19007:19007" - - "19009:19009" - environment: - - SERVICES=sns:19006,sqs:19007,route53:19009 - - START_WEB=0 - - HOSTNAME_EXTERNAL=vinyldns-localstack # this file is copied into the target directory to get the jar! won't run in place as is! 
api: @@ -49,6 +15,54 @@ services: - mysql - bind9 - localstack + networks: + vinyldns: + ipv4_address: 172.10.10.2 + + mysql: + image: "mysql:5.7" + env_file: + .env + container_name: "vinyldns-mysql" + ports: + - "19002:3306" + logging: + driver: none + networks: + vinyldns: + ipv4_address: 172.10.10.3 + + localstack: + image: localstack/localstack:0.10.4 + container_name: "vinyldns-localstack" + ports: + - "19006:19006" + - "19007:19007" + - "19009:19009" + environment: + - SERVICES=sns:19006,sqs:19007,route53:19009 + - START_WEB=0 + - HOSTNAME_EXTERNAL=vinyldns-localstack + networks: + vinyldns: + ipv4_address: 172.10.10.4 + + bind9: + image: "vinyldns/bind9:0.0.4" + env_file: + .env + container_name: "vinyldns-bind9" + volumes: + - ./bind9/etc:/var/cache/bind/config + - ./bind9/zones:/var/cache/bind/zones + ports: + - "19001:53/tcp" + - "19001:53/udp" + logging: + driver: none + networks: + vinyldns: + ipv4_address: 172.10.10.10 functest: build: @@ -60,3 +74,14 @@ services: container_name: "vinyldns-functest" depends_on: - api + networks: + - vinyldns + +networks: + # Custom network so that we have some control over IP space and deterministic container IPs + vinyldns: + name: vinyldns + driver: bridge + ipam: + config: + - subnet: 172.10.10.0/24 diff --git a/modules/api/functional_test/conftest.py b/modules/api/functional_test/conftest.py index 2e89919d1..64caf366b 100644 --- a/modules/api/functional_test/conftest.py +++ b/modules/api/functional_test/conftest.py @@ -3,6 +3,7 @@ import logging import os import ssl import sys +import traceback import _pytest.config import pytest @@ -109,7 +110,8 @@ def retrieve_resolver(resolver_name: str) -> str: resolver_address = [resolver_address] + parts[1:] resolver_address = ":".join(resolver_address) logger.warning("Translating `%s` resolver to `%s`", resolver_name, resolver_address) - except: + except Exception: + traceback.print_exc() logger.error("Cannot translate `%s` into a usable resolver address", resolver_name) 
pytest.exit(1) diff --git a/modules/api/functional_test/live_tests/authentication_test.py b/modules/api/functional_test/live_tests/authentication_test.py index 9a83ae3fb..52578a533 100644 --- a/modules/api/functional_test/live_tests/authentication_test.py +++ b/modules/api/functional_test/live_tests/authentication_test.py @@ -1,10 +1,9 @@ -from utils import * from hamcrest import * -from vinyldns_python import VinylDNSClient -from dns.resolver import * -from vinyldns_context import VinylDNSTestContext from requests.compat import urljoin +from vinyldns_context import VinylDNSTestContext +from vinyldns_python import VinylDNSClient + def test_request_fails_when_user_account_is_locked(): """ @@ -42,6 +41,7 @@ def test_request_fails_when_accessing_non_existent_route(): assert_that(data, is_("The requested path [/no-existo] does not exist.")) + def test_request_fails_with_unsupported_http_method_for_route(): """ Test request fails with MethodNotAllowed (405) when HTTP Method is not supported for specified route diff --git a/modules/api/functional_test/live_tests/batch/approve_batch_change_test.py b/modules/api/functional_test/live_tests/batch/approve_batch_change_test.py index b0ce3fb78..5006a4026 100644 --- a/modules/api/functional_test/live_tests/batch/approve_batch_change_test.py +++ b/modules/api/functional_test/live_tests/batch/approve_batch_change_test.py @@ -118,7 +118,6 @@ def test_approve_batch_change_with_invalid_batch_change_id_fails(shared_zone_tes """ Test approving a batch change with invalid batch change ID """ - client = shared_zone_test_context.ok_vinyldns_client error = client.approve_batch_change("some-id", status=404) @@ -130,7 +129,6 @@ def test_approve_batch_change_with_comments_exceeding_max_length_fails(shared_zo """ Test approving a batch change with comments exceeding 1024 characters fails """ - client = shared_zone_test_context.ok_vinyldns_client approve_batch_change_input = { "reviewComment": "a" * 1025 diff --git 
a/modules/api/functional_test/live_tests/batch/create_batch_change_test.py b/modules/api/functional_test/live_tests/batch/create_batch_change_test.py index 8a879c459..3a56caf63 100644 --- a/modules/api/functional_test/live_tests/batch/create_batch_change_test.py +++ b/modules/api/functional_test/live_tests/batch/create_batch_change_test.py @@ -38,8 +38,8 @@ def assert_failed_change_in_error_response(input_json, change_type="Add", input_ return -def assert_successful_change_in_error_response(input_json, change_type="Add", input_name="fqdn.", record_type="A", - ttl=200, record_data: Optional[Union[str, dict]] = "1.1.1.1"): +def assert_successful_change_in_error_response(input_json, change_type="Add", input_name="fqdn.", record_type="A", ttl=200, + record_data: Optional[Union[str, dict]] = "1.1.1.1"): validate_change_error_response_basics(input_json, change_type, input_name, record_type, ttl, record_data) assert_that("errors" in input_json, is_(False)) return @@ -285,7 +285,6 @@ def test_create_batch_change_with_adds_success(shared_zone_test_context): "ttl": 200, "records": [{"preference": 1000, "exchange": "bar.foo."}]} verify_recordset(rs16, expected16) - finally: clear_zoneid_rsid_tuple_list(to_delete, client) @@ -381,8 +380,8 @@ def test_create_batch_change_with_soft_failures_scheduled_time_and_allow_manual_ response = client.create_batch_change(batch_change_input, False, status=400) assert_failed_change_in_error_response(response[0], input_name="non.existent.", record_type="A", record_data="4.5.6.7", - error_messages=[ - "Zone Discovery Failed: zone for \"non.existent.\" does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS."]) + error_messages=["Zone Discovery Failed: zone for \"non.existent.\" does not exist in VinylDNS. 
" + "If zone exists, then it must be connected to in VinylDNS."]) def test_create_batch_change_without_scheduled_time_succeeds(shared_zone_test_context): @@ -563,7 +562,6 @@ def test_create_batch_change_with_updates_deletes_success(shared_zone_test_conte "ttl": 200, "records": [{"preference": 1000, "exchange": "foo.bar."}]} verify_recordset(rs9, expected9) - finally: # Clean up updates dummy_deletes = [rs for rs in to_delete if rs[0] == dummy_zone["id"]] @@ -598,8 +596,7 @@ def test_create_batch_change_without_comments_succeeds(shared_zone_test_context) completed_batch = client.wait_until_batch_change_completed(result) to_delete = [(change["zoneId"], change["recordSetId"]) for change in completed_batch["changes"]] - assert_change_success(result["changes"], zone=parent_zone, index=0, record_name=test_record_name, - input_name=test_record_fqdn, record_data="4.5.6.7") + assert_change_success(result["changes"], zone=parent_zone, index=0, record_name=test_record_name, input_name=test_record_fqdn, record_data="4.5.6.7") finally: clear_zoneid_rsid_tuple_list(to_delete, client) @@ -625,10 +622,8 @@ def test_create_batch_change_with_owner_group_id_succeeds(shared_zone_test_conte completed_batch = client.wait_until_batch_change_completed(result) to_delete = [(change["zoneId"], change["recordSetId"]) for change in completed_batch["changes"]] - assert_change_success(result["changes"], zone=ok_zone, index=0, record_name=test_record_name, - input_name=test_record_fqdn, record_data="4.3.2.1") + assert_change_success(result["changes"], zone=ok_zone, index=0, record_name=test_record_name, input_name=test_record_fqdn, record_data="4.3.2.1") assert_that(completed_batch["ownerGroupId"], is_(shared_zone_test_context.ok_group["id"])) - finally: clear_zoneid_rsid_tuple_list(to_delete, client) @@ -653,10 +648,8 @@ def test_create_batch_change_without_owner_group_id_succeeds(shared_zone_test_co completed_batch = client.wait_until_batch_change_completed(result) to_delete = 
[(change["zoneId"], change["recordSetId"]) for change in completed_batch["changes"]] - assert_change_success(result["changes"], zone=ok_zone, index=0, record_name=test_record_name, - input_name=test_record_fqdn, record_data="4.3.2.1") + assert_change_success(result["changes"], zone=ok_zone, index=0, record_name=test_record_name, input_name=test_record_fqdn, record_data="4.3.2.1") assert_that(completed_batch, is_not(has_key("ownerGroupId"))) - finally: clear_zoneid_rsid_tuple_list(to_delete, client) @@ -714,7 +707,6 @@ def test_create_batch_change_with_missing_ttl_returns_default_or_existing(shared new_record = client.get_recordset(record_set_list[2][0], record_set_list[2][1])["recordSet"] assert_that(new_record["ttl"], is_(7200)) - finally: clear_zoneid_rsid_tuple_list(to_delete, client) @@ -744,7 +736,6 @@ def test_create_batch_change_partial_failure(shared_zone_test_context): to_delete = set(record_set_list) # set here because multiple items in the batch combine to one RS assert_that(completed_batch["status"], is_("PartialFailure")) - finally: clear_zoneid_rsid_tuple_list(to_delete, client) dns_delete(shared_zone_test_context.ok_zone, "direct-to-backend", "A") @@ -772,7 +763,6 @@ def test_create_batch_change_failed(shared_zone_test_context): completed_batch = client.wait_until_batch_change_completed(result) assert_that(completed_batch["status"], is_("Failed")) - finally: dns_delete(shared_zone_test_context.ok_zone, "backend-foo", "A") dns_delete(shared_zone_test_context.ok_zone, "backend-already-exists", "A") @@ -782,7 +772,6 @@ def test_empty_batch_fails(shared_zone_test_context): """ Test creating batch without any changes fails with """ - batch_change_input = { "comments": "this should fail processing", "changes": [] @@ -908,7 +897,6 @@ def test_create_batch_change_with_high_value_domain_fails(shared_zone_test_conte """ Test creating a batch change with a high value domain as an inputName fails """ - client = shared_zone_test_context.ok_vinyldns_client 
ok_zone_name = shared_zone_test_context.ok_zone["name"] ip4_prefix = shared_zone_test_context.ip4_classless_prefix @@ -937,30 +925,18 @@ def test_create_batch_change_with_high_value_domain_fails(shared_zone_test_conte response = client.create_batch_change(batch_change_input, status=400) - assert_error(response[0], error_messages=[ - f'Record name "high-value-domain-add.{ok_zone_name}" is configured as a High Value Domain, so it cannot be modified.']) - assert_error(response[1], error_messages=[ - f'Record name "high-value-domain-update.{ok_zone_name}" is configured as a High Value Domain, so it cannot be modified.']) - assert_error(response[2], error_messages=[ - f'Record name "high-value-domain-update.{ok_zone_name}" is configured as a High Value Domain, so it cannot be modified.']) - assert_error(response[3], error_messages=[ - f'Record name "high-value-domain-delete.{ok_zone_name}" is configured as a High Value Domain, so it cannot be modified.']) - assert_error(response[4], error_messages=[ - f'Record name "{ip4_prefix}.252" is configured as a High Value Domain, so it cannot be modified.']) - assert_error(response[5], error_messages=[ - f'Record name "{ip4_prefix}.253" is configured as a High Value Domain, so it cannot be modified.']) - assert_error(response[6], error_messages=[ - f'Record name "{ip4_prefix}.253" is configured as a High Value Domain, so it cannot be modified.']) - assert_error(response[7], error_messages=[ - f'Record name "{ip4_prefix}.253" is configured as a High Value Domain, so it cannot be modified.']) - assert_error(response[8], error_messages=[ - f'Record name "{ip6_prefix}:0:0:0:0:ffff" is configured as a High Value Domain, so it cannot be modified.']) - assert_error(response[9], error_messages=[ - f'Record name "{ip6_prefix}:0:0:0:ffff:0" is configured as a High Value Domain, so it cannot be modified.']) - assert_error(response[10], error_messages=[ - f'Record name "{ip6_prefix}:0:0:0:ffff:0" is configured as a High Value Domain, so it 
cannot be modified.']) - assert_error(response[11], error_messages=[ - f'Record name "{ip6_prefix}:0:0:0:ffff:0" is configured as a High Value Domain, so it cannot be modified.']) + assert_error(response[0], error_messages=[f'Record name "high-value-domain-add.{ok_zone_name}" is configured as a High Value Domain, so it cannot be modified.']) + assert_error(response[1], error_messages=[f'Record name "high-value-domain-update.{ok_zone_name}" is configured as a High Value Domain, so it cannot be modified.']) + assert_error(response[2], error_messages=[f'Record name "high-value-domain-update.{ok_zone_name}" is configured as a High Value Domain, so it cannot be modified.']) + assert_error(response[3], error_messages=[f'Record name "high-value-domain-delete.{ok_zone_name}" is configured as a High Value Domain, so it cannot be modified.']) + assert_error(response[4], error_messages=[f'Record name "{ip4_prefix}.252" is configured as a High Value Domain, so it cannot be modified.']) + assert_error(response[5], error_messages=[f'Record name "{ip4_prefix}.253" is configured as a High Value Domain, so it cannot be modified.']) + assert_error(response[6], error_messages=[f'Record name "{ip4_prefix}.253" is configured as a High Value Domain, so it cannot be modified.']) + assert_error(response[7], error_messages=[f'Record name "{ip4_prefix}.253" is configured as a High Value Domain, so it cannot be modified.']) + assert_error(response[8], error_messages=[f'Record name "{ip6_prefix}:0:0:0:0:ffff" is configured as a High Value Domain, so it cannot be modified.']) + assert_error(response[9], error_messages=[f'Record name "{ip6_prefix}:0:0:0:ffff:0" is configured as a High Value Domain, so it cannot be modified.']) + assert_error(response[10], error_messages=[f'Record name "{ip6_prefix}:0:0:0:ffff:0" is configured as a High Value Domain, so it cannot be modified.']) + assert_error(response[11], error_messages=[f'Record name "{ip6_prefix}:0:0:0:ffff:0" is configured as a High Value 
Domain, so it cannot be modified.']) assert_that(response[12], is_not(has_key("errors"))) @@ -970,7 +946,6 @@ def test_create_batch_change_with_domains_requiring_review_succeeds(shared_zone_ """ Test creating a batch change with an input name requiring review is accepted """ - rejecter = shared_zone_test_context.support_user_client client = shared_zone_test_context.ok_vinyldns_client ok_zone_name = shared_zone_test_context.ok_zone["name"] @@ -1008,7 +983,6 @@ def test_create_batch_change_with_domains_requiring_review_succeeds(shared_zone_ assert_that(get_batch["changes"][i]["status"], is_("NeedsReview")) assert_that(get_batch["changes"][i]["validationErrors"][0]["errorType"], is_("RecordRequiresManualReview")) assert_that(get_batch["changes"][12]["validationErrors"], empty()) - finally: # Clean up so data doesn't change if response: @@ -1034,8 +1008,8 @@ def test_create_batch_change_with_soft_failures_and_allow_manual_review_disabled response = client.create_batch_change(batch_change_input, False, status=400) assert_failed_change_in_error_response(response[0], input_name="non.existent.", record_type="A", record_data="4.5.6.7", - error_messages=[ - "Zone Discovery Failed: zone for \"non.existent.\" does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS."]) + error_messages=["Zone Discovery Failed: zone for \"non.existent.\" does not exist in VinylDNS. 
" + "If zone exists, then it must be connected to in VinylDNS."]) def test_create_batch_change_with_invalid_record_type_fails(shared_zone_test_context): @@ -1362,17 +1336,13 @@ def test_create_batch_change_with_invalid_duplicate_record_names_fails(shared_zo response = client.create_batch_change(batch_change_input, status=400) assert_successful_change_in_error_response(response[0], input_name=f"thing1.{ok_zone_name}", record_data="4.5.6.7") - assert_failed_change_in_error_response(response[1], input_name=f"thing1.{ok_zone_name}", record_type="CNAME", - record_data="test.com.", - error_messages=[f'Record Name "thing1.{ok_zone_name}" Not Unique In Batch Change:' - ' cannot have multiple "CNAME" records with the same name.']) + assert_failed_change_in_error_response(response[1], input_name=f"thing1.{ok_zone_name}", record_type="CNAME", record_data="test.com.", + error_messages=[f'Record Name "thing1.{ok_zone_name}" Not Unique In Batch Change: ' + f'cannot have multiple "CNAME" records with the same name.']) assert_successful_change_in_error_response(response[2], input_name=f"delete1.{ok_zone_name}", change_type="DeleteRecordSet") - assert_successful_change_in_error_response(response[3], input_name=f"delete1.{ok_zone_name}", record_type="CNAME", - record_data="test.com.") + assert_successful_change_in_error_response(response[3], input_name=f"delete1.{ok_zone_name}", record_type="CNAME", record_data="test.com.") assert_successful_change_in_error_response(response[4], input_name=f"delete-this1.{ok_zone_name}", record_data="4.5.6.7") - assert_successful_change_in_error_response(response[5], input_name=f"delete-this1.{ok_zone_name}", - change_type="DeleteRecordSet", record_type="CNAME") - + assert_successful_change_in_error_response(response[5], input_name=f"delete-this1.{ok_zone_name}", change_type="DeleteRecordSet", record_type="CNAME") finally: clear_recordset_list(to_delete, client) @@ -1480,49 +1450,46 @@ def test_a_recordtype_add_checks(shared_zone_test_context): 
response = client.create_batch_change(batch_change_input, status=400) # successful changes - assert_successful_change_in_error_response(response[0], input_name=good_record_fqdn, - record_data="1.2.3.4") + assert_successful_change_in_error_response(response[0], input_name=good_record_fqdn, record_data="1.2.3.4") # ttl, domain name, reverse zone input validations assert_failed_change_in_error_response(response[1], input_name=f"bad-ttl-and-invalid-name$.{parent_zone_name}", ttl=29, record_data="1.2.3.4", - error_messages=[ - 'Invalid TTL: "29", must be a number between 30 and 2147483647.', - f'Invalid domain name: "bad-ttl-and-invalid-name$.{parent_zone_name}", ' - "valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot."]) + error_messages=['Invalid TTL: "29", must be a number between 30 and 2147483647.', + f'Invalid domain name: "bad-ttl-and-invalid-name$.{parent_zone_name}", ' + "valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot."]) assert_failed_change_in_error_response(response[2], input_name="reverse-zone.10.10.in-addr.arpa.", record_data="1.2.3.4", - error_messages=[ - "Invalid Record Type In Reverse Zone: record with name \"reverse-zone.10.10.in-addr.arpa.\" and type \"A\" is not allowed in a reverse zone."]) + error_messages=["Invalid Record Type In Reverse Zone: record with name \"reverse-zone.10.10.in-addr.arpa.\" and " + "type \"A\" is not allowed in a reverse zone."]) # zone discovery failure assert_failed_change_in_error_response(response[3], input_name=f"no.subzone.{parent_zone_name}", record_data="1.2.3.4", - error_messages=[ - f'Zone Discovery Failed: zone for "no.subzone.{parent_zone_name}" does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS.']) + error_messages=[f'Zone Discovery Failed: zone for "no.subzone.{parent_zone_name}" does not exist in VinylDNS. 
' + f'If zone exists, then it must be connected to in VinylDNS.']) assert_failed_change_in_error_response(response[4], input_name="no.zone.at.all.", record_data="1.2.3.4", - error_messages=[ - 'Zone Discovery Failed: zone for "no.zone.at.all." does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS.']) + error_messages=['Zone Discovery Failed: zone for "no.zone.at.all." does not exist in VinylDNS. ' + 'If zone exists, then it must be connected to in VinylDNS.']) # context validations: duplicate name failure is always on the cname assert_failed_change_in_error_response(response[5], input_name=f"cname-duplicate.{parent_zone_name}", record_type="CNAME", record_data="test.com.", - error_messages=[ - f"Record Name \"cname-duplicate.{parent_zone_name}\" Not Unique In Batch Change: cannot have multiple \"CNAME\" records with the same name."]) + error_messages=[f"Record Name \"cname-duplicate.{parent_zone_name}\" Not Unique In Batch Change: " + f"cannot have multiple \"CNAME\" records with the same name."]) assert_successful_change_in_error_response(response[6], input_name=f"cname-duplicate.{parent_zone_name}", record_data="1.2.3.4") # context validations: conflicting recordsets, unauthorized error assert_failed_change_in_error_response(response[7], input_name=existing_a_fqdn, record_data="1.2.3.4", - error_messages=[ - f"Record \"{existing_a_fqdn}\" Already Exists: cannot add an existing record; to update it, issue a DeleteRecordSet then an Add."]) + error_messages=[f"Record \"{existing_a_fqdn}\" Already Exists: " + f"cannot add an existing record; to update it, issue a DeleteRecordSet then an Add."]) assert_failed_change_in_error_response(response[8], input_name=existing_cname_fqdn, record_data="1.2.3.4", - error_messages=[ - f'CNAME Conflict: CNAME record names must be unique. 
Existing record with name "{existing_cname_fqdn}" and type \"CNAME\" conflicts with this record.']) + error_messages=[f'CNAME Conflict: CNAME record names must be unique. ' + f'Existing record with name "{existing_cname_fqdn}" and type \"CNAME\" conflicts with this record.']) assert_failed_change_in_error_response(response[9], input_name=f"user-add-unauthorized.{dummy_zone_name}", record_data="1.2.3.4", error_messages=[f"User \"ok\" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes."]) - finally: clear_recordset_list(to_delete, client) @@ -1629,33 +1596,33 @@ def test_a_recordtype_update_delete_checks(shared_zone_test_context): # input validations failures assert_failed_change_in_error_response(response[3], input_name="$invalid.host.name.", change_type="DeleteRecordSet", - error_messages=[ - 'Invalid domain name: "$invalid.host.name.", valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) + error_messages=['Invalid domain name: "$invalid.host.name.", valid domain names must be letters, ' + 'numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) assert_failed_change_in_error_response(response[4], input_name="reverse.zone.in-addr.arpa.", change_type="DeleteRecordSet", - error_messages=[ - 'Invalid Record Type In Reverse Zone: record with name "reverse.zone.in-addr.arpa." and type "A" is not allowed in a reverse zone.']) + error_messages=['Invalid Record Type In Reverse Zone: record with name "reverse.zone.in-addr.arpa." 
and type "A" ' + 'is not allowed in a reverse zone.']) assert_failed_change_in_error_response(response[5], input_name="$another.invalid.host.name.", ttl=300, - error_messages=[ - 'Invalid domain name: "$another.invalid.host.name.", valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) + error_messages=['Invalid domain name: "$another.invalid.host.name.", valid domain names must be letters, ' + 'numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) assert_failed_change_in_error_response(response[6], input_name="$another.invalid.host.name.", change_type="DeleteRecordSet", - error_messages=[ - 'Invalid domain name: "$another.invalid.host.name.", valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) + error_messages=['Invalid domain name: "$another.invalid.host.name.", valid domain names must be letters, ' + 'numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) assert_failed_change_in_error_response(response[7], input_name="another.reverse.zone.in-addr.arpa.", ttl=10, - error_messages=[ - 'Invalid Record Type In Reverse Zone: record with name "another.reverse.zone.in-addr.arpa." and type "A" is not allowed in a reverse zone.', - 'Invalid TTL: "10", must be a number between 30 and 2147483647.']) + error_messages=['Invalid Record Type In Reverse Zone: record with name "another.reverse.zone.in-addr.arpa." ' + 'and type "A" is not allowed in a reverse zone.', + 'Invalid TTL: "10", must be a number between 30 and 2147483647.']) assert_failed_change_in_error_response(response[8], input_name="another.reverse.zone.in-addr.arpa.", change_type="DeleteRecordSet", - error_messages=[ - 'Invalid Record Type In Reverse Zone: record with name "another.reverse.zone.in-addr.arpa." 
and type "A" is not allowed in a reverse zone.']) + error_messages=['Invalid Record Type In Reverse Zone: record with name "another.reverse.zone.in-addr.arpa." ' + 'and type "A" is not allowed in a reverse zone.']) # zone discovery failure assert_failed_change_in_error_response(response[9], input_name="zone.discovery.error.", change_type="DeleteRecordSet", - error_messages=[ - 'Zone Discovery Failed: zone for "zone.discovery.error." does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS.']) + error_messages=['Zone Discovery Failed: zone for "zone.discovery.error." does not exist in VinylDNS. ' + 'If zone exists, then it must be connected to in VinylDNS.']) # context validation failures: record does not exist, not authorized assert_failed_change_in_error_response(response[10], input_name=f"non-existent.{ok_zone_name}", @@ -1674,7 +1641,6 @@ def test_a_recordtype_update_delete_checks(shared_zone_test_context): error_messages=[f'User \"ok\" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes.']) assert_failed_change_in_error_response(response[15], input_name=rs_update_dummy_with_owner_fqdn, ttl=300, error_messages=[f'User \"ok\" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes.']) - finally: # Clean up updates dummy_deletes = [rs for rs in to_delete if rs["zone"]["id"] == dummy_zone["id"]] @@ -1695,13 +1661,11 @@ def test_aaaa_recordtype_add_checks(shared_zone_test_context): existing_aaaa_name = generate_record_name() existing_aaaa_fqdn = existing_aaaa_name + "." 
+ shared_zone_test_context.parent_zone["name"] - existing_aaaa = create_recordset(shared_zone_test_context.parent_zone, existing_aaaa_name, "AAAA", - [{"address": "1::1"}], 100) + existing_aaaa = create_recordset(shared_zone_test_context.parent_zone, existing_aaaa_name, "AAAA", [{"address": "1::1"}], 100) existing_cname_name = generate_record_name() existing_cname_fqdn = existing_cname_name + "." + shared_zone_test_context.parent_zone["name"] - existing_cname = create_recordset(shared_zone_test_context.parent_zone, existing_cname_name, "CNAME", - [{"cname": "cname.data."}], 100) + existing_cname = create_recordset(shared_zone_test_context.parent_zone, existing_cname_name, "CNAME", [{"cname": "cname.data."}], 100) good_record_name = generate_record_name() good_record_fqdn = good_record_name + "." + shared_zone_test_context.parent_zone["name"] @@ -1737,49 +1701,47 @@ def test_aaaa_recordtype_add_checks(shared_zone_test_context): response = client.create_batch_change(batch_change_input, status=400) # successful changes - assert_successful_change_in_error_response(response[0], input_name=good_record_fqdn, - record_type="AAAA", record_data="1::1") + assert_successful_change_in_error_response(response[0], input_name=good_record_fqdn, record_type="AAAA", record_data="1::1") # ttl, domain name, reverse zone input validations assert_failed_change_in_error_response(response[1], input_name=f"bad-ttl-and-invalid-name$.{parent_zone_name}", ttl=29, record_type="AAAA", record_data="1::1", - error_messages=[ - 'Invalid TTL: "29", must be a number between 30 and 2147483647.', - f'Invalid domain name: "bad-ttl-and-invalid-name$.{parent_zone_name}", ' - "valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot."]) + error_messages=['Invalid TTL: "29", must be a number between 30 and 2147483647.', + f'Invalid domain name: "bad-ttl-and-invalid-name$.{parent_zone_name}", ' + "valid domain names must be letters, numbers, 
underscores, and hyphens, joined by dots, and terminated with a dot."]) assert_failed_change_in_error_response(response[2], input_name="reverse-zone.1.2.3.ip6.arpa.", record_type="AAAA", record_data="1::1", - error_messages=[ - "Invalid Record Type In Reverse Zone: record with name \"reverse-zone.1.2.3.ip6.arpa.\" and type \"AAAA\" is not allowed in a reverse zone."]) + error_messages=["Invalid Record Type In Reverse Zone: record with name \"reverse-zone.1.2.3.ip6.arpa.\" " + "and type \"AAAA\" is not allowed in a reverse zone."]) # zone discovery failures assert_failed_change_in_error_response(response[3], input_name=f"no.subzone.{parent_zone_name}", record_type="AAAA", record_data="1::1", - error_messages=[ - f'Zone Discovery Failed: zone for \"no.subzone.{parent_zone_name}\" does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS.']) + error_messages=[f'Zone Discovery Failed: zone for \"no.subzone.{parent_zone_name}\" does not exist in VinylDNS. ' + f'If zone exists, then it must be connected to in VinylDNS.']) assert_failed_change_in_error_response(response[4], input_name="no.zone.at.all.", record_type="AAAA", record_data="1::1", - error_messages=[ - "Zone Discovery Failed: zone for \"no.zone.at.all.\" does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS."]) + error_messages=["Zone Discovery Failed: zone for \"no.zone.at.all.\" does not exist in VinylDNS. 
" + "If zone exists, then it must be connected to in VinylDNS."]) # context validations: duplicate name failure (always on the cname), conflicting recordsets, unauthorized error assert_failed_change_in_error_response(response[5], input_name=f"cname-duplicate.{parent_zone_name}", record_type="CNAME", record_data="test.com.", - error_messages=[ - f"Record Name \"cname-duplicate.{parent_zone_name}\" Not Unique In Batch Change: cannot have multiple \"CNAME\" records with the same name."]) + error_messages=[f"Record Name \"cname-duplicate.{parent_zone_name}\" Not Unique In Batch Change: " + f"cannot have multiple \"CNAME\" records with the same name."]) assert_successful_change_in_error_response(response[6], input_name=f"cname-duplicate.{parent_zone_name}", record_type="AAAA", record_data="1::1") assert_failed_change_in_error_response(response[7], input_name=existing_aaaa_fqdn, record_type="AAAA", record_data="1::1", - error_messages=[f"Record \"{existing_aaaa_fqdn}\" Already Exists: cannot add an existing record; to update it, issue a DeleteRecordSet then an Add."]) + error_messages=[f"Record \"{existing_aaaa_fqdn}\" Already Exists: cannot add an existing record; " + f"to update it, issue a DeleteRecordSet then an Add."]) assert_failed_change_in_error_response(response[8], input_name=existing_cname_fqdn, record_type="AAAA", record_data="1::1", - error_messages=[ - f"CNAME Conflict: CNAME record names must be unique. Existing record with name \"{existing_cname_fqdn}\" and type \"CNAME\" conflicts with this record."]) + error_messages=[f"CNAME Conflict: CNAME record names must be unique. Existing record with name \"{existing_cname_fqdn}\" " + f"and type \"CNAME\" conflicts with this record."]) assert_failed_change_in_error_response(response[9], input_name=f"user-add-unauthorized.{dummy_zone_name}", record_type="AAAA", record_data="1::1", error_messages=[f"User \"ok\" is not authorized. 
Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes."]) - finally: clear_recordset_list(to_delete, client) @@ -1810,8 +1772,7 @@ def test_aaaa_recordtype_update_delete_checks(shared_zone_test_context): rs_update_dummy_name = generate_record_name() rs_update_dummy_fqdn = rs_update_dummy_name + f".{dummy_zone_name}" - rs_update_dummy = create_recordset(dummy_zone, rs_update_dummy_name, "AAAA", [{"address": "1:2:3:4:5:6:7:8"}], - 200) + rs_update_dummy = create_recordset(dummy_zone, rs_update_dummy_name, "AAAA", [{"address": "1:2:3:4:5:6:7:8"}], 200) batch_change_input = { "comments": "this is optional", @@ -1824,10 +1785,8 @@ def test_aaaa_recordtype_update_delete_checks(shared_zone_test_context): # input validations failures get_change_A_AAAA_json(f"invalid-name$.{ok_zone_name}", record_type="AAAA", change_type="DeleteRecordSet"), get_change_A_AAAA_json("reverse.zone.in-addr.arpa.", record_type="AAAA", change_type="DeleteRecordSet"), - get_change_A_AAAA_json(f"bad-ttl-and-invalid-name$-update.{ok_zone_name}", record_type="AAAA", - change_type="DeleteRecordSet"), - get_change_A_AAAA_json(f"bad-ttl-and-invalid-name$-update.{ok_zone_name}", ttl=29, record_type="AAAA", - address="1:2:3:4:5:6:7:8"), + get_change_A_AAAA_json(f"bad-ttl-and-invalid-name$-update.{ok_zone_name}", record_type="AAAA", change_type="DeleteRecordSet"), + get_change_A_AAAA_json(f"bad-ttl-and-invalid-name$-update.{ok_zone_name}", ttl=29, record_type="AAAA", address="1:2:3:4:5:6:7:8"), # zone discovery failure get_change_A_AAAA_json("no.zone.at.all.", record_type="AAAA", change_type="DeleteRecordSet"), @@ -1871,24 +1830,21 @@ def test_aaaa_recordtype_update_delete_checks(shared_zone_test_context): # input validations failures: invalid input name, reverse zone error, invalid ttl assert_failed_change_in_error_response(response[3], input_name=f"invalid-name$.{ok_zone_name}", record_type="AAAA", record_data=None, change_type="DeleteRecordSet", - error_messages=[ - f'Invalid 
domain name: "invalid-name$.{ok_zone_name}", ' - f'valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) + error_messages=[f'Invalid domain name: "invalid-name$.{ok_zone_name}", ' + f'valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) assert_failed_change_in_error_response(response[4], input_name="reverse.zone.in-addr.arpa.", record_type="AAAA", record_data=None, change_type="DeleteRecordSet", - error_messages=[ - "Invalid Record Type In Reverse Zone: record with name \"reverse.zone.in-addr.arpa.\" and type \"AAAA\" is not allowed in a reverse zone."]) + error_messages=["Invalid Record Type In Reverse Zone: record with name \"reverse.zone.in-addr.arpa.\" and " + "type \"AAAA\" is not allowed in a reverse zone."]) assert_failed_change_in_error_response(response[5], input_name=f"bad-ttl-and-invalid-name$-update.{ok_zone_name}", record_type="AAAA", record_data=None, change_type="DeleteRecordSet", - error_messages=[ - f'Invalid domain name: "bad-ttl-and-invalid-name$-update.{ok_zone_name}", ' - f'valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) + error_messages=[f'Invalid domain name: "bad-ttl-and-invalid-name$-update.{ok_zone_name}", ' + f'valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) assert_failed_change_in_error_response(response[6], input_name=f"bad-ttl-and-invalid-name$-update.{ok_zone_name}", ttl=29, record_type="AAAA", record_data="1:2:3:4:5:6:7:8", - error_messages=[ - 'Invalid TTL: "29", must be a number between 30 and 2147483647.', - f'Invalid domain name: "bad-ttl-and-invalid-name$-update.{ok_zone_name}", ' - f'valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) + error_messages=['Invalid TTL: "29", must be a number between 30 and 
2147483647.', + f'Invalid domain name: "bad-ttl-and-invalid-name$-update.{ok_zone_name}", ' + f'valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) # zone discovery failure assert_failed_change_in_error_response(response[7], input_name="no.zone.at.all.", record_type="AAAA", @@ -1913,7 +1869,6 @@ def test_aaaa_recordtype_update_delete_checks(shared_zone_test_context): assert_failed_change_in_error_response(response[13], input_name=rs_update_dummy_fqdn, record_type="AAAA", record_data=None, change_type="DeleteRecordSet", error_messages=[f"User \"ok\" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes."]) - finally: # Clean up updates dummy_deletes = [rs for rs in to_delete if rs["zone"]["id"] == dummy_zone["id"]] @@ -2004,29 +1959,23 @@ def test_cname_recordtype_add_checks(shared_zone_test_context): response = client.create_batch_change(batch_change_input, status=400) # successful changes - assert_successful_change_in_error_response(response[0], input_name=forward_fqdn, - record_type="CNAME", record_data="test.com.") - assert_successful_change_in_error_response(response[1], input_name=reverse_fqdn, - record_type="CNAME", record_data="test.com.") + assert_successful_change_in_error_response(response[0], input_name=forward_fqdn, record_type="CNAME", record_data="test.com.") + assert_successful_change_in_error_response(response[1], input_name=reverse_fqdn, record_type="CNAME", record_data="test.com.") # successful changes - delete and add of same record name but different type - assert_successful_change_in_error_response(response[2], input_name=rs_a_to_cname_ok_fqdn, - change_type="DeleteRecordSet") - assert_successful_change_in_error_response(response[3], input_name=rs_a_to_cname_ok_fqdn, record_type="CNAME", - record_data="test.com.") + assert_successful_change_in_error_response(response[2], input_name=rs_a_to_cname_ok_fqdn, change_type="DeleteRecordSet") 
+ assert_successful_change_in_error_response(response[3], input_name=rs_a_to_cname_ok_fqdn, record_type="CNAME", record_data="test.com.") assert_successful_change_in_error_response(response[4], input_name=rs_cname_to_A_ok_fqdn) - assert_successful_change_in_error_response(response[5], input_name=rs_cname_to_A_ok_fqdn, record_type="CNAME", - change_type="DeleteRecordSet") + assert_successful_change_in_error_response(response[5], input_name=rs_cname_to_A_ok_fqdn, record_type="CNAME", change_type="DeleteRecordSet") # ttl, domain name, data assert_failed_change_in_error_response(response[6], input_name=f"bad-ttl-and-invalid-name$.{parent_zone_name}", ttl=29, record_type="CNAME", record_data="also$bad.name.", - error_messages=[ - 'Invalid TTL: "29", must be a number between 30 and 2147483647.', - f'Invalid domain name: "bad-ttl-and-invalid-name$.{parent_zone_name}", ' - "valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.", - 'Invalid domain name: "also$bad.name.", valid domain names must be letters, numbers, underscores, and hyphens, ' - "joined by dots, and terminated with a dot."]) + error_messages=['Invalid TTL: "29", must be a number between 30 and 2147483647.', + f'Invalid domain name: "bad-ttl-and-invalid-name$.{parent_zone_name}", ' + "valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.", + 'Invalid domain name: "also$bad.name.", valid domain names must be letters, numbers, underscores, and hyphens, ' + "joined by dots, and terminated with a dot."]) # zone discovery failure assert_failed_change_in_error_response(response[7], input_name="no.zone.com.", record_type="CNAME", record_data="test.com.", @@ -2072,7 +2021,6 @@ def test_cname_recordtype_add_checks(shared_zone_test_context): assert_failed_change_in_error_response(response[16], input_name=f"user-add-unauthorized.{dummy_zone_name}", record_type="CNAME", record_data="test.com.", 
error_messages=[f"User \"ok\" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes."]) - finally: clear_recordset_list(to_delete, client) @@ -2175,18 +2123,19 @@ def test_cname_recordtype_update_delete_checks(shared_zone_test_context): # ttl, domain name, data assert_failed_change_in_error_response(response[6], input_name="$invalid.host.name.", record_type="CNAME", change_type="DeleteRecordSet", - error_messages=[ - 'Invalid domain name: "$invalid.host.name.", valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) + error_messages=['Invalid domain name: "$invalid.host.name.", valid domain names must be letters, numbers, ' + 'underscores, and hyphens, joined by dots, and terminated with a dot.']) assert_failed_change_in_error_response(response[7], input_name="$another.invalid.host.name.", record_type="CNAME", change_type="DeleteRecordSet", - error_messages=[ - 'Invalid domain name: "$another.invalid.host.name.", valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) + error_messages=['Invalid domain name: "$another.invalid.host.name.", valid domain names must be letters, numbers, ' + 'underscores, and hyphens, joined by dots, and terminated with a dot.']) assert_failed_change_in_error_response(response[8], input_name="$another.invalid.host.name.", ttl=20, record_type="CNAME", record_data="$another.invalid.cname.", - error_messages=[ - 'Invalid TTL: "20", must be a number between 30 and 2147483647.', - 'Invalid domain name: "$another.invalid.host.name.", valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.', - 'Invalid domain name: "$another.invalid.cname.", valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) + error_messages=['Invalid TTL: "20", must be a number between 30 
and 2147483647.', + 'Invalid domain name: "$another.invalid.host.name.", valid domain names must be letters, numbers, ' + 'underscores, and hyphens, joined by dots, and terminated with a dot.', + 'Invalid domain name: "$another.invalid.cname.", valid domain names must be letters, numbers, ' + 'underscores, and hyphens, joined by dots, and terminated with a dot.']) # zone discovery failure assert_failed_change_in_error_response(response[9], input_name="zone.discovery.error.", record_type="CNAME", @@ -2225,7 +2174,6 @@ def test_cname_recordtype_update_delete_checks(shared_zone_test_context): error_messages=[f"Record Name \"existing-cname2.{parent_zone_name}\" Not Unique In Batch Change: " f"cannot have multiple \"CNAME\" records with the same name."]) - finally: # Clean up updates dummy_deletes = [rs for rs in to_delete if rs["zone"]["id"] == dummy_zone["id"]] @@ -2243,6 +2191,7 @@ def test_ptr_recordtype_auth_checks(shared_zone_test_context): ok_client = shared_zone_test_context.ok_vinyldns_client ip4_prefix = shared_zone_test_context.ip4_classless_prefix ip6_prefix = shared_zone_test_context.ip6_prefix + ok_group_name = shared_zone_test_context.ok_group["name"] no_auth_ipv4 = create_recordset(shared_zone_test_context.classless_base_zone, "25", "PTR", [{"ptrdname": "ptrdname.data."}], 200) @@ -2271,19 +2220,19 @@ def test_ptr_recordtype_auth_checks(shared_zone_test_context): assert_failed_change_in_error_response(errors[0], input_name=f"{ip4_prefix}.5", record_type="PTR", record_data="not.authorized.ipv4.ptr.base.", - error_messages=["User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com to make DNS changes."]) + error_messages=[f"User \"dummy\" is not authorized. 
Contact zone owner group: {ok_group_name} at test@test.com to make DNS changes."]) assert_failed_change_in_error_response(errors[1], input_name=f"{ip4_prefix}.193", record_type="PTR", record_data="not.authorized.ipv4.ptr.classless.delegation.", - error_messages=["User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com to make DNS changes."]) + error_messages=[f"User \"dummy\" is not authorized. Contact zone owner group: {ok_group_name} at test@test.com to make DNS changes."]) assert_failed_change_in_error_response(errors[2], input_name=f"{ip6_prefix}:1000::1234", record_type="PTR", record_data="not.authorized.ipv6.ptr.", - error_messages=["User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com to make DNS changes."]) + error_messages=[f"User \"dummy\" is not authorized. Contact zone owner group: {ok_group_name} at test@test.com to make DNS changes."]) assert_failed_change_in_error_response(errors[3], input_name=f"{ip4_prefix}.25", record_type="PTR", record_data=None, change_type="DeleteRecordSet", - error_messages=["User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com to make DNS changes."]) + error_messages=[f"User \"dummy\" is not authorized. Contact zone owner group: {ok_group_name} at test@test.com to make DNS changes."]) assert_failed_change_in_error_response(errors[4], input_name=f"{ip6_prefix}:1000::1234", record_type="PTR", record_data=None, change_type="DeleteRecordSet", - error_messages=["User \"dummy\" is not authorized. Contact zone owner group: ok-group at test@test.com to make DNS changes."]) + error_messages=[f"User \"dummy\" is not authorized. 
Contact zone owner group: {ok_group_name} at test@test.com to make DNS changes."]) finally: clear_recordset_list(to_delete, ok_client) @@ -2320,7 +2269,7 @@ def test_ipv4_ptr_recordtype_add_checks(shared_zone_test_context): get_change_CNAME_json(f"55.192/30.{ip4_zone_name}"), # delegated zone # zone discovery failure - get_change_PTR_json(f"{ip4_prefix}.192"), + get_change_PTR_json(f"192.1.1.100"), # context validation failures get_change_PTR_json(f"{ip4_prefix}.193", ptrdname="existing-ptr."), @@ -2362,16 +2311,16 @@ def test_ipv4_ptr_recordtype_add_checks(shared_zone_test_context): assert_successful_change_in_error_response(response[9], input_name=f"55.192/30.{ip4_zone_name}", record_type="CNAME", record_data="test.com.") # zone discovery failure - assert_failed_change_in_error_response(response[10], input_name="192.0.1.192", record_type="PTR", record_data="test.com.", - error_messages=['Zone Discovery Failed: zone for "192.0.1.192" does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS.']) + assert_failed_change_in_error_response(response[10], input_name="192.1.1.100", record_type="PTR", record_data="test.com.", + error_messages=['Zone Discovery Failed: zone for "192.1.1.100" does not exist in VinylDNS. 
' + 'If zone exists, then it must be connected to in VinylDNS.']) # context validations: existing cname recordset assert_failed_change_in_error_response(response[11], input_name=f"{ip4_prefix}.193", record_type="PTR", record_data="existing-ptr.", - error_messages=['Record f"{ip4_prefix}.193" Already Exists: cannot add an existing record; to update it, issue a DeleteRecordSet then an Add.']) + error_messages=[f'Record "{ip4_prefix}.193" Already Exists: cannot add an existing record; to update it, issue a DeleteRecordSet then an Add.']) assert_failed_change_in_error_response(response[12], input_name=f"{ip4_prefix}.199", record_type="PTR", record_data="existing-cname.", error_messages=[ f'CNAME Conflict: CNAME record names must be unique. Existing record with name "{ip4_prefix}.199" and type "CNAME" conflicts with this record.']) - finally: clear_recordset_list(to_delete, client) @@ -2402,7 +2351,7 @@ def test_ipv4_ptr_recordtype_add_checks(shared_zone_test_context): # assert_failed_change_in_error_response(response[0], input_name=f"{ip4_prefix}.1", record_type="PTR", # record_data="test.com.", # error_messages=[ -# 'Zone Discovery Failed: zone for f"{ip4_prefix}.1" does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS.']) +# f'Zone Discovery Failed: zone for "{ip4_prefix}.1" does not exist in VinylDNS. 
If zone exists, then it must be connected to in VinylDNS.']) # # finally: # # re-create classless base zone and update zone info in shared_zone_test_context for use in future tests @@ -2448,7 +2397,7 @@ def test_ipv4_ptr_recordtype_update_delete_checks(shared_zone_test_context): get_change_PTR_json("192.0.2.", ttl=29, ptrdname="failed-update$.ptr"), # zone discovery failure - get_change_PTR_json("192.0.1.25", change_type="DeleteRecordSet"), + get_change_PTR_json("192.1.1.25", change_type="DeleteRecordSet"), # context validation failures get_change_PTR_json(f"{ip4_prefix}.199", change_type="DeleteRecordSet"), @@ -2494,29 +2443,25 @@ def test_ipv4_ptr_recordtype_update_delete_checks(shared_zone_test_context): error_messages=['Invalid IP address: "192.0.2.".']) assert_failed_change_in_error_response(response[9], ttl=29, input_name="192.0.2.", record_type="PTR", record_data="failed-update$.ptr.", - error_messages=[ - 'Invalid TTL: "29", must be a number between 30 and 2147483647.', - 'Invalid IP address: "192.0.2.".', - 'Invalid domain name: "failed-update$.ptr.", valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) + error_messages=['Invalid TTL: "29", must be a number between 30 and 2147483647.', + 'Invalid IP address: "192.0.2.".', + 'Invalid domain name: "failed-update$.ptr.", valid domain names must be letters, numbers, underscores, and hyphens, ' + 'joined by dots, and terminated with a dot.']) # zone discovery failure - assert_failed_change_in_error_response(response[10], input_name="192.0.1.25", record_type="PTR", + assert_failed_change_in_error_response(response[10], input_name="192.1.1.25", record_type="PTR", record_data=None, change_type="DeleteRecordSet", - error_messages=[ - "Zone Discovery Failed: zone for \"192.0.1.25\" does not exist in VinylDNS. 
If zone exists, then it must be connected to in VinylDNS."]) + error_messages=["Zone Discovery Failed: zone for \"192.1.1.25\" does not exist in VinylDNS. If zone exists, " + "then it must be connected to in VinylDNS."]) # context validation failures: record does not exist assert_failed_change_in_error_response(response[11], input_name=f"{ip4_prefix}.199", record_type="PTR", record_data=None, change_type="DeleteRecordSet", - error_messages=[ - "Record \"192.0.2.199\" Does Not Exist: cannot delete a record that does not exist."]) - assert_successful_change_in_error_response(response[12], ttl=300, input_name=f"{ip4_prefix}.200", record_type="PTR", - record_data="has-updated.ptr.") + error_messages=[f"Record \"{ip4_prefix}.199\" Does Not Exist: cannot delete a record that does not exist."]) + assert_successful_change_in_error_response(response[12], ttl=300, input_name=f"{ip4_prefix}.200", record_type="PTR", record_data="has-updated.ptr.") assert_failed_change_in_error_response(response[13], input_name=f"{ip4_prefix}.200", record_type="PTR", record_data=None, change_type="DeleteRecordSet", - error_messages=[ - "Record \"192.0.2.200\" Does Not Exist: cannot delete a record that does not exist."]) - + error_messages=[f"Record \"{ip4_prefix}.200\" Does Not Exist: cannot delete a record that does not exist."]) finally: clear_recordset_list(to_delete, ok_client) @@ -2565,12 +2510,11 @@ def test_ipv6_ptr_recordtype_add_checks(shared_zone_test_context): # independent validations: bad TTL, malformed host name/IP address, duplicate record assert_failed_change_in_error_response(response[1], input_name=f"{ip6_prefix}:1000::abe", ttl=29, record_type="PTR", record_data="test.com.", - error_messages=[ - 'Invalid TTL: "29", must be a number between 30 and 2147483647.']) + error_messages=['Invalid TTL: "29", must be a number between 30 and 2147483647.']) assert_failed_change_in_error_response(response[2], input_name=f"{ip6_prefix}:1000::bae", record_type="PTR", 
record_data="$malformed.hostname.", - error_messages=[ - 'Invalid domain name: "$malformed.hostname.", valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) + error_messages=['Invalid domain name: "$malformed.hostname.", valid domain names must be letters, numbers, ' + 'underscores, and hyphens, joined by dots, and terminated with a dot.']) assert_failed_change_in_error_response(response[3], input_name="fd69:27cc:fe91de::ab", record_type="PTR", record_data="malformed.ip.address.", error_messages=['Invalid IP address: "fd69:27cc:fe91de::ab".']) @@ -2578,15 +2522,14 @@ def test_ipv6_ptr_recordtype_add_checks(shared_zone_test_context): # zone discovery failure assert_failed_change_in_error_response(response[4], input_name="fedc:ba98:7654::abc", record_type="PTR", record_data="zone.discovery.error.", - error_messages=[ - "Zone Discovery Failed: zone for \"fedc:ba98:7654::abc\" does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS."]) + error_messages=["Zone Discovery Failed: zone for \"fedc:ba98:7654::abc\" does not exist in VinylDNS. 
" + "If zone exists, then it must be connected to in VinylDNS."]) # context validations: existing record sets pre-request assert_failed_change_in_error_response(response[5], input_name=f"{ip6_prefix}:1000::bbbb", record_type="PTR", record_data="existing.ptr.", - error_messages=[ - "Record \"fd69:27cc:fe91:1000::bbbb\" Already Exists: cannot add an existing record; to update it, issue a DeleteRecordSet then an Add."]) - + error_messages=[f"Record \"{ip6_prefix}:1000::bbbb\" Already Exists: cannot add an existing record; " + "to update it, issue a DeleteRecordSet then an Add."]) finally: clear_recordset_list(to_delete, client) @@ -2656,29 +2599,26 @@ def test_ipv6_ptr_recordtype_update_delete_checks(shared_zone_test_context): error_messages=['Invalid IP address: "fd69:27cc:fe91de::ba".']) assert_failed_change_in_error_response(response[5], ttl=29, input_name="fd69:27cc:fe91de::ba", record_type="PTR", record_data="failed-update$.ptr.", - error_messages=[ - 'Invalid TTL: "29", must be a number between 30 and 2147483647.', - 'Invalid IP address: "fd69:27cc:fe91de::ba".', - 'Invalid domain name: "failed-update$.ptr.", valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) + error_messages=['Invalid TTL: "29", must be a number between 30 and 2147483647.', + 'Invalid IP address: "fd69:27cc:fe91de::ba".', + 'Invalid domain name: "failed-update$.ptr.", valid domain names must be letters, numbers, underscores, ' + 'and hyphens, joined by dots, and terminated with a dot.']) # zone discovery failure assert_failed_change_in_error_response(response[6], input_name="fedc:ba98:7654::abc", record_type="PTR", record_data=None, change_type="DeleteRecordSet", - error_messages=[ - "Zone Discovery Failed: zone for \"fedc:ba98:7654::abc\" does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS."]) + error_messages=["Zone Discovery Failed: zone for \"fedc:ba98:7654::abc\" does not exist in VinylDNS. 
" + "If zone exists, then it must be connected to in VinylDNS."]) # context validation failures: record does not exist, failure on update with double add assert_failed_change_in_error_response(response[7], input_name=f"{ip6_prefix}:1000::60", record_type="PTR", record_data=None, change_type="DeleteRecordSet", - error_messages=[ - "Record \"fd69:27cc:fe91:1000::60\" Does Not Exist: cannot delete a record that does not exist."]) + error_messages=[f"Record \"{ip6_prefix}:1000::60\" Does Not Exist: cannot delete a record that does not exist."]) assert_successful_change_in_error_response(response[8], ttl=300, input_name=f"{ip6_prefix}:1000::65", record_type="PTR", record_data="has-updated.ptr.") assert_failed_change_in_error_response(response[9], input_name=f"{ip6_prefix}:1000::65", record_type="PTR", record_data=None, change_type="DeleteRecordSet", - error_messages=[ - "Record \"fd69:27cc:fe91:1000::65\" Does Not Exist: cannot delete a record that does not exist."]) - + error_messages=[f"Record \"{ip6_prefix}:1000::65\" Does Not Exist: cannot delete a record that does not exist."]) finally: clear_recordset_list(to_delete, ok_client) @@ -2702,7 +2642,7 @@ def test_txt_recordtype_add_checks(shared_zone_test_context): existing_cname = create_recordset(shared_zone_test_context.ok_zone, existing_cname_name, "CNAME", [{"cname": "test."}], 100) - good_record_fqdn = generate_record_name("ok.") + good_record_fqdn = generate_record_name(ok_zone_name) batch_change_input = { "changes": [ # valid change @@ -2747,28 +2687,27 @@ def test_txt_recordtype_add_checks(shared_zone_test_context): # zone discovery failure assert_failed_change_in_error_response(response[2], input_name="no.zone.at.all.", record_type="TXT", record_data="test", - error_messages=[ - 'Zone Discovery Failed: zone for "no.zone.at.all." does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS.']) + error_messages=['Zone Discovery Failed: zone for "no.zone.at.all." 
does not exist in VinylDNS. ' + 'If zone exists, then it must be connected to in VinylDNS.']) # context validations: cname duplicate assert_failed_change_in_error_response(response[3], input_name=f"cname-duplicate.{ok_zone_name}", record_type="CNAME", record_data="test.com.", - error_messages=[ - "Record Name \"cname-duplicate.ok.\" Not Unique In Batch Change: cannot have multiple \"CNAME\" records with the same name."]) + error_messages=[f"Record Name \"cname-duplicate.{ok_zone_name}\" Not Unique In Batch Change: " + f"cannot have multiple \"CNAME\" records with the same name."]) # context validations: conflicting recordsets, unauthorized error assert_failed_change_in_error_response(response[5], input_name=existing_txt_fqdn, record_type="TXT", record_data="test", - error_messages=[ - "Record \"" + existing_txt_fqdn + "\" Already Exists: cannot add an existing record; to update it, issue a DeleteRecordSet then an Add."]) + error_messages=[f"Record \"{existing_txt_fqdn}\" Already Exists: " + f"cannot add an existing record; to update it, issue a DeleteRecordSet then an Add."]) assert_failed_change_in_error_response(response[6], input_name=existing_cname_fqdn, record_type="TXT", record_data="test", - error_messages=[ - "CNAME Conflict: CNAME record names must be unique. Existing record with name \"" + existing_cname_fqdn + "\" and type \"CNAME\" conflicts with this record."]) + error_messages=[f"CNAME Conflict: CNAME record names must be unique. " + f"Existing record with name \"{existing_cname_fqdn}\" and type \"CNAME\" conflicts with this record."]) assert_failed_change_in_error_response(response[7], input_name=f"user-add-unauthorized.{dummy_zone_name}", record_type="TXT", record_data="test", error_messages=[f"User \"ok\" is not authorized. 
Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes."]) - finally: clear_recordset_list(to_delete, client) @@ -2845,50 +2784,35 @@ def test_txt_recordtype_update_delete_checks(shared_zone_test_context): response = ok_client.create_batch_change(batch_change_input, status=400) # successful changes - assert_successful_change_in_error_response(response[0], input_name=rs_delete_fqdn, record_type="TXT", - record_data=None, change_type="DeleteRecordSet") - assert_successful_change_in_error_response(response[1], input_name=rs_update_fqdn, record_type="TXT", - record_data=None, change_type="DeleteRecordSet") - assert_successful_change_in_error_response(response[2], ttl=300, input_name=rs_update_fqdn, record_type="TXT", - record_data="test") + assert_successful_change_in_error_response(response[0], input_name=rs_delete_fqdn, record_type="TXT", record_data=None, change_type="DeleteRecordSet") + assert_successful_change_in_error_response(response[1], input_name=rs_update_fqdn, record_type="TXT", record_data=None, change_type="DeleteRecordSet") + assert_successful_change_in_error_response(response[2], ttl=300, input_name=rs_update_fqdn, record_type="TXT", record_data="test") # input validations failures: invalid input name, reverse zone error, invalid ttl - assert_failed_change_in_error_response(response[3], input_name=f"invalid-name$.{ok_zone_name}", record_type="TXT", - record_data="test", change_type="DeleteRecordSet", - error_messages=[ - f'Invalid domain name: "invalid-name$.{ok_zone_name}", valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) - assert_failed_change_in_error_response(response[4], input_name=f"invalid-ttl.{ok_zone_name}", ttl=29, record_type="TXT", - record_data="bad-ttl", - error_messages=[ - 'Invalid TTL: "29", must be a number between 30 and 2147483647.']) + assert_failed_change_in_error_response(response[3], input_name=f"invalid-name$.{ok_zone_name}", 
record_type="TXT", record_data="test", change_type="DeleteRecordSet", + error_messages=[f'Invalid domain name: "invalid-name$.{ok_zone_name}", valid domain names must be ' + f'letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) + assert_failed_change_in_error_response(response[4], input_name=f"invalid-ttl.{ok_zone_name}", ttl=29, record_type="TXT", record_data="bad-ttl", + error_messages=['Invalid TTL: "29", must be a number between 30 and 2147483647.']) # zone discovery failure - assert_failed_change_in_error_response(response[5], input_name="no.zone.at.all.", record_type="TXT", - record_data=None, change_type="DeleteRecordSet", + assert_failed_change_in_error_response(response[5], input_name="no.zone.at.all.", record_type="TXT", record_data=None, change_type="DeleteRecordSet", error_messages=[ - "Zone Discovery Failed: zone for \"no.zone.at.all.\" does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS."]) + "Zone Discovery Failed: zone for \"no.zone.at.all.\" does not exist in VinylDNS. 
" + "If zone exists, then it must be connected to in VinylDNS."]) # context validation failures: record does not exist, not authorized - assert_failed_change_in_error_response(response[6], input_name=f"delete-nonexistent.{ok_zone_name}", record_type="TXT", - record_data=None, change_type="DeleteRecordSet", - error_messages=[ - "Record \"delete-nonexistent.ok.\" Does Not Exist: cannot delete a record that does not exist."]) - assert_failed_change_in_error_response(response[7], input_name=f"update-nonexistent.{ok_zone_name}", record_type="TXT", - record_data=None, change_type="DeleteRecordSet", - error_messages=[ - "Record \"update-nonexistent.ok.\" Does Not Exist: cannot delete a record that does not exist."]) - assert_successful_change_in_error_response(response[8], input_name=f"update-nonexistent.{ok_zone_name}", record_type="TXT", - record_data="test") - assert_failed_change_in_error_response(response[9], input_name=rs_delete_dummy_fqdn, record_type="TXT", - record_data=None, change_type="DeleteRecordSet", + assert_failed_change_in_error_response(response[6], input_name=f"delete-nonexistent.{ok_zone_name}", record_type="TXT", record_data=None, change_type="DeleteRecordSet", + error_messages=[f"Record \"delete-nonexistent.{ok_zone_name}\" Does Not Exist: cannot delete a record that does not exist."]) + assert_failed_change_in_error_response(response[7], input_name=f"update-nonexistent.{ok_zone_name}", record_type="TXT", record_data=None, change_type="DeleteRecordSet", + error_messages=[f"Record \"update-nonexistent.{ok_zone_name}\" Does Not Exist: cannot delete a record that does not exist."]) + assert_successful_change_in_error_response(response[8], input_name=f"update-nonexistent.{ok_zone_name}", record_type="TXT", record_data="test") + assert_failed_change_in_error_response(response[9], input_name=rs_delete_dummy_fqdn, record_type="TXT", record_data=None, change_type="DeleteRecordSet", error_messages=[f"User \"ok\" is not authorized. 
Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes."]) - assert_failed_change_in_error_response(response[10], input_name=rs_update_dummy_fqdn, record_type="TXT", - record_data="test", + assert_failed_change_in_error_response(response[10], input_name=rs_update_dummy_fqdn, record_type="TXT", record_data="test", error_messages=[f"User \"ok\" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes."]) - assert_failed_change_in_error_response(response[11], input_name=rs_update_dummy_fqdn, record_type="TXT", - record_data=None, change_type="DeleteRecordSet", + assert_failed_change_in_error_response(response[11], input_name=rs_update_dummy_fqdn, record_type="TXT", record_data=None, change_type="DeleteRecordSet", error_messages=[f"User \"ok\" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes."]) - finally: # Clean up updates dummy_deletes = [rs for rs in to_delete if rs["zone"]["id"] == dummy_zone["id"]] @@ -2908,16 +2832,14 @@ def test_mx_recordtype_add_checks(shared_zone_test_context): ip4_zone_name = shared_zone_test_context.classless_base_zone["name"] existing_mx_name = generate_record_name() - existing_mx_fqdn = existing_mx_name + f".{ok_zone_name}" - existing_mx = create_recordset(shared_zone_test_context.ok_zone, existing_mx_name, "MX", - [{"preference": 1, "exchange": "foo.bar."}], 100) + existing_mx_fqdn = f"{existing_mx_name}.{ok_zone_name}" + existing_mx = create_recordset(shared_zone_test_context.ok_zone, existing_mx_name, "MX", [{"preference": 1, "exchange": "foo.bar."}], 100) existing_cname_name = generate_record_name() - existing_cname_fqdn = existing_cname_name + f".{ok_zone_name}" - existing_cname = create_recordset(shared_zone_test_context.ok_zone, existing_cname_name, "CNAME", - [{"cname": "test."}], 100) + existing_cname_fqdn = f"{existing_cname_name}.{ok_zone_name}" + existing_cname = 
create_recordset(shared_zone_test_context.ok_zone, existing_cname_name, "CNAME", [{"cname": "test."}], 100) - good_record_fqdn = generate_record_name("ok.") + good_record_fqdn = generate_record_name(ok_zone_name) batch_change_input = { "changes": [ # valid change @@ -2951,54 +2873,50 @@ def test_mx_recordtype_add_checks(shared_zone_test_context): response = client.create_batch_change(batch_change_input, status=400) # successful changes - assert_successful_change_in_error_response(response[0], input_name=good_record_fqdn, record_type="MX", - record_data={"preference": 1, "exchange": "foo.bar."}) + assert_successful_change_in_error_response(response[0], input_name=good_record_fqdn, record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}) # ttl, domain name, record data - assert_failed_change_in_error_response(response[1], input_name=f"bad-ttl-and-invalid-name$.{ok_zone_name}", ttl=29, - record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, - error_messages=[ - 'Invalid TTL: "29", must be a number between 30 and 2147483647.', - f'Invalid domain name: "bad-ttl-and-invalid-name$.{ok_zone_name}", ' - "valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot."]) + assert_failed_change_in_error_response(response[1], input_name=f"bad-ttl-and-invalid-name$.{ok_zone_name}", ttl=29, record_type="MX", + record_data={"preference": 1, "exchange": "foo.bar."}, + error_messages=['Invalid TTL: "29", must be a number between 30 and 2147483647.', + f'Invalid domain name: "bad-ttl-and-invalid-name$.{ok_zone_name}", ' + "valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot."]) assert_failed_change_in_error_response(response[2], input_name=f"bad-exchange.{ok_zone_name}", record_type="MX", record_data={"preference": 1, "exchange": "foo$.bar."}, - error_messages=[ - 'Invalid domain name: "foo$.bar.", valid domain names must be letters, 
numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) + error_messages=['Invalid domain name: "foo$.bar.", valid domain names must be letters, numbers, underscores, and hyphens, ' + 'joined by dots, and terminated with a dot.']) assert_failed_change_in_error_response(response[3], input_name=f"mx.{ip4_zone_name}", record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, - error_messages=[ - f'Invalid Record Type In Reverse Zone: record with name "mx.{ip4_zone_name}" and type "MX" is not allowed in a reverse zone.']) + error_messages=[f'Invalid Record Type In Reverse Zone: record with name "mx.{ip4_zone_name}" and type "MX" is not allowed in a reverse zone.']) # zone discovery failures assert_failed_change_in_error_response(response[4], input_name=f"no.subzone.{ok_zone_name}", record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, - error_messages=[ - f'Zone Discovery Failed: zone for "no.subzone.{ok_zone_name}" does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS.']) + error_messages=[f'Zone Discovery Failed: zone for "no.subzone.{ok_zone_name}" does not exist in VinylDNS. ' + f'If zone exists, then it must be connected to in VinylDNS.']) assert_failed_change_in_error_response(response[5], input_name="no.zone.at.all.", record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, - error_messages=[ - 'Zone Discovery Failed: zone for "no.zone.at.all." does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS.']) + error_messages=['Zone Discovery Failed: zone for "no.zone.at.all." does not exist in VinylDNS. 
' + 'If zone exists, then it must be connected to in VinylDNS.']) # context validations: cname duplicate assert_failed_change_in_error_response(response[6], input_name=f"cname-duplicate.{ok_zone_name}", record_type="CNAME", record_data="test.com.", - error_messages=[ - "Record Name \"cname-duplicate.ok.\" Not Unique In Batch Change: cannot have multiple \"CNAME\" records with the same name."]) + error_messages=[f"Record Name \"cname-duplicate.{ok_zone_name}\" Not Unique In Batch Change: " + f"cannot have multiple \"CNAME\" records with the same name."]) # context validations: conflicting recordsets, unauthorized error assert_failed_change_in_error_response(response[8], input_name=existing_mx_fqdn, record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, - error_messages=[ - "Record \"" + existing_mx_fqdn + "\" Already Exists: cannot add an existing record; to update it, issue a DeleteRecordSet then an Add."]) + error_messages=[f"Record \"{existing_mx_fqdn}\" Already Exists: cannot add an existing record; to update it, " + f"issue a DeleteRecordSet then an Add."]) assert_failed_change_in_error_response(response[9], input_name=existing_cname_fqdn, record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, - error_messages=[ - "CNAME Conflict: CNAME record names must be unique. Existing record with name \"" + existing_cname_fqdn + "\" and type \"CNAME\" conflicts with this record."]) - assert_failed_change_in_error_response(response[10], input_name=f"user-add-unauthorized.{dummy_zone_name}", - record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, + error_messages=["CNAME Conflict: CNAME record names must be unique. 
" + f"Existing record with name \"{existing_cname_fqdn}\" and type \"CNAME\" conflicts with this record."]) + assert_failed_change_in_error_response(response[10], input_name=f"user-add-unauthorized.{dummy_zone_name}", record_type="MX", + record_data={"preference": 1, "exchange": "foo.bar."}, error_messages=[f"User \"ok\" is not authorized. Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes."]) - finally: clear_recordset_list(to_delete, client) @@ -3027,13 +2945,11 @@ def test_mx_recordtype_update_delete_checks(shared_zone_test_context): rs_delete_dummy_name = generate_record_name() rs_delete_dummy_fqdn = rs_delete_dummy_name + f".{dummy_zone_name}" - rs_delete_dummy = create_recordset(dummy_zone, rs_delete_dummy_name, "MX", - [{"preference": 1, "exchange": "foo.bar."}], 200) + rs_delete_dummy = create_recordset(dummy_zone, rs_delete_dummy_name, "MX", [{"preference": 1, "exchange": "foo.bar."}], 200) rs_update_dummy_name = generate_record_name() rs_update_dummy_fqdn = rs_update_dummy_name + f".{dummy_zone_name}" - rs_update_dummy = create_recordset(dummy_zone, rs_update_dummy_name, "MX", - [{"preference": 1, "exchange": "foo.bar."}], 200) + rs_update_dummy = create_recordset(dummy_zone, rs_update_dummy_name, "MX", [{"preference": 1, "exchange": "foo.bar."}], 200) batch_change_input = { "comments": "this is optional", @@ -3081,47 +2997,40 @@ def test_mx_recordtype_update_delete_checks(shared_zone_test_context): response = ok_client.create_batch_change(batch_change_input, status=400) # successful changes - assert_successful_change_in_error_response(response[0], input_name=rs_delete_fqdn, record_type="MX", - record_data=None, change_type="DeleteRecordSet") - assert_successful_change_in_error_response(response[1], input_name=rs_update_fqdn, record_type="MX", - record_data=None, change_type="DeleteRecordSet") - assert_successful_change_in_error_response(response[2], ttl=300, input_name=rs_update_fqdn, record_type="MX", - 
record_data={"preference": 1, "exchange": "foo.bar."}) + assert_successful_change_in_error_response(response[0], input_name=rs_delete_fqdn, record_type="MX", record_data=None, change_type="DeleteRecordSet") + assert_successful_change_in_error_response(response[1], input_name=rs_update_fqdn, record_type="MX", record_data=None, change_type="DeleteRecordSet") + assert_successful_change_in_error_response(response[2], ttl=300, input_name=rs_update_fqdn, record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}) # input validations failures: invalid input name, reverse zone error, invalid ttl - assert_failed_change_in_error_response(response[3], input_name=f"invalid-name$.{ok_zone_name}", record_type="MX", - record_data={"preference": 1, "exchange": "foo.bar."}, + assert_failed_change_in_error_response(response[3], input_name=f"invalid-name$.{ok_zone_name}", record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, change_type="DeleteRecordSet", - error_messages=[ - f'Invalid domain name: "invalid-name$.{ok_zone_name}", valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) + error_messages=[f'Invalid domain name: "invalid-name$.{ok_zone_name}", valid domain names must be letters, ' + f'numbers, underscores, and hyphens, joined by dots, and terminated with a dot.']) assert_failed_change_in_error_response(response[4], input_name=f"delete.{ok_zone_name}", ttl=29, record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, - error_messages=[ - 'Invalid TTL: "29", must be a number between 30 and 2147483647.']) + error_messages=['Invalid TTL: "29", must be a number between 30 and 2147483647.']) assert_failed_change_in_error_response(response[5], input_name=f"bad-exchange.{ok_zone_name}", record_type="MX", record_data={"preference": 1, "exchange": "foo$.bar."}, - error_messages=[ - 'Invalid domain name: "foo$.bar.", valid domain names must be letters, numbers, underscores, 
and hyphens, joined by dots, and terminated with a dot.']) + error_messages=['Invalid domain name: "foo$.bar.", valid domain names must be letters, numbers, ' + 'underscores, and hyphens, joined by dots, and terminated with a dot.']) assert_failed_change_in_error_response(response[6], input_name=f"mx.{ip4_zone_name}", record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, - error_messages=[ - f'Invalid Record Type In Reverse Zone: record with name "mx.{ip4_zone_name}" and type "MX" is not allowed in a reverse zone.']) + error_messages=[f'Invalid Record Type In Reverse Zone: record with name "mx.{ip4_zone_name}" ' + f'and type "MX" is not allowed in a reverse zone.']) # zone discovery failure assert_failed_change_in_error_response(response[7], input_name="no.zone.at.all.", record_type="MX", record_data=None, change_type="DeleteRecordSet", - error_messages=[ - "Zone Discovery Failed: zone for \"no.zone.at.all.\" does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS."]) + error_messages=["Zone Discovery Failed: zone for \"no.zone.at.all.\" does not exist in VinylDNS. 
" + "If zone exists, then it must be connected to in VinylDNS."]) # context validation failures: record does not exist, not authorized assert_failed_change_in_error_response(response[8], input_name=f"delete-nonexistent.{ok_zone_name}", record_type="MX", record_data=None, change_type="DeleteRecordSet", - error_messages=[ - "Record \"delete-nonexistent.ok.\" Does Not Exist: cannot delete a record that does not exist."]) + error_messages=[f"Record \"delete-nonexistent.{ok_zone_name}\" Does Not Exist: cannot delete a record that does not exist."]) assert_failed_change_in_error_response(response[9], input_name=f"update-nonexistent.{ok_zone_name}", record_type="MX", record_data=None, change_type="DeleteRecordSet", - error_messages=[ - "Record \"update-nonexistent.ok.\" Does Not Exist: cannot delete a record that does not exist."]) + error_messages=[f"Record \"update-nonexistent.{ok_zone_name}\" Does Not Exist: cannot delete a record that does not exist."]) assert_successful_change_in_error_response(response[10], input_name=f"update-nonexistent.{ok_zone_name}", record_type="MX", record_data={"preference": 1000, "exchange": "foo.bar."}) assert_failed_change_in_error_response(response[11], input_name=rs_delete_dummy_fqdn, record_type="MX", @@ -3133,7 +3042,6 @@ def test_mx_recordtype_update_delete_checks(shared_zone_test_context): assert_failed_change_in_error_response(response[13], input_name=rs_update_dummy_fqdn, record_type="MX", record_data=None, change_type="DeleteRecordSet", error_messages=[f"User \"ok\" is not authorized. 
Contact zone owner group: {dummy_group_name} at test@test.com to make DNS changes."]) - finally: # Clean up updates dummy_deletes = [rs for rs in to_delete if rs["zone"]["id"] == dummy_zone["id"]] @@ -3142,78 +3050,6 @@ def test_mx_recordtype_update_delete_checks(shared_zone_test_context): clear_recordset_list(ok_deletes, ok_client) -def test_user_validation_ownership(shared_zone_test_context): - """ - Confirm that test users cannot add/edit/delete records in non-test zones (via zone admin group) - """ - client = shared_zone_test_context.shared_zone_vinyldns_client - batch_change_input = { - "changes": [ - get_change_A_AAAA_json("add-test-batch.non.test.shared."), - get_change_A_AAAA_json("update-test-batch.non.test.shared.", change_type="DeleteRecordSet"), - get_change_A_AAAA_json("update-test-batch.non.test.shared."), - get_change_A_AAAA_json("delete-test-batch.non.test.shared.", change_type="DeleteRecordSet"), - - get_change_A_AAAA_json("add-test-batch.shared."), - get_change_A_AAAA_json("update-test-batch.shared.", change_type="DeleteRecordSet"), - get_change_A_AAAA_json("update-test-batch.shared."), - get_change_A_AAAA_json("delete-test-batch.shared.", change_type="DeleteRecordSet"), - ], - "ownerGroupId": "shared-zone-group" - } - - response = client.create_batch_change(batch_change_input, status=400) - assert_failed_change_in_error_response(response[0], input_name="add-test-batch.non.test.shared.", - record_data="1.1.1.1", - error_messages=["User \"sharedZoneUser\" is not authorized. Contact zone owner group: testSharedZoneGroup at email to make DNS changes."]) - assert_failed_change_in_error_response(response[1], input_name="update-test-batch.non.test.shared.", - change_type="DeleteRecordSet", - error_messages=["User \"sharedZoneUser\" is not authorized. 
Contact zone owner group: testSharedZoneGroup at email to make DNS changes."]) - assert_failed_change_in_error_response(response[2], input_name="update-test-batch.non.test.shared.", - record_data="1.1.1.1", - error_messages=["User \"sharedZoneUser\" is not authorized. Contact zone owner group: testSharedZoneGroup at email to make DNS changes."]) - assert_failed_change_in_error_response(response[3], input_name="delete-test-batch.non.test.shared.", - change_type="DeleteRecordSet", - error_messages=["User \"sharedZoneUser\" is not authorized. Contact zone owner group: testSharedZoneGroup at email to make DNS changes."]) - - assert_successful_change_in_error_response(response[4], input_name="add-test-batch.shared.") - assert_successful_change_in_error_response(response[5], input_name="update-test-batch.shared.", - change_type="DeleteRecordSet") - assert_successful_change_in_error_response(response[6], input_name="update-test-batch.shared.") - assert_successful_change_in_error_response(response[7], input_name="delete-test-batch.shared.", - change_type="DeleteRecordSet") - - -def test_user_validation_shared(shared_zone_test_context): - """ - Confirm that test users cannot add/edit/delete records in non-test zones (via shared access) - """ - client = shared_zone_test_context.ok_vinyldns_client - batch_change_input = { - "changes": [ - get_change_A_AAAA_json("add-test-batch.non.test.shared."), - get_change_A_AAAA_json("update-test-batch.non.test.shared.", change_type="DeleteRecordSet"), - get_change_A_AAAA_json("update-test-batch.non.test.shared."), - get_change_A_AAAA_json("delete-test-batch.non.test.shared.", change_type="DeleteRecordSet") - ], - "ownerGroupId": shared_zone_test_context.ok_group["id"] - } - - response = client.create_batch_change(batch_change_input, status=400) - assert_failed_change_in_error_response(response[0], input_name="add-test-batch.non.test.shared.", - record_data="1.1.1.1", - error_messages=["User \"ok\" is not authorized. 
Contact zone owner group: testSharedZoneGroup at email to make DNS changes."]) - assert_failed_change_in_error_response(response[1], input_name="update-test-batch.non.test.shared.", - change_type="DeleteRecordSet", - error_messages=["User \"ok\" is not authorized. Contact zone owner group: testSharedZoneGroup at email to make DNS changes."]) - assert_failed_change_in_error_response(response[2], input_name="update-test-batch.non.test.shared.", - record_data="1.1.1.1", - error_messages=["User \"ok\" is not authorized. Contact zone owner group: testSharedZoneGroup at email to make DNS changes."]) - assert_failed_change_in_error_response(response[3], input_name="delete-test-batch.non.test.shared.", - change_type="DeleteRecordSet", - error_messages=["User \"ok\" is not authorized. Contact zone owner group: testSharedZoneGroup at email to make DNS changes."]) - - def test_create_batch_change_does_not_save_owner_group_id_for_non_shared_zone(shared_zone_test_context): """ Test successfully creating a batch change with owner group ID doesn't save value for records in non-shared zone @@ -3258,7 +3094,6 @@ def test_create_batch_change_does_not_save_owner_group_id_for_non_shared_zone(sh for (zoneId, recordSetId) in to_delete: get_recordset = ok_client.get_recordset(zoneId, recordSetId, status=200) assert_that(get_recordset["recordSet"], is_not(has_key("ownerGroupId"))) - finally: clear_zoneid_rsid_tuple_list(to_delete, ok_client) @@ -3270,20 +3105,19 @@ def test_create_batch_change_for_shared_zone_owner_group_applied_logic(shared_zo """ shared_client = shared_zone_test_context.shared_zone_vinyldns_client shared_zone = shared_zone_test_context.shared_zone + shared_zone_name = shared_zone_test_context.shared_zone["name"] shared_record_group = shared_zone_test_context.shared_record_group without_group_name = generate_record_name() - without_group_fqdn = without_group_name + ".shared." 
- update_rs_without_owner_group = create_recordset(shared_zone, without_group_name, "A", - [{"address": "127.0.0.1"}], 300) + without_group_fqdn = f"{without_group_name}.{shared_zone_name}" + update_rs_without_owner_group = create_recordset(shared_zone, without_group_name, "A", [{"address": "127.0.0.1"}], 300) with_group_name = generate_record_name() - with_group_fqdn = with_group_name + ".shared." - update_rs_with_owner_group = create_recordset(shared_zone, with_group_name, "A", - [{"address": "127.0.0.1"}], 300, shared_record_group["id"]) + with_group_fqdn = f"{with_group_name}.{shared_zone_name}" + update_rs_with_owner_group = create_recordset(shared_zone, with_group_name, "A", [{"address": "127.0.0.1"}], 300, shared_record_group["id"]) create_name = generate_record_name() - create_fqdn = create_name + ".shared." + create_fqdn = f"{create_name}.{shared_zone_name}" batch_change_input = { "changes": [ get_change_A_AAAA_json(create_fqdn, address="4.3.2.1"), @@ -3301,16 +3135,14 @@ def test_create_batch_change_for_shared_zone_owner_group_applied_logic(shared_zo create_result = shared_client.create_recordset(update_rs_without_owner_group, status=202) to_delete.append(shared_client.wait_until_recordset_change_status(create_result, "Complete")) - create_result = shared_client.get_recordset(create_result["recordSet"]["zoneId"], - create_result["recordSet"]["id"], status=200) + create_result = shared_client.get_recordset(create_result["recordSet"]["zoneId"], create_result["recordSet"]["id"], status=200) assert_that(create_result["recordSet"], is_not(has_key("ownerGroupId"))) # Create second record for updating and verify that owner group ID is set create_result = shared_client.create_recordset(update_rs_with_owner_group, status=202) to_delete.append(shared_client.wait_until_recordset_change_status(create_result, "Complete")) - create_result = shared_client.get_recordset(create_result["recordSet"]["zoneId"], - create_result["recordSet"]["id"], status=200) + create_result 
= shared_client.get_recordset(create_result["recordSet"]["zoneId"], create_result["recordSet"]["id"], status=200) assert_that(create_result["recordSet"]["ownerGroupId"], is_(shared_record_group["id"])) # Create batch @@ -3322,25 +3154,11 @@ def test_create_batch_change_for_shared_zone_owner_group_applied_logic(shared_zo to_delete = [(change["zoneId"], change["recordSetId"]) for change in completed_batch["changes"]] assert_that(result["ownerGroupId"], is_("shared-zone-group")) - assert_change_success(result["changes"], zone=shared_zone, index=0, - record_name=create_name, - input_name=create_fqdn, record_data="4.3.2.1") - assert_change_success(result["changes"], zone=shared_zone, index=1, - record_name=without_group_name, - input_name=without_group_fqdn, - record_data="1.2.3.4") - assert_change_success(result["changes"], zone=shared_zone, index=2, - record_name=without_group_name, - input_name=without_group_fqdn, - change_type="DeleteRecordSet", record_data=None) - assert_change_success(result["changes"], zone=shared_zone, index=3, - record_name=with_group_name, - input_name=with_group_fqdn, - record_data="1.2.3.4") - assert_change_success(result["changes"], zone=shared_zone, index=4, - record_name=with_group_name, - input_name=with_group_fqdn, - change_type="DeleteRecordSet", record_data=None) + assert_change_success(result["changes"], zone=shared_zone, index=0, record_name=create_name, input_name=create_fqdn, record_data="4.3.2.1") + assert_change_success(result["changes"], zone=shared_zone, index=1, record_name=without_group_name, input_name=without_group_fqdn, record_data="1.2.3.4") + assert_change_success(result["changes"], zone=shared_zone, index=2, record_name=without_group_name, input_name=without_group_fqdn, change_type="DeleteRecordSet", record_data=None) + assert_change_success(result["changes"], zone=shared_zone, index=3, record_name=with_group_name, input_name=with_group_fqdn, record_data="1.2.3.4") + assert_change_success(result["changes"], 
zone=shared_zone, index=4, record_name=with_group_name, input_name=with_group_fqdn, change_type="DeleteRecordSet", record_data=None) for (zoneId, recordSetId) in to_delete: get_recordset = shared_client.get_recordset(zoneId, recordSetId, status=200) @@ -3348,7 +3166,6 @@ def test_create_batch_change_for_shared_zone_owner_group_applied_logic(shared_zo assert_that(get_recordset["recordSet"]["ownerGroupId"], is_(shared_record_group["id"])) else: assert_that(get_recordset["recordSet"]["ownerGroupId"], is_(batch_change_input["ownerGroupId"])) - finally: clear_zoneid_rsid_tuple_list(to_delete, shared_client) @@ -3358,10 +3175,11 @@ def test_create_batch_change_for_shared_zone_with_invalid_owner_group_id_fails(s Test creating a batch change with invalid owner group ID fails """ shared_client = shared_zone_test_context.shared_zone_vinyldns_client + shared_zone_name = shared_zone_test_context.shared_zone["name"] batch_change_input = { "changes": [ - get_change_A_AAAA_json("no-owner-group-id.shared.", address="4.3.2.1") + get_change_A_AAAA_json(f"no-owner-group-id.{shared_zone_name}", address="4.3.2.1") ], "ownerGroupId": "non-existent-owner-group-id" } @@ -3376,17 +3194,17 @@ def test_create_batch_change_for_shared_zone_with_unauthorized_owner_group_id_fa """ shared_client = shared_zone_test_context.shared_zone_vinyldns_client ok_group = shared_zone_test_context.ok_group + shared_zone_name = shared_zone_test_context.shared_zone["name"] batch_change_input = { "changes": [ - get_change_A_AAAA_json("no-owner-group-id.shared.", address="4.3.2.1") + get_change_A_AAAA_json(f"no-owner-group-id.{shared_zone_name}", address="4.3.2.1") ], "ownerGroupId": ok_group["id"] } errors = shared_client.create_batch_change(batch_change_input, status=400)["errors"] - assert_that(errors, contains_exactly('User "sharedZoneUser" must be a member of group "' + ok_group[ - "id"] + '" to apply this group to batch changes.')) + assert_that(errors, contains_exactly('User "sharedZoneUser" must be a 
member of group "' + ok_group["id"] + '" to apply this group to batch changes.')) def test_create_batch_change_validation_with_owner_group_id(shared_zone_test_context): @@ -3408,37 +3226,36 @@ def test_create_batch_change_validation_with_owner_group_id(shared_zone_test_con shared_zone = shared_zone_test_context.shared_zone ok_zone = shared_zone_test_context.ok_zone ok_zone_name = shared_zone_test_context.ok_zone["name"] + shared_zone_name = shared_zone_test_context.shared_zone["name"] # record sets to setup private_update_name = generate_record_name() - private_update_fqdn = private_update_name + f".{ok_zone_name}" + private_update_fqdn = f"{private_update_name}.{ok_zone_name}" private_update = create_recordset(ok_zone, private_update_name, "A", [{"address": "1.1.1.1"}], 200) shared_update_no_group_name = generate_record_name() - shared_update_no_group_fqdn = shared_update_no_group_name + ".shared." - shared_update_no_owner_group = create_recordset(shared_zone, shared_update_no_group_name, "A", - [{"address": "1.1.1.1"}], 200) + shared_update_no_group_fqdn = f"{shared_update_no_group_name}.{shared_zone_name}" + shared_update_no_owner_group = create_recordset(shared_zone, shared_update_no_group_name, "A", [{"address": "1.1.1.1"}], 200) shared_update_group_name = generate_record_name() - shared_update_group_fqdn = shared_update_group_name + ".shared." 
- shared_update_existing_owner_group = create_recordset(shared_zone, shared_update_group_name, "A", - [{"address": "1.1.1.1"}], 200, shared_group["id"]) + shared_update_group_fqdn = f"{shared_update_group_name}.{shared_zone_name}" + shared_update_existing_owner_group = create_recordset(shared_zone, shared_update_group_name, "A", [{"address": "1.1.1.1"}], 200, shared_group["id"]) private_delete_name = generate_record_name() - private_delete_fqdn = private_delete_name + f".{ok_zone_name}" + private_delete_fqdn = f"{private_delete_name}.{ok_zone_name}" private_delete = create_recordset(ok_zone, private_delete_name, "A", [{"address": "1.1.1.1"}], 200) shared_delete_name = generate_record_name() - shared_delete_fqdn = shared_delete_name + ".shared." + shared_delete_fqdn = f"{shared_delete_name}.{shared_zone_name}" shared_delete = create_recordset(shared_zone, shared_delete_name, "A", [{"address": "1.1.1.1"}], 200) to_delete_ok = {} to_delete_shared = {} private_create_name = generate_record_name() - private_create_fqdn = private_create_name + f".{ok_zone_name}" + private_create_fqdn = f"{private_create_name}.{ok_zone_name}" shared_create_name = generate_record_name() - shared_create_fqdn = shared_create_name + ".shared." 
+ shared_create_fqdn = f"{shared_create_name}.{shared_zone_name}" batch_change_input = { "changes": [ get_change_A_AAAA_json(private_create_fqdn), @@ -3527,7 +3344,6 @@ def test_create_batch_change_validation_with_owner_group_id(shared_zone_test_con assert_that(rs_result["recordSet"]["ownerGroupId"], is_(shared_group["id"])) else: assert_that(rs_result["recordSet"]["ownerGroupId"], is_(ok_group["id"])) - finally: for tup in to_delete_ok: delete_result = ok_client.delete_recordset(tup[0], tup[1], status=202) @@ -3548,37 +3364,36 @@ def test_create_batch_change_validation_without_owner_group_id(shared_zone_test_ shared_zone = shared_zone_test_context.shared_zone ok_zone = shared_zone_test_context.ok_zone ok_zone_name = shared_zone_test_context.ok_zone["name"] + shared_zone_name = shared_zone_test_context.shared_zone["name"] # record sets to setup private_update_name = generate_record_name() - private_update_fqdn = private_update_name + f".{ok_zone_name}" + private_update_fqdn = f"{private_update_name}.{ok_zone_name}" private_update = create_recordset(ok_zone, private_update_name, "A", [{"address": "1.1.1.1"}], 200) shared_update_no_group_name = generate_record_name() - shared_update_no_group_fqdn = shared_update_no_group_name + ".shared." - shared_update_no_owner_group = create_recordset(shared_zone, shared_update_no_group_name, "A", - [{"address": "1.1.1.1"}], 200) + shared_update_no_group_fqdn = f"{shared_update_no_group_name}.{shared_zone_name}" + shared_update_no_owner_group = create_recordset(shared_zone, shared_update_no_group_name, "A", [{"address": "1.1.1.1"}], 200) shared_update_group_name = generate_record_name() - shared_update_group_fqdn = shared_update_group_name + ".shared." 
- shared_update_existing_owner_group = create_recordset(shared_zone, shared_update_group_name, "A", - [{"address": "1.1.1.1"}], 200, shared_group["id"]) + shared_update_group_fqdn = f"{shared_update_group_name}.{shared_zone_name}" + shared_update_existing_owner_group = create_recordset(shared_zone, shared_update_group_name, "A", [{"address": "1.1.1.1"}], 200, shared_group["id"]) private_delete_name = generate_record_name() private_delete_fqdn = private_delete_name + f".{ok_zone_name}" private_delete = create_recordset(ok_zone, private_delete_name, "A", [{"address": "1.1.1.1"}], 200) shared_delete_name = generate_record_name() - shared_delete_fqdn = shared_delete_name + ".shared." + shared_delete_fqdn = f"{shared_delete_name}.{shared_zone_name}" shared_delete = create_recordset(shared_zone, shared_delete_name, "A", [{"address": "1.1.1.1"}], 200) to_delete_ok = [] to_delete_shared = [] private_create_name = generate_record_name() - private_create_fqdn = private_create_name + f".{ok_zone_name}" + private_create_fqdn = f"{private_create_name}.{ok_zone_name}" shared_create_name = generate_record_name() - shared_create_fqdn = shared_create_name + ".shared." 
+ shared_create_fqdn = f"{shared_create_name}.{shared_zone_name}" batch_change_input = { "changes": [ get_change_A_AAAA_json(private_create_fqdn), @@ -3601,32 +3416,23 @@ def test_create_batch_change_validation_without_owner_group_id(shared_zone_test_ for rs in [shared_update_no_owner_group, shared_update_existing_owner_group, shared_delete]: create_rs = shared_client.create_recordset(rs, status=202) - to_delete_shared.append( - shared_client.wait_until_recordset_change_status(create_rs, "Complete")["recordSet"]["id"]) + to_delete_shared.append(shared_client.wait_until_recordset_change_status(create_rs, "Complete")["recordSet"]["id"]) response = ok_client.create_batch_change(batch_change_input, status=400) assert_successful_change_in_error_response(response[0], input_name=private_create_fqdn) - assert_failed_change_in_error_response(response[1], input_name=shared_create_fqdn, error_messages=[ - "Zone \"shared.\" is a shared zone, so owner group ID must be specified for record \"" + shared_create_name + "\"."]) + assert_failed_change_in_error_response(response[1], input_name=shared_create_fqdn, + error_messages=[f"Zone \"{shared_zone_name}\" is a shared zone, so owner group ID must be specified for record \"{shared_create_name}\"."]) assert_successful_change_in_error_response(response[2], input_name=private_update_fqdn, ttl=300) - assert_successful_change_in_error_response(response[3], change_type="DeleteRecordSet", - input_name=private_update_fqdn) + assert_successful_change_in_error_response(response[3], change_type="DeleteRecordSet", input_name=private_update_fqdn) assert_failed_change_in_error_response(response[4], input_name=shared_update_no_group_fqdn, - error_messages=[ - "Zone \"shared.\" is a shared zone, so owner group ID must be specified for record \"" + shared_update_no_group_name + "\"."], + error_messages=[f"Zone \"{shared_zone_name}\" is a shared zone, so owner group ID must be specified for record \"{shared_update_no_group_name}\"."], ttl=300) - 
assert_successful_change_in_error_response(response[5], change_type="DeleteRecordSet", - input_name=shared_update_no_group_fqdn) - assert_successful_change_in_error_response(response[6], input_name=shared_update_group_fqdn, - ttl=300) - assert_successful_change_in_error_response(response[7], change_type="DeleteRecordSet", - input_name=shared_update_group_fqdn) - assert_successful_change_in_error_response(response[8], change_type="DeleteRecordSet", - input_name=private_delete_fqdn) - assert_successful_change_in_error_response(response[9], change_type="DeleteRecordSet", - input_name=shared_delete_fqdn) - + assert_successful_change_in_error_response(response[5], change_type="DeleteRecordSet", input_name=shared_update_no_group_fqdn) + assert_successful_change_in_error_response(response[6], input_name=shared_update_group_fqdn, ttl=300) + assert_successful_change_in_error_response(response[7], change_type="DeleteRecordSet", input_name=shared_update_group_fqdn) + assert_successful_change_in_error_response(response[8], change_type="DeleteRecordSet", input_name=private_delete_fqdn) + assert_successful_change_in_error_response(response[9], change_type="DeleteRecordSet", input_name=shared_delete_fqdn) finally: for rsId in to_delete_ok: delete_result = ok_client.delete_recordset(ok_zone["id"], rsId, status=202) @@ -3645,11 +3451,11 @@ def test_create_batch_delete_recordset_for_unassociated_user_in_owner_group_succ ok_client = shared_zone_test_context.ok_vinyldns_client shared_zone = shared_zone_test_context.shared_zone shared_group = shared_zone_test_context.shared_record_group + shared_zone_name = shared_zone_test_context.shared_zone["name"] shared_delete_name = generate_record_name() - shared_delete_fqdn = shared_delete_name + ".shared." 
- shared_delete = create_recordset(shared_zone, shared_delete_name, "A", [{"address": "1.1.1.1"}], 200, - shared_group["id"]) + shared_delete_fqdn = f"{shared_delete_name}.{shared_zone_name}" + shared_delete = create_recordset(shared_zone, shared_delete_name, "A", [{"address": "1.1.1.1"}], 200, shared_group["id"]) batch_change_input = { "changes": [ get_change_A_AAAA_json(shared_delete_fqdn, change_type="DeleteRecordSet") @@ -3676,12 +3482,13 @@ def test_create_batch_delete_recordset_for_unassociated_user_not_in_owner_group_ unassociated_client = shared_zone_test_context.unassociated_client shared_zone = shared_zone_test_context.shared_zone shared_group = shared_zone_test_context.shared_record_group + shared_zone_name = shared_zone_test_context.shared_zone["name"] + shared_group_name = shared_group["name"] create_rs = None shared_delete_name = generate_record_name() - shared_delete_fqdn = shared_delete_name + ".shared." - shared_delete = create_recordset(shared_zone, shared_delete_name, "A", [{"address": "1.1.1.1"}], 200, - shared_group["id"]) + shared_delete_fqdn = f"{shared_delete_name}.{shared_zone_name}" + shared_delete = create_recordset(shared_zone, shared_delete_name, "A", [{"address": "1.1.1.1"}], 200, shared_group["id"]) batch_change_input = { "changes": [ @@ -3697,8 +3504,8 @@ def test_create_batch_delete_recordset_for_unassociated_user_not_in_owner_group_ assert_failed_change_in_error_response(response[0], input_name=shared_delete_fqdn, change_type="DeleteRecordSet", - error_messages=['User "list-group-user" is not authorized. Contact record owner group: record-ownergroup at test@test.com to make DNS changes.']) - + error_messages=[f'User "list-group-user" is not authorized. 
Contact record owner group: ' + f'{shared_group_name} at test@test.com to make DNS changes.']) finally: if create_rs: delete_rs = shared_client.delete_recordset(shared_zone["id"], create_rs["recordSet"]["id"], status=202) @@ -3713,9 +3520,10 @@ def test_create_batch_delete_recordset_for_zone_admin_not_in_owner_group_succeed ok_client = shared_zone_test_context.ok_vinyldns_client shared_zone = shared_zone_test_context.shared_zone ok_group = shared_zone_test_context.ok_group + shared_zone_name = shared_zone_test_context.shared_zone["name"] shared_delete_name = generate_record_name() - shared_delete_fqdn = shared_delete_name + ".shared." + shared_delete_fqdn = f"{shared_delete_name}.{shared_zone_name}" shared_delete = create_recordset(shared_zone, shared_delete_name, "A", [{"address": "1.1.1.1"}], 200, ok_group["id"]) batch_change_input = { @@ -3745,10 +3553,11 @@ def test_create_batch_update_record_in_shared_zone_for_unassociated_user_in_owne ok_client = shared_zone_test_context.ok_vinyldns_client shared_zone = shared_zone_test_context.shared_zone shared_record_group = shared_zone_test_context.shared_record_group + shared_zone_name = shared_zone_test_context.shared_zone["name"] create_rs = None shared_update_name = generate_record_name() - shared_update_fqdn = shared_update_name + ".shared." 
+ shared_update_fqdn = f"{shared_update_name}.{shared_zone_name}" shared_update = create_recordset(shared_zone, shared_update_name, "MX", [{"preference": 1, "exchange": "foo.bar."}], 200, shared_record_group["id"]) @@ -3773,7 +3582,6 @@ def test_create_batch_update_record_in_shared_zone_for_unassociated_user_in_owne assert_change_success(completed_batch["changes"], zone=shared_zone, index=1, record_name=shared_update_name, record_type="MX", input_name=shared_update_fqdn, record_data=None, change_type="DeleteRecordSet") - finally: if create_rs: delete_rs = shared_client.delete_recordset(shared_zone["id"], create_rs["recordSet"]["id"], status=202) @@ -3794,9 +3602,10 @@ def test_create_batch_with_global_acl_rule_applied_succeeds(shared_zone_test_con dummy_group_id = shared_zone_test_context.dummy_group["id"] dummy_group_name = shared_zone_test_context.dummy_group["name"] ip4_prefix = shared_zone_test_context.ip4_classless_prefix + shared_zone_name = shared_zone_test_context.shared_zone["name"] a_name = generate_record_name() - a_fqdn = a_name + ".shared." 
+ a_fqdn = f"{a_name}.{shared_zone_name}" a_record = create_recordset(shared_zone, a_name, "A", [{"address": "1.1.1.1"}], 200, "shared-zone-group") ptr_record = create_recordset(classless_base_zone, "44", "PTR", [{"ptrdname": "foo."}], 200, None) @@ -3836,7 +3645,6 @@ def test_create_batch_with_global_acl_rule_applied_succeeds(shared_zone_test_con record_name="44", record_type="PTR", input_name=f"{ip4_prefix}.44", record_data=None, change_type="DeleteRecordSet") - finally: if create_a_rs: retrieved = shared_client.get_recordset(shared_zone["id"], create_a_rs["recordSet"]["id"]) @@ -3868,11 +3676,12 @@ def test_create_batch_with_irrelevant_global_acl_rule_applied_fails(shared_zone_ shared_client = shared_zone_test_context.shared_zone_vinyldns_client shared_zone = shared_zone_test_context.shared_zone ip4_prefix = shared_zone_test_context.ip4_classless_prefix + shared_zone_name = shared_zone_test_context.shared_zone["name"] create_a_rs = None a_name = generate_record_name() - a_fqdn = a_name + ".shared." + a_fqdn = f"{a_name}.{shared_zone_name}" a_record = create_recordset(shared_zone, a_name, "A", [{"address": "1.1.1.1"}], 200, "shared-zone-group") batch_change_input = { @@ -3890,7 +3699,6 @@ def test_create_batch_with_irrelevant_global_acl_rule_applied_fails(shared_zone_ assert_failed_change_in_error_response(response[0], input_name=a_fqdn, record_type="A", change_type="Add", record_data=f"{ip4_prefix}.45", error_messages=['User "testuser" is not authorized. 
Contact record owner group: testSharedZoneGroup at email to make DNS changes.']) - finally: if create_a_rs: delete_a_rs = shared_client.delete_recordset(shared_zone["id"], create_a_rs["recordSet"]["id"], status=202) @@ -3904,12 +3712,13 @@ def test_create_batch_with_zone_name_requiring_manual_review(shared_zone_test_co """ rejecter = shared_zone_test_context.support_user_client client = shared_zone_test_context.ok_vinyldns_client + review_zone_name = shared_zone_test_context.requires_review_zone["name"] batch_change_input = { "changes": [ - get_change_A_AAAA_json("add-test-batch.zone.requires.review."), - get_change_A_AAAA_json("update-test-batch.zone.requires.review.", change_type="DeleteRecordSet"), - get_change_A_AAAA_json("update-test-batch.zone.requires.review."), - get_change_A_AAAA_json("delete-test-batch.zone.requires.review.", change_type="DeleteRecordSet") + get_change_A_AAAA_json(f"add-test-batch.{review_zone_name}"), + get_change_A_AAAA_json(f"update-test-batch.{review_zone_name}", change_type="DeleteRecordSet"), + get_change_A_AAAA_json(f"update-test-batch.{review_zone_name}"), + get_change_A_AAAA_json(f"delete-test-batch.{review_zone_name}", change_type="DeleteRecordSet") ], "ownerGroupId": shared_zone_test_context.ok_group["id"] } @@ -3924,7 +3733,6 @@ def test_create_batch_with_zone_name_requiring_manual_review(shared_zone_test_co for i in range(0, 3): assert_that(get_batch["changes"][i]["status"], is_("NeedsReview")) assert_that(get_batch["changes"][i]["validationErrors"][0]["errorType"], is_("RecordRequiresManualReview")) - finally: # Clean up so data doesn't change if response: @@ -3959,10 +3767,9 @@ def test_create_batch_delete_record_for_invalid_record_data_fails(shared_zone_te errors = client.create_batch_change(batch_change_input, status=400) assert_failed_change_in_error_response(errors[0], input_name=f"delete-non-existent-record.{ok_zone_name}", record_data="1.1.1.1", change_type="DeleteRecordSet", - error_messages=['Record 
f"delete-non-existent-record.{ok_zone_name}" Does Not Exist: cannot delete a record that does not exist.']) + error_messages=[f'Record "delete-non-existent-record.{ok_zone_name}" Does Not Exist: cannot delete a record that does not exist.']) assert_failed_change_in_error_response(errors[1], input_name=a_delete_fqdn, record_data="4.5.6.7", change_type="DeleteRecordSet", error_messages=["Record data 4.5.6.7 does not exist for \"" + a_delete_fqdn + "\"."]) - finally: clear_recordset_list(to_delete, client) @@ -3977,6 +3784,7 @@ def test_create_batch_delete_record_access_checks(shared_zone_test_context): dummy_client = shared_zone_test_context.dummy_vinyldns_client dummy_group_id = shared_zone_test_context.dummy_group["id"] ok_zone_name = shared_zone_test_context.ok_zone["name"] + ok_group_name = shared_zone_test_context.ok_group["name"] a_delete_acl = generate_acl_rule("Delete", groupId=dummy_group_id, recordMask=".*", recordTypes=["A"]) txt_write_acl = generate_acl_rule("Write", groupId=dummy_group_id, recordMask=".*", recordTypes=["TXT"]) @@ -4025,8 +3833,7 @@ def test_create_batch_delete_record_access_checks(shared_zone_test_context): assert_successful_change_in_error_response(response[3], input_name=txt_update_fqdn, record_type="TXT", record_data="test", change_type="DeleteRecordSet") assert_successful_change_in_error_response(response[4], input_name=txt_update_fqdn, record_type="TXT", record_data="updated text") assert_failed_change_in_error_response(response[5], input_name=txt_delete_fqdn, record_type="TXT", record_data="test", change_type="DeleteRecordSet", - error_messages=['User "dummy" is not authorized. Contact zone owner group: ok-group at test@test.com to make DNS changes.']) - + error_messages=[f'User "dummy" is not authorized. 
Contact zone owner group: {ok_group_name} at test@test.com to make DNS changes.']) finally: clear_ok_acl_rules(shared_zone_test_context) clear_recordset_list(to_delete, ok_client) @@ -4272,7 +4079,6 @@ def test_create_batch_multi_record_update_succeeds(shared_zone_test_context): elif rs_name == txt_update_record_only_name: assert_that(records, contains_exactly({"text": "again"})) assert_that(records, is_not(contains_exactly({"text": "hello"}))) - finally: clear_recordset_list(to_delete, client) @@ -4329,7 +4135,6 @@ def test_create_batch_deletes_succeeds(shared_zone_test_context): updated_rs = client.get_recordset(create_multi_rs["zone"]["id"], create_multi_rs["recordSet"]["id"], status=200)["recordSet"] assert_that(updated_rs["records"], is_([{"address": "1.1.1.1"}])) client.get_recordset(create_multi_rs_2["zone"]["id"], create_multi_rs_2["recordSet"]["id"], status=404) - finally: clear_recordset_list(to_delete, client) @@ -4382,6 +4187,5 @@ def test_create_batch_change_with_multi_record_adds_with_multi_record_support(sh assert_successful_change_in_error_response(response[7], input_name=f"multi-mx.{ok_zone_name}", record_type="MX", record_data={"preference": 1000, "exchange": "bar.foo."}) assert_failed_change_in_error_response(response[8], input_name=rs_fqdn, record_data="1.1.1.1", error_messages=["Record \"" + rs_fqdn + "\" Already Exists: cannot add an existing record; to update it, issue a DeleteRecordSet then an Add."]) - finally: clear_recordset_list(to_delete, client) diff --git a/modules/api/functional_test/live_tests/batch/get_batch_change_test.py b/modules/api/functional_test/live_tests/batch/get_batch_change_test.py index cf5937d33..fbad9be8f 100644 --- a/modules/api/functional_test/live_tests/batch/get_batch_change_test.py +++ b/modules/api/functional_test/live_tests/batch/get_batch_change_test.py @@ -1,16 +1,17 @@ -from hamcrest import * from utils import * + def test_get_batch_change_success(shared_zone_test_context): """ Test successfully getting a 
batch change """ client = shared_zone_test_context.ok_vinyldns_client + ip6_prefix = shared_zone_test_context.ip6_prefix batch_change_input = { "comments": "this is optional", "changes": [ - get_change_A_AAAA_json(generate_record_name("parent.com."), address="4.5.6.7"), - get_change_A_AAAA_json(generate_record_name("ok."), record_type="AAAA", address="fd69:27cc:fe91::60") + get_change_A_AAAA_json(generate_record_name(shared_zone_test_context.parent_zone["name"]), address="4.5.6.7"), + get_change_A_AAAA_json(generate_record_name(shared_zone_test_context.ok_zone["name"]), record_type="AAAA", address=f"{ip6_prefix}::60") ] } to_delete = [] @@ -37,7 +38,8 @@ def test_get_batch_change_success(shared_zone_test_context): try: delete_result = client.delete_recordset(result_rs[0], result_rs[1], status=202) client.wait_until_recordset_change_status(delete_result, "Complete") - except: + except Exception: + traceback.print_exc() pass @@ -47,10 +49,12 @@ def test_get_batch_change_with_record_owner_group_success(shared_zone_test_conte """ client = shared_zone_test_context.shared_zone_vinyldns_client group = shared_zone_test_context.shared_record_group + shared_zone_name = shared_zone_test_context.shared_zone["name"] + batch_change_input = { "comments": "this is optional", "changes": [ - get_change_A_AAAA_json("testing-get-batch-with-owner-group.shared.", address="1.1.1.1") + get_change_A_AAAA_json(f"testing-get-batch-with-owner-group.{shared_zone_name}", address="1.1.1.1") ], "ownerGroupId": group["id"] } @@ -66,7 +70,6 @@ def test_get_batch_change_with_record_owner_group_success(shared_zone_test_conte assert_that(result, is_(completed_batch)) assert_that(result["ownerGroupId"], is_(group["id"])) assert_that(result["ownerGroupName"], is_(group["name"])) - finally: for result_rs in to_delete: delete_result = client.delete_recordset(result_rs[0], result_rs[1], status=202) @@ -79,16 +82,17 @@ def test_get_batch_change_with_deleted_record_owner_group_success(shared_zone_te with the 
ownerGroupName attribute set to None """ client = shared_zone_test_context.shared_zone_vinyldns_client + shared_zone_name = shared_zone_test_context.shared_zone["name"] temp_group = { "name": "test-get-batch-record-owner-group2", "email": "test@test.com", "description": "for testing that a get batch change still works when record owner group is deleted", - "members": [ { "id": "sharedZoneUser"} ], - "admins": [ { "id": "sharedZoneUser"} ] + "members": [{"id": "sharedZoneUser"}], + "admins": [{"id": "sharedZoneUser"}] } rs_name = generate_record_name() - rs_fqdn = rs_name + ".shared." + rs_fqdn = f"{rs_name}.{shared_zone_name}" record_to_delete = [] try: @@ -124,7 +128,6 @@ def test_get_batch_change_with_deleted_record_owner_group_success(shared_zone_te assert_that(result, is_(completed_batch)) assert_that(result["ownerGroupId"], is_(group_to_delete["id"])) assert_that(result, is_not(has_key("ownerGroupName"))) - finally: for result_rs in record_to_delete: delete_result = client.delete_recordset(result_rs[0], result_rs[1], status=202) @@ -148,11 +151,12 @@ def test_get_batch_change_with_unauthorized_user_fails(shared_zone_test_context) """ client = shared_zone_test_context.ok_vinyldns_client dummy_client = shared_zone_test_context.dummy_vinyldns_client + ip6_prefix = shared_zone_test_context.ip6_prefix batch_change_input = { "comments": "this is optional", "changes": [ - get_change_A_AAAA_json(generate_record_name("parent.com."), address="4.5.6.7"), - get_change_A_AAAA_json(generate_record_name("ok."), record_type="AAAA", address="fd69:27cc:fe91::60") + get_change_A_AAAA_json(generate_record_name(shared_zone_test_context.parent_zone["name"]), address="4.5.6.7"), + get_change_A_AAAA_json(generate_record_name(shared_zone_test_context.ok_zone["name"]), record_type="AAAA", address=f"{ip6_prefix}::60") ] } to_delete = [] @@ -170,5 +174,6 @@ def test_get_batch_change_with_unauthorized_user_fails(shared_zone_test_context) try: delete_result = 
client.delete_recordset(result_rs[0], result_rs[1], status=202) client.wait_until_recordset_change_status(delete_result, "Complete") - except: + except Exception: + traceback.print_exc() pass diff --git a/modules/api/functional_test/live_tests/batch/list_batch_change_summaries_test.py b/modules/api/functional_test/live_tests/batch/list_batch_change_summaries_test.py index 07b603d28..6f0d1b655 100644 --- a/modules/api/functional_test/live_tests/batch/list_batch_change_summaries_test.py +++ b/modules/api/functional_test/live_tests/batch/list_batch_change_summaries_test.py @@ -105,6 +105,8 @@ def test_list_batch_change_summaries_with_deleted_record_owner_group_passes(shar for record owner group name """ client = shared_zone_test_context.shared_zone_vinyldns_client + shared_zone_name = shared_zone_test_context.shared_zone["name"] + temp_group = { "name": "test-list-summaries-deleted-owner-group", "email": "test@test.com", @@ -121,7 +123,7 @@ def test_list_batch_change_summaries_with_deleted_record_owner_group_passes(shar batch_change_input = { "comments": '', "changes": [ - get_change_A_AAAA_json("list-batch-with-deleted-owner-group.shared.", address="1.1.1.1") + get_change_A_AAAA_json(f"list-batch-with-deleted-owner-group.{shared_zone_name}", address="1.1.1.1") ], "ownerGroupId": group_to_delete["id"] } @@ -151,7 +153,6 @@ def test_list_batch_change_summaries_with_deleted_record_owner_group_passes(shar under_test = under_test[0] assert_that(under_test["ownerGroupId"], is_(group_to_delete["id"])) assert_that(under_test, is_not(has_key("ownerGroupName"))) - finally: for result_rs in record_to_delete: delete_result = client.delete_recordset(result_rs[0], result_rs[1], status=202) @@ -166,11 +167,12 @@ def test_list_batch_change_summaries_with_ignore_access_true_only_shows_requesti client = shared_zone_test_context.shared_zone_vinyldns_client ok_client = shared_zone_test_context.ok_vinyldns_client group = shared_zone_test_context.shared_record_group + shared_zone_name = 
shared_zone_test_context.shared_zone["name"] ok_batch_change_input = { "comments": '', "changes": [ - get_change_A_AAAA_json("ok-batch-with-owner-group.shared.", address="1.1.1.1") + get_change_A_AAAA_json(f"ok-batch-with-owner-group.{shared_zone_name}", address="1.1.1.1") ], "ownerGroupId": group["id"] } @@ -188,7 +190,6 @@ def test_list_batch_change_summaries_with_ignore_access_true_only_shows_requesti ok_under_test = [item for item in ok_batch_change_summaries_result if (item["id"] == ok_completed_batch["id"])] assert_that(ok_under_test, has_length(1)) - finally: for result_rs in ok_record_to_delete: delete_result = client.delete_recordset(result_rs[0], result_rs[1], status=202) diff --git a/modules/api/functional_test/live_tests/batch/reject_batch_change_test.py b/modules/api/functional_test/live_tests/batch/reject_batch_change_test.py index 8cc2f463d..79d2f92dc 100644 --- a/modules/api/functional_test/live_tests/batch/reject_batch_change_test.py +++ b/modules/api/functional_test/live_tests/batch/reject_batch_change_test.py @@ -40,7 +40,6 @@ def test_reject_batch_change_with_invalid_batch_change_id_fails(shared_zone_test """ Test rejecting a batch change with invalid batch change ID """ - client = shared_zone_test_context.ok_vinyldns_client error = client.reject_batch_change("some-id", status=404) @@ -52,7 +51,6 @@ def test_reject_batch_change_with_comments_exceeding_max_length_fails(shared_zon """ Test rejecting a batch change with comments exceeding 1024 characters fails """ - client = shared_zone_test_context.ok_vinyldns_client reject_batch_change_input = { "reviewComment": "a" * 1025 diff --git a/modules/api/functional_test/live_tests/conftest.py b/modules/api/functional_test/live_tests/conftest.py index 59c1e140f..363535450 100644 --- a/modules/api/functional_test/live_tests/conftest.py +++ b/modules/api/functional_test/live_tests/conftest.py @@ -16,7 +16,7 @@ ctx_cache: MutableMapping[str, SharedZoneTestContext] = {} @pytest.fixture(scope="session") def 
shared_zone_test_context(tmp_path_factory, worker_id):
     if worker_id == "master":
-        partition_id = "1"
+        partition_id = "2"
     else:
         partition_id = str(int(worker_id.replace("gw", "")) + 1)
diff --git a/modules/api/functional_test/live_tests/internal/color_test.py b/modules/api/functional_test/live_tests/internal/color_test.py
index 02301d8c3..9bb60427b 100644
--- a/modules/api/functional_test/live_tests/internal/color_test.py
+++ b/modules/api/functional_test/live_tests/internal/color_test.py
@@ -1,7 +1,4 @@
-import pytest
-
 from hamcrest import *
-from vinyldns_python import VinylDNSClient


 def test_color(shared_zone_test_context):
diff --git a/modules/api/functional_test/live_tests/internal/health_test.py b/modules/api/functional_test/live_tests/internal/health_test.py
index 12d42981a..14157d526 100644
--- a/modules/api/functional_test/live_tests/internal/health_test.py
+++ b/modules/api/functional_test/live_tests/internal/health_test.py
@@ -1,13 +1,6 @@
-import pytest
-
-from hamcrest import *
-from vinyldns_python import VinylDNSClient
-
-
 def test_health(shared_zone_test_context):
     """
     Tests that the health check endpoint works
     """
     client = shared_zone_test_context.ok_vinyldns_client
     client.health()
-
diff --git a/modules/api/functional_test/live_tests/internal/ping_test.py b/modules/api/functional_test/live_tests/internal/ping_test.py
index 287bd32fa..6c215080d 100644
--- a/modules/api/functional_test/live_tests/internal/ping_test.py
+++ b/modules/api/functional_test/live_tests/internal/ping_test.py
@@ -1,7 +1,4 @@
-import pytest
-
 from hamcrest import *
-from vinyldns_python import VinylDNSClient


 def test_ping(shared_zone_test_context):
diff --git a/modules/api/functional_test/live_tests/internal/status_test.py b/modules/api/functional_test/live_tests/internal/status_test.py
index 8b67b4489..c8f87acbc 100644
--- a/modules/api/functional_test/live_tests/internal/status_test.py
+++ b/modules/api/functional_test/live_tests/internal/status_test.py
@@ -1,12 +1,7 @@
 import copy

 import pytest
-import time

-from hamcrest import *
-
-from vinyldns_python import VinylDNSClient
-from vinyldns_context import VinylDNSTestContext
 from utils import *

@@ -18,7 +13,7 @@ def test_get_status_success(shared_zone_test_context):
     result = client.get_status()

     assert_that([True, False], has_item(result["processingDisabled"]))
-    assert_that(["green","blue"], has_item(result["color"]))
+    assert_that(["green", "blue"], has_item(result["color"]))
     assert_that(result["keyName"], not_none())
     assert_that(result["version"], not_none())
@@ -29,7 +24,6 @@ def test_toggle_processing(shared_zone_test_context):
     """
     Test that updating a zone when processing is disabled does not happen
     """
-
     client = shared_zone_test_context.ok_vinyldns_client
     ok_zone = copy.deepcopy(shared_zone_test_context.ok_zone)
diff --git a/modules/api/functional_test/live_tests/list_batch_summaries_test_context.py b/modules/api/functional_test/live_tests/list_batch_summaries_test_context.py
index 441525cdd..265244163 100644
--- a/modules/api/functional_test/live_tests/list_batch_summaries_test_context.py
+++ b/modules/api/functional_test/live_tests/list_batch_summaries_test_context.py
@@ -3,7 +3,7 @@ from vinyldns_python import VinylDNSClient


 class ListBatchChangeSummariesTestContext:
-    to_delete: set = None
+    to_delete: set = set()
     completed_changes: list = []
     group: object = None
     is_setup: bool = False
@@ -13,7 +13,7 @@ class ListBatchChangeSummariesTestContext:

     def setup(self, shared_zone_test_context):
         self.completed_changes = []
-        self.to_delete = None
+        self.to_delete = set()

         acl_rule = generate_acl_rule("Write", userId="list-batch-summaries-id")
         add_ok_acl_rules(shared_zone_test_context, [acl_rule])
@@ -21,7 +21,7 @@ class ListBatchChangeSummariesTestContext:
         initial_db_check = self.client.list_batch_change_summaries(status=200)
         self.group = self.client.get_group("list-summaries-group", status=200)

-        ok_zone_name = shared_zone_test_context.ok_zone
+        ok_zone_name = shared_zone_test_context.ok_zone["name"]
         batch_change_input_one = {
             "comments": "first",
             "changes": [
@@ -49,7 +49,6 @@ class ListBatchChangeSummariesTestContext:
         self.completed_changes = []

         if len(initial_db_check["batchChanges"]) == 0:
-            print("\r\n!!! CREATING NEW SUMMARIES")
             # make some batch changes
             for batch_change_input in batch_change_inputs:
                 change = self.client.create_batch_change(batch_change_input, status=202)
@@ -70,7 +69,13 @@ class ListBatchChangeSummariesTestContext:
         self.to_delete = set(record_set_list)
         self.is_setup = True

-    def tear_down(self):
+    def tear_down(self, shared_zone_test_context):
+        for result_rs in self.to_delete:
+            delete_result = shared_zone_test_context.ok_vinyldns_client.delete_recordset(result_rs[0], result_rs[1], status=(202, 404))
+            if type(delete_result) != str:
+                shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(delete_result, 'Complete')
+        self.to_delete.clear()
+
+        clear_ok_acl_rules(shared_zone_test_context)
         self.client.tear_down()

     def check_batch_change_summaries_page_accuracy(self, summaries_page, size, next_id=False, start_from=False, max_items=100, approval_status=False):
diff --git a/modules/api/functional_test/live_tests/list_groups_test_context.py b/modules/api/functional_test/live_tests/list_groups_test_context.py
index 5de19f227..ba43a452a 100644
--- a/modules/api/functional_test/live_tests/list_groups_test_context.py
+++ b/modules/api/functional_test/live_tests/list_groups_test_context.py
@@ -5,21 +5,23 @@ from vinyldns_python import VinylDNSClient
 class ListGroupsTestContext(object):
     def __init__(self, partition_id: str):
         self.partition_id = partition_id
-        self.client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, access_key="listGroupAccessKey", secret_key="listGroupSecretKey")
+        self.client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "listGroupAccessKey", "listGroupSecretKey")
         self.support_user_client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "supportUserAccessKey", "supportUserSecretKey")
+        self.group_prefix = f"test-list-my-groups{partition_id}"

     def build(self):
         try:
-            for runner in range(0, 50):
+            for index in range(0, 50):
                 new_group = {
-                    "name": "test-list-my-groups-{0:0>3}{0}".format(runner, self.partition_id),
+                    "name": "{0}-{1:0>3}".format(self.group_prefix, index),
                     "email": "test@test.com",
                     "members": [{"id": "list-group-user"}],
                     "admins": [{"id": "list-group-user"}]
                 }
                 self.client.create_group(new_group, status=200)
-        except:
+        except Exception:
             self.tear_down()
+            traceback.print_exc()
             raise

     def tear_down(self):
diff --git a/modules/api/functional_test/live_tests/list_recordsets_test_context.py b/modules/api/functional_test/live_tests/list_recordsets_test_context.py
index 466694356..da41fcaeb 100644
--- a/modules/api/functional_test/live_tests/list_recordsets_test_context.py
+++ b/modules/api/functional_test/live_tests/list_recordsets_test_context.py
@@ -9,6 +9,7 @@ class ListRecordSetsTestContext(object):
         self.zone = None
         self.all_records = []
         self.group = None
+
         get_zone = self.client.get_zone_by_name(f"list-records{partition_id}.", status=(200, 404))
         if get_zone and "zone" in get_zone:
             self.zone = get_zone["zone"]
@@ -17,7 +18,7 @@ class ListRecordSetsTestContext(object):
         if my_groups and "groups" in my_groups and len(my_groups["groups"]) > 0:
             self.group = my_groups["groups"][0]

-    def build(self):
+    def setup(self):
         partition_id = self.partition_id
         group = {
             "name": f"list-records-group{partition_id}",
diff --git a/modules/api/functional_test/live_tests/list_zones_test_context.py b/modules/api/functional_test/live_tests/list_zones_test_context.py
index 854769c17..541301fb0 100644
--- a/modules/api/functional_test/live_tests/list_zones_test_context.py
+++ b/modules/api/functional_test/live_tests/list_zones_test_context.py
@@ -6,6 +6,12 @@ class ListZonesTestContext(object):
     def __init__(self, partition_id):
         self.partition_id = partition_id
         self.client = 
VinylDNSClient(VinylDNSTestContext.vinyldns_url, "listZonesAccessKey", "listZonesSecretKey") + self.search_zone1 = None + self.search_zone2 = None + self.search_zone3 = None + self.non_search_zone1 = None + self.non_search_zone2 = None + self.list_zones_group = None def build(self): partition_id = self.partition_id @@ -16,56 +22,62 @@ class ListZonesTestContext(object): "members": [{"id": "list-zones-user"}], "admins": [{"id": "list-zones-user"}] } - list_zones_group = self.client.create_group(group, status=200) + self.list_zones_group = self.client.create_group(group, status=200) + search_zone_1_change = self.client.create_zone( { "name": f"list-zones-test-searched-1{partition_id}.", "email": "test@test.com", "shared": False, - "adminGroupId": list_zones_group["id"], + "adminGroupId": self.list_zones_group["id"], "isTest": True, "backendId": "func-test-backend" }, status=202) + self.search_zone1 = search_zone_1_change["zone"] search_zone_2_change = self.client.create_zone( { "name": f"list-zones-test-searched-2{partition_id}.", "email": "test@test.com", "shared": False, - "adminGroupId": list_zones_group["id"], + "adminGroupId": self.list_zones_group["id"], "isTest": True, "backendId": "func-test-backend" }, status=202) + self.search_zone2 = search_zone_2_change["zone"] search_zone_3_change = self.client.create_zone( { "name": f"list-zones-test-searched-3{partition_id}.", "email": "test@test.com", "shared": False, - "adminGroupId": list_zones_group["id"], + "adminGroupId": self.list_zones_group["id"], "isTest": True, "backendId": "func-test-backend" }, status=202) + self.search_zone3 = search_zone_3_change["zone"] non_search_zone_1_change = self.client.create_zone( { "name": f"list-zones-test-unfiltered-1{partition_id}.", "email": "test@test.com", "shared": False, - "adminGroupId": list_zones_group["id"], + "adminGroupId": self.list_zones_group["id"], "isTest": True, "backendId": "func-test-backend" }, status=202) + self.non_search_zone1 = 
non_search_zone_1_change["zone"]

         non_search_zone_2_change = self.client.create_zone(
             {
                 "name": f"list-zones-test-unfiltered-2{partition_id}.",
                 "email": "test@test.com",
                 "shared": False,
-                "adminGroupId": list_zones_group["id"],
+                "adminGroupId": self.list_zones_group["id"],
                 "isTest": True,
                 "backendId": "func-test-backend"
             }, status=202)
+        self.non_search_zone2 = non_search_zone_2_change["zone"]

         zone_changes = [search_zone_1_change, search_zone_2_change, search_zone_3_change, non_search_zone_1_change, non_search_zone_2_change]
         for change in zone_changes:
diff --git a/modules/api/functional_test/live_tests/membership/create_group_test.py b/modules/api/functional_test/live_tests/membership/create_group_test.py
index 660f5c773..c1157d0c4 100644
--- a/modules/api/functional_test/live_tests/membership/create_group_test.py
+++ b/modules/api/functional_test/live_tests/membership/create_group_test.py
@@ -1,5 +1,3 @@
-import json
-
 from hamcrest import *


@@ -30,7 +28,6 @@ def test_create_group_success(shared_zone_test_context):
     assert_that(result["members"][0]["id"], is_("ok"))
     assert_that(result["admins"], has_length(1))
     assert_that(result["admins"][0]["id"], is_("ok"))
-
     finally:
         if result:
             client.delete_group(result["id"], status=(200, 404))
@@ -63,7 +60,6 @@ def test_creator_is_an_admin(shared_zone_test_context):
     assert_that(result["members"][0]["id"], is_("ok"))
     assert_that(result["admins"], has_length(1))
     assert_that(result["admins"][0]["id"], is_("ok"))
-
     finally:
         if result:
             client.delete_group(result["id"], status=(200, 404))
@@ -186,7 +182,6 @@ def test_create_group_duplicate(shared_zone_test_context):
         result = client.create_group(new_group, status=200)
         client.create_group(new_group, status=409)
-
     finally:
         if result:
             client.delete_group(result["id"], status=(200, 404))
diff --git a/modules/api/functional_test/live_tests/membership/delete_group_test.py b/modules/api/functional_test/live_tests/membership/delete_group_test.py
index a09f836aa..0fe12e4a2 100644
--- 
a/modules/api/functional_test/live_tests/membership/delete_group_test.py +++ b/modules/api/functional_test/live_tests/membership/delete_group_test.py @@ -1,9 +1,5 @@ -import pytest -import uuid -import json - from hamcrest import * -from vinyldns_python import VinylDNSClient + from vinyldns_context import VinylDNSTestContext @@ -11,7 +7,6 @@ def test_delete_group_success(shared_zone_test_context): """ Tests that we can delete a group that has been created """ - client = shared_zone_test_context.ok_vinyldns_client saved_group = None try: @@ -19,8 +14,8 @@ def test_delete_group_success(shared_zone_test_context): "name": "test-delete-group-success", "email": "test@test.com", "description": "this is a description", - "members": [ { "id": "ok"} ], - "admins": [ { "id": "ok"} ] + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } saved_group = client.create_group(new_group, status=200) result = client.delete_group(saved_group["id"], status=200) @@ -42,7 +37,6 @@ def test_delete_group_that_is_already_deleted(shared_zone_test_context): """ Tests that deleting a group that is already deleted """ - client = shared_zone_test_context.ok_vinyldns_client saved_group = None @@ -51,14 +45,13 @@ def test_delete_group_that_is_already_deleted(shared_zone_test_context): "name": "test-delete-group-already", "email": "test@test.com", "description": "this is a description", - "members": [ { "id": "ok"} ], - "admins": [ { "id": "ok"} ] + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } saved_group = client.create_group(new_group, status=200) client.delete_group(saved_group["id"], status=200) client.delete_group(saved_group["id"], status=404) - finally: if saved_group: client.delete_group(saved_group["id"], status=(200, 404)) @@ -73,21 +66,21 @@ def test_delete_admin_group(shared_zone_test_context): result_zone = None try: - #Create group + # Create group new_group = { "name": "test-delete-group-already", "email": "test@test.com", "description": "this is a description", - 
"members": [ { "id": "ok"} ], - "admins": [ { "id": "ok"} ] + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } result_group = client.create_group(new_group, status=200) print(result_group) - #Create zone with that group ID as admin + # Create zone with that group ID as admin zone = { - "name": "one-time.", + "name": f"one-time{shared_zone_test_context.partition_id}.", "email": "test@test.com", "adminGroupId": result_group["id"], "connection": { @@ -110,11 +103,11 @@ def test_delete_admin_group(shared_zone_test_context): client.delete_group(result_group["id"], status=400) - #Delete zone + # Delete zone client.delete_zone(result_zone["id"], status=202) client.wait_until_zone_deleted(result_zone["id"]) - #Should now be able to delete group + # Should now be able to delete group client.delete_group(result_group["id"], status=200) finally: if result_zone: @@ -122,6 +115,7 @@ def test_delete_admin_group(shared_zone_test_context): if result_group: client.delete_group(result_group["id"], status=(200, 404)) + def test_delete_group_not_authorized(shared_zone_test_context): """ Tests that only the admins can delete a zone @@ -133,8 +127,8 @@ def test_delete_group_not_authorized(shared_zone_test_context): "name": "test-delete-group-not-authorized", "email": "test@test.com", "description": "this is a description", - "members": [ { "id": "ok"} ], - "admins": [ { "id": "ok"} ] + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } saved_group = ok_client.create_group(new_group, status=200) not_admin_client.delete_group(saved_group["id"], status=403) diff --git a/modules/api/functional_test/live_tests/membership/get_group_changes_test.py b/modules/api/functional_test/live_tests/membership/get_group_changes_test.py index 268371d39..ec6496563 100644 --- a/modules/api/functional_test/live_tests/membership/get_group_changes_test.py +++ b/modules/api/functional_test/live_tests/membership/get_group_changes_test.py @@ -15,7 +15,6 @@ def 
test_list_group_activity_start_from_success(group_activity_context, shared_z """ Test that we can list the changes starting from a given timestamp """ - import json client = shared_zone_test_context.ok_vinyldns_client created_group = group_activity_context["created_group"] @@ -53,7 +52,6 @@ def test_list_group_activity_start_from_fake_time(group_activity_context, shared """ Test that we can start from a fake time stamp """ - client = shared_zone_test_context.ok_vinyldns_client created_group = group_activity_context["created_group"] updated_groups = group_activity_context["updated_groups"] @@ -76,7 +74,6 @@ def test_list_group_activity_max_item_success(group_activity_context, shared_zon """ Test that we can set the max_items returned """ - client = shared_zone_test_context.ok_vinyldns_client created_group = group_activity_context["created_group"] updated_groups = group_activity_context["updated_groups"] @@ -98,7 +95,6 @@ def test_list_group_activity_max_item_zero(group_activity_context, shared_zone_t """ Test that max_item set to zero fails """ - client = shared_zone_test_context.ok_vinyldns_client created_group = group_activity_context["created_group"] client.get_group_changes(created_group["id"], max_items=0, status=400) @@ -108,7 +104,6 @@ def test_list_group_activity_max_item_over_1000(group_activity_context, shared_z """ Test that when max_item is over 1000 fails """ - client = shared_zone_test_context.ok_vinyldns_client created_group = group_activity_context["created_group"] client.get_group_changes(created_group["id"], max_items=1001, status=400) @@ -118,7 +113,6 @@ def test_get_group_changes_paging(group_activity_context, shared_zone_test_conte """ Test that we can page through multiple pages of group changes """ - client = shared_zone_test_context.ok_vinyldns_client created_group = group_activity_context["created_group"] updated_groups = group_activity_context["updated_groups"] @@ -159,7 +153,6 @@ def 
test_get_group_changes_unauthed(shared_zone_test_context): """ Tests that we cant get group changes without access """ - client = shared_zone_test_context.ok_vinyldns_client dummy_client = shared_zone_test_context.dummy_vinyldns_client saved_group = None diff --git a/modules/api/functional_test/live_tests/membership/get_group_test.py b/modules/api/functional_test/live_tests/membership/get_group_test.py index 994d63609..4a50d7201 100644 --- a/modules/api/functional_test/live_tests/membership/get_group_test.py +++ b/modules/api/functional_test/live_tests/membership/get_group_test.py @@ -1,15 +1,10 @@ -import pytest -import json - from hamcrest import * -from vinyldns_python import VinylDNSClient def test_get_group_success(shared_zone_test_context): """ Tests that we can get a group that has been created """ - client = shared_zone_test_context.ok_vinyldns_client saved_group = None try: @@ -17,8 +12,8 @@ def test_get_group_success(shared_zone_test_context): "name": "test-get-group-success", "email": "test@test.com", "description": "this is a description", - "members": [ { "id": "ok"} ], - "admins": [ { "id": "ok"} ] + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } saved_group = client.create_group(new_group, status=200) @@ -47,7 +42,6 @@ def test_get_deleted_group(shared_zone_test_context): """ Tests getting a group that was already deleted """ - client = shared_zone_test_context.ok_vinyldns_client saved_group = None @@ -56,8 +50,8 @@ def test_get_deleted_group(shared_zone_test_context): "name": "test-get-deleted-group", "email": "test@test.com", "description": "this is a description", - "members": [ { "id": "ok"} ], - "admins": [ { "id": "ok"} ] + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } saved_group = client.create_group(new_group, status=200) @@ -72,7 +66,6 @@ def test_get_group_unauthed(shared_zone_test_context): """ Tests that we cant get a group were not in """ - client = shared_zone_test_context.ok_vinyldns_client dummy_client = 
shared_zone_test_context.dummy_vinyldns_client saved_group = None @@ -81,8 +74,8 @@ def test_get_group_unauthed(shared_zone_test_context): "name": "test-get-group-unauthed", "email": "test@test.com", "description": "this is a description", - "members": [ { "id": "ok"} ], - "admins": [ { "id": "ok"} ] + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } saved_group = client.create_group(new_group, status=200) diff --git a/modules/api/functional_test/live_tests/membership/list_group_admins_test.py b/modules/api/functional_test/live_tests/membership/list_group_admins_test.py index 6aedd9e29..8742b1e7e 100644 --- a/modules/api/functional_test/live_tests/membership/list_group_admins_test.py +++ b/modules/api/functional_test/live_tests/membership/list_group_admins_test.py @@ -1,25 +1,18 @@ - -import pytest -import json - from hamcrest import * -from vinyldns_python import VinylDNSClient - def test_list_group_admins_success(shared_zone_test_context): """ Test that we can list all the admins of a given group """ - client = shared_zone_test_context.ok_vinyldns_client saved_group = None try: new_group = { "name": "test-list-group-admins-success", "email": "test@test.com", - "members": [ { "id": "ok"} ], - "admins": [ { "id": "ok"}, { "id": "dummy"} ] + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}, {"id": "dummy"}] } saved_group = client.create_group(new_group, status=200) @@ -51,7 +44,6 @@ def test_list_group_admins_group_not_found(shared_zone_test_context): """ Test that listing the admins of a non-existent group fails """ - client = shared_zone_test_context.ok_vinyldns_client client.list_group_admins("doesntexist", status=404) @@ -60,7 +52,6 @@ def test_list_group_admins_unauthed(shared_zone_test_context): """ Tests that we cant list admins without access """ - client = shared_zone_test_context.ok_vinyldns_client dummy_client = shared_zone_test_context.dummy_vinyldns_client saved_group = None @@ -68,8 +59,8 @@ def 
test_list_group_admins_unauthed(shared_zone_test_context): new_group = { "name": "test-list-group-admins-unauthed", "email": "test@test.com", - "members": [ { "id": "ok"} ], - "admins": [ { "id": "ok"} ] + "members": [{"id": "ok"}], + "admins": [{"id": "ok"}] } saved_group = client.create_group(new_group, status=200) diff --git a/modules/api/functional_test/live_tests/membership/list_group_members_test.py b/modules/api/functional_test/live_tests/membership/list_group_members_test.py index 462315271..8a768c636 100644 --- a/modules/api/functional_test/live_tests/membership/list_group_members_test.py +++ b/modules/api/functional_test/live_tests/membership/list_group_members_test.py @@ -5,7 +5,6 @@ def test_list_group_members_success(shared_zone_test_context): """ Test that we can list all the members of a group """ - client = shared_zone_test_context.ok_vinyldns_client saved_group = None @@ -49,7 +48,6 @@ def test_list_group_members_success(shared_zone_test_context): assert_that(ok["email"], is_("test@test.com")) assert_that(ok["created"], is_not(none())) assert_that(ok["lockStatus"], is_("Unlocked")) - finally: if saved_group: client.delete_group(saved_group["id"], status=(200, 404)) @@ -59,7 +57,6 @@ def test_list_group_members_not_found(shared_zone_test_context): """ Tests that we can not list the members of a non-existent group """ - client = shared_zone_test_context.ok_vinyldns_client client.list_members_group("not_found", status=404) @@ -69,7 +66,6 @@ def test_list_group_members_start_from(shared_zone_test_context): """ Test that we can list the members starting from a given user """ - client = shared_zone_test_context.ok_vinyldns_client saved_group = None try: @@ -124,7 +120,6 @@ def test_list_group_members_start_from_non_user(shared_zone_test_context): """ Test that we can list the members starting from a non existent username """ - client = shared_zone_test_context.ok_vinyldns_client saved_group = None try: @@ -179,7 +174,6 @@ def 
test_list_group_members_max_item(shared_zone_test_context): """ Test that we can chose the number of items to list """ - client = shared_zone_test_context.ok_vinyldns_client saved_group = None try: @@ -233,7 +227,6 @@ def test_list_group_members_max_item_default(shared_zone_test_context): """ Test that the default for max_item is 100 items """ - client = shared_zone_test_context.ok_vinyldns_client saved_group = None try: @@ -286,13 +279,12 @@ def test_list_group_members_max_item_zero(shared_zone_test_context): """ Test that the call fails when max_item is 0 """ - client = shared_zone_test_context.ok_vinyldns_client saved_group = None try: members = [] for runner in range(0, 200): - members.append({"id": "dummy{0:0>3}".format(runner)}) + members.append({"id": "dummy{0:0>3}".format(runner)}) new_group = { "name": "test-list-group-members-max-items-zero", @@ -321,7 +313,6 @@ def test_list_group_members_max_item_over_1000(shared_zone_test_context): """ Test that the call fails when max_item is over 1000 """ - client = shared_zone_test_context.ok_vinyldns_client saved_group = None try: @@ -356,7 +347,6 @@ def test_list_group_members_next_id_correct(shared_zone_test_context): """ Test that the correct next_id is returned """ - client = shared_zone_test_context.ok_vinyldns_client saved_group = None try: @@ -409,7 +399,6 @@ def test_list_group_members_next_id_exhausted(shared_zone_test_context): """ Test that the next_id is null when the list is exhausted """ - client = shared_zone_test_context.ok_vinyldns_client saved_group = None try: @@ -461,7 +450,6 @@ def test_list_group_members_next_id_exhausted_two_pages(shared_zone_test_context """ Test that the next_id is null when the list is exhausted over 2 pages """ - client = shared_zone_test_context.ok_vinyldns_client saved_group = None try: @@ -537,7 +525,6 @@ def test_list_group_members_unauthed(shared_zone_test_context): """ Tests that we cant list members without access """ - client = 
shared_zone_test_context.ok_vinyldns_client dummy_client = shared_zone_test_context.dummy_vinyldns_client saved_group = None diff --git a/modules/api/functional_test/live_tests/membership/list_my_groups_test.py b/modules/api/functional_test/live_tests/membership/list_my_groups_test.py index 219455d92..cdefd85d2 100644 --- a/modules/api/functional_test/live_tests/membership/list_my_groups_test.py +++ b/modules/api/functional_test/live_tests/membership/list_my_groups_test.py @@ -1,10 +1,10 @@ import pytest -from hamcrest import * + from utils import * @pytest.fixture(scope="module") -def list_my_groups_context(request, shared_zone_test_context): +def list_my_groups_context(shared_zone_test_context): return shared_zone_test_context.list_groups_context @@ -12,7 +12,6 @@ def test_list_my_groups_no_parameters(list_my_groups_context): """ Test that we can get all the groups where a user is a member """ - results = list_my_groups_context.client.list_my_groups(status=200) assert_that(results, has_length(3)) # 3 fields @@ -26,7 +25,7 @@ def test_list_my_groups_no_parameters(list_my_groups_context): results["groups"] = sorted(results["groups"], key=lambda x: x["name"]) for i in range(0, 50): - assert_that(results["groups"][i]["name"], is_("test-list-my-groups-{0:0>3}".format(i))) + assert_that(results["groups"][i]["name"], is_("{0}-{1:0>3}".format(list_my_groups_context.group_prefix, i))) def test_get_my_groups_using_old_account_auth(list_my_groups_context): @@ -94,12 +93,12 @@ def test_list_my_groups_filter_matches(list_my_groups_context): """ Tests that only matched groups are returned """ - results = list_my_groups_context.client.list_my_groups(group_name_filter="test-list-my-groups-01", status=200) + results = list_my_groups_context.client.list_my_groups(group_name_filter=f"{list_my_groups_context.group_prefix}-01", status=200) assert_that(results, has_length(4)) # 4 fields assert_that(results["groups"], has_length(10)) - assert_that(results["groupNameFilter"], 
is_("test-list-my-groups-01")) + assert_that(results["groupNameFilter"], is_(f"{list_my_groups_context.group_prefix}-01")) assert_that(results, is_not(has_key("startFrom"))) assert_that(results, is_not(has_key("nextId"))) assert_that(results["maxItems"], is_(100)) @@ -107,23 +106,22 @@ def test_list_my_groups_filter_matches(list_my_groups_context): results["groups"] = sorted(results["groups"], key=lambda x: x["name"]) for i in range(0, 10): - assert_that(results["groups"][i]["name"], is_("test-list-my-groups-{0:0>3}".format(i + 10))) + assert_that(results["groups"][i]["name"], is_("{0}-{1:0>3}".format(list_my_groups_context.group_prefix, i + 10))) def test_list_my_groups_no_deleted(list_my_groups_context): """ Tests that no deleted groups are returned """ - results = list_my_groups_context.client.list_my_groups(max_items=100, status=200) + client = list_my_groups_context.client + results = client.list_my_groups(max_items=100, status=200) assert_that(results, has_key("groups")) for g in results["groups"]: assert_that(g["status"], is_not("Deleted")) while "nextId" in results: - results = client.list_my_groups(max_items=20, group_name_filter="test-list-my-groups-", - start_from=results["nextId"], status=200) - + results = client.list_my_groups(max_items=20, group_name_filter=f"{list_my_groups_context.group_prefix}-", start_from=results["nextId"], status=200) assert_that(results, has_key("groups")) for g in results["groups"]: assert_that(g["status"], is_not("Deleted")) @@ -133,7 +131,6 @@ def test_list_my_groups_with_ignore_access_true(list_my_groups_context): """ Test that we can get all the groups whether a user is a member or not """ - results = list_my_groups_context.client.list_my_groups(ignore_access=True, status=200) assert_that(len(results["groups"]), greater_than(50)) @@ -144,14 +141,13 @@ def test_list_my_groups_with_ignore_access_true(list_my_groups_context): my_results["groups"] = sorted(my_results["groups"], key=lambda x: x["name"]) for i in range(0, 50): 
- assert_that(my_results["groups"][i]["name"], is_("test-list-my-groups-{0:0>3}".format(i))) + assert_that(my_results["groups"][i]["name"], is_("{0}-{1:0>3}".format(list_my_groups_context.group_prefix, i))) def test_list_my_groups_as_support_user(list_my_groups_context): """ Test that we can get all the groups as a support user, even without ignore_access """ - results = list_my_groups_context.support_user_client.list_my_groups(status=200) assert_that(len(results["groups"]), greater_than(50)) @@ -163,7 +159,6 @@ def test_list_my_groups_as_support_user_with_ignore_access_true(list_my_groups_c """ Test that we can get all the groups as a support user """ - results = list_my_groups_context.support_user_client.list_my_groups(ignore_access=True, status=200) assert_that(len(results["groups"]), greater_than(50)) diff --git a/modules/api/functional_test/live_tests/membership/update_group_test.py b/modules/api/functional_test/live_tests/membership/update_group_test.py index 795044565..0e5044d8e 100644 --- a/modules/api/functional_test/live_tests/membership/update_group_test.py +++ b/modules/api/functional_test/live_tests/membership/update_group_test.py @@ -7,7 +7,6 @@ def test_update_group_success(shared_zone_test_context): """ Tests that we can update a group that has been created """ - client = shared_zone_test_context.ok_vinyldns_client saved_group = None @@ -193,7 +192,6 @@ def test_update_group_adds_admins_as_members(shared_zone_test_context): """ Tests that when we add an admin to a group the admin is also a member """ - client = shared_zone_test_context.ok_vinyldns_client saved_group = None @@ -241,7 +239,6 @@ def test_update_group_conflict(shared_zone_test_context): """ Tests that we can not update a groups name to a name already in use """ - client = shared_zone_test_context.ok_vinyldns_client result = None conflict_group = None @@ -287,7 +284,6 @@ def test_update_group_not_found(shared_zone_test_context): """ Tests that we can not update a group that has not been 
created """ - client = shared_zone_test_context.ok_vinyldns_client update_group = { @@ -305,7 +301,6 @@ def test_update_group_deleted(shared_zone_test_context): """ Tests that we can not update a group that has been deleted """ - client = shared_zone_test_context.ok_vinyldns_client saved_group = None @@ -338,7 +333,6 @@ def test_add_member_via_update_group_success(shared_zone_test_context): """ Tests that we can add a member to a group via update successfully """ - client = shared_zone_test_context.ok_vinyldns_client saved_group = None try: @@ -372,7 +366,6 @@ def test_add_member_to_group_twice_via_update_group(shared_zone_test_context): """ Tests that we can add a member to a group twice successfully via update group """ - client = shared_zone_test_context.ok_vinyldns_client saved_group = None try: @@ -407,7 +400,6 @@ def test_add_not_found_member_to_group_via_update_group(shared_zone_test_context """ Tests that we can not add a non-existent member to a group via update group """ - client = shared_zone_test_context.ok_vinyldns_client saved_group = None @@ -440,7 +432,6 @@ def test_remove_member_via_update_group_success(shared_zone_test_context): """ Tests that we can remove a member via update group successfully """ - client = shared_zone_test_context.ok_vinyldns_client saved_group = None diff --git a/modules/api/functional_test/live_tests/production_verify_test.py b/modules/api/functional_test/live_tests/production_verify_test.py index 0b0df124b..c75e650fc 100644 --- a/modules/api/functional_test/live_tests/production_verify_test.py +++ b/modules/api/functional_test/live_tests/production_verify_test.py @@ -56,5 +56,6 @@ def test_verify_production(shared_zone_test_context): try: delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) client.wait_until_recordset_deleted(delete_result["zoneId"], delete_result["id"]) - except: + except Exception: + traceback.print_exc() pass diff --git 
a/modules/api/functional_test/live_tests/recordsets/create_recordset_test.py b/modules/api/functional_test/live_tests/recordsets/create_recordset_test.py index 9be902265..c9a13d67e 100644 --- a/modules/api/functional_test/live_tests/recordsets/create_recordset_test.py +++ b/modules/api/functional_test/live_tests/recordsets/create_recordset_test.py @@ -25,7 +25,6 @@ def test_create_recordset_with_dns_verify(shared_zone_test_context): } ] } - print("\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n") result = client.create_recordset(new_rs, status=202) print(str(result)) @@ -36,17 +35,14 @@ def test_create_recordset_with_dns_verify(shared_zone_test_context): result_rs = result["recordSet"] result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] - print("\r\n\r\n!!!recordset is active! Verifying...") verify_recordset(result_rs, new_rs) - print("\r\n\r\n!!!recordset verified...") records = [x["address"] for x in result_rs["records"]] assert_that(records, has_length(2)) assert_that("10.1.1.1", is_in(records)) assert_that("10.2.2.2", is_in(records)) - print("\r\n\r\n!!!verifying recordset in dns backend") answers = dns_resolve(shared_zone_test_context.ok_zone, result_rs["name"], result_rs["type"]) rdata_strings = rdata(answers) @@ -58,7 +54,8 @@ def test_create_recordset_with_dns_verify(shared_zone_test_context): try: delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) client.wait_until_recordset_change_status(delete_result, "Complete") - except: + except Exception: + traceback.print_exc() pass @@ -71,7 +68,7 @@ def test_create_naptr_origin_record(shared_zone_test_context): try: new_rs = { "zoneId": shared_zone_test_context.ok_zone["id"], - "name": "ok.", + "name": shared_zone_test_context.ok_zone["name"], "type": "NAPTR", "ttl": 100, "records": [ @@ -95,10 +92,9 @@ def test_create_naptr_origin_record(shared_zone_test_context): result_rs = result["recordSet"] result_rs = 
client.wait_until_recordset_change_status(result, "Complete")["recordSet"] verify_recordset(result_rs, new_rs) - finally: if result_rs: - delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) def test_create_naptr_non_origin_record(shared_zone_test_context): @@ -134,10 +130,9 @@ def test_create_naptr_non_origin_record(shared_zone_test_context): result_rs = result["recordSet"] result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] verify_recordset(result_rs, new_rs) - finally: if result_rs: - delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) + client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) def test_create_srv_recordset_with_service_and_protocol(shared_zone_test_context): @@ -161,7 +156,6 @@ def test_create_srv_recordset_with_service_and_protocol(shared_zone_test_context } ] } - print("\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n") result = client.create_recordset(new_rs, status=202) print(str(result)) @@ -172,11 +166,8 @@ def test_create_srv_recordset_with_service_and_protocol(shared_zone_test_context result_rs = result["recordSet"] result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] - print("\r\n\r\n!!!recordset is active! 
Verifying...") verify_recordset(result_rs, new_rs) - print("\r\n\r\n!!!recordset verified...") - finally: if result_rs: delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) @@ -201,7 +192,6 @@ def test_create_aaaa_recordset_with_shorthand_record(shared_zone_test_context): } ] } - print("\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n") result = client.create_recordset(new_rs, status=202) print(str(result)) @@ -212,11 +202,8 @@ def test_create_aaaa_recordset_with_shorthand_record(shared_zone_test_context): result_rs = result["recordSet"] result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] - print("\r\n\r\n!!!recordset is active! Verifying...") verify_recordset(result_rs, new_rs) - print("\r\n\r\n!!!recordset verified...") - finally: if result_rs: delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) @@ -241,7 +228,6 @@ def test_create_aaaa_recordset_with_normal_record(shared_zone_test_context): } ] } - print("\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n") result = client.create_recordset(new_rs, status=202) print(str(result)) @@ -252,11 +238,8 @@ def test_create_aaaa_recordset_with_normal_record(shared_zone_test_context): result_rs = result["recordSet"] result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] - print("\r\n\r\n!!!recordset is active! 
Verifying...") verify_recordset(result_rs, new_rs) - print("\r\n\r\n!!!recordset verified...") - finally: if result_rs: delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) @@ -356,7 +339,6 @@ def test_create_recordset_conflict_with_trailing_dot_insensitive_name(shared_zon result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] first_rs["name"] = rs_name client.create_recordset(first_rs, status=409) - finally: if result_rs: result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] @@ -386,7 +368,6 @@ def test_create_recordset_conflict_with_dns(shared_zone_test_context): dns_add(shared_zone_test_context.ok_zone, "backend-conflict", 200, "A", "1.2.3.4") result = client.create_recordset(new_rs, status=202) client.wait_until_recordset_change_status(result, "Failed") - finally: dns_delete(shared_zone_test_context.ok_zone, "backend-conflict", "A") @@ -410,7 +391,6 @@ def test_create_recordset_conflict_with_dns_different_type(shared_zone_test_cont } ] } - print("\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n") result = client.create_recordset(new_rs, status=202) print(str(result)) @@ -421,21 +401,17 @@ def test_create_recordset_conflict_with_dns_different_type(shared_zone_test_cont result_rs = result["recordSet"] result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] - print("\r\n\r\n!!!recordset is active! 
Verifying...") verify_recordset(result_rs, new_rs) - print("\r\n\r\n!!!recordset verified...") text = [x["text"] for x in result_rs["records"]] assert_that(text, has_length(1)) assert_that("should succeed", is_in(text)) - print("\r\n\r\n!!!verifying recordset in dns backend") answers = dns_resolve(shared_zone_test_context.ok_zone, result_rs["name"], result_rs["type"]) rdata_strings = rdata(answers) assert_that(rdata_strings, has_length(1)) assert_that('"should succeed"', is_in(rdata_strings)) - finally: if result_rs: delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) @@ -561,7 +537,6 @@ def test_create_dotted_a_record_apex_succeeds(shared_zone_test_context): """ Test that creating an apex A record set containing dots succeeds. """ - client = shared_zone_test_context.ok_vinyldns_client zone_id = shared_zone_test_context.parent_zone["id"] zone_name = shared_zone_test_context.parent_zone["name"] @@ -578,7 +553,6 @@ def test_create_dotted_a_record_apex_succeeds(shared_zone_test_context): apex_a_response = client.create_recordset(apex_a_record, status=202) apex_a_rs = client.wait_until_recordset_change_status(apex_a_response, "Complete")["recordSet"] assert_that(apex_a_rs["name"], is_(apex_a_record["name"] + ".")) - finally: if apex_a_rs: delete_result = client.delete_recordset(apex_a_rs["zoneId"], apex_a_rs["id"], status=202) @@ -590,7 +564,6 @@ def test_create_dotted_a_record_apex_with_trailing_dot_succeeds(shared_zone_test """ Test that creating an apex A record set containing dots succeeds (with trailing dot) """ - client = shared_zone_test_context.ok_vinyldns_client zone_id = shared_zone_test_context.parent_zone["id"] zone_name = shared_zone_test_context.parent_zone["name"] @@ -607,7 +580,6 @@ def test_create_dotted_a_record_apex_with_trailing_dot_succeeds(shared_zone_test apex_a_response = client.create_recordset(apex_a_record, status=202) apex_a_rs = client.wait_until_recordset_change_status(apex_a_response, 
"Complete")["recordSet"] assert_that(apex_a_rs["name"], is_(apex_a_record["name"])) - finally: if apex_a_rs: delete_result = client.delete_recordset(apex_a_rs["zoneId"], apex_a_rs["id"], status=202) @@ -619,9 +591,9 @@ def test_create_dotted_cname_record_fails(shared_zone_test_context): Test that creating a CNAME record set with dotted host record name returns an error. """ client = shared_zone_test_context.ok_vinyldns_client - + zone = shared_zone_test_context.parent_zone apex_cname_rs = { - "zoneId": shared_zone_test_context.parent_zone["id"], + "zoneId": zone["id"], "name": "dot.ted", "type": "CNAME", "ttl": 500, @@ -629,8 +601,7 @@ def test_create_dotted_cname_record_fails(shared_zone_test_context): } error = client.create_recordset(apex_cname_rs, status=422) - assert_that(error, is_( - "Record with name dot.ted and type CNAME is a dotted host which is not allowed in zone parent.com.")) + assert_that(error, is_(f'Record with name dot.ted and type CNAME is a dotted host which is not allowed in zone {zone["name"]}')) def test_create_cname_with_multiple_records(shared_zone_test_context): @@ -704,9 +675,9 @@ def test_create_cname_with_existing_record_with_name_fails(shared_zone_test_cont Test that creating a CNAME fails if a record with the same name exists """ client = shared_zone_test_context.ok_vinyldns_client - + zone = shared_zone_test_context.system_test_zone a_rs = { - "zoneId": shared_zone_test_context.system_test_zone["id"], + "zoneId": zone["id"], "name": "duplicate-test-name", "type": "A", "ttl": 500, @@ -718,7 +689,7 @@ def test_create_cname_with_existing_record_with_name_fails(shared_zone_test_cont } cname_rs = { - "zoneId": shared_zone_test_context.system_test_zone["id"], + "zoneId": zone["id"], "name": "duplicate-test-name", "type": "CNAME", "ttl": 500, @@ -735,9 +706,7 @@ def test_create_cname_with_existing_record_with_name_fails(shared_zone_test_cont a_record = client.wait_until_recordset_change_status(a_create, "Complete")["recordSet"] error = 
client.create_recordset(cname_rs, status=409) - assert_that(error, is_( - "RecordSet with name duplicate-test-name already exists in zone system-test., CNAME record cannot use duplicate name")) - + assert_that(error, is_(f'RecordSet with name duplicate-test-name already exists in zone {zone["name"]}, CNAME record cannot use duplicate name')) finally: if a_record: delete_result = client.delete_recordset(a_record["zoneId"], a_record["id"], status=202) @@ -749,9 +718,9 @@ def test_create_record_with_existing_cname_fails(shared_zone_test_context): Test that creating a record fails if a cname with the same name exists """ client = shared_zone_test_context.ok_vinyldns_client - + zone = shared_zone_test_context.system_test_zone cname_rs = { - "zoneId": shared_zone_test_context.system_test_zone["id"], + "zoneId": zone["id"], "name": "duplicate-test-name", "type": "CNAME", "ttl": 500, @@ -763,7 +732,7 @@ def test_create_record_with_existing_cname_fails(shared_zone_test_context): } a_rs = { - "zoneId": shared_zone_test_context.system_test_zone["id"], + "zoneId": zone["id"], "name": "duplicate-test-name", "type": "A", "ttl": 500, @@ -780,9 +749,7 @@ def test_create_record_with_existing_cname_fails(shared_zone_test_context): cname_record = client.wait_until_recordset_change_status(cname_create, "Complete")["recordSet"] error = client.create_recordset(a_rs, status=409) - assert_that(error, - is_("RecordSet with name duplicate-test-name and type CNAME already exists in zone system-test.")) - + assert_that(error, is_(f'RecordSet with name duplicate-test-name and type CNAME already exists in zone {zone["name"]}')) finally: if cname_record: delete_result = client.delete_recordset(cname_record["zoneId"], cname_record["id"], status=202) @@ -1215,7 +1182,6 @@ def test_create_ipv4_ptr_recordset_with_verify(shared_zone_test_context): } ] } - print("\r\nCreating recordset in zone ip4_reverse_zone\r\n") result = client.create_recordset(new_rs, status=202) print(str(result)) @@ -1226,15 
+1192,11 @@ def test_create_ipv4_ptr_recordset_with_verify(shared_zone_test_context): result_rs = result["recordSet"] result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] - print("\r\n\r\n!!!recordset is active! Verifying...") verify_recordset(result_rs, new_rs) - print("\r\n\r\n!!!recordset verified...") records = result_rs["records"] - assert_that(records[0]["ptrdname"], is_("ftp.vinyldns.")) - print("\r\n\r\n!!!verifying recordset in dns backend") # verify that the record exists in the backend dns server answers = dns_resolve(shared_zone_test_context.ip4_reverse_zone, result_rs["name"], result_rs["type"]) rdata_strings = rdata(answers) @@ -1316,15 +1278,12 @@ def test_create_ipv6_ptr_recordset(shared_zone_test_context): result_rs = result["recordSet"] result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] - print("\r\n\r\n!!!recordset is active! Verifying...") verify_recordset(result_rs, new_rs) - print("\r\n\r\n!!!recordset verified...") records = result_rs["records"] assert_that(records[0]["ptrdname"], is_("ftp.vinyldns.")) - print("\r\n\r\n!!!verifying recordset in dns backend") answers = dns_resolve(shared_zone_test_context.ip6_reverse_zone, result_rs["name"], result_rs["type"]) rdata_strings = rdata(answers) assert_that(answers, has_length(1)) @@ -1359,6 +1318,7 @@ def test_create_address_recordset_in_ipv6_reverse_zone_fails(shared_zone_test_co Test creating a new A record set in an existing IPv6 reverse lookup zone fails """ client = shared_zone_test_context.ok_vinyldns_client + ip6_prefix = shared_zone_test_context.ip6_prefix new_rs = { "zoneId": shared_zone_test_context.ip6_reverse_zone["id"], "name": "test_create_address_recordset_in_ipv6_reverse_zone_fails", @@ -1366,10 +1326,10 @@ def test_create_address_recordset_in_ipv6_reverse_zone_fails(shared_zone_test_co "ttl": 100, "records": [ { - "address": "fd69:27cc:fe91::60" + "address": f"{ip6_prefix}::60" }, { - "address": 
"fd69:27cc:fe91:1:2:3:4:61" + "address": f"{ip6_prefix}:1:2:3:4:61" } ] } @@ -1405,7 +1365,7 @@ def test_at_create_recordset(shared_zone_test_context): result_rs = None try: new_rs = { - "zoneId":ok_zone_id, + "zoneId": ok_zone_id, "name": "@", "type": "TXT", "ttl": 100, @@ -1415,7 +1375,6 @@ def test_at_create_recordset(shared_zone_test_context): } ] } - print("\r\nCreating recordset in zone 'ok'\r\n") result = client.create_recordset(new_rs, status=202) print(str(result)) @@ -1427,19 +1386,15 @@ def test_at_create_recordset(shared_zone_test_context): result_rs = result["recordSet"] result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] - print("\r\n\r\n!!!recordset is active! Verifying...") expected_rs = new_rs expected_rs["name"] = ok_zone_name verify_recordset(result_rs, expected_rs) - print("\r\n\r\n!!!recordset verified...") - records = result_rs["records"] assert_that(records, has_length(1)) assert_that(records[0]["text"], is_("someText")) - print("\r\n\r\n!!!verifying recordset in dns backend") # verify that the record exists in the backend dns server answers = dns_resolve(shared_zone_test_context.ok_zone, ok_zone_name, result_rs["type"]) @@ -1461,7 +1416,7 @@ def test_create_record_with_escape_characters_in_record_data_succeeds(shared_zon result_rs = None try: new_rs = { - "zoneId":ok_zone_id, + "zoneId": ok_zone_id, "name": "testing", "type": "TXT", "ttl": 100, @@ -1471,7 +1426,6 @@ def test_create_record_with_escape_characters_in_record_data_succeeds(shared_zon } ] } - print("\r\nCreating recordset in zone 'ok'\r\n") result = client.create_recordset(new_rs, status=202) print(str(result)) @@ -1483,19 +1437,15 @@ def test_create_record_with_escape_characters_in_record_data_succeeds(shared_zon result_rs = result["recordSet"] result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] - print("\r\n\r\n!!!recordset is active! 
Verifying...") expected_rs = new_rs expected_rs["name"] = "testing" verify_recordset(result_rs, expected_rs) - print("\r\n\r\n!!!recordset verified...") - records = result_rs["records"] assert_that(records, has_length(1)) assert_that(records[0]["text"], is_('escaped\\char\"act\"ers')) - print("\r\n\r\n!!!verifying recordset in dns backend") # verify that the record exists in the backend dns server answers = dns_resolve(shared_zone_test_context.ok_zone, "testing", result_rs["type"]) @@ -1555,7 +1505,8 @@ def test_create_record_with_existing_wildcard_succeeds(shared_zone_test_context) if "id" in test_rs: delete_result = client.delete_recordset(test_rs["zoneId"], test_rs["id"], status=202) client.wait_until_recordset_change_status(delete_result, "Complete") - except: + except Exception: + traceback.print_exc() pass @@ -1586,7 +1537,8 @@ def test_create_record_with_existing_cname_wildcard_succeed(shared_zone_test_con try: delete_result = client.delete_recordset(test_rs["zoneId"], test_rs["id"], status=202) client.wait_until_recordset_change_status(delete_result, "Complete") - except: + except Exception: + traceback.print_exc() pass @@ -1608,7 +1560,8 @@ def test_create_long_txt_record_succeeds(shared_zone_test_context): try: delete_result = client.delete_recordset(rs["zoneId"], rs["id"], status=202) client.wait_until_recordset_change_status(delete_result, "Complete") - except: + except Exception: + traceback.print_exc() pass @@ -1633,7 +1586,6 @@ def test_txt_dotted_host_create_succeeds(shared_zone_test_context): try: rs_create = client.create_recordset(new_rs, status=202) rs_result = client.wait_until_recordset_change_status(rs_create, "Complete")["recordSet"] - finally: if rs_result: delete_result = client.delete_recordset(rs_result["zoneId"], rs_result["id"], status=202) @@ -1662,7 +1614,6 @@ def test_ns_create_for_admin_group_succeeds(shared_zone_test_context): } result = client.create_recordset(new_rs, status=202) result_rs = 
client.wait_until_recordset_change_status(result, "Complete")["recordSet"] - finally: if result_rs: client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=(202, 404)) @@ -1734,7 +1685,6 @@ def test_create_ipv4_ptr_recordset_with_verify_in_classless(shared_zone_test_con } ] } - print("\r\nCreating recordset in zone " + str(reverse4_zone) + "\r\n") result = client.create_recordset(new_rs, status=202) print(str(result)) @@ -1745,15 +1695,12 @@ def test_create_ipv4_ptr_recordset_with_verify_in_classless(shared_zone_test_con result_rs = result["recordSet"] result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] - print("\r\n\r\n!!!recordset is active! Verifying...") verify_recordset(result_rs, new_rs) - print("\r\n\r\n!!!recordset verified...") records = result_rs["records"] assert_that(records[0]["ptrdname"], is_("ftp.vinyldns.")) - print("\r\n\r\n!!!verifying recordset in dns backend") # verify that the record exists in the backend dns server answers = dns_resolve(reverse4_zone, result_rs["name"], result_rs["type"]) rdata_strings = rdata(answers) @@ -1786,14 +1733,13 @@ def test_create_ipv4_ptr_recordset_in_classless_outside_cidr(shared_zone_test_co } error = client.create_recordset(new_rs, status=422) - assert_that(error, is_("RecordSet 190 does not specify a valid IP address in zone 192/30.2.0.192.in-addr.arpa.")) + assert_that(error, is_(f'RecordSet 190 does not specify a valid IP address in zone {reverse4_zone["name"]}')) def test_create_high_value_domain_fails(shared_zone_test_context): """ Test that creating a record configured as a High Value Domain fails """ - client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ok_zone new_rs = { @@ -1809,15 +1755,13 @@ def test_create_high_value_domain_fails(shared_zone_test_context): } error = client.create_recordset(new_rs, status=422) - assert_that(error, is_( - 'Record name "high-value-domain.ok." 
is configured as a High Value Domain, so it cannot be modified.')) + assert_that(error, is_(f'Record name "high-value-domain.{zone["name"]}" is configured as a High Value Domain, so it cannot be modified.')) def test_create_high_value_domain_fails_case_insensitive(shared_zone_test_context): """ Test that the High Value Domain validation works regardless of case """ - client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ok_zone new_rs = { @@ -1833,15 +1777,13 @@ def test_create_high_value_domain_fails_case_insensitive(shared_zone_test_contex } error = client.create_recordset(new_rs, status=422) - assert_that(error, is_( - 'Record name "hIgH-vAlUe-dOmAiN.ok." is configured as a High Value Domain, so it cannot be modified.')) + assert_that(error, is_(f'Record name "hIgH-vAlUe-dOmAiN.{zone["name"]}" is configured as a High Value Domain, so it cannot be modified.')) def test_create_high_value_domain_fails_for_ip4_ptr(shared_zone_test_context): """ Test that creating a record configured as a High Value Domain fails for ip4 ptr record """ - client = shared_zone_test_context.ok_vinyldns_client ptr = { "zoneId": shared_zone_test_context.classless_base_zone["id"], @@ -1856,15 +1798,13 @@ def test_create_high_value_domain_fails_for_ip4_ptr(shared_zone_test_context): } error_ptr = client.create_recordset(ptr, status=422) - assert_that(error_ptr, - is_('Record name "192.0.2.252" is configured as a High Value Domain, so it cannot be modified.')) + assert_that(error_ptr, is_(f'Record name "{shared_zone_test_context.ip4_classless_prefix}.252" is configured as a High Value Domain, so it cannot be modified.')) def test_create_high_value_domain_fails_for_ip6_ptr(shared_zone_test_context): """ Test that creating a record configured as a High Value Domain fails for ip6 ptr record """ - client = shared_zone_test_context.ok_vinyldns_client ptr = { "zoneId": shared_zone_test_context.ip6_reverse_zone["id"], @@ -1879,26 +1819,13 @@ def 
test_create_high_value_domain_fails_for_ip6_ptr(shared_zone_test_context): } error_ptr = client.create_recordset(ptr, status=422) - assert_that(error_ptr, is_( - 'Record name "fd69:27cc:fe91:0000:0000:0000:0000:ffff" is configured as a High Value Domain, so it cannot be modified.')) - - -def test_no_add_access_non_test_zone(shared_zone_test_context): - """ - Test that a test user cannot create a record in a non-test zone (even if admin) - """ - - client = shared_zone_test_context.shared_zone_vinyldns_client - zone = shared_zone_test_context.non_test_shared_zone - record = create_recordset(zone, "non-test-zone-A", "A", [{"address": "1.2.3.4"}]) - client.create_recordset(record, status=403) + assert_that(error_ptr, is_(f'Record name "{shared_zone_test_context.ip6_prefix}:0000:0000:0000:0000:ffff" is configured as a High Value Domain, so it cannot be modified.')) def test_create_with_owner_group_in_private_zone_by_admin_passes(shared_zone_test_context): """ Test that creating a record with an owner group in a non shared zone by a zone admin passes """ - client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ok_zone group = shared_zone_test_context.shared_record_group @@ -1910,7 +1837,6 @@ def test_create_with_owner_group_in_private_zone_by_admin_passes(shared_zone_tes create_response = client.create_recordset(record_json, status=202) create_rs = client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] assert_that(create_rs["ownerGroupId"], is_(group["id"])) - finally: if create_rs: delete_result = client.delete_recordset(zone["id"], create_rs["id"], status=202) @@ -1921,7 +1847,6 @@ def test_create_with_owner_group_in_shared_zone_by_admin_passes(shared_zone_test """ Test that creating a record with an owner group in a shared zone by a zone admin passes """ - client = shared_zone_test_context.shared_zone_vinyldns_client zone = shared_zone_test_context.shared_zone group = 
shared_zone_test_context.shared_record_group @@ -1933,7 +1858,6 @@ def test_create_with_owner_group_in_shared_zone_by_admin_passes(shared_zone_test create_response = client.create_recordset(record_json, status=202) create_rs = client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] assert_that(create_rs["ownerGroupId"], is_(group["id"])) - finally: if create_rs: delete_result = client.delete_recordset(zone["id"], create_rs["id"], status=202) @@ -1945,7 +1869,6 @@ def test_create_with_owner_group_in_private_zone_by_acl_passes(shared_zone_test_ """ Test that creating a record with an owner group in a non shared zone by a user with acl access passes """ - client = shared_zone_test_context.dummy_vinyldns_client acl_rule = generate_acl_rule("Write", userId="dummy") zone = shared_zone_test_context.ok_zone @@ -1960,12 +1883,10 @@ def test_create_with_owner_group_in_private_zone_by_acl_passes(shared_zone_test_ create_response = client.create_recordset(record_json, status=202) create_rs = client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] assert_that(create_rs["ownerGroupId"], is_(group["id"])) - finally: clear_ok_acl_rules(shared_zone_test_context) if create_rs: - delete_result = shared_zone_test_context.ok_vinyldns_client.delete_recordset(zone["id"], create_rs["id"], - status=202) + delete_result = shared_zone_test_context.ok_vinyldns_client.delete_recordset(zone["id"], create_rs["id"], status=202) shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(delete_result, "Complete") @@ -1974,7 +1895,6 @@ def test_create_with_owner_group_in_shared_zone_by_acl_passes(shared_zone_test_c """ Test that creating a record with an owner group in a shared zone by a user with acl access passes """ - client = shared_zone_test_context.dummy_vinyldns_client acl_rule = generate_acl_rule("Write", userId="dummy") zone = shared_zone_test_context.shared_zone @@ -1989,15 +1909,11 @@ def 
test_create_with_owner_group_in_shared_zone_by_acl_passes(shared_zone_test_c create_response = client.create_recordset(record_json, status=202) create_rs = client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] assert_that(create_rs["ownerGroupId"], is_(group["id"])) - finally: clear_shared_zone_acl_rules(shared_zone_test_context) if create_rs: - delete_result = shared_zone_test_context.shared_zone_vinyldns_client.delete_recordset(zone["id"], - create_rs["id"], - status=202) - shared_zone_test_context.shared_zone_vinyldns_client.wait_until_recordset_change_status(delete_result, - "Complete") + delete_result = shared_zone_test_context.shared_zone_vinyldns_client.delete_recordset(zone["id"], create_rs["id"], status=202) + shared_zone_test_context.shared_zone_vinyldns_client.wait_until_recordset_change_status(delete_result, "Complete") def test_create_in_shared_zone_without_owner_group_id_succeeds(shared_zone_test_context): @@ -2015,7 +1931,6 @@ def test_create_in_shared_zone_without_owner_group_id_succeeds(shared_zone_test_ create_response = dummy_client.create_recordset(record_json, status=202) create_rs = shared_client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] assert_that(create_rs, is_not(has_key("ownerGroupId"))) - finally: if create_rs: delete_result = dummy_client.delete_recordset(create_rs["zoneId"], create_rs["id"], status=202) @@ -2026,7 +1941,6 @@ def test_create_in_shared_zone_by_unassociated_user_succeeds_if_record_type_is_a """ Test that creating a record in a shared zone by a user with no write permissions succeeds if the record type is approved """ - client = shared_zone_test_context.dummy_vinyldns_client zone = shared_zone_test_context.shared_zone group = shared_zone_test_context.dummy_group @@ -2035,12 +1949,10 @@ def test_create_in_shared_zone_by_unassociated_user_succeeds_if_record_type_is_a record_json["ownerGroupId"] = group["id"] create_rs = None - try: create_response = 
client.create_recordset(record_json, status=202) create_rs = client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] assert_that(create_rs["ownerGroupId"], is_(group["id"])) - finally: if create_rs: delete_result = client.delete_recordset(zone["id"], create_rs["id"], status=202) @@ -2051,23 +1963,20 @@ def test_create_in_shared_zone_by_unassociated_user_fails_if_record_type_is_not_ """ Test that creating a record in a shared zone by a user with no write permissions fails if the record type is not approved """ - client = shared_zone_test_context.dummy_vinyldns_client zone = shared_zone_test_context.shared_zone group = shared_zone_test_context.dummy_group - record_json = create_recordset(zone, "test_shared_not_approved_record_type", "MX", - [{"preference": 3, "exchange": "mx"}]) + record_json = create_recordset(zone, "test_shared_not_approved_record_type", "MX", [{"preference": 3, "exchange": "mx"}]) record_json["ownerGroupId"] = group["id"] error = client.create_recordset(record_json, status=403) - assert_that(error, is_("User dummy does not have access to create test-shared-not-approved-record-type.shared.")) + assert_that(error, is_(f'User dummy does not have access to create test-shared-not-approved-record-type.{zone["name"]}')) def test_create_with_not_found_owner_group_fails(shared_zone_test_context): """ Test that creating a record with a owner group that doesn't exist fails """ - client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ok_zone @@ -2081,7 +1990,6 @@ def test_create_with_owner_group_when_not_member_fails(shared_zone_test_context) """ Test that creating a record with a owner group that the user is not in fails """ - client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ok_zone group = shared_zone_test_context.dummy_group @@ -2089,7 +1997,7 @@ def test_create_with_owner_group_when_not_member_fails(shared_zone_test_context) record_json = create_recordset(zone, 
"test_shared_not_group_member", "A", [{"address": "1.1.1.1"}]) record_json["ownerGroupId"] = group["id"] error = client.create_recordset(record_json, status=422) - assert_that(error, is_("User not in record owner group with id \"" + group["id"] + "\"")) + assert_that(error, is_(f"User not in record owner group with id \"{group['id']}\"")) @pytest.mark.serial @@ -2097,13 +2005,11 @@ def test_create_ds_success(shared_zone_test_context): """ Test that creating a valid DS record succeeds """ - client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ds_zone record_data = [ {"keytag": 60485, "algorithm": 5, "digesttype": 1, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"}, - {"keytag": 60485, "algorithm": 5, "digesttype": 2, - "digest": "D4B7D520E7BB5F0F67674A0CCEB1E3E0614B93C4F9E99B8383F6A1E4469DA50A"} + {"keytag": 60485, "algorithm": 5, "digesttype": 2, "digest": "D4B7D520E7BB5F0F67674A0CCEB1E3E0614B93C4F9E99B8383F6A1E4469DA50A"} ] record_json = create_recordset(zone, "dskey", "DS", record_data, ttl=3600) result_rs = None @@ -2131,7 +2037,6 @@ def test_create_ds_non_hex_digest(shared_zone_test_context): """ Test that creating a DS record fails with a bad digest """ - client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ds_zone record_data = [{"keytag": 60485, "algorithm": 5, "digesttype": 1, "digest": "2BB183AF5F22588179A53G"}] @@ -2144,11 +2049,9 @@ def test_create_ds_unknown_algorithm(shared_zone_test_context): """ Test that creating a DS record fails with an unknown algorithm """ - client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ds_zone - record_data = [ - {"keytag": 60485, "algorithm": 0, "digesttype": 1, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"}] + record_data = [{"keytag": 60485, "algorithm": 0, "digesttype": 1, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"}] record_json = create_recordset(zone, "dskey", "DS", record_data) errors = 
client.create_recordset(record_json, status=400)["errors"] assert_that(errors, contains_inanyorder("Algorithm 0 is not a supported DNSSEC algorithm")) @@ -2158,11 +2061,9 @@ def test_create_ds_unknown_digest_type(shared_zone_test_context): """ Test that creating a DS record fails with an unknown digest type """ - client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ds_zone - record_data = [ - {"keytag": 60485, "algorithm": 5, "digesttype": 0, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"}] + record_data = [{"keytag": 60485, "algorithm": 5, "digesttype": 0, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"}] record_json = create_recordset(zone, "dskey", "DS", record_data) errors = client.create_recordset(record_json, status=400)["errors"] assert_that(errors, contains_inanyorder("Digest Type 0 is not a supported DS record digest type")) @@ -2172,11 +2073,9 @@ def test_create_ds_bad_ttl_fails(shared_zone_test_context): """ Test that creating a DS record with unmatching TTL fails """ - client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ds_zone - record_data = [ - {"keytag": 60485, "algorithm": 5, "digesttype": 1, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"}] + record_data = [{"keytag": 60485, "algorithm": 5, "digesttype": 1, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"}] record_json = create_recordset(zone, "dskey", "DS", record_data, ttl=100) error = client.create_recordset(record_json, status=422) assert_that(error, is_("DS record [dskey] must have TTL matching its linked NS (3600)")) @@ -2186,42 +2085,33 @@ def test_create_ds_no_ns_fails(shared_zone_test_context): """ Test that creating a DS record when there is no child NS in the zone fails """ - client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ds_zone - record_data = [ - {"keytag": 60485, "algorithm": 5, "digesttype": 1, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"}] + 
record_data = [{"keytag": 60485, "algorithm": 5, "digesttype": 1, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"}] record_json = create_recordset(zone, "no-ns-exists", "DS", record_data, ttl=3600) error = client.create_recordset(record_json, status=422) - assert_that(error, - is_( - "DS record [no-ns-exists] is invalid because there is no NS record with that name in the zone [example.com.]")) + assert_that(error, is_(f'DS record [no-ns-exists] is invalid because there is no NS record with that name in the zone [{zone["name"]}]')) def test_create_apex_ds_fails(shared_zone_test_context): """ Test that creating a DS record fails at apex """ - client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ds_zone - record_data = [ - {"keytag": 60485, "algorithm": 5, "digesttype": 1, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"}] + record_data = [{"keytag": 60485, "algorithm": 5, "digesttype": 1, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"}] record_json = create_recordset(zone, "@", "DS", record_data, ttl=100) error = client.create_recordset(record_json, status=422) - assert_that(error, is_("Record with name [example.com.] 
is an DS record at apex and cannot be added")) + assert_that(error, is_(f'Record with name [{zone["name"]}] is an DS record at apex and cannot be added')) def test_create_dotted_ds_fails(shared_zone_test_context): """ Test that creating a DS record fails if dotted """ - client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ds_zone - record_data = [ - {"keytag": 60485, "algorithm": 5, "digesttype": 1, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"}] + record_data = [{"keytag": 60485, "algorithm": 5, "digesttype": 1, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"}] record_json = create_recordset(zone, "dotted.ds", "DS", record_data, ttl=100) error = client.create_recordset(record_json, status=422) - assert_that(error, is_( - "Record with name dotted.ds and type DS is a dotted host which is not allowed in zone example.com.")) + assert_that(error, is_(f'Record with name dotted.ds and type DS is a dotted host which is not allowed in zone {zone["name"]}')) diff --git a/modules/api/functional_test/live_tests/recordsets/delete_recordset_test.py b/modules/api/functional_test/live_tests/recordsets/delete_recordset_test.py index 2121c905f..f0e6db94e 100644 --- a/modules/api/functional_test/live_tests/recordsets/delete_recordset_test.py +++ b/modules/api/functional_test/live_tests/recordsets/delete_recordset_test.py @@ -1,11 +1,7 @@ import pytest -import sys -from utils import * -from hamcrest import * -from vinyldns_python import VinylDNSClient -from test_data import TestData -import time +from live_tests.test_data import TestData +from utils import * @pytest.mark.parametrize("record_name,test_rs", TestData.FORWARD_RECORDS) @@ -117,7 +113,6 @@ def test_delete_recordset_with_verify(shared_zone_test_context): } ] } - print("\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n") result = client.create_recordset(new_rs, status=202) print(str(result)) @@ -128,17 +123,14 @@ def 
test_delete_recordset_with_verify(shared_zone_test_context): result_rs = result["recordSet"] result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] - print("\r\n\r\n!!!recordset is active! Verifying...") verify_recordset(result_rs, new_rs) - print("\r\n\r\n!!!recordset verified...") records = [x["address"] for x in result_rs["records"]] assert_that(records, has_length(2)) assert_that("10.1.1.1", is_in(records)) assert_that("10.2.2.2", is_in(records)) - print("\r\n\r\n!!!verifying recordset in dns backend") # verify that the record exists in the backend dns server answers = dns_resolve(shared_zone_test_context.ok_zone, result_rs["name"], result_rs["type"]) rdata_strings = rdata(answers) @@ -166,7 +158,6 @@ def test_user_can_delete_record_in_owned_zone(shared_zone_test_context): """ Test user can delete a record that in a zone that it is owns """ - client = shared_zone_test_context.ok_vinyldns_client rs = None try: @@ -200,7 +191,6 @@ def test_user_cannot_delete_record_in_unowned_zone(shared_zone_test_context): """ Test user cannot delete a record that in an unowned zone """ - client = shared_zone_test_context.dummy_vinyldns_client unauthorized_client = shared_zone_test_context.ok_vinyldns_client rs = None @@ -275,7 +265,7 @@ def test_delete_ipv4_ptr_recordset_does_not_exist_fails(shared_zone_test_context """ Test deleting a nonexistant IPv4 PTR recordset returns not found """ - client =shared_zone_test_context.ok_vinyldns_client + client = shared_zone_test_context.ok_vinyldns_client client.delete_recordset(shared_zone_test_context.ip4_reverse_zone["id"], "4444", status=404) @@ -310,7 +300,6 @@ def test_delete_ipv6_ptr_recordset(shared_zone_test_context): client.wait_until_recordset_change_status(delete_result, "Complete") - def test_delete_ipv6_ptr_recordset_does_not_exist_fails(shared_zone_test_context): """ Test deleting a nonexistant IPv6 PTR recordset returns not found @@ -342,7 +331,6 @@ def 
test_at_delete_recordset(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client ok_zone = shared_zone_test_context.ok_zone - result_rs = None new_rs = { "zoneId": ok_zone["id"], "name": "@", @@ -354,7 +342,6 @@ def test_at_delete_recordset(shared_zone_test_context): } ] } - print("\r\nCreating recordset in zone " + str(ok_zone) + "\r\n") result = client.create_recordset(new_rs, status=202) print(json.dumps(result, indent=3)) @@ -366,19 +353,15 @@ def test_at_delete_recordset(shared_zone_test_context): result_rs = result["recordSet"] result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] - print("\r\n\r\n!!!recordset is active! Verifying...") expected_rs = new_rs expected_rs["name"] = ok_zone["name"] verify_recordset(result_rs, expected_rs) - print("\r\n\r\n!!!recordset verified...") - records = result_rs["records"] assert_that(records, has_length(1)) assert_that(records[0]["text"], is_("someText")) - print("\r\n\r\n!!!deleting recordset in dns backend") delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) client.wait_until_recordset_change_status(delete_result, "Complete") @@ -392,7 +375,6 @@ def test_delete_recordset_with_different_dns_data(shared_zone_test_context): """ Test deleting a recordset with out-of-sync rdata in dns (ex. 
if the record was modified manually) """ - client = shared_zone_test_context.ok_vinyldns_client ok_zone = shared_zone_test_context.ok_zone result_rs = None @@ -436,14 +418,14 @@ def test_delete_recordset_with_different_dns_data(shared_zone_test_context): delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) client.wait_until_recordset_change_status(delete_result, "Complete") result_rs = None - finally: if result_rs: try: delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=(202, 404)) if delete_result: client.wait_until_recordset_change_status(delete_result, "Complete") - except: + except Exception: + traceback.print_exc() pass @@ -460,13 +442,13 @@ def test_user_can_delete_record_via_user_acl_rule(shared_zone_test_context): result_rs = seed_text_recordset(client, "test_user_can_delete_record_via_user_acl_rule", ok_zone) - #Dummy user cannot delete record in zone + # Dummy user cannot delete record in zone shared_zone_test_context.dummy_vinyldns_client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=403, retries=3) - #add rule + # add rule add_ok_acl_rules(shared_zone_test_context, [acl_rule]) - #Dummy user can delete record + # Dummy user can delete record shared_zone_test_context.dummy_vinyldns_client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_deleted(result_rs["zoneId"], result_rs["id"]) result_rs = None @@ -503,9 +485,8 @@ def test_user_cannot_delete_record_with_write_txt_read_all(shared_zone_test_cont created_rs = dummy_client.wait_until_recordset_change_status(rs_change, "Complete")["recordSet"] verify_recordset(created_rs, new_rs) - #dummy cannot delete the RS + # dummy cannot delete the RS dummy_client.delete_recordset(ok_zone["id"], created_rs["id"], status=403) - finally: clear_ok_acl_rules(shared_zone_test_context) if created_rs: @@ -526,13 +507,13 @@ def 
test_user_can_delete_record_via_group_acl_rule(shared_zone_test_context): result_rs = seed_text_recordset(client, "test_user_can_delete_record_via_group_acl_rule", ok_zone) - #Dummy user cannot delete record in zone + # Dummy user cannot delete record in zone shared_zone_test_context.dummy_vinyldns_client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=403) - #add rule + # add rule add_ok_acl_rules(shared_zone_test_context, [acl_rule]) - #Dummy user can delete record + # Dummy user can delete record shared_zone_test_context.dummy_vinyldns_client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_deleted(result_rs["zoneId"], result_rs["id"]) result_rs = None @@ -550,7 +531,6 @@ def test_ns_delete_for_admin_group_passes(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.parent_zone ns_rs = None - try: new_rs = { "zoneId": zone["id"], @@ -570,7 +550,6 @@ def test_ns_delete_for_admin_group_passes(shared_zone_test_context): client.wait_until_recordset_change_status(delete_result, "Complete") ns_rs = None - finally: if ns_rs: client.delete_recordset(ns_rs["zoneId"], ns_rs["id"], status=(202, 404)) @@ -595,7 +574,6 @@ def test_delete_dotted_a_record_apex_succeeds(shared_zone_test_context): """ Test that deleting an apex A record set containing dots succeeds. 
""" - client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.parent_zone @@ -609,8 +587,7 @@ def test_delete_dotted_a_record_apex_succeeds(shared_zone_test_context): try: apex_a_response = client.create_recordset(apex_a_record, status=202) apex_a_rs = client.wait_until_recordset_change_status(apex_a_response, "Complete")["recordSet"] - assert_that(apex_a_rs["name"],is_(apex_a_record["name"] + ".")) - + assert_that(apex_a_rs["name"], is_(apex_a_record["name"] + ".")) finally: delete_result = client.delete_recordset(apex_a_rs["zoneId"], apex_a_rs["id"], status=202) client.wait_until_recordset_change_status(delete_result, "Complete") @@ -620,14 +597,13 @@ def test_delete_high_value_domain_fails(shared_zone_test_context): """ Test that deleting a high value domain fails """ - client = shared_zone_test_context.ok_vinyldns_client - zone_system = shared_zone_test_context.system_test_zone - list_results_page_system = client.list_recordsets_by_zone(zone_system["id"], status=200)["recordSets"] + zone = shared_zone_test_context.system_test_zone + list_results_page_system = client.list_recordsets_by_zone(zone["id"], status=200)["recordSets"] record_system = [item for item in list_results_page_system if item["name"] == "high-value-domain"][0] errors_system = client.delete_recordset(record_system["zoneId"], record_system["id"], status=422) - assert_that(errors_system, is_('Record name "high-value-domain.system-test." 
is configured as a High Value Domain, so it cannot be modified.')) + assert_that(errors_system, is_(f'Record name "high-value-domain.{zone["name"]}" is configured as a High Value Domain, so it cannot be modified.')) def test_delete_high_value_domain_fails_ip4_ptr(shared_zone_test_context): @@ -640,36 +616,22 @@ def test_delete_high_value_domain_fails_ip4_ptr(shared_zone_test_context): record_ip4 = [item for item in list_results_page_ip4 if item["name"] == "253"][0] errors_ip4 = client.delete_recordset(record_ip4["zoneId"], record_ip4["id"], status=422) - assert_that(errors_ip4, is_('Record name "192.0.2.253" is configured as a High Value Domain, so it cannot be modified.')) + assert_that(errors_ip4, is_(f'Record name "{shared_zone_test_context.ip4_classless_prefix}.253" is configured as a High Value Domain, so it cannot be modified.')) def test_delete_high_value_domain_fails_ip6_ptr(shared_zone_test_context): """ Test that deleting a high value domain fails for ip6 ptr """ - client = shared_zone_test_context.ok_vinyldns_client zone_ip6 = shared_zone_test_context.ip6_reverse_zone list_results_page_ip6 = client.list_recordsets_by_zone(zone_ip6["id"], status=200)["recordSets"] record_ip6 = [item for item in list_results_page_ip6 if item["name"] == "0.0.0.0.f.f.f.f.0.0.0.0.0.0.0.0.0.0.0.0"][0] errors_ip6 = client.delete_recordset(record_ip6["zoneId"], record_ip6["id"], status=422) - assert_that(errors_ip6, is_('Record name "fd69:27cc:fe91:0000:0000:0000:ffff:0000" is configured as a High Value Domain, so it cannot be modified.')) + assert_that(errors_ip6, is_(f'Record name "{shared_zone_test_context.ip6_prefix}:0000:0000:0000:ffff:0000" is configured as a High Value Domain, so it cannot be modified.')) -def test_no_delete_access_non_test_zone(shared_zone_test_context): - """ - Test that a test user cannot delete a record in a non-test zone (even if admin) - """ - - client = shared_zone_test_context.shared_zone_vinyldns_client - zone_id = 
shared_zone_test_context.non_test_shared_zone["id"] - - list_results = client.list_recordsets_by_zone(zone_id, status=200)["recordSets"] - record_delete = [item for item in list_results if item["name"] == "delete-test"][0] - - client.delete_recordset(zone_id, record_delete["id"], status=403) - def test_delete_for_user_in_record_owner_group_in_shared_zone_succeeds(shared_zone_test_context): """ Test that a user in record owner group can delete a record in a shared zone @@ -679,7 +641,7 @@ def test_delete_for_user_in_record_owner_group_in_shared_zone_succeeds(shared_zo shared_zone = shared_zone_test_context.shared_zone shared_group = shared_zone_test_context.shared_record_group - record_json = create_recordset(shared_zone, "test_shared_del_og", "A", [{"address": "1.1.1.1"}], ownergroup_id = shared_group["id"]) + record_json = create_recordset(shared_zone, "test_shared_del_og", "A", [{"address": "1.1.1.1"}], ownergroup_id=shared_group["id"]) create_rs = shared_client.create_recordset(record_json, status=202) result_rs = shared_client.wait_until_recordset_change_status(create_rs, "Complete")["recordSet"] @@ -687,6 +649,7 @@ def test_delete_for_user_in_record_owner_group_in_shared_zone_succeeds(shared_zo delete_rs = ok_client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) ok_client.wait_until_recordset_change_status(delete_rs, "Complete") + def test_delete_for_zone_admin_in_shared_zone_succeeds(shared_zone_test_context): """ Test that a zone admin not in record owner group can delete a record in a shared zone @@ -694,7 +657,7 @@ def test_delete_for_zone_admin_in_shared_zone_succeeds(shared_zone_test_context) shared_client = shared_zone_test_context.shared_zone_vinyldns_client shared_zone = shared_zone_test_context.shared_zone - record_json = create_recordset(shared_zone, "test_shared_del_admin", "A", [{"address": "1.1.1.1"}], ownergroup_id = shared_zone_test_context.shared_record_group["id"]) + record_json = create_recordset(shared_zone, 
"test_shared_del_admin", "A", [{"address": "1.1.1.1"}], ownergroup_id=shared_zone_test_context.shared_record_group["id"]) create_rs = shared_client.create_recordset(record_json, status=202) result_rs = shared_client.wait_until_recordset_change_status(create_rs, "Complete")["recordSet"] @@ -702,6 +665,7 @@ def test_delete_for_zone_admin_in_shared_zone_succeeds(shared_zone_test_context) delete_rs = shared_client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) shared_client.wait_until_recordset_change_status(delete_rs, "Complete") + def test_delete_for_unowned_record_with_approved_record_type_in_shared_zone_succeeds(shared_zone_test_context): """ Test that a user not associated with a unowned record can delete it in a shared zone @@ -718,35 +682,34 @@ def test_delete_for_unowned_record_with_approved_record_type_in_shared_zone_succ delete_rs = ok_client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) ok_client.wait_until_recordset_change_status(delete_rs, "Complete") + def test_delete_for_user_not_in_record_owner_group_in_shared_zone_fails(shared_zone_test_context): """ Test that a user cannot delete a record in a shared zone if not part of record owner group """ - dummy_client = shared_zone_test_context.dummy_vinyldns_client shared_client = shared_zone_test_context.shared_zone_vinyldns_client shared_zone = shared_zone_test_context.shared_zone result_rs = None - record_json = create_recordset(shared_zone, "test_shared_del_nonog", "A", [{"address": "1.1.1.1"}], ownergroup_id = shared_zone_test_context.shared_record_group["id"]) + record_json = create_recordset(shared_zone, "test_shared_del_nonog", "A", [{"address": "1.1.1.1"}], ownergroup_id=shared_zone_test_context.shared_record_group["id"]) try: create_rs = shared_client.create_recordset(record_json, status=202) result_rs = shared_client.wait_until_recordset_change_status(create_rs, "Complete")["recordSet"] error = dummy_client.delete_recordset(shared_zone["id"], result_rs["id"], 
status=403) - assert_that(error, is_("User dummy does not have access to delete test-shared-del-nonog.shared.")) - + assert_that(error, is_(f'User dummy does not have access to delete test-shared-del-nonog.{shared_zone["name"]}')) finally: if result_rs: delete_rs = shared_client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) shared_client.wait_until_recordset_change_status(delete_rs, "Complete") + def test_delete_for_user_not_in_unowned_record_in_shared_zone_fails_if_record_type_is_not_approved(shared_zone_test_context): """ Test that a user cannot delete a record in a shared zone if the record is unowned and the record type is not approved """ - dummy_client = shared_zone_test_context.dummy_vinyldns_client shared_client = shared_zone_test_context.shared_zone_vinyldns_client shared_zone = shared_zone_test_context.shared_zone @@ -759,13 +722,13 @@ def test_delete_for_user_not_in_unowned_record_in_shared_zone_fails_if_record_ty result_rs = shared_client.wait_until_recordset_change_status(create_rs, "Complete")["recordSet"] error = dummy_client.delete_recordset(shared_zone["id"], result_rs["id"], status=403) - assert_that(error, is_("User dummy does not have access to delete test-shared-del-not-approved-record-type.shared.")) - + assert_that(error, is_(f'User dummy does not have access to delete test-shared-del-not-approved-record-type.{shared_zone["name"]}')) finally: if result_rs: delete_rs = shared_client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) shared_client.wait_until_recordset_change_status(delete_rs, "Complete") + def test_delete_for_user_in_record_owner_group_in_non_shared_zone_fails(shared_zone_test_context): """ Test that a user in record owner group cannot delete a record in a non-shared zone @@ -775,15 +738,14 @@ def test_delete_for_user_in_record_owner_group_in_non_shared_zone_fails(shared_z ok_zone = shared_zone_test_context.ok_zone result_rs = None - record_json = create_recordset(ok_zone, 
"test_non_shared_del_og", "A", [{"address": "1.1.1.1"}], ownergroup_id = shared_zone_test_context.shared_record_group["id"]) + record_json = create_recordset(ok_zone, "test_non_shared_del_og", "A", [{"address": "1.1.1.1"}], ownergroup_id=shared_zone_test_context.shared_record_group["id"]) try: create_rs = ok_client.create_recordset(record_json, status=202) result_rs = ok_client.wait_until_recordset_change_status(create_rs, "Complete")["recordSet"] error = shared_client.delete_recordset(ok_zone["id"], result_rs["id"], status=403) - assert_that(error, is_("User sharedZoneUser does not have access to delete test-non-shared-del-og.ok.")) - + assert_that(error, is_(f'User sharedZoneUser does not have access to delete test-non-shared-del-og.{ok_zone["name"]}')) finally: if result_rs: delete_rs = ok_client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) diff --git a/modules/api/functional_test/live_tests/recordsets/get_recordset_test.py b/modules/api/functional_test/live_tests/recordsets/get_recordset_test.py index 63b993f52..7f0dc6621 100644 --- a/modules/api/functional_test/live_tests/recordsets/get_recordset_test.py +++ b/modules/api/functional_test/live_tests/recordsets/get_recordset_test.py @@ -1,9 +1,7 @@ import pytest -import uuid from utils import * -from hamcrest import * -from vinyldns_python import VinylDNSClient + def test_get_recordset_no_authorization(shared_zone_test_context): """ @@ -124,12 +122,12 @@ def test_at_get_recordset(shared_zone_test_context): records = result_rs["records"] assert_that(records, has_length(1)) assert_that(records[0]["text"], is_("someText")) - finally: if result_rs: delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) client.wait_until_recordset_change_status(delete_result, "Complete") + def test_get_recordset_from_shared_zone(shared_zone_test_context): """ Test getting a recordset as the record group owner @@ -137,10 +135,11 @@ def 
test_get_recordset_from_shared_zone(shared_zone_test_context):
     client = shared_zone_test_context.shared_zone_vinyldns_client
     retrieved_rs = None
     try:
+        shared_group = shared_zone_test_context.shared_record_group
         new_rs = create_recordset(shared_zone_test_context.shared_zone,
-                                  "test_get_recordset", "TXT", [{"text":"should-work"}],
+                                  "test_get_recordset", "TXT", [{"text": "should-work"}],
                                   100,
-                                  shared_zone_test_context.shared_record_group["id"])
+                                  shared_group["id"])
         result = client.create_recordset(new_rs, status=202)
         result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]
@@ -151,14 +150,14 @@ def test_get_recordset_from_shared_zone(shared_zone_test_context):
         retrieved_rs = retrieved["recordSet"]
         verify_recordset(retrieved_rs, new_rs)

-        assert_that(retrieved_rs["ownerGroupId"], is_(shared_zone_test_context.shared_record_group["id"]))
-        assert_that(retrieved_rs["ownerGroupName"], is_("record-ownergroup"))
-
+        assert_that(retrieved_rs["ownerGroupId"], is_(shared_group["id"]))
+        assert_that(retrieved_rs["ownerGroupName"], is_(shared_group["name"]))
     finally:
         if retrieved_rs:
             delete_result = client.delete_recordset(retrieved_rs["zoneId"], retrieved_rs["id"], status=202)
             client.wait_until_recordset_change_status(delete_result, "Complete")

+
 def test_get_unowned_recordset_from_shared_zone_succeeds_if_record_type_approved(shared_zone_test_context):
     """
     Test getting an unowned recordset with no admin rights succeeds if the record type is approved
@@ -167,8 +166,7 @@ def test_get_unowned_recordset_from_shared_zone_succeeds_if_record_type_approved
     ok_client = shared_zone_test_context.ok_vinyldns_client
     result_rs = None
     try:
-        new_rs = create_recordset(shared_zone_test_context.shared_zone,
-                                  "test_get_unowned_recordset_approved_type", "A", [{"address": "1.2.3.4"}])
+        new_rs = create_recordset(shared_zone_test_context.shared_zone, "test_get_unowned_recordset_approved_type", "A", [{"address": "1.2.3.4"}])
         result = client.create_recordset(new_rs, status=202)
         result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]
@@ -177,12 +175,12 @@ def test_get_unowned_recordset_from_shared_zone_succeeds_if_record_type_approved
         retrieved = ok_client.get_recordset(result_rs["zoneId"], result_rs["id"], status=200)
         retrieved_rs = retrieved["recordSet"]
         verify_recordset(retrieved_rs, new_rs)
-
     finally:
         if result_rs:
             delete_result = ok_client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
             ok_client.wait_until_recordset_change_status(delete_result, "Complete")

+
 def test_get_unowned_recordset_from_shared_zone_fails_if_record_type_not_approved(shared_zone_test_context):
     """
     Test getting an unowned recordset with no admin rights fails if the record type is not approved
@@ -190,22 +188,21 @@ def test_get_unowned_recordset_from_shared_zone_fails_if_record_type_not_approve
     client = shared_zone_test_context.shared_zone_vinyldns_client
     result_rs = None
     try:
-        new_rs = create_recordset(shared_zone_test_context.shared_zone,
-                                  "test_get_unowned_recordset", "MX", [{"preference": 3, "exchange": "mx"}])
+        new_rs = create_recordset(shared_zone_test_context.shared_zone, "test_get_unowned_recordset", "MX", [{"preference": 3, "exchange": "mx"}])
         result = client.create_recordset(new_rs, status=202)
         result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]

         # Get the recordset we just made and verify
-        ok_client = shared_zone_test_context.ok_vinyldns_client
-        error = ok_client.get_recordset(result_rs["zoneId"], result_rs["id"], status=403)
-        assert_that(error, is_("User ok does not have access to view test-get-unowned-recordset.shared."))
-
+        dummy_client = shared_zone_test_context.dummy_vinyldns_client
+        error = dummy_client.get_recordset(result_rs["zoneId"], result_rs["id"], status=403)
+        assert_that(error, is_(f'User dummy does not have access to view test-get-unowned-recordset.{shared_zone_test_context.shared_zone["name"]}'))
     finally:
         if result_rs:
             delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
             client.wait_until_recordset_change_status(delete_result, "Complete")

+
 def test_get_owned_recordset_from_not_shared_zone(shared_zone_test_context):
     """
     Test getting a recordset as the record group owner not in a shared zone fails
@@ -213,17 +210,15 @@ def test_get_owned_recordset_from_not_shared_zone(shared_zone_test_context):
     client = shared_zone_test_context.ok_vinyldns_client
     result_rs = None
     try:
-        new_rs = create_recordset(shared_zone_test_context.ok_zone,
-                                  "test_cant_get_owned_recordset", "TXT", [{"text":"should-work"}],
-                                  100,
-                                  shared_zone_test_context.shared_record_group["id"])
+        new_rs = create_recordset(shared_zone_test_context.ok_zone, "test_cant_get_owned_recordset", "TXT", [{"text": "should-work"}],
+                                  ttl=100,
+                                  ownergroup_id=shared_zone_test_context.shared_record_group["id"])
         result = client.create_recordset(new_rs, status=202)
         result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]

         # Get the recordset we just made and verify
         shared_client = shared_zone_test_context.shared_zone_vinyldns_client
         shared_client.get_recordset(result_rs["zoneId"], result_rs["id"], status=403)
-
     finally:
         if result_rs:
             delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
diff --git a/modules/api/functional_test/live_tests/recordsets/list_recordset_changes_test.py b/modules/api/functional_test/live_tests/recordsets/list_recordset_changes_test.py
index 1aa9ba62b..5c9f9a1c5 100644
--- a/modules/api/functional_test/live_tests/recordsets/list_recordset_changes_test.py
+++ b/modules/api/functional_test/live_tests/recordsets/list_recordset_changes_test.py
@@ -11,7 +11,6 @@ def check_changes_response(response, recordChanges=False, nextId=False, startFro
     :param startFrom: the string for startFrom or false if doesnt exist
     :param maxItems: maxItems is defined as an Int by default so will always return an Int
     """
-
     assert_that(response, has_key("zoneId"))  # always defined as random string
     if recordChanges:
         assert_that(response["recordSetChanges"], is_not(has_length(0)))
diff --git a/modules/api/functional_test/live_tests/recordsets/list_recordsets_test.py b/modules/api/functional_test/live_tests/recordsets/list_recordsets_test.py
index 97c61a78c..df53c5d3f 100644
--- a/modules/api/functional_test/live_tests/recordsets/list_recordsets_test.py
+++ b/modules/api/functional_test/live_tests/recordsets/list_recordsets_test.py
@@ -1,14 +1,10 @@
 import pytest
-import sys

-from utils import *
-from hamcrest import *
-from vinyldns_python import VinylDNSClient
-from test_data import TestData
+from utils import *


 @pytest.fixture(scope="module")
-def rs_fixture(request, shared_zone_test_context):
+def rs_fixture(shared_zone_test_context):
     return shared_zone_test_context.list_records_context


@@ -35,7 +31,7 @@ def test_list_recordsets_with_owner_group_id_and_owner_group_name(rs_fixture):
     try:
         # create a record in the zone with an owner group ID
         new_rs = create_recordset(rs_zone,
-                                  "test-owned-recordset", "TXT", [{"text":"should-work"}],
+                                  "test-owned-recordset", "TXT", [{"text": "should-work"}],
                                   100,
                                   shared_group["id"])

@@ -50,7 +46,6 @@ def test_list_recordsets_with_owner_group_id_and_owner_group_name(rs_fixture):
         assert_that(rs_from_list["name"], is_("test-owned-recordset"))
         assert_that(rs_from_list["ownerGroupId"], is_(shared_group["id"]))
         assert_that(rs_from_list["ownerGroupName"], is_(shared_group["name"]))
-
     finally:
         if result_rs:
             delete_result = client.delete_recordset(rs_zone["id"], result_rs["id"], status=202)
@@ -88,7 +83,7 @@ def test_list_recordsets_excess_page_size(rs_fixture):
     client = rs_fixture.client
     rs_zone = rs_fixture.zone

-    #page of 22 items
+    # page of 22 items
     list_results_page = client.list_recordsets_by_zone(rs_zone["id"], max_items=23, status=200)

     rs_fixture.check_recordsets_page_accuracy(list_results_page, size=22, offset=0, max_items=23, next_id=False)
@@ -152,7 +147,6 @@ def test_list_recordsets_duplicate_names(rs_fixture):
         list_results = client.list_recordsets_by_zone(rs_zone["id"], status=200, start_from=list_results["nextId"], max_items=1)
         assert_that(list_results["recordSets"][0]["id"], is_(created[1]))
-
     finally:
         for recordset_id in created:
             client.delete_recordset(rs_zone["id"], recordset_id, status=202)
@@ -284,7 +278,8 @@ def test_list_recordsets_with_record_type_filter_valid_and_invalid_type(rs_fixtu
     list_results_records = list_results["recordSets"]
     assert_that(list_results_records, has_length(1))
     assert_that(list_results_records[0]["type"], contains_string("SOA"))
-    assert_that(list_results_records[0]["name"], contains_string("list-records."))
+    assert_that(list_results_records[0]["name"], contains_string(rs_fixture.zone["name"]))
+

 def test_list_recordsets_with_record_type_filter_invalid_type(rs_fixture):
     """
@@ -311,7 +306,7 @@ def test_list_recordsets_with_sort_descending(rs_fixture):
     list_results_records = list_results["recordSets"]

     assert_that(list_results_records[0]["type"], contains_string("NS"))
-    assert_that(list_results_records[0]["name"], contains_string("list-records."))
+    assert_that(list_results_records[0]["name"], contains_string(rs_fixture.zone["name"]))
     assert_that(list_results_records[21]["type"], contains_string("A"))
     assert_that(list_results_records[21]["name"], contains_string("0-A"))

@@ -330,7 +325,7 @@ def test_list_recordsets_with_invalid_sort(rs_fixture):
     assert_that(list_results_records[0]["type"], contains_string("A"))
     assert_that(list_results_records[0]["name"], contains_string("0-A"))
     assert_that(list_results_records[21]["type"], contains_string("SOA"))
-    assert_that(list_results_records[21]["name"], contains_string("list-records."))
+    assert_that(list_results_records[21]["name"], contains_string(rs_fixture.zone["name"]))


 def test_list_recordsets_no_authorization(rs_fixture):
@@ -376,7 +371,6 @@ def test_list_recordsets_with_acl(shared_zone_test_context):
             elif rs["name"] == rec3["name"]:
                 verify_recordset(rs, rec3)
                 assert_that(rs["accessLevel"], is_("NoAccess"))
-
     finally:
         clear_ok_acl_rules(shared_zone_test_context)
         for rs in new_rs:
diff --git a/modules/api/functional_test/live_tests/recordsets/update_recordset_test.py b/modules/api/functional_test/live_tests/recordsets/update_recordset_test.py
index a45c89586..980706a50 100644
--- a/modules/api/functional_test/live_tests/recordsets/update_recordset_test.py
+++ b/modules/api/functional_test/live_tests/recordsets/update_recordset_test.py
@@ -1,11 +1,10 @@
 import copy
-import json
-import pytest
-from hamcrest import *
-from requests.compat import urljoin
-from utils import *
+from urllib.parse import urljoin

-from test_data import TestData
+import pytest
+
+from live_tests.test_data import TestData
+from utils import *


 def test_update_recordset_name_fails(shared_zone_test_context):
@@ -45,7 +44,6 @@ def test_update_recordset_name_fails(shared_zone_test_context):
         error = client.update_recordset(updated_rs, status=422)
         assert_that(error, is_("Cannot update RecordSet's name."))
-
     finally:
         if result_rs:
             result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=(202, 404))
@@ -89,7 +87,6 @@ def test_update_recordset_type_fails(shared_zone_test_context):
         error = client.update_recordset(updated_rs, status=422)
         assert_that(error, is_("Cannot update RecordSet's record type."))
-
     finally:
         if result_rs:
             result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=(202, 404))
@@ -218,7 +215,6 @@ def test_update_reverse_record_types(shared_zone_test_context, record_name, test
         result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]
         assert_that(result_rs["ttl"], is_(1000))
-
     finally:
         if result_rs:
             result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=(202, 404))
@@ -354,7 +350,6 @@ def test_update_recordset_replace_2_records_with_1_different_record(shared_zone_
         rdata_strings = rdata(answers)
         assert_that(rdata_strings, has_length(1))
         assert_that("1.1.1.1", is_in(rdata_strings))
-
     finally:
         if result_rs:
             delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
@@ -544,7 +539,6 @@ def test_update_ipv4_ptr_recordset_with_verify(shared_zone_test_context):
         result_rs = result["recordSet"]
         result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]

-        print("\r\n\r\n!!!recordset is active! Updating...")
         new_ptr_target = "www.vinyldns."

         new_rs = result_rs
@@ -555,16 +549,13 @@ def test_update_ipv4_ptr_recordset_with_verify(shared_zone_test_context):
         result_rs = result["recordSet"]
         result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]

-        print("\r\n\r\n!!!updated recordset is active! Verifying...")
         verify_recordset(result_rs, new_rs)
-        print("\r\n\r\n!!!recordset verified...")
         print(result_rs)

         records = result_rs["records"]
         assert_that(records[0]["ptrdname"], is_(new_ptr_target))

-        print("\r\n\r\n!!!verifying recordset in dns backend")
         # verify that the record exists in the backend dns server
         answers = dns_resolve(reverse4_zone, result_rs["name"], result_rs["type"])
         rdata_strings = rdata(answers)
@@ -600,7 +591,6 @@ def test_update_ipv6_ptr_recordset(shared_zone_test_context):
         result_rs = result["recordSet"]
         result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]

-        print("\r\n\r\n!!!recordset is active! Updating...")
         new_ptr_target = "www.vinyldns."

         new_rs = result_rs
@@ -611,16 +601,13 @@ def test_update_ipv6_ptr_recordset(shared_zone_test_context):
         result_rs = result["recordSet"]
         result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"]

-        print("\r\n\r\n!!!updated recordset is active! Verifying...")
         verify_recordset(result_rs, new_rs)
-        print("\r\n\r\n!!!recordset verified...")
         print(result_rs)

         records = result_rs["records"]
         assert_that(records[0]["ptrdname"], is_(new_ptr_target))

-        print("\r\n\r\n!!!verifying recordset in dns backend")
         answers = dns_resolve(reverse6_zone, result_rs["name"], result_rs["type"])
         rdata_strings = rdata(answers)
         assert_that(rdata_strings, has_length(1))
@@ -776,8 +763,7 @@ def test_user_can_update_record_via_user_acl_rule(shared_zone_test_context):

         # Dummy user can update record
         result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202)
-        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, "Complete")[
-            "recordSet"]
+        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, "Complete")["recordSet"]
         assert_that(result_rs["ttl"], is_(expected_ttl))
     finally:
         clear_ok_acl_rules(shared_zone_test_context)
@@ -809,8 +795,7 @@ def test_user_can_update_record_via_group_acl_rule(shared_zone_test_context):

         # Dummy user can update record
         result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202)
-        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, "Complete")[
-            "recordSet"]
+        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, "Complete")["recordSet"]
         assert_that(result_rs["ttl"], is_(expected_ttl))
     finally:
         clear_ok_acl_rules(shared_zone_test_context)
@@ -841,8 +826,7 @@ def test_user_rule_priority_over_group_acl_rule(shared_zone_test_context):

         # Dummy user can update record
         result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202)
-        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, "Complete")[
-            "recordSet"]
+        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, "Complete")["recordSet"]
         assert_that(result_rs["ttl"], is_(expected_ttl))
     finally:
         clear_ok_acl_rules(shared_zone_test_context)
@@ -923,7 +907,7 @@ def test_acl_rule_with_cidr_ip4_success(shared_zone_test_context):
     ip4_zone = shared_zone_test_context.ip4_reverse_zone
     client = shared_zone_test_context.ok_vinyldns_client
     try:
-        acl_rule = generate_acl_rule("Write", userId="dummy", recordTypes=["PTR"], recordMask="10.10.0.0/32")
+        acl_rule = generate_acl_rule("Write", userId="dummy", recordTypes=["PTR"], recordMask=f"{shared_zone_test_context.ip4_10_prefix}.0.0/32")

         result_rs = seed_ptr_recordset(client, "0.0", ip4_zone)

@@ -938,8 +922,7 @@ def test_acl_rule_with_cidr_ip4_success(shared_zone_test_context):

         # Dummy user can update record
         result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202)
-        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, "Complete")[
-            "recordSet"]
+        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, "Complete")["recordSet"]
         assert_that(result_rs["ttl"], is_(expected_ttl))
     finally:
         clear_ip4_acl_rules(shared_zone_test_context)
@@ -986,7 +969,7 @@ def test_acl_rule_with_cidr_ip6_success(shared_zone_test_context):
     client = shared_zone_test_context.ok_vinyldns_client
     try:
         acl_rule = generate_acl_rule("Write", userId="dummy", recordTypes=["PTR"],
-                                     recordMask="fd69:27cc:fe91:0000:0000:0000:0000:0000/127")
+                                     recordMask=f"{shared_zone_test_context.ip6_prefix}:0000:0000:0000:0000:0000/127")

         result_rs = seed_ptr_recordset(client, "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0", ip6_zone)

@@ -1019,9 +1002,10 @@ def test_acl_rule_with_cidr_ip6_failure(shared_zone_test_context):
     result_rs = None
     ip6_zone = shared_zone_test_context.ip6_reverse_zone
     client = shared_zone_test_context.ok_vinyldns_client
+    ip6_prefix = shared_zone_test_context.ip6_prefix
     try:
         acl_rule = generate_acl_rule("Write", userId="dummy", recordTypes=["PTR"],
-                                     recordMask="fd69:27cc:fe91:0000:0000:0000:0000:0000/127")
+                                     recordMask=f"{ip6_prefix}:0000:0000:0000:0000:0000/127")

         result_rs = seed_ptr_recordset(client, "0.0.0.0.0.0.0.0.0.0.0.0.0.0.5.0.0.0.0.0", ip6_zone)

@@ -1049,8 +1033,8 @@ def test_more_restrictive_cidr_ip4_rule_priority(shared_zone_test_context):
     client = shared_zone_test_context.ok_vinyldns_client
     result_rs = None
     try:
-        slash16_rule = generate_acl_rule("Read", userId="dummy", recordTypes=["PTR"], recordMask="10.10.0.0/16")
-        slash32_rule = generate_acl_rule("Write", userId="dummy", recordTypes=["PTR"], recordMask="10.10.0.0/32")
+        slash16_rule = generate_acl_rule("Read", userId="dummy", recordTypes=["PTR"], recordMask=f"{shared_zone_test_context.ip4_10_prefix}.0.0/16")
+        slash32_rule = generate_acl_rule("Write", userId="dummy", recordTypes=["PTR"], recordMask=f"{shared_zone_test_context.ip4_10_prefix}.0.0/32")

         result_rs = seed_ptr_recordset(client, "0.0", ip4_zone)
         result_rs["ttl"] = result_rs["ttl"] + 1000
@@ -1074,12 +1058,13 @@ def test_more_restrictive_cidr_ip6_rule_priority(shared_zone_test_context):
     """
     ip6_zone = shared_zone_test_context.ip6_reverse_zone
     client = shared_zone_test_context.ok_vinyldns_client
+    ip6_prefix = shared_zone_test_context.ip6_prefix
     result_rs = None
     try:
         slash50_rule = generate_acl_rule("Read", userId="dummy", recordTypes=["PTR"],
-                                         recordMask="fd69:27cc:fe91:0000:0000:0000:0000:0000/50")
+                                         recordMask=f"{ip6_prefix}:0000:0000:0000:0000:0000/50")
         slash100_rule = generate_acl_rule("Write", userId="dummy", recordTypes=["PTR"],
-                                          recordMask="fd69:27cc:fe91:0000:0000:0000:0000:0000/100")
+                                          recordMask=f"{ip6_prefix}:0000:0000:0000:0000:0000/100")

         result_rs = seed_ptr_recordset(client, "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0", ip6_zone)
         result_rs["ttl"] = result_rs["ttl"] + 1000
@@ -1104,43 +1089,44 @@ def test_mix_of_cidr_ip6_and_acl_rules_priority(shared_zone_test_context):
     ip6_zone = shared_zone_test_context.ip6_reverse_zone
     ok_zone = shared_zone_test_context.ok_zone
     client = shared_zone_test_context.ok_vinyldns_client
-    result_rs_PTR = None
-    result_rs_A = None
-    result_rs_AAAA = None
+    ip6_prefix = shared_zone_test_context.ip6_prefix
+    result_rs_ptr = None
+    result_rs_a = None
+    result_rs_aaaa = None
     try:
         mixed_type_rule_no_mask = generate_acl_rule("Read", userId="dummy", recordTypes=["PTR", "AAAA", "A"])
         ptr_rule_with_mask = generate_acl_rule("Write", userId="dummy", recordTypes=["PTR"],
-                                               recordMask="fd69:27cc:fe91:0000:0000:0000:0000:0000/50")
+                                               recordMask=f"{ip6_prefix}:0000:0000:0000:0000:0000/50")

-        result_rs_PTR = seed_ptr_recordset(client, "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0", ip6_zone)
-        result_rs_PTR["ttl"] = result_rs_PTR["ttl"] + 1000
+        result_rs_ptr = seed_ptr_recordset(client, "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0", ip6_zone)
+        result_rs_ptr["ttl"] = result_rs_ptr["ttl"] + 1000

-        result_rs_A = seed_text_recordset(client, "test_more_restrictive_acl_rule_priority_1", ok_zone)
-        result_rs_A["ttl"] = result_rs_A["ttl"] + 1000
+        result_rs_a = seed_text_recordset(client, "test_more_restrictive_acl_rule_priority_1", ok_zone)
+        result_rs_a["ttl"] = result_rs_a["ttl"] + 1000

-        result_rs_AAAA = seed_text_recordset(client, "test_more_restrictive_acl_rule_priority_2", ok_zone)
-        result_rs_AAAA["ttl"] = result_rs_AAAA["ttl"] + 1000
+        result_rs_aaaa = seed_text_recordset(client, "test_more_restrictive_acl_rule_priority_2", ok_zone)
+        result_rs_aaaa["ttl"] = result_rs_aaaa["ttl"] + 1000

         # add rules
         add_ip6_acl_rules(shared_zone_test_context, [mixed_type_rule_no_mask, ptr_rule_with_mask])
         add_ok_acl_rules(shared_zone_test_context, [mixed_type_rule_no_mask, ptr_rule_with_mask])

         # Dummy user cannot update record for A,AAAA, but can for PTR
-        shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs_PTR, status=202)
-        shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs_A, status=403)
-        shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs_AAAA, status=403)
+        shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs_ptr, status=202)
+        shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs_a, status=403)
+        shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs_aaaa, status=403)
     finally:
         clear_ip6_acl_rules(shared_zone_test_context)
         clear_ok_acl_rules(shared_zone_test_context)
-        if result_rs_A:
-            delete_result = client.delete_recordset(result_rs_A["zoneId"], result_rs_A["id"], status=202)
+        if result_rs_a:
+            delete_result = client.delete_recordset(result_rs_a["zoneId"], result_rs_a["id"], status=202)
             client.wait_until_recordset_change_status(delete_result, "Complete")
-        if result_rs_AAAA:
-            delete_result = client.delete_recordset(result_rs_AAAA["zoneId"], result_rs_AAAA["id"], status=202)
+        if result_rs_aaaa:
+            delete_result = client.delete_recordset(result_rs_aaaa["zoneId"], result_rs_aaaa["id"], status=202)
             client.wait_until_recordset_change_status(delete_result, "Complete")
-        if result_rs_PTR:
-            delete_result = client.delete_recordset(result_rs_PTR["zoneId"], result_rs_PTR["id"], status=202)
+        if result_rs_ptr:
+            delete_result = client.delete_recordset(result_rs_ptr["zoneId"], result_rs_ptr["id"], status=202)
             client.wait_until_recordset_change_status(delete_result, "Complete")
@@ -1178,7 +1164,6 @@ def test_empty_acl_record_type_applies_to_all(shared_zone_test_context):
     """
     Test an empty record set rule applies to all types
     """
-
     result_rs = None
     ok_zone = shared_zone_test_context.ok_zone
     client = shared_zone_test_context.ok_vinyldns_client
@@ -1212,7 +1197,6 @@ def test_acl_rule_with_fewer_record_types_prioritized(shared_zone_test_context):
     """
     Test a rule on a specific record type takes priority over a group of types
     """
-
     result_rs = None
     ok_zone = shared_zone_test_context.ok_zone
     client = shared_zone_test_context.ok_vinyldns_client
@@ -1228,8 +1212,7 @@ def test_acl_rule_with_fewer_record_types_prioritized(shared_zone_test_context):

         # Dummy user can update record in zone with base rule
         result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202)
-        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, "Complete")[
-            "recordSet"]
+        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, "Complete")["recordSet"]

         # add rule
         add_ok_acl_rules(shared_zone_test_context, [acl_rule1, acl_rule2])
@@ -1264,8 +1247,7 @@ def test_acl_rule_user_over_record_type_priority(shared_zone_test_context):

         # Dummy user can update record in zone with base rule
         result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202)
-        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, "Complete")[
-            "recordSet"]
+        result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, "Complete")["recordSet"]

         # add rule
         add_ok_acl_rules(shared_zone_test_context, [acl_rule1, acl_rule2])
@@ -1285,7 +1267,6 @@ def test_acl_rule_with_record_mask_success(shared_zone_test_context):
     """
     Test rule with record mask allows user to update record
     """
-
     result_rs = None
     ok_zone = shared_zone_test_context.ok_zone
     client = shared_zone_test_context.ok_vinyldns_client
@@ -1319,7 +1300,6 @@ def test_acl_rule_with_record_mask_failure(shared_zone_test_context):
     """
     Test rule with unmatching record mask is not applied
     """
-
     result_rs = None
     ok_zone = shared_zone_test_context.ok_zone
     client = shared_zone_test_context.ok_vinyldns_client
@@ -1346,7 +1326,6 @@ def test_acl_rule_with_defined_mask_prioritized(shared_zone_test_context):
     """
     Test a rule on a specific record mask takes priority over All
     """
-
     result_rs = None
     ok_zone = shared_zone_test_context.ok_zone
     client = shared_zone_test_context.ok_vinyldns_client
@@ -1383,7 +1362,6 @@ def test_user_rule_over_mask_prioritized(shared_zone_test_context):
     """
     Test user/group logic priority over record mask
     """
-
     result_rs = None
     ok_zone = shared_zone_test_context.ok_zone
     client = shared_zone_test_context.ok_vinyldns_client
@@ -1443,7 +1421,6 @@ def test_ns_update_passes(shared_zone_test_context):

         change_result = client.update_recordset(changed_rs, status=202)
         client.wait_until_recordset_change_status(change_result, "Complete")
-
     finally:
         if ns_rs:
             client.delete_recordset(ns_rs["zoneId"], ns_rs["id"], status=(202, 404))
@@ -1499,14 +1476,12 @@ def test_update_to_txt_dotted_host_succeeds(shared_zone_test_context):
     result_rs = None
     ok_zone = shared_zone_test_context.ok_zone
     client = shared_zone_test_context.ok_vinyldns_client
-
     try:
         result_rs = seed_text_recordset(client, "update_with.dots", ok_zone)
         result_rs["ttl"] = 333

         update_rs = client.update_recordset(result_rs, status=202)
         result_rs = client.wait_until_recordset_change_status(update_rs, "Complete")["recordSet"]
-
     finally:
         if result_rs:
             delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202)
@@ -1521,9 +1496,7 @@ def test_ns_update_existing_ns_origin_fails(shared_zone_test_context):
     zone = shared_zone_test_context.parent_zone

     list_results_page = client.list_recordsets_by_zone(zone["id"], status=200)["recordSets"]
-
     apex_ns = [item for item in list_results_page if item["type"] == "NS" and item["name"] in zone["name"]][0]
-
     apex_ns["ttl"] = apex_ns["ttl"] + 100

     client.update_recordset(apex_ns, status=422)
@@ -1533,21 +1506,17 @@ def test_update_existing_dotted_a_record_succeeds(shared_zone_test_context):
     """
     Test that updating an existing A record with dotted host name succeeds
     """
-
     client = shared_zone_test_context.ok_vinyldns_client
     zone = shared_zone_test_context.ok_zone
     recordsets = client.list_recordsets_by_zone(zone["id"], record_name_filter="dotted.a", status=200)["recordSets"]
-
     update_rs = recordsets[0]
-
     update_rs["records"] = [{"address": "1.1.1.1"}]
     try:
         update_response = client.update_recordset(update_rs, status=202)
         updated_rs = client.wait_until_recordset_change_status(update_response, "Complete")["recordSet"]
         assert_that(updated_rs["records"], is_([{"address": "1.1.1.1"}]))
-
     finally:
         update_rs["records"] = [{"address": "7.7.7.7"}]
         revert_rs_update = client.update_recordset(update_rs, status=202)
@@ -1558,7 +1527,6 @@ def test_update_existing_dotted_cname_record_succeeds(shared_zone_test_context):
     """
     Test that updating an existing CNAME record with dotted host name succeeds
     """
-
     client = shared_zone_test_context.ok_vinyldns_client
     zone = shared_zone_test_context.ok_zone
@@ -1569,7 +1537,6 @@ def test_update_existing_dotted_cname_record_succeeds(shared_zone_test_context):
         update_response = client.update_recordset(update_rs, status=202)
         updated_rs = client.wait_until_recordset_change_status(update_response, "Complete")["recordSet"]
         assert_that(updated_rs["records"], is_([{"cname": "got.reference."}]))
-
     finally:
         update_rs["records"] = [{"cname": "test.example.com"}]
         revert_rs_update = client.update_recordset(update_rs, status=202)
@@ -1580,7 +1547,6 @@ def test_update_succeeds_for_applied_unsynced_record_change(shared_zone_test_con
     """
     Update should succeed if record change is not synced with DNS backend, but has already been applied
     """
-
     client = shared_zone_test_context.ok_vinyldns_client
     zone = shared_zone_test_context.parent_zone
@@ -1608,12 +1574,12 @@ def test_update_succeeds_for_applied_unsynced_record_change(shared_zone_test_con

         retrieved_rs = client.get_recordset(zone["id"], update_rs["id"])["recordSet"]
         verify_recordset(retrieved_rs, updates)
-
     finally:
         try:
             delete_result = client.delete_recordset(zone["id"], create_rs["id"], status=202)
             client.wait_until_recordset_change_status(delete_result, "Complete")
-        except:
+        except Exception:
+            traceback.print_exc()
             pass
@@ -1621,7 +1587,6 @@ def test_update_fails_for_unapplied_unsynced_record_change(shared_zone_test_cont
     """
     Update should fail if record change is not synced with DNS backend
     """
-
     client = shared_zone_test_context.ok_vinyldns_client
     zone = shared_zone_test_context.parent_zone
@@ -1643,15 +1608,14 @@ def test_update_fails_for_unapplied_unsynced_record_change(shared_zone_test_cont
         ]
         update_response = client.update_recordset(update_rs, status=202)
         response = client.wait_until_recordset_change_status(update_response, "Failed")
-        assert_that(response["systemMessage"], is_("Failed validating update to DNS for change " + response["id"] +
-                                                   ":" + a_rs[
-                                                       "name"] + ": This record set is out of sync with the DNS backend; sync this zone before attempting to update this record set."))
-
+        assert_that(response["systemMessage"], is_(f"Failed validating update to DNS for change {response['id']}:{a_rs['name']}: "
+                                                   f"This record set is out of sync with the DNS backend; sync this zone before attempting to update this record set."))
     finally:
         try:
             delete_result = client.delete_recordset(zone["id"], create_rs["id"], status=202)
             client.wait_until_recordset_change_status(delete_result, "Complete")
-        except:
+        except Exception:
+            traceback.print_exc()
             pass
@@ -1659,7 +1623,6 @@ def test_update_high_value_domain_fails(shared_zone_test_context):
     """
     Test that updating a high value domain fails
     """
-
     client = shared_zone_test_context.ok_vinyldns_client
     zone_system = shared_zone_test_context.system_test_zone
     list_results_page_system = client.list_recordsets_by_zone(zone_system["id"], status=200)["recordSets"]
@@ -1667,15 +1630,13 @@ def test_update_high_value_domain_fails(shared_zone_test_context):
     record_system["ttl"] = record_system["ttl"] + 100

     errors_system = client.update_recordset(record_system, status=422)
-    assert_that(errors_system, is_(
-        'Record name "high-value-domain.system-test." is configured as a High Value Domain, so it cannot be modified.'))
+    assert_that(errors_system, is_(f'Record name "high-value-domain.{zone_system["name"]}" is configured as a High Value Domain, so it cannot be modified.'))


 def test_update_high_value_domain_fails_case_insensitive(shared_zone_test_context):
     """
     Test that updating a high value domain fails regardless of case
     """
-
     client = shared_zone_test_context.ok_vinyldns_client
     zone_system = shared_zone_test_context.system_test_zone
     list_results_page_system = client.list_recordsets_by_zone(zone_system["id"], status=200)["recordSets"]
@@ -1683,8 +1644,7 @@ def test_update_high_value_domain_fails_case_insensitive(shared_zone_test_contex
     record_system["ttl"] = record_system["ttl"] + 100

     errors_system = client.update_recordset(record_system, status=422)
-    assert_that(errors_system, is_(
-        'Record name "high-VALUE-domain-UPPER-CASE.system-test." is configured as a High Value Domain, so it cannot be modified.'))
+    assert_that(errors_system, is_(f'Record name "high-VALUE-domain-UPPER-CASE.{zone_system["name"]}" is configured as a High Value Domain, so it cannot be modified.'))


 def test_update_high_value_domain_fails_ip4_ptr(shared_zone_test_context):
@@ -1698,47 +1658,27 @@ def test_update_high_value_domain_fails_ip4_ptr(shared_zone_test_context):
     record_ip4["ttl"] = record_ip4["ttl"] + 100

     errors_ip4 = client.update_recordset(record_ip4, status=422)
-    assert_that(errors_ip4,
-                is_('Record name "192.0.2.253" is configured as a High Value Domain, so it cannot be modified.'))
+    assert_that(errors_ip4, is_(f'Record name "{shared_zone_test_context.ip4_classless_prefix}.253" is configured as a High Value Domain, so it cannot be modified.'))


 def test_update_high_value_domain_fails_ip6_ptr(shared_zone_test_context):
     """
     Test that updating a high value domain fails for ip6 ptr
     """
-
     client = shared_zone_test_context.ok_vinyldns_client
     zone_ip6 = shared_zone_test_context.ip6_reverse_zone
     list_results_page_ip6 = client.list_recordsets_by_zone(zone_ip6["id"], status=200)["recordSets"]
-    record_ip6 = [item for item in list_results_page_ip6 if item["name"] == "0.0.0.0.f.f.f.f.0.0.0.0.0.0.0.0.0.0.0.0"][
-        0]
+    record_ip6 = [item for item in list_results_page_ip6 if item["name"] == "0.0.0.0.f.f.f.f.0.0.0.0.0.0.0.0.0.0.0.0"][0]
     record_ip6["ttl"] = record_ip6["ttl"] + 100

     errors_ip6 = client.update_recordset(record_ip6, status=422)
-    assert_that(errors_ip6, is_(
-        'Record name "fd69:27cc:fe91:0000:0000:0000:ffff:0000" is configured as a High Value Domain, so it cannot be modified.'))
-
-
-def test_no_update_access_non_test_zone(shared_zone_test_context):
-    """
-    Test that a test user cannot update a record in a non-test zone (even if admin)
-    """
-
-    client = shared_zone_test_context.shared_zone_vinyldns_client
-    zone_id = shared_zone_test_context.non_test_shared_zone["id"]
-
-    list_results = client.list_recordsets_by_zone(zone_id, status=200)["recordSets"]
-    record_update = [item for item in list_results if item["name"] == "update-test"][0]
-    record_update["ttl"] = record_update["ttl"] + 100
-
-    client.update_recordset(record_update, status=403)
+    assert_that(errors_ip6, is_(f'Record name "{shared_zone_test_context.ip6_prefix}:0000:0000:0000:ffff:0000" is configured as a High Value Domain, so it cannot be modified.'))


 def test_update_from_user_in_record_owner_group_for_private_zone_fails(shared_zone_test_context):
     """
     Test that updating with a user in the record owner group fails when the zone is not set to shared
     """
-
     ok_client = shared_zone_test_context.ok_vinyldns_client
     shared_record_group = shared_zone_test_context.shared_record_group
     shared_zone_client = shared_zone_test_context.shared_zone_vinyldns_client
@@ -1755,8 +1695,7 @@ def test_update_from_user_in_record_owner_group_for_private_zone_fails(shared_zo
         update = create_rs
         update["ttl"] = update["ttl"] + 100
         error = shared_zone_client.update_recordset(update, status=403)
-        assert_that(error, is_("User sharedZoneUser does not have access to update test-shared-failure.ok."))
-
+        assert_that(error, is_(f'User sharedZoneUser does not have access to update test-shared-failure.{shared_zone_test_context.ok_zone["name"]}'))
     finally:
         if create_rs:
             delete_result = ok_client.delete_recordset(zone["id"], create_rs["id"], status=202)
@@ -1767,7 +1706,6 @@ def test_update_owner_group_from_user_in_record_owner_group_for_shared_zone_pass
     """
     Test that updating with a user in the record owner group passes when the zone is set to shared
     """
-
     ok_client = shared_zone_test_context.ok_vinyldns_client
     shared_record_group = shared_zone_test_context.shared_record_group
     shared_client = shared_zone_test_context.shared_zone_vinyldns_client
@@ -1785,8 +1723,6 @@ def test_update_owner_group_from_user_in_record_owner_group_for_shared_zone_pass
         update_response = ok_client.update_recordset(update, status=202)
         update_rs = shared_client.wait_until_recordset_change_status(update_response, "Complete")["recordSet"]
         assert_that(update_rs["ownerGroupId"], is_(shared_record_group["id"]))
-
-
     finally:
         if update_rs:
             delete_result = shared_client.delete_recordset(shared_zone["id"], update_rs["id"], status=202)
@@ -1797,7 +1733,6 @@ def test_update_owner_group_from_admin_in_shared_zone_passes(shared_zone_test_co
     """
     Test that updating with a zone admin user when the zone is set to shared passes
     """
-
     shared_client = shared_zone_test_context.shared_zone_vinyldns_client
     zone = shared_zone_test_context.shared_zone
     group = shared_zone_test_context.shared_record_group
@@ -1814,8 +1749,6 @@ def test_update_owner_group_from_admin_in_shared_zone_passes(shared_zone_test_co
         update_response = shared_client.update_recordset(update, status=202)
         update_rs = shared_client.wait_until_recordset_change_status(update_response, "Complete")["recordSet"]
         assert_that(update_rs["ownerGroupId"], is_(group["id"]))
-
-
     finally:
         if update_rs:
             delete_result = shared_client.delete_recordset(zone["id"], update_rs["id"], status=202)
@@ -1826,7 +1759,6
@@ def test_update_from_unassociated_user_in_shared_zone_passes_when_record_type_is """ Test that updating with a user that does not have write access succeeds in a shared zone if the record type is approved """ - ok_client = shared_zone_test_context.ok_vinyldns_client shared_client = shared_zone_test_context.shared_zone_vinyldns_client zone = shared_zone_test_context.shared_zone @@ -1842,7 +1774,6 @@ def test_update_from_unassociated_user_in_shared_zone_passes_when_record_type_is update["ttl"] = update["ttl"] + 100 update_response = ok_client.update_recordset(update, status=202) update_rs = shared_client.wait_until_recordset_change_status(update_response, "Complete")["recordSet"] - finally: if update_rs: delete_result = shared_client.delete_recordset(zone["id"], update_rs["id"], status=202) @@ -1853,24 +1784,21 @@ def test_update_from_unassociated_user_in_shared_zone_fails(shared_zone_test_con """ Test that updating with a user that does not have write access fails in a shared zone """ - - ok_client = shared_zone_test_context.ok_vinyldns_client + dummy_client = shared_zone_test_context.dummy_vinyldns_client shared_client = shared_zone_test_context.shared_zone_vinyldns_client zone = shared_zone_test_context.shared_zone create_rs = None try: - record_json = create_recordset(zone, "test_shared_unapproved_record_type", "MX", - [{"preference": 3, "exchange": "mx"}]) + record_json = create_recordset(zone, "test_shared_unapproved_record_type", "MX", [{"preference": 3, "exchange": "mx"}]) create_response = shared_client.create_recordset(record_json, status=202) create_rs = shared_client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] assert_that(create_rs, is_not(has_key("ownerGroupId"))) update = create_rs update["ttl"] = update["ttl"] + 100 - error = ok_client.update_recordset(update, status=403) - assert_that(error, is_("User ok does not have access to update test-shared-unapproved-record-type.shared.")) - + error = 
dummy_client.update_recordset(update, status=403) + assert_that(error, is_(f'User dummy does not have access to update test-shared-unapproved-record-type.{zone["name"]}')) finally: if create_rs: delete_result = shared_client.delete_recordset(zone["id"], create_rs["id"], status=202) @@ -1882,7 +1810,6 @@ def test_update_from_acl_for_shared_zone_passes(shared_zone_test_context): """ Test that updating with a user that has an acl passes when the zone is set to shared """ - dummy_client = shared_zone_test_context.dummy_vinyldns_client shared_client = shared_zone_test_context.shared_zone_vinyldns_client acl_rule = generate_acl_rule("Write", userId="dummy") @@ -1901,8 +1828,6 @@ def test_update_from_acl_for_shared_zone_passes(shared_zone_test_context): update_response = dummy_client.update_recordset(update, status=202) update_rs = dummy_client.wait_until_recordset_change_status(update_response, "Complete")["recordSet"] assert_that(update, is_not(has_key("ownerGroupId"))) - - finally: clear_shared_zone_acl_rules(shared_zone_test_context) if update_rs: @@ -1914,7 +1839,6 @@ def test_update_to_no_group_owner_passes(shared_zone_test_context): """ Test that updating to have no record owner group passes """ - shared_record_group = shared_zone_test_context.shared_record_group shared_client = shared_zone_test_context.shared_zone_vinyldns_client zone = shared_zone_test_context.shared_zone @@ -1931,7 +1855,6 @@ def test_update_to_no_group_owner_passes(shared_zone_test_context): update_response = shared_client.update_recordset(update, status=202) update_rs = shared_client.wait_until_recordset_change_status(update_response, "Complete")["recordSet"] assert_that(update_rs, is_not(has_key("ownerGroupId"))) - finally: if update_rs: delete_result = shared_client.delete_recordset(zone["id"], update_rs["id"], status=202) @@ -1942,7 +1865,6 @@ def test_update_to_invalid_record_owner_group_fails(shared_zone_test_context): """ Test that updating to a record owner group that does not exist 
fails """ - shared_record_group = shared_zone_test_context.shared_record_group shared_client = shared_zone_test_context.shared_zone_vinyldns_client zone = shared_zone_test_context.shared_zone @@ -1958,7 +1880,6 @@ def test_update_to_invalid_record_owner_group_fails(shared_zone_test_context): update["ownerGroupId"] = "no-existo" error = shared_client.update_recordset(update, status=422) assert_that(error, is_('Record owner group with id "no-existo" not found')) - finally: if create_rs: delete_result = shared_client.delete_recordset(zone["id"], create_rs["id"], status=202) @@ -1969,7 +1890,6 @@ def test_update_to_group_a_user_is_not_in_fails(shared_zone_test_context): """ Test that updating to a record owner group that the user is not in fails """ - dummy_group = shared_zone_test_context.dummy_group shared_client = shared_zone_test_context.shared_zone_vinyldns_client zone = shared_zone_test_context.shared_zone @@ -1984,7 +1904,6 @@ def test_update_to_group_a_user_is_not_in_fails(shared_zone_test_context): update["ownerGroupId"] = dummy_group["id"] error = shared_client.update_recordset(update, status=422) assert_that(error, is_(f"User not in record owner group with id \"{dummy_group['id']}\"")) - finally: if create_rs: delete_result = shared_client.delete_recordset(zone["id"], create_rs["id"], status=202) @@ -2002,16 +1921,14 @@ def test_update_with_global_acl_rule_only_fails(shared_zone_test_context): create_rs = None try: - record_json = create_recordset(zone, "test-global-acl", "A", [{"address": "1.1.1.1"}], 200, - "shared-zone-group") + record_json = create_recordset(zone, "test-global-acl", "A", [{"address": "1.1.1.1"}], 200, "shared-zone-group") create_response = shared_client.create_recordset(record_json, status=202) create_rs = shared_client.wait_until_recordset_change_status(create_response, "Complete")["recordSet"] update = create_rs update["ttl"] = 400 error = dummy_client.update_recordset(update, status=403) - assert_that(error, is_("User dummy does not 
have access to update test-global-acl.shared.")) - + assert_that(error, is_(f'User dummy does not have access to update test-global-acl.{zone["name"]}')) finally: if create_rs: delete_result = shared_client.delete_recordset(zone["id"], create_rs["id"], status=202) @@ -2023,7 +1940,6 @@ def test_update_ds_success(shared_zone_test_context): """ Test that creating a valid DS record succeeds """ - client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ds_zone record_data_create = [ @@ -2031,8 +1947,7 @@ def test_update_ds_success(shared_zone_test_context): ] record_data_update = [ {"keytag": 60485, "algorithm": 5, "digesttype": 1, "digest": "2BB183AF5F22588179A53B0A98631FAD1A292118"}, - {"keytag": 60485, "algorithm": 5, "digesttype": 2, - "digest": "D4B7D520E7BB5F0F67674A0CCEB1E3E0614B93C4F9E99B8383F6A1E4469DA50A"} + {"keytag": 60485, "algorithm": 5, "digesttype": 2, "digest": "D4B7D520E7BB5F0F67674A0CCEB1E3E0614B93C4F9E99B8383F6A1E4469DA50A"} ] record_json = create_recordset(zone, "dskey", "DS", record_data_create, ttl=3600) result_rs = None @@ -2059,7 +1974,6 @@ def test_update_ds_data_failures(shared_zone_test_context): """ Test that updating a DS record fails with bad hex, digest, algorithm """ - client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ds_zone record_data_create = [ @@ -2102,7 +2016,6 @@ def test_update_ds_bad_ttl(shared_zone_test_context): """ Test that updating a DS record with a TTL that doesn't match the zone NS record TTL fails """ - client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ds_zone record_data_create = [ @@ -2127,7 +2040,6 @@ def test_update_fails_when_payload_and_route_zone_id_does_not_match(shared_zone_ """ Test that a 422 is returned if the zoneId in the body and route do not match """ - client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ok_zone @@ -2143,11 +2055,9 @@ def 
test_update_fails_when_payload_and_route_zone_id_does_not_match(shared_zone_ update["zoneId"] = shared_zone_test_context.dummy_zone["id"] url = urljoin(client.index_url, "/zones/{0}/recordsets/{1}".format(zone["id"], update["id"])) - response, error = client.make_request(url, "PUT", client.headers, json.dumps(update), not_found_ok=True, - status=422) + response, error = client.make_request(url, "PUT", client.headers, json.dumps(update), not_found_ok=True, status=422) assert_that(error, is_("Cannot update RecordSet's zoneId attribute")) - finally: if created: delete_result = client.delete_recordset(zone["id"], created["id"], status=202) @@ -2158,12 +2068,10 @@ def test_update_fails_when_payload_and_actual_zone_id_do_not_match(shared_zone_t """ Test that a 422 is returned if the zoneId in the body and the recordSets actual zoneId do not match """ - client = shared_zone_test_context.ok_vinyldns_client zone = shared_zone_test_context.ok_zone created = None - try: record_json = create_recordset(zone, "test_update_zone_id", "A", [{"address": "1.1.1.1"}]) create_response = client.create_recordset(record_json, status=202) @@ -2175,7 +2083,6 @@ def test_update_fails_when_payload_and_actual_zone_id_do_not_match(shared_zone_t error = client.update_recordset(update, status=422) assert_that(error, is_("Cannot update RecordSet's zone ID.")) - finally: if created: delete_result = client.delete_recordset(zone["id"], created["id"], status=202) diff --git a/modules/api/functional_test/live_tests/shared_zone_test_context.py b/modules/api/functional_test/live_tests/shared_zone_test_context.py index 5422f897c..2aa0e1362 100644 --- a/modules/api/functional_test/live_tests/shared_zone_test_context.py +++ b/modules/api/functional_test/live_tests/shared_zone_test_context.py @@ -72,10 +72,6 @@ class SharedZoneTestContext(object): def requires_review_zone(self) -> Mapping: return self.attempt_retrieve_value("_requires_review_zone") - @property - def non_test_shared_zone(self) -> Mapping: - 
return self._non_test_shared_zone - def __init__(self, partition_id: str): self.partition_id = partition_id self.ok_vinyldns_client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "okAccessKey", "okSecretKey") @@ -113,7 +109,6 @@ class SharedZoneTestContext(object): self._ds_zone = None self._requires_review_zone = None self._shared_zone = None - self._non_test_shared_zone = None self.ip4_10_prefix = None self.ip4_classless_prefix = None @@ -472,31 +467,6 @@ class SharedZoneTestContext(object): }, status=202) self._shared_zone = shared_zone_change["zone"] - # Shared zone - non_test_shared_zone_change = self.support_user_client.create_zone( - { - "name": f"non.test.shared{partition_id}.", - "email": "test@test.com", - "shared": True, - "adminGroupId": self.shared_record_group["id"], - "isTest": False, - "connection": { - "name": "shared.", - "keyName": VinylDNSTestContext.dns_key_name, - "key": VinylDNSTestContext.dns_key, - "algorithm": VinylDNSTestContext.dns_key_algo, - "primaryServer": VinylDNSTestContext.name_server_ip - }, - "transferConnection": { - "name": "shared.", - "keyName": VinylDNSTestContext.dns_key_name, - "key": VinylDNSTestContext.dns_key, - "algorithm": VinylDNSTestContext.dns_key_algo, - "primaryServer": VinylDNSTestContext.name_server_ip - } - }, status=202) - self._non_test_shared_zone = non_test_shared_zone_change["zone"] - # wait until our zones are created self.ok_vinyldns_client.wait_until_zone_active(system_test_zone_change["zone"]["id"]) self.ok_vinyldns_client.wait_until_zone_active(ok_zone_change["zone"]["id"]) @@ -512,13 +482,12 @@ class SharedZoneTestContext(object): self.ok_vinyldns_client.wait_until_zone_active(requires_review_zone_change["zone"]["id"]) self.history_client.wait_until_zone_active(history_zone_change["zone"]["id"]) self.shared_zone_vinyldns_client.wait_until_zone_active(shared_zone_change["zone"]["id"]) - self.shared_zone_vinyldns_client.wait_until_zone_active(non_test_shared_zone_change["zone"]["id"]) # validate 
all in there zones = self.dummy_vinyldns_client.list_zones()["zones"] assert_that(len(zones), is_(2)) zones = self.ok_vinyldns_client.list_zones()["zones"] - assert_that(len(zones), is_(12)) + assert_that(len(zones), is_(11)) # initialize history self.init_history() @@ -533,15 +502,16 @@ class SharedZoneTestContext(object): self.list_zones_client = self.list_zones.client # build the list of records; note: we do need to save the test records - self.list_records_context.build() + self.list_records_context.setup() # build the list of groups self.list_groups_context.build() self.list_batch_summaries_context = ListBatchChangeSummariesTestContext() - except Exception as e: + except Exception: # Cleanup if setup fails self.tear_down() + traceback.print_exc() raise def init_history(self): @@ -650,7 +620,7 @@ class SharedZoneTestContext(object): self.list_records_context.tear_down() if self.list_batch_summaries_context: - self.list_batch_summaries_context.tear_down() + self.list_batch_summaries_context.tear_down(self) if self.list_groups_context: self.list_groups_context.tear_down() @@ -666,7 +636,8 @@ class SharedZoneTestContext(object): for client in self.clients: client.tear_down() - except Exception as e: + except Exception: + traceback.print_exc() raise @staticmethod diff --git a/modules/api/functional_test/live_tests/zones/create_zone_test.py b/modules/api/functional_test/live_tests/zones/create_zone_test.py index 51602bcd5..419410ace 100644 --- a/modules/api/functional_test/live_tests/zones/create_zone_test.py +++ b/modules/api/functional_test/live_tests/zones/create_zone_test.py @@ -1,38 +1,10 @@ import copy +from typing import List, Dict import pytest from utils import * -records_in_dns = [ - {"name": "one-time.", - "type": "SOA", - "records": [{"mname": "172.17.42.1.", - "rname": "admin.test.com.", - "retry": 3600, - "refresh": 10800, - "minimum": 38400, - "expire": 604800, - "serial": 1439234395}]}, - {"name": "one-time.", - "type": "NS", - "records": 
[{"nsdname": "172.17.42.1."}]}, - {"name": "jenkins", - "type": "A", - "records": [{"address": "10.1.1.1"}]}, - {"name": "foo", - "type": "A", - "records": [{"address": "2.2.2.2"}]}, - {"name": "test", - "type": "A", - "records": [{"address": "3.3.3.3"}, {"address": "4.4.4.4"}]}, - {"name": "one-time.", - "type": "A", - "records": [{"address": "5.5.5.5"}]}, - {"name": "already-exists", - "type": "A", - "records": [{"address": "6.6.6.6"}]}] - # Defined in docker bind9 conf file TSIG_KEYS = [ ("vinyldns-sha1.", "0nIhR1zS/nHUg2n0AIIUyJwXUyQ=", "HMAC-SHA1"), @@ -48,7 +20,7 @@ TSIG_KEYS = [ def test_create_zone_with_tsigs(shared_zone_test_context, key_name, key_secret, key_alg): client = shared_zone_test_context.ok_vinyldns_client - zone_name = "one-time" + zone_name = f"one-time{shared_zone_test_context.partition_id}." zone = { "name": zone_name, @@ -70,11 +42,10 @@ def test_create_zone_with_tsigs(shared_zone_test_context, key_name, key_secret, # Check that it was internally stored correctly using GET zone_get = client.get_zone(zone["id"])["zone"] - assert_that(zone_get["name"], is_(zone_name + ".")) + assert_that(zone_get["name"], is_(zone_name)) assert_that("connection" in zone_get) assert_that(zone_get["connection"]["keyName"], is_(key_name)) assert_that(zone_get["connection"]["algorithm"], is_(key_alg)) - finally: if "id" in zone: client.abandon_zones([zone["id"]], status=202) @@ -88,7 +59,8 @@ def test_create_zone_success(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client result_zone = None try: - zone_name = "one-time " + # Include a space in the zone name to verify that it is trimmed and properly formatted + zone_name = f"one-time{shared_zone_test_context.partition_id} " zone = { "name": zone_name, @@ -117,8 +89,7 @@ def test_create_zone_success(shared_zone_test_context): for rs in recordsets: small_rs = dict((k, rs[k]) for k in ["name", "type", "records"]) small_rs["records"] = small_rs["records"] - assert_that(records_in_dns, 
has_item(small_rs)) - + assert_that(retrieve_dns_records(shared_zone_test_context), has_item(small_rs)) finally: if result_zone: client.abandon_zones([result_zone["id"]], status=202) @@ -132,7 +103,7 @@ def test_create_zone_without_transfer_connection_leaves_it_empty(shared_zone_tes client = shared_zone_test_context.ok_vinyldns_client result_zone = None try: - zone_name = "one-time" + zone_name = f"one-time{shared_zone_test_context.partition_id}" zone = { "name": zone_name, @@ -216,7 +187,7 @@ def test_create_zone_with_connection_failure(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client - zone_name = "one-time." + zone_name = f"one-time{shared_zone_test_context.partition_id}." zone = { "name": zone_name, "email": "test@test.com", @@ -259,7 +230,7 @@ def test_create_zone_returns_400_for_invalid_data(shared_zone_test_context): def test_create_zone_no_connection_uses_defaults(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client - zone_name = "one-time" + zone_name = f"one-time{shared_zone_test_context.partition_id}" zone = { "name": zone_name, @@ -284,7 +255,6 @@ def test_create_zone_no_connection_uses_defaults(shared_zone_test_context): assert_that(zone_get["name"], is_(zone_name + ".")) assert_that("connection" not in zone_get) assert_that("transferConnection" not in zone_get) - finally: if "id" in zone: client.abandon_zones([zone["id"]], status=202) @@ -294,7 +264,7 @@ def test_create_zone_no_connection_uses_defaults(shared_zone_test_context): def test_zone_connection_only(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client - zone_name = "one-time" + zone_name = f"one-time{shared_zone_test_context.partition_id}" zone = { "name": zone_name, @@ -344,7 +314,6 @@ def test_zone_connection_only(shared_zone_test_context): assert_that(zone["transferConnection"]["name"], is_(expected_connection["name"])) assert_that(zone["transferConnection"]["keyName"], 
is_(expected_connection["keyName"])) assert_that(zone["transferConnection"]["primaryServer"], is_(expected_connection["primaryServer"])) - finally: if "id" in zone: client.abandon_zones([zone["id"]], status=202) @@ -354,7 +323,7 @@ def test_zone_connection_only(shared_zone_test_context): def test_zone_bad_connection(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client - zone_name = "one-time" + zone_name = f"one-time{shared_zone_test_context.partition_id}" zone = { "name": zone_name, @@ -374,7 +343,7 @@ def test_zone_bad_connection(shared_zone_test_context): def test_zone_bad_transfer_connection(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client - zone_name = "one-time" + zone_name = f"one-time{shared_zone_test_context.partition_id}" zone = { "name": zone_name, @@ -400,7 +369,7 @@ def test_zone_bad_transfer_connection(shared_zone_test_context): def test_zone_transfer_connection(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client - zone_name = "one-time" + zone_name = f"one-time{shared_zone_test_context.partition_id}" zone = { "name": zone_name, @@ -450,7 +419,6 @@ def test_zone_transfer_connection(shared_zone_test_context): assert_that(zone["transferConnection"]["name"], is_(expected_connection["name"])) assert_that(zone["transferConnection"]["keyName"], is_(expected_connection["keyName"])) assert_that(zone["transferConnection"]["primaryServer"], is_(expected_connection["primaryServer"])) - finally: if "id" in zone: client.abandon_zones([zone["id"]], status=202) @@ -462,7 +430,7 @@ def test_user_cannot_create_zone_with_nonmember_admin_group(shared_zone_test_con Test user cannot create a zone with an admin group they are not a member of """ zone = { - "name": "one-time.", + "name": f"one-time{shared_zone_test_context.partition_id}.", "email": "test@test.com", "adminGroupId": shared_zone_test_context.dummy_group["id"], "connection": { @@ -487,7 +455,7 @@ def 
test_user_cannot_create_zone_with_failed_validations(shared_zone_test_contex Test that a user cannot create a zone that has invalid zone data """ zone = { - "name": "invalid-zone.", + "name": f"invalid-zone{shared_zone_test_context.partition_id}.", "email": "test@test.com", "adminGroupId": shared_zone_test_context.ok_group["id"], "connection": { @@ -532,3 +500,40 @@ def test_create_zone_bad_backend_id(shared_zone_test_context): } result = shared_zone_test_context.ok_vinyldns_client.create_zone(zone, status=400) assert_that(result, contains_string("Invalid backendId")) + + +def retrieve_dns_records(shared_zone_test_context) -> List[Dict]: + """ + Returns a representation of what is current configured in the one-time. zone + :param shared_zone_test_context: The test context + :return: An array of recordsets + """ + partition_id = shared_zone_test_context.partition_id + return [ + {"name": f"one-time{partition_id}.", + "type": "SOA", + "records": [{"mname": "172.17.42.1.", + "rname": "admin.test.com.", + "retry": 3600, + "refresh": 10800, + "minimum": 38400, + "expire": 604800, + "serial": 1439234395}]}, + {"name": f"one-time{partition_id}.", + "type": "NS", + "records": [{"nsdname": "172.17.42.1."}]}, + {"name": "jenkins", + "type": "A", + "records": [{"address": "10.1.1.1"}]}, + {"name": "foo", + "type": "A", + "records": [{"address": "2.2.2.2"}]}, + {"name": "test", + "type": "A", + "records": [{"address": "3.3.3.3"}, {"address": "4.4.4.4"}]}, + {"name": f"one-time{partition_id}.", + "type": "A", + "records": [{"address": "5.5.5.5"}]}, + {"name": "already-exists", + "type": "A", + "records": [{"address": "6.6.6.6"}]}] diff --git a/modules/api/functional_test/live_tests/zones/delete_zone_test.py b/modules/api/functional_test/live_tests/zones/delete_zone_test.py index a888c20a7..983d337ac 100644 --- a/modules/api/functional_test/live_tests/zones/delete_zone_test.py +++ b/modules/api/functional_test/live_tests/zones/delete_zone_test.py @@ -1,9 +1,5 @@ import pytest 
-import uuid -from hamcrest import * -from vinyldns_python import VinylDNSClient -from vinyldns_context import VinylDNSTestContext from utils import * @@ -15,7 +11,7 @@ def test_delete_zone_success(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client result_zone = None try: - zone_name = "one-time" + zone_name = f"one-time{shared_zone_test_context.partition_id}" zone = { "name": zone_name, @@ -43,7 +39,6 @@ def test_delete_zone_success(shared_zone_test_context): client.get_zone(result_zone["id"], status=404) result_zone = None - finally: if result_zone: client.abandon_zones([result_zone["id"]], status=202) @@ -57,7 +52,7 @@ def test_delete_zone_twice(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client result_zone = None try: - zone_name = "one-time" + zone_name = f"one-time{shared_zone_test_context.partition_id}" zone = { "name": zone_name, @@ -85,7 +80,6 @@ def test_delete_zone_twice(shared_zone_test_context): client.delete_zone(result_zone["id"], status=404) result_zone = None - finally: if result_zone: client.abandon_zones([result_zone["id"]], status=202) diff --git a/modules/api/functional_test/live_tests/zones/get_zone_test.py b/modules/api/functional_test/live_tests/zones/get_zone_test.py index 7c2f2a246..2e5ba57ff 100644 --- a/modules/api/functional_test/live_tests/zones/get_zone_test.py +++ b/modules/api/functional_test/live_tests/zones/get_zone_test.py @@ -22,12 +22,12 @@ def test_get_zone_shared_by_id_as_owner(shared_zone_test_context): Test get an existing shared zone by id as a zone owner """ client = shared_zone_test_context.shared_zone_vinyldns_client - + group_name = shared_zone_test_context.shared_record_group["name"] result = client.get_zone(shared_zone_test_context.shared_zone["id"], status=200) retrieved = result["zone"] assert_that(retrieved["id"], is_(shared_zone_test_context.shared_zone["id"])) - assert_that(retrieved["adminGroupName"], is_("testSharedZoneGroup")) + 
assert_that(retrieved["adminGroupName"], is_(group_name)) assert_that(retrieved["shared"], is_(True)) assert_that(retrieved["accessLevel"], is_("Delete")) @@ -72,7 +72,6 @@ def test_get_zone_by_id_includes_acl_display_name(shared_zone_test_context): """ Test get an existing zone with acl rules """ - client = shared_zone_test_context.ok_vinyldns_client user_acl_rule = generate_acl_rule("Write", userId="ok", recordTypes=[]) @@ -119,7 +118,7 @@ def test_get_zone_by_name_without_trailing_dot_succeeds(shared_zone_test_context """ client = shared_zone_test_context.ok_vinyldns_client - result = client.get_zone_by_name("system-test", status=200)["zone"] + result = client.get_zone_by_name(shared_zone_test_context.system_test_zone["name"], status=200)["zone"] assert_that(result["id"], is_(shared_zone_test_context.system_test_zone["id"])) assert_that(result["name"], is_(shared_zone_test_context.system_test_zone["name"])) @@ -136,8 +135,8 @@ def test_get_zone_by_name_shared_zone_succeeds(shared_zone_test_context): result = client.get_zone_by_name(shared_zone_test_context.shared_zone["name"], status=200)["zone"] assert_that(result["id"], is_(shared_zone_test_context.shared_zone["id"])) assert_that(result["name"], is_(shared_zone_test_context.shared_zone["name"])) - assert_that(result["adminGroupName"], is_("testSharedZoneGroup")) - assert_that(result["accessLevel"], is_("NoAccess")) + assert_that(result["adminGroupName"], is_(shared_zone_test_context.shared_record_group["name"])) + assert_that(result["accessLevel"], is_("Delete")) def test_get_zone_by_name_succeeds_without_access(shared_zone_test_context): @@ -146,7 +145,7 @@ def test_get_zone_by_name_succeeds_without_access(shared_zone_test_context): """ client = shared_zone_test_context.dummy_vinyldns_client - result = client.get_zone_by_name("system-test", status=200)["zone"] + result = client.get_zone_by_name(shared_zone_test_context.system_test_zone["name"], status=200)["zone"] assert_that(result["id"], 
is_(shared_zone_test_context.system_test_zone["id"])) assert_that(result["name"], is_(shared_zone_test_context.system_test_zone["name"])) assert_that(result["adminGroupName"], is_(shared_zone_test_context.ok_group["name"])) diff --git a/modules/api/functional_test/live_tests/zones/list_zone_changes_test.py b/modules/api/functional_test/live_tests/zones/list_zone_changes_test.py index a55ba6282..ff0345f22 100644 --- a/modules/api/functional_test/live_tests/zones/list_zone_changes_test.py +++ b/modules/api/functional_test/live_tests/zones/list_zone_changes_test.py @@ -1,5 +1,5 @@ import pytest -from hamcrest import * + from utils import * diff --git a/modules/api/functional_test/live_tests/zones/list_zones_test.py b/modules/api/functional_test/live_tests/zones/list_zones_test.py index 048bda293..12493ed63 100644 --- a/modules/api/functional_test/live_tests/zones/list_zones_test.py +++ b/modules/api/functional_test/live_tests/zones/list_zones_test.py @@ -1,8 +1,14 @@ -from hamcrest import * +import pytest + from utils import * -def test_list_zones_success(shared_zone_test_context): +@pytest.fixture(scope="module") +def list_zone_context(shared_zone_test_context): + return shared_zone_test_context.list_zones + + +def test_list_zones_success(list_zone_context, shared_zone_test_context): """ Test that we can retrieve a list of the user's zones """ @@ -10,8 +16,8 @@ def test_list_zones_success(shared_zone_test_context): retrieved = result["zones"] assert_that(retrieved, has_length(5)) - assert_that(retrieved, has_item(has_entry("name", "list-zones-test-searched-1."))) - assert_that(retrieved, has_item(has_entry("adminGroupName", "list-zones-group"))) + assert_that(retrieved, has_item(has_entry("name", list_zone_context.search_zone1["name"]))) + assert_that(retrieved, has_item(has_entry("adminGroupName", list_zone_context.list_zones_group["name"]))) assert_that(retrieved, has_item(has_entry("backendId", "func-test-backend"))) @@ -22,6 +28,7 @@ def 
test_list_zones_max_items_100(shared_zone_test_context): result = shared_zone_test_context.list_zones_client.list_zones(status=200) assert_that(result["maxItems"], is_(100)) + def test_list_zones_ignore_access_default_false(shared_zone_test_context): """ Test that the default ignore access value for a list zones request is false @@ -29,6 +36,7 @@ def test_list_zones_ignore_access_default_false(shared_zone_test_context): result = shared_zone_test_context.list_zones_client.list_zones(status=200) assert_that(result["ignoreAccess"], is_(False)) + def test_list_zones_invalid_max_items_fails(shared_zone_test_context): """ Test that passing in an invalid value for max items fails @@ -44,7 +52,7 @@ def test_list_zones_no_authorization(shared_zone_test_context): shared_zone_test_context.list_zones_client.list_zones(sign_request=False, status=401) -def test_list_zones_no_search_first_page(shared_zone_test_context): +def test_list_zones_no_search_first_page(list_zone_context, shared_zone_test_context): """ Test that the first page of listing zones returns correctly when no name filter is provided """ @@ -52,51 +60,51 @@ def test_list_zones_no_search_first_page(shared_zone_test_context): zones = result["zones"] assert_that(zones, has_length(3)) - assert_that(zones[0]["name"], is_("list-zones-test-searched-1.")) - assert_that(zones[1]["name"], is_("list-zones-test-searched-2.")) - assert_that(zones[2]["name"], is_("list-zones-test-searched-3.")) + assert_that(zones[0]["name"], is_(list_zone_context.search_zone1["name"])) + assert_that(zones[1]["name"], is_(list_zone_context.search_zone2["name"])) + assert_that(zones[2]["name"], is_(list_zone_context.search_zone3["name"])) - assert_that(result["nextId"], is_("list-zones-test-searched-3.")) + assert_that(result["nextId"], is_(list_zone_context.search_zone3["name"])) assert_that(result["maxItems"], is_(3)) assert_that(result, is_not(has_key("startFrom"))) assert_that(result, is_not(has_key("nameFilter"))) -def 
test_list_zones_no_search_second_page(shared_zone_test_context): +def test_list_zones_no_search_second_page(list_zone_context, shared_zone_test_context): """ Test that the second page of listing zones returns correctly when no name filter is provided """ - result = shared_zone_test_context.list_zones_client.list_zones(start_from="list-zones-test-searched-2.", max_items=2, status=200) + result = shared_zone_test_context.list_zones_client.list_zones(start_from=list_zone_context.search_zone2["name"], max_items=2, status=200) zones = result["zones"] assert_that(zones, has_length(2)) - assert_that(zones[0]["name"], is_("list-zones-test-searched-3.")) - assert_that(zones[1]["name"], is_("list-zones-test-unfiltered-1.")) + assert_that(zones[0]["name"], is_(list_zone_context.search_zone3["name"])) + assert_that(zones[1]["name"], is_(list_zone_context.non_search_zone1["name"])) - assert_that(result["nextId"], is_("list-zones-test-unfiltered-1.")) + assert_that(result["nextId"], is_(list_zone_context.non_search_zone1["name"])) assert_that(result["maxItems"], is_(2)) - assert_that(result["startFrom"], is_("list-zones-test-searched-2.")) + assert_that(result["startFrom"], is_(list_zone_context.search_zone2["name"])) assert_that(result, is_not(has_key("nameFilter"))) -def test_list_zones_no_search_last_page(shared_zone_test_context): +def test_list_zones_no_search_last_page(list_zone_context, shared_zone_test_context): """ Test that the last page of listing zones returns correctly when no name filter is provided """ - result = shared_zone_test_context.list_zones_client.list_zones(start_from="list-zones-test-searched-3.", max_items=4, status=200) + result = shared_zone_test_context.list_zones_client.list_zones(start_from=list_zone_context.search_zone3["name"], max_items=4, status=200) zones = result["zones"] assert_that(zones, has_length(2)) - assert_that(zones[0]["name"], is_("list-zones-test-unfiltered-1.")) - assert_that(zones[1]["name"], is_("list-zones-test-unfiltered-2.")) 
+ assert_that(zones[0]["name"], is_(list_zone_context.non_search_zone1["name"])) + assert_that(zones[1]["name"], is_(list_zone_context.non_search_zone2["name"])) assert_that(result, is_not(has_key("nextId"))) assert_that(result["maxItems"], is_(4)) - assert_that(result["startFrom"], is_("list-zones-test-searched-3.")) + assert_that(result["startFrom"], is_(list_zone_context.search_zone3["name"])) assert_that(result, is_not(has_key("nameFilter"))) -def test_list_zones_with_search_first_page(shared_zone_test_context): +def test_list_zones_with_search_first_page(list_zone_context, shared_zone_test_context): """ Test that the first page of listing zones returns correctly when a name filter is provided """ @@ -104,10 +112,10 @@ def test_list_zones_with_search_first_page(shared_zone_test_context): zones = result["zones"] assert_that(zones, has_length(2)) - assert_that(zones[0]["name"], is_("list-zones-test-searched-1.")) - assert_that(zones[1]["name"], is_("list-zones-test-searched-2.")) + assert_that(zones[0]["name"], is_(list_zone_context.search_zone1["name"])) + assert_that(zones[1]["name"], is_(list_zone_context.search_zone2["name"])) - assert_that(result["nextId"], is_("list-zones-test-searched-2.")) + assert_that(result["nextId"], is_(list_zone_context.search_zone2["name"])) assert_that(result["maxItems"], is_(2)) assert_that(result["nameFilter"], is_("*searched*")) assert_that(result, is_not(has_key("startFrom"))) @@ -128,20 +136,24 @@ def test_list_zones_with_no_results(shared_zone_test_context): assert_that(result, is_not(has_key("nextId"))) -def test_list_zones_with_search_last_page(shared_zone_test_context): +def test_list_zones_with_search_last_page(list_zone_context, shared_zone_test_context): """ Test that the second page of listing zones returns correctly when a name filter is provided """ - result = shared_zone_test_context.list_zones_client.list_zones(name_filter="*test-searched-3", start_from="list-zones-test-searched-2.", max_items=2, status=200) + 
result = shared_zone_test_context.list_zones_client.list_zones(name_filter=f"*test-searched-3{shared_zone_test_context.partition_id}", + start_from=list_zone_context.search_zone2["name"], + max_items=2, + status=200) zones = result["zones"] assert_that(zones, has_length(1)) - assert_that(zones[0]["name"], is_("list-zones-test-searched-3.")) + assert_that(zones[0]["name"], is_(list_zone_context.search_zone3["name"])) assert_that(result, is_not(has_key("nextId"))) assert_that(result["maxItems"], is_(2)) - assert_that(result["nameFilter"], is_("*test-searched-3")) - assert_that(result["startFrom"], is_("list-zones-test-searched-2.")) + assert_that(result["nameFilter"], is_(f"*test-searched-3{shared_zone_test_context.partition_id}")) + assert_that(result["startFrom"], is_(list_zone_context.search_zone2["name"])) + def test_list_zones_ignore_access_success(shared_zone_test_context): """ @@ -158,9 +170,9 @@ def test_list_zones_ignore_access_success_with_name_filter(shared_zone_test_cont """ Test that we can retrieve a list of all zones with a name filter """ - result = shared_zone_test_context.list_zones_client.list_zones(name_filter="shared", ignore_access=True, status=200) + result = shared_zone_test_context.list_zones_client.list_zones(name_filter=shared_zone_test_context.shared_zone["name"].rstrip("."), ignore_access=True, status=200) retrieved = result["zones"] assert_that(result["ignoreAccess"], is_(True)) - assert_that(retrieved, has_item(has_entry("name", "shared."))) + assert_that(retrieved, has_item(has_entry("name", shared_zone_test_context.shared_zone["name"]))) assert_that(retrieved, has_item(has_entry("accessLevel", "NoAccess"))) diff --git a/modules/api/functional_test/live_tests/zones/sync_zone_test.py b/modules/api/functional_test/live_tests/zones/sync_zone_test.py index 4cda1fadb..fed066792 100644 --- a/modules/api/functional_test/live_tests/zones/sync_zone_test.py +++ b/modules/api/functional_test/live_tests/zones/sync_zone_test.py @@ -1,5 +1,4 @@ 
import pytest -import pytz from utils import * @@ -8,88 +7,6 @@ API_SYNC_DELAY = 10 MAX_RETRIES = 30 RETRY_WAIT = 0.05 -records_in_dns = [ - {"name": "sync-test.", - "type": "SOA", - "records": [{"mname": "172.17.42.1.", - "rname": "admin.test.com.", - "retry": 3600, - "refresh": 10800, - "minimum": 38400, - "expire": 604800, - "serial": 1439234395}]}, - {"name": "sync-test.", - "type": "NS", - "records": [{"nsdname": "172.17.42.1."}]}, - {"name": "jenkins", - "type": "A", - "records": [{"address": "10.1.1.1"}]}, - {"name": "foo", - "type": "A", - "records": [{"address": "2.2.2.2"}]}, - {"name": "test", - "type": "A", - "records": [{"address": "3.3.3.3"}, {"address": "4.4.4.4"}]}, - {"name": "sync-test.", - "type": "A", - "records": [{"address": "5.5.5.5"}]}, - {"name": "already-exists", - "type": "A", - "records": [{"address": "6.6.6.6"}]}, - {"name": "fqdn", - "type": "A", - "records": [{"address": "7.7.7.7"}]}, - {"name": "_sip._tcp", - "type": "SRV", - "records": [{"priority": 10, "weight": 60, "port": 5060, "target": "foo.sync-test."}]}, - {"name": "existing.dotted", - "type": "A", - "records": [{"address": "9.9.9.9"}]}] - -records_post_update = [ - {"name": "sync-test.", - "type": "SOA", - "records": [{"mname": "172.17.42.1.", - "rname": "admin.test.com.", - "retry": 3600, - "refresh": 10800, - "minimum": 38400, - "expire": 604800, - "serial": 0}]}, - {"name": "sync-test.", - "type": "NS", - "records": [{"nsdname": "172.17.42.1."}]}, - {"name": "foo", - "type": "A", - "records": [{"address": "1.2.3.4"}]}, - {"name": "test", - "type": "A", - "records": [{"address": "3.3.3.3"}, {"address": "4.4.4.4"}]}, - {"name": "sync-test.", - "type": "A", - "records": [{"address": "5.5.5.5"}]}, - {"name": "already-exists", - "type": "A", - "records": [{"address": "6.6.6.6"}]}, - {"name": "newrs", - "type": "A", - "records": [{"address": "2.3.4.5"}]}, - {"name": "fqdn", - "type": "A", - "records": [{"address": "7.7.7.7"}]}, - {"name": "_sip._tcp", - "type": "SRV", - 
"records": [{"priority": 10, "weight": 60, "port": 5060, "target": "foo.sync-test."}]}, - {"name": "existing.dotted", - "type": "A", - "records": [{"address": "9.9.9.9"}]}, - {"name": "dott.ed", - "type": "A", - "records": [{"address": "6.7.8.9"}]}, - {"name": "dott.ed-two", - "type": "A", - "records": [{"address": "6.7.8.9"}]}] - @pytest.mark.skip_production def test_sync_zone_success(shared_zone_test_context): @@ -97,7 +14,7 @@ def test_sync_zone_success(shared_zone_test_context): Test syncing a zone """ client = shared_zone_test_context.ok_vinyldns_client - zone_name = "sync-test" + zone_name = f"sync-test{shared_zone_test_context.partition_id}" updated_rs_id = None check_rs = None @@ -133,14 +50,15 @@ def test_sync_zone_success(shared_zone_test_context): # Confirm that the recordsets in DNS have been saved in vinyldns recordsets = client.list_recordsets_by_zone(zone["id"])["recordSets"] - assert_that(len(recordsets), is_(10)) + records_in_dns = build_records_in_dns(shared_zone_test_context) + assert_that(len(recordsets), is_(len(records_in_dns))) for rs in recordsets: if rs["name"] == "foo": # get the ID for recordset with name "foo" updated_rs_id = rs["id"] small_rs = dict((k, rs[k]) for k in ["name", "type", "records"]) if small_rs["type"] == "SOA": - assert_that(small_rs["name"], is_("sync-test.")) + assert_that(small_rs["name"], is_(f"{zone_name}.")) else: assert_that(records_in_dns, has_item(small_rs)) @@ -182,7 +100,8 @@ def test_sync_zone_success(shared_zone_test_context): # confirm that the updated recordsets in DNS have been saved in vinyldns recordsets = client.list_recordsets_by_zone(zone["id"])["recordSets"] - assert_that(len(recordsets), is_(12)) + records_post_update = build_records_post_update(shared_zone_test_context) + assert_that(len(recordsets), is_(len(records_post_update))) for rs in recordsets: small_rs = dict((k, rs[k]) for k in ["name", "type", "records"]) small_rs["records"] = small_rs["records"] @@ -228,7 +147,6 @@ def 
test_sync_zone_success(shared_zone_test_context): good_update["name"] = "example-dotted" change = client.update_recordset(good_update, status=202) client.wait_until_recordset_change_status(change, "Complete") - finally: # reset the ownerGroupId for foo record if check_rs: @@ -243,3 +161,90 @@ def test_sync_zone_success(shared_zone_test_context): dns_delete(zone, "dott.ed", "A") dns_delete(zone, "dott.ed-two", "A") client.abandon_zones([zone["id"]], status=202) + +def build_records_in_dns(shared_zone_test_context): + partition_id = shared_zone_test_context.partition_id + return [ + {"name": f"sync-test{partition_id}.", + "type": "SOA", + "records": [{"mname": "172.17.42.1.", + "rname": "admin.test.com.", + "retry": 3600, + "refresh": 10800, + "minimum": 38400, + "expire": 604800, + "serial": 1439234395}]}, + {"name": f"sync-test{partition_id}.", + "type": "NS", + "records": [{"nsdname": "172.17.42.1."}]}, + {"name": "jenkins", + "type": "A", + "records": [{"address": "10.1.1.1"}]}, + {"name": "foo", + "type": "A", + "records": [{"address": "2.2.2.2"}]}, + {"name": "test", + "type": "A", + "records": [{"address": "3.3.3.3"}, {"address": "4.4.4.4"}]}, + {"name": f"sync-test{partition_id}.", + "type": "A", + "records": [{"address": "5.5.5.5"}]}, + {"name": "already-exists", + "type": "A", + "records": [{"address": "6.6.6.6"}]}, + {"name": "fqdn", + "type": "A", + "records": [{"address": "7.7.7.7"}]}, + {"name": "_sip._tcp", + "type": "SRV", + "records": [{"priority": 10, "weight": 60, "port": 5060, "target": "foo.sync-test."}]}, + {"name": "existing.dotted", + "type": "A", + "records": [{"address": "9.9.9.9"}]}] + + +def build_records_post_update(shared_zone_test_context): + partition_id = shared_zone_test_context.partition_id + return [ + {"name": f"sync-test{partition_id}.", + "type": "SOA", + "records": [{"mname": "172.17.42.1.", + "rname": "admin.test.com.", + "retry": 3600, + "refresh": 10800, + "minimum": 38400, + "expire": 604800, + "serial": 0}]}, + {"name": 
f"sync-test{partition_id}.", + "type": "NS", + "records": [{"nsdname": "172.17.42.1."}]}, + {"name": "foo", + "type": "A", + "records": [{"address": "1.2.3.4"}]}, + {"name": "test", + "type": "A", + "records": [{"address": "3.3.3.3"}, {"address": "4.4.4.4"}]}, + {"name": f"sync-test{partition_id}.", + "type": "A", + "records": [{"address": "5.5.5.5"}]}, + {"name": "already-exists", + "type": "A", + "records": [{"address": "6.6.6.6"}]}, + {"name": "newrs", + "type": "A", + "records": [{"address": "2.3.4.5"}]}, + {"name": "fqdn", + "type": "A", + "records": [{"address": "7.7.7.7"}]}, + {"name": "_sip._tcp", + "type": "SRV", + "records": [{"priority": 10, "weight": 60, "port": 5060, "target": "foo.sync-test."}]}, + {"name": "existing.dotted", + "type": "A", + "records": [{"address": "9.9.9.9"}]}, + {"name": "dott.ed", + "type": "A", + "records": [{"address": "6.7.8.9"}]}, + {"name": "dott.ed-two", + "type": "A", + "records": [{"address": "6.7.8.9"}]}] diff --git a/modules/api/functional_test/live_tests/zones/update_zone_test.py b/modules/api/functional_test/live_tests/zones/update_zone_test.py index e762506bb..e869fb120 100644 --- a/modules/api/functional_test/live_tests/zones/update_zone_test.py +++ b/modules/api/functional_test/live_tests/zones/update_zone_test.py @@ -14,7 +14,7 @@ def test_update_zone_success(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client result_zone = None try: - zone_name = "one-time" + zone_name = f"one-time{shared_zone_test_context.partition_id}" acl_rule = { "accessLevel": "Read", @@ -62,7 +62,6 @@ def test_update_zone_success(shared_zone_test_context): acl = uz["acl"] verify_acl_rule_is_present_once(acl_rule, acl) - finally: if result_zone: client.abandon_zones([result_zone["id"]], status=202) @@ -112,11 +111,10 @@ def test_update_missing_zone_data(shared_zone_test_context): """ Test that updating a zone without providing necessary data returns errors and fails the update """ - client = 
shared_zone_test_context.ok_vinyldns_client result_zone = None try: - zone_name = "one-time." + zone_name = f"one-time{shared_zone_test_context.partition_id}." zone = { "name": zone_name, @@ -153,7 +151,6 @@ def test_update_missing_zone_data(shared_zone_test_context): # Check that the failed update didn't go through zone_get = client.get_zone(result_zone["id"])["zone"] assert_that(zone_get["name"], is_(zone_name)) - finally: if result_zone: client.abandon_zones([result_zone["id"]], status=202) @@ -167,7 +164,7 @@ def test_update_invalid_zone_data(shared_zone_test_context): client = shared_zone_test_context.ok_vinyldns_client result_zone = None try: - zone_name = "one-time." + zone_name = f"one-time{shared_zone_test_context.partition_id}." zone = { "name": zone_name, @@ -203,7 +200,6 @@ def test_update_invalid_zone_data(shared_zone_test_context): # Check that the failed update didn't go through zone_get = client.get_zone(result_zone["id"])["zone"] assert_that(zone_get["name"], is_(zone_name)) - finally: if result_zone: client.abandon_zones([result_zone["id"]], status=202) @@ -216,7 +212,7 @@ def test_update_zone_returns_404_if_zone_not_found(shared_zone_test_context): """ client = shared_zone_test_context.ok_vinyldns_client zone = { - "name": "one-time.", + "name": f"one-time{shared_zone_test_context.partition_id}.", "email": "test@test.com", "id": "nothere", "connection": { @@ -477,7 +473,7 @@ def test_delete_acl_group_rule_success(shared_zone_test_context): # delete the rule result = client.delete_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone["id"], acl_rule, - status=202) + status=202) # make sure that our acl is not on the zone zone = client.get_zone(result["zone"]["id"])["zone"] @@ -510,7 +506,7 @@ def test_delete_acl_user_rule_success(shared_zone_test_context): # delete the rule result = client.delete_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone["id"], acl_rule, - status=202) + status=202) # make sure that our acl is 
not on the zone zone = client.get_zone(result["zone"]["id"])["zone"] @@ -533,7 +529,7 @@ def test_delete_non_existent_acl_rule_success(shared_zone_test_context): } # delete the rule result = client.delete_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone["id"], acl_rule, - status=202) + status=202) # make sure that our acl is not on the zone zone = client.get_zone(result["zone"]["id"])["zone"] @@ -579,7 +575,6 @@ def test_delete_acl_removes_permissions(shared_zone_test_context): Test that a user (who previously had permissions to view a zone via acl rules) can still view the zone once the acl rule is deleted """ - ok_client = shared_zone_test_context.ok_vinyldns_client # ok adds and deletes acl rule dummy_client = shared_zone_test_context.dummy_vinyldns_client # dummy should not be able to see ok_zone once acl rule is deleted ok_zone = ok_client.get_zone(shared_zone_test_context.ok_zone["id"])["zone"] @@ -714,7 +709,7 @@ def test_user_can_update_zone_to_another_admin_group(shared_zone_test_context): try: result = client.create_zone( { - "name": "one-time.", + "name": f"one-time{shared_zone_test_context.partition_id}.", "email": "test@test.com", "adminGroupId": shared_zone_test_context.dummy_group["id"], "connection": { @@ -839,21 +834,6 @@ def test_normal_user_cannot_update_shared_zone_flag(shared_zone_test_context): error = shared_zone_test_context.ok_vinyldns_client.update_zone(zone_update, status=403) assert_that(error, contains_string("Not authorized to update zone shared status from false to true.")) - -def test_toggle_test_flag(shared_zone_test_context): - """ - Test the isTest flag is ignored in update requests - """ - client = shared_zone_test_context.shared_zone_vinyldns_client - zone_update = copy.deepcopy(shared_zone_test_context.non_test_shared_zone) - zone_update["isTest"] = True - - change = client.update_zone(zone_update, status=202) - client.wait_until_zone_change_status_synced(change) - - assert_that(change["zone"]["isTest"], 
is_(False)) - - @pytest.mark.serial def test_update_connection_info_success(shared_zone_test_context): """ @@ -892,6 +872,7 @@ def test_update_connection_info_success(shared_zone_test_context): delete_result = client.delete_recordset(test_rs["zoneId"], test_rs["id"], status=202) client.wait_until_recordset_change_status(delete_result, "Complete") + @pytest.mark.serial def test_update_connection_info_invalid_backendid(shared_zone_test_context): """ diff --git a/modules/api/functional_test/perf_tests/uat_sync_test.py b/modules/api/functional_test/perf_tests/uat_sync_test.py deleted file mode 100644 index 446685d73..000000000 --- a/modules/api/functional_test/perf_tests/uat_sync_test.py +++ /dev/null @@ -1,64 +0,0 @@ -import time - -from hamcrest import * - -from vinyldns_context import VinylDNSTestContext -from vinyldns_python import VinylDNSClient - - -def test_sync_zone_success(): - """ - Test syncing a zone - """ - with VinylDNSClient(VinylDNSTestContext.vinyldns_url, "okAccessKey", "okSecretKey") as client: - zone_name = "small" - zones = client.list_zones()["zones"] - zone = [z for z in zones if z["name"] == zone_name + "."] - - last_latest_sync = [] - new = True - if zone: - zone = zone[0] - last_latest_sync = zone["latestSync"] - new = False - else: - # create zone if it doesnt exist - zone = { - "name": zone_name, - "email": "test@test.com", - "connection": { - "name": "vinyldns.", - "keyName": VinylDNSTestContext.dns_key_name, - "key": VinylDNSTestContext.dns_key, - "primaryServer": VinylDNSTestContext.name_server_ip - }, - "transferConnection": { - "name": "vinyldns.", - "keyName": VinylDNSTestContext.dns_key_name, - "key": VinylDNSTestContext.dns_key, - "primaryServer": VinylDNSTestContext.name_server_ip - } - } - zone_change = client.create_zone(zone, status=202) - zone = zone_change["zone"] - client.wait_until_zone_active(zone_change["zone"]["id"]) - - zone_id = zone["id"] - - # run sync - client.sync_zone(zone_id, status=202) - - # brief wait for zone 
status change. Can't use getZoneHistory here to check on the changeset itself, - # the action times out (presumably also querying the same record change table that the sync itself - # is interacting with) - time.sleep(0.5) - client.wait_until_zone_status(zone_id, "Active") - - # confirm zone has been updated - get_result = client.get_zone(zone_id) - synced_zone = get_result["zone"] - latest_sync = synced_zone["latestSync"] - assert_that(synced_zone["updated"], is_not(none())) - assert_that(latest_sync, is_not(none())) - if not new: - assert_that(latest_sync, is_not(last_latest_sync)) diff --git a/modules/api/functional_test/run.sh b/modules/api/functional_test/run.sh index 0616da2a3..f014b523b 100755 --- a/modules/api/functional_test/run.sh +++ b/modules/api/functional_test/run.sh @@ -9,5 +9,4 @@ if [ "$1" == "--update" ]; then fi PARAMS=("$@") -./pytest.sh "${UPDATE_DEPS}" --suppress-no-test-exit-code -v live_tests -m "serial" --teardown=False "${PARAMS[@]}" -./pytest.sh --suppress-no-test-exit-code -v live_tests -n 2 -m "not serial" --teardown=True "${PARAMS[@]}" +./pytest.sh "${UPDATE_DEPS}" --suppress-no-test-exit-code -v live_tests "${PARAMS[@]}" diff --git a/modules/api/functional_test/utils.py b/modules/api/functional_test/utils.py index 4e7f747ad..12b425bd1 100644 --- a/modules/api/functional_test/utils.py +++ b/modules/api/functional_test/utils.py @@ -1,4 +1,5 @@ import json +import traceback import uuid import dns.query @@ -318,35 +319,35 @@ def remove_classless_acl_rules(test_context, rules): def clear_ok_acl_rules(test_context): zone = test_context.ok_zone zone["acl"]["rules"] = [] - update_change = test_context.ok_vinyldns_client.update_zone(zone, status=202) + update_change = test_context.ok_vinyldns_client.update_zone(zone, status=(202, 404)) test_context.ok_vinyldns_client.wait_until_zone_change_status_synced(update_change) def clear_shared_zone_acl_rules(test_context): zone = test_context.shared_zone zone["acl"]["rules"] = [] - update_change = 
test_context.shared_zone_vinyldns_client.update_zone(zone, status=202) + update_change = test_context.shared_zone_vinyldns_client.update_zone(zone, status=(202, 404)) test_context.shared_zone_vinyldns_client.wait_until_zone_change_status_synced(update_change) def clear_ip4_acl_rules(test_context): zone = test_context.ip4_reverse_zone zone["acl"]["rules"] = [] - update_change = test_context.ok_vinyldns_client.update_zone(zone, status=202) + update_change = test_context.ok_vinyldns_client.update_zone(zone, status=(202, 404)) test_context.ok_vinyldns_client.wait_until_zone_change_status_synced(update_change) def clear_ip6_acl_rules(test_context): zone = test_context.ip6_reverse_zone zone["acl"]["rules"] = [] - update_change = test_context.ok_vinyldns_client.update_zone(zone, status=202) + update_change = test_context.ok_vinyldns_client.update_zone(zone, status=(202, 404)) test_context.ok_vinyldns_client.wait_until_zone_change_status_synced(update_change) def clear_classless_acl_rules(test_context): zone = test_context.classless_zone_delegation_zone zone["acl"]["rules"] = [] - update_change = test_context.ok_vinyldns_client.update_zone(zone, status=202) + update_change = test_context.ok_vinyldns_client.update_zone(zone, status=(202, 404)) test_context.ok_vinyldns_client.wait_until_zone_change_status_synced(update_change) @@ -535,12 +536,13 @@ def clear_recordset_list(to_delete, client): try: delete_result = client.delete_recordset(result_rs["zone"]["id"], result_rs["recordSet"]["id"], status=202) delete_changes.append(delete_result) - except: - pass + except Exception: + traceback.print_exc() for change in delete_changes: try: client.wait_until_recordset_change_status(change, "Complete") - except: + except Exception: + traceback.print_exc() pass @@ -550,12 +552,14 @@ def clear_zoneid_rsid_tuple_list(to_delete, client): try: delete_result = client.delete_recordset(tup[0], tup[1], status=202) delete_changes.append(delete_result) - except: + except Exception: + 
traceback.print_exc() pass for change in delete_changes: try: client.wait_until_recordset_change_status(change, "Complete") - except: + except Exception: + traceback.print_exc() pass diff --git a/modules/api/functional_test/vinyldns_python.py b/modules/api/functional_test/vinyldns_python.py index 0faf8fd6e..41475012d 100644 --- a/modules/api/functional_test/vinyldns_python.py +++ b/modules/api/functional_test/vinyldns_python.py @@ -1,6 +1,8 @@ import json import logging import time +import traceback +from json import JSONDecodeError from typing import Iterable from urllib.parse import urlparse, urlsplit, parse_qs, urljoin @@ -112,8 +114,11 @@ class VinylDNSClient(object): try: return response.status_code, response.json() - except: + except JSONDecodeError: return response.status_code, response.text + except Exception: + traceback.print_exc() + raise def ping(self): """ @@ -169,7 +174,6 @@ class VinylDNSClient(object): :param group: A group dictionary that can be serialized to json :return: the content of the response, which should be a group json """ - url = urljoin(self.index_url, "/groups") response, data = self.make_request(url, "POST", self.headers, json.dumps(group), **kwargs) @@ -181,7 +185,6 @@ class VinylDNSClient(object): :param group_id: Id of the group to get :return: the group json """ - url = urljoin(self.index_url, "/groups/" + group_id) response, data = self.make_request(url, "GET", self.headers, **kwargs) @@ -205,7 +208,6 @@ class VinylDNSClient(object): :param group: A group dictionary that can be serialized to json :return: the content of the response, which should be a group json """ - url = urljoin(self.index_url, "/groups/{0}".format(group_id)) response, data = self.make_request(url, "PUT", self.headers, json.dumps(group), not_found_ok=True, **kwargs) @@ -220,7 +222,6 @@ class VinylDNSClient(object): :param ignore_access: determines if groups should be retrieved based on requester's membership :return: the content of the response """ - args = 
[] if group_name_filter: args.append("groupNameFilter={0}".format(group_name_filter)) @@ -242,7 +243,6 @@ class VinylDNSClient(object): :param group_name_filter: only returns groups whose names contain filter string :return: the content of the response """ - groups = [] args = [] if group_name_filter: @@ -329,7 +329,6 @@ class VinylDNSClient(object): :param zone: the zone to be created :return: the content of the response """ - url = urljoin(self.index_url, "/zones") response, data = self.make_request(url, "POST", self.headers, json.dumps(zone), **kwargs) @@ -695,6 +694,10 @@ class VinylDNSClient(object): """ Waits until the zone change status is Synced """ + # We can get a zone_change parameter from a 404 where the change is not a dict + if type(zone_change) == str: + return + latest_change = zone_change retries = MAX_RETRIES @@ -787,11 +790,8 @@ class VinylDNSClient(object): while change["status"] != expected_status and retries > 0: time.sleep(RETRY_WAIT) retries -= 1 - latest_change = self.get_recordset_change(change["recordSet"]["zoneId"], change["recordSet"]["id"], - change["id"], status=(200, 404)) - if "Unable to find record set change" in latest_change: - change = change - else: + latest_change = self.get_recordset_change(change["recordSet"]["zoneId"], change["recordSet"]["id"], change["id"], status=(200, 404)) + if type(latest_change) != str: change = latest_change if change["status"] != expected_status: From 0a1b53319251cfe377500390f4ad49bfe6acb882 Mon Sep 17 00:00:00 2001 From: "Emerle, Ryan" Date: Fri, 8 Oct 2021 15:52:09 -0400 Subject: [PATCH 13/82] WIP - Functional Test Updates - Update `dnsjava` library - Add support for H2 database - Update functional tests to support parallel runs - Remove the ability to specify number of processes for functional tests - always 4 now - Add `Makefile` and `Dockerfile` in `functional_test` to make it easier to run tests without spinning up multiple containers --- build.sbt | 59 ++-- docker/api/run.sh | 2 +- 
docker/bind9/README.md | 23 ++ docker/bind9/etc/named.conf.local | 11 +- modules/api/functional_test/Dockerfile | 33 ++ .../functional_test/Dockerfile.dockerignore | 16 + modules/api/functional_test/Makefile | 25 ++ modules/api/functional_test/conftest.py | 43 ++- modules/api/functional_test/docker.conf | 302 ++++++++++++++++++ .../live_tests/batch/get_batch_change_test.py | 2 +- .../batch/list_batch_change_summaries_test.py | 18 +- .../functional_test/live_tests/conftest.py | 2 +- .../list_batch_summaries_test_context.py | 53 +-- .../live_tests/list_groups_test_context.py | 13 +- .../list_recordsets_test_context.py | 10 +- .../live_tests/list_zones_test_context.py | 12 +- .../membership/create_group_test.py | 2 +- .../membership/delete_group_test.py | 3 +- .../membership/list_my_groups_test.py | 25 +- .../live_tests/production_verify_test.py | 6 - .../recordsets/create_recordset_test.py | 12 - .../recordsets/delete_recordset_test.py | 14 - .../recordsets/update_recordset_test.py | 19 +- .../live_tests/shared_zone_test_context.py | 186 +++-------- .../live_tests/zones/create_zone_test.py | 2 - .../live_tests/zones/list_zones_test.py | 28 +- .../live_tests/zones/update_zone_test.py | 3 - modules/api/functional_test/pytest.ini | 1 - modules/api/functional_test/requirements.txt | 6 +- modules/api/functional_test/run.sh | 7 +- modules/api/functional_test/utils.py | 66 ++-- .../api/functional_test/vinyldns_python.py | 35 +- .../vinyldns/api/backend/dns/DnsBackend.scala | 5 +- .../api/backend/dns/DnsConversions.scala | 4 +- .../main/scala/vinyldns/core/Messages.scala | 18 +- modules/mysql/src/main/resources/test/ddl.sql | 238 ++++++++++++++ .../scala/vinyldns/mysql/MySqlConnector.scala | 53 +-- project/Dependencies.scala | 9 +- 38 files changed, 981 insertions(+), 385 deletions(-) create mode 100644 docker/bind9/README.md create mode 100644 modules/api/functional_test/Dockerfile create mode 100644 modules/api/functional_test/Dockerfile.dockerignore create mode 100644 
modules/api/functional_test/Makefile create mode 100644 modules/api/functional_test/docker.conf create mode 100644 modules/mysql/src/main/resources/test/ddl.sql diff --git a/build.sbt b/build.sbt index 3da85a606..8c4b891d3 100644 --- a/build.sbt +++ b/build.sbt @@ -1,12 +1,11 @@ -import Resolvers._ -import Dependencies._ import CompilerOptions._ +import Dependencies._ +import Resolvers._ import com.typesafe.sbt.packager.docker._ -import scoverage.ScoverageKeys.{coverageFailOnMinimum, coverageMinimum} -import org.scalafmt.sbt.ScalafmtPlugin._ import microsites._ -import ReleaseTransformations._ -import sbtrelease.Version +import org.scalafmt.sbt.ScalafmtPlugin._ +import sbtrelease.ReleasePlugin.autoImport.ReleaseTransformations._ +import scoverage.ScoverageKeys.{coverageFailOnMinimum, coverageMinimum} import scala.util.Try @@ -22,16 +21,19 @@ lazy val sharedSettings = Seq( startYear := Some(2018), licenses += ("Apache-2.0", new URL("https://www.apache.org/licenses/LICENSE-2.0.txt")), scalacOptions ++= scalacOptionsByV(scalaVersion.value), - scalacOptions in (Compile, doc) += "-no-link-warnings", + scalacOptions in(Compile, doc) += "-no-link-warnings", // Use wart remover to eliminate code badness - wartremoverErrors ++= Seq( - Wart.EitherProjectionPartial, - Wart.IsInstanceOf, - Wart.JavaConversions, - Wart.Return, - Wart.LeakingSealed, - Wart.ExplicitImplicitTypes - ), + wartremoverErrors := ( + if (getPropertyFlagOrDefault("build.lintOnCompile", true)) + Seq(Wart.EitherProjectionPartial, + Wart.IsInstanceOf, + Wart.JavaConversions, + Wart.Return, + Wart.LeakingSealed, + Wart.ExplicitImplicitTypes + ) + else Seq.empty + ), // scala format scalafmtOnCompile := getPropertyFlagOrDefault("build.scalafmtOnCompile", true), @@ -72,7 +74,7 @@ lazy val apiAssemblySettings = Seq( mainClass in reStart := Some("vinyldns.api.Boot"), // there are some odd things from dnsjava including update.java and dig.java that we don't use assemblyMergeStrategy in assembly := { - case 
"update.class"| "dig.class" => MergeStrategy.discard + case "update.class" | "dig.class" => MergeStrategy.discard case PathList("scala", "tools", "nsc", "doc", "html", "resource", "lib", "index.js") => MergeStrategy.discard case PathList("scala", "tools", "nsc", "doc", "html", "resource", "lib", "template.js") => MergeStrategy.discard case x => @@ -158,11 +160,11 @@ lazy val portalPublishSettings = Seq( publishLocal := (publishLocal in Docker).value, publish := (publish in Docker).value, // for sbt-native-packager (docker) to exclude local.conf - mappings in Universal ~= ( _.filterNot { + mappings in Universal ~= (_.filterNot { case (file, _) => file.getName.equals("local.conf") }), // for local.conf to be excluded in jars - mappings in (Compile, packageBin) ~= ( _.filterNot { + mappings in(Compile, packageBin) ~= (_.filterNot { case (file, _) => file.getName.equals("local.conf") }) ) @@ -216,8 +218,6 @@ lazy val coreBuildSettings = Seq( // to write a crypto plugin so that we fall back to a noarg constructor scalacOptions ++= scalacOptionsByV(scalaVersion.value).filterNot(_ == "-Ywarn-unused:params") ) ++ pbSettings - -import xerial.sbt.Sonatype._ lazy val corePublishSettings = Seq( publishMavenStyle := true, publishArtifact in Test := false, @@ -232,13 +232,6 @@ lazy val corePublishSettings = Seq( "scm:git@github.com:vinyldns/vinyldns.git" ) ), - developers := List( - Developer(id="pauljamescleary", name="Paul James Cleary", email="pauljamescleary@gmail.com", url=url("https://github.com/pauljamescleary")), - Developer(id="rebstar6", name="Rebecca Star", email="rebstar6@gmail.com", url=url("https://github.com/rebstar6")), - Developer(id="nimaeskandary", name="Nima Eskandary", email="nimaesk1@gmail.com", url=url("https://github.com/nimaeskandary")), - Developer(id="mitruly", name="Michael Ly", email="michaeltrulyng@gmail.com", url=url("https://github.com/mitruly")), - Developer(id="britneywright", name="Britney Wright", email="blw06g@gmail.com", 
url=url("https://github.com/britneywright")), - ), sonatypeProfileName := "io.vinyldns" ) @@ -397,9 +390,9 @@ lazy val setSonatypeReleaseSettings = ReleaseStep(action = oldState => { // create sonatypeReleaseCommand with releaseSonatype step val sonatypeCommand = Command.command("sonatypeReleaseCommand") { "project core" :: - "publish" :: - "sonatypeRelease" :: - _ + "publish" :: + "sonatypeRelease" :: + _ } newState.copy(definedCommands = newState.definedCommands :+ sonatypeCommand) @@ -428,7 +421,7 @@ lazy val initReleaseStage = Seq[ReleaseStep]( setSonatypeReleaseSettings ) -lazy val finalReleaseStage = Seq[ReleaseStep] ( +lazy val finalReleaseStage = Seq[ReleaseStep]( releaseStepCommand("project root"), // use version.sbt file from root commitReleaseVersion, setNextVersion, @@ -440,8 +433,8 @@ def getPropertyFlagOrDefault(name: String, value: Boolean): Boolean = releaseProcess := initReleaseStage ++ - sonatypePublishStage ++ - finalReleaseStage + sonatypePublishStage ++ + finalReleaseStage // Let's do things in parallel! addCommandAlias("validate", "; root/clean; " + diff --git a/docker/api/run.sh b/docker/api/run.sh index 830e8bae5..c9985fd65 100755 --- a/docker/api/run.sh +++ b/docker/api/run.sh @@ -47,5 +47,5 @@ done echo "Starting up Vinyl..." sleep 2 -java -Djava.net.preferIPv4Stack=true -Dconfig.file=/app/docker.conf -Dakka.loglevel=INFO -Dlogback.configurationFile=test/logback.xml -jar /app/vinyldns-server.jar vinyldns.api.Boot +java -Djava.net.preferIPv4Stack=true -Dconfig.file=/app/docker.conf -Dakka.loglevel=INFO -Dlogback.configurationFile=/app/logback.xml -jar /app/vinyldns-server.jar vinyldns.api.Boot diff --git a/docker/bind9/README.md b/docker/bind9/README.md new file mode 100644 index 000000000..ca9bbdd98 --- /dev/null +++ b/docker/bind9/README.md @@ -0,0 +1,23 @@ +## Bind Test Configuration + +This folder contains test configuration for BIND zones. 
The zones are partitioned into four distinct partitions to allow
+for four parallel testing threads that won't interfere with one another.
+
+### Layout
+
+| Directory | Detail |
+|:---|:---|
+| `etc/` | Contains zone configurations separated by partition |
+| `etc/_template` | Contains the template file for creating the partitioned `conf` files. Currently this is just a find and replace operation - finding `{placeholder}` and replacing it with the desired value. |
+| `zones/` | Contains zone definitions separated by partition |
+| `zones/_template` | Contains the template file for creating the partitioned zone files. Currently this is just a find and replace operation - finding `{placeholder}` and replacing it with the desired value. |
+
+### Target Directories
+
+When used in a container, or to run `named`, the files in this directory should be copied to the following directories:
+
+| Directory | Target |
+|:---|:---|
+| `etc/named.conf.local` | `/etc/bind/` |
+| `etc/named.partition*.conf` | `/var/bind/config/` |
+| `zones/` | `/var/bind/` |
diff --git a/docker/bind9/etc/named.conf.local b/docker/bind9/etc/named.conf.local
index 37dab1f8f..22ba7a61a 100755
--- a/docker/bind9/etc/named.conf.local
+++ b/docker/bind9/etc/named.conf.local
@@ -29,10 +29,7 @@ key "vinyldns-sha512." 
{ secret "xfKA0DYb88tiUGND+cWddwUg3/SugYSsdvCfBOJ1jr8MEdgbVRyrlVDEXLsfTUGorQ3ShENdymw2yw+rTr+lwA=="; }; -// Consider adding the 1918 zones here, if they are not used in your organization -//include "/etc/bind/zones.rfc1918"; - -include "/var/cache/bind/config/named.partition1.conf"; -include "/var/cache/bind/config/named.partition2.conf"; -include "/var/cache/bind/config/named.partition3.conf"; -include "/var/cache/bind/config/named.partition4.conf"; +include "/var/bind/config/named.partition1.conf"; +include "/var/bind/config/named.partition2.conf"; +include "/var/bind/config/named.partition3.conf"; +include "/var/bind/config/named.partition4.conf"; diff --git a/modules/api/functional_test/Dockerfile b/modules/api/functional_test/Dockerfile new file mode 100644 index 000000000..93a100a00 --- /dev/null +++ b/modules/api/functional_test/Dockerfile @@ -0,0 +1,33 @@ +# Build VinylDNS API if the JAR doesn't already exist +FROM vinyldns/build:base-api as vinyldns-api +COPY modules/api/functional_test/docker.conf modules/api/functional_test/vinyldns*.jar /opt/vinyldns/ +COPY . /build/ +WORKDIR /build + +## Run the build if we don't already have a vinyldns.jar +RUN if [ ! 
-f /opt/vinyldns/vinyldns.jar ]; then \
+ env SBT_OPTS="-XX:+UseConcMarkSweepGC -Xmx4G -Xms1G" \
+ sbt -Dbuild.scalafmtOnCompile=false -Dbuild.lintOnCompile=false ";project api;coverageOff;assembly" \
+ && cp modules/api/target/scala-2.12/vinyldns.jar /opt/vinyldns/; \
+ fi
+
+# Build the testing image, copying data from `vinyldns-api`
+FROM vinyldns/build:base-test
+SHELL ["/bin/bash","-c"]
+COPY --from=vinyldns-api /opt/vinyldns /opt/vinyldns
+
+# Local bind server files
+COPY docker/bind9/etc/named.conf.local /etc/bind/
+COPY docker/bind9/etc/*.conf /var/bind/config/
+COPY docker/bind9/zones/ /var/bind/
+RUN named-checkconf
+
+# Copy over the functional tests
+COPY modules/api/functional_test /functional_test
+
+ENTRYPOINT ["/bin/bash", "-c", "/initialize.sh && \
+ (java -Dconfig.file=/opt/vinyldns/docker.conf -jar /opt/vinyldns/vinyldns.jar &> /opt/vinyldns/vinyldns.log &) && \
+ echo -n 'Starting VinylDNS API..' && \
+ timeout 30s grep -q 'STARTED SUCCESSFULLY' <(timeout 30s tail -f /opt/vinyldns/vinyldns.log) && \
+ echo 'done.' 
&& \ + /bin/bash"] \ No newline at end of file diff --git a/modules/api/functional_test/Dockerfile.dockerignore b/modules/api/functional_test/Dockerfile.dockerignore new file mode 100644 index 000000000..b134391d3 --- /dev/null +++ b/modules/api/functional_test/Dockerfile.dockerignore @@ -0,0 +1,16 @@ +**/.venv_win +**/.virtualenv +**/.venv +**/target +**/docs +**/out +**/.log +**/.idea/ +**/.bsp +**/*cache* +**/*.png +**/.git +**/Dockerfile +**/*.dockerignore +**/.github +**/_template diff --git a/modules/api/functional_test/Makefile b/modules/api/functional_test/Makefile new file mode 100644 index 000000000..ed3bd905c --- /dev/null +++ b/modules/api/functional_test/Makefile @@ -0,0 +1,25 @@ +SHELL=bash +ROOT_DIR:=$(shell dirname $(realpath $(lastword $(MAKEFILE_LIST)))) + +# Check that the required version of make is being used +REQ_MAKE_VER:=3.82 +ifneq ($(REQ_MAKE_VER),$(firstword $(sort $(MAKE_VERSION) $(REQ_MAKE_VER)))) + $(error The version of MAKE $(REQ_MAKE_VER) or higher is required; you are running $(MAKE_VERSION)) +endif + +.ONESHELL: + +.PHONY: all build run + +all: build run + +build: + @set -euo pipefail + trap 'if [ -f modules/api/functional_test/vinyldns.jar ]; then rm modules/api/functional_test/vinyldns.jar; fi' EXIT + cd ../../.. + if [ -f modules/api/target/scala-2.12/vinyldns.jar ]; then cp modules/api/target/scala-2.12/vinyldns.jar modules/api/functional_test/vinyldns.jar; fi + docker build -t vinyldns-test -f modules/api/functional_test/Dockerfile . 
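
The Makefile above guards on GNU Make 3.82 or newer by sorting the required version together with `MAKE_VERSION` and checking which comes first. A rough standalone shell analogue of that check is sketched below (a sketch only: it uses `sort -V` rather than make's lexical `$(sort)`, and the `have` value is a stand-in for what make would report):

```shell
# Version guard sketch: sort the required and running versions together;
# if the required version is not the lowest, the running version is too old.
req="3.82"
have="4.3"   # stand-in example; inside a recipe this would come from $(MAKE_VERSION)
lowest=$(printf '%s\n%s\n' "$req" "$have" | sort -V | head -n1)
if [ "$lowest" = "$req" ]; then
  echo "make version OK"
else
  echo "make $req or higher is required; you are running $have" >&2
  exit 1
fi
```

In the Makefile itself this comparison happens at parse time via `ifneq`, so the build aborts before any recipe runs.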
+ +run: + @set -euo pipefail + docker run -it --rm -p 9000:9000 -p 19001:53/tcp -p 19001:53/udp vinyldns-test \ No newline at end of file diff --git a/modules/api/functional_test/conftest.py b/modules/api/functional_test/conftest.py index 64caf366b..3dfeb86ca 100644 --- a/modules/api/functional_test/conftest.py +++ b/modules/api/functional_test/conftest.py @@ -4,9 +4,12 @@ import os import ssl import sys import traceback +from collections import OrderedDict +from typing import MutableMapping, List import _pytest.config import pytest +from xdist.scheduler import LoadScopeScheduling from vinyldns_context import VinylDNSTestContext @@ -26,7 +29,7 @@ def pytest_addoption(parser: _pytest.config.argparsing.Parser) -> None: Adds additional options that we can parse when we run the tests, stores them in the parser / py.test context """ parser.addoption("--url", dest="url", action="store", default="http://localhost:9000", help="URL for application to root") - parser.addoption("--dns-ip", dest="dns_ip", action="store", default="127.0.0.1:19001", help="The ip address for the dns name server to update") + parser.addoption("--dns-ip", dest="dns_ip", action="store", default="127.0.0.1", help="The ip address for the dns name server to update") parser.addoption("--resolver-ip", dest="resolver_ip", action="store", help="The ip address for the dns server to use for the tests during resolution. 
This is usually the same as `--dns-ip`") parser.addoption("--dns-zone", dest="dns_zone", action="store", default="vinyldns.", help="The zone name that will be used for testing") parser.addoption("--dns-key-name", dest="dns_key_name", action="store", default="vinyldns.", help="The name of the key used to sign updates for the zone") @@ -116,3 +119,41 @@ def retrieve_resolver(resolver_name: str) -> str: pytest.exit(1) return resolver_address + +class WorkerScheduler(LoadScopeScheduling): + worker_assignments: List[MutableMapping] = [{"name": "list_batch_change_summaries_test.py", "worker": 0}] + + def _assign_work_unit(self, node): + """Assign a work unit to a node.""" + assert self.workqueue + + # Grab a unit of work + scope, work_unit = self.workqueue.popitem(last=False) + + # Always run list_batch_change_summaries_test on the first worker + for assignment in WorkerScheduler.worker_assignments: + while assignment["name"] in scope: + self.run_work_on_node(self.nodes[assignment["worker"]], scope, work_unit) + scope, work_unit = self.workqueue.popitem(last=False) + + self.run_work_on_node(node, scope, work_unit) + + def run_work_on_node(self, node, scope, work_unit): + # Keep track of the assigned work + assigned_to_node = self.assigned_work.setdefault(node, default=OrderedDict()) + assigned_to_node[scope] = work_unit + # Ask the node to execute the workload + worker_collection = self.registered_collections[node] + nodeids_indexes = [ + worker_collection.index(nodeid) + for nodeid, completed in work_unit.items() + if not completed + ] + node.send_runtest_some(nodeids_indexes) + + def _split_scope(self, nodeid): + return nodeid + + +def pytest_xdist_make_scheduler(config, log): + return WorkerScheduler(config, log) diff --git a/modules/api/functional_test/docker.conf b/modules/api/functional_test/docker.conf new file mode 100644 index 000000000..1a570b250 --- /dev/null +++ b/modules/api/functional_test/docker.conf @@ -0,0 +1,302 @@ 
+################################################################################################################ +# This configuration is only used by docker and the build process +################################################################################################################ +vinyldns { + + # configured backend providers + backend { + # Use "default" when dns backend legacy = true + # otherwise, use the id of one of the connections in any of your backends + default-backend-id = "default" + + # this is where we can save additional backends + backend-providers = [ + { + class-name = "vinyldns.api.backend.dns.DnsBackendProviderLoader" + settings = { + legacy = false + backends = [ + { + id = "default" + zone-connection = { + name = "vinyldns." + key-name = "vinyldns." + key-name = ${?DEFAULT_DNS_KEY_NAME} + key = "nzisn+4G2ldMn0q1CV3vsg==" + key = ${?DEFAULT_DNS_KEY_SECRET} + primary-server = "127.0.0.1" + primary-server = ${?DEFAULT_DNS_ADDRESS} + } + transfer-connection = { + name = "vinyldns." + key-name = "vinyldns." + key-name = ${?DEFAULT_DNS_KEY_NAME} + key = "nzisn+4G2ldMn0q1CV3vsg==" + key = ${?DEFAULT_DNS_KEY_SECRET} + primary-server = "127.0.0.1" + primary-server = ${?DEFAULT_DNS_ADDRESS} + }, + tsig-usage = "always" + }, + { + id = "func-test-backend" + zone-connection = { + name = "vinyldns." + key-name = "vinyldns." + key-name = ${?DEFAULT_DNS_KEY_NAME} + key = "nzisn+4G2ldMn0q1CV3vsg==" + key = ${?DEFAULT_DNS_KEY_SECRET} + primary-server = "127.0.0.1" + primary-server = ${?DEFAULT_DNS_ADDRESS} + } + transfer-connection = { + name = "vinyldns." + key-name = "vinyldns." 
+ key-name = ${?DEFAULT_DNS_KEY_NAME} + key = "nzisn+4G2ldMn0q1CV3vsg==" + key = ${?DEFAULT_DNS_KEY_SECRET} + primary-server = "127.0.0.1" + primary-server = ${?DEFAULT_DNS_ADDRESS} + }, + tsig-usage = "always" + } + ] + } + } + ] + } + + queue { + class-name = "vinyldns.sqs.queue.SqsMessageQueueProvider" + + messages-per-poll = 10 + polling-interval = 250.millis + + settings { + # AWS access key and secret. + access-key = "test" + access-key = ${?AWS_ACCESS_KEY} + secret-key = "test" + secret-key = ${?AWS_SECRET_ACCESS_KEY} + + # Regional endpoint to make your requests (eg. 'us-west-2', 'us-east-1', etc.). This is the region where your queue is housed. + signing-region = "us-east-1" + signing-region = ${?SQS_REGION} + + # Endpoint to access queue + service-endpoint = "http://localhost:4566/" + service-endpoint = ${?SQS_ENDPOINT} + + # Queue name. Should be used in conjunction with service endpoint, rather than using a queue url which is subject to change. + queue-name = "vinyldns" + queue-name = ${?SQS_QUEUE_NAME} + } + } + + rest { + host = "0.0.0.0" + port = 9000 + } + + sync-delay = 10000 + + approved-name-servers = [ + "172.17.42.1.", + "ns1.parent.com." + "ns1.parent.com1." + "ns1.parent.com2." + "ns1.parent.com3." + "ns1.parent.com4." 
+ ] + + crypto { + type = "vinyldns.core.crypto.NoOpCrypto" + } + + data-stores = ["mysql"] + + mysql { + settings { + # JDBC Settings, these are all values in scalikejdbc-config, not our own + # these must be overridden to use MYSQL for production use + # assumes a docker or mysql instance running locally + name = "vinyldns" + driver = "org.h2.Driver" + driver = ${?JDBC_DRIVER} + migration-url = "jdbc:h2:mem:vinyldns;MODE=MYSQL;DB_CLOSE_DELAY=-1;DATABASE_TO_LOWER=TRUE;IGNORECASE=TRUE;INIT=RUNSCRIPT FROM 'classpath:test/ddl.sql'" + migration-url = ${?JDBC_MIGRATION_URL} + url = "jdbc:h2:mem:vinyldns;MODE=MYSQL;DB_CLOSE_DELAY=-1;DATABASE_TO_LOWER=TRUE;IGNORECASE=TRUE;INIT=RUNSCRIPT FROM 'classpath:test/ddl.sql'" + url = ${?JDBC_URL} + user = "sa" + user = ${?JDBC_USER} + password = "" + password = ${?JDBC_PASSWORD} + # see https://github.com/brettwooldridge/HikariCP + connection-timeout-millis = 1000 + idle-timeout = 10000 + max-lifetime = 600000 + maximum-pool-size = 20 + minimum-idle = 20 + register-mbeans = true + } + # Repositories that use this data store are listed here + repositories { + zone { + # no additional settings for now + } + batch-change { + # no additional settings for now + } + user { + + } + record-set { + + } + group { + + } + membership { + + } + group-change { + + } + zone-change { + + } + record-change { + + } + } + } + + backends = [] + + batch-change-limit = 1000 + + # FQDNs / IPs that cannot be modified via VinylDNS + # regex-list used for all record types except PTR + # ip-list used exclusively for PTR records + high-value-domains = { + regex-list = [ + "high-value-domain.*" # for testing + ] + ip-list = [ + # using reverse zones in the vinyldns/bind9 docker image for testing + "192.0.2.252", + "192.0.2.253", + "fd69:27cc:fe91:0:0:0:0:ffff", + "fd69:27cc:fe91:0:0:0:ffff:0" + ] + } + + # FQDNs / IPs / zone names that require manual review upon submission in batch change interface + # domain-list used for all record types except PTR + # 
ip-list used exclusively for PTR records + manual-review-domains = { + domain-list = [ + "needs-review.*" + ] + ip-list = [ + "192.0.1.254", + "192.0.1.255", + "192.0.2.254", + "192.0.2.255", + "192.0.3.254", + "192.0.3.255", + "192.0.4.254", + "192.0.4.255", + "fd69:27cc:fe91:0:0:0:ffff:1", + "fd69:27cc:fe91:0:0:0:ffff:2", + "fd69:27cc:fe92:0:0:0:ffff:1", + "fd69:27cc:fe92:0:0:0:ffff:2", + "fd69:27cc:fe93:0:0:0:ffff:1", + "fd69:27cc:fe93:0:0:0:ffff:2", + "fd69:27cc:fe94:0:0:0:ffff:1", + "fd69:27cc:fe94:0:0:0:ffff:2" + ] + zone-name-list = [ + "zone.requires.review." + "zone.requires.review1." + "zone.requires.review2." + "zone.requires.review3." + "zone.requires.review4." + ] + } + + # FQDNs / IPs that cannot be modified via VinylDNS + # regex-list used for all record types except PTR + # ip-list used exclusively for PTR records + high-value-domains = { + regex-list = [ + "high-value-domain.*" # for testing + ] + ip-list = [ + # using reverse zones in the vinyldns/bind9 docker image for testing + "192.0.1.252", + "192.0.1.253", + "192.0.2.252", + "192.0.2.253", + "192.0.3.252", + "192.0.3.253", + "192.0.4.252", + "192.0.4.253", + "fd69:27cc:fe91:0:0:0:0:ffff", + "fd69:27cc:fe91:0:0:0:ffff:0", + "fd69:27cc:fe92:0:0:0:0:ffff", + "fd69:27cc:fe92:0:0:0:ffff:0", + "fd69:27cc:fe93:0:0:0:0:ffff", + "fd69:27cc:fe93:0:0:0:ffff:0", + "fd69:27cc:fe94:0:0:0:0:ffff", + "fd69:27cc:fe94:0:0:0:ffff:0" + ] + } + + # types of unowned records that users can access in shared zones + shared-approved-types = ["A", "AAAA", "CNAME", "PTR", "TXT"] + + manual-batch-review-enabled = true + + scheduled-changes-enabled = true + + multi-record-batch-change-enabled = true + + global-acl-rules = [ + { + group-ids: ["global-acl-group-id"], + fqdn-regex-list: [".*shared[0-9]{1}."] + }, + { + group-ids: ["another-global-acl-group"], + fqdn-regex-list: [".*ok[0-9]{1}."] + } + ] +} + +akka { + loglevel = "INFO" + loggers = ["akka.event.slf4j.Slf4jLogger"] + logging-filter = 
"akka.event.slf4j.Slf4jLoggingFilter" + logger-startup-timeout = 30s + + actor { + provider = "akka.actor.LocalActorRefProvider" + } +} + +akka.http { + server { + # The time period within which the TCP binding process must be completed. + # Set to `infinite` to disable. + bind-timeout = 5s + + # Show verbose error messages back to the client + verbose-error-messages = on + } + + parsing { + # Spray doesn't like the AWS4 headers + illegal-header-warnings = on + } +} diff --git a/modules/api/functional_test/live_tests/batch/get_batch_change_test.py b/modules/api/functional_test/live_tests/batch/get_batch_change_test.py index fbad9be8f..015a4561a 100644 --- a/modules/api/functional_test/live_tests/batch/get_batch_change_test.py +++ b/modules/api/functional_test/live_tests/batch/get_batch_change_test.py @@ -84,7 +84,7 @@ def test_get_batch_change_with_deleted_record_owner_group_success(shared_zone_te client = shared_zone_test_context.shared_zone_vinyldns_client shared_zone_name = shared_zone_test_context.shared_zone["name"] temp_group = { - "name": "test-get-batch-record-owner-group2", + "name": f"test-get-batch-record-owner-group{shared_zone_test_context.partition_id}", "email": "test@test.com", "description": "for testing that a get batch change still works when record owner group is deleted", "members": [{"id": "sharedZoneUser"}], diff --git a/modules/api/functional_test/live_tests/batch/list_batch_change_summaries_test.py b/modules/api/functional_test/live_tests/batch/list_batch_change_summaries_test.py index 6f0d1b655..bc7cf794f 100644 --- a/modules/api/functional_test/live_tests/batch/list_batch_change_summaries_test.py +++ b/modules/api/functional_test/live_tests/batch/list_batch_change_summaries_test.py @@ -5,10 +5,15 @@ from vinyldns_context import VinylDNSTestContext from vinyldns_python import VinylDNSClient +# FIXME: this whole suite of tests is fragile as it relies on data ordered in a specific way +# and that data cannot be cleaned up via the API 
(batchrecordchanges). This causes problems +# with xdist and parallel execution. The xdist scheduler will only ever schedule this suite +# on the first worker (gw0). + @pytest.fixture(scope="module") -def list_fixture(shared_zone_test_context): +def list_fixture(shared_zone_test_context, tmp_path_factory): ctx = shared_zone_test_context.list_batch_summaries_context - ctx.setup(shared_zone_test_context) + ctx.setup(shared_zone_test_context, tmp_path_factory.getbasetemp().parent) yield ctx ctx.tear_down(shared_zone_test_context) @@ -20,7 +25,7 @@ def test_list_batch_change_summaries_success(list_fixture): client = list_fixture.client batch_change_summaries_result = client.list_batch_change_summaries(status=200) - list_fixture.check_batch_change_summaries_page_accuracy(batch_change_summaries_result, size=3) + list_fixture.check_batch_change_summaries_page_accuracy(batch_change_summaries_result, size=len(list_fixture.completed_changes)) def test_list_batch_change_summaries_with_max_items(list_fixture): @@ -40,7 +45,8 @@ def test_list_batch_change_summaries_with_start_from(list_fixture): client = list_fixture.client batch_change_summaries_result = client.list_batch_change_summaries(status=200, start_from=1) - list_fixture.check_batch_change_summaries_page_accuracy(batch_change_summaries_result, size=2, start_from=1) + all_changes = list_fixture.completed_changes + list_fixture.check_batch_change_summaries_page_accuracy(batch_change_summaries_result, size=len(all_changes) - 1, start_from=1) def test_list_batch_change_summaries_with_next_id(list_fixture): @@ -49,13 +55,15 @@ def test_list_batch_change_summaries_with_next_id(list_fixture): Apply retrieved nextId to get second page of batch change summaries. 
""" client = list_fixture.client + batch_change_summaries_result = client.list_batch_change_summaries(status=200, start_from=1, max_items=1) list_fixture.check_batch_change_summaries_page_accuracy(batch_change_summaries_result, size=1, start_from=1, max_items=1, next_id=2) next_page_result = client.list_batch_change_summaries(status=200, start_from=batch_change_summaries_result["nextId"]) - list_fixture.check_batch_change_summaries_page_accuracy(next_page_result, size=1, start_from=batch_change_summaries_result["nextId"]) + all_changes = list_fixture.completed_changes + list_fixture.check_batch_change_summaries_page_accuracy(next_page_result, size=len(all_changes) - int(batch_change_summaries_result["nextId"]), start_from=batch_change_summaries_result["nextId"]) @pytest.mark.manual_batch_review diff --git a/modules/api/functional_test/live_tests/conftest.py b/modules/api/functional_test/live_tests/conftest.py index 363535450..59c1e140f 100644 --- a/modules/api/functional_test/live_tests/conftest.py +++ b/modules/api/functional_test/live_tests/conftest.py @@ -16,7 +16,7 @@ ctx_cache: MutableMapping[str, SharedZoneTestContext] = {} @pytest.fixture(scope="session") def shared_zone_test_context(tmp_path_factory, worker_id): if worker_id == "master": - partition_id = "2" + partition_id = "1" else: partition_id = str(int(worker_id.replace("gw", "")) + 1) diff --git a/modules/api/functional_test/live_tests/list_batch_summaries_test_context.py b/modules/api/functional_test/live_tests/list_batch_summaries_test_context.py index 265244163..76862a631 100644 --- a/modules/api/functional_test/live_tests/list_batch_summaries_test_context.py +++ b/modules/api/functional_test/live_tests/list_batch_summaries_test_context.py @@ -1,26 +1,31 @@ +from pathlib import Path + from utils import * from vinyldns_python import VinylDNSClient +# FIXME: this context is fragile as it depends on creating batch changes carefully created with a time delay. 
class ListBatchChangeSummariesTestContext: - to_delete: set = set() - completed_changes: list = [] - group: object = None - is_setup: bool = False - def __init__(self): + def __init__(self, partition_id: str): + self.to_delete: set = set() + self.completed_changes: list = [] + self.setup_started = False + self.partition_id = partition_id self.client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "listBatchSummariesAccessKey", "listBatchSummariesSecretKey") - def setup(self, shared_zone_test_context): + def setup(self, shared_zone_test_context, temp_directory: Path): + if self.setup_started: + # Safeguard against reentrance + return + + self.setup_started = True self.completed_changes = [] self.to_delete = set() acl_rule = generate_acl_rule("Write", userId="list-batch-summaries-id") add_ok_acl_rules(shared_zone_test_context, [acl_rule]) - initial_db_check = self.client.list_batch_change_summaries(status=200) - self.group = self.client.get_group("list-summaries-group", status=200) - ok_zone_name = shared_zone_test_context.ok_zone["name"] batch_change_input_one = { "comments": "first", @@ -48,26 +53,20 @@ class ListBatchChangeSummariesTestContext: record_set_list = [] self.completed_changes = [] - if len(initial_db_check["batchChanges"]) == 0: - # make some batch changes - for batch_change_input in batch_change_inputs: - change = self.client.create_batch_change(batch_change_input, status=202) + # make some batch changes + for batch_change_input in batch_change_inputs: + change = self.client.create_batch_change(batch_change_input, status=202) - if "Review" not in change["status"]: - completed = self.client.wait_until_batch_change_completed(change) - assert_that(completed["comments"], equal_to(batch_change_input["comments"])) - record_set_list += [(change["zoneId"], change["recordSetId"]) for change in completed["changes"]] + if "Review" not in change["status"]: + completed = self.client.wait_until_batch_change_completed(change) + assert_that(completed["comments"], 
equal_to(batch_change_input["comments"])) + record_set_list += [(change["zoneId"], change["recordSetId"]) for change in completed["changes"]] + self.to_delete = set(record_set_list) - # sleep for consistent ordering of timestamps, must be at least one second apart - time.sleep(1) + # Sleep for consistent ordering of timestamps, must be at least one second apart + time.sleep(1.1) - self.completed_changes = self.client.list_batch_change_summaries(status=200)["batchChanges"] - assert_that(len(self.completed_changes), equal_to(len(batch_change_inputs))) - else: - print("\r\n!!! USING EXISTING SUMMARIES") - self.completed_changes = initial_db_check["batchChanges"] - self.to_delete = set(record_set_list) - self.is_setup = True + self.completed_changes = self.client.list_batch_change_summaries(status=200)["batchChanges"] def tear_down(self, shared_zone_test_context): for result_rs in self.to_delete: @@ -76,6 +75,8 @@ class ListBatchChangeSummariesTestContext: shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(delete_result, 'Complete') self.to_delete.clear() clear_ok_acl_rules(shared_zone_test_context) + self.client.clear_zones() + self.client.clear_groups() self.client.tear_down() def check_batch_change_summaries_page_accuracy(self, summaries_page, size, next_id=False, start_from=False, max_items=100, approval_status=False): diff --git a/modules/api/functional_test/live_tests/list_groups_test_context.py b/modules/api/functional_test/live_tests/list_groups_test_context.py index ba43a452a..4ccd75193 100644 --- a/modules/api/functional_test/live_tests/list_groups_test_context.py +++ b/modules/api/functional_test/live_tests/list_groups_test_context.py @@ -5,11 +5,16 @@ from vinyldns_python import VinylDNSClient class ListGroupsTestContext(object): def __init__(self, partition_id: str): self.partition_id = partition_id + self.setup_started = False self.client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "listGroupAccessKey", 
"listGroupSecretKey") self.support_user_client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "supportUserAccessKey", "supportUserSecretKey") self.group_prefix = f"test-list-my-groups{partition_id}" - def build(self): + def setup(self): + if self.setup_started: + # Safeguard against reentrance + return + self.setup_started = True try: for index in range(0, 50): new_group = { @@ -25,7 +30,9 @@ class ListGroupsTestContext(object): raise def tear_down(self): - clear_zones(self.client) - clear_groups(self.client) + self.client.clear_zones() + self.client.clear_groups() self.client.tear_down() + self.support_user_client.clear_zones() + self.support_user_client.clear_groups() self.support_user_client.tear_down() diff --git a/modules/api/functional_test/live_tests/list_recordsets_test_context.py b/modules/api/functional_test/live_tests/list_recordsets_test_context.py index da41fcaeb..ab29e9d5c 100644 --- a/modules/api/functional_test/live_tests/list_recordsets_test_context.py +++ b/modules/api/functional_test/live_tests/list_recordsets_test_context.py @@ -5,6 +5,7 @@ from vinyldns_python import VinylDNSClient class ListRecordSetsTestContext(object): def __init__(self, partition_id: str): self.partition_id = partition_id + self.setup_started = False self.client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "listRecordsAccessKey", "listRecordsSecretKey") self.zone = None self.all_records = [] @@ -19,6 +20,11 @@ class ListRecordSetsTestContext(object): self.group = my_groups["groups"][0] def setup(self): + if self.setup_started: + # Safeguard against reentrance + return + self.setup_started = True + partition_id = self.partition_id group = { "name": f"list-records-group{partition_id}", @@ -42,8 +48,8 @@ class ListRecordSetsTestContext(object): self.all_records = self.client.list_recordsets_by_zone(self.zone["id"])["recordSets"] def tear_down(self): - clear_zones(self.client) - clear_groups(self.client) + self.client.clear_zones() + self.client.clear_groups() 
self.client.tear_down() def check_recordsets_page_accuracy(self, list_results_page, size, offset, next_id=False, start_from=False, max_items=100, record_type_filter=False, name_sort="ASC"): diff --git a/modules/api/functional_test/live_tests/list_zones_test_context.py b/modules/api/functional_test/live_tests/list_zones_test_context.py index 541301fb0..cb71cac4c 100644 --- a/modules/api/functional_test/live_tests/list_zones_test_context.py +++ b/modules/api/functional_test/live_tests/list_zones_test_context.py @@ -5,6 +5,7 @@ from vinyldns_python import VinylDNSClient class ListZonesTestContext(object): def __init__(self, partition_id): self.partition_id = partition_id + self.setup_started = False self.client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "listZonesAccessKey", "listZonesSecretKey") self.search_zone1 = None self.search_zone2 = None @@ -13,7 +14,12 @@ class ListZonesTestContext(object): self.non_search_zone2 = None self.list_zones_group = None - def build(self): + def setup(self): + if self.setup_started: + # Safeguard against reentrance + return + self.setup_started = True + partition_id = self.partition_id group = { "name": f"list-zones-group{partition_id}", @@ -84,6 +90,6 @@ class ListZonesTestContext(object): self.client.wait_until_zone_active(change["zone"]["id"]) def tear_down(self): - clear_zones(self.client) - clear_groups(self.client) + self.client.clear_zones() + self.client.clear_groups() self.client.tear_down() diff --git a/modules/api/functional_test/live_tests/membership/create_group_test.py b/modules/api/functional_test/live_tests/membership/create_group_test.py index c1157d0c4..0fcaaa95f 100644 --- a/modules/api/functional_test/live_tests/membership/create_group_test.py +++ b/modules/api/functional_test/live_tests/membership/create_group_test.py @@ -10,7 +10,7 @@ def test_create_group_success(shared_zone_test_context): try: new_group = { - "name": "test-create-group-success", + "name": 
f"test-create-group-success{shared_zone_test_context.partition_id}", "email": "test@test.com", "description": "this is a description", "members": [{"id": "ok"}], diff --git a/modules/api/functional_test/live_tests/membership/delete_group_test.py b/modules/api/functional_test/live_tests/membership/delete_group_test.py index 0fe12e4a2..3ce8dd803 100644 --- a/modules/api/functional_test/live_tests/membership/delete_group_test.py +++ b/modules/api/functional_test/live_tests/membership/delete_group_test.py @@ -42,7 +42,7 @@ def test_delete_group_that_is_already_deleted(shared_zone_test_context): try: new_group = { - "name": "test-delete-group-already", + "name": f"test-delete-group-already{shared_zone_test_context.partition_id}", "email": "test@test.com", "description": "this is a description", "members": [{"id": "ok"}], @@ -76,7 +76,6 @@ def test_delete_admin_group(shared_zone_test_context): } result_group = client.create_group(new_group, status=200) - print(result_group) # Create zone with that group ID as admin zone = { diff --git a/modules/api/functional_test/live_tests/membership/list_my_groups_test.py b/modules/api/functional_test/live_tests/membership/list_my_groups_test.py index cdefd85d2..cfcf557e8 100644 --- a/modules/api/functional_test/live_tests/membership/list_my_groups_test.py +++ b/modules/api/functional_test/live_tests/membership/list_my_groups_test.py @@ -13,16 +13,17 @@ def test_list_my_groups_no_parameters(list_my_groups_context): Test that we can get all the groups where a user is a member """ results = list_my_groups_context.client.list_my_groups(status=200) - assert_that(results, has_length(3)) # 3 fields - assert_that(results["groups"], has_length(50)) + # Only count the groups with the group prefix + groups = [x for x in results["groups"] if x["name"].startswith(list_my_groups_context.group_prefix)] + assert_that(groups, has_length(50)) assert_that(results, is_not(has_key("groupNameFilter"))) assert_that(results, is_not(has_key("startFrom"))) 
assert_that(results, is_not(has_key("nextId"))) - assert_that(results["maxItems"], is_(100)) + assert_that(results["maxItems"], is_(200)) - results["groups"] = sorted(results["groups"], key=lambda x: x["name"]) + results["groups"] = sorted(groups, key=lambda x: x["name"]) for i in range(0, 50): assert_that(results["groups"][i]["name"], is_("{0}-{1:0>3}".format(list_my_groups_context.group_prefix, i))) @@ -37,7 +38,7 @@ def test_get_my_groups_using_old_account_auth(list_my_groups_context): assert_that(results, is_not(has_key("groupNameFilter"))) assert_that(results, is_not(has_key("startFrom"))) assert_that(results, is_not(has_key("nextId"))) - assert_that(results["maxItems"], is_(100)) + assert_that(results["maxItems"], is_(200)) def test_list_my_groups_max_items(list_my_groups_context): @@ -101,7 +102,7 @@ def test_list_my_groups_filter_matches(list_my_groups_context): assert_that(results["groupNameFilter"], is_(f"{list_my_groups_context.group_prefix}-01")) assert_that(results, is_not(has_key("startFrom"))) assert_that(results, is_not(has_key("nextId"))) - assert_that(results["maxItems"], is_(100)) + assert_that(results["maxItems"], is_(200)) results["groups"] = sorted(results["groups"], key=lambda x: x["name"]) @@ -133,15 +134,17 @@ def test_list_my_groups_with_ignore_access_true(list_my_groups_context): """ results = list_my_groups_context.client.list_my_groups(ignore_access=True, status=200) + # Only count the groups with the group prefix assert_that(len(results["groups"]), greater_than(50)) - assert_that(results["maxItems"], is_(100)) + assert_that(results["maxItems"], is_(200)) assert_that(results["ignoreAccess"], is_(True)) my_results = list_my_groups_context.client.list_my_groups(status=200) - my_results["groups"] = sorted(my_results["groups"], key=lambda x: x["name"]) + my_groups = [x for x in my_results["groups"] if x["name"].startswith(list_my_groups_context.group_prefix)] + sorted_groups = sorted(my_groups, key=lambda x: x["name"]) for i in range(0, 
50): - assert_that(my_results["groups"][i]["name"], is_("{0}-{1:0>3}".format(list_my_groups_context.group_prefix, i))) + assert_that(sorted_groups[i]["name"], is_("{0}-{1:0>3}".format(list_my_groups_context.group_prefix, i))) def test_list_my_groups_as_support_user(list_my_groups_context): @@ -151,7 +154,7 @@ def test_list_my_groups_as_support_user(list_my_groups_context): results = list_my_groups_context.support_user_client.list_my_groups(status=200) assert_that(len(results["groups"]), greater_than(50)) - assert_that(results["maxItems"], is_(100)) + assert_that(results["maxItems"], is_(200)) assert_that(results["ignoreAccess"], is_(False)) @@ -162,5 +165,5 @@ def test_list_my_groups_as_support_user_with_ignore_access_true(list_my_groups_c results = list_my_groups_context.support_user_client.list_my_groups(ignore_access=True, status=200) assert_that(len(results["groups"]), greater_than(50)) - assert_that(results["maxItems"], is_(100)) + assert_that(results["maxItems"], is_(200)) assert_that(results["ignoreAccess"], is_(True)) diff --git a/modules/api/functional_test/live_tests/production_verify_test.py b/modules/api/functional_test/live_tests/production_verify_test.py index c75e650fc..230e4bbcc 100644 --- a/modules/api/functional_test/live_tests/production_verify_test.py +++ b/modules/api/functional_test/live_tests/production_verify_test.py @@ -23,10 +23,7 @@ def test_verify_production(shared_zone_test_context): } ] } - print("\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n") result = client.create_recordset(new_rs, status=202) - print(str(result)) - assert_that(result["changeType"], is_("Create")) assert_that(result["status"], is_("Pending")) assert_that(result["created"], is_not(none())) @@ -34,17 +31,14 @@ def test_verify_production(shared_zone_test_context): result_rs = result["recordSet"] result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] - print("\r\n\r\n!!!recordset is active! 
Verifying...") verify_recordset(result_rs, new_rs) - print("\r\n\r\n!!!recordset verified...") records = [x["address"] for x in result_rs["records"]] assert_that(records, has_length(2)) assert_that("10.1.1.1", is_in(records)) assert_that("10.2.2.2", is_in(records)) - print("\r\n\r\n!!!verifying recordset in dns backend") answers = dns_resolve(shared_zone_test_context.ok_zone, result_rs["name"], result_rs["type"]) rdata_strings = rdata(answers) diff --git a/modules/api/functional_test/live_tests/recordsets/create_recordset_test.py b/modules/api/functional_test/live_tests/recordsets/create_recordset_test.py index c9a13d67e..cfe5b169f 100644 --- a/modules/api/functional_test/live_tests/recordsets/create_recordset_test.py +++ b/modules/api/functional_test/live_tests/recordsets/create_recordset_test.py @@ -26,7 +26,6 @@ def test_create_recordset_with_dns_verify(shared_zone_test_context): ] } result = client.create_recordset(new_rs, status=202) - print(str(result)) assert_that(result["changeType"], is_("Create")) assert_that(result["status"], is_("Pending")) @@ -157,7 +156,6 @@ def test_create_srv_recordset_with_service_and_protocol(shared_zone_test_context ] } result = client.create_recordset(new_rs, status=202) - print(str(result)) assert_that(result["changeType"], is_("Create")) assert_that(result["status"], is_("Pending")) @@ -193,7 +191,6 @@ def test_create_aaaa_recordset_with_shorthand_record(shared_zone_test_context): ] } result = client.create_recordset(new_rs, status=202) - print(str(result)) assert_that(result["changeType"], is_("Create")) assert_that(result["status"], is_("Pending")) @@ -229,7 +226,6 @@ def test_create_aaaa_recordset_with_normal_record(shared_zone_test_context): ] } result = client.create_recordset(new_rs, status=202) - print(str(result)) assert_that(result["changeType"], is_("Create")) assert_that(result["status"], is_("Pending")) @@ -392,7 +388,6 @@ def test_create_recordset_conflict_with_dns_different_type(shared_zone_test_cont ] } result = 
client.create_recordset(new_rs, status=202) - print(str(result)) assert_that(result["changeType"], is_("Create")) assert_that(result["status"], is_("Pending")) @@ -1032,7 +1027,6 @@ def test_create_recordset_forward_record_types(shared_zone_test_context, record_ result = client.create_recordset(new_rs, status=202) assert_that(result["status"], is_("Pending")) - print(str(result)) result_rs = result["recordSet"] verify_recordset(result_rs, new_rs) @@ -1064,7 +1058,6 @@ def test_reverse_create_recordset_reverse_record_types(shared_zone_test_context, result = client.create_recordset(new_rs, status=202) assert_that(result["status"], is_("Pending")) - print(str(result)) result_rs = result["recordSet"] verify_recordset(result_rs, new_rs) @@ -1183,7 +1176,6 @@ def test_create_ipv4_ptr_recordset_with_verify(shared_zone_test_context): ] } result = client.create_recordset(new_rs, status=202) - print(str(result)) assert_that(result["changeType"], is_("Create")) assert_that(result["status"], is_("Pending")) @@ -1269,7 +1261,6 @@ def test_create_ipv6_ptr_recordset(shared_zone_test_context): ] } result = client.create_recordset(new_rs, status=202) - print(str(result)) assert_that(result["changeType"], is_("Create")) assert_that(result["status"], is_("Pending")) @@ -1377,7 +1368,6 @@ def test_at_create_recordset(shared_zone_test_context): } result = client.create_recordset(new_rs, status=202) - print(str(result)) assert_that(result["changeType"], is_("Create")) assert_that(result["status"], is_("Pending")) @@ -1428,7 +1418,6 @@ def test_create_record_with_escape_characters_in_record_data_succeeds(shared_zon } result = client.create_recordset(new_rs, status=202) - print(str(result)) assert_that(result["changeType"], is_("Create")) assert_that(result["status"], is_("Pending")) @@ -1686,7 +1675,6 @@ def test_create_ipv4_ptr_recordset_with_verify_in_classless(shared_zone_test_con ] } result = client.create_recordset(new_rs, status=202) - print(str(result)) 
assert_that(result["changeType"], is_("Create")) assert_that(result["status"], is_("Pending")) diff --git a/modules/api/functional_test/live_tests/recordsets/delete_recordset_test.py b/modules/api/functional_test/live_tests/recordsets/delete_recordset_test.py index f0e6db94e..135b26abb 100644 --- a/modules/api/functional_test/live_tests/recordsets/delete_recordset_test.py +++ b/modules/api/functional_test/live_tests/recordsets/delete_recordset_test.py @@ -18,7 +18,6 @@ def test_delete_recordset_forward_record_types(shared_zone_test_context, record_ result = client.create_recordset(new_rs, status=202) assert_that(result["status"], is_("Pending")) - print(str(result)) result_rs = result["recordSet"] verify_recordset(result_rs, new_rs) @@ -62,7 +61,6 @@ def test_delete_recordset_reverse_record_types(shared_zone_test_context, record_ result = client.create_recordset(new_rs, status=202) assert_that(result["status"], is_("Pending")) - print(str(result)) result_rs = result["recordSet"] verify_recordset(result_rs, new_rs) @@ -114,7 +112,6 @@ def test_delete_recordset_with_verify(shared_zone_test_context): ] } result = client.create_recordset(new_rs, status=202) - print(str(result)) assert_that(result["changeType"], is_("Create")) assert_that(result["status"], is_("Pending")) @@ -250,7 +247,6 @@ def test_delete_ipv4_ptr_recordset(shared_zone_test_context): } result = client.create_recordset(orig_rs, status=202) result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] - print("\r\n\r\n!!!recordset is active! Deleting...") delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) client.wait_until_recordset_change_status(delete_result, "Complete") @@ -289,7 +285,6 @@ def test_delete_ipv6_ptr_recordset(shared_zone_test_context): } result = client.create_recordset(orig_rs, status=202) result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] - print("\r\n\r\n!!!recordset is active! 
Deleting...") delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) client.wait_until_recordset_change_status(delete_result, "Complete") @@ -344,8 +339,6 @@ def test_at_delete_recordset(shared_zone_test_context): } result = client.create_recordset(new_rs, status=202) - print(json.dumps(result, indent=3)) - assert_that(result["changeType"], is_("Create")) assert_that(result["status"], is_("Pending")) assert_that(result["created"], is_not(none())) @@ -391,28 +384,21 @@ def test_delete_recordset_with_different_dns_data(shared_zone_test_context): } ] } - print("\r\nCreating recordset in zone " + str(ok_zone) + "\r\n") result = client.create_recordset(new_rs, status=202) - print(str(result)) result_rs = result["recordSet"] result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] - print("\r\n\r\n!!!recordset is active! Verifying...") verify_recordset(result_rs, new_rs) - print("\r\n\r\n!!!recordset verified...") result_rs["records"][0]["address"] = "10.8.8.8" result = client.update_recordset(result_rs, status=202) result_rs = client.wait_until_recordset_change_status(result, "Complete")["recordSet"] - print("\r\n\r\n!!!verifying recordset in dns backend") answers = dns_resolve(ok_zone, result_rs["name"], result_rs["type"]) assert_that(answers, has_length(1)) response = dns_update(ok_zone, result_rs["name"], 300, result_rs["type"], "10.9.9.9") - print("\nSuccessfully updated the record, record is now out of sync\n") - print(str(response)) # check you can delete delete_result = client.delete_recordset(result_rs["zoneId"], result_rs["id"], status=202) diff --git a/modules/api/functional_test/live_tests/recordsets/update_recordset_test.py b/modules/api/functional_test/live_tests/recordsets/update_recordset_test.py index 980706a50..701dec6fa 100644 --- a/modules/api/functional_test/live_tests/recordsets/update_recordset_test.py +++ b/modules/api/functional_test/live_tests/recordsets/update_recordset_test.py @@ 
-149,7 +149,6 @@ def test_update_recordset_forward_record_types(shared_zone_test_context, record_ result = client.create_recordset(new_rs, status=202) assert_that(result["status"], is_("Pending")) - print(str(result)) result_rs = result["recordSet"] verify_recordset(result_rs, new_rs) @@ -193,7 +192,6 @@ def test_update_reverse_record_types(shared_zone_test_context, record_name, test result = client.create_recordset(new_rs, status=202) assert_that(result["status"], is_("Pending")) - print(str(result)) result_rs = result["recordSet"] verify_recordset(result_rs, new_rs) @@ -305,8 +303,7 @@ def test_update_recordset_replace_2_records_with_1_different_record(shared_zone_ ] } result = client.create_recordset(new_rs, status=202) - print(str(result)) - + assert_that(result["changeType"], is_("Create")) assert_that(result["status"], is_("Pending")) assert_that(result["created"], is_not(none())) @@ -376,8 +373,7 @@ def test_update_existing_record_set_add_record(shared_zone_test_context): ] } result = client.create_recordset(new_rs, status=202) - print(str(result)) - + assert_that(result["changeType"], is_("Create")) assert_that(result["status"], is_("Pending")) assert_that(result["created"], is_not(none())) @@ -393,8 +389,6 @@ def test_update_existing_record_set_add_record(shared_zone_test_context): answers = dns_resolve(ok_zone, result_rs["name"], result_rs["type"]) rdata_strings = rdata(answers) - print("GOT ANSWERS BACK FOR INITIAL CREATE:") - print(str(rdata_strings)) # Update the record set, adding a new record to the existing one modified_records = [ @@ -426,8 +420,6 @@ def test_update_existing_record_set_add_record(shared_zone_test_context): answers = dns_resolve(ok_zone, result_rs["name"], result_rs["type"]) rdata_strings = rdata(answers) - print("GOT BACK ANSWERS FOR UPDATE") - print(str(rdata_strings)) assert_that(rdata_strings, has_length(2)) assert_that("10.2.2.2", is_in(rdata_strings)) assert_that("4.4.4.8", is_in(rdata_strings)) @@ -542,9 +534,7 @@ def 
test_update_ipv4_ptr_recordset_with_verify(shared_zone_test_context): new_ptr_target = "www.vinyldns." new_rs = result_rs - print(new_rs) new_rs["records"][0]["ptrdname"] = new_ptr_target - print(new_rs) result = client.update_recordset(new_rs, status=202) result_rs = result["recordSet"] @@ -552,7 +542,6 @@ def test_update_ipv4_ptr_recordset_with_verify(shared_zone_test_context): verify_recordset(result_rs, new_rs) - print(result_rs) records = result_rs["records"] assert_that(records[0]["ptrdname"], is_(new_ptr_target)) @@ -594,9 +583,7 @@ def test_update_ipv6_ptr_recordset(shared_zone_test_context): new_ptr_target = "www.vinyldns." new_rs = result_rs - print(new_rs) new_rs["records"][0]["ptrdname"] = new_ptr_target - print(new_rs) result = client.update_recordset(new_rs, status=202) result_rs = result["recordSet"] @@ -604,7 +591,6 @@ def test_update_ipv6_ptr_recordset(shared_zone_test_context): verify_recordset(result_rs, new_rs) - print(result_rs) records = result_rs["records"] assert_that(records[0]["ptrdname"], is_(new_ptr_target)) @@ -698,7 +684,6 @@ def test_at_update_recordset(shared_zone_test_context): } result = client.create_recordset(new_rs, status=202) - print(str(result)) assert_that(result["changeType"], is_("Create")) assert_that(result["status"], is_("Pending")) diff --git a/modules/api/functional_test/live_tests/shared_zone_test_context.py b/modules/api/functional_test/live_tests/shared_zone_test_context.py index 2aa0e1362..f8359e7e4 100644 --- a/modules/api/functional_test/live_tests/shared_zone_test_context.py +++ b/modules/api/functional_test/live_tests/shared_zone_test_context.py @@ -20,60 +20,10 @@ class SharedZoneTestContext(object): """ _data_cache: MutableMapping[str, MutableMapping[str, Mapping]] = {} - @property - def ok_zone(self) -> Mapping: - return self.attempt_retrieve_value("_ok_zone") - - @property - def shared_zone(self) -> Mapping: - return self.attempt_retrieve_value("_shared_zone") - - @property - def history_zone(self) -> 
Mapping: - return self.attempt_retrieve_value("_history_zone") - - @property - def dummy_zone(self) -> Mapping: - return self.attempt_retrieve_value("_dummy_zone") - - @property - def ip6_reverse_zone(self) -> Mapping: - return self.attempt_retrieve_value("_ip6_reverse_zone") - - @property - def ip6_16_nibble_zone(self) -> Mapping: - return self.attempt_retrieve_value("_ip6_16_nibble_zone") - - @property - def ip4_reverse_zone(self) -> Mapping: - return self.attempt_retrieve_value("_ip4_reverse_zone") - - @property - def classless_base_zone(self) -> Mapping: - return self.attempt_retrieve_value("_classless_base_zone") - - @property - def classless_zone_delegation_zone(self) -> Mapping: - return self.attempt_retrieve_value("_classless_zone_delegation_zone") - - @property - def system_test_zone(self) -> Mapping: - return self.attempt_retrieve_value("_system_test_zone") - - @property - def parent_zone(self) -> Mapping: - return self.attempt_retrieve_value("_parent_zone") - - @property - def ds_zone(self) -> Mapping: - return self.attempt_retrieve_value("_ds_zone") - - @property - def requires_review_zone(self) -> Mapping: - return self.attempt_retrieve_value("_requires_review_zone") def __init__(self, partition_id: str): self.partition_id = partition_id + self.setup_started = False self.ok_vinyldns_client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "okAccessKey", "okSecretKey") self.dummy_vinyldns_client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "dummyAccessKey", "dummySecretKey") self.shared_zone_vinyldns_client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, "sharedZoneUserAccessKey", "sharedZoneUserSecretKey") @@ -87,7 +37,7 @@ class SharedZoneTestContext(object): self.list_zones_client = self.list_zones.client self.list_records_context = ListRecordSetsTestContext(partition_id) self.list_groups_context = ListGroupsTestContext(partition_id) - self.list_batch_summaries_context = None + self.list_batch_summaries_context = 
ListBatchChangeSummariesTestContext(partition_id) self.dummy_group = None self.ok_group = None @@ -96,25 +46,30 @@ class SharedZoneTestContext(object): self.group_activity_created = None self.group_activity_updated = None - self._history_zone = None - self._ok_zone = None - self._dummy_zone = None - self._ip6_reverse_zone = None - self._ip6_16_nibble_zone = None - self._ip4_reverse_zone = None - self._classless_base_zone = None - self._classless_zone_delegation_zone = None - self._system_test_zone = None - self._parent_zone = None - self._ds_zone = None - self._requires_review_zone = None - self._shared_zone = None + self.history_zone = None + self.ok_zone = None + self.dummy_zone = None + self.ip6_reverse_zone = None + self.ip6_16_nibble_zone = None + self.ip4_reverse_zone = None + self.classless_base_zone = None + self.classless_zone_delegation_zone = None + self.system_test_zone = None + self.parent_zone = None + self.ds_zone = None + self.requires_review_zone = None + self.shared_zone = None self.ip4_10_prefix = None self.ip4_classless_prefix = None self.ip6_prefix = None def setup(self): + if self.setup_started: + # Safeguard against reentrance + return + self.setup_started = True + partition_id = self.partition_id try: ok_group = { @@ -181,7 +136,11 @@ class SharedZoneTestContext(object): "primaryServer": VinylDNSTestContext.name_server_ip } }, status=202) - self._history_zone = history_zone_change["zone"] + self.history_zone = history_zone_change["zone"] + + # initialize history + self.history_client.wait_until_zone_active(history_zone_change["zone"]["id"]) + self.init_history() ok_zone_change = self.ok_vinyldns_client.create_zone( { @@ -205,7 +164,7 @@ class SharedZoneTestContext(object): "primaryServer": VinylDNSTestContext.name_server_ip } }, status=202) - self._ok_zone = ok_zone_change["zone"] + self.ok_zone = ok_zone_change["zone"] dummy_zone_change = self.dummy_vinyldns_client.create_zone( { @@ -229,7 +188,7 @@ class SharedZoneTestContext(object): 
"primaryServer": VinylDNSTestContext.name_server_ip } }, status=202) - self._dummy_zone = dummy_zone_change["zone"] + self.dummy_zone = dummy_zone_change["zone"] self.ip6_prefix = f"fd69:27cc:fe9{partition_id}" ip6_reverse_zone_change = self.ok_vinyldns_client.create_zone( @@ -255,7 +214,7 @@ class SharedZoneTestContext(object): } }, status=202 ) - self._ip6_reverse_zone = ip6_reverse_zone_change["zone"] + self.ip6_reverse_zone = ip6_reverse_zone_change["zone"] ip6_16_nibble_zone_change = self.ok_vinyldns_client.create_zone( { @@ -267,7 +226,7 @@ class SharedZoneTestContext(object): "backendId": "func-test-backend" }, status=202 ) - self._ip6_16_nibble_zone = ip6_16_nibble_zone_change["zone"] + self.ip6_16_nibble_zone = ip6_16_nibble_zone_change["zone"] self.ip4_10_prefix = f"10.{partition_id}" ip4_reverse_zone_change = self.ok_vinyldns_client.create_zone( @@ -293,7 +252,7 @@ class SharedZoneTestContext(object): } }, status=202 ) - self._ip4_reverse_zone = ip4_reverse_zone_change["zone"] + self.ip4_reverse_zone = ip4_reverse_zone_change["zone"] self.ip4_classless_prefix = f"192.0.{partition_id}" classless_base_zone_change = self.ok_vinyldns_client.create_zone( @@ -319,7 +278,7 @@ class SharedZoneTestContext(object): } }, status=202 ) - self._classless_base_zone = classless_base_zone_change["zone"] + self.classless_base_zone = classless_base_zone_change["zone"] classless_zone_delegation_change = self.ok_vinyldns_client.create_zone( { @@ -344,7 +303,7 @@ class SharedZoneTestContext(object): } }, status=202 ) - self._classless_zone_delegation_zone = classless_zone_delegation_change["zone"] + self.classless_zone_delegation_zone = classless_zone_delegation_change["zone"] system_test_zone_change = self.ok_vinyldns_client.create_zone( { @@ -369,7 +328,7 @@ class SharedZoneTestContext(object): } }, status=202 ) - self._system_test_zone = system_test_zone_change["zone"] + self.system_test_zone = system_test_zone_change["zone"] # parent zone gives access to the dummy user, 
dummy user cannot manage ns records parent_zone_change = self.ok_vinyldns_client.create_zone( @@ -403,7 +362,7 @@ class SharedZoneTestContext(object): "primaryServer": VinylDNSTestContext.name_server_ip } }, status=202) - self._parent_zone = parent_zone_change["zone"] + self.parent_zone = parent_zone_change["zone"] # mimicking the spec example ds_zone_change = self.ok_vinyldns_client.create_zone( @@ -428,7 +387,7 @@ class SharedZoneTestContext(object): "primaryServer": VinylDNSTestContext.name_server_ip } }, status=202) - self._ds_zone = ds_zone_change["zone"] + self.ds_zone = ds_zone_change["zone"] # zone with name configured for manual review requires_review_zone_change = self.ok_vinyldns_client.create_zone( @@ -440,7 +399,7 @@ class SharedZoneTestContext(object): "isTest": True, "backendId": "func-test-backend" }, status=202) - self._requires_review_zone = requires_review_zone_change["zone"] + self.requires_review_zone = requires_review_zone_change["zone"] # Shared zone shared_zone_change = self.support_user_client.create_zone( @@ -465,7 +424,7 @@ class SharedZoneTestContext(object): "primaryServer": VinylDNSTestContext.name_server_ip } }, status=202) - self._shared_zone = shared_zone_change["zone"] + self.shared_zone = shared_zone_change["zone"] # wait until our zones are created self.ok_vinyldns_client.wait_until_zone_active(system_test_zone_change["zone"]["id"]) @@ -480,23 +439,13 @@ class SharedZoneTestContext(object): self.ok_vinyldns_client.wait_until_zone_active(parent_zone_change["zone"]["id"]) self.ok_vinyldns_client.wait_until_zone_active(ds_zone_change["zone"]["id"]) self.ok_vinyldns_client.wait_until_zone_active(requires_review_zone_change["zone"]["id"]) - self.history_client.wait_until_zone_active(history_zone_change["zone"]["id"]) self.shared_zone_vinyldns_client.wait_until_zone_active(shared_zone_change["zone"]["id"]) - # validate all in there - zones = self.dummy_vinyldns_client.list_zones()["zones"] - assert_that(len(zones), is_(2)) - zones = 
self.ok_vinyldns_client.list_zones()["zones"] - assert_that(len(zones), is_(11)) - - # initialize history - self.init_history() - # initialize group activity self.init_group_activity() # initialize list zones, only do this when constructing the whole! - self.list_zones.build() + self.list_zones.setup() # note: there are no state to load, the tests only need the client self.list_zones_client = self.list_zones.client @@ -505,9 +454,7 @@ class SharedZoneTestContext(object): self.list_records_context.setup() # build the list of groups - self.list_groups_context.build() - - self.list_batch_summaries_context = ListBatchChangeSummariesTestContext() + self.list_groups_context.setup() except Exception: # Cleanup if setup fails self.tear_down() @@ -519,7 +466,7 @@ class SharedZoneTestContext(object): # change the zone nine times to we have update events in zone change history, # ten total changes including creation for i in range(2, 11): - zone_update = copy.deepcopy(self._history_zone) + zone_update = copy.deepcopy(self.history_zone) zone_update["connection"]["key"] = VinylDNSTestContext.dns_key zone_update["transferConnection"]["key"] = VinylDNSTestContext.dns_key zone_update["email"] = "i.changed.this.{0}.times@history-test.com".format(i) @@ -527,11 +474,11 @@ class SharedZoneTestContext(object): # create some record sets test_a = TestData.A.copy() - test_a["zoneId"] = self._history_zone["id"] + test_a["zoneId"] = self.history_zone["id"] test_aaaa = TestData.AAAA.copy() - test_aaaa["zoneId"] = self._history_zone["id"] + test_aaaa["zoneId"] = self.history_zone["id"] test_cname = TestData.CNAME.copy() - test_cname["zoneId"] = self._history_zone["id"] + test_cname["zoneId"] = self.history_zone["id"] a_record = self.history_client.create_recordset(test_a, status=202)["recordSet"] aaaa_record = self.history_client.create_recordset(test_aaaa, status=202)["recordSet"] @@ -574,13 +521,7 @@ class SharedZoneTestContext(object): def init_group_activity(self): client = 
self.ok_vinyldns_client - group_name = "test-list-group-activity-max-item-success" - - # cleanup existing group if it's already in there - groups = client.list_all_my_groups() - existing = [grp for grp in groups if grp["name"] == group_name] - for grp in existing: - client.delete_group(grp["id"], status=200) + group_name = f"test-list-group-activity-max-item-success{self.partition_id}" members = [{"id": "ok"}] new_group = { @@ -625,12 +566,11 @@ class SharedZoneTestContext(object): if self.list_groups_context: self.list_groups_context.tear_down() - clear_zones(self.dummy_vinyldns_client) - clear_zones(self.ok_vinyldns_client) - clear_zones(self.history_client) - clear_groups(self.dummy_vinyldns_client, "global-acl-group-id") - clear_groups(self.ok_vinyldns_client, "global-acl-group-id") - clear_groups(self.history_client) + for client in self.clients: + client.clear_zones() + + for client in self.clients: + client.clear_groups() # Close all clients for client in self.clients: @@ -648,30 +588,4 @@ class SharedZoneTestContext(object): success = group in client.list_all_my_groups(status=200) time.sleep(.05) retries -= 1 - assert_that(success, is_(True)) - - def attempt_retrieve_value(self, attribute_name: str) -> Mapping: - """ - Attempts to retrieve the data for the attribute given by `attribute_name` - :param attribute_name: The name of the attribute for which to attempt to retrieve the value - :return: The value of the attribute given by `attribute_name` - """ - if not VinylDNSTestContext.enable_safety_check: - # Just return the real data - return getattr(self, attribute_name) - - # Get the real data, stored on this instance - real_data = getattr(self, attribute_name) - - # If we don't have a cache of the original value, make a copy and cache it - if self._data_cache.get(attribute_name) is None: - self._data_cache[attribute_name] = {"caller": "", "data": copy.deepcopy(real_data)} - else: - print("last caller: " + str(self._data_cache[attribute_name]["caller"])) - 
assert_that(real_data, has_entries(self._data_cache[attribute_name]["data"])) - - # Set last known caller to print if our assertion fails - self._data_cache[attribute_name]["caller"] = inspect.stack()[2][3] - - # Return the data - return self._data_cache[attribute_name]["data"] + assert_that(success, is_(True)) \ No newline at end of file diff --git a/modules/api/functional_test/live_tests/zones/create_zone_test.py b/modules/api/functional_test/live_tests/zones/create_zone_test.py index 419410ace..fbb5bed16 100644 --- a/modules/api/functional_test/live_tests/zones/create_zone_test.py +++ b/modules/api/functional_test/live_tests/zones/create_zone_test.py @@ -245,8 +245,6 @@ def test_create_zone_no_connection_uses_defaults(shared_zone_test_context): # Check response from create assert_that(zone["name"], is_(zone_name + ".")) - print("`connection` not in zone = " + "connection" not in zone) - assert_that("connection" not in zone) assert_that("transferConnection" not in zone) diff --git a/modules/api/functional_test/live_tests/zones/list_zones_test.py b/modules/api/functional_test/live_tests/zones/list_zones_test.py index 12493ed63..1279f5b2f 100644 --- a/modules/api/functional_test/live_tests/zones/list_zones_test.py +++ b/modules/api/functional_test/live_tests/zones/list_zones_test.py @@ -12,7 +12,7 @@ def test_list_zones_success(list_zone_context, shared_zone_test_context): """ Test that we can retrieve a list of the user's zones """ - result = shared_zone_test_context.list_zones_client.list_zones(status=200) + result = shared_zone_test_context.list_zones_client.list_zones(name_filter=f"*{shared_zone_test_context.partition_id}", status=200) retrieved = result["zones"] assert_that(retrieved, has_length(5)) @@ -20,6 +20,8 @@ def test_list_zones_success(list_zone_context, shared_zone_test_context): assert_that(retrieved, has_item(has_entry("adminGroupName", list_zone_context.list_zones_group["name"]))) assert_that(retrieved, has_item(has_entry("backendId", 
"func-test-backend"))) + assert_that(result["nameFilter"], is_(f"*{shared_zone_test_context.partition_id}")) + def test_list_zones_max_items_100(shared_zone_test_context): """ @@ -56,7 +58,7 @@ def test_list_zones_no_search_first_page(list_zone_context, shared_zone_test_con """ Test that the first page of listing zones returns correctly when no name filter is provided """ - result = shared_zone_test_context.list_zones_client.list_zones(max_items=3) + result = shared_zone_test_context.list_zones_client.list_zones(name_filter=f"*{shared_zone_test_context.partition_id}", max_items=3) zones = result["zones"] assert_that(zones, has_length(3)) @@ -67,14 +69,18 @@ def test_list_zones_no_search_first_page(list_zone_context, shared_zone_test_con assert_that(result["nextId"], is_(list_zone_context.search_zone3["name"])) assert_that(result["maxItems"], is_(3)) assert_that(result, is_not(has_key("startFrom"))) - assert_that(result, is_not(has_key("nameFilter"))) + + assert_that(result["nameFilter"], is_(f"*{shared_zone_test_context.partition_id}")) def test_list_zones_no_search_second_page(list_zone_context, shared_zone_test_context): """ Test that the second page of listing zones returns correctly when no name filter is provided """ - result = shared_zone_test_context.list_zones_client.list_zones(start_from=list_zone_context.search_zone2["name"], max_items=2, status=200) + result = shared_zone_test_context.list_zones_client.list_zones(name_filter=f"*{shared_zone_test_context.partition_id}", + start_from=list_zone_context.search_zone2["name"], + max_items=2, + status=200) zones = result["zones"] assert_that(zones, has_length(2)) @@ -84,14 +90,18 @@ def test_list_zones_no_search_second_page(list_zone_context, shared_zone_test_co assert_that(result["nextId"], is_(list_zone_context.non_search_zone1["name"])) assert_that(result["maxItems"], is_(2)) assert_that(result["startFrom"], is_(list_zone_context.search_zone2["name"])) - assert_that(result, is_not(has_key("nameFilter"))) + + 
assert_that(result["nameFilter"], is_(f"*{shared_zone_test_context.partition_id}")) def test_list_zones_no_search_last_page(list_zone_context, shared_zone_test_context): """ Test that the last page of listing zones returns correctly when no name filter is provided """ - result = shared_zone_test_context.list_zones_client.list_zones(start_from=list_zone_context.search_zone3["name"], max_items=4, status=200) + result = shared_zone_test_context.list_zones_client.list_zones(name_filter=f"*{shared_zone_test_context.partition_id}", + start_from=list_zone_context.search_zone3["name"], + max_items=4, + status=200) zones = result["zones"] assert_that(zones, has_length(2)) @@ -101,14 +111,14 @@ def test_list_zones_no_search_last_page(list_zone_context, shared_zone_test_cont assert_that(result, is_not(has_key("nextId"))) assert_that(result["maxItems"], is_(4)) assert_that(result["startFrom"], is_(list_zone_context.search_zone3["name"])) - assert_that(result, is_not(has_key("nameFilter"))) + assert_that(result["nameFilter"], is_(f"*{shared_zone_test_context.partition_id}")) def test_list_zones_with_search_first_page(list_zone_context, shared_zone_test_context): """ Test that the first page of listing zones returns correctly when a name filter is provided """ - result = shared_zone_test_context.list_zones_client.list_zones(name_filter="*searched*", max_items=2, status=200) + result = shared_zone_test_context.list_zones_client.list_zones(name_filter=f"*searched*{shared_zone_test_context.partition_id}", max_items=2, status=200) zones = result["zones"] assert_that(zones, has_length(2)) @@ -117,7 +127,7 @@ def test_list_zones_with_search_first_page(list_zone_context, shared_zone_test_c assert_that(result["nextId"], is_(list_zone_context.search_zone2["name"])) assert_that(result["maxItems"], is_(2)) - assert_that(result["nameFilter"], is_("*searched*")) + assert_that(result["nameFilter"], is_(f"*searched*{shared_zone_test_context.partition_id}")) assert_that(result, 
is_not(has_key("startFrom"))) diff --git a/modules/api/functional_test/live_tests/zones/update_zone_test.py b/modules/api/functional_test/live_tests/zones/update_zone_test.py index e869fb120..5a964bd8c 100644 --- a/modules/api/functional_test/live_tests/zones/update_zone_test.py +++ b/modules/api/functional_test/live_tests/zones/update_zone_test.py @@ -729,9 +729,6 @@ def test_user_can_update_zone_to_another_admin_group(shared_zone_test_context): zone = result["zone"] client.wait_until_zone_active(result["zone"]["id"]) - import json - print(json.dumps(zone, indent=3)) - new_joint_group = { "name": "new-ok-group", "email": "test@test.com", diff --git a/modules/api/functional_test/pytest.ini b/modules/api/functional_test/pytest.ini index 4e186a482..07d7f297e 100644 --- a/modules/api/functional_test/pytest.ini +++ b/modules/api/functional_test/pytest.ini @@ -1,4 +1,3 @@ [pytest] norecursedirs=.virtualenv eggs .venv_win addopts = -rfesxX --capture=sys --junitxml=../target/pytest_reports/pytest.xml --durations=30 - diff --git a/modules/api/functional_test/requirements.txt b/modules/api/functional_test/requirements.txt index 716727e4b..decd2c187 100644 --- a/modules/api/functional_test/requirements.txt +++ b/modules/api/functional_test/requirements.txt @@ -3,10 +3,10 @@ pytz>=2014 pytest==6.2.5 mock==4.0.3 dnspython==2.1.0 -boto3==1.18.47 -botocore==1.21.47 +boto3==1.18.51 +botocore==1.21.51 requests==2.26.0 pytest-xdist==2.4.0 python-dateutil==2.8.2 -filelock==3.0.12 +filelock==3.2.0 pytest-custom_exit_code==0.3.0 \ No newline at end of file diff --git a/modules/api/functional_test/run.sh b/modules/api/functional_test/run.sh index f014b523b..c47998611 100755 --- a/modules/api/functional_test/run.sh +++ b/modules/api/functional_test/run.sh @@ -1,12 +1,13 @@ #!/usr/bin/env bash -set -euo pipefail +set -eo pipefail +ROOT_DIR=$(cd -P -- "$(dirname -- "$0")" && pwd -P) UPDATE_DEPS="" if [ "$1" == "--update" ]; then UPDATE_DEPS="$1" shift fi -PARAMS=("$@") -./pytest.sh 
"${UPDATE_DEPS}" --suppress-no-test-exit-code -v live_tests "${PARAMS[@]}" +cd "${ROOT_DIR}" +"./pytest.sh" "${UPDATE_DEPS}" -n4 --suppress-no-test-exit-code -v live_tests "$@" diff --git a/modules/api/functional_test/utils.py b/modules/api/functional_test/utils.py index 12b425bd1..449119e29 100644 --- a/modules/api/functional_test/utils.py +++ b/modules/api/functional_test/utils.py @@ -101,7 +101,6 @@ def dns_do_command(zone, record_name, record_type, command, ttl=0, rdata=""): (name_server, name_server_port) = dns_server_port(zone) fqdn = record_name + "." + zone["name"] - print("updating " + fqdn + " to have data " + rdata) update = dns.update.Update(zone["name"], keyring=keyring) if command == "add": @@ -198,9 +197,6 @@ def parse_record(record_string): # for each record, we have exactly 4 fields in order: 1 record name; 2 TTL; 3 DCLASS; 4 TYPE; 5 RDATA parts = record_string.split(" ") - print("record parts") - print(str(parts)) - # any parts over 4 have to be kept together offset = record_string.find(parts[3]) + len(parts[3]) + 1 length = len(record_string) - offset @@ -214,8 +210,6 @@ def parse_record(record_string): "rdata": record_data } - print("parsed record:") - print(str(record)) return record @@ -318,37 +312,42 @@ def remove_classless_acl_rules(test_context, rules): def clear_ok_acl_rules(test_context): zone = test_context.ok_zone - zone["acl"]["rules"] = [] - update_change = test_context.ok_vinyldns_client.update_zone(zone, status=(202, 404)) - test_context.ok_vinyldns_client.wait_until_zone_change_status_synced(update_change) + if zone is not None and "acl" in zone and "rules" in zone["acl"]: + zone["acl"]["rules"] = [] + update_change = test_context.ok_vinyldns_client.update_zone(zone, status=(202, 404)) + test_context.ok_vinyldns_client.wait_until_zone_change_status_synced(update_change) def clear_shared_zone_acl_rules(test_context): zone = test_context.shared_zone - zone["acl"]["rules"] = [] - update_change = 
test_context.shared_zone_vinyldns_client.update_zone(zone, status=(202, 404)) - test_context.shared_zone_vinyldns_client.wait_until_zone_change_status_synced(update_change) + if zone is not None and "acl" in zone and "rules" in zone["acl"]: + zone["acl"]["rules"] = [] + update_change = test_context.shared_zone_vinyldns_client.update_zone(zone, status=(202, 404)) + test_context.shared_zone_vinyldns_client.wait_until_zone_change_status_synced(update_change) def clear_ip4_acl_rules(test_context): zone = test_context.ip4_reverse_zone - zone["acl"]["rules"] = [] - update_change = test_context.ok_vinyldns_client.update_zone(zone, status=(202, 404)) - test_context.ok_vinyldns_client.wait_until_zone_change_status_synced(update_change) + if zone is not None and "acl" in zone and "rules" in zone["acl"]: + zone["acl"]["rules"] = [] + update_change = test_context.ok_vinyldns_client.update_zone(zone, status=(202, 404)) + test_context.ok_vinyldns_client.wait_until_zone_change_status_synced(update_change) def clear_ip6_acl_rules(test_context): zone = test_context.ip6_reverse_zone - zone["acl"]["rules"] = [] - update_change = test_context.ok_vinyldns_client.update_zone(zone, status=(202, 404)) - test_context.ok_vinyldns_client.wait_until_zone_change_status_synced(update_change) + if zone is not None and "acl" in zone and "rules" in zone["acl"]: + zone["acl"]["rules"] = [] + update_change = test_context.ok_vinyldns_client.update_zone(zone, status=(202, 404)) + test_context.ok_vinyldns_client.wait_until_zone_change_status_synced(update_change) def clear_classless_acl_rules(test_context): zone = test_context.classless_zone_delegation_zone - zone["acl"]["rules"] = [] - update_change = test_context.ok_vinyldns_client.update_zone(zone, status=(202, 404)) - test_context.ok_vinyldns_client.wait_until_zone_change_status_synced(update_change) + if zone is not None and "acl" in zone and "rules" in zone["acl"]: + zone["acl"]["rules"] = [] + update_change = 
test_context.ok_vinyldns_client.update_zone(zone, status=(202, 404)) + test_context.ok_vinyldns_client.wait_until_zone_change_status_synced(update_change) def seed_text_recordset(client, record_name, zone, records=[{"text": "someText"}]): @@ -361,10 +360,7 @@ def seed_text_recordset(client, record_name, zone, records=[{"text": "someText"} } result = client.create_recordset(new_rs, status=202) result_rs = result["recordSet"] - if client.wait_until_recordset_exists(result_rs["zoneId"], result_rs["id"]): - print("\r\n!!! record set exists !!!") - else: - print("\r\n!!! record set does not exist !!!") + client.wait_until_recordset_exists(result_rs["zoneId"], result_rs["id"]) return result_rs @@ -379,10 +375,7 @@ def seed_ptr_recordset(client, record_name, zone, records=[{"ptrdname": "foo.com } result = client.create_recordset(new_rs, status=202) result_rs = result["recordSet"] - if client.wait_until_recordset_exists(result_rs["zoneId"], result_rs["id"]): - print("\r\n!!! record set exists !!!") - else: - print("\r\n!!! 
record set does not exist !!!") + client.wait_until_recordset_exists(result_rs["zoneId"], result_rs["id"]) return result_rs @@ -536,14 +529,19 @@ def clear_recordset_list(to_delete, client): try: delete_result = client.delete_recordset(result_rs["zone"]["id"], result_rs["recordSet"]["id"], status=202) delete_changes.append(delete_result) + except AssertionError: + pass except Exception: traceback.print_exc() + raise for change in delete_changes: try: client.wait_until_recordset_change_status(change, "Complete") + except AssertionError: + pass except Exception: traceback.print_exc() - pass + raise def clear_zoneid_rsid_tuple_list(to_delete, client): @@ -552,15 +550,19 @@ def clear_zoneid_rsid_tuple_list(to_delete, client): try: delete_result = client.delete_recordset(tup[0], tup[1], status=202) delete_changes.append(delete_result) + except AssertionError: + pass except Exception: traceback.print_exc() - pass + raise for change in delete_changes: try: client.wait_until_recordset_change_status(change, "Complete") + except AssertionError: + pass except Exception: traceback.print_exc() - pass + raise def get_group_json(group_name, email="test@test.com", description="this is a description", members=[{"id": "ok"}], diff --git a/modules/api/functional_test/vinyldns_python.py b/modules/api/functional_test/vinyldns_python.py index 41475012d..63df14233 100644 --- a/modules/api/functional_test/vinyldns_python.py +++ b/modules/api/functional_test/vinyldns_python.py @@ -28,7 +28,8 @@ class VinylDNSClient(object): "Accept": "application/json, text/plain", "Content-Type": "application/json" } - + self.created_zones = [] + self.created_groups = [] self.signer = AwsSigV4RequestSigner(self.index_url, access_key, secret_key) self.session = self.requests_retry_session() self.session_not_found_ok = self.requests_retry_not_found_ok_session() @@ -39,11 +40,18 @@ class VinylDNSClient(object): def __exit__(self, exc_type, exc_val, exc_tb): self.tear_down() + def clear_groups(self): + for 
group_id in self.created_groups: + self.delete_group(group_id) + + def clear_zones(self): + self.abandon_zones(self.created_zones) + def tear_down(self): self.session.close() self.session_not_found_ok.close() - def requests_retry_not_found_ok_session(self, retries=5, backoff_factor=0.4, status_forcelist=(500, 502, 504), session=None): + def requests_retry_not_found_ok_session(self, retries=20, backoff_factor=0.1, status_forcelist=(500, 502, 504), session=None): session = session or requests.Session() retry = Retry( total=retries, @@ -57,7 +65,7 @@ class VinylDNSClient(object): session.mount("https://", adapter) return session - def requests_retry_session(self, retries=5, backoff_factor=0.4, status_forcelist=(500, 502, 504), session=None): + def requests_retry_session(self, retries=20, backoff_factor=0.1, status_forcelist=(500, 502, 504), session=None): session = session or requests.Session() retry = Retry( total=retries, @@ -104,13 +112,9 @@ class VinylDNSClient(object): if status_code is not None: if isinstance(status_code, Iterable): - if response.status_code not in status_code: - print(response.text) - assert_that(response.status_code, is_in(status_code)) + assert_that(response.status_code, is_in(status_code), response.text) else: - if response.status_code != status_code: - print(response.text) - assert_that(response.status_code, is_(status_code)) + assert_that(response.status_code, is_(status_code), response.text) try: return response.status_code, response.json() @@ -177,6 +181,9 @@ class VinylDNSClient(object): url = urljoin(self.index_url, "/groups") response, data = self.make_request(url, "POST", self.headers, json.dumps(group), **kwargs) + if type(data) != str and "id" in data: + self.created_groups.append(data["id"]) + return data def get_group(self, group_id, **kwargs): @@ -213,7 +220,7 @@ class VinylDNSClient(object): return data - def list_my_groups(self, group_name_filter=None, start_from=None, max_items=None, ignore_access=False, **kwargs): + def 
list_my_groups(self, group_name_filter=None, start_from=None, max_items=200, ignore_access=False, **kwargs): """ Retrieves my groups :param start_from: the start key of the page @@ -332,6 +339,9 @@ class VinylDNSClient(object): url = urljoin(self.index_url, "/zones") response, data = self.make_request(url, "POST", self.headers, json.dumps(zone), **kwargs) + if type(data) != str and "zone" in data: + self.created_zones.append(data["zone"]["id"]) + return data def update_zone(self, zone, **kwargs): @@ -361,6 +371,7 @@ class VinylDNSClient(object): :param zone_id: the id of the zone to be deleted :return: nothing, will fail if the status code was not expected """ + url = urljoin(self.index_url, "/zones/{0}".format(zone_id)) response, data = self.make_request(url, "DELETE", self.headers, not_found_ok=True, **kwargs) @@ -471,7 +482,6 @@ class VinylDNSClient(object): url = urljoin(self.index_url, "/zones/{0}/recordsets".format(recordset["zoneId"])) response, data = self.make_request(url, "POST", self.headers, json.dumps(recordset), **kwargs) - return data def delete_recordset(self, zone_id, rs_id, **kwargs): @@ -752,7 +762,7 @@ class VinylDNSClient(object): Waits a period of time for the record set creation to complete. :param zone_id: the id of the zone the record set lives in - :param record_set_id: the id of the recprdset that has been created. + :param record_set_id: the id of the recordset that has been created. 
:param kw: Additional parameters for the http request :return: True when the recordset creation is complete False if the timeout expires """ @@ -764,6 +774,7 @@ class VinylDNSClient(object): time.sleep(RETRY_WAIT) response, data = self.make_request(url, "GET", self.headers, not_found_ok=True, status=(200, 404), **kwargs) + assert_that(response, equal_to(200), data) if response == 200: return data diff --git a/modules/api/src/main/scala/vinyldns/api/backend/dns/DnsBackend.scala b/modules/api/src/main/scala/vinyldns/api/backend/dns/DnsBackend.scala index f4f869ce1..de888e13c 100644 --- a/modules/api/src/main/scala/vinyldns/api/backend/dns/DnsBackend.scala +++ b/modules/api/src/main/scala/vinyldns/api/backend/dns/DnsBackend.scala @@ -17,11 +17,11 @@ package vinyldns.api.backend.dns import java.net.SocketAddress - import cats.effect._ import cats.syntax.all._ import org.slf4j.{Logger, LoggerFactory} import org.xbill.DNS +import org.xbill.DNS.Name import vinyldns.api.domain.zone.ZoneTooLargeError import vinyldns.core.crypto.CryptoAlgebra import vinyldns.core.domain.backend.{Backend, BackendResponse} @@ -166,7 +166,7 @@ class DnsBackend(val id: String, val resolver: DNS.SimpleResolver, val xfrInfo: logger.info(s"Querying for dns dnsRecordName='${dnsName.toString}'; recordType='$typ'") val lookup = new DNS.Lookup(dnsName, toDnsRecordType(typ)) lookup.setResolver(resolver) - lookup.setSearchPath(Array.empty[String]) + lookup.setSearchPath(List(Name.empty).asJava) lookup.setCache(null) Right(new DnsQuery(lookup, zoneDnsName(zoneName))) @@ -283,6 +283,7 @@ object DnsBackend { val (host, port) = parseHostAndPort(conn.primaryServer) val resolver = new DNS.SimpleResolver(host) resolver.setPort(port) + resolver.setTCP(true) tsig.foreach(resolver.setTSIGKey) resolver } diff --git a/modules/api/src/main/scala/vinyldns/api/backend/dns/DnsConversions.scala b/modules/api/src/main/scala/vinyldns/api/backend/dns/DnsConversions.scala index efe9647ac..c27646b5b 100644 --- 
a/modules/api/src/main/scala/vinyldns/api/backend/dns/DnsConversions.scala +++ b/modules/api/src/main/scala/vinyldns/api/backend/dns/DnsConversions.scala @@ -125,7 +125,7 @@ trait DnsConversions { /* Remove the additional record of the TSIG key from the message before generating the string */ def obscuredDnsMessage(msg: DNS.Message): DNS.Message = { val clone = msg.clone.asInstanceOf[DNS.Message] - val sections = clone.getSectionArray(DNS.Section.ADDITIONAL) + val sections = clone.getSection(DNS.Section.ADDITIONAL).asScala if (sections != null && sections.nonEmpty) { sections.filter(_.getType == DNS.Type.TSIG).foreach { tsigRecord => clone.removeRecord(tsigRecord, DNS.Section.ADDITIONAL) @@ -231,7 +231,7 @@ trait DnsConversions { def fromCNAMERecord(r: DNS.CNAMERecord, zoneName: DNS.Name, zoneId: String): RecordSet = fromDnsRecord(r, zoneName, zoneId) { data => - List(CNAMEData(Fqdn(data.getAlias.toString))) + List(CNAMEData(Fqdn(data.getTarget.toString))) } def fromDSRecord(r: DNS.DSRecord, zoneName: DNS.Name, zoneId: String): RecordSet = diff --git a/modules/core/src/main/scala/vinyldns/core/Messages.scala b/modules/core/src/main/scala/vinyldns/core/Messages.scala index 4275b1c88..fcaff7c2b 100644 --- a/modules/core/src/main/scala/vinyldns/core/Messages.scala +++ b/modules/core/src/main/scala/vinyldns/core/Messages.scala @@ -19,7 +19,8 @@ package vinyldns.core object Messages { // Error displayed when less than two letters or numbers is filled in Record Name Filter field in RecordSetSearch page - val RecordNameFilterError = "Record Name Filter field must contain at least two letters or numbers to perform a RecordSet Search." + val RecordNameFilterError = + "Record Name Filter field must contain at least two letters or numbers to perform a RecordSet Search." /* * Error displayed when attempting to create group with name that already exists @@ -28,7 +29,8 @@ object Messages { * 1. [string] group name * 2. 
[string] group email address */ - val GroupAlreadyExistsErrorMsg = "Group with name %s already exists. Please try a different name or contact %s to be added to the group." + val GroupAlreadyExistsErrorMsg = + "Group with name %s already exists. Please try a different name or contact %s to be added to the group." /* * Error displayed when deleting a group being the admin of a zone @@ -36,7 +38,8 @@ object Messages { * Placeholders: * 1. [string] group name */ - val ZoneAdminError = "%s is the admin of a zone. Cannot delete. Please transfer the ownership to another group before deleting." + val ZoneAdminError = + "%s is the admin of a zone. Cannot delete. Please transfer the ownership to another group before deleting." /* * Error displayed when deleting a group being the owner for a record set @@ -45,7 +48,8 @@ object Messages { * 1. [string] group name * 2. [string] record set id */ - val RecordSetOwnerError = "%s is the owner for a record set including %s. Cannot delete. Please transfer the ownership to another group before deleting." + val RecordSetOwnerError = + "%s is the owner for a record set including %s. Cannot delete. Please transfer the ownership to another group before deleting." /* * Error displayed when deleting a group which has an ACL rule for a zone @@ -54,7 +58,8 @@ object Messages { * 1. [string] group name * 2. [string] zone id */ - val ACLRuleError = "%s has an ACL rule for a zone including %s. Cannot delete. Please transfer the ownership to another group before deleting." + val ACLRuleError = + "%s has an ACL rule for a zone including %s. Cannot delete. Please transfer the ownership to another group before deleting." // Error displayed when NSData field is not a positive integer val NSDataError = "NS data must be a positive integer" @@ -71,6 +76,7 @@ object Messages { * 3. [string] owner group name | owner group id * 4. [string] contact email */ - val NotAuthorizedErrorMsg = "User \"%s\" is not authorized. 
Contact %s owner group: %s at %s to make DNS changes." + val NotAuthorizedErrorMsg = + "User \"%s\" is not authorized. Contact %s owner group: %s at %s to make DNS changes." } diff --git a/modules/mysql/src/main/resources/test/ddl.sql b/modules/mysql/src/main/resources/test/ddl.sql new file mode 100644 index 000000000..c54ad1a54 --- /dev/null +++ b/modules/mysql/src/main/resources/test/ddl.sql @@ -0,0 +1,238 @@ +-- This script will populate the database with the VinylDNS schema +-- It is used for testing with the H2 in-memory database where +-- migration is not necessary. +-- +-- This should be run via the INIT parameter in the H2 JDBC URL +-- Ex: "jdbc:h2:mem:vinyldns;MODE=MYSQL;DB_CLOSE_DELAY=-1;DATABASE_TO_LOWER=TRUE;INIT=RUNSCRIPT FROM 'classpath:test/ddl.sql'" +-- + +CREATE TABLE batch_change +( + id char(36) not null primary key, + user_id char(36) not null, + user_name varchar(45) not null, + created_time datetime not null, + comments varchar(1024) null, + owner_group_id char(36) null, + approval_status tinyint null, + reviewer_id char(36) null, + review_comment varchar(1024) null, + review_timestamp datetime null, + scheduled_time datetime null, + cancelled_timestamp datetime null +); + +create index batch_change_approval_status_index + on batch_change (approval_status); + +create index batch_change_user_id_created_time_index + on batch_change (user_id, created_time); + +create index batch_change_user_id_index + on batch_change (user_id); + +create table group_change +( + id char(36) not null primary key, + group_id char(36) not null, + created_timestamp bigint(13) not null, + data blob not null +); + +create index group_change_group_id_index + on group_change (group_id); + +create table `groups` +( + id char(36) not null primary key, + name varchar(256) not null, + data blob not null, + description varchar(256) null, + created_timestamp datetime not null, + email varchar(256) not null +); + +create index groups_name_index + on `groups` (name); + +create 
table membership +( + user_id char(36) not null, + group_id char(36) not null, + is_admin tinyint(1) not null, + primary key (user_id, group_id) +); + +create table message_queue +( + id char(36) not null primary key, + message_type tinyint null, + in_flight bit null, + data blob not null, + created datetime not null, + updated datetime not null, + timeout_seconds int not null, + attempts int default 0 not null +); + +create index message_queue_inflight_index + on message_queue (in_flight); + +create index message_queue_timeout_index + on message_queue (timeout_seconds); + +create index message_queue_updated_index + on message_queue (updated); + +create table record_change +( + id char(36) not null primary key, + zone_id char(36) not null, + created bigint(13) not null, + type tinyint not null, + data blob not null +); + +create index record_change_created_index + on record_change (created); + +create index record_change_zone_id_index + on record_change (zone_id); + +create table recordset +( + id char(36) not null primary key, + zone_id char(36) not null, + name varchar(256) not null, + type tinyint not null, + data blob not null, + fqdn varchar(255) not null, + owner_group_id char(36) null, + constraint recordset_zone_id_name_type_index + unique (zone_id, name, type) +); + +create index recordset_fqdn_index + on recordset (fqdn); + +create index recordset_owner_group_id_index + on recordset (owner_group_id); + +create index recordset_type_index + on recordset (type); + +create table single_change +( + id char(36) not null primary key, + seq_num smallint not null, + input_name varchar(255) not null, + change_type varchar(25) not null, + data blob not null, + status varchar(16) not null, + batch_change_id char(36) not null, + record_set_change_id char(36) null, + record_set_id char(36) null, + zone_id char(36) null, + constraint fk_single_change_batch_change_id_batch_change + foreign key (batch_change_id) references batch_change (id) + on delete cascade +); + 
+create index single_change_batch_change_id_index + on single_change (batch_change_id); + +create index single_change_record_set_change_id_index + on single_change (record_set_change_id); + +create table stats +( + id bigint auto_increment primary key, + name varchar(255) not null, + count bigint not null, + created datetime not null +); + +create index stats_name_created_index + on stats (name, created); + +create index stats_name_index + on stats (name); + +create table task +( + name varchar(255) not null primary key, + in_flight bit not null, + created datetime not null, + updated datetime null +); + +create table user +( + id char(36) not null primary key, + user_name varchar(256) not null, + access_key varchar(256) not null, + data blob not null +); + +create index user_access_key_index + on user (access_key); + +create index user_user_name_index + on user (user_name); + +create table user_change +( + change_id char(36) not null primary key, + user_id char(36) not null, + data blob not null, + created_timestamp bigint(13) not null +); + +create table zone +( + id char(36) not null primary key, + name varchar(256) not null, + admin_group_id char(36) not null, + data blob not null, + constraint zone_name_unique + unique (name) +); + +create index zone_admin_group_id_index + on zone (admin_group_id); + +create index zone_name_index + on zone (name); + +create table zone_access +( + accessor_id char(36) not null, + zone_id char(36) not null, + primary key (accessor_id, zone_id), + constraint fk_zone_access_zone_id + foreign key (zone_id) references zone (id) + on delete cascade +); + +create index zone_access_accessor_id_index + on zone_access (accessor_id); + +create index zone_access_zone_id_index + on zone_access (zone_id); + +create table zone_change +( + change_id char(36) not null primary key, + zone_id char(36) not null, + data blob not null, + created_timestamp bigint(13) not null +); + +create index zone_change_created_timestamp_index + on zone_change 
(created_timestamp); + +create index zone_change_zone_id_index + on zone_change (zone_id); + +INSERT IGNORE INTO task(name, in_flight, created, updated) +VALUES ('user_sync', 0, NOW(), NULL); diff --git a/modules/mysql/src/main/scala/vinyldns/mysql/MySqlConnector.scala b/modules/mysql/src/main/scala/vinyldns/mysql/MySqlConnector.scala index 013eb7e6e..1b63236e5 100644 --- a/modules/mysql/src/main/scala/vinyldns/mysql/MySqlConnector.scala +++ b/modules/mysql/src/main/scala/vinyldns/mysql/MySqlConnector.scala @@ -28,33 +28,38 @@ object MySqlConnector { private val logger = LoggerFactory.getLogger("MySqlConnector") def runDBMigrations(config: MySqlConnectionConfig): IO[Unit] = { - val migrationConnectionSettings = MySqlDataSourceSettings( - "flywayConnectionPool", - config.driver, - config.migrationUrl, - config.user, - config.password, - minimumIdle = Some(3) - ) + // We can skip migrations for h2, we'll use the test/ddl.sql for initializing + // that for testing + if (config.driver.contains("h2")) IO.unit + else { + val migrationConnectionSettings = MySqlDataSourceSettings( + "flywayConnectionPool", + config.driver, + config.migrationUrl, + config.user, + config.password, + minimumIdle = Some(3) + ) - getDataSource(migrationConnectionSettings).map { migrationDataSource => - logger.info("Running migrations to ready the databases") + getDataSource(migrationConnectionSettings).map { migrationDataSource => + logger.info("Running migrations to ready the databases") - val migration = new Flyway() - migration.setDataSource(migrationDataSource) - // flyway changed the default schema table name in v5.0.0 - // this allows to revert to an old naming convention if needed - config.migrationSchemaTable.foreach { tableName => - migration.setTable(tableName) + val migration = new Flyway() + migration.setDataSource(migrationDataSource) + // flyway changed the default schema table name in v5.0.0 + // this allows to revert to an old naming convention if needed + 
config.migrationSchemaTable.foreach { tableName => + migration.setTable(tableName) + } + + val placeholders = Map("dbName" -> config.name) + migration.setPlaceholders(placeholders.asJava) + migration.setSchemas(config.name) + + // Runs flyway migrations + migration.migrate() + logger.info("migrations complete") } - - val placeholders = Map("dbName" -> config.name) - migration.setPlaceholders(placeholders.asJava) - migration.setSchemas(config.name) - - // Runs flyway migrations - migration.migrate() - logger.info("migrations complete") } } diff --git a/project/Dependencies.scala b/project/Dependencies.scala index 160a98e79..9e6547319 100644 --- a/project/Dependencies.scala +++ b/project/Dependencies.scala @@ -30,7 +30,7 @@ object Dependencies { "com.github.ben-manes.caffeine" % "caffeine" % "2.2.7", "com.github.cb372" %% "scalacache-caffeine" % "0.9.4", "com.google.protobuf" % "protobuf-java" % "2.6.1", - "dnsjava" % "dnsjava" % "2.1.8", + "dnsjava" % "dnsjava" % "3.4.2", "org.apache.commons" % "commons-lang3" % "3.4", "org.apache.commons" % "commons-text" % "1.4", "org.flywaydb" % "flyway-core" % "5.1.4", @@ -73,7 +73,7 @@ object Dependencies { "io.dropwizard.metrics" % "metrics-jvm" % "3.2.2", "co.fs2" %% "fs2-core" % "2.3.0", "javax.xml.bind" % "jaxb-api" % "2.3.0", - "javax.activation" % "activation" % "1.1.1" + "javax.activation" % "activation" % "1.1.1", ) lazy val mysqlDependencies = Seq( @@ -81,7 +81,8 @@ object Dependencies { "org.mariadb.jdbc" % "mariadb-java-client" % "2.3.0", "org.scalikejdbc" %% "scalikejdbc" % scalikejdbcV, "org.scalikejdbc" %% "scalikejdbc-config" % scalikejdbcV, - "com.zaxxer" % "HikariCP" % "3.2.0" + "com.zaxxer" % "HikariCP" % "3.2.0", + "com.h2database" % "h2" % "1.4.200", ) lazy val sqsDependencies = Seq( @@ -122,7 +123,7 @@ object Dependencies { "com.nimbusds" % "oauth2-oidc-sdk" % "6.5", "com.nimbusds" % "nimbus-jose-jwt" % "7.0", "co.fs2" %% "fs2-core" % fs2V, - "de.leanovate.play-mockws" %% "play-mockws" % "2.7.1" % "test", + 
"de.leanovate.play-mockws" %% "play-mockws" % "2.7.1" % "test", "com.iheart" %% "ficus" % ficusV ) } From 07b683cbd027907411dc85de9e95ae10fffe4559 Mon Sep 17 00:00:00 2001 From: "Emerle, Ryan" Date: Fri, 15 Oct 2021 15:06:04 -0400 Subject: [PATCH 14/82] Updates - Remove old, unused scripts in `bin/` - Remove old images from release - `test` and `test-bind` are no longer necessary. Test images are in a different repo now - Remove Docker image creation from sbt build config - actual `Dockerfile` files are easier to deal with - Update scripts in `bin/` to utilize new Docker images - Update documentation for changes - Update all Docker Compose and configuration to use exposed ports on the `integration` image (19001, 19002, etc) both inside the container and outside to make testing more consistent irrespective of method - Update FlywayDB dependency to v8 to fix a weird logging bug that showed up during integration testing. See: https://github.com/flyway/flyway/issues/2270 - Add `test/api/integration` Docker container definition to be used for any integration testing - Move `module/api/functional_test` to `test/api/functional` to centralize the "integration-type" external tests and testing utilities - Move functional testing and integration image to the `test/` folder off of the root to reduce confusion with `bin/` and `docker/` --- .gitignore | 4 + AUTHORS.md | 1 + DEVELOPER_GUIDE.md | 244 ++++++++------ MAINTAINERS.md | 83 +---- README.md | 32 +- bin/build.sh | 34 -- bin/docker-publish-api.sh | 10 - bin/docker-up-api-server.sh | 58 ---- bin/docker-up-dns-server.sh | 5 - bin/func-test-api-testbind9.sh | 57 ---- bin/func-test-api-travis.sh | 57 ---- bin/func-test-api.sh | 60 +--- bin/func-test-portal.sh | 49 +-- bin/generate-aes-256-hex-key.sh | 18 -- bin/release.sh | 2 - bin/verify.sh | 18 +- build.sbt | 193 +++++------ build/README.md | 24 -- build/docker-release.sh | 8 - build/docker/test/Dockerfile | 28 -- build/docker/test/run-tests.py | 16 - 
build/docker/test/run.sh | 76 ----- docker/api/Dockerfile | 1 - docker/api/docker.conf | 8 +- docker/api/run.sh | 12 +- docker/bind9/README.md | 3 +- docker/bind9/etc/named.conf.local | 8 +- ....partition1.conf => named.conf.partition1} | 0 ....partition2.conf => named.conf.partition2} | 0 ....partition3.conf => named.conf.partition3} | 0 ....partition4.conf => named.conf.partition4} | 0 docker/docker-compose-func-test-testbind9.yml | 60 ---- docker/docker-compose-func-test.yml | 87 ----- docker/docker-compose-quick-start.yml | 2 +- docker/docker-compose.yml | 2 +- docker/elasticmq/Dockerfile | 10 - docker/elasticmq/custom.conf | 22 -- docker/elasticmq/run.sh | 8 - docker/email/.gitignore | 2 - docker/functest/Dockerfile | 23 -- docker/functest/run-tests.py | 18 -- docker/functest/run.sh | 81 ----- modules/api/functional_test/Dockerfile | 33 -- modules/api/functional_test/Makefile | 25 -- .../dns/DnsBackendIntegrationSpec.scala | 2 +- .../zone/ZoneViewLoaderIntegrationSpec.scala | 15 +- .../sns/SnsNotifierIntegrationSpec.scala | 31 +- .../route53/Route53ApiIntegrationSpec.scala | 2 +- modules/api/src/main/resources/reference.conf | 82 +++-- .../api/backend/dns/DnsBackendSpec.scala | 64 ++-- .../api/backend/dns/DnsConversionsSpec.scala | 16 +- .../universal/bin/wait-for-dependencies.sh | 28 -- .../scala/vinyldns/mysql/MySqlConnector.scala | 3 +- modules/portal/karma.conf.js | 10 +- .../backend/Route53IntegrationSpec.scala | 2 +- modules/sqs/src/it/resources/application.conf | 8 +- .../SqsMessageQueueIntegrationSpec.scala | 7 - ...sMessageQueueProviderIntegrationSpec.scala | 18 +- project/Dependencies.scala | 4 +- project/plugins.sbt | 2 - test/api/functional/Dockerfile | 29 ++ .../api/functional}/Dockerfile.dockerignore | 3 +- test/api/functional/Makefile | 45 +++ .../api/functional/test}/.gitignore | 0 .../api/functional/test}/__init__.py | 0 .../functional/test}/aws_request_signer.py | 0 .../api/functional/test}/conftest.py | 2 +- 
.../api/functional/test}/pytest.ini | 0 .../api/functional/test}/pytest.sh | 0 .../api/functional/test}/requirements.txt | 0 .../api/functional/test}/run.sh | 7 +- .../test/tests}/authentication_test.py | 0 .../tests}/batch/approve_batch_change_test.py | 0 .../tests}/batch/cancel_batch_change_test.py | 0 .../tests}/batch/create_batch_change_test.py | 0 .../tests}/batch/get_batch_change_test.py | 0 .../batch/list_batch_change_summaries_test.py | 0 .../tests}/batch/reject_batch_change_test.py | 0 .../api/functional/test/tests}/conftest.py | 0 .../test/tests}/internal/color_test.py | 0 .../test/tests}/internal/health_test.py | 0 .../test/tests}/internal/ping_test.py | 0 .../test/tests}/internal/status_test.py | 0 .../list_batch_summaries_test_context.py | 0 .../test/tests}/list_groups_test_context.py | 0 .../tests}/list_recordsets_test_context.py | 0 .../test/tests}/list_zones_test_context.py | 0 .../tests}/membership/create_group_test.py | 0 .../tests}/membership/delete_group_test.py | 0 .../membership/get_group_changes_test.py | 0 .../test/tests}/membership/get_group_test.py | 0 .../membership/list_group_admins_test.py | 0 .../membership/list_group_members_test.py | 0 .../tests}/membership/list_my_groups_test.py | 0 .../tests}/membership/update_group_test.py | 0 .../test/tests}/production_verify_test.py | 0 .../recordsets/create_recordset_test.py | 2 +- .../recordsets/delete_recordset_test.py | 2 +- .../tests}/recordsets/get_recordset_test.py | 0 .../recordsets/list_recordset_changes_test.py | 0 .../tests}/recordsets/list_recordsets_test.py | 0 .../recordsets/update_recordset_test.py | 2 +- .../test/tests}/shared_zone_test_context.py | 13 +- .../api/functional/test/tests}/test_data.py | 0 .../test/tests}/zones/create_zone_test.py | 0 .../test/tests}/zones/delete_zone_test.py | 0 .../test/tests}/zones/get_zone_test.py | 0 .../tests}/zones/list_zone_changes_test.py | 0 .../test/tests}/zones/list_zones_test.py | 0 .../test/tests}/zones/sync_zone_test.py | 0 
 .../test/tests}/zones/update_zone_test.py | 0
 .../api/functional/test}/utils.py | 0
 .../api/functional/test}/vinyldns_context.py | 0
 .../api/functional/test}/vinyldns_python.py | 0
 .../api/functional/vinyldns.conf | 10 +-
 test/api/integration/Dockerfile | 28 ++
 test/api/integration/Dockerfile.dockerignore | 15 +
 test/api/integration/Makefile | 51 +++
 test/api/integration/vinyldns.conf | 302 ++++++++++++++++++
 test/portal/functional/Dockerfile | 14 +
 .../portal/functional/Dockerfile.dockerignore | 15 +
 test/portal/functional/Makefile | 42 +++
 test/portal/functional/run.sh | 13 +
 123 files changed, 978 insertions(+), 1393 deletions(-)
 delete mode 100755 bin/build.sh
 delete mode 100755 bin/docker-publish-api.sh
 delete mode 100755 bin/docker-up-api-server.sh
 delete mode 100755 bin/docker-up-dns-server.sh
 delete mode 100755 bin/func-test-api-testbind9.sh
 delete mode 100755 bin/func-test-api-travis.sh
 delete mode 100755 bin/generate-aes-256-hex-key.sh
 delete mode 100644 build/docker/test/Dockerfile
 delete mode 100644 build/docker/test/run-tests.py
 delete mode 100755 build/docker/test/run.sh
 rename docker/bind9/etc/{named.partition1.conf => named.conf.partition1} (100%)
 rename docker/bind9/etc/{named.partition2.conf => named.conf.partition2} (100%)
 rename docker/bind9/etc/{named.partition3.conf => named.conf.partition3} (100%)
 rename docker/bind9/etc/{named.partition4.conf => named.conf.partition4} (100%)
 delete mode 100644 docker/docker-compose-func-test-testbind9.yml
 delete mode 100644 docker/docker-compose-func-test.yml
 delete mode 100644 docker/elasticmq/Dockerfile
 delete mode 100644 docker/elasticmq/custom.conf
 delete mode 100755 docker/elasticmq/run.sh
 delete mode 100644 docker/email/.gitignore
 delete mode 100644 docker/functest/Dockerfile
 delete mode 100644 docker/functest/run-tests.py
 delete mode 100755 docker/functest/run.sh
 delete mode 100644 modules/api/functional_test/Dockerfile
 delete mode 100644 modules/api/functional_test/Makefile
 delete mode 100755 modules/api/src/universal/bin/wait-for-dependencies.sh
 create mode 100644 test/api/functional/Dockerfile
 rename {modules/api/functional_test => test/api/functional}/Dockerfile.dockerignore (87%)
 create mode 100644 test/api/functional/Makefile
 rename {modules/api/functional_test => test/api/functional/test}/.gitignore (100%) mode change 100755 => 100644
 rename {modules/api/functional_test => test/api/functional/test}/__init__.py (100%)
 rename {modules/api/functional_test => test/api/functional/test}/aws_request_signer.py (100%)
 rename {modules/api/functional_test => test/api/functional/test}/conftest.py (98%)
 rename {modules/api/functional_test => test/api/functional/test}/pytest.ini (100%)
 rename {modules/api/functional_test => test/api/functional/test}/pytest.sh (100%) mode change 100755 => 100644
 rename {modules/api/functional_test => test/api/functional/test}/requirements.txt (100%)
 rename {modules/api/functional_test => test/api/functional/test}/run.sh (57%) mode change 100755 => 100644
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/authentication_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/batch/approve_batch_change_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/batch/cancel_batch_change_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/batch/create_batch_change_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/batch/get_batch_change_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/batch/list_batch_change_summaries_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/batch/reject_batch_change_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/conftest.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/internal/color_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/internal/health_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/internal/ping_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/internal/status_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/list_batch_summaries_test_context.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/list_groups_test_context.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/list_recordsets_test_context.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/list_zones_test_context.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/membership/create_group_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/membership/delete_group_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/membership/get_group_changes_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/membership/get_group_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/membership/list_group_admins_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/membership/list_group_members_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/membership/list_my_groups_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/membership/update_group_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/production_verify_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/recordsets/create_recordset_test.py (99%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/recordsets/delete_recordset_test.py (99%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/recordsets/get_recordset_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/recordsets/list_recordset_changes_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/recordsets/list_recordsets_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/recordsets/update_recordset_test.py (99%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/shared_zone_test_context.py (98%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/test_data.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/zones/create_zone_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/zones/delete_zone_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/zones/get_zone_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/zones/list_zone_changes_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/zones/list_zones_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/zones/sync_zone_test.py (100%)
 rename {modules/api/functional_test/live_tests => test/api/functional/test/tests}/zones/update_zone_test.py (100%)
 rename {modules/api/functional_test => test/api/functional/test}/utils.py (100%)
 rename {modules/api/functional_test => test/api/functional/test}/vinyldns_context.py (100%)
 rename {modules/api/functional_test => test/api/functional/test}/vinyldns_python.py (100%)
 rename modules/api/functional_test/docker.conf => test/api/functional/vinyldns.conf (96%)
 create mode 100644 test/api/integration/Dockerfile
 create mode 100644 test/api/integration/Dockerfile.dockerignore
 create mode 100644 test/api/integration/Makefile
 create mode 100644 test/api/integration/vinyldns.conf
 create mode 100644 test/portal/functional/Dockerfile
 create mode 100644 test/portal/functional/Dockerfile.dockerignore
 create mode 100644 test/portal/functional/Makefile
 create mode 100644 test/portal/functional/run.sh

diff --git a/.gitignore b/.gitignore
index 12421c8c6..f42ce6dd5 100644
--- a/.gitignore
+++ b/.gitignore
@@ -32,3 +32,7 @@ tmp.out
 project/metals.sbt
 .bsp
 docker/data
+**/.virtualenv
+**/.venv*
+**/*cache*
+
diff --git a/AUTHORS.md b/AUTHORS.md
index 6ac41f16e..ba5f2dbad 100644
--- a/AUTHORS.md
+++ b/AUTHORS.md
@@ -22,6 +22,7 @@ Thank you! If you have contributed in any way, but do not see your name here, pl
 - Luke Cori
 - Jearvon Dharrie
 - Andrew Dunn
+- Ryan Emerle
 - David Grizzanti
 - Alejandro Guirao
 - Daniel Jin
diff --git a/DEVELOPER_GUIDE.md b/DEVELOPER_GUIDE.md
index 6ef1c08c5..41ce0b58c 100644
--- a/DEVELOPER_GUIDE.md
+++ b/DEVELOPER_GUIDE.md
@@ -1,13 +1,14 @@
 # Developer Guide
 
 ## Table of Contents
+
 - [Developer Requirements](#developer-requirements)
 - [Project Layout](#project-layout)
 - [Running VinylDNS Locally](#running-vinyldns-locally)
 - [Testing](#testing)
-- [Validating VinylDNS](#validating-vinyldns)
 
 ## Developer Requirements
+
 - Scala 2.12
 - sbt 1+
 - Java 8 (at least u162)
@@ -21,49 +22,54 @@ Make sure that you have the requirements installed before proceeding.
 
 ## Project Layout
 
-[SYSTEM_DESIGN.md](SYSTEM_DESIGN.md) provides a high-level architectural overview of VinylDNS and interoperability of its components.
-The main codebase is a multi-module Scala project with multiple sub-modules. To start working with the project,
-from the root directory run `sbt`. Most of the code can be found in the `modules` directory.
-The following modules are present:
+[SYSTEM_DESIGN.md](SYSTEM_DESIGN.md) provides a high-level architectural overview of VinylDNS and interoperability of
+its components.
+
+The main codebase is a multi-module Scala project with multiple sub-modules. To start working with the project, from the
+root directory run `sbt`. Most of the code can be found in the `modules` directory. The following modules are present:
 
 * `root` - this is the parent project, if you run tasks here, it will run against all sub-modules
 * [`core`](#core): core modules that are used by both the API and portal, such as cryptography implementations.
-* [`api`](#api): the API is the main engine for all of VinylDNS. This is the most active area of the codebase, as everything else typically just funnels through
-the API.
-* [`portal`](#portal): The portal is a user interface wrapper around the API. Most of the business rules, logic, and processing can be found in the API. The
-_only_ features in the portal not found in the API are creation of users and user authentication.
+* [`api`](#api): the API is the main engine for all of VinylDNS. This is the most active area of the codebase, as
+  everything else typically just funnels through the API.
+* [`portal`](#portal): The portal is a user interface wrapper around the API. Most of the business rules, logic, and
+  processing can be found in the API. The
+  _only_ features in the portal not found in the API are creation of users and user authentication.
 * [`docs`](#documentation): documentation for VinylDNS.
 
 ### Core
+
 Code that is used across multiple modules in the VinylDNS ecosystem live in `core`.
 
 #### Code Layout
+
 * `src/main` - the main source code
 * `src/test` - unit tests
 
 ### API
-The API is the RESTful API for interacting with VinylDNS. The following technologies are used:
+
+The API is the RESTful API for interacting with VinylDNS. The following technologies are used:
 
 * [Akka HTTP](https://doc.akka.io/docs/akka-http/current/) - Used primarily for REST and HTTP calls.
 * [FS2](https://functional-streams-for-scala.github.io/fs2/) - Used for backend change processing off of message queues.
-FS2 has back-pressure built in, and gives us tools like throttling and concurrency.
+  FS2 has back-pressure built in, and gives us tools like throttling and concurrency.
 * [Cats Effect](https://typelevel.org/cats-effect/) - We are currently migrating away from `Future` as our primary type
-and towards cats effect IO. Hopefully, one day, all the things will be using IO.
+  and towards cats effect IO. Hopefully, one day, all the things will be using IO.
 * [Cats](https://typelevel.org/cats) - Used for functional programming.
-* [PureConfig](https://pureconfig.github.io/) - For loading configuration values. We are currently migrating to
-use PureConfig everywhere. Not all the places use it yet.
+* [PureConfig](https://pureconfig.github.io/) - For loading configuration values. We are currently migrating to use
+  PureConfig everywhere. Not all the places use it yet.
 
 The API has the following dependencies:
-* MySQL - the SQL database that houses zone data
-* DynamoDB - where all of the other data is stored
+
+* MySQL - the SQL database that houses the data
 * SQS - for managing concurrent updates and enabling high-availability
 * Bind9 - for testing integration with a real DNS system
 
 #### Code Layout
+
 The API code can be found in `modules/api`.
-* `functional_test` - contains the python black box / regression tests
+
 * `src/it` - integration tests
 * `src/main` - the main source code
 * `src/test` - unit tests
@@ -71,27 +77,31 @@ The API code can be found in `modules/api`.
 
 The package structure for the source code follows:
 
-* `vinyldns.api.domain` - contains the core front-end logic. This includes things like the application services,
-repository interfaces, domain model, validations, and business rules.
-* `vinyldns.api.engine` - the back-end processing engine. This is where we process commands including record changes,
-zone changes, and zone syncs.
+* `vinyldns.api.domain` - contains the core front-end logic. This includes things like the application services,
+  repository interfaces, domain model, validations, and business rules.
+* `vinyldns.api.engine` - the back-end processing engine. This is where we process commands including record changes,
+  zone changes, and zone syncs.
 * `vinyldns.api.protobuf` - marshalling and unmarshalling to and from protobuf to types in our system
 * `vinyldns.api.repository` - repository implementations live here
 * `vinyldns.api.route` - HTTP endpoints
 
 ### Portal
+
 The project is built using:
+
 * [Play Framework](https://www.playframework.com/documentation/2.6.x/Home)
 * [AngularJS](https://angularjs.org/)
 
-The portal is _mostly_ a shim around the API. Most actions in the user interface are translated into API calls.
+The portal is _mostly_ a shim around the API. Most actions in the user interface are translated into API calls. The
 features that the Portal provides that are not in the API include:
+
 * Authentication against LDAP
-* Creation of users - when a user logs in for the first time, VinylDNS automatically creates a user and new credentials for them in the
-database with their LDAP information.
+* Creation of users - when a user logs in for the first time, VinylDNS automatically creates a user and new credentials
+  for them in the database with their LDAP information.
 
 #### Code Layout
+
 The portal code can be found in `modules/portal`.
 
 * `app` - source code for portal back-end
@@ -108,38 +118,54 @@ The portal code can be found in `modules/portal`.
 * `test` - unit tests for portal back-end
 
 ### Documentation
-Code used to build the microsite content for the API, operator and portal guides at https://www.vinyldns.io/. Some settings for the microsite
-are also configured in `build.sbt` of the project root.
+
+Code used to build the microsite content for the API, operator and portal guides at https://www.vinyldns.io/. Some
+settings for the microsite are also configured in `build.sbt` of the project root.
 
 #### Code Layout
+
 * `src/main/resources` - Microsite resources and configurations
 * `src/main/tut` - Content for microsite web pages
 
 ## Running VinylDNS Locally
+
-VinylDNS can be started in the background by running the [quickstart instructions](README.md#quickstart) located in the README. However, VinylDNS
-can also be run in the foreground.
+VinylDNS can be started in the background by running the [quickstart instructions](README.md#quickstart) located in the
+README. However, VinylDNS can also be run in the foreground.
 
 ### Starting the API Server
-To start the API for integration, functional, or portal testing. Start up sbt by running `sbt` from the root directory.
-* `dockerComposeUp` to spin up the dependencies on your machine from the root project.
+
+Before starting the API service, you can start the dependencies for local development:
+
+```
+cd test/api/integration
+make build && make run-bg
+```
+
+This will start a container running in the background with necessary prerequisites.
+
+Once the prerequisites are running, you can start up sbt by running `sbt` from the root directory.
+
 * `project api` to change the sbt project to the API
 * `reStart` to start up the API server
 * Wait until you see the message `VINYLDNS SERVER STARTED SUCCESSFULLY` before working with the server
 * To stop the VinylDNS server, run `reStop` from the api project
-* To stop the dependent Docker containers, change to the root project `project root`, then run `dockerComposeStop` from the API project
+* To stop the dependent Docker containers, change to the root project `project root`, then run `dockerComposeStop` from
+  the API project
 
-See the [API Configuration Guide](https://www.vinyldns.io/operator/config-api) for information regarding API configuration.
+See the [API Configuration Guide](https://www.vinyldns.io/operator/config-api) for information regarding API
+configuration.
 
 ### Starting the Portal
 
-To run the portal locally, you _first_ have to start up the VinylDNS API Server (see instructions above). Once
-that is done, in the same `sbt` session or a different one, go to `project portal` and then execute `;preparePortal; run`.
-See the [Portal Configuration Guide](https://www.vinyldns.io/operator/config-portal) for information regarding portal configuration.
+To run the portal locally, you _first_ have to start up the VinylDNS API Server (see instructions above). Once that is
+done, in the same `sbt` session or a different one, go to `project portal` and then execute `;preparePortal; run`.
+
+See the [Portal Configuration Guide](https://www.vinyldns.io/operator/config-portal) for information regarding portal
+configuration.
 
 ### Loading test data
+
 Normally the portal can be used for all VinylDNS requests. Test users are locked down to only have access to test zones,
-which the portal connection modal has not been updated to incorporate. To connect to a zone with testuser, you will need to use an alternative
-client and set `isTest=true` on the zone being connected to.
+which the portal connection modal has not been updated to incorporate. To connect to a zone with testuser, you will need
+to use an alternative client and set `isTest=true` on the zone being connected to.
 
 Use the vinyldns-js client (Note, you need Node installed):
 
@@ -159,78 +185,95 @@ You should now be able to see the zone in the portal at localhost:9001 when logg
 ```
 
 ## Testing
+
 ### Unit Tests
-1. First, start up your Scala build tool: `sbt`. Running *clean* immediately after starting is recommended.
-1. (Optionally) Go to the project you want to work on, for example `project api` for the API; `project portal` for the portal.
+
+1. First, start up your Scala build tool: `sbt`. Running *clean* immediately after starting is recommended.
+1. (Optionally) Go to the project you want to work on, for example `project api` for the API; `project portal` for the
+   portal.
 1. Run _all_ unit tests by just running `test`.
 1. Run an individual unit test by running `testOnly *MySpec`.
-1. If you are working on a unit test and production code at the same time, use `~` (eg. `~testOnly *MySpec`) to automatically background compile for you!
+1. If you are working on a unit test and production code at the same time, use `~` (e.g., `~testOnly *MySpec`) to
+   automatically background compile for you!
 
 ### Integration Tests
-Integration tests are used to test integration with _real_ dependent services. We use Docker to spin up those
-backend services for integration test development.
+
+Integration tests are used to test integration with _real_ dependent services. We use Docker to spin up those backend
+services for integration test development.
 
 1. Type `dockerComposeUp` to start up dependent background services
 1. Go to the target module in sbt, example: `project api`
 1. Run all integration tests by typing `it:test`.
 1. Run an individual integration test by typing `it:testOnly *MyIntegrationSpec`
 1. You can background compile as well if working on a single spec by using `~it:testOnly *MyIntegrationSpec`
-1. You must stop (`dockerComposeStop`) and start (`dockerComposeUp`) the dependent services from the root project (`project root`) before you rerun the tests.
-1. For the mysql module, you may need to wait up to 30 seconds after starting the services before running the tests for setup to complete.
+1. You must stop (`dockerComposeStop`) and start (`dockerComposeUp`) the dependent services from the root
+   project (`project root`) before you rerun the tests.
+1. For the mysql module, you may need to wait up to 30 seconds after starting the services before running the tests for
+   setup to complete.
 
 #### Running both
 
 You can run all unit and integration tests for the api and portal by running `sbt verify`
 
 ### Functional Tests
-When adding new features, you will often need to write new functional tests that black box / regression test the
-API. We have over 350 (and growing) automated regression tests. The API functional tests are written in Python
-and live under `modules/api/functional_test`.
-
-#### Running functional tests
-To run functional tests, make sure that you have started the API server (directions above).
-Then in another terminal session:
-
-1. `cd modules/api/functional_test`
-1. `./run.py live_tests -v`
-
-You can run a specific test by name by running `./run.py live_tests -v -k `
-
-You run specific tests for a portion of the project, say recordsets, by running `./run.py live_tests/recordsets -v`
-
-#### Our Setup
-We use [pytest](https://docs.pytest.org/en/latest/) for python tests. It is helpful that you browse the documentation
-so that you are familiar with pytest and how our functional tests operate.
+
+When adding new features, you will often need to write new functional tests that black box / regression test the API.
+
+- The API functional tests are written in Python and live under `test/api/functional`.
+- The Portal functional tests are written in JavaScript and live under `test/portal/functional`.
+
+#### Running Functional Tests
+
+To run functional tests you can simply execute the following command:
+
+```
+make build && make run
+```
+
+During iterative test development, you can use `make run-local` which will mount the current functional tests in the
+container, allowing for easier test development.
+
+Additionally, you can pass `--interactive` to `make run` or `make run-local` to drop to a shell inside the container.
+From there you can run tests with the `/functional_test/run.sh` command. This allows for finer-grained control over the
+test execution process as well as easier inspection of logs.
+
+##### API Functional Tests
+
+You can run a specific test by name by running `make run -- -k `. Any arguments after
+`make run --` will be passed to the test runner [`test/api/functional/run.sh`](test/api/functional/run.sh).
-
-We also use [PyHamcrest](https://pyhamcrest.readthedocs.io/en/release-1.8/) for matchers in order to write easy
-to read tests. Please browse that documentation as well so that you are familiar with the different matchers
-for PyHamcrest. There aren't a lot, so it should be quick.
-
-In the `modules/api/functional_test` directory are a few important files for you to be familiar with:
-
-* vinyl_client.py - this provides the interface to the VinylDNS API. It handles signing the request for you, as well
-as building and executing the requests, and giving you back valid responses. For all new API endpoints, there should
-be a corresponding function in the vinyl_client
-* utils.py - provides general use functions that can be used anywhere in your tests. Feel free to contribute new
-functions here when you see repetition in the code
-
-Functional tests run on every build, and are designed to work _in every environment_. That means locally, in Docker,
-and in production environments.
-
-In the `modules/api/functional_test/live_tests` directory, we have directories / modules for different areas of the application.
+
+#### Setup
+
-* membership - for managing groups and users
-* recordsets - for managing record sets
-* zones - for managing zones
-* internal - for internal endpoints (not intended for public consumption)
-* batch - for managing batch updates
+We use [pytest](https://docs.pytest.org/en/latest/) for python tests. It is helpful that you browse the documentation so
+that you are familiar with pytest and how our functional tests operate.
+
+We also use [PyHamcrest](https://pyhamcrest.readthedocs.io/en/release-1.8/) for matchers in order to write easy to read
+tests. Please browse that documentation as well so that you are familiar with the different matchers for PyHamcrest.
+There aren't a lot, so it should be quick.
+
+In the `test/api/functional` directory are a few important files for you to be familiar with:
+
+* `vinyl_client.py` - this provides the interface to the VinylDNS API. It handles signing the request for you, as well
+  as building and executing the requests, and giving you back valid responses. For all new API endpoints, there should
+  be a corresponding function in the vinyl_client
+* `utils.py` - provides general use functions that can be used anywhere in your tests. Feel free to contribute new
+  functions here when you see repetition in the code
+
+In the `test/api/functional/tests` directory, we have directories / modules for different areas of the application.
+
+* `batch` - for managing batch updates
+* `internal` - for internal endpoints (not intended for public consumption)
+* `membership` - for managing groups and users
+* `recordsets` - for managing record sets
+* `zones` - for managing zones
 
 ##### Functional Test Context
-Our func tests use pytest contexts. There is a main test context that lives in `shared_zone_test_context.py`
-that creates and tears down a shared test context used by many functional tests. The
-beauty of pytest is that it will ensure that the test context is stood up exactly once, then all individual tests
-that use the context are called using that same context.
+
+Our functional tests use `pytest` contexts. There is a main test context that lives in `shared_zone_test_context.py`
+that creates and tears down a shared test context used by many functional tests. The beauty of pytest is that it will
+ensure that the test context is stood up exactly once, then all individual tests that use the context are called using
+that same context.
 
 The shared test context sets up several things that can be reused:
 
@@ -243,30 +286,33 @@ The shared test context sets up several things that can be reused:
 1. A classless IPv4 reverse zone
 1. A parent zone that has child zones - used for testing NS record management and zone delegations
 
+##### Partitioning
+
+Each of the test zones are configured in a `partition`. By default, there are four partitions. These partitions are
+effectively copies of the zones so that parallel tests can run without interfering with one another.
+
+For instance, there are four zones for the `ok` zone: `ok1`, `ok2`, `ok3`, and `ok4`. The functional tests will handle
+distributing which zone is being used by which of the parallel test runners.
+
+As such, you should **never** hardcode the name of the zone. Always get the zone from the `shared_zone_test_context`.
+For instance, to get the `ok` zone, you would write:
+
+```python
+zone = shared_zone_test_context.ok_zone
+zone_name = shared_zone_test_context.ok_zone["name"]
+zone_id = shared_zone_test_context.ok_zone["id"]
+```
+
 ##### Really Important Test Context Rules!
 
-1. Try to use the `shared_zone_test_context` whenever possible! This reduces the time
-it takes to run functional tests (which is in minutes).
-1. Limit changes to users, groups, and zones in the shared test context, as doing so could impact downstream tests
+1. Try to use the `shared_zone_test_context` whenever possible! This reduces the time it takes to run functional
+   tests (which is in minutes).
+1. Be mindful of changes to users, groups, and zones in the shared test context, as doing so could impact downstream
+   tests
 1. If you do modify any entities in the shared zone context, roll those back when your function completes!
 
 ##### Managing Test Zone Files
-When functional tests are run, we spin up several Docker containers. One of the Docker containers is a Bind9 DNS
-server. If you need to add or modify the test DNS zone files, you can find them in
+
+When functional tests are run, we spin up several Docker containers. One of the Docker containers is a Bind9 DNS server.
+If you need to add or modify the test DNS zone files, you can find them in
diff --git a/MAINTAINERS.md b/MAINTAINERS.md index 8ea97b296..37e7bf93c 100644 --- a/MAINTAINERS.md +++ b/MAINTAINERS.md @@ -2,12 +2,6 @@ ## Table of Contents * [Docker Content Trust](#docker-content-trust) - * [Docker Hub Account](#docker-hub-account) - * [Delegating Image Signing](#delegating-image-signing) - * [Setting up Notary](#setting-up-notary) - * [Generating a Personal Delegation Key](#generating-a-personal-delegation-key) - * [Adding a Delegation Key To a Repository](#adding-a-delegation-key-to-a-repository) - * [Pushing a Signed Image with your Delegation Key](#pushing-a-signed-image-with-your-delegation-key) * [Sonatype Credentials](#sonatype-credentials) * [Release Process](#release-process) @@ -26,8 +20,6 @@ the [vinyldns organization](https://hub.docker.com/u/vinyldns/dashboard/). Namel * vinyldns/api: images for vinyldns core api engine * vinyldns/portal: images for vinyldns web client * vinyldns/bind9: images for local DNS server used for testing -* vinyldns/test-bind9: contains the setup to run functional tests -* vinyldns/test: has the actual functional tests pinned to a version of VinylDNS The offline root key and repository keys are managed by the core maintainer team. The keys managed are: @@ -35,8 +27,6 @@ The offline root key and repository keys are managed by the core maintainer team * api key: used to sign tagged images in vinyldns/api * portal key: used to sign tagged images in vinyldns/portal * bind9 key: used to sign tagged images in the vinyldns/bind9 -* test-bind9 key: used to sign tagged images in the vinyldns/test-bind9 -* test key: used to sign tagged images in the vinyldns/test These keys are named in a .key format, e.g. 5526ecd15bd413e08718e66c440d17a28968d5cd2922b59a17510da802ca6572.key, do not change the names of the keys. @@ -44,77 +34,6 @@ do not change the names of the keys. Docker expects these keys to be saved in `~/.docker/trust/private`. 
Each key is encrypted with a passphrase, that you must have available when pushing an image. -### Docker Hub Account - -If you don't have one already, make an account on Docker Hub. Get added as a Collaborator to vinyldns/api, vinyldns/portal, -and vinyldns/bind9 - -### Delegating Image Signing -Someone with our keys can sign images when pushing, but instead of sharing those keys we can utilize -notary to delegate image signing permissions in a safer way. Notary will have you make a public-private key pair and -upload your public key. This way you only need your private key, and a developer's permissions can easily be revoked. - -#### Setting up Notary -If you do not already have notary: - -1. Download the latest release for your machine at https://github.com/theupdateframework/notary/releases, -for example, on a mac download the precompiled binary `notary-Darwin-amd64` -1. Rename the binary to notary, and choose a location where it will live, -e.g. `cd ~/Downloads/; mv notary-Darwin-amd64 notary; mv notary ~/Documents/notary; cd ~/Documents` -1. Make it executable, e.g. `chmod +x notary` -1. Add notary to your path, e.g. `vim ~/.bashrc`, add `export PATH="$PATH":` -1. Create a `~/.notary/config.json` with - -``` -{ - "trust_dir" : "~/.docker/trust", - "remote_server": { - "url": "https://notary.docker.io" - } - } -``` - -You can test notary with `notary -s https://notary.docker.io -d ~/.docker/trust list docker.io/vinyldns/api`, in which -you should see tagged images for the VinylDNS API - -> Note: you'll pretty much always use the `-s https://notary.docker.io -d ~/.docker/trust` args when running notary, -it will be easier for you to alias a command like `notarydefault` to `notary -s https://notary.docker.io -d ~/.docker/trust` -in your `.bashrc` - -#### Generating a Personal Delegation Key -1. `cd` to a directory where you will save your delegation keys and certs -1. Generate your private key: `openssl genrsa -out delegation.key 2048` -1. 
Generate your public key: `openssl req -new -sha256 -key delegation.key -out delegation.csr`, all fields are optional, -but when it gets to your email it makes sense to add that -1. Self-sign your public key (valid for one year): -`openssl x509 -req -sha256 -days 365 -in delegation.csr -signkey delegation.key -out delegation.crt` -1. Change the `delegation.crt` to some unique name, like `my-name-vinyldns-delegation.crt` -1. Give your `my-name-vinyldns-delegation.crt` to someone that has the root keys and passphrases so -they can upload your delegation key to the repository - -#### Adding a Delegation Key to a Repository -This expects you to have the root keys and passphrases for the Docker repositories - -1. List current keys: `notary -s https://notary.docker.io -d ~/.docker/trust delegation list docker.io/vinyldns/api` -1. Add team member's public key: `notary delegation add docker.io/vinyldns/api targets/releases --all-paths` -1. Push key: `notary publish docker.io/vinyldns/api` -1. Repeat above steps for `docker.io/vinyldns/portal` - -Add their key ID to the table below, it can be viewed with `notary -s https://notary.docker.io -d ~/.docker/trust delegation list docker.io/vinyldns/api`. -It will be the one that didn't show up when you ran step one of this section - -| Name | Key ID | -|----------------|------------------------------------------------------------------ -| Nima Eskandary | 66027c822d68133da859f6639983d6d3d9643226b3f7259fc6420964993b499a, cdca33de91c54f801d89240d18b5037e274461ba1c88c10451070c97e9f665b4 | -| Rebecca Star | 04285e24d3b9a8b614b34da229669de1f75c9faa471057e8b4a7d60aac0d5bf5 | -| Michael Ly |dd3a5938fc927de087ad4b59d6ac8f62b6502d05b2cc9b0623276cbac7dbf05b | - -#### Pushing a Signed Image with your Delegation Key -1. Run `notary key import --role user` -1. You will have to create a passphrase for this key that encrypts it at rest. 
Use a password generator to make a -strong password and save it somewhere safe, like apple keychain or some other password manager -1. From now on `docker push` will be try to sign images with the delegation key if it was configured for that Docker -repository ## Sonatype Credentials @@ -163,7 +82,7 @@ running the release 1. Run `bin/release.sh` _Note: the arg "skip-tests" will skip unit, integration and functional testing before a release_ 1. You will be asked to confirm the version which originally comes from `version.sbt`. _NOTE: if the version ends with `SNAPSHOT`, then the docker latest tag won't be applied and the core module will only be published to the sonatype -staging repo. +staging repo._ 1. When it comes to the sonatype stage, you will need the passphrase handy for the signing key, [Sonatype Credentials](#sonatype-credentials) 1. Assuming things were successful, make a pr since sbt release auto-bumped `version.sbt` and made a commit for you 1. Run `./build/docker-release.sh --branch [TAG CREATED FROM PREVIOUS STEP, e.g. 
v0.9.3] --clean --push` diff --git a/README.md b/README.md index 71301077b..aea300f55 100644 --- a/README.md +++ b/README.md @@ -1,7 +1,5 @@ -[![Join the chat at https://gitter.im/vinyldns](https://badges.gitter.im/vinyldns/vinyldns.svg)](https://gitter.im/vinyldns) ![Build](https://github.com/vinyldns/vinyldns/workflows/Continuous%20Integration/badge.svg) [![CodeCov ](https://codecov.io/gh/vinyldns/vinyldns/branch/master/graph/badge.svg)](https://codecov.io/gh/vinyldns/vinyldns) -[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/2682/badge)](https://bestpractices.coreinfrastructure.org/projects/2682) [![License](https://img.shields.io/github/license/vinyldns/vinyldns)](https://github.com/vinyldns/vinyldns/blob/master/LICENSE) [![conduct](https://img.shields.io/badge/%E2%9D%A4-code%20of%20conduct-blue.svg)](https://github.com/vinyldns/vinyldns/blob/master/CODE_OF_CONDUCT.md) @@ -23,17 +21,16 @@ secure RESTful API, and integration with infrastructure automation tools like An It is designed to integrate with your existing DNS infrastructure, and provides extensibility to fit your installation. 
 
 VinylDNS helps secure DNS management via:
-* AWS Sig4 signing of all messages to ensure that the message that was sent was not altered in transit
-* Throttling of DNS updates to rate limit concurrent updates against your DNS systems
-* Encrypting user secrets and TSIG keys at rest and in-transit
-* Recording every change made to DNS records and zones
+- AWS Sig4 signing of all messages to ensure that the message that was sent was not altered in transit
+- Throttling of DNS updates to rate limit concurrent updates against your DNS systems
+- Encrypting user secrets and TSIG keys at rest and in-transit
+- Recording every change made to DNS records and zones
 
 Integration is simple with first-class language support including:
-* java
-* ruby
-* python
-* go-lang
-* javascript
+- Java
+- Python
+- Go
+- JavaScript
 
 ## Table of Contents
 - [Quickstart](#quickstart)
@@ -59,7 +56,7 @@ There exist several clients at  that can be used to
 
 ## Things to try in the portal
 1. View the portal at  in a web browser
-1. Login with the credentials ***professor*** and ***professor***
+1. Login with the credentials `testuser` and `testpassword`
 1. Navigate to the `groups` tab:
 1. Click on the **New Group** button and create a new group, the group id is the uuid in the url after you view the group
 1. View zones you connected to in the `zones` tab: . For a quick test, create a new zone named "ok" with an email of "test@test.com" and choose a group you created from the previous step. (Note, see [Developer Guide](DEVELOPER_GUIDE.md#loading-test-data) for creating a zone)
@@ -79,7 +76,7 @@ TTL = 300, IP Addressess = 1.1.1.1`
 1. A similar `docker/.env.quickstart` can be modified to change the default ports for the Portal and API.
 You must also modify their config files with the new port: https://www.vinyldns.io/operator/config-portal & https://www.vinyldns.io/operator/config-api
 
 ## Code of Conduct
-This project and everyone participating in it are governed by the [VinylDNS Code Of Conduct](CODE_OF_CONDUCT.md). By
+This project, and everyone participating in it, is governed by the [VinylDNS Code Of Conduct](CODE_OF_CONDUCT.md). By
 participating, you agree to this Code. Please report any violations to the code of conduct to vinyldns-core@googlegroups.com.
 
 ## Developer Guide
@@ -89,15 +86,14 @@ See [DEVELOPER_GUIDE.md](DEVELOPER_GUIDE.md) for instructions on setting up Viny
 
 ## Contributing
 See the [Contributing Guide](CONTRIBUTING.md).
 
 ## Contact
-- [Gitter](https://gitter.im/vinyldns)
-
 If you have any security concerns please contact the maintainers directly vinyldns-core@googlegroups.com
 
 ## Maintainers and Contributors
 The current maintainers (people who can merge pull requests) are:
-- Paul Cleary
-- Ryan Emerle
-- Sriram Ramakrishnan
-- Jim Wakemen
+
+- Ryan Emerle ([@remerle](https://github.com/remerle))
+- Sriram Ramakrishnan ([@sramakr](https://github.com/sramakr))
+- Jim Wakemen ([@jwakemen](https://github.com/jwakemen))
 
 See [AUTHORS.md](AUTHORS.md) for the full list of contributors to VinylDNS.
diff --git a/bin/build.sh b/bin/build.sh
deleted file mode 100755
index eec65e38c..000000000
--- a/bin/build.sh
+++ /dev/null
@@ -1,34 +0,0 @@
-#!/usr/bin/env bash
-DIR=$( cd $(dirname $0) ; pwd -P )
-
-echo "Verifying code..."
-#${DIR}/verify.sh
-
-#step_result=$?
-step_result=0
-if [ ${step_result} != 0 ]
-then
-    echo "Failed to verify build!!!"
-    exit ${step_result}
-fi
-
-echo "Func testing the api..."
-${DIR}/func-test-api.sh
-
-step_result=$?
-if [ ${step_result} != 0 ]
-then
-    echo "Failed API func tests!!!"
-    exit ${step_result}
-fi
-
-echo "Func testing the portal..."
-${DIR}/func-test-portal.sh
-step_result=$?
-if [ ${step_result} != 0 ]
-then
-    echo "Failed Portal func tests!!!"
-    exit ${step_result}
-fi
-
-exit 0
diff --git a/bin/docker-publish-api.sh b/bin/docker-publish-api.sh
deleted file mode 100755
index 80d1282d3..000000000
--- a/bin/docker-publish-api.sh
+++ /dev/null
@@ -1,10 +0,0 @@
-#!/usr/bin/env bash
-DIR=$( cd $(dirname $0) ; pwd -P )
-
-cd $DIR/../
-
-echo "Publishing docker image..."
-sbt clean docker:publish
-publish_result=$?
-cd $DIR
-exit ${publish_result}
diff --git a/bin/docker-up-api-server.sh b/bin/docker-up-api-server.sh
deleted file mode 100755
index 8b1efb865..000000000
--- a/bin/docker-up-api-server.sh
+++ /dev/null
@@ -1,58 +0,0 @@
-#!/bin/bash
-######################################################################
-# Copies the contents of `docker` into target/scala-2.12
-# to start up dependent services via docker compose. Once
-# dependent services are started up, the fat jar built by sbt assembly
-# is loaded into a docker container. The api will be available
-# by default on port 9000
-######################################################################
-
-
-DIR=$( cd $(dirname $0) ; pwd -P )
-
-set -a # Required in order to source docker/.env
-# Source customizable env files
-source "$DIR"/.env
-source "$DIR"/../docker/.env
-
-WORK_DIR="$DIR"/../target/scala-2.12
-mkdir -p "$WORK_DIR"
-
-echo "Copy all Docker to the target directory so we can start up properly and the Docker context is small..."
-cp -af "$DIR"/../docker "$WORK_DIR"/
-
-echo "Copy the vinyldns.jar to the API Docker folder so it is in context..."
-if [[ ! -f "$DIR"/../modules/api/target/scala-2.12/vinyldns.jar ]]; then
-    echo "vinyldns.jar not found, building..."
-    cd "$DIR"/../
-    sbt api/clean api/assembly
-    cd "$DIR"
-fi
-cp -f "$DIR"/../modules/api/target/scala-2.12/vinyldns.jar "$WORK_DIR"/docker/api
-
-echo "Starting API server and all dependencies in the background..."
-docker-compose -f "$WORK_DIR"/docker/docker-compose-func-test.yml --project-directory "$WORK_DIR"/docker up --build -d api
-
-echo "Waiting for API to be ready at ${VINYLDNS_API_URL} ..."
-DATA=""
-RETRY=40
-while [ "$RETRY" -gt 0 ]
-do
-  DATA=$(curl -I -s "${VINYLDNS_API_URL}/ping" -o /dev/null -w "%{http_code}")
-  if [ $? -eq 0 ]
-  then
-    echo "Succeeded in connecting to VinylDNS API!"
-    break
-  else
-    echo "Retrying" >&2
-
-    let RETRY-=1
-    sleep 1
-
-    if [ "$RETRY" -eq 0 ]
-    then
-      echo "Exceeded retries waiting for VinylDNS API to be ready, failing"
-      exit 1
-    fi
-  fi
-done
diff --git a/bin/docker-up-dns-server.sh b/bin/docker-up-dns-server.sh
deleted file mode 100755
index 864a47d0a..000000000
--- a/bin/docker-up-dns-server.sh
+++ /dev/null
@@ -1,5 +0,0 @@
-#!/bin/bash
-DIR=$( cd $(dirname $0) ; pwd -P )
-
-echo "Starting ONLY the bind9 server. To start an api server use the api server script"
-docker-compose -f $DIR/../docker/docker-compose-func-test.yml --project-directory $DIR/../docker up -d bind9
diff --git a/bin/func-test-api-testbind9.sh b/bin/func-test-api-testbind9.sh
deleted file mode 100755
index 1ad4d7466..000000000
--- a/bin/func-test-api-testbind9.sh
+++ /dev/null
@@ -1,57 +0,0 @@
-#!/bin/bash
-######################################################################
-# Copies the contents of `docker` into target/scala-2.12
-# to start up dependent services via docker compose. Once
-# dependent services are started up, the fat jar built by sbt assembly
-# is loaded into a docker container. Finally, the func tests run inside
-# another docker container
-# At the end, we grab all the logs and place them in the target
-# directory
-######################################################################
-
-DIR=$( cd $(dirname $0) ; pwd -P )
-WORK_DIR=$DIR/../target/scala-2.12
-mkdir -p $WORK_DIR
-
-echo "Cleaning up unused networks..."
-docker network prune -f
-
-echo "Copy all docker to the target directory so we can start up properly and the docker context is small..."
-cp -af $DIR/../docker $WORK_DIR/
-
-echo "Copy over the functional tests as well as those that are run in a container..."
-mkdir -p $WORK_DIR/functest
-rsync -av --exclude='.virtualenv' $DIR/../modules/api/functional_test $WORK_DIR/docker/functest
-
-echo "Copy the vinyldns.jar to the api docker folder so it is in context..."
-if [[ ! -f $DIR/../modules/api/target/scala-2.12/vinyldns.jar ]]; then
-    echo "vinyldns jar not found, building..."
-    cd $DIR/../
-    sbt api/clean api/assembly
-    cd $DIR
-fi
-cp -f $DIR/../modules/api/target/scala-2.12/vinyldns.jar $WORK_DIR/docker/api
-
-echo "Starting docker environment and running func tests..."
-
-# If PAR_CPU is unset; default to auto
-if [ -z "${PAR_CPU}" ]; then
-  export PAR_CPU=auto
-fi
-
-docker-compose -f $WORK_DIR/docker/docker-compose-func-test-testbind9.yml --project-directory $WORK_DIR/docker --log-level ERROR up --build --exit-code-from functest
-test_result=$?
-
-echo "Grabbing the logs..."
-
-docker logs vinyldns-api > $DIR/../target/vinyldns-api.log 2>/dev/null
-docker logs vinyldns-bind9 > $DIR/../target/vinyldns-bind9.log 2>/dev/null
-docker logs vinyldns-mysql > $DIR/../target/vinyldns-mysql.log 2>/dev/null
-docker logs vinyldns-elasticmq > $DIR/../target/vinyldns-elasticmq.log 2>/dev/null
-docker logs vinyldns-functest > $DIR/../target/vinyldns-functest.log 2>/dev/null
-
-echo "Cleaning up docker containers..."
-$DIR/./remove-vinyl-containers.sh
-
-echo "Func tests returned result: ${test_result}"
-exit ${test_result}
diff --git a/bin/func-test-api-travis.sh b/bin/func-test-api-travis.sh
deleted file mode 100755
index f8794c90d..000000000
--- a/bin/func-test-api-travis.sh
+++ /dev/null
@@ -1,57 +0,0 @@
-#!/bin/bash
-######################################################################
-# Copies the contents of `docker` into target/scala-2.12
-# to start up dependent services via docker compose. Once
-# dependent services are started up, the fat jar built by sbt assembly
-# is loaded into a docker container. Finally, the func tests run inside
-# another docker container
-# At the end, we grab all the logs and place them in the target
-# directory
-######################################################################
-
-DIR=$( cd $(dirname $0) ; pwd -P )
-WORK_DIR=$DIR/../target/scala-2.12
-mkdir -p $WORK_DIR
-
-echo "Cleaning up unused networks..."
-docker network prune -f
-
-echo "Copy all docker to the target directory so we can start up properly and the docker context is small..."
-cp -af $DIR/../docker $WORK_DIR/
-
-echo "Copy over the functional tests as well as those that are run in a container..."
-mkdir -p $WORK_DIR/functest
-rsync -av --exclude='.virtualenv' $DIR/../modules/api/functional_test $WORK_DIR/docker/functest
-
-echo "Copy the vinyldns.jar to the api docker folder so it is in context..."
-if [[ ! -f $DIR/../modules/api/target/scala-2.12/vinyldns.jar ]]; then
-    echo "vinyldns jar not found, building..."
-    cd $DIR/../
-    sbt build-api
-    cd $DIR
-fi
-cp -f $DIR/../modules/api/target/scala-2.12/vinyldns.jar $WORK_DIR/docker/api
-
-echo "Starting docker environment and running func tests..."
-
-if [ -z "${PAR_CPU}" ]; then
-  export PAR_CPU=2
-fi
-
-docker-compose -f $WORK_DIR/docker/docker-compose-func-test.yml --project-directory $WORK_DIR/docker up --build --exit-code-from functest
-test_result=$?
-
-echo "Grabbing the logs..."
-docker logs vinyldns-functest
-docker logs vinyldns-api > $DIR/../target/vinyldns-api.log 2>/dev/null
-docker logs vinyldns-bind9 > $DIR/../target/vinyldns-bind9.log 2>/dev/null
-docker logs vinyldns-mysql > $DIR/../target/vinyldns-mysql.log 2>/dev/null
-docker logs vinyldns-elasticmq > $DIR/../target/vinyldns-elasticmq.log 2>/dev/null
-docker logs vinyldns-dynamodb > $DIR/../target/vinyldns-dynamodb.log 2>/dev/null
-docker logs vinyldns-functest > $DIR/../target/vinyldns-functest.log 2>/dev/null
-
-echo "Cleaning up docker containers..."
-$DIR/./remove-vinyl-containers.sh
-
-echo "Func tests returned result: ${test_result}"
-exit ${test_result}
diff --git a/bin/func-test-api.sh b/bin/func-test-api.sh
index 473618148..4a3050597 100755
--- a/bin/func-test-api.sh
+++ b/bin/func-test-api.sh
@@ -1,57 +1,7 @@
-#!/bin/bash
-######################################################################
-# Copies the contents of `docker` into target/scala-2.12
-# to start up dependent services via docker compose. Once
-# dependent services are started up, the fat jar built by sbt assembly
-# is loaded into a docker container. Finally, the func tests run inside
-# another docker container
-# At the end, we grab all the logs and place them in the target
-# directory
-######################################################################
+#!/usr/bin/env bash
+set -euo pipefail
 
-DIR=$( cd $(dirname $0) ; pwd -P )
-WORK_DIR=$DIR/../target/scala-2.12
-mkdir -p $WORK_DIR
+DIR=$(cd -P -- "$(dirname -- "$0")" && pwd -P)
 
-echo "Cleaning up unused networks..."
-docker network prune -f
-
-echo "Copy all docker to the target directory so we can start up properly and the docker context is small..."
-cp -af $DIR/../docker $WORK_DIR/
-
-echo "Copy over the functional tests as well as those that are run in a container..."
-mkdir -p $WORK_DIR/functest
-rsync -av --exclude='.virtualenv' $DIR/../modules/api/functional_test $WORK_DIR/docker/functest
-
-echo "Copy the vinyldns.jar to the api docker folder so it is in context..."
-if [[ ! -f $DIR/../modules/api/target/scala-2.12/vinyldns.jar ]]; then
-    echo "vinyldns jar not found, building..."
-    cd $DIR/../
-    sbt api/clean api/assembly
-    cd $DIR
-fi
-cp -f $DIR/../modules/api/target/scala-2.12/vinyldns.jar $WORK_DIR/docker/api
-
-echo "Starting docker environment and running func tests..."
-
-# If PAR_CPU is unset; default to auto
-if [ -z "${PAR_CPU}" ]; then
-  export PAR_CPU=auto
-fi
-
-docker-compose -f $WORK_DIR/docker/docker-compose-func-test.yml --project-directory $WORK_DIR/docker --log-level ERROR up --build --exit-code-from functest
-test_result=$?
-
-echo "Grabbing the logs..."
-
-docker logs vinyldns-api > $DIR/../target/vinyldns-api.log 2>/dev/null
-docker logs vinyldns-bind9 > $DIR/../target/vinyldns-bind9.log 2>/dev/null
-docker logs vinyldns-mysql > $DIR/../target/vinyldns-mysql.log 2>/dev/null
-docker logs vinyldns-elasticmq > $DIR/../target/vinyldns-elasticmq.log 2>/dev/null
-docker logs vinyldns-functest > $DIR/../target/vinyldns-functest.log 2>/dev/null
-
-echo "Cleaning up docker containers..."
-$DIR/./remove-vinyl-containers.sh
-
-echo "Func tests returned result: ${test_result}"
-exit ${test_result}
+cd "$DIR/../test/api/functional"
+make
diff --git a/bin/func-test-portal.sh b/bin/func-test-portal.sh
index 816828a2f..c40d0df60 100755
--- a/bin/func-test-portal.sh
+++ b/bin/func-test-portal.sh
@@ -1,46 +1,7 @@
-#!/bin/bash
-######################################################################
-# Runs e2e tests against the portal
-######################################################################
+#!/usr/bin/env bash
+set -euo pipefail
 
-DIR=$( cd $(dirname $0) ; pwd -P )
-WORK_DIR=$DIR/../modules/portal
+DIR=$(cd -P -- "$(dirname -- "$0")" && pwd -P)
 
-function check_for() {
-    which $1 >/dev/null 2>&1
-    EXIT_CODE=$?
-    if [ ${EXIT_CODE} != 0 ]
-    then
-        echo "$1 is not installed"
-        exit ${EXIT_CODE}
-    fi
-}
-
-cd $WORK_DIR
-check_for python
-check_for npm
-
-# if the program exits before this has been captured then there must have been an error
-EXIT_CODE=1
-
-# javascript code generate
-npm install
-grunt default
-
-TEST_SUITES=('grunt unit')
-
-for TEST in "${TEST_SUITES[@]}"
-do
-    echo "##### Running test: [$TEST]"
-    $TEST
-    EXIT_CODE=$?
-    echo "##### Test [$TEST] ended with status [$EXIT_CODE]"
-    if [ ${EXIT_CODE} != 0 ]
-    then
-        cd -
-        exit ${EXIT_CODE}
-    fi
-done
-
-cd -
-exit 0
+cd "$DIR/../test/portal/functional"
+make
diff --git a/bin/generate-aes-256-hex-key.sh b/bin/generate-aes-256-hex-key.sh
deleted file mode 100755
index ea8f4a74b..000000000
--- a/bin/generate-aes-256-hex-key.sh
+++ /dev/null
@@ -1,18 +0,0 @@
-#!/bin/bash
-
-# Generate 256-bit AES key.
-#
-# Usage:
-#   $ ./generate-aes-256-hex-key.sh [passphrase]
-#     * passphrase: Optional passphrase used to generate secret key. A pseudo-random passphrase will be used if
-#       one is not provided.
-
-if [[ ! -z "$1" ]]
-then
-    echo "Using user-provided passphrase."
-fi
-
-PASSPHRASE=${1:-$(openssl rand 32)}
-
-KEY=$(openssl enc -aes-256-cbc -k "$PASSPHRASE" -P -md sha1 | awk -F'=' 'NR == 2 {print $2}')
-echo "Your 256-bit AES hex key: $KEY"
diff --git a/bin/release.sh b/bin/release.sh
index 7fb5ba096..002719882 100755
--- a/bin/release.sh
+++ b/bin/release.sh
@@ -34,13 +34,11 @@ if [ "$1" != "skip-tests" ]; then
 fi
 
 printf "\nrunning api func tests... \n"
-"$DIR"/remove-vinyl-containers.sh
 if ! "$DIR"/func-test-api.sh
 then
   printf "\nerror: bin/func-test-api.sh failed \n"
   exit 1
 fi
-"$DIR"/remove-vinyl-containers.sh
 
 printf "\nrunning portal func tests... \n"
 if ! "$DIR"/func-test-portal.sh
diff --git a/bin/verify.sh b/bin/verify.sh
index fa4628faa..ec4b8f57f 100755
--- a/bin/verify.sh
+++ b/bin/verify.sh
@@ -1,18 +1,14 @@
-#!/bin/bash
+#!/usr/bin/env bash
+set -euo pipefail
+
+DIR=$(cd -P -- "$(dirname -- "$0")" && pwd -P)
 
 echo 'Running tests...'
-echo 'Stopping any docker containers...'
-./bin/remove-vinyl-containers.sh
-
-echo 'Starting up docker for integration testing and running unit and integration tests on all modules...'
-sbt ";validate;verify"
+cd "$DIR/../test/api/integration"
+make build && make run -- sbt ";validate;verify"
 verify_result=$?
 
-echo 'Stopping any docker containers...'
-./bin/remove-vinyl-containers.sh
-
-if [ ${verify_result} -eq 0 ]
-then
+if [ ${verify_result} -eq 0 ]; then
   echo 'Verify successful!'
   exit 0
 else
diff --git a/build.sbt b/build.sbt
index 8c4b891d3..15cbf3b13 100644
--- a/build.sbt
+++ b/build.sbt
@@ -1,7 +1,6 @@
 import CompilerOptions._
 import Dependencies._
 import Resolvers._
-import com.typesafe.sbt.packager.docker._
 import microsites._
 import org.scalafmt.sbt.ScalafmtPlugin._
 import sbtrelease.ReleasePlugin.autoImport.ReleaseTransformations._
@@ -11,7 +10,7 @@ import scala.util.Try
 
 resolvers ++= additionalResolvers
 
-lazy val IntegrationTest = config("it") extend Test
+lazy val IntegrationTest = config("it").extend(Test)
 
 // settings that should be inherited by all projects
 lazy val sharedSettings = Seq(
@@ -21,11 +20,12 @@ lazy val sharedSettings = Seq(
   startYear := Some(2018),
   licenses += ("Apache-2.0", new URL("https://www.apache.org/licenses/LICENSE-2.0.txt")),
   scalacOptions ++= scalacOptionsByV(scalaVersion.value),
-  scalacOptions in(Compile, doc) += "-no-link-warnings",
+  scalacOptions in (Compile, doc) += "-no-link-warnings",
   // Use wart remover to eliminate code badness
   wartremoverErrors := (
     if (getPropertyFlagOrDefault("build.lintOnCompile", true))
-      Seq(Wart.EitherProjectionPartial,
+      Seq(
+        Wart.EitherProjectionPartial,
         Wart.IsInstanceOf,
         Wart.JavaConversions,
         Wart.Return,
@@ -33,22 +33,21 @@ lazy val sharedSettings = Seq(
         Wart.ExplicitImplicitTypes
       )
     else Seq.empty
-    ),
+  ),
   // scala format
-  scalafmtOnCompile := getPropertyFlagOrDefault("build.scalafmtOnCompile", true),
-  scalafmtOnCompile in IntegrationTest := getPropertyFlagOrDefault("build.scalafmtOnCompile", true),
+  scalafmtOnCompile := getPropertyFlagOrDefault("build.scalafmtOnCompile", false),
   // coverage options
   coverageMinimum := 85,
   coverageFailOnMinimum := true,
-  coverageHighlighting := true,
+  coverageHighlighting := true
 )
 
 lazy val testSettings = Seq(
   parallelExecution in Test := true,
   parallelExecution in IntegrationTest := false,
-  fork in IntegrationTest := true,
+  fork in IntegrationTest := false,
   testOptions in Test += Tests.Argument("-oDNCXEPQRMIK", "-l", "SkipCI"),
   logBuffered in Test := false,
   // Hide stack traces in tests
@@ -75,74 +74,16 @@ lazy val apiAssemblySettings = Seq(
   // there are some odd things from dnsjava including update.java and dig.java that we don't use
   assemblyMergeStrategy in assembly := {
     case "update.class" | "dig.class" => MergeStrategy.discard
-    case PathList("scala", "tools", "nsc", "doc", "html", "resource", "lib", "index.js") => MergeStrategy.discard
-    case PathList("scala", "tools", "nsc", "doc", "html", "resource", "lib", "template.js") => MergeStrategy.discard
+    case PathList("scala", "tools", "nsc", "doc", "html", "resource", "lib", "index.js") =>
+      MergeStrategy.discard
+    case PathList("scala", "tools", "nsc", "doc", "html", "resource", "lib", "template.js") =>
+      MergeStrategy.discard
     case x =>
       val oldStrategy = (assemblyMergeStrategy in assembly).value
       oldStrategy(x)
   }
 )
 
-lazy val apiDockerSettings = Seq(
-  dockerBaseImage := "adoptopenjdk/openjdk11:jdk-11.0.7_10-alpine",
-  dockerUsername := Some("vinyldns"),
-  packageName in Docker := "api",
-  dockerExposedPorts := Seq(9000),
-  dockerEntrypoint := Seq("/opt/docker/bin/api"),
-  dockerExposedVolumes := Seq("/opt/docker/lib_extra"), // mount extra libs to the classpath
-  dockerExposedVolumes := Seq("/opt/docker/conf"), // mount extra config to the classpath
-
-  // add extra libs to class path via mount
-  scriptClasspath in bashScriptDefines ~= (cp => cp :+ "${app_home}/../lib_extra/*"),
-
-  // adds config file to mount
-  bashScriptExtraDefines += """addJava "-Dconfig.file=${app_home}/../conf/application.conf"""",
-  bashScriptExtraDefines += """addJava "-Dlogback.configurationFile=${app_home}/../conf/logback.xml"""", // adds logback
-
-  // this is the default version, can be overridden
-  bashScriptExtraDefines += s"""addJava "-Dvinyldns.base-version=${(version in ThisBuild).value}"""",
-  bashScriptExtraDefines += "(cd ${app_home} && ./wait-for-dependencies.sh && cd -)",
-  credentials in Docker := Seq(Credentials(Path.userHome / ".ivy2" / ".dockerCredentials")),
-  dockerCommands ++= Seq(
-    Cmd("USER", "root"), // switch to root so we can install netcat
-    ExecCmd("RUN", "apk", "add", "--update", "--no-cache", "netcat-openbsd", "bash"),
-    Cmd("USER", "1001:0") // switch back to the daemon user
-  ),
-)
-
-lazy val portalDockerSettings = Seq(
-  dockerBaseImage := "adoptopenjdk/openjdk11:jdk-11.0.7_10-alpine",
-  dockerUsername := Some("vinyldns"),
-  packageName in Docker := "portal",
-  dockerExposedPorts := Seq(9001),
-  dockerExposedVolumes := Seq("/opt/docker/lib_extra"), // mount extra libs to the classpath
-  dockerExposedVolumes := Seq("/opt/docker/conf"), // mount extra config to the classpath
-
-  // add extra libs to class path via mount
-  scriptClasspath in bashScriptDefines ~= (cp => cp :+ "${app_home}/../lib_extra/*"),
-
-  // adds config file to mount
-  bashScriptExtraDefines += """addJava "-Dconfig.file=${app_home}/../conf/application.conf"""",
-  bashScriptExtraDefines += """addJava "-Dlogback.configurationFile=${app_home}/../conf/logback.xml"""",
-
-  // this is the default version, can be overridden
-  bashScriptExtraDefines += s"""addJava "-Dvinyldns.base-version=${(version in ThisBuild).value}"""",
-
-  // needed to avoid access issue in play for the RUNNING_PID
-  // https://github.com/lightbend/sbt-reactive-app/issues/177
-  bashScriptExtraDefines += s"""addJava "-Dplay.server.pidfile.path=/dev/null"""",
-
-  // wait for mysql
-  bashScriptExtraDefines += "(cd ${app_home}/../ && ls && ./wait-for-dependencies.sh && cd -)",
-  dockerCommands ++= Seq(
-    Cmd("USER", "root"), // switch to root so we can install netcat
-    ExecCmd("RUN", "apk", "add", "--update", "--no-cache", "netcat-openbsd", "bash"),
-    Cmd("USER", "1001:0") // switch back to the user that runs the process
-  ),
-
-  credentials in Docker := Seq(Credentials(Path.userHome / ".ivy2" / ".dockerCredentials"))
-)
-
 lazy val noPublishSettings = Seq(
   publish := {},
   publishLocal := {},
@@ -164,7 +105,7 @@ lazy val portalPublishSettings = Seq(
     case (file, _) => file.getName.equals("local.conf")
   }), // for local.conf to be excluded in jars
-  mappings in(Compile, packageBin) ~= (_.filterNot {
+  mappings in (Compile, packageBin) ~= (_.filterNot {
     case (file, _) => file.getName.equals("local.conf")
   })
 )
@@ -181,8 +122,7 @@ lazy val allApiSettings = Revolver.settings ++ Defaults.itSettings ++
   sharedSettings ++
   apiAssemblySettings ++
   testSettings ++
-  apiPublishSettings ++
-  apiDockerSettings
+  apiPublishSettings
 
 lazy val api = (project in file("modules/api"))
   .enablePlugins(JavaAppPackaging, AutomateHeaderPlugin)
@@ -197,23 +137,18 @@ lazy val api = (project in file("modules/api"))
     r53 % "compile->compile;it->it"
   )
 
-val killDocker = TaskKey[Unit]("killDocker", "Kills all vinyldns docker containers")
-lazy val root = (project in file(".")).enablePlugins(DockerComposePlugin, AutomateHeaderPlugin)
+lazy val root = (project in file("."))
+  .enablePlugins(AutomateHeaderPlugin)
   .configs(IntegrationTest)
   .settings(headerSettings(IntegrationTest))
   .settings(sharedSettings)
   .settings(
-    inConfig(IntegrationTest)(scalafmtConfigSettings),
-    killDocker := {
-      import scala.sys.process._
-      "./bin/remove-vinyl-containers.sh" !
-    },
+    inConfig(IntegrationTest)(scalafmtConfigSettings)
  )
   .aggregate(core, api, portal, mysql, sqs, r53)
 
 lazy val coreBuildSettings = Seq(
   name := "core",
-
   // do not use unused params as NoOpCrypto ignores its constructor, we should provide a way
   // to write a crypto plugin so that we fall back to a noarg constructor
   scalacOptions ++= scalacOptionsByV(scalaVersion.value).filterNot(_ == "-Ywarn-unused:params")
@@ -221,7 +156,9 @@ lazy val corePublishSettings = Seq(
   publishMavenStyle := true,
   publishArtifact in Test := false,
-  pomIncludeRepository := { _ => false },
+  pomIncludeRepository := { _ =>
+    false
+  },
   autoAPIMappings := true,
   publish in Docker := {},
   mainClass := None,
@@ -235,7 +172,8 @@ lazy val corePublishSettings = Seq(
   sonatypeProfileName := "io.vinyldns"
 )
 
-lazy val core = (project in file("modules/core")).enablePlugins(AutomateHeaderPlugin)
+lazy val core = (project in file("modules/core"))
+  .enablePlugins(AutomateHeaderPlugin)
   .settings(sharedSettings)
   .settings(coreBuildSettings)
   .settings(corePublishSettings)
@@ -257,7 +195,8 @@ lazy val mysql = (project in file("modules/mysql"))
   .settings(libraryDependencies ++= mysqlDependencies ++ commonTestDependencies.map(_ % "test, it"))
   .settings(
     organization := "io.vinyldns"
-  ).dependsOn(core % "compile->compile;test->test")
+  )
+  .dependsOn(core % "compile->compile;test->test")
   .settings(name := "mysql")
 
 lazy val sqs = (project in file("modules/sqs"))
@@ -271,8 +210,9 @@ lazy val sqs = (project in file("modules/sqs"))
   .settings(Defaults.itSettings)
   .settings(libraryDependencies ++= sqsDependencies ++ commonTestDependencies.map(_ % "test, it"))
   .settings(
-    organization := "io.vinyldns",
-  ).dependsOn(core % "compile->compile;test->test")
+    organization := "io.vinyldns"
+  )
+  .dependsOn(core % "compile->compile;test->test")
   .settings(name := "sqs")
 
 lazy val r53 = (project in file("modules/r53"))
@@ -287,53 +227,51 @@
   .settings(libraryDependencies ++= r53Dependencies ++ commonTestDependencies.map(_ % "test, it"))
   .settings(
     organization := "io.vinyldns",
-    coverageMinimum := 65,
-  ).dependsOn(core % "compile->compile;test->test")
+    coverageMinimum := 65
+  )
+  .dependsOn(core % "compile->compile;test->test")
   .settings(name := "r53")
 
 val preparePortal = TaskKey[Unit]("preparePortal", "Runs NPM to prepare portal for start")
-val checkJsHeaders = TaskKey[Unit]("checkJsHeaders", "Runs script to check for APL 2.0 license headers")
-val createJsHeaders = TaskKey[Unit]("createJsHeaders", "Runs script to prepend APL 2.0 license headers to files")
+val checkJsHeaders =
+  TaskKey[Unit]("checkJsHeaders", "Runs script to check for APL 2.0 license headers")
+val createJsHeaders =
+  TaskKey[Unit]("createJsHeaders", "Runs script to prepend APL 2.0 license headers to files")
 
-lazy val portal = (project in file("modules/portal")).enablePlugins(PlayScala, AutomateHeaderPlugin)
+lazy val portal = (project in file("modules/portal"))
+  .enablePlugins(PlayScala, AutomateHeaderPlugin)
   .settings(sharedSettings)
   .settings(testSettings)
   .settings(portalPublishSettings)
-  .settings(portalDockerSettings)
   .settings(
     name := "portal",
     libraryDependencies ++= portalDependencies,
     routesGenerator := InjectedRoutesGenerator,
     coverageExcludedPackages := ";views.html.*;router.*;controllers\\.javascript.*;.*Reverse.*",
     javaOptions in Test += "-Dconfig.file=conf/application-test.conf",
-
     // ads the version when working locally with sbt run
     PlayKeys.devSettings += "vinyldns.base-version" -> (version in ThisBuild).value,
-
     // adds an extra classpath to the portal loading so we can externalize jars, make sure to create the lib_extra
     // directory and lay down any dependencies that are required when deploying
     scriptClasspath in bashScriptDefines ~= (cp => cp :+ "lib_extra/*"),
     mainClass in reStart := None,
-
     // we need to filter out unused for the portal as the play framework needs a lot of unused things
-    scalacOptions ~= { opts => opts.filterNot(p => p.contains("unused")) },
-
+    scalacOptions ~= { opts =>
+      opts.filterNot(p => p.contains("unused"))
+    },
     // runs our prepare portal process
     preparePortal := {
       import scala.sys.process._
       "./modules/portal/prepare-portal.sh" !
     },
-
     checkJsHeaders := {
       import scala.sys.process._
       "./bin/add-license-headers.sh -d=modules/portal/public/lib -f=js -c" !
     },
-
     createJsHeaders := {
      import scala.sys.process._
      "./bin/add-license-headers.sh -d=modules/portal/public/lib -f=js" !
    },
-
     // change the name of the output to portal.zip
     packageName in Universal := "portal"
   )
@@ -365,8 +303,16 @@ lazy val docSettings = Seq(
   mdocIn := (sourceDirectory in Compile).value / "mdoc",
   micrositeCssDirectory := (resourceDirectory in Compile).value / "microsite" / "css",
   micrositeCompilingDocsTool := WithMdoc,
-  micrositeFavicons := Seq(MicrositeFavicon("favicon16x16.png", "16x16"), MicrositeFavicon("favicon32x32.png", "32x32")),
-  micrositeEditButton := Some(MicrositeEditButton("Improve this page", "/edit/master/modules/docs/src/main/mdoc/{{ page.path }}")),
+  micrositeFavicons := Seq(
+    MicrositeFavicon("favicon16x16.png", "16x16"),
+    MicrositeFavicon("favicon32x32.png", "32x32")
+  ),
+  micrositeEditButton := Some(
+    MicrositeEditButton(
+      "Improve this page",
+      "/edit/master/modules/docs/src/main/mdoc/{{ page.path }}"
+    )
+  ),
   micrositeFooterText := None,
   micrositeHighlightTheme := "atom-one-light",
   includeFilter in makeSite := "*.html" | "*.css" | "*.png" | "*.jpg" | "*.jpeg" | "*.gif" | "*.js" | "*.swf" | "*.md" | "*.webm" | "*.ico" | "CNAME" | "*.yml" | "*.svg" | "*.json" | "*.csv"
@@ -384,8 +330,10 @@ lazy val setSonatypeReleaseSettings =
 ReleaseStep(action = oldState => {
   val v = extracted.get(Keys.version)
   val snap = v.endsWith("SNAPSHOT")
   if (!snap) {
-    val publishToSettings = Some("releases" at "https://oss.sonatype.org/" + "service/local/staging/deploy/maven2")
-    val newState = extracted.appendWithSession(Seq(publishTo in core := publishToSettings),
oldState) + val publishToSettings = + Some("releases".at("https://oss.sonatype.org/" + "service/local/staging/deploy/maven2")) + val newState = + extracted.appendWithSession(Seq(publishTo in core := publishToSettings), oldState) // create sonatypeReleaseCommand with releaseSonatype step val sonatypeCommand = Command.command("sonatypeReleaseCommand") { @@ -397,8 +345,10 @@ lazy val setSonatypeReleaseSettings = ReleaseStep(action = oldState => { newState.copy(definedCommands = newState.definedCommands :+ sonatypeCommand) } else { - val publishToSettings = Some("snapshots" at "https://oss.sonatype.org/" + "content/repositories/snapshots") - val newState = extracted.appendWithSession(Seq(publishTo in core := publishToSettings), oldState) + val publishToSettings = + Some("snapshots".at("https://oss.sonatype.org/" + "content/repositories/snapshots")) + val newState = + extracted.appendWithSession(Seq(publishTo in core := publishToSettings), oldState) // create sonatypeReleaseCommand without releaseSonatype step val sonatypeCommand = Command.command("sonatypeReleaseCommand") { @@ -437,21 +387,24 @@ releaseProcess := finalReleaseStage // Let's do things in parallel! 
-addCommandAlias("validate", "; root/clean; " + - "all core/headerCheck core/test:headerCheck " + - "api/headerCheck api/test:headerCheck api/it:headerCheck " + - "mysql/headerCheck mysql/test:headerCheck mysql/it:headerCheck " + - "r53/headerCheck r53/test:headerCheck r53/it:headerCheck " + - "sqs/headerCheck sqs/test:headerCheck sqs/it:headerCheck " + - "portal/headerCheck portal/test:headerCheck; " + - "portal/createJsHeaders;portal/checkJsHeaders;" + - "root/compile;root/test:compile;root/it:compile" +addCommandAlias( + "validate", + "; root/clean; " + + "all core/headerCheck core/test:headerCheck " + + "api/headerCheck api/test:headerCheck api/it:headerCheck " + + "mysql/headerCheck mysql/test:headerCheck mysql/it:headerCheck " + + "r53/headerCheck r53/test:headerCheck r53/it:headerCheck " + + "sqs/headerCheck sqs/test:headerCheck sqs/it:headerCheck " + + "portal/headerCheck portal/test:headerCheck; " + + "portal/createJsHeaders;portal/checkJsHeaders;" + + "root/compile;root/test:compile;root/it:compile" ) -addCommandAlias("verify", "; project root; killDocker; dockerComposeUp; " + - "project root; coverage; " + - "all test it:test; " + - "project root; coverageReport; coverageAggregate; killDocker" +addCommandAlias( + "verify", + "; project root; coverage; " + + "all test it:test; " + + "project root; coverageReport; coverageAggregate" ) // Build the artifacts for release diff --git a/build/README.md b/build/README.md index 51ddb68f6..b913d5d62 100644 --- a/build/README.md +++ b/build/README.md @@ -50,8 +50,6 @@ The build will generate several VinylDNS docker images that are used to deploy i - `vinyldns/api` - this is the heart of the VinylDNS system, the backend API - `vinyldns/portal` - the VinylDNS web UI -- `vinyldns/test-bind9` - a DNS server that is configured to support running the functional tests -- `vinyldns/test` - a container that will execute functional tests, and exit success or failure when the tests are complete ### vinyldns/api @@ -80,25 
+78,3 @@ it is set as part of the container build any production environments. Typically, you will add your own `application.conf` file in here with your settings. - `/opt/docker/lib_extra/` - if you need to have additional jar files available to your VinylDNS instance. Rarely used, but if you want to bring your own message queue or database you can put the `jar` files there - -### vinyldns/test-bind9 - -This pulls correct DNS configuration to run func tests. You can largely disregard what is in here - -### vinyldns/test - -This is used to run functional tests against a vinyldns instance. **This is very useful for verifying -your environment as part of doing an upgrade.** By default, it will run against a local docker-compose setup. - -**Environment Variables** -- `VINYLDNS_URL` - the url to the vinyldns you will test against -- `DNS_IP` - the IP address to the `vinyldns/test-bind9` container that you will use for test purposes -- `TEST_PATTERN` - the actual functional test you want to run. *Important, set to empty string to run -ALL test; otherwise, omit the environment variable when you run to just run smoke tests*. - -**Example** - -This example will run all functional tests on the given VinylDNS url and DNS IP address -`docker run -e VINYLDNS_URL="https://my.vinyldns.example.com" -e DNS_IP="1.2.3.4" -e TEST_PATTERN=""` - - diff --git a/build/docker-release.sh b/build/docker-release.sh index 6d0f0a7c8..8487f90e9 100755 --- a/build/docker-release.sh +++ b/build/docker-release.sh @@ -107,15 +107,11 @@ if [ $DO_BUILD -eq 1 ]; then fi if [ $? 
-eq 0 ]; then - docker tag vinyldns/test-bind9:$VINYLDNS_VERSION $REPOSITORY/vinyldns/test-bind9:$VINYLDNS_VERSION - docker tag vinyldns/test:$VINYLDNS_VERSION $REPOSITORY/vinyldns/test:$VINYLDNS_VERSION docker tag vinyldns/api:$VINYLDNS_VERSION $REPOSITORY/vinyldns/api:$VINYLDNS_VERSION docker tag vinyldns/portal:$VINYLDNS_VERSION $REPOSITORY/vinyldns/portal:$VINYLDNS_VERSION if [ $TAG_LATEST -eq 1 ]; then echo "Tagging latest..." - docker tag vinyldns/test-bind9:$VINYLDNS_VERSION $REPOSITORY/vinyldns/test-bind9:latest - docker tag vinyldns/test:$VINYLDNS_VERSION $REPOSITORY/vinyldns/test:latest docker tag vinyldns/api:$VINYLDNS_VERSION $REPOSITORY/vinyldns/api:latest docker tag vinyldns/portal:$VINYLDNS_VERSION $REPOSITORY/vinyldns/portal:latest fi @@ -123,15 +119,11 @@ if [ $DO_BUILD -eq 1 ]; then fi if [ $DOCKER_PUSH -eq 1 ]; then - docker push $REPOSITORY/vinyldns/test-bind9:$VINYLDNS_VERSION - docker push $REPOSITORY/vinyldns/test:$VINYLDNS_VERSION docker push $REPOSITORY/vinyldns/api:$VINYLDNS_VERSION docker push $REPOSITORY/vinyldns/portal:$VINYLDNS_VERSION if [ $TAG_LATEST -eq 1 ]; then echo "Pushing latest..." 
- docker push $REPOSITORY/vinyldns/test-bind9:latest - docker push $REPOSITORY/vinyldns/test:latest docker push $REPOSITORY/vinyldns/api:latest docker push $REPOSITORY/vinyldns/portal:latest fi diff --git a/build/docker/test/Dockerfile b/build/docker/test/Dockerfile deleted file mode 100644 index d05e261fd..000000000 --- a/build/docker/test/Dockerfile +++ /dev/null @@ -1,28 +0,0 @@ -FROM alpine/git:1.0.7 as gitcheckout - -ARG BRANCH=master - -RUN git clone -b ${BRANCH} --single-branch --depth 1 https://github.com/vinyldns/vinyldns.git /vinyldns - -FROM python:2.7.16-alpine3.9 - -RUN apk add --update --no-cache bind-tools netcat-openbsd bash curl - -# The run script is what actually runs our func tests -COPY run.sh /app/run.sh -COPY run-tests.py /app/run-tests.py - -RUN chmod a+x /app/run.sh && chmod a+x /app/run-tests.py - -# Copy over the functional test directory, this must have been copied into the build context previous to this building! -COPY --from=gitcheckout /vinyldns/modules/api/functional_test/ /app/ - -# Install our func test requirements -RUN pip install --index-url https://pypi.python.org/simple/ -r /app/requirements.txt - -ENV VINYLDNS_URL="" -ENV DNS_IP="" -ENV TEST_PATTERN="test_verify_production" - -# set the entry point for the container to start vinyl, specify the config resource -ENTRYPOINT ["/app/run.sh"] diff --git a/build/docker/test/run-tests.py b/build/docker/test/run-tests.py deleted file mode 100644 index e4e464be0..000000000 --- a/build/docker/test/run-tests.py +++ /dev/null @@ -1,16 +0,0 @@ -#!/usr/bin/env python -import os -import sys - -basedir = os.path.dirname(os.path.realpath(__file__)) - -report_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), '../target/pytest_reports') -if not os.path.exists(report_dir): - os.system('mkdir -p ' + report_dir) - -import pytest - -result = 1 -result = pytest.main(list(sys.argv[1:])) - -sys.exit(result) diff --git a/build/docker/test/run.sh b/build/docker/test/run.sh deleted file mode 
100755 index 5128b2acb..000000000 --- a/build/docker/test/run.sh +++ /dev/null @@ -1,76 +0,0 @@ -#!/usr/bin/env bash - -# Assume defaults of local docker-compose if not set -if [ -z "${VINYLDNS_URL}" ]; then - VINYLDNS_URL="http://vinyldns-api:9000" -fi -if [ -z "${DNS_IP}" ]; then - DNS_IP=$(dig +short vinyldns-bind9) -fi - -# Assume all tests if not specified -if [ -z "${TEST_PATTERN}" ]; then - TEST_PATTERN= -else - TEST_PATTERN="-k ${TEST_PATTERN}" -fi - -echo "Waiting for API to be ready at ${VINYLDNS_URL} ..." -DATA="" -RETRY=60 -SLEEP_DURATION=1 -while [ "$RETRY" -gt 0 ] -do - DATA=$(curl -I -s "${VINYLDNS_URL}/ping" -o /dev/null -w "%{http_code}") - if [ $? -eq 0 ] - then - break - else - echo "Retrying" >&2 - - let RETRY-=1 - sleep "$SLEEP_DURATION" - - if [ "$RETRY" -eq 0 ] - then - echo "Exceeded retries waiting for VinylDNS API to be ready, failing" - exit 1 - fi - fi -done - -echo "Running live tests against ${VINYLDNS_URL} and DNS server ${DNS_IP}" - -cd /app - -# Cleanup any errant cached file copies -find . -name "*.pyc" -delete -find . -name "__pycache__" -delete - -ls -al - -# -m plays havoc with -k, using variables is a headache, so doing this by hand -# run parallel tests first (not serial) -set -x -./run-tests.py live_tests -n2 -v -m "not skip_production and not serial" ${TEST_PATTERN} --url=${VINYLDNS_URL} --dns-ip=${DNS_IP} --teardown=False -ret1=$? - -# IMPORTANT! pytest exists status code 5 if no tests are run, force that to 0 -if [ "$ret1" = 5 ]; then - echo "No tests collected." - ret1=0 -fi - -./run-tests.py live_tests -n0 -v -m "not skip_production and serial" ${TEST_PATTERN} --url=${VINYLDNS_URL} --dns-ip=${DNS_IP} --teardown=True -ret2=$? -if [ "$ret2" = 5 ]; then - echo "No tests collected." 
- ret2=0 -fi - -if [ $ret1 -ne 0 ] || [ $ret2 -ne 0 ]; then - exit 1 -else - exit 0 -fi - diff --git a/docker/api/Dockerfile b/docker/api/Dockerfile index a65b93ad4..9a0f47df3 100644 --- a/docker/api/Dockerfile +++ b/docker/api/Dockerfile @@ -10,7 +10,6 @@ RUN chmod a+x /app/run.sh COPY docker.conf /app/docker.conf EXPOSE 9000 -EXPOSE 2551 # set the entry point for the container to start vinyl, specify the config resource ENTRYPOINT ["/app/run.sh"] diff --git a/docker/api/docker.conf b/docker/api/docker.conf index 35ca95bfe..02d4ecdd9 100644 --- a/docker/api/docker.conf +++ b/docker/api/docker.conf @@ -106,17 +106,17 @@ vinyldns { settings { # AWS access key and secret. - access-key = "x" + access-key = "test" access-key = ${?AWS_ACCESS_KEY} - secret-key = "x" + secret-key = "test" secret-key = ${?AWS_SECRET_ACCESS_KEY} # Regional endpoint to make your requests (eg. 'us-west-2', 'us-east-1', etc.). This is the region where your queue is housed. - signing-region = "x" + signing-region = "us-east-1" signing-region = ${?SQS_REGION} # Endpoint to access queue - service-endpoint = "http://vinyldns-elasticmq:9324/" + service-endpoint = "http://vinyldns-localstack:19007/" service-endpoint = ${?SQS_ENDPOINT} # Queue name. Should be used in conjunction with service endpoint, rather than using a queue url which is subject to change. 
diff --git a/docker/api/run.sh b/docker/api/run.sh index c9985fd65..11e0a80eb 100755 --- a/docker/api/run.sh +++ b/docker/api/run.sh @@ -3,20 +3,10 @@ # gets the docker-ized ip address, sets it to an environment variable export APP_HOST=`ip addr show eth0 | grep 'inet ' | awk '{print $2}' | cut -f1 -d'/'` -export DYNAMO_ADDRESS="vinyldns-dynamodb" -export DYNAMO_PORT=8000 -export JOURNAL_HOST="vinyldns-dynamodb" -export JOURNAL_PORT=8000 export MYSQL_ADDRESS="vinyldns-mysql" export MYSQL_PORT=3306 export JDBC_USER=root export JDBC_PASSWORD=pass -export DNS_ADDRESS="vinyldns-bind9" -export DYNAMO_KEY="local" -export DYNAMO_SECRET="local" -export DYNAMO_TABLE_PREFIX="" -export ELASTICMQ_ADDRESS="vinyldns-elasticmq" -export DYNAMO_ENDPOINT="http://${DYNAMO_ADDRESS}:${DYNAMO_PORT}" export JDBC_URL="jdbc:mariadb://${MYSQL_ADDRESS}:${MYSQL_PORT}/vinyldns?user=${JDBC_USER}&password=${JDBC_PASSWORD}" export JDBC_MIGRATION_URL="jdbc:mariadb://${MYSQL_ADDRESS}:${MYSQL_PORT}/?user=${JDBC_USER}&password=${JDBC_PASSWORD}" @@ -27,7 +17,7 @@ RETRY=40 SLEEP_DURATION=1 while [ "$RETRY" -gt 0 ] do - DATA=$(nc -vzw1 vinyldns-mysql 3306) + DATA=$(nc -vzw1 ${MYSQL_ADDRESS} ${MYSQL_PORT}) if [ $? -eq 0 ] then break diff --git a/docker/bind9/README.md b/docker/bind9/README.md index ca9bbdd98..fbf5b1c96 100644 --- a/docker/bind9/README.md +++ b/docker/bind9/README.md @@ -18,6 +18,5 @@ When used in a container, or to run `named`, the files in this directory should | Directory | Target | |:---|:---| -| `etc/named.conf.local` | `/etc/bind/` | -| `etc/named.partition*.conf` | `/var/bind/config/` | +| `etc/named.conf.*` | `/etc/bind/` | | `zones/` | `/var/bind/` | diff --git a/docker/bind9/etc/named.conf.local b/docker/bind9/etc/named.conf.local index 22ba7a61a..371b8e857 100755 --- a/docker/bind9/etc/named.conf.local +++ b/docker/bind9/etc/named.conf.local @@ -29,7 +29,7 @@ key "vinyldns-sha512." 
{ secret "xfKA0DYb88tiUGND+cWddwUg3/SugYSsdvCfBOJ1jr8MEdgbVRyrlVDEXLsfTUGorQ3ShENdymw2yw+rTr+lwA=="; }; -include "/var/bind/config/named.partition1.conf"; -include "/var/bind/config/named.partition2.conf"; -include "/var/bind/config/named.partition3.conf"; -include "/var/bind/config/named.partition4.conf"; +include "/etc/bind/named.conf.partition1"; +include "/etc/bind/named.conf.partition2"; +include "/etc/bind/named.conf.partition3"; +include "/etc/bind/named.conf.partition4"; diff --git a/docker/bind9/etc/named.partition1.conf b/docker/bind9/etc/named.conf.partition1 similarity index 100% rename from docker/bind9/etc/named.partition1.conf rename to docker/bind9/etc/named.conf.partition1 diff --git a/docker/bind9/etc/named.partition2.conf b/docker/bind9/etc/named.conf.partition2 similarity index 100% rename from docker/bind9/etc/named.partition2.conf rename to docker/bind9/etc/named.conf.partition2 diff --git a/docker/bind9/etc/named.partition3.conf b/docker/bind9/etc/named.conf.partition3 similarity index 100% rename from docker/bind9/etc/named.partition3.conf rename to docker/bind9/etc/named.conf.partition3 diff --git a/docker/bind9/etc/named.partition4.conf b/docker/bind9/etc/named.conf.partition4 similarity index 100% rename from docker/bind9/etc/named.partition4.conf rename to docker/bind9/etc/named.conf.partition4 diff --git a/docker/docker-compose-func-test-testbind9.yml b/docker/docker-compose-func-test-testbind9.yml deleted file mode 100644 index d4e3b6704..000000000 --- a/docker/docker-compose-func-test-testbind9.yml +++ /dev/null @@ -1,60 +0,0 @@ -version: "3.0" -services: - mysql: - image: "mysql:5.7" - env_file: - .env - container_name: "vinyldns-mysql" - ports: - - "19002:3306" - logging: - driver: none - - bind9: - image: "vinyldns/test-bind9:0.9.4" - container_name: "vinyldns-bind9" - ports: - - "19001:53/tcp" - - "19001:53/udp" - logging: - driver: none - - localstack: - image: localstack/localstack:0.10.4 - container_name: "vinyldns-localstack" 
- ports: - - "19000:19000" - - "19006:19006" - - "19007:19007" - - "19009:19009" - environment: - - SERVICES=sns:19006,sqs:19007,route53:19009 - - START_WEB=0 - - HOSTNAME_EXTERNAL=vinyldns-localstack - - # this file is copied into the target directory to get the jar! won't run in place as is! - api: - build: - context: api - env_file: - .env - container_name: "vinyldns-api" - ports: - - "9000:9000" - depends_on: - - mysql - - bind9 - - localstack - logging: - driver: none - - functest: - build: - context: functest - env_file: - .env - environment: - - PAR_CPU=${PAR_CPU} - container_name: "vinyldns-functest" - depends_on: - - api diff --git a/docker/docker-compose-func-test.yml b/docker/docker-compose-func-test.yml deleted file mode 100644 index d24194acc..000000000 --- a/docker/docker-compose-func-test.yml +++ /dev/null @@ -1,87 +0,0 @@ -version: "3.5" - -services: - - # this file is copied into the target directory to get the jar! won't run in place as is! - api: - build: - context: api - env_file: - .env - container_name: "vinyldns-api" - ports: - - "9000:9000" - depends_on: - - mysql - - bind9 - - localstack - networks: - vinyldns: - ipv4_address: 172.10.10.2 - - mysql: - image: "mysql:5.7" - env_file: - .env - container_name: "vinyldns-mysql" - ports: - - "19002:3306" - logging: - driver: none - networks: - vinyldns: - ipv4_address: 172.10.10.3 - - localstack: - image: localstack/localstack:0.10.4 - container_name: "vinyldns-localstack" - ports: - - "19006:19006" - - "19007:19007" - - "19009:19009" - environment: - - SERVICES=sns:19006,sqs:19007,route53:19009 - - START_WEB=0 - - HOSTNAME_EXTERNAL=vinyldns-localstack - networks: - vinyldns: - ipv4_address: 172.10.10.4 - - bind9: - image: "vinyldns/bind9:0.0.4" - env_file: - .env - container_name: "vinyldns-bind9" - volumes: - - ./bind9/etc:/var/cache/bind/config - - ./bind9/zones:/var/cache/bind/zones - ports: - - "19001:53/tcp" - - "19001:53/udp" - logging: - driver: none - networks: - vinyldns: - 
ipv4_address: 172.10.10.10 - - functest: - build: - context: functest - env_file: - .env - environment: - - PAR_CPU=${PAR_CPU} - container_name: "vinyldns-functest" - depends_on: - - api - networks: - - vinyldns - -networks: - # Custom network so that we have some control over IP space and deterministic container IPs - vinyldns: - name: vinyldns - driver: bridge - ipam: - config: - - subnet: 172.10.10.0/24 diff --git a/docker/docker-compose-quick-start.yml b/docker/docker-compose-quick-start.yml index 28990de85..39fcefda0 100644 --- a/docker/docker-compose-quick-start.yml +++ b/docker/docker-compose-quick-start.yml @@ -9,7 +9,7 @@ services: - "19002:3306" bind9: - image: "vinyldns/bind9:0.0.4" + image: "vinyldns/bind9:0.0.5" env_file: .env.quickstart container_name: "vinyldns-bind9" diff --git a/docker/docker-compose.yml b/docker/docker-compose.yml index 0c741a36a..e11a52fba 100644 --- a/docker/docker-compose.yml +++ b/docker/docker-compose.yml @@ -8,7 +8,7 @@ services: - "19002:3306" bind9: - image: vinyldns/bind9:0.0.4 + image: vinyldns/bind9:0.0.5 env_file: .env ports: diff --git a/docker/elasticmq/Dockerfile b/docker/elasticmq/Dockerfile deleted file mode 100644 index a9f515970..000000000 --- a/docker/elasticmq/Dockerfile +++ /dev/null @@ -1,10 +0,0 @@ -FROM alpine:3.2 -FROM anapsix/alpine-java:8_server-jre - -EXPOSE 9324 - -COPY run.sh /elasticmq/run.sh -COPY custom.conf /elasticmq/custom.conf -COPY elasticmq-server-0.13.2.jar /elasticmq/server.jar - -ENTRYPOINT ["/elasticmq/run.sh"] diff --git a/docker/elasticmq/custom.conf b/docker/elasticmq/custom.conf deleted file mode 100644 index 83a3a86c5..000000000 --- a/docker/elasticmq/custom.conf +++ /dev/null @@ -1,22 +0,0 @@ -node-address { - protocol = http - host = "localhost" - host = ${?QUEUE_HOST} - port = 9324 - context-path = "" -} - -rest-sqs { - enabled = true - bind-port = 9324 - bind-hostname = "0.0.0.0" - // Possible values: relaxed, strict - sqs-limits = relaxed -} - -queues { - vinyldns { - 
defaultVisibilityTimeout = 10 seconds - receiveMessageWait = 0 seconds - } -} diff --git a/docker/elasticmq/run.sh b/docker/elasticmq/run.sh deleted file mode 100755 index f498d5992..000000000 --- a/docker/elasticmq/run.sh +++ /dev/null @@ -1,8 +0,0 @@ -#!/usr/bin/env bash - -# gets the docker-ized ip address, sets it to an environment variable -export APP_HOST=`ip addr show eth0 | grep 'inet ' | awk '{print $2}' | cut -f1 -d'/'` - -echo "APP HOST = ${APP_HOST}" - -java -Djava.net.preferIPv4Stack=true -Dconfig.file=/elasticmq/custom.conf -jar /elasticmq/server.jar diff --git a/docker/email/.gitignore b/docker/email/.gitignore deleted file mode 100644 index d6b7ef32c..000000000 --- a/docker/email/.gitignore +++ /dev/null @@ -1,2 +0,0 @@ -* -!.gitignore diff --git a/docker/functest/Dockerfile b/docker/functest/Dockerfile deleted file mode 100644 index dad66350d..000000000 --- a/docker/functest/Dockerfile +++ /dev/null @@ -1,23 +0,0 @@ -FROM python:2.7.15-stretch - -# Install dns utils so we can run dig -RUN apt-get update && apt-get install dnsutils -y - -# The run script is what actually runs our func tests -COPY run.sh /app/run.sh -RUN chmod a+x /app/run.sh - -COPY run-tests.py /app/run-tests.py -RUN chmod a+x /app/run-tests.py - -# Copy over the functional test directory, this must have been copied into the build context previous to this building! 
-ADD functional_test /app - -# Install our func test requirements -RUN pip install --index-url https://pypi.python.org/simple/ -r /app/requirements.txt - -# Specifies how many CPUs to use for func tests; the more the better or specifiy "auto" for optimal results -ENV PAR_CPU=2 - -# set the entry point for the container to start vinyl, specify the config resource -ENTRYPOINT ["/app/run.sh"] diff --git a/docker/functest/run-tests.py b/docker/functest/run-tests.py deleted file mode 100644 index 1d270a6d5..000000000 --- a/docker/functest/run-tests.py +++ /dev/null @@ -1,18 +0,0 @@ -#!/usr/bin/env python -import os -import sys - -basedir = os.path.dirname(os.path.realpath(__file__)) - -report_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), '../target/pytest_reports') -if not os.path.exists(report_dir): - os.system('mkdir -p ' + report_dir) - -import pytest - -result = 1 -result = pytest.main(list(sys.argv[1:])) - -sys.exit(result) - - diff --git a/docker/functest/run.sh b/docker/functest/run.sh deleted file mode 100755 index 812c78c8c..000000000 --- a/docker/functest/run.sh +++ /dev/null @@ -1,81 +0,0 @@ -#!/usr/bin/env bash - -# Assume defaults of local docker-compose if not set -if [ -z "${VINYLDNS_URL}" ]; then - VINYLDNS_URL="http://vinyldns-api:9000" -fi -if [ -z "${DNS_IP}" ]; then - DNS_IP=$(dig +short vinyldns-bind9) -fi - -# Assume all tests if not specified -if [ -z "${TEST_PATTERN}" ]; then - TEST_PATTERN= -else - TEST_PATTERN="-k ${TEST_PATTERN}" -fi - -if [ -z "${PAR_CPU}" ]; then - export PAR_CPU=2 -fi - -echo "Waiting for API to be ready at ${VINYLDNS_URL} ..." -DATA="" -RETRY=60 -SLEEP_DURATION=1 -while [ "$RETRY" -gt 0 ] -do - DATA=$(curl -I -s "${VINYLDNS_URL}/ping" -o /dev/null -w "%{http_code}") - if [ $? 
-eq 0 ] - then - break - else - echo "Retrying" >&2 - - let RETRY-=1 - sleep "$SLEEP_DURATION" - - if [ "$RETRY" -eq 0 ] - then - echo "Exceeded retries waiting for VinylDNS API to be ready, failing" - exit 1 - fi - fi -done - -echo "Running live tests against ${VINYLDNS_URL} and DNS server ${DNS_IP}" - -cd /app - -# Cleanup any errant cached file copies -find . -name "*.pyc" -delete -find . -name "__pycache__" -delete - -result=0 -# If PROD_ENV is not true, we are in a local docker environment so do not skip anything -if [ "${PROD_ENV}" = "true" ]; then - # -m plays havoc with -k, using variables is a headache, so doing this by hand - # run parallel tests first (not serial) - echo "./run-tests.py live_tests -n${PAR_CPU} -v -m \"not skip_production and not serial\" -v --url=${VINYLDNS_URL} --dns-ip=${DNS_IP} ${TEST_PATTERN} --teardown=False" - ./run-tests.py live_tests -n${PAR_CPU} -v -m "not skip_production and not serial" --url=${VINYLDNS_URL} --dns-ip=${DNS_IP} ${TEST_PATTERN} --teardown=False - result=$? - if [ $result -eq 0 ]; then - # run serial tests second (serial marker) - echo "./run-tests.py live_tests -n0 -v -m \"not skip_production and serial\" -v --url=${VINYLDNS_URL} --dns-ip=${DNS_IP} ${TEST_PATTERN} --teardown=True" - ./run-tests.py live_tests -n0 -v -m "not skip_production and serial" --url=${VINYLDNS_URL} --dns-ip=${DNS_IP} ${TEST_PATTERN} --teardown=True - result=$? - fi -else - # run parallel tests first (not serial) - echo "./run-tests.py live_tests -n${PAR_CPU} -v -m \"not serial\" --url=${VINYLDNS_URL} --dns-ip=${DNS_IP} ${TEST_PATTERN} --teardown=False" - ./run-tests.py live_tests -n${PAR_CPU} -v -m "not serial" --url=${VINYLDNS_URL} --dns-ip=${DNS_IP} ${TEST_PATTERN} --teardown=False - result=$? 
- if [ $result -eq 0 ]; then - # run serial tests second (serial marker) - echo "./run-tests.py live_tests -n0 -v -m \"serial\" --url=${VINYLDNS_URL} --dns-ip=${DNS_IP} ${TEST_PATTERN} --teardown=True" - ./run-tests.py live_tests -n0 -v -m "serial" --url=${VINYLDNS_URL} --dns-ip=${DNS_IP} ${TEST_PATTERN} --teardown=True - result=$? - fi -fi - -exit $result diff --git a/modules/api/functional_test/Dockerfile b/modules/api/functional_test/Dockerfile deleted file mode 100644 index 93a100a00..000000000 --- a/modules/api/functional_test/Dockerfile +++ /dev/null @@ -1,33 +0,0 @@ -# Build VinylDNS API if the JAR doesn't already exist -FROM vinyldns/build:base-api as vinyldns-api -COPY modules/api/functional_test/docker.conf modules/api/functional_test/vinyldns*.jar /opt/vinyldns/ -COPY . /build/ -WORKDIR /build - -## Run the build if we don't already have a vinyldns.jar -RUN if [ ! -f /opt/vinyldns/vinyldns.jar ]; then \ - env SBT_OPTS="-XX:+UseConcMarkSweepGC -Xmx4G -Xms1G" \ - sbt -Dbuild.scalafmtOnCompile=false -Dbuild.lintOnCompile=fase ";project api;coverageOff;assembly" \ - && cp modules/api/target/scala-2.12/vinyldns.jar /opt/vinyldns/; \ - fi - -# Build the testing image, copying data from `vinyldns-api` -FROM vinyldns/build:base-test -SHELL ["/bin/bash","-c"] -COPY --from=vinyldns-api /opt/vinyldns /opt/vinyldns - -# Local bind server files -COPY docker/bind9/etc/named.conf.local /etc/bind/ -COPY docker/bind9/etc/*.conf /var/bind/config/ -COPY docker/bind9/zones/ /var/bind/ -RUN named-checkconf - -# Copy over the functional tests -COPY modules/api/functional_test /functional_test - -ENTRYPOINT ["/bin/bash", "-c", "/initialize.sh && \ - (java -Dconfig.file=/opt/vinyldns/docker.conf -jar /opt/vinyldns/vinyldns.jar &> /opt/vinyldns/vinyldns.log &) && \ - echo -n 'Starting VinylDNS API..' && \ - timeout 30s grep -q 'STARTED SUCCESSFULLY' <(timeout 30s tail -f /opt/vinyldns/vinyldns.log) && \ - echo 'done.' 
&& \ - /bin/bash"] \ No newline at end of file diff --git a/modules/api/functional_test/Makefile b/modules/api/functional_test/Makefile deleted file mode 100644 index ed3bd905c..000000000 --- a/modules/api/functional_test/Makefile +++ /dev/null @@ -1,25 +0,0 @@ -SHELL=bash -ROOT_DIR:=$(shell dirname $(realpath $(lastword $(MAKEFILE_LIST)))) - -# Check that the required version of make is being used -REQ_MAKE_VER:=3.82 -ifneq ($(REQ_MAKE_VER),$(firstword $(sort $(MAKE_VERSION) $(REQ_MAKE_VER)))) - $(error The version of MAKE $(REQ_MAKE_VER) or higher is required; you are running $(MAKE_VERSION)) -endif - -.ONESHELL: - -.PHONY: all build run - -all: build run - -build: - @set -euo pipefail - trap 'if [ -f modules/api/functional_test/vinyldns.jar ]; then rm modules/api/functional_test/vinyldns.jar; fi' EXIT - cd ../../.. - if [ -f modules/api/target/scala-2.12/vinyldns.jar ]; then cp modules/api/target/scala-2.12/vinyldns.jar modules/api/functional_test/vinyldns.jar; fi - docker build -t vinyldns-test -f modules/api/functional_test/Dockerfile . 
- -run: - @set -euo pipefail - docker run -it --rm -p 9000:9000 -p 19001:53/tcp -p 19001:53/udp vinyldns-test \ No newline at end of file diff --git a/modules/api/src/it/scala/vinyldns/api/backend/dns/DnsBackendIntegrationSpec.scala b/modules/api/src/it/scala/vinyldns/api/backend/dns/DnsBackendIntegrationSpec.scala index 01650dfd8..22e5fa22a 100644 --- a/modules/api/src/it/scala/vinyldns/api/backend/dns/DnsBackendIntegrationSpec.scala +++ b/modules/api/src/it/scala/vinyldns/api/backend/dns/DnsBackendIntegrationSpec.scala @@ -59,7 +59,7 @@ class DnsBackendIntegrationSpec extends AnyWordSpec with Matchers { config.tsigUsage ) val testZone = Zone( - "open.", + "open1.", "test@test.com", connection = Some(testConnection) ) diff --git a/modules/api/src/it/scala/vinyldns/api/domain/zone/ZoneViewLoaderIntegrationSpec.scala b/modules/api/src/it/scala/vinyldns/api/domain/zone/ZoneViewLoaderIntegrationSpec.scala index 345f41372..281939398 100644 --- a/modules/api/src/it/scala/vinyldns/api/domain/zone/ZoneViewLoaderIntegrationSpec.scala +++ b/modules/api/src/it/scala/vinyldns/api/domain/zone/ZoneViewLoaderIntegrationSpec.scala @@ -1,3 +1,4 @@ + /* * Copyright 2018 Comcast Cable Communications Management, LLC * @@ -35,7 +36,7 @@ class ZoneViewLoaderIntegrationSpec extends AnyWordSpec with Matchers { "ZoneViewLoader" should { "return a ZoneView upon success" in { - val zone = Zone("vinyldns.", "test@test.com") + val zone = Zone("vinyldns1.", "test@test.com") DnsZoneViewLoader(zone, backendResolver.resolve(zone), 10000) .load() .unsafeRunSync() shouldBe a[ZoneView] @@ -44,7 +45,7 @@ class ZoneViewLoaderIntegrationSpec extends AnyWordSpec with Matchers { "return a failure if the transfer connection is bad" in { assertThrows[IllegalArgumentException] { val zone = Zone( - "vinyldns.", + "vinyldns1.", "bad@transfer.connection", connection = Some( ZoneConnection( @@ -77,7 +78,7 @@ class ZoneViewLoaderIntegrationSpec extends AnyWordSpec with Matchers { "return a failure if the zone is 
larger than the max zone size" in { assertThrows[ZoneTooLargeError] { val zone = Zone( - "vinyldns.", + "vinyldns1.", "test@test.com", connection = Some( ZoneConnection( @@ -86,6 +87,14 @@ class ZoneViewLoaderIntegrationSpec extends AnyWordSpec with Matchers { "nzisn+4G2ldMn0q1CV3vsg==", "127.0.0.1:19001" ) + ), + transferConnection = Some( + ZoneConnection( + "vinyldns.", + "vinyldns.", + "nzisn+4G2ldMn0q1CV3vsg==", + "127.0.0.1:19001" + ) ) ) DnsZoneViewLoader(zone, backendResolver.resolve(zone), 1) diff --git a/modules/api/src/it/scala/vinyldns/api/notifier/sns/SnsNotifierIntegrationSpec.scala b/modules/api/src/it/scala/vinyldns/api/notifier/sns/SnsNotifierIntegrationSpec.scala index 508d15a04..5677647e8 100644 --- a/modules/api/src/it/scala/vinyldns/api/notifier/sns/SnsNotifierIntegrationSpec.scala +++ b/modules/api/src/it/scala/vinyldns/api/notifier/sns/SnsNotifierIntegrationSpec.scala @@ -16,25 +16,23 @@ package vinyldns.api.notifier.sns +import cats.effect.IO +import com.amazonaws.auth.{AWSStaticCredentialsProvider, BasicAWSCredentials} +import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration +import com.amazonaws.services.sns.AmazonSNSClientBuilder +import com.amazonaws.services.sqs.AmazonSQSClientBuilder import com.typesafe.config.{Config, ConfigFactory} -import vinyldns.core.notifier._ -import vinyldns.api.MySqlApiIntegrationSpec -import vinyldns.mysql.MySqlIntegrationSpec +import org.joda.time.DateTime +import org.json4s.DefaultFormats +import org.json4s.jackson.JsonMethods._ import org.scalatest.matchers.should.Matchers import org.scalatest.wordspec.AnyWordSpecLike -import vinyldns.core.domain.batch._ -import vinyldns.core.domain.record.RecordType -import vinyldns.core.domain.record.AData -import org.joda.time.DateTime +import vinyldns.api.MySqlApiIntegrationSpec import vinyldns.core.TestMembershipData._ -import cats.effect.IO -import com.amazonaws.services.sns.AmazonSNSClientBuilder -import 
com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration -import com.amazonaws.services.sqs.AmazonSQSClientBuilder -import org.json4s.jackson.JsonMethods._ -import org.json4s.DefaultFormats -import com.amazonaws.auth.BasicAWSCredentials -import com.amazonaws.auth.AWSStaticCredentialsProvider +import vinyldns.core.domain.batch._ +import vinyldns.core.domain.record.{AData, RecordType} +import vinyldns.core.notifier._ +import vinyldns.mysql.MySqlIntegrationSpec class SnsNotifierIntegrationSpec extends MySqlApiIntegrationSpec @@ -93,7 +91,7 @@ class SnsNotifierIntegrationSpec val sqs = AmazonSQSClientBuilder .standard() .withEndpointConfiguration( - new EndpointConfiguration("http://127.0.0.1:19007", "us-east-1") + new EndpointConfiguration("http://127.0.0.1:19003", "us-east-1") ) .withCredentials(credentialsProvider) .build() @@ -105,6 +103,7 @@ class SnsNotifierIntegrationSpec notifier <- new SnsNotifierProvider() .load(NotifierConfig("", snsConfig), userRepository) _ <- notifier.notify(Notification(batchChange)) + _ <- IO { Thread.sleep(100) } messages <- IO { sqs.receiveMessage(queueUrl).getMessages } _ <- IO { sns.deleteTopic(topic) diff --git a/modules/api/src/it/scala/vinyldns/api/route53/Route53ApiIntegrationSpec.scala b/modules/api/src/it/scala/vinyldns/api/route53/Route53ApiIntegrationSpec.scala index 73091d609..11d2d166f 100644 --- a/modules/api/src/it/scala/vinyldns/api/route53/Route53ApiIntegrationSpec.scala +++ b/modules/api/src/it/scala/vinyldns/api/route53/Route53ApiIntegrationSpec.scala @@ -56,7 +56,7 @@ class Route53ApiIntegrationSpec "test", Some("access"), Some("secret"), - "http://127.0.0.1:19009", + "http://127.0.0.1:19003", "us-east-1" ) ) diff --git a/modules/api/src/main/resources/reference.conf b/modules/api/src/main/resources/reference.conf index b90a81a4d..e7ced14b2 100644 --- a/modules/api/src/main/resources/reference.conf +++ b/modules/api/src/main/resources/reference.conf @@ -16,43 +16,41 @@ vinyldns { { class-name = 
"vinyldns.api.backend.dns.DnsBackendProviderLoader" settings = { - legacy = true # set this to true to attempt to load legacy config YAML - backends = [] - - # if not legacy then this... - # legacy = false - # backends = [ - # { - # id = "default" - # zone-connection = { - # name = "vinyldns." - # keyName = "vinyldns." - # key = "nzisn+4G2ldMn0q1CV3vsg==" - # primaryServer = "127.0.0.1:19001" - # } - # transfer-connection = { - # name = "vinyldns." - # keyName = "vinyldns." - # key = "nzisn+4G2ldMn0q1CV3vsg==" - # primaryServer = "127.0.0.1:19001" - # } - # }, - # { - # id = "func-test-backend" - # zone-connection = { - # name = "vinyldns." - # keyName = "vinyldns." - # key = "nzisn+4G2ldMn0q1CV3vsg==" - # primaryServer = "127.0.0.1:19001" - # } - # transfer-connection = { - # name = "vinyldns." - # keyName = "vinyldns." - # key = "nzisn+4G2ldMn0q1CV3vsg==" - # primaryServer = "127.0.0.1:19001" - # } - # } - #] + legacy = false + backends = [ + { + id = "default" + zone-connection = { + name = "vinyldns." + key-name = "vinyldns." + key = "nzisn+4G2ldMn0q1CV3vsg==" + primary-server = "127.0.0.1:19001" + } + transfer-connection = { + name = "vinyldns." + key-name = "vinyldns." + key = "nzisn+4G2ldMn0q1CV3vsg==" + primary-server = "127.0.0.1:19001" + }, + tsig-usage = "always" + }, + { + id = "func-test-backend" + zone-connection = { + name = "vinyldns." + key-name = "vinyldns." + key = "nzisn+4G2ldMn0q1CV3vsg==" + primary-server = "127.0.0.1:19001" + } + transfer-connection = { + name = "vinyldns." + key-name = "vinyldns." + key = "nzisn+4G2ldMn0q1CV3vsg==" + primary-server = "127.0.0.1:19001" + }, + tsig-usage = "always" + } + ] } } ] @@ -76,10 +74,10 @@ vinyldns { # secret-key = "x" # Regional endpoint to make your requests (eg. 'us-west-2', 'us-east-1', etc.). This is the region where your queue is housed. 
- signing-region = "x" + signing-region = "us-east-1" # Endpoint to access queue - service-endpoint = "http://localhost:19007/" + service-endpoint = "http://localhost:19003/" # Queue name. Should be used in conjunction with service endpoint, rather than using a queue url which is subject to change. queue-name = "vinyldns" @@ -141,9 +139,9 @@ vinyldns { class-name = "vinyldns.api.notifier.sns.SnsNotifierProvider" settings { topic-arn = "arn:aws:sns:us-east-1:000000000000:batchChanges" - access-key = "vinyldnsTest" - secret-key = "notNeededForSnsLocal" - service-endpoint = "http://127.0.0.1:19006" + access-key = "test" + secret-key = "test" + service-endpoint = "http://127.0.0.1:19003" signing-region = "us-east-1" } } diff --git a/modules/api/src/test/scala/vinyldns/api/backend/dns/DnsBackendSpec.scala b/modules/api/src/test/scala/vinyldns/api/backend/dns/DnsBackendSpec.scala index ffc72fbe3..d8cc2276e 100644 --- a/modules/api/src/test/scala/vinyldns/api/backend/dns/DnsBackendSpec.scala +++ b/modules/api/src/test/scala/vinyldns/api/backend/dns/DnsBackendSpec.scala @@ -101,7 +101,7 @@ class DnsBackendSpec override def beforeEach(): Unit = { doReturn(mockMessage).when(mockMessage).clone() - doReturn(new Array[DNS.Record](0)).when(mockMessage).getSectionArray(DNS.Section.ADDITIONAL) + doReturn(new java.util.ArrayList[DNS.Record](0)).when(mockMessage).getSection(DNS.Section.ADDITIONAL) doReturn(DNS.Rcode.NOERROR).when(mockMessage).getRcode doReturn(mockMessage).when(mockResolver).send(messageCaptor.capture()) doReturn(DNS.Lookup.SUCCESSFUL).when(mockDnsQuery).result @@ -160,7 +160,7 @@ class DnsBackendSpec val conn = zoneConnection.copy(primaryServer = "dns.comcast.net:19001") val dnsConn = DnsBackend("test", conn, None, new NoOpCrypto()) - val simpleResolver = dnsConn.resolver.asInstanceOf[DNS.SimpleResolver] + val simpleResolver = dnsConn.resolver val address = simpleResolver.getAddress @@ -172,7 +172,7 @@ class DnsBackendSpec val conn = 
zoneConnection.copy(primaryServer = "dns.comcast.net") val dnsConn = DnsBackend("test", conn, None, new NoOpCrypto()) - val simpleResolver = dnsConn.resolver.asInstanceOf[DNS.SimpleResolver] + val simpleResolver = dnsConn.resolver val address = simpleResolver.getAddress @@ -267,14 +267,14 @@ class DnsBackendSpec val sentMessage = messageCaptor.getValue - val dnsRecord = sentMessage.getSectionArray(DNS.Section.UPDATE)(0) + val dnsRecord = sentMessage.getSection(DNS.Section.UPDATE).asScala.head dnsRecord.getName.toString shouldBe "a-record.vinyldns." dnsRecord.getTTL shouldBe testA.ttl dnsRecord.getType shouldBe DNS.Type.A dnsRecord shouldBe a[DNS.ARecord] dnsRecord.asInstanceOf[DNS.ARecord].getAddress.getHostAddress shouldBe "10.1.1.1" - val zoneRRset = sentMessage.getSectionRRsets(DNS.Section.ZONE)(0) + val zoneRRset = sentMessage.getSectionRRsets(DNS.Section.ZONE).asScala.head zoneRRset.getName.toString shouldBe "vinyldns." result shouldBe a[NoError] @@ -286,7 +286,7 @@ class DnsBackendSpec val sentMessage = messageCaptor.getValue - val rrset = sentMessage.getSectionRRsets(DNS.Section.UPDATE)(0) + val rrset = sentMessage.getSectionRRsets(DNS.Section.UPDATE).asScala.head rrset.getName.toString shouldBe "a-record.vinyldns." rrset.getTTL shouldBe testA.ttl rrset.getType shouldBe DNS.Type.A @@ -298,7 +298,7 @@ class DnsBackendSpec records should contain theSameElementsAs expected - val zoneRRset = sentMessage.getSectionRRsets(DNS.Section.ZONE)(0) + val zoneRRset = sentMessage.getSectionRRsets(DNS.Section.ZONE).asScala.head zoneRRset.getName.toString shouldBe "vinyldns." 
result shouldBe a[NoError] @@ -327,20 +327,20 @@ class DnsBackendSpec val sentMessage = messageCaptor.getValue // Update record issues a replace, the first section is an EmptyRecord containing the name and type to replace - val emptyRecord = sentMessage.getSectionArray(DNS.Section.UPDATE)(0) + val emptyRecord = sentMessage.getSection(DNS.Section.UPDATE).asScala.head emptyRecord.getName.toString shouldBe "updated-a-record.vinyldns." emptyRecord.getType shouldBe DNS.Type.A emptyRecord.getDClass shouldBe DNS.DClass.ANY // The second section in the replace is the data that is being passed in, this is different than an add - val dnsRecord = sentMessage.getSectionArray(DNS.Section.UPDATE)(1) + val dnsRecord = sentMessage.getSection(DNS.Section.UPDATE).asScala(1) dnsRecord.getName.toString shouldBe "a-record.vinyldns." dnsRecord.getTTL shouldBe testA.ttl dnsRecord.getType shouldBe DNS.Type.A dnsRecord shouldBe a[DNS.ARecord] dnsRecord.asInstanceOf[DNS.ARecord].getAddress.getHostAddress shouldBe "10.1.1.1" - val zoneRRset = sentMessage.getSectionRRsets(DNS.Section.ZONE)(0) + val zoneRRset = sentMessage.getSectionRRsets(DNS.Section.ZONE).asScala.head zoneRRset.getName.toString shouldBe "vinyldns." result shouldBe a[NoError] @@ -353,20 +353,20 @@ class DnsBackendSpec val sentMessage = messageCaptor.getValue // Update record issues a replace, the first section is an EmptyRecord containing the name and type to replace - val emptyRecord = sentMessage.getSectionArray(DNS.Section.UPDATE)(0) + val emptyRecord = sentMessage.getSection(DNS.Section.UPDATE).asScala.head emptyRecord.getName.toString shouldBe "a-record.vinyldns." 
emptyRecord.getType shouldBe DNS.Type.A emptyRecord.getDClass shouldBe DNS.DClass.ANY // The second section in the replace is the data that is being passed in, this is different than an add - val dnsRecord = sentMessage.getSectionArray(DNS.Section.UPDATE)(1) + val dnsRecord = sentMessage.getSection(DNS.Section.UPDATE).asScala(1) dnsRecord.getName.toString shouldBe "a-record.vinyldns." dnsRecord.getTTL shouldBe 300 dnsRecord.getType shouldBe DNS.Type.A dnsRecord shouldBe a[DNS.ARecord] dnsRecord.asInstanceOf[DNS.ARecord].getAddress.getHostAddress shouldBe "10.1.1.1" - val zoneRRset = sentMessage.getSectionRRsets(DNS.Section.ZONE)(0) + val zoneRRset = sentMessage.getSectionRRsets(DNS.Section.ZONE).asScala.head zoneRRset.getName.toString shouldBe "vinyldns." result shouldBe a[NoError] @@ -378,7 +378,7 @@ class DnsBackendSpec val sentMessage = messageCaptor.getValue - val emptyRecord = sentMessage.getSectionArray(DNS.Section.UPDATE) + val emptyRecord = sentMessage.getSection(DNS.Section.UPDATE) emptyRecord shouldBe empty result shouldBe a[NoError] @@ -393,13 +393,13 @@ class DnsBackendSpec val sentMessage = messageCaptor.getValue // Update record issues a replace, the first section is an EmptyRecord containing the name and type to replace - val emptyRecord = sentMessage.getSectionArray(DNS.Section.UPDATE)(0) + val emptyRecord = sentMessage.getSection(DNS.Section.UPDATE).asScala.head emptyRecord.getName.toString shouldBe "updated-a-record.vinyldns." emptyRecord.getType shouldBe DNS.Type.A emptyRecord.getDClass shouldBe DNS.DClass.ANY // The second section in the replace is the data that is being passed in, this is different than an add - val dnsRecord1 = sentMessage.getSectionArray(DNS.Section.UPDATE)(1) + val dnsRecord1 = sentMessage.getSection(DNS.Section.UPDATE).asScala(1) dnsRecord1.getName.toString shouldBe "a-record.vinyldns." 
dnsRecord1.getTTL shouldBe testA.ttl dnsRecord1.getType shouldBe DNS.Type.A @@ -407,7 +407,7 @@ class DnsBackendSpec val dnsRecord1Data = dnsRecord1.asInstanceOf[DNS.ARecord].getAddress.getHostAddress List("1.1.1.1", "2.2.2.2") should contain(dnsRecord1Data) - val dnsRecord2 = sentMessage.getSectionArray(DNS.Section.UPDATE)(2) + val dnsRecord2 = sentMessage.getSection(DNS.Section.UPDATE).asScala(2) dnsRecord2.getName.toString shouldBe "a-record.vinyldns." dnsRecord2.getTTL shouldBe testA.ttl dnsRecord2.getType shouldBe DNS.Type.A @@ -415,7 +415,7 @@ class DnsBackendSpec val dnsRecord2Data = dnsRecord1.asInstanceOf[DNS.ARecord].getAddress.getHostAddress List("1.1.1.1", "2.2.2.2") should contain(dnsRecord2Data) - val zoneRRset = sentMessage.getSectionRRsets(DNS.Section.ZONE)(0) + val zoneRRset = sentMessage.getSectionRRsets(DNS.Section.ZONE).asScala.head zoneRRset.getName.toString shouldBe "vinyldns." result shouldBe a[NoError] @@ -443,7 +443,7 @@ class DnsBackendSpec val sentMessage = messageCaptor.getValue - val emptyRecord = sentMessage.getSectionArray(DNS.Section.UPDATE) + val emptyRecord = sentMessage.getSection(DNS.Section.UPDATE) emptyRecord shouldBe empty result shouldBe a[NoError] @@ -457,20 +457,20 @@ class DnsBackendSpec val sentMessage = messageCaptor.getValue // A NONE update is sent for each DNS record that is getting deleted - val emptyRecord = sentMessage.getSectionArray(DNS.Section.UPDATE)(0) + val emptyRecord = sentMessage.getSection(DNS.Section.UPDATE).asScala.head emptyRecord.getName.toString shouldBe "a-record.vinyldns." emptyRecord.getType shouldBe DNS.Type.A emptyRecord.getDClass shouldBe DNS.DClass.NONE // The second section in the replace is the data that is being passed in, this is different than an add - val dnsRecord = sentMessage.getSectionArray(DNS.Section.UPDATE)(1) + val dnsRecord = sentMessage.getSection(DNS.Section.UPDATE).asScala(1) dnsRecord.getName.toString shouldBe "a-record.vinyldns." 
dnsRecord.getTTL shouldBe testA.ttl dnsRecord.getType shouldBe DNS.Type.A dnsRecord shouldBe a[DNS.ARecord] dnsRecord.asInstanceOf[DNS.ARecord].getAddress.getHostAddress shouldBe "10.1.1.1" - val zoneRRset = sentMessage.getSectionRRsets(DNS.Section.ZONE)(0) + val zoneRRset = sentMessage.getSectionRRsets(DNS.Section.ZONE).asScala.head zoneRRset.getName.toString shouldBe "vinyldns." result shouldBe a[NoError] @@ -482,7 +482,7 @@ class DnsBackendSpec val sentMessage = messageCaptor.getValue - val emptyRecord = sentMessage.getSectionArray(DNS.Section.UPDATE) + val emptyRecord = sentMessage.getSection(DNS.Section.UPDATE).asScala emptyRecord shouldBe empty result shouldBe a[NoError] @@ -497,14 +497,14 @@ class DnsBackendSpec val sentMessage = messageCaptor.getValue // A NONE update is sent for each DNS record that is getting deleted - val deleteRecord1 = sentMessage.getSectionArray(DNS.Section.UPDATE)(0) + val deleteRecord1 = sentMessage.getSection(DNS.Section.UPDATE).asScala.head deleteRecord1.getName.toString shouldBe "a-record.vinyldns." deleteRecord1.getType shouldBe DNS.Type.A deleteRecord1.getDClass shouldBe DNS.DClass.NONE val deleteRecord1Data = deleteRecord1.asInstanceOf[DNS.ARecord].getAddress.getHostAddress List("4.4.4.4", "3.3.3.3") should contain(deleteRecord1Data) - val deleteRecord2 = sentMessage.getSectionArray(DNS.Section.UPDATE)(1) + val deleteRecord2 = sentMessage.getSection(DNS.Section.UPDATE).asScala(1) deleteRecord2.getName.toString shouldBe "a-record.vinyldns." 
deleteRecord2.getType shouldBe DNS.Type.A deleteRecord2.getDClass shouldBe DNS.DClass.NONE @@ -512,7 +512,7 @@ class DnsBackendSpec List("4.4.4.4", "3.3.3.3") should contain(deleteRecord2Data) // The second section in the replace is the data that is being passed in, this is different than an add - val dnsRecord1 = sentMessage.getSectionArray(DNS.Section.UPDATE)(2) + val dnsRecord1 = sentMessage.getSection(DNS.Section.UPDATE).asScala(2) dnsRecord1.getName.toString shouldBe "a-record.vinyldns." dnsRecord1.getTTL shouldBe testA.ttl dnsRecord1.getType shouldBe DNS.Type.A @@ -520,7 +520,7 @@ class DnsBackendSpec val dnsRecord1Data = dnsRecord1.asInstanceOf[DNS.ARecord].getAddress.getHostAddress List("1.1.1.1", "2.2.2.2") should contain(dnsRecord1Data) - val dnsRecord2 = sentMessage.getSectionArray(DNS.Section.UPDATE)(3) + val dnsRecord2 = sentMessage.getSection(DNS.Section.UPDATE).asScala(3) dnsRecord2.getName.toString shouldBe "a-record.vinyldns." dnsRecord2.getTTL shouldBe testA.ttl dnsRecord2.getType shouldBe DNS.Type.A @@ -528,7 +528,7 @@ class DnsBackendSpec val dnsRecord2Data = dnsRecord1.asInstanceOf[DNS.ARecord].getAddress.getHostAddress List("1.1.1.1", "2.2.2.2") should contain(dnsRecord2Data) - val zoneRRset = sentMessage.getSectionRRsets(DNS.Section.ZONE)(0) + val zoneRRset = sentMessage.getSectionRRsets(DNS.Section.ZONE).asScala.head zoneRRset.getName.toString shouldBe "vinyldns." result shouldBe a[NoError] @@ -556,14 +556,14 @@ class DnsBackendSpec val sentMessage = messageCaptor.getValue - val dnsRecord = sentMessage.getSectionArray(DNS.Section.UPDATE)(0) + val dnsRecord = sentMessage.getSection(DNS.Section.UPDATE).asScala.head dnsRecord.getName.toString shouldBe "a-record.vinyldns." 
dnsRecord.getType shouldBe DNS.Type.A dnsRecord.getTTL shouldBe 0 dnsRecord.getDClass shouldBe DNS.DClass.ANY dnsRecord should not be a[DNS.ARecord] - val zoneRRset = sentMessage.getSectionRRsets(DNS.Section.ZONE)(0) + val zoneRRset = sentMessage.getSectionRRsets(DNS.Section.ZONE).asScala.head zoneRRset.getName.toString shouldBe "vinyldns." result shouldBe a[NoError] @@ -575,14 +575,14 @@ class DnsBackendSpec val sentMessage = messageCaptor.getValue - val dnsRecord1 = sentMessage.getSectionArray(DNS.Section.UPDATE)(0) + val dnsRecord1 = sentMessage.getSection(DNS.Section.UPDATE).asScala.head dnsRecord1.getName.toString shouldBe "a-record.vinyldns." dnsRecord1.getType shouldBe DNS.Type.A dnsRecord1.getTTL shouldBe 0 dnsRecord1.getDClass shouldBe DNS.DClass.ANY dnsRecord1 should not be a[DNS.ARecord] - val zoneRRset = sentMessage.getSectionRRsets(DNS.Section.ZONE)(0) + val zoneRRset = sentMessage.getSectionRRsets(DNS.Section.ZONE).asScala.head zoneRRset.getName.toString shouldBe "vinyldns." 
result shouldBe a[NoError] diff --git a/modules/api/src/test/scala/vinyldns/api/backend/dns/DnsConversionsSpec.scala b/modules/api/src/test/scala/vinyldns/api/backend/dns/DnsConversionsSpec.scala index 52a172e34..5a60c1c6f 100644 --- a/modules/api/src/test/scala/vinyldns/api/backend/dns/DnsConversionsSpec.scala +++ b/modules/api/src/test/scala/vinyldns/api/backend/dns/DnsConversionsSpec.scala @@ -288,7 +288,7 @@ class DnsConversionsSpec override protected def beforeEach(): Unit = { doReturn(mockMessage).when(mockMessage).clone() - doReturn(new Array[DNS.Record](0)).when(mockMessage).getSectionArray(DNS.Section.ADDITIONAL) + doReturn(new java.util.ArrayList[DNS.Record]()).when(mockMessage).getSection(DNS.Section.ADDITIONAL) } "Collapsing multiple records to record sets" should { @@ -572,47 +572,47 @@ class DnsConversionsSpec "Converting to an update message" should { "work for an Add message" in { val dnsMessage = toAddRecordMessage(rrset(testDnsA), testZoneName).right.value - val dnsRecord = dnsMessage.getSectionArray(DNS.Section.UPDATE)(0) + val dnsRecord = dnsMessage.getSection(DNS.Section.UPDATE).asScala.head dnsRecord.getName.toString shouldBe "a-record." dnsRecord.getTTL shouldBe testA.ttl dnsRecord.getType shouldBe DNS.Type.A dnsRecord shouldBe a[DNS.ARecord] dnsRecord.asInstanceOf[DNS.ARecord].getAddress.getHostAddress shouldBe "10.1.1.1" - val zoneRRset = dnsMessage.getSectionRRsets(DNS.Section.ZONE)(0) + val zoneRRset = dnsMessage.getSectionRRsets(DNS.Section.ZONE).asScala.head zoneRRset.getName.toString shouldBe "vinyldns." 
} "work for an Update message" in { val dnsMessage = toUpdateRecordMessage(rrset(testDnsA), rrset(testDnsAReplace), testZoneName).right.value // Update record issues a replace, the first section is an EmptyRecord containing the name and type to replace - val emptyRecord = dnsMessage.getSectionArray(DNS.Section.UPDATE)(0) + val emptyRecord = dnsMessage.getSection(DNS.Section.UPDATE).asScala.head emptyRecord.getName.toString shouldBe "a-record-2." emptyRecord.getType shouldBe DNS.Type.A emptyRecord.getDClass shouldBe DNS.DClass.ANY // The second section in the replace is the data that is being passed in, this is different than an add - val dnsRecord = dnsMessage.getSectionArray(DNS.Section.UPDATE)(1) + val dnsRecord = dnsMessage.getSection(DNS.Section.UPDATE).asScala(1) dnsRecord.getName.toString shouldBe "a-record." dnsRecord.getTTL shouldBe testA.ttl dnsRecord.getType shouldBe DNS.Type.A dnsRecord shouldBe a[DNS.ARecord] dnsRecord.asInstanceOf[DNS.ARecord].getAddress.getHostAddress shouldBe "10.1.1.1" - val zoneRRset = dnsMessage.getSectionRRsets(DNS.Section.ZONE)(0) + val zoneRRset = dnsMessage.getSectionRRsets(DNS.Section.ZONE).asScala.head zoneRRset.getName.toString shouldBe "vinyldns." } "work for a Delete message" in { val dnsMessage = toDeleteRecordMessage(rrset(testDnsA), testZoneName).right.value - val dnsRecord = dnsMessage.getSectionArray(DNS.Section.UPDATE)(0) + val dnsRecord = dnsMessage.getSection(DNS.Section.UPDATE).asScala.head dnsRecord.getName.toString shouldBe "a-record." dnsRecord.getType shouldBe DNS.Type.A dnsRecord.getTTL shouldBe 0 dnsRecord.getDClass shouldBe DNS.DClass.ANY dnsRecord should not be a[DNS.ARecord] - val zoneRRset = dnsMessage.getSectionRRsets(DNS.Section.ZONE)(0) + val zoneRRset = dnsMessage.getSectionRRsets(DNS.Section.ZONE).asScala.head zoneRRset.getName.toString shouldBe "vinyldns." 
} } diff --git a/modules/api/src/universal/bin/wait-for-dependencies.sh b/modules/api/src/universal/bin/wait-for-dependencies.sh deleted file mode 100755 index 3533ce326..000000000 --- a/modules/api/src/universal/bin/wait-for-dependencies.sh +++ /dev/null @@ -1,28 +0,0 @@ -#!/usr/bin/env bash - -# the mysql address, default to a local docker setup -MYSQL_ADDRESS=${MYSQL_ADDRESS:-vinyldns-mysql} -MYSQL_PORT=${MYSQL_PORT:-3306} -echo "Waiting for MYSQL to be ready on $MYSQL_ADDRESS:$MYSQL_PORT" -DATA="" -RETRY=30 -while [ $RETRY -gt 0 ] -do - DATA=$(nc -vzw1 $MYSQL_ADDRESS $MYSQL_PORT) - if [ $? -eq 0 ] - then - break - else - echo "Retrying" >&2 - - let RETRY-=1 - sleep .5 - - if [ $RETRY -eq 0 ] - then - echo "Exceeded retries waiting for MYSQL to be ready on $MYSQL_ADDRESS:$MYSQL_PORT, failing" - return 1 - fi - fi -done - diff --git a/modules/mysql/src/main/scala/vinyldns/mysql/MySqlConnector.scala b/modules/mysql/src/main/scala/vinyldns/mysql/MySqlConnector.scala index 1b63236e5..d0bc286e4 100644 --- a/modules/mysql/src/main/scala/vinyldns/mysql/MySqlConnector.scala +++ b/modules/mysql/src/main/scala/vinyldns/mysql/MySqlConnector.scala @@ -27,7 +27,7 @@ object MySqlConnector { private val logger = LoggerFactory.getLogger("MySqlConnector") - def runDBMigrations(config: MySqlConnectionConfig): IO[Unit] = { + def runDBMigrations(config: MySqlConnectionConfig): IO[Unit] = // We can skip migrations for h2, we'll use the test/ddl.sql for initializing // that for testing if (config.driver.contains("h2")) IO.unit @@ -61,7 +61,6 @@ object MySqlConnector { logger.info("migrations complete") } } - } def getDataSource(settings: MySqlDataSourceSettings): IO[HikariDataSource] = IO { diff --git a/modules/portal/karma.conf.js b/modules/portal/karma.conf.js index f0e1f2342..66f8af984 100644 --- a/modules/portal/karma.conf.js +++ b/modules/portal/karma.conf.js @@ -39,7 +39,7 @@ module.exports = function(config) { // level of logging // possible values: LOG_DISABLE || LOG_ERROR || 
LOG_WARN || LOG_INFO || LOG_DEBUG logLevel: config.LOG_INFO, - + plugins: [ 'karma-jasmine', 'karma-chrome-launcher', @@ -66,7 +66,13 @@ module.exports = function(config) { // - Safari (only Mac) // - PhantomJS // - IE (only Windows) - browsers: ['ChromeHeadless'], + browsers: ['ChromeHeadlessNoSandbox'], + customLaunchers: { + ChromeHeadlessNoSandbox: { + base: 'ChromeHeadless', + flags: ['--no-sandbox'] + } + }, // Continuous Integration mode // if true, it capture browsers, run tests and exit diff --git a/modules/r53/src/it/scala/vinyldns/route53/backend/Route53IntegrationSpec.scala b/modules/r53/src/it/scala/vinyldns/route53/backend/Route53IntegrationSpec.scala index 3dbb75589..dbd5a17e7 100644 --- a/modules/r53/src/it/scala/vinyldns/route53/backend/Route53IntegrationSpec.scala +++ b/modules/r53/src/it/scala/vinyldns/route53/backend/Route53IntegrationSpec.scala @@ -52,7 +52,7 @@ class Route53IntegrationSpec "test", Option("access"), Option("secret"), - "http://127.0.0.1:19009", + "http://127.0.0.1:19003", "us-east-1" ) ) diff --git a/modules/sqs/src/it/resources/application.conf b/modules/sqs/src/it/resources/application.conf index b579eb937..49705a5b0 100644 --- a/modules/sqs/src/it/resources/application.conf +++ b/modules/sqs/src/it/resources/application.conf @@ -4,10 +4,10 @@ sqs { messages-per-poll = 10 max-retries = 100 settings = { - access-key = "x" - secret-key = "x" - signing-region = "x" - service-endpoint = "http://localhost:19007/" + access-key = "test" + secret-key = "test" + signing-region = "us-east-1" + service-endpoint = "http://localhost:19003/" queue-name = "sqs-override-name" } } diff --git a/modules/sqs/src/it/scala/vinyldns/sqs/queue/SqsMessageQueueIntegrationSpec.scala b/modules/sqs/src/it/scala/vinyldns/sqs/queue/SqsMessageQueueIntegrationSpec.scala index fd04d05d4..09585e44d 100644 --- a/modules/sqs/src/it/scala/vinyldns/sqs/queue/SqsMessageQueueIntegrationSpec.scala +++ 
b/modules/sqs/src/it/scala/vinyldns/sqs/queue/SqsMessageQueueIntegrationSpec.scala @@ -147,13 +147,6 @@ class SqsMessageQueueIntegrationSpec result shouldBe empty } - "succeed when attempting to remove item from empty queue" in { - queue - .remove(SqsMessage(MessageId("does-not-exist"), rsAddChange)) - .attempt - .unsafeRunSync() should beRight(()) - } - "succeed when attempting to remove item from queue" in { queue.send(rsAddChange).unsafeRunSync() val result = queue.receive(MessageCount(1).right.value).unsafeRunSync() diff --git a/modules/sqs/src/it/scala/vinyldns/sqs/queue/SqsMessageQueueProviderIntegrationSpec.scala b/modules/sqs/src/it/scala/vinyldns/sqs/queue/SqsMessageQueueProviderIntegrationSpec.scala index 6e3282c0b..496212e8a 100644 --- a/modules/sqs/src/it/scala/vinyldns/sqs/queue/SqsMessageQueueProviderIntegrationSpec.scala +++ b/modules/sqs/src/it/scala/vinyldns/sqs/queue/SqsMessageQueueProviderIntegrationSpec.scala @@ -36,7 +36,7 @@ class SqsMessageQueueProviderIntegrationSpec extends AnyWordSpec with Matchers { | max-retries = 100 | | settings { - | service-endpoint = "http://localhost:19007/" + | service-endpoint = "http://localhost:19003/" | queue-name = "queue-name" | } | """.stripMargin) @@ -59,8 +59,8 @@ class SqsMessageQueueProviderIntegrationSpec extends AnyWordSpec with Matchers { | settings { | access-key = "x" | secret-key = "x" - | signing-region = "x" - | service-endpoint = "http://localhost:19007/" + | signing-region = "us-east-1" + | service-endpoint = "http://localhost:19003/" | queue-name = "new-queue" | } | """.stripMargin) @@ -86,8 +86,8 @@ class SqsMessageQueueProviderIntegrationSpec extends AnyWordSpec with Matchers { | settings { | access-key = "x" | secret-key = "x" - | signing-region = "x" - | service-endpoint = "http://localhost:19007/" + | signing-region = "us-east-1" + | service-endpoint = "http://localhost:19003/" | queue-name = "bad*queue*name" | } | """.stripMargin) @@ -108,8 +108,8 @@ class 
SqsMessageQueueProviderIntegrationSpec extends AnyWordSpec with Matchers { | settings { | access-key = "x" | secret-key = "x" - | signing-region = "x" - | service-endpoint = "http://localhost:19007/" + | signing-region = "us-east-1" + | service-endpoint = "http://localhost:19003/" | queue-name = "queue.fifo" | } | """.stripMargin) @@ -131,8 +131,8 @@ class SqsMessageQueueProviderIntegrationSpec extends AnyWordSpec with Matchers { | settings { | access-key = "x" | secret-key = "x" - | signing-region = "x" - | service-endpoint = "http://localhost:19007/" + | signing-region = "us-east-1" + | service-endpoint = "http://localhost:19003/" | queue-name = "new-queue" | } | """.stripMargin) diff --git a/project/Dependencies.scala b/project/Dependencies.scala index 9e6547319..2a4d1eb18 100644 --- a/project/Dependencies.scala +++ b/project/Dependencies.scala @@ -33,7 +33,7 @@ object Dependencies { "dnsjava" % "dnsjava" % "3.4.2", "org.apache.commons" % "commons-lang3" % "3.4", "org.apache.commons" % "commons-text" % "1.4", - "org.flywaydb" % "flyway-core" % "5.1.4", + "org.flywaydb" % "flyway-core" % "8.0.0", "org.json4s" %% "json4s-ext" % "3.5.3", "org.json4s" %% "json4s-jackson" % "3.5.3", "org.scalikejdbc" %% "scalikejdbc" % scalikejdbcV, @@ -77,7 +77,7 @@ object Dependencies { ) lazy val mysqlDependencies = Seq( - "org.flywaydb" % "flyway-core" % "5.1.4", + "org.flywaydb" % "flyway-core" % "8.0.0", "org.mariadb.jdbc" % "mariadb-java-client" % "2.3.0", "org.scalikejdbc" %% "scalikejdbc" % scalikejdbcV, "org.scalikejdbc" %% "scalikejdbc-config" % scalikejdbcV, diff --git a/project/plugins.sbt b/project/plugins.sbt index f6429eb47..d5804b67b 100644 --- a/project/plugins.sbt +++ b/project/plugins.sbt @@ -8,8 +8,6 @@ addSbtPlugin("io.spray" % "sbt-revolver" % "0.9.1") addSbtPlugin("net.virtual-void" % "sbt-dependency-graph" % "0.9.0") -addSbtPlugin("io.github.davidmweber" % "flyway-sbt" % "5.0.0") - addSbtPlugin("org.wartremover" % "sbt-wartremover" % "2.4.10") 
addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.3.25") diff --git a/test/api/functional/Dockerfile b/test/api/functional/Dockerfile new file mode 100644 index 000000000..5a01aefe1 --- /dev/null +++ b/test/api/functional/Dockerfile @@ -0,0 +1,29 @@ +# Build VinylDNS API if the JAR doesn't already exist +FROM vinyldns/build:base-build as base-build +ARG DOCKERFILE_PATH="test/api/functional" +COPY "${DOCKERFILE_PATH}/vinyldns.*" /opt/vinyldns/ +COPY . /build/ +WORKDIR /build + +## Run the build if we don't already have a vinyldns.jar +RUN if [ ! -f /opt/vinyldns/vinyldns.jar ]; then \ + env SBT_OPTS="-XX:+UseConcMarkSweepGC -Xmx4G -Xms1G" \ + sbt -Dbuild.scalafmtOnCompile=false -Dbuild.lintOnCompile=false ";project api;coverageOff;assembly" \ + && cp modules/api/target/scala-2.12/vinyldns.jar /opt/vinyldns/; \ + fi + +# Build the testing image, copying data from `vinyldns-api` +FROM vinyldns/build:base-test +SHELL ["/bin/bash","-c"] +ARG DOCKERFILE_PATH +COPY --from=base-build /opt/vinyldns /opt/vinyldns + +# Local bind server files +COPY docker/bind9/etc/named.conf.* /etc/bind/ +COPY docker/bind9/zones/ /var/bind/ +RUN named-checkconf + +# Copy over the functional tests +COPY ${DOCKERFILE_PATH}/test /functional_test + +ENTRYPOINT ["/bin/bash", "-c", "/initialize.sh bind localstack vinyldns-api && /functional_test/run.sh \"$@\""] diff --git a/modules/api/functional_test/Dockerfile.dockerignore b/test/api/functional/Dockerfile.dockerignore similarity index 87% rename from modules/api/functional_test/Dockerfile.dockerignore rename to test/api/functional/Dockerfile.dockerignore index b134391d3..e42085f51 100644 --- a/modules/api/functional_test/Dockerfile.dockerignore +++ b/test/api/functional/Dockerfile.dockerignore @@ -1,6 +1,5 @@ -**/.venv_win +**/.venv* **/.virtualenv -**/.venv **/target **/docs **/out diff --git a/test/api/functional/Makefile b/test/api/functional/Makefile new file mode 100644 index 000000000..80c25f552 --- /dev/null +++ 
b/test/api/functional/Makefile @@ -0,0 +1,45 @@ +SHELL=bash +IMAGE_NAME=vinyldns-api-test +ROOT_DIR:=$(shell dirname $(realpath $(lastword $(MAKEFILE_LIST)))) +RELATIVE_ROOT_DIR:=$(shell realpath --relative-to=../../.. $(ROOT_DIR)) +VINYLDNS_JAR_PATH?=modules/api/target/scala-2.12/vinyldns.jar + +# Check that the required version of make is being used +REQ_MAKE_VER:=3.82 +ifneq ($(REQ_MAKE_VER),$(firstword $(sort $(MAKE_VERSION) $(REQ_MAKE_VER)))) + $(error The version of MAKE $(REQ_MAKE_VER) or higher is required; you are running $(MAKE_VERSION)) +endif + +# Extract arguments for `make run` +EXTRACT_ARGS=true +ifeq (run,$(firstword $(MAKECMDGOALS))) + EXTRACT_ARGS=true +endif +ifeq ($(EXTRACT_ARGS),true) + # use the rest as arguments for "run" + RUN_ARGS := $(wordlist 2,$(words $(MAKECMDGOALS)),$(MAKECMDGOALS)) + # ...and turn them into do-nothing targets + $(eval $(RUN_ARGS):;@:) +endif + + +.ONESHELL: + +.PHONY: all build run run-local + +all: build run + +build: + @set -euo pipefail + trap 'if [ -f "$(ROOT_DIR)/vinyldns.jar" ]; then rm $(ROOT_DIR)/vinyldns.jar; fi' EXIT + cd ../../.. + if [ -f modules/api/target/scala-2.12/vinyldns.jar ]; then cp modules/api/target/scala-2.12/vinyldns.jar $(ROOT_DIR)/vinyldns.jar; fi + docker build -t $(IMAGE_NAME) $(DOCKER_PARAMS)--build-arg DOCKERFILE_PATH="$(RELATIVE_ROOT_DIR)" -f "$(ROOT_DIR)/Dockerfile" . 
+
+run:
+	@set -euo pipefail
+	docker run -it --rm $(DOCKER_PARAMS) -p 9000:9000 -p 19003:19003 -p 19002:19002 -p 19001:19001/tcp -p 19001:19001/udp $(IMAGE_NAME) -- $(RUN_ARGS)
+
+run-local:
+	@set -euo pipefail
+	docker run -it --rm $(DOCKER_PARAMS) -p 9000:9000 -p 19003:19003 -p 19002:19002 -p 19001:19001/tcp -p 19001:19001/udp -v "$$(pwd)/test:/functional_test" $(IMAGE_NAME) -- $(RUN_ARGS)
diff --git a/modules/api/functional_test/.gitignore b/test/api/functional/test/.gitignore
old mode 100755
new mode 100644
similarity index 100%
rename from modules/api/functional_test/.gitignore
rename to test/api/functional/test/.gitignore
diff --git a/modules/api/functional_test/__init__.py b/test/api/functional/test/__init__.py
similarity index 100%
rename from modules/api/functional_test/__init__.py
rename to test/api/functional/test/__init__.py
diff --git a/modules/api/functional_test/aws_request_signer.py b/test/api/functional/test/aws_request_signer.py
similarity index 100%
rename from modules/api/functional_test/aws_request_signer.py
rename to test/api/functional/test/aws_request_signer.py
diff --git a/modules/api/functional_test/conftest.py b/test/api/functional/test/conftest.py
similarity index 98%
rename from modules/api/functional_test/conftest.py
rename to test/api/functional/test/conftest.py
index 3dfeb86ca..22f7337d1 100644
--- a/modules/api/functional_test/conftest.py
+++ b/test/api/functional/test/conftest.py
@@ -29,7 +29,7 @@ def pytest_addoption(parser: _pytest.config.argparsing.Parser) -> None:
     Adds additional options that we can parse when we run the tests, stores them in the parser / py.test context
     """
     parser.addoption("--url", dest="url", action="store", default="http://localhost:9000", help="URL for application to root")
-    parser.addoption("--dns-ip", dest="dns_ip", action="store", default="127.0.0.1", help="The ip address for the dns name server to update")
+    parser.addoption("--dns-ip", dest="dns_ip", action="store", default="127.0.0.1:19001", help="The ip address for the dns name server to update")
     parser.addoption("--resolver-ip", dest="resolver_ip", action="store", help="The ip address for the dns server to use for the tests during resolution. This is usually the same as `--dns-ip`")
     parser.addoption("--dns-zone", dest="dns_zone", action="store", default="vinyldns.", help="The zone name that will be used for testing")
     parser.addoption("--dns-key-name", dest="dns_key_name", action="store", default="vinyldns.", help="The name of the key used to sign updates for the zone")
diff --git a/modules/api/functional_test/pytest.ini b/test/api/functional/test/pytest.ini
similarity index 100%
rename from modules/api/functional_test/pytest.ini
rename to test/api/functional/test/pytest.ini
diff --git a/modules/api/functional_test/pytest.sh b/test/api/functional/test/pytest.sh
old mode 100755
new mode 100644
similarity index 100%
rename from modules/api/functional_test/pytest.sh
rename to test/api/functional/test/pytest.sh
diff --git a/modules/api/functional_test/requirements.txt b/test/api/functional/test/requirements.txt
similarity index 100%
rename from modules/api/functional_test/requirements.txt
rename to test/api/functional/test/requirements.txt
diff --git a/modules/api/functional_test/run.sh b/test/api/functional/test/run.sh
old mode 100755
new mode 100644
similarity index 57%
rename from modules/api/functional_test/run.sh
rename to test/api/functional/test/run.sh
index c47998611..7d48ea51f
--- a/modules/api/functional_test/run.sh
+++ b/test/api/functional/test/run.sh
@@ -10,4 +10,9 @@ if [ "$1" == "--update" ]; then
 fi

 cd "${ROOT_DIR}"
-"./pytest.sh" "${UPDATE_DEPS}" -n4 --suppress-no-test-exit-code -v live_tests "$@"
+if [ "$1" == "--interactive" ]; then
+  shift
+  bash
+else
+  "./pytest.sh" "${UPDATE_DEPS}" -n4 --suppress-no-test-exit-code -v tests "$@"
+fi
diff --git a/modules/api/functional_test/live_tests/authentication_test.py b/test/api/functional/test/tests/authentication_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/authentication_test.py
rename to test/api/functional/test/tests/authentication_test.py
diff --git a/modules/api/functional_test/live_tests/batch/approve_batch_change_test.py b/test/api/functional/test/tests/batch/approve_batch_change_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/batch/approve_batch_change_test.py
rename to test/api/functional/test/tests/batch/approve_batch_change_test.py
diff --git a/modules/api/functional_test/live_tests/batch/cancel_batch_change_test.py b/test/api/functional/test/tests/batch/cancel_batch_change_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/batch/cancel_batch_change_test.py
rename to test/api/functional/test/tests/batch/cancel_batch_change_test.py
diff --git a/modules/api/functional_test/live_tests/batch/create_batch_change_test.py b/test/api/functional/test/tests/batch/create_batch_change_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/batch/create_batch_change_test.py
rename to test/api/functional/test/tests/batch/create_batch_change_test.py
diff --git a/modules/api/functional_test/live_tests/batch/get_batch_change_test.py b/test/api/functional/test/tests/batch/get_batch_change_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/batch/get_batch_change_test.py
rename to test/api/functional/test/tests/batch/get_batch_change_test.py
diff --git a/modules/api/functional_test/live_tests/batch/list_batch_change_summaries_test.py b/test/api/functional/test/tests/batch/list_batch_change_summaries_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/batch/list_batch_change_summaries_test.py
rename to test/api/functional/test/tests/batch/list_batch_change_summaries_test.py
diff --git a/modules/api/functional_test/live_tests/batch/reject_batch_change_test.py b/test/api/functional/test/tests/batch/reject_batch_change_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/batch/reject_batch_change_test.py
rename to test/api/functional/test/tests/batch/reject_batch_change_test.py
diff --git a/modules/api/functional_test/live_tests/conftest.py b/test/api/functional/test/tests/conftest.py
similarity index 100%
rename from modules/api/functional_test/live_tests/conftest.py
rename to test/api/functional/test/tests/conftest.py
diff --git a/modules/api/functional_test/live_tests/internal/color_test.py b/test/api/functional/test/tests/internal/color_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/internal/color_test.py
rename to test/api/functional/test/tests/internal/color_test.py
diff --git a/modules/api/functional_test/live_tests/internal/health_test.py b/test/api/functional/test/tests/internal/health_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/internal/health_test.py
rename to test/api/functional/test/tests/internal/health_test.py
diff --git a/modules/api/functional_test/live_tests/internal/ping_test.py b/test/api/functional/test/tests/internal/ping_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/internal/ping_test.py
rename to test/api/functional/test/tests/internal/ping_test.py
diff --git a/modules/api/functional_test/live_tests/internal/status_test.py b/test/api/functional/test/tests/internal/status_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/internal/status_test.py
rename to test/api/functional/test/tests/internal/status_test.py
diff --git a/modules/api/functional_test/live_tests/list_batch_summaries_test_context.py b/test/api/functional/test/tests/list_batch_summaries_test_context.py
similarity index 100%
rename from modules/api/functional_test/live_tests/list_batch_summaries_test_context.py
rename to test/api/functional/test/tests/list_batch_summaries_test_context.py
diff --git a/modules/api/functional_test/live_tests/list_groups_test_context.py b/test/api/functional/test/tests/list_groups_test_context.py
similarity index 100%
rename from modules/api/functional_test/live_tests/list_groups_test_context.py
rename to test/api/functional/test/tests/list_groups_test_context.py
diff --git a/modules/api/functional_test/live_tests/list_recordsets_test_context.py b/test/api/functional/test/tests/list_recordsets_test_context.py
similarity index 100%
rename from modules/api/functional_test/live_tests/list_recordsets_test_context.py
rename to test/api/functional/test/tests/list_recordsets_test_context.py
diff --git a/modules/api/functional_test/live_tests/list_zones_test_context.py b/test/api/functional/test/tests/list_zones_test_context.py
similarity index 100%
rename from modules/api/functional_test/live_tests/list_zones_test_context.py
rename to test/api/functional/test/tests/list_zones_test_context.py
diff --git a/modules/api/functional_test/live_tests/membership/create_group_test.py b/test/api/functional/test/tests/membership/create_group_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/membership/create_group_test.py
rename to test/api/functional/test/tests/membership/create_group_test.py
diff --git a/modules/api/functional_test/live_tests/membership/delete_group_test.py b/test/api/functional/test/tests/membership/delete_group_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/membership/delete_group_test.py
rename to test/api/functional/test/tests/membership/delete_group_test.py
diff --git a/modules/api/functional_test/live_tests/membership/get_group_changes_test.py b/test/api/functional/test/tests/membership/get_group_changes_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/membership/get_group_changes_test.py
rename to test/api/functional/test/tests/membership/get_group_changes_test.py
diff --git a/modules/api/functional_test/live_tests/membership/get_group_test.py b/test/api/functional/test/tests/membership/get_group_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/membership/get_group_test.py
rename to test/api/functional/test/tests/membership/get_group_test.py
diff --git a/modules/api/functional_test/live_tests/membership/list_group_admins_test.py b/test/api/functional/test/tests/membership/list_group_admins_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/membership/list_group_admins_test.py
rename to test/api/functional/test/tests/membership/list_group_admins_test.py
diff --git a/modules/api/functional_test/live_tests/membership/list_group_members_test.py b/test/api/functional/test/tests/membership/list_group_members_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/membership/list_group_members_test.py
rename to test/api/functional/test/tests/membership/list_group_members_test.py
diff --git a/modules/api/functional_test/live_tests/membership/list_my_groups_test.py b/test/api/functional/test/tests/membership/list_my_groups_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/membership/list_my_groups_test.py
rename to test/api/functional/test/tests/membership/list_my_groups_test.py
diff --git a/modules/api/functional_test/live_tests/membership/update_group_test.py b/test/api/functional/test/tests/membership/update_group_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/membership/update_group_test.py
rename to test/api/functional/test/tests/membership/update_group_test.py
diff --git a/modules/api/functional_test/live_tests/production_verify_test.py b/test/api/functional/test/tests/production_verify_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/production_verify_test.py
rename to test/api/functional/test/tests/production_verify_test.py
diff --git a/modules/api/functional_test/live_tests/recordsets/create_recordset_test.py b/test/api/functional/test/tests/recordsets/create_recordset_test.py
similarity index 99%
rename from modules/api/functional_test/live_tests/recordsets/create_recordset_test.py
rename to test/api/functional/test/tests/recordsets/create_recordset_test.py
index cfe5b169f..a25d50f10 100644
--- a/modules/api/functional_test/live_tests/recordsets/create_recordset_test.py
+++ b/test/api/functional/test/tests/recordsets/create_recordset_test.py
@@ -1,6 +1,6 @@
 import pytest

-from live_tests.test_data import TestData
+from tests.test_data import TestData
 from utils import *


diff --git a/modules/api/functional_test/live_tests/recordsets/delete_recordset_test.py b/test/api/functional/test/tests/recordsets/delete_recordset_test.py
similarity index 99%
rename from modules/api/functional_test/live_tests/recordsets/delete_recordset_test.py
rename to test/api/functional/test/tests/recordsets/delete_recordset_test.py
index 135b26abb..a04322826 100644
--- a/modules/api/functional_test/live_tests/recordsets/delete_recordset_test.py
+++ b/test/api/functional/test/tests/recordsets/delete_recordset_test.py
@@ -1,6 +1,6 @@
 import pytest

-from live_tests.test_data import TestData
+from tests.test_data import TestData
 from utils import *


diff --git a/modules/api/functional_test/live_tests/recordsets/get_recordset_test.py b/test/api/functional/test/tests/recordsets/get_recordset_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/recordsets/get_recordset_test.py
rename to test/api/functional/test/tests/recordsets/get_recordset_test.py
diff --git a/modules/api/functional_test/live_tests/recordsets/list_recordset_changes_test.py b/test/api/functional/test/tests/recordsets/list_recordset_changes_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/recordsets/list_recordset_changes_test.py
rename to test/api/functional/test/tests/recordsets/list_recordset_changes_test.py
diff --git a/modules/api/functional_test/live_tests/recordsets/list_recordsets_test.py b/test/api/functional/test/tests/recordsets/list_recordsets_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/recordsets/list_recordsets_test.py
rename to test/api/functional/test/tests/recordsets/list_recordsets_test.py
diff --git a/modules/api/functional_test/live_tests/recordsets/update_recordset_test.py b/test/api/functional/test/tests/recordsets/update_recordset_test.py
similarity index 99%
rename from modules/api/functional_test/live_tests/recordsets/update_recordset_test.py
rename to test/api/functional/test/tests/recordsets/update_recordset_test.py
index 701dec6fa..db7e51b90 100644
--- a/modules/api/functional_test/live_tests/recordsets/update_recordset_test.py
+++ b/test/api/functional/test/tests/recordsets/update_recordset_test.py
@@ -3,7 +3,7 @@ from urllib.parse import urljoin

 import pytest

-from live_tests.test_data import TestData
+from tests.test_data import TestData
 from utils import *

diff --git a/modules/api/functional_test/live_tests/shared_zone_test_context.py b/test/api/functional/test/tests/shared_zone_test_context.py
similarity index 98%
rename from modules/api/functional_test/live_tests/shared_zone_test_context.py
rename to test/api/functional/test/tests/shared_zone_test_context.py
index f8359e7e4..8d919cb23 100644
--- a/modules/api/functional_test/live_tests/shared_zone_test_context.py
+++ b/test/api/functional/test/tests/shared_zone_test_context.py
@@ -3,11 +3,11 @@ import inspect
 import logging
 from typing import MutableMapping, Mapping

-from live_tests.list_batch_summaries_test_context import ListBatchChangeSummariesTestContext
-from live_tests.list_groups_test_context import ListGroupsTestContext
-from live_tests.list_recordsets_test_context import ListRecordSetsTestContext
-from live_tests.list_zones_test_context import ListZonesTestContext
-from live_tests.test_data import TestData
+from tests.list_batch_summaries_test_context import ListBatchChangeSummariesTestContext
+from tests.list_groups_test_context import ListGroupsTestContext
+from tests.list_recordsets_test_context import ListRecordSetsTestContext
+from tests.list_zones_test_context import ListZonesTestContext
+from tests.test_data import TestData
 from utils import *
 from vinyldns_python import VinylDNSClient
@@ -64,6 +64,7 @@ class SharedZoneTestContext(object):
         self.ip4_classless_prefix = None
         self.ip6_prefix = None

+
     def setup(self):
         if self.setup_started:
             # Safeguard against reentrance
@@ -588,4 +589,4 @@ class SharedZoneTestContext(object):
             success = group in client.list_all_my_groups(status=200)
             time.sleep(.05)
             retries -= 1
-        assert_that(success, is_(True))
\ No newline at end of file
+        assert_that(success, is_(True))
diff --git a/modules/api/functional_test/live_tests/test_data.py b/test/api/functional/test/tests/test_data.py
similarity index 100%
rename from modules/api/functional_test/live_tests/test_data.py
rename to test/api/functional/test/tests/test_data.py
diff --git a/modules/api/functional_test/live_tests/zones/create_zone_test.py b/test/api/functional/test/tests/zones/create_zone_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/zones/create_zone_test.py
rename to test/api/functional/test/tests/zones/create_zone_test.py
diff --git a/modules/api/functional_test/live_tests/zones/delete_zone_test.py b/test/api/functional/test/tests/zones/delete_zone_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/zones/delete_zone_test.py
rename to test/api/functional/test/tests/zones/delete_zone_test.py
diff --git a/modules/api/functional_test/live_tests/zones/get_zone_test.py b/test/api/functional/test/tests/zones/get_zone_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/zones/get_zone_test.py
rename to test/api/functional/test/tests/zones/get_zone_test.py
diff --git a/modules/api/functional_test/live_tests/zones/list_zone_changes_test.py b/test/api/functional/test/tests/zones/list_zone_changes_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/zones/list_zone_changes_test.py
rename to test/api/functional/test/tests/zones/list_zone_changes_test.py
diff --git a/modules/api/functional_test/live_tests/zones/list_zones_test.py b/test/api/functional/test/tests/zones/list_zones_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/zones/list_zones_test.py
rename to test/api/functional/test/tests/zones/list_zones_test.py
diff --git a/modules/api/functional_test/live_tests/zones/sync_zone_test.py b/test/api/functional/test/tests/zones/sync_zone_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/zones/sync_zone_test.py
rename to test/api/functional/test/tests/zones/sync_zone_test.py
diff --git a/modules/api/functional_test/live_tests/zones/update_zone_test.py b/test/api/functional/test/tests/zones/update_zone_test.py
similarity index 100%
rename from modules/api/functional_test/live_tests/zones/update_zone_test.py
rename to test/api/functional/test/tests/zones/update_zone_test.py
diff --git a/modules/api/functional_test/utils.py b/test/api/functional/test/utils.py
similarity index 100%
rename from modules/api/functional_test/utils.py
rename to test/api/functional/test/utils.py
diff --git a/modules/api/functional_test/vinyldns_context.py b/test/api/functional/test/vinyldns_context.py
similarity index 100%
rename from modules/api/functional_test/vinyldns_context.py
rename to test/api/functional/test/vinyldns_context.py
diff --git a/modules/api/functional_test/vinyldns_python.py b/test/api/functional/test/vinyldns_python.py
similarity index 100%
rename from modules/api/functional_test/vinyldns_python.py
rename to test/api/functional/test/vinyldns_python.py
diff --git a/modules/api/functional_test/docker.conf b/test/api/functional/vinyldns.conf
similarity index 96%
rename from modules/api/functional_test/docker.conf
rename to test/api/functional/vinyldns.conf
index 1a570b250..a47ad700b 100644
--- a/modules/api/functional_test/docker.conf
+++ b/test/api/functional/vinyldns.conf
@@ -24,7 +24,7 @@ vinyldns {
           key-name = ${?DEFAULT_DNS_KEY_NAME}
           key = "nzisn+4G2ldMn0q1CV3vsg=="
           key = ${?DEFAULT_DNS_KEY_SECRET}
-          primary-server = "127.0.0.1"
+          primary-server = "127.0.0.1:19001"
           primary-server = ${?DEFAULT_DNS_ADDRESS}
         }
         transfer-connection = {
@@ -33,7 +33,7 @@ vinyldns {
           key-name = ${?DEFAULT_DNS_KEY_NAME}
           key = "nzisn+4G2ldMn0q1CV3vsg=="
           key = ${?DEFAULT_DNS_KEY_SECRET}
-          primary-server = "127.0.0.1"
+          primary-server = "127.0.0.1:19001"
           primary-server = ${?DEFAULT_DNS_ADDRESS}
         },
         tsig-usage = "always"
@@ -46,7 +46,7 @@ vinyldns {
           key-name = ${?DEFAULT_DNS_KEY_NAME}
           key = "nzisn+4G2ldMn0q1CV3vsg=="
           key = ${?DEFAULT_DNS_KEY_SECRET}
-          primary-server = "127.0.0.1"
+          primary-server = "127.0.0.1:19001"
           primary-server = ${?DEFAULT_DNS_ADDRESS}
         }
         transfer-connection = {
@@ -55,7 +55,7 @@ vinyldns {
           key-name = ${?DEFAULT_DNS_KEY_NAME}
           key = "nzisn+4G2ldMn0q1CV3vsg=="
           key = ${?DEFAULT_DNS_KEY_SECRET}
-          primary-server = "127.0.0.1"
+          primary-server = "127.0.0.1:19001"
           primary-server = ${?DEFAULT_DNS_ADDRESS}
         },
         tsig-usage = "always"
@@ -84,7 +84,7 @@ vinyldns {
       signing-region = ${?SQS_REGION}

       # Endpoint to access queue
-      service-endpoint = "http://localhost:4566/"
+      service-endpoint = "http://localhost:19003/"
       service-endpoint = ${?SQS_ENDPOINT}

       # Queue name. Should be used in conjunction with service endpoint, rather than using a queue url which is subject to change.
diff --git a/test/api/integration/Dockerfile b/test/api/integration/Dockerfile
new file mode 100644
index 000000000..aa0080dda
--- /dev/null
+++ b/test/api/integration/Dockerfile
@@ -0,0 +1,28 @@
+# Build VinylDNS API if the JAR doesn't already exist
+FROM vinyldns/build:base-build as base-build
+ARG DOCKERFILE_PATH="test/api/integration"
+COPY "${DOCKERFILE_PATH}/vinyldns.*" /opt/vinyldns/
+COPY . /build/
+WORKDIR /build
+
+## Run the build if we don't already have a vinyldns.jar
+RUN if [ ! -f /opt/vinyldns/vinyldns.jar ]; then \
+      env SBT_OPTS="-XX:+UseConcMarkSweepGC -Xmx4G -Xms1G" \
+      sbt -Dbuild.scalafmtOnCompile=false -Dbuild.lintOnCompile=false ";project api;coverageOff;assembly" \
+      && cp modules/api/target/scala-2.12/vinyldns.jar /opt/vinyldns/; \
+    fi
+
+# Build the testing image, copying data from `vinyldns-api`
+FROM vinyldns/build:base-test-integration
+SHELL ["/bin/bash","-c"]
+ARG DOCKERFILE_PATH
+COPY --from=base-build /opt/vinyldns /opt/vinyldns
+
+# Copy the project contents
+COPY . /build/
+WORKDIR /build
+
+# Local bind server files
+COPY docker/bind9/etc/named.conf.* /etc/bind/
+COPY docker/bind9/zones/ /var/bind/
+RUN named-checkconf
diff --git a/test/api/integration/Dockerfile.dockerignore b/test/api/integration/Dockerfile.dockerignore
new file mode 100644
index 000000000..e42085f51
--- /dev/null
+++ b/test/api/integration/Dockerfile.dockerignore
@@ -0,0 +1,15 @@
+**/.venv*
+**/.virtualenv
+**/target
+**/docs
+**/out
+**/.log
+**/.idea/
+**/.bsp
+**/*cache*
+**/*.png
+**/.git
+**/Dockerfile
+**/*.dockerignore
+**/.github
+**/_template
diff --git a/test/api/integration/Makefile b/test/api/integration/Makefile
new file mode 100644
index 000000000..8817cf335
--- /dev/null
+++ b/test/api/integration/Makefile
@@ -0,0 +1,51 @@
+SHELL=bash
+IMAGE_NAME=vinyldns-integration
+ROOT_DIR:=$(shell dirname $(realpath $(lastword $(MAKEFILE_LIST))))
+RELATIVE_ROOT_DIR:=$(shell realpath --relative-to=../../.. $(ROOT_DIR))
+VINYLDNS_JAR_PATH?=modules/api/target/scala-2.12/vinyldns.jar
+
+# Check that the required version of make is being used
+REQ_MAKE_VER:=3.82
+ifneq ($(REQ_MAKE_VER),$(firstword $(sort $(MAKE_VERSION) $(REQ_MAKE_VER))))
+  $(error The version of MAKE $(REQ_MAKE_VER) or higher is required; you are running $(MAKE_VERSION))
+endif
+
+# Extract arguments for `make run`
+EXTRACT_ARGS=true
+ifeq (run,$(firstword $(MAKECMDGOALS)))
+  EXTRACT_ARGS=true
+endif
+ifeq ($(EXTRACT_ARGS),true)
+  # use the rest as arguments for "run"
+  RUN_ARGS ?= $(wordlist 2,$(words $(MAKECMDGOALS)),$(MAKECMDGOALS))
+endif
+
+%:
+	@:
+
+.ONESHELL:
+
+.PHONY: all build run run-local
+
+all: build run
+
+build:
+	@set -euo pipefail
+	trap 'if [ -f "$(ROOT_DIR)/vinyldns.jar" ]; then rm $(ROOT_DIR)/vinyldns.jar; fi' EXIT
+	cd ../../..
+	if [ -f modules/api/target/scala-2.12/vinyldns.jar ]; then cp modules/api/target/scala-2.12/vinyldns.jar $(ROOT_DIR)/vinyldns.jar; fi
+	docker build -t $(IMAGE_NAME) --build-arg DOCKERFILE_PATH="$(RELATIVE_ROOT_DIR)" -f "$(ROOT_DIR)/Dockerfile" .
+
+run:
+	@set -euo pipefail
+	docker run -it --rm $(DOCKER_PARAMS) -p 9000:9000 -p 19003:19003 -p 19002:19002 -p 19001:19001/tcp -p 19001:19001/udp $(IMAGE_NAME) -- $(RUN_ARGS)
+
+run-bg:
+	@set -euo pipefail
+	docker stop vinyldns-integration &> /dev/null || true
+	docker rm vinyldns-integration &> /dev/null || true
+	docker run -td --name vinyldns-integration --rm $(DOCKER_PARAMS) -p 9000:9000 -p 19003:19003 -p 19002:19002 -p 19001:19001/tcp -p 19001:19001/udp $(IMAGE_NAME) -- /bin/bash
+
+run-local:
+	@set -euo pipefail
+	docker run -it --rm $(DOCKER_PARAMS) -p 9000:9000 -p 19003:19003 -p 19002:19002 -p 19001:19001/tcp -p 19001:19001/udp -v "$(ROOT_DIR)/../../..:/build" $(IMAGE_NAME) -- $(RUN_ARGS)
diff --git a/test/api/integration/vinyldns.conf b/test/api/integration/vinyldns.conf
new file mode 100644
index 000000000..a47ad700b
--- /dev/null
+++ b/test/api/integration/vinyldns.conf
@@ -0,0 +1,302 @@
+################################################################################################################
+# This configuration is only used by docker and the build process
+################################################################################################################
+vinyldns {
+
+  # configured backend providers
+  backend {
+    # Use "default" when dns backend legacy = true
+    # otherwise, use the id of one of the connections in any of your backends
+    default-backend-id = "default"
+
+    # this is where we can save additional backends
+    backend-providers = [
+      {
+        class-name = "vinyldns.api.backend.dns.DnsBackendProviderLoader"
+        settings = {
+          legacy = false
+          backends = [
+            {
+              id = "default"
+              zone-connection = {
+                name = "vinyldns."
+                key-name = "vinyldns."
+                key-name = ${?DEFAULT_DNS_KEY_NAME}
+                key = "nzisn+4G2ldMn0q1CV3vsg=="
+                key = ${?DEFAULT_DNS_KEY_SECRET}
+                primary-server = "127.0.0.1:19001"
+                primary-server = ${?DEFAULT_DNS_ADDRESS}
+              }
+              transfer-connection = {
+                name = "vinyldns."
+                key-name = "vinyldns."
+                key-name = ${?DEFAULT_DNS_KEY_NAME}
+                key = "nzisn+4G2ldMn0q1CV3vsg=="
+                key = ${?DEFAULT_DNS_KEY_SECRET}
+                primary-server = "127.0.0.1:19001"
+                primary-server = ${?DEFAULT_DNS_ADDRESS}
+              },
+              tsig-usage = "always"
+            },
+            {
+              id = "func-test-backend"
+              zone-connection = {
+                name = "vinyldns."
+                key-name = "vinyldns."
+                key-name = ${?DEFAULT_DNS_KEY_NAME}
+                key = "nzisn+4G2ldMn0q1CV3vsg=="
+                key = ${?DEFAULT_DNS_KEY_SECRET}
+                primary-server = "127.0.0.1:19001"
+                primary-server = ${?DEFAULT_DNS_ADDRESS}
+              }
+              transfer-connection = {
+                name = "vinyldns."
+                key-name = "vinyldns."
+                key-name = ${?DEFAULT_DNS_KEY_NAME}
+                key = "nzisn+4G2ldMn0q1CV3vsg=="
+                key = ${?DEFAULT_DNS_KEY_SECRET}
+                primary-server = "127.0.0.1:19001"
+                primary-server = ${?DEFAULT_DNS_ADDRESS}
+              },
+              tsig-usage = "always"
+            }
+          ]
+        }
+      }
+    ]
+  }
+
+  queue {
+    class-name = "vinyldns.sqs.queue.SqsMessageQueueProvider"
+
+    messages-per-poll = 10
+    polling-interval = 250.millis
+
+    settings {
+      # AWS access key and secret.
+      access-key = "test"
+      access-key = ${?AWS_ACCESS_KEY}
+      secret-key = "test"
+      secret-key = ${?AWS_SECRET_ACCESS_KEY}
+
+      # Regional endpoint to make your requests (eg. 'us-west-2', 'us-east-1', etc.). This is the region where your queue is housed.
+      signing-region = "us-east-1"
+      signing-region = ${?SQS_REGION}
+
+      # Endpoint to access queue
+      service-endpoint = "http://localhost:19003/"
+      service-endpoint = ${?SQS_ENDPOINT}
+
+      # Queue name. Should be used in conjunction with service endpoint, rather than using a queue url which is subject to change.
+      queue-name = "vinyldns"
+      queue-name = ${?SQS_QUEUE_NAME}
+    }
+  }
+
+  rest {
+    host = "0.0.0.0"
+    port = 9000
+  }
+
+  sync-delay = 10000
+
+  approved-name-servers = [
+    "172.17.42.1.",
+    "ns1.parent.com."
+    "ns1.parent.com1."
+    "ns1.parent.com2."
+    "ns1.parent.com3."
+    "ns1.parent.com4."
+ ] + + crypto { + type = "vinyldns.core.crypto.NoOpCrypto" + } + + data-stores = ["mysql"] + + mysql { + settings { + # JDBC Settings, these are all values in scalikejdbc-config, not our own + # these must be overridden to use MYSQL for production use + # assumes a docker or mysql instance running locally + name = "vinyldns" + driver = "org.h2.Driver" + driver = ${?JDBC_DRIVER} + migration-url = "jdbc:h2:mem:vinyldns;MODE=MYSQL;DB_CLOSE_DELAY=-1;DATABASE_TO_LOWER=TRUE;IGNORECASE=TRUE;INIT=RUNSCRIPT FROM 'classpath:test/ddl.sql'" + migration-url = ${?JDBC_MIGRATION_URL} + url = "jdbc:h2:mem:vinyldns;MODE=MYSQL;DB_CLOSE_DELAY=-1;DATABASE_TO_LOWER=TRUE;IGNORECASE=TRUE;INIT=RUNSCRIPT FROM 'classpath:test/ddl.sql'" + url = ${?JDBC_URL} + user = "sa" + user = ${?JDBC_USER} + password = "" + password = ${?JDBC_PASSWORD} + # see https://github.com/brettwooldridge/HikariCP + connection-timeout-millis = 1000 + idle-timeout = 10000 + max-lifetime = 600000 + maximum-pool-size = 20 + minimum-idle = 20 + register-mbeans = true + } + # Repositories that use this data store are listed here + repositories { + zone { + # no additional settings for now + } + batch-change { + # no additional settings for now + } + user { + + } + record-set { + + } + group { + + } + membership { + + } + group-change { + + } + zone-change { + + } + record-change { + + } + } + } + + backends = [] + + batch-change-limit = 1000 + + # FQDNs / IPs that cannot be modified via VinylDNS + # regex-list used for all record types except PTR + # ip-list used exclusively for PTR records + high-value-domains = { + regex-list = [ + "high-value-domain.*" # for testing + ] + ip-list = [ + # using reverse zones in the vinyldns/bind9 docker image for testing + "192.0.2.252", + "192.0.2.253", + "fd69:27cc:fe91:0:0:0:0:ffff", + "fd69:27cc:fe91:0:0:0:ffff:0" + ] + } + + # FQDNs / IPs / zone names that require manual review upon submission in batch change interface + # domain-list used for all record types except PTR + # 
ip-list used exclusively for PTR records + manual-review-domains = { + domain-list = [ + "needs-review.*" + ] + ip-list = [ + "192.0.1.254", + "192.0.1.255", + "192.0.2.254", + "192.0.2.255", + "192.0.3.254", + "192.0.3.255", + "192.0.4.254", + "192.0.4.255", + "fd69:27cc:fe91:0:0:0:ffff:1", + "fd69:27cc:fe91:0:0:0:ffff:2", + "fd69:27cc:fe92:0:0:0:ffff:1", + "fd69:27cc:fe92:0:0:0:ffff:2", + "fd69:27cc:fe93:0:0:0:ffff:1", + "fd69:27cc:fe93:0:0:0:ffff:2", + "fd69:27cc:fe94:0:0:0:ffff:1", + "fd69:27cc:fe94:0:0:0:ffff:2" + ] + zone-name-list = [ + "zone.requires.review." + "zone.requires.review1." + "zone.requires.review2." + "zone.requires.review3." + "zone.requires.review4." + ] + } + + # FQDNs / IPs that cannot be modified via VinylDNS + # regex-list used for all record types except PTR + # ip-list used exclusively for PTR records + high-value-domains = { + regex-list = [ + "high-value-domain.*" # for testing + ] + ip-list = [ + # using reverse zones in the vinyldns/bind9 docker image for testing + "192.0.1.252", + "192.0.1.253", + "192.0.2.252", + "192.0.2.253", + "192.0.3.252", + "192.0.3.253", + "192.0.4.252", + "192.0.4.253", + "fd69:27cc:fe91:0:0:0:0:ffff", + "fd69:27cc:fe91:0:0:0:ffff:0", + "fd69:27cc:fe92:0:0:0:0:ffff", + "fd69:27cc:fe92:0:0:0:ffff:0", + "fd69:27cc:fe93:0:0:0:0:ffff", + "fd69:27cc:fe93:0:0:0:ffff:0", + "fd69:27cc:fe94:0:0:0:0:ffff", + "fd69:27cc:fe94:0:0:0:ffff:0" + ] + } + + # types of unowned records that users can access in shared zones + shared-approved-types = ["A", "AAAA", "CNAME", "PTR", "TXT"] + + manual-batch-review-enabled = true + + scheduled-changes-enabled = true + + multi-record-batch-change-enabled = true + + global-acl-rules = [ + { + group-ids: ["global-acl-group-id"], + fqdn-regex-list: [".*shared[0-9]{1}."] + }, + { + group-ids: ["another-global-acl-group"], + fqdn-regex-list: [".*ok[0-9]{1}."] + } + ] +} + +akka { + loglevel = "INFO" + loggers = ["akka.event.slf4j.Slf4jLogger"] + logging-filter = 
"akka.event.slf4j.Slf4jLoggingFilter" + logger-startup-timeout = 30s + + actor { + provider = "akka.actor.LocalActorRefProvider" + } +} + +akka.http { + server { + # The time period within which the TCP binding process must be completed. + # Set to `infinite` to disable. + bind-timeout = 5s + + # Show verbose error messages back to the client + verbose-error-messages = on + } + + parsing { + # Spray doesn't like the AWS4 headers + illegal-header-warnings = on + } +} diff --git a/test/portal/functional/Dockerfile b/test/portal/functional/Dockerfile new file mode 100644 index 000000000..01c5b7a9c --- /dev/null +++ b/test/portal/functional/Dockerfile @@ -0,0 +1,14 @@ +FROM vinyldns/build:base-test-portal +SHELL ["/bin/bash","-c"] +ARG DOCKERFILE_PATH="test/portal/functional" + +WORKDIR /functional_test +COPY modules/portal /functional_test +COPY $DOCKERFILE_PATH/run.sh /functional_test +RUN cp /build/node_modules.tar.xz /functional_test && \ + cd /functional_test && \ + tar Jxvf node_modules.tar.xz && \ + rm -rf node_modules.tar.xz + +ENTRYPOINT ["./run.sh"] + diff --git a/test/portal/functional/Dockerfile.dockerignore b/test/portal/functional/Dockerfile.dockerignore new file mode 100644 index 000000000..e42085f51 --- /dev/null +++ b/test/portal/functional/Dockerfile.dockerignore @@ -0,0 +1,15 @@ +**/.venv* +**/.virtualenv +**/target +**/docs +**/out +**/.log +**/.idea/ +**/.bsp +**/*cache* +**/*.png +**/.git +**/Dockerfile +**/*.dockerignore +**/.github +**/_template diff --git a/test/portal/functional/Makefile b/test/portal/functional/Makefile new file mode 100644 index 000000000..fe2a57d00 --- /dev/null +++ b/test/portal/functional/Makefile @@ -0,0 +1,42 @@ +SHELL=bash +IMAGE_NAME=vinyldns-portal-test +ROOT_DIR:=$(shell dirname $(realpath $(lastword $(MAKEFILE_LIST)))) +RELATIVE_ROOT_DIR:=$(shell realpath --relative-to=../../.. 
$(ROOT_DIR)) + +# Check that the required version of make is being used +REQ_MAKE_VER:=3.82 +ifneq ($(REQ_MAKE_VER),$(firstword $(sort $(MAKE_VERSION) $(REQ_MAKE_VER)))) + $(error The version of MAKE $(REQ_MAKE_VER) or higher is required; you are running $(MAKE_VERSION)) +endif + +# Extract arguments for `make run` +EXTRACT_ARGS=false +ifeq (run,$(firstword $(MAKECMDGOALS))) + EXTRACT_ARGS=true +endif +ifeq ($(EXTRACT_ARGS),true) + # use the rest as arguments for "run" + RUN_ARGS := $(wordlist 2,$(words $(MAKECMDGOALS)),$(MAKECMDGOALS)) + # ...and turn them into do-nothing targets + $(eval $(RUN_ARGS):;@:) +endif + + +.ONESHELL: + +.PHONY: all build run run-local + +all: build run + +build: + @set -euo pipefail + cd ../../.. + docker build -t $(IMAGE_NAME) --build-arg DOCKERFILE_PATH="$(RELATIVE_ROOT_DIR)" -f "$(ROOT_DIR)/Dockerfile" . + +run: + @set -euo pipefail + docker run -it --rm $(IMAGE_NAME) -- $(RUN_ARGS) + +run-local: + @set -euo pipefail + docker run -it --rm -v "$$(pwd)/../../../modules/portal:/functional_test" $(IMAGE_NAME) -- $(RUN_ARGS) diff --git a/test/portal/functional/run.sh b/test/portal/functional/run.sh new file mode 100644 index 000000000..753d0e239 --- /dev/null +++ b/test/portal/functional/run.sh @@ -0,0 +1,13 @@ +#!/usr/bin/env bash + +set -eo pipefail + +ROOT_DIR=$(cd -P -- "$(dirname -- "$0")" && pwd -P) + +cd "${ROOT_DIR}" +if [ "$1" == "--interactive" ]; then + shift + bash +else + grunt unit "$@" +fi From a075c3c35ed4b2bcba82b3597cc543359ca8ef52 Mon Sep 17 00:00:00 2001 From: "Emerle, Ryan" Date: Wed, 20 Oct 2021 09:07:19 -0400 Subject: [PATCH 15/82] Updates - Move away from using multiple images for "quickstart" and instead use a single "integration" image which provides all of the dependencies - Update `docker-up-vinyldns.sh` to support the new `integration` image - Update `remove-vinyl-containers.sh` to more cleanly
clean up - Update `verify.sh` to more reliably run `sbt` targets - Update `build/docker/api/application.conf` to allow for overrides and default to the `vinyldns-integration` image - Update `build/docker/portal/application.conf` to allow overrides and use `vinyldns-integration` image - Update `build/docker/portal/Dockerfile` to use `vinyldns/build:base-build-portal` to reduce need to download dependencies over and over - Update `api/assembly` sbt target to output to `assembly` rather than some deeply nested folder in `**/target` - Update documentation to reflect changes - Move `docker/` directory to `quickstart/` to reduce confusion with the `build/docker` directory - Move `bin/` to `utils/` since the files are not binaries - Add `.dockerignore` to root --- .../Dockerfile.dockerignore => .dockerignore | 2 +- .github/workflows/ci.yml | 4 +- .gitignore | 4 +- DEVELOPER_GUIDE.md | 54 ++- MAINTAINERS.md | 2 +- README.md | 6 +- bin/.env | 2 - bin/docker-up-vinyldns.sh | 113 ------ bin/remove-vinyl-containers.sh | 30 -- build.sbt | 11 +- build/docker/api/application.conf | 40 ++- build/docker/portal/Dockerfile | 12 +- build/docker/portal/application.conf | 15 +- docker/.env | 17 - docker/.env.quickstart | 17 - docker/api/.dockerignore | 5 - docker/api/Dockerfile | 17 - docker/api/docker.conf | 333 ------------------ docker/api/logback.xml | 12 - docker/api/run.sh | 41 --- docker/docker-compose-quick-start.yml | 68 ---- docker/docker-compose.yml | 41 --- .../zone/ZoneViewLoaderIntegrationSpec.scala | 1 - .../scala/vinyldns/mysql/MySqlConnector.scala | 34 +- modules/portal/README.md | 25 +- modules/portal/dist/wait-for-dependencies.sh | 32 -- modules/portal/prepare-portal.sh | 2 +- quickstart/.env | 17 + {docker => quickstart}/bind9/README.md | 0 .../bind9/etc/_template/named.partition.conf | 0 .../bind9/etc/named.conf.local | 0 .../bind9/etc/named.conf.partition1 | 0 .../bind9/etc/named.conf.partition2 | 0 .../bind9/etc/named.conf.partition3 | 0
.../bind9/etc/named.conf.partition4 | 0 .../0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa | 0 .../1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa | 0 .../bind9/zones/_template/10.10.in-addr.arpa | 0 .../_template/192^30.2.0.192.in-addr.arpa | 0 .../zones/_template/2.0.192.in-addr.arpa | 0 .../zones/_template/child.parent.com.hosts | 0 .../zones/_template/dskey.example.com.hosts | 0 .../bind9/zones/_template/dummy.hosts | 0 .../bind9/zones/_template/example.com.hosts | 0 .../bind9/zones/_template/invalid-zone.hosts | 0 .../bind9/zones/_template/list-records.hosts | 0 .../list-zones-test-searched-1.hosts | 0 .../list-zones-test-searched-2.hosts | 0 .../list-zones-test-searched-3.hosts | 0 .../list-zones-test-unfiltered-1.hosts | 0 .../list-zones-test-unfiltered-2.hosts | 0 .../zones/_template/non.test.shared.hosts | 0 .../bind9/zones/_template/not.loaded.hosts | 0 .../bind9/zones/_template/ok.hosts | 0 .../bind9/zones/_template/old-shared.hosts | 0 .../bind9/zones/_template/old-vinyldns2.hosts | 0 .../bind9/zones/_template/old-vinyldns3.hosts | 0 .../zones/_template/one-time-shared.hosts | 0 .../bind9/zones/_template/one-time.hosts | 0 .../bind9/zones/_template/open.hosts | 0 .../bind9/zones/_template/parent.com.hosts | 0 .../bind9/zones/_template/shared.hosts | 0 .../bind9/zones/_template/sync-test.hosts | 0 .../zones/_template/system-test-history.hosts | 0 .../bind9/zones/_template/system-test.hosts | 0 .../bind9/zones/_template/vinyldns.hosts | 0 .../_template/zone.requires.review.hosts | 0 .../0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa | 0 .../1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa | 0 .../bind9/zones/partition1/10.10.in-addr.arpa | 0 .../partition1/192^30.2.0.192.in-addr.arpa | 0 .../zones/partition1/2.0.192.in-addr.arpa | 0 .../zones/partition1/child.parent.com.hosts | 0 .../zones/partition1/dskey.example.com.hosts | 0 .../bind9/zones/partition1/dummy.hosts | 0 .../bind9/zones/partition1/example.com.hosts | 0 .../bind9/zones/partition1/invalid-zone.hosts | 0 
.../bind9/zones/partition1/list-records.hosts | 0 .../list-zones-test-searched-1.hosts | 0 .../list-zones-test-searched-2.hosts | 0 .../list-zones-test-searched-3.hosts | 0 .../list-zones-test-unfiltered-1.hosts | 0 .../list-zones-test-unfiltered-2.hosts | 0 .../zones/partition1/non.test.shared.hosts | 0 .../bind9/zones/partition1/not.loaded.hosts | 0 .../bind9/zones/partition1/ok.hosts | 0 .../bind9/zones/partition1/old-shared.hosts | 0 .../zones/partition1/old-vinyldns2.hosts | 0 .../zones/partition1/old-vinyldns3.hosts | 0 .../zones/partition1/one-time-shared.hosts | 0 .../bind9/zones/partition1/one-time.hosts | 0 .../bind9/zones/partition1/open.hosts | 0 .../bind9/zones/partition1/parent.com.hosts | 0 .../bind9/zones/partition1/shared.hosts | 0 .../bind9/zones/partition1/sync-test.hosts | 0 .../partition1/system-test-history.hosts | 0 .../bind9/zones/partition1/system-test.hosts | 0 .../bind9/zones/partition1/vinyldns.hosts | 0 .../partition1/zone.requires.review.hosts | 0 .../0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa | 0 .../1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa | 0 .../bind9/zones/partition2/10.10.in-addr.arpa | 0 .../partition2/192^30.2.0.192.in-addr.arpa | 0 .../zones/partition2/2.0.192.in-addr.arpa | 0 .../zones/partition2/child.parent.com.hosts | 0 .../zones/partition2/dskey.example.com.hosts | 0 .../bind9/zones/partition2/dummy.hosts | 0 .../bind9/zones/partition2/example.com.hosts | 0 .../bind9/zones/partition2/invalid-zone.hosts | 0 .../bind9/zones/partition2/list-records.hosts | 0 .../list-zones-test-searched-1.hosts | 0 .../list-zones-test-searched-2.hosts | 0 .../list-zones-test-searched-3.hosts | 0 .../list-zones-test-unfiltered-1.hosts | 0 .../list-zones-test-unfiltered-2.hosts | 0 .../zones/partition2/non.test.shared.hosts | 0 .../bind9/zones/partition2/not.loaded.hosts | 0 .../bind9/zones/partition2/ok.hosts | 0 .../bind9/zones/partition2/old-shared.hosts | 0 .../zones/partition2/old-vinyldns2.hosts | 0 .../zones/partition2/old-vinyldns3.hosts | 0 
.../zones/partition2/one-time-shared.hosts | 0 .../bind9/zones/partition2/one-time.hosts | 0 .../bind9/zones/partition2/open.hosts | 0 .../bind9/zones/partition2/parent.com.hosts | 0 .../bind9/zones/partition2/shared.hosts | 0 .../bind9/zones/partition2/sync-test.hosts | 0 .../partition2/system-test-history.hosts | 0 .../bind9/zones/partition2/system-test.hosts | 0 .../bind9/zones/partition2/vinyldns.hosts | 0 .../partition2/zone.requires.review.hosts | 0 .../0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa | 0 .../1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa | 0 .../bind9/zones/partition3/10.10.in-addr.arpa | 0 .../partition3/192^30.2.0.192.in-addr.arpa | 0 .../zones/partition3/2.0.192.in-addr.arpa | 0 .../zones/partition3/child.parent.com.hosts | 0 .../zones/partition3/dskey.example.com.hosts | 0 .../bind9/zones/partition3/dummy.hosts | 0 .../bind9/zones/partition3/example.com.hosts | 0 .../bind9/zones/partition3/invalid-zone.hosts | 0 .../bind9/zones/partition3/list-records.hosts | 0 .../list-zones-test-searched-1.hosts | 0 .../list-zones-test-searched-2.hosts | 0 .../list-zones-test-searched-3.hosts | 0 .../list-zones-test-unfiltered-1.hosts | 0 .../list-zones-test-unfiltered-2.hosts | 0 .../zones/partition3/non.test.shared.hosts | 0 .../bind9/zones/partition3/not.loaded.hosts | 0 .../bind9/zones/partition3/ok.hosts | 0 .../bind9/zones/partition3/old-shared.hosts | 0 .../zones/partition3/old-vinyldns2.hosts | 0 .../zones/partition3/old-vinyldns3.hosts | 0 .../zones/partition3/one-time-shared.hosts | 0 .../bind9/zones/partition3/one-time.hosts | 0 .../bind9/zones/partition3/open.hosts | 0 .../bind9/zones/partition3/parent.com.hosts | 0 .../bind9/zones/partition3/shared.hosts | 0 .../bind9/zones/partition3/sync-test.hosts | 0 .../partition3/system-test-history.hosts | 0 .../bind9/zones/partition3/system-test.hosts | 0 .../bind9/zones/partition3/vinyldns.hosts | 0 .../partition3/zone.requires.review.hosts | 0 .../0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa | 0 
.../1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa | 0 .../bind9/zones/partition4/10.10.in-addr.arpa | 0 .../partition4/192^30.2.0.192.in-addr.arpa | 0 .../zones/partition4/2.0.192.in-addr.arpa | 0 .../zones/partition4/child.parent.com.hosts | 0 .../zones/partition4/dskey.example.com.hosts | 0 .../bind9/zones/partition4/dummy.hosts | 0 .../bind9/zones/partition4/example.com.hosts | 0 .../bind9/zones/partition4/invalid-zone.hosts | 0 .../bind9/zones/partition4/list-records.hosts | 0 .../list-zones-test-searched-1.hosts | 0 .../list-zones-test-searched-2.hosts | 0 .../list-zones-test-searched-3.hosts | 0 .../list-zones-test-unfiltered-1.hosts | 0 .../list-zones-test-unfiltered-2.hosts | 0 .../zones/partition4/non.test.shared.hosts | 0 .../bind9/zones/partition4/not.loaded.hosts | 0 .../bind9/zones/partition4/ok.hosts | 0 .../bind9/zones/partition4/old-shared.hosts | 0 .../zones/partition4/old-vinyldns2.hosts | 0 .../zones/partition4/old-vinyldns3.hosts | 0 .../zones/partition4/one-time-shared.hosts | 0 .../bind9/zones/partition4/one-time.hosts | 0 .../bind9/zones/partition4/open.hosts | 0 .../bind9/zones/partition4/parent.com.hosts | 0 .../bind9/zones/partition4/shared.hosts | 0 .../bind9/zones/partition4/sync-test.hosts | 0 .../partition4/system-test-history.hosts | 0 .../bind9/zones/partition4/system-test.hosts | 0 .../bind9/zones/partition4/vinyldns.hosts | 0 .../partition4/zone.requires.review.hosts | 0 quickstart/docker-compose.yml | 45 +++ quickstart/portal/Dockerfile | 34 ++ quickstart/portal/Makefile | 43 +++ .../portal/application.conf | 5 +- {docker => quickstart}/portal/application.ini | 0 test/api/functional/Dockerfile | 3 +- test/api/functional/Makefile | 8 +- test/api/integration/Dockerfile | 5 +- test/api/integration/Dockerfile.dockerignore | 15 - test/api/integration/Makefile | 20 +- .../portal/functional/Dockerfile.dockerignore | 15 - test/portal/functional/Makefile | 8 +- {bin => utils}/add-license-headers.sh | 0 {docker => utils}/admin/Dockerfile | 0 
.../admin/update-support-user.py | 0 utils/clean-vinyldns-containers.sh | 27 ++ {bin => utils}/func-test-api.sh | 0 {bin => utils}/func-test-portal.sh | 0 utils/quickstart-vinyldns.sh | 141 ++++++++ {bin => utils}/release.sh | 6 +- {bin => utils}/update-support-user.sh | 0 {bin => utils}/verify.sh | 2 +- 217 files changed, 461 insertions(+), 873 deletions(-) rename test/api/functional/Dockerfile.dockerignore => .dockerignore (94%) delete mode 100644 bin/.env delete mode 100755 bin/docker-up-vinyldns.sh delete mode 100755 bin/remove-vinyl-containers.sh delete mode 100644 docker/.env delete mode 100644 docker/.env.quickstart delete mode 100644 docker/api/.dockerignore delete mode 100644 docker/api/Dockerfile delete mode 100644 docker/api/docker.conf delete mode 100644 docker/api/logback.xml delete mode 100755 docker/api/run.sh delete mode 100644 docker/docker-compose-quick-start.yml delete mode 100644 docker/docker-compose.yml delete mode 100755 modules/portal/dist/wait-for-dependencies.sh create mode 100644 quickstart/.env rename {docker => quickstart}/bind9/README.md (100%) rename {docker => quickstart}/bind9/etc/_template/named.partition.conf (100%) rename {docker => quickstart}/bind9/etc/named.conf.local (100%) mode change 100755 => 100644 rename {docker => quickstart}/bind9/etc/named.conf.partition1 (100%) rename {docker => quickstart}/bind9/etc/named.conf.partition2 (100%) rename {docker => quickstart}/bind9/etc/named.conf.partition3 (100%) rename {docker => quickstart}/bind9/etc/named.conf.partition4 (100%) rename {docker => quickstart}/bind9/zones/_template/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa (100%) rename {docker => quickstart}/bind9/zones/_template/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa (100%) rename {docker => quickstart}/bind9/zones/_template/10.10.in-addr.arpa (100%) rename {docker => quickstart}/bind9/zones/_template/192^30.2.0.192.in-addr.arpa (100%) rename {docker => quickstart}/bind9/zones/_template/2.0.192.in-addr.arpa (100%) rename {docker => 
quickstart}/bind9/zones/_template/child.parent.com.hosts (100%) rename {docker => quickstart}/bind9/zones/_template/dskey.example.com.hosts (100%) rename {docker => quickstart}/bind9/zones/_template/dummy.hosts (100%) rename {docker => quickstart}/bind9/zones/_template/example.com.hosts (100%) rename {docker => quickstart}/bind9/zones/_template/invalid-zone.hosts (100%) rename {docker => quickstart}/bind9/zones/_template/list-records.hosts (100%) rename {docker => quickstart}/bind9/zones/_template/list-zones-test-searched-1.hosts (100%) rename {docker => quickstart}/bind9/zones/_template/list-zones-test-searched-2.hosts (100%) rename {docker => quickstart}/bind9/zones/_template/list-zones-test-searched-3.hosts (100%) rename {docker => quickstart}/bind9/zones/_template/list-zones-test-unfiltered-1.hosts (100%) rename {docker => quickstart}/bind9/zones/_template/list-zones-test-unfiltered-2.hosts (100%) rename {docker => quickstart}/bind9/zones/_template/non.test.shared.hosts (100%) rename {docker => quickstart}/bind9/zones/_template/not.loaded.hosts (100%) rename {docker => quickstart}/bind9/zones/_template/ok.hosts (100%) rename {docker => quickstart}/bind9/zones/_template/old-shared.hosts (100%) rename {docker => quickstart}/bind9/zones/_template/old-vinyldns2.hosts (100%) rename {docker => quickstart}/bind9/zones/_template/old-vinyldns3.hosts (100%) rename {docker => quickstart}/bind9/zones/_template/one-time-shared.hosts (100%) rename {docker => quickstart}/bind9/zones/_template/one-time.hosts (100%) rename {docker => quickstart}/bind9/zones/_template/open.hosts (100%) rename {docker => quickstart}/bind9/zones/_template/parent.com.hosts (100%) rename {docker => quickstart}/bind9/zones/_template/shared.hosts (100%) rename {docker => quickstart}/bind9/zones/_template/sync-test.hosts (100%) rename {docker => quickstart}/bind9/zones/_template/system-test-history.hosts (100%) rename {docker => quickstart}/bind9/zones/_template/system-test.hosts (100%) rename {docker 
=> quickstart}/bind9/zones/_template/vinyldns.hosts (100%) rename {docker => quickstart}/bind9/zones/_template/zone.requires.review.hosts (100%) rename {docker => quickstart}/bind9/zones/partition1/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa (100%) rename {docker => quickstart}/bind9/zones/partition1/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa (100%) rename {docker => quickstart}/bind9/zones/partition1/10.10.in-addr.arpa (100%) rename {docker => quickstart}/bind9/zones/partition1/192^30.2.0.192.in-addr.arpa (100%) rename {docker => quickstart}/bind9/zones/partition1/2.0.192.in-addr.arpa (100%) rename {docker => quickstart}/bind9/zones/partition1/child.parent.com.hosts (100%) rename {docker => quickstart}/bind9/zones/partition1/dskey.example.com.hosts (100%) rename {docker => quickstart}/bind9/zones/partition1/dummy.hosts (100%) rename {docker => quickstart}/bind9/zones/partition1/example.com.hosts (100%) rename {docker => quickstart}/bind9/zones/partition1/invalid-zone.hosts (100%) rename {docker => quickstart}/bind9/zones/partition1/list-records.hosts (100%) rename {docker => quickstart}/bind9/zones/partition1/list-zones-test-searched-1.hosts (100%) rename {docker => quickstart}/bind9/zones/partition1/list-zones-test-searched-2.hosts (100%) rename {docker => quickstart}/bind9/zones/partition1/list-zones-test-searched-3.hosts (100%) rename {docker => quickstart}/bind9/zones/partition1/list-zones-test-unfiltered-1.hosts (100%) rename {docker => quickstart}/bind9/zones/partition1/list-zones-test-unfiltered-2.hosts (100%) rename {docker => quickstart}/bind9/zones/partition1/non.test.shared.hosts (100%) rename {docker => quickstart}/bind9/zones/partition1/not.loaded.hosts (100%) rename {docker => quickstart}/bind9/zones/partition1/ok.hosts (100%) rename {docker => quickstart}/bind9/zones/partition1/old-shared.hosts (100%) rename {docker => quickstart}/bind9/zones/partition1/old-vinyldns2.hosts (100%) rename {docker => quickstart}/bind9/zones/partition1/old-vinyldns3.hosts (100%) 
rename {docker => quickstart}/bind9/zones/partition1/one-time-shared.hosts (100%) rename {docker => quickstart}/bind9/zones/partition1/one-time.hosts (100%) rename {docker => quickstart}/bind9/zones/partition1/open.hosts (100%) rename {docker => quickstart}/bind9/zones/partition1/parent.com.hosts (100%) rename {docker => quickstart}/bind9/zones/partition1/shared.hosts (100%) rename {docker => quickstart}/bind9/zones/partition1/sync-test.hosts (100%) rename {docker => quickstart}/bind9/zones/partition1/system-test-history.hosts (100%) rename {docker => quickstart}/bind9/zones/partition1/system-test.hosts (100%) rename {docker => quickstart}/bind9/zones/partition1/vinyldns.hosts (100%) rename {docker => quickstart}/bind9/zones/partition1/zone.requires.review.hosts (100%) rename {docker => quickstart}/bind9/zones/partition2/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa (100%) rename {docker => quickstart}/bind9/zones/partition2/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa (100%) rename {docker => quickstart}/bind9/zones/partition2/10.10.in-addr.arpa (100%) rename {docker => quickstart}/bind9/zones/partition2/192^30.2.0.192.in-addr.arpa (100%) rename {docker => quickstart}/bind9/zones/partition2/2.0.192.in-addr.arpa (100%) rename {docker => quickstart}/bind9/zones/partition2/child.parent.com.hosts (100%) rename {docker => quickstart}/bind9/zones/partition2/dskey.example.com.hosts (100%) rename {docker => quickstart}/bind9/zones/partition2/dummy.hosts (100%) rename {docker => quickstart}/bind9/zones/partition2/example.com.hosts (100%) rename {docker => quickstart}/bind9/zones/partition2/invalid-zone.hosts (100%) rename {docker => quickstart}/bind9/zones/partition2/list-records.hosts (100%) rename {docker => quickstart}/bind9/zones/partition2/list-zones-test-searched-1.hosts (100%) rename {docker => quickstart}/bind9/zones/partition2/list-zones-test-searched-2.hosts (100%) rename {docker => quickstart}/bind9/zones/partition2/list-zones-test-searched-3.hosts (100%) rename {docker => 
quickstart}/bind9/zones/partition2/list-zones-test-unfiltered-1.hosts (100%) rename {docker => quickstart}/bind9/zones/partition2/list-zones-test-unfiltered-2.hosts (100%) rename {docker => quickstart}/bind9/zones/partition2/non.test.shared.hosts (100%) rename {docker => quickstart}/bind9/zones/partition2/not.loaded.hosts (100%) rename {docker => quickstart}/bind9/zones/partition2/ok.hosts (100%) rename {docker => quickstart}/bind9/zones/partition2/old-shared.hosts (100%) rename {docker => quickstart}/bind9/zones/partition2/old-vinyldns2.hosts (100%) rename {docker => quickstart}/bind9/zones/partition2/old-vinyldns3.hosts (100%) rename {docker => quickstart}/bind9/zones/partition2/one-time-shared.hosts (100%) rename {docker => quickstart}/bind9/zones/partition2/one-time.hosts (100%) rename {docker => quickstart}/bind9/zones/partition2/open.hosts (100%) rename {docker => quickstart}/bind9/zones/partition2/parent.com.hosts (100%) rename {docker => quickstart}/bind9/zones/partition2/shared.hosts (100%) rename {docker => quickstart}/bind9/zones/partition2/sync-test.hosts (100%) rename {docker => quickstart}/bind9/zones/partition2/system-test-history.hosts (100%) rename {docker => quickstart}/bind9/zones/partition2/system-test.hosts (100%) rename {docker => quickstart}/bind9/zones/partition2/vinyldns.hosts (100%) rename {docker => quickstart}/bind9/zones/partition2/zone.requires.review.hosts (100%) rename {docker => quickstart}/bind9/zones/partition3/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa (100%) rename {docker => quickstart}/bind9/zones/partition3/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa (100%) rename {docker => quickstart}/bind9/zones/partition3/10.10.in-addr.arpa (100%) rename {docker => quickstart}/bind9/zones/partition3/192^30.2.0.192.in-addr.arpa (100%) rename {docker => quickstart}/bind9/zones/partition3/2.0.192.in-addr.arpa (100%) rename {docker => quickstart}/bind9/zones/partition3/child.parent.com.hosts (100%) rename {docker => 
quickstart}/bind9/zones/partition3/dskey.example.com.hosts (100%) rename {docker => quickstart}/bind9/zones/partition3/dummy.hosts (100%) rename {docker => quickstart}/bind9/zones/partition3/example.com.hosts (100%) rename {docker => quickstart}/bind9/zones/partition3/invalid-zone.hosts (100%) rename {docker => quickstart}/bind9/zones/partition3/list-records.hosts (100%) rename {docker => quickstart}/bind9/zones/partition3/list-zones-test-searched-1.hosts (100%) rename {docker => quickstart}/bind9/zones/partition3/list-zones-test-searched-2.hosts (100%) rename {docker => quickstart}/bind9/zones/partition3/list-zones-test-searched-3.hosts (100%) rename {docker => quickstart}/bind9/zones/partition3/list-zones-test-unfiltered-1.hosts (100%) rename {docker => quickstart}/bind9/zones/partition3/list-zones-test-unfiltered-2.hosts (100%) rename {docker => quickstart}/bind9/zones/partition3/non.test.shared.hosts (100%) rename {docker => quickstart}/bind9/zones/partition3/not.loaded.hosts (100%) rename {docker => quickstart}/bind9/zones/partition3/ok.hosts (100%) rename {docker => quickstart}/bind9/zones/partition3/old-shared.hosts (100%) rename {docker => quickstart}/bind9/zones/partition3/old-vinyldns2.hosts (100%) rename {docker => quickstart}/bind9/zones/partition3/old-vinyldns3.hosts (100%) rename {docker => quickstart}/bind9/zones/partition3/one-time-shared.hosts (100%) rename {docker => quickstart}/bind9/zones/partition3/one-time.hosts (100%) rename {docker => quickstart}/bind9/zones/partition3/open.hosts (100%) rename {docker => quickstart}/bind9/zones/partition3/parent.com.hosts (100%) rename {docker => quickstart}/bind9/zones/partition3/shared.hosts (100%) rename {docker => quickstart}/bind9/zones/partition3/sync-test.hosts (100%) rename {docker => quickstart}/bind9/zones/partition3/system-test-history.hosts (100%) rename {docker => quickstart}/bind9/zones/partition3/system-test.hosts (100%) rename {docker => quickstart}/bind9/zones/partition3/vinyldns.hosts 
(100%) rename {docker => quickstart}/bind9/zones/partition3/zone.requires.review.hosts (100%) rename {docker => quickstart}/bind9/zones/partition4/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa (100%) rename {docker => quickstart}/bind9/zones/partition4/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa (100%) rename {docker => quickstart}/bind9/zones/partition4/10.10.in-addr.arpa (100%) rename {docker => quickstart}/bind9/zones/partition4/192^30.2.0.192.in-addr.arpa (100%) rename {docker => quickstart}/bind9/zones/partition4/2.0.192.in-addr.arpa (100%) rename {docker => quickstart}/bind9/zones/partition4/child.parent.com.hosts (100%) rename {docker => quickstart}/bind9/zones/partition4/dskey.example.com.hosts (100%) rename {docker => quickstart}/bind9/zones/partition4/dummy.hosts (100%) rename {docker => quickstart}/bind9/zones/partition4/example.com.hosts (100%) rename {docker => quickstart}/bind9/zones/partition4/invalid-zone.hosts (100%) rename {docker => quickstart}/bind9/zones/partition4/list-records.hosts (100%) rename {docker => quickstart}/bind9/zones/partition4/list-zones-test-searched-1.hosts (100%) rename {docker => quickstart}/bind9/zones/partition4/list-zones-test-searched-2.hosts (100%) rename {docker => quickstart}/bind9/zones/partition4/list-zones-test-searched-3.hosts (100%) rename {docker => quickstart}/bind9/zones/partition4/list-zones-test-unfiltered-1.hosts (100%) rename {docker => quickstart}/bind9/zones/partition4/list-zones-test-unfiltered-2.hosts (100%) rename {docker => quickstart}/bind9/zones/partition4/non.test.shared.hosts (100%) rename {docker => quickstart}/bind9/zones/partition4/not.loaded.hosts (100%) rename {docker => quickstart}/bind9/zones/partition4/ok.hosts (100%) rename {docker => quickstart}/bind9/zones/partition4/old-shared.hosts (100%) rename {docker => quickstart}/bind9/zones/partition4/old-vinyldns2.hosts (100%) rename {docker => quickstart}/bind9/zones/partition4/old-vinyldns3.hosts (100%) rename {docker => 
quickstart}/bind9/zones/partition4/one-time-shared.hosts (100%) rename {docker => quickstart}/bind9/zones/partition4/one-time.hosts (100%) rename {docker => quickstart}/bind9/zones/partition4/open.hosts (100%) rename {docker => quickstart}/bind9/zones/partition4/parent.com.hosts (100%) rename {docker => quickstart}/bind9/zones/partition4/shared.hosts (100%) rename {docker => quickstart}/bind9/zones/partition4/sync-test.hosts (100%) rename {docker => quickstart}/bind9/zones/partition4/system-test-history.hosts (100%) rename {docker => quickstart}/bind9/zones/partition4/system-test.hosts (100%) rename {docker => quickstart}/bind9/zones/partition4/vinyldns.hosts (100%) rename {docker => quickstart}/bind9/zones/partition4/zone.requires.review.hosts (100%) create mode 100644 quickstart/docker-compose.yml create mode 100644 quickstart/portal/Dockerfile create mode 100644 quickstart/portal/Makefile rename {docker => quickstart}/portal/application.conf (92%) rename {docker => quickstart}/portal/application.ini (100%) delete mode 100644 test/api/integration/Dockerfile.dockerignore delete mode 100644 test/portal/functional/Dockerfile.dockerignore rename {bin => utils}/add-license-headers.sh (100%) mode change 100755 => 100644 rename {docker => utils}/admin/Dockerfile (100%) rename {docker => utils}/admin/update-support-user.py (100%) create mode 100644 utils/clean-vinyldns-containers.sh rename {bin => utils}/func-test-api.sh (100%) mode change 100755 => 100644 rename {bin => utils}/func-test-portal.sh (100%) mode change 100755 => 100644 create mode 100644 utils/quickstart-vinyldns.sh rename {bin => utils}/release.sh (89%) mode change 100755 => 100644 rename {bin => utils}/update-support-user.sh (100%) mode change 100755 => 100644 rename {bin => utils}/verify.sh (83%) mode change 100755 => 100644 diff --git a/test/api/functional/Dockerfile.dockerignore b/.dockerignore similarity index 94% rename from test/api/functional/Dockerfile.dockerignore rename to .dockerignore index 
e42085f51..6882e4713 100644 --- a/test/api/functional/Dockerfile.dockerignore +++ b/.dockerignore @@ -7,9 +7,9 @@ **/.idea/ **/.bsp **/*cache* -**/*.png **/.git **/Dockerfile **/*.dockerignore **/.github **/_template +img/ diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 89a58987c..6e7fd66eb 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -5,7 +5,7 @@ on: pull_request: branches: ['*'] push: - branches: ['master'] + branches: ['master','main'] env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} @@ -140,4 +140,4 @@ jobs: path: ~/.sbt key: ${{ runner.os }}-sbt-cache-${{ hashFiles('**/*.sbt') }}-${{ hashFiles('project/build.properties') }} - name: Func tests - run: ./bin/func-test-portal.sh && ./bin/func-test-api-travis.sh + run: ./utils/func-test-portal.sh && ./utils/func-test-api.sh diff --git a/.gitignore b/.gitignore index f42ce6dd5..fc48f02ef 100644 --- a/.gitignore +++ b/.gitignore @@ -31,8 +31,8 @@ tmp.out .vscode project/metals.sbt .bsp -docker/data +quickstart/data **/.virtualenv **/.venv* **/*cache* - +**/assembly/ diff --git a/DEVELOPER_GUIDE.md b/DEVELOPER_GUIDE.md index 41ce0b58c..a9bdd702b 100644 --- a/DEVELOPER_GUIDE.md +++ b/DEVELOPER_GUIDE.md @@ -7,19 +7,31 @@ - [Running VinylDNS Locally](#running-vinyldns-locally) - [Testing](#testing) -## Developer Requirements +## Developer Requirements (Local) -- Scala 2.12 -- sbt 1+ - Java 8 (at least u162) -- Python 2.7 -- virtualenv -- Docker -- curl -- npm -- grunt +- Scala 2.12 +- sbt 1.4+ -Make sure that you have the requirements installed before proceeding. 
+ +- curl +- docker +- docker-compose +- GNU Make 3.82+ +- grunt +- npm +- Python 3.5+ + +## Developer Requirements (Docker) + +Since almost everything can be run with Docker and GNU Make, if you don't want to set up a local development environment, +then you simply need: + +- `Docker` v19.03+ _(earlier versions may work fine)_ +- `Docker Compose` v2.0+ _(earlier versions may work fine)_ +- `GNU Make` v3.82+ +- `Bash` 3.2+ + - Basic utilities: `awk`, `sed`, `curl`, `grep`, etc. may be needed for scripts ## Project Layout @@ -135,10 +147,12 @@ README. However, VinylDNS can also be run in the foreground. ### Starting the API Server Before starting the API service, you can start the dependencies for local development: + ``` cd test/api/integration make build && make run-bg ``` + This will start a container running in the background with necessary prerequisites. Once the prerequisites are running, you can start up sbt by running `sbt` from the root directory. @@ -147,16 +161,21 @@ Once the prerequisites are running, you can start up sbt by running `sbt` from t * `reStart` to start up the API server * Wait until you see the message `VINYLDNS SERVER STARTED SUCCESSFULLY` before working with the server * To stop the VinylDNS server, run `reStop` from the api project -* To stop the dependent Docker containers, change to the root project `project root`, then run `dockerComposeStop` from - the API project +* To stop the dependent Docker containers: `utils/clean-vinyldns-containers.sh` See the [API Configuration Guide](https://www.vinyldns.io/operator/config-api) for information regarding API configuration. ### Starting the Portal -To run the portal locally, you _first_ have to start up the VinylDNS API Server (see instructions above). Once that is -done, in the same `sbt` session or a different one, go to `project portal` and then execute `;preparePortal; run`.
+To run the portal locally, you _first_ have to start up the VinylDNS API Server: + +``` +utils/quickstart-vinyldns.sh +``` + +Once that is done, in the same `sbt` session or a different one, go to `project portal` and then +execute `;preparePortal; run`. See the [Portal Configuration Guide](https://www.vinyldns.io/operator/config-portal) for information regarding portal configuration. @@ -220,7 +239,7 @@ You can run all unit and integration tests for the api and portal by running `sb When adding new features, you will often need to write new functional tests that black box / regression test the API. - The API functional tests are written in Python and live under `test/api/functional`. -The Portal functional tests are written in JavaScript and live under `test/portal/functional`. +- The Portal functional tests are written in JavaScript and live under `test/portal/functional`. #### Running Functional Tests @@ -229,6 +248,7 @@ To run functional tests you can simply execute the following command: ``` make build && make run ``` + During iterative test development, you can use `make run-local` which will mount the current functional tests in the container, allowing for easier test development. @@ -236,13 +256,11 @@ Additionally, you can pass `--interactive` to `make run` or `make run-local` to From there you can run tests with the `/functional_test/run.sh` command. This allows for finer-grained control over the test execution process as well as easier inspection of logs. - ##### API Functional Tests + You can run a specific test by name by running `make run -- -k `. Any arguments after `make run --` will be passed to the test runner [`test/api/functional/run.sh`](test/api/functional/run.sh). - - #### Setup We use [pytest](https://docs.pytest.org/en/latest/) for python tests. 
It is helpful that you browse the documentation so diff --git a/MAINTAINERS.md b/MAINTAINERS.md index 37e7bf93c..88f3b9e72 100644 --- a/MAINTAINERS.md +++ b/MAINTAINERS.md @@ -79,7 +79,7 @@ running the release 1. Follow [Docker Content Trust](#docker-content-trust) to setup a notary delegation for yourself 1. Follow [Sonatype Credentials](#sonatype-credentials) to setup the sonatype pgp signing key on your local 1. Make sure you're logged in to Docker with `docker login` -1. Run `bin/release.sh` _Note: the arg "skip-tests" will skip unit, integration and functional testing before a release_ +1. Run `utils/release.sh` _Note: the arg "skip-tests" will skip unit, integration and functional testing before a release_ 1. You will be asked to confirm the version which originally comes from `version.sbt`. _NOTE: if the version ends with `SNAPSHOT`, then the docker latest tag won't be applied and the core module will only be published to the sonatype staging repo._ diff --git a/README.md b/README.md index aea300f55..21856b1b3 100644 --- a/README.md +++ b/README.md @@ -48,9 +48,9 @@ To start up a local instance of VinylDNS on your machine with docker: 1. Ensure that you have [docker](https://docs.docker.com/install/) and [docker-compose](https://docs.docker.com/compose/install/) 1. Clone the repo: `git clone https://github.com/vinyldns/vinyldns.git` 1. Navigate to repo: `cd vinyldns` -1. Run `./bin/docker-up-vinyldns.sh`. This will start up the api at `localhost:9000` and the portal at `localhost:9001` +1. Run `./utils/quickstart-vinyldns.sh`. This will start up the api at `localhost:9000` and the portal at `localhost:9001` 1. See [Developer Guide](DEVELOPER_GUIDE.md#loading-test-data) for how to load a test DNS zone -1. To stop the local setup, run `./bin/remove-vinyl-containers.sh`. +1. To stop the local setup, run `./utils/clean-vinyldns-containers.sh`. 
There exist several clients at that can be used to make API requests, using the endpoint `http://localhost:9000` @@ -72,7 +72,7 @@ TTL = 300, IP Addressess = 1.1.1.1` 1. Upon connecting to a zone for the first time, a zone sync is executed to provide VinylDNS a copy of the records in the zone 1. Changes made via VinylDNS are made against the DNS backend, you do not need to sync the zone further to push those changes out 1. If changes to the zone are made outside of VinylDNS, then the zone will have to be re-synced to give VinylDNS a copy of those records -1. If you wish to modify the url used in the creation process from `http://localhost:9000`, to say `http://vinyldns.yourdomain.com:9000`, you can modify the `bin/.env` file before execution. +1. If you wish to modify the url used in the creation process from `http://localhost:9000`, to say `http://vinyldns.yourdomain.com:9000`, you can modify the `utils/.env` file before execution. 1. A similar `docker/.env.quickstart` can be modified to change the default ports for the Portal and API. You must also modify their config files with the new port: https://www.vinyldns.io/operator/config-portal & https://www.vinyldns.io/operator/config-api ## Code of Conduct diff --git a/bin/.env b/bin/.env deleted file mode 100644 index b6d3b2ffc..000000000 --- a/bin/.env +++ /dev/null @@ -1,2 +0,0 @@ -VINYLDNS_API_URL=http://localhost:9000 -VINYLDNS_PORTAL_URL=http://localhost:9001 diff --git a/bin/docker-up-vinyldns.sh b/bin/docker-up-vinyldns.sh deleted file mode 100755 index cf31c427d..000000000 --- a/bin/docker-up-vinyldns.sh +++ /dev/null @@ -1,113 +0,0 @@ -#!/usr/bin/env bash -##################################################################################################### -# Starts up the api, portal, and dependent services via -# docker-compose. 
The api will be available on localhost:9000 and the -# portal will be on localhost:9001 -# -# Relevant overrides can be found at ./.env and ../docker/.env -# -# Options: -# -t, --timeout seconds: overwrite ping timeout, default of 60 -# -a, --api-only: only starts up vinyldns-api and its dependencies, excludes vinyldns-portal -# -c, --clean: re-pull vinyldns/api and vinyldns/portal images from docker hub -# -v, --version tag: overwrite vinyldns/api and vinyldns/portal docker tags -##################################################################################################### - -function wait_for_url { - echo "pinging ${URL} ..." - DATA="" - RETRY="$TIMEOUT" - while [ "$RETRY" -gt 0 ] - do - DATA=$(curl -I -s "${URL}" -o /dev/null -w "%{http_code}") - if [ $? -eq 0 ] - then - echo "Succeeded in connecting to ${URL}!" - break - else - echo "Retrying" >&2 - - let RETRY-=1 - sleep 1 - - if [ "$RETRY" -eq 0 ] - then - echo "Exceeded retries waiting for ${URL} to be ready, failing" - exit 1 - fi - fi - done -} - -function usage { - printf "usage: docker-up-vinyldns.sh [OPTIONS]\n\n" - printf "starts up a local VinylDNS installation using docker compose\n\n" - printf "options:\n" - printf "\t-t, --timeout seconds: overwrite ping timeout of 60\n" - printf "\t-a, --api-only: do not start up vinyldns-portal\n" - printf "\t-c, --clean: re-pull vinyldns/api and vinyldns/portal images from docker hub\n" - printf "\t-v, --version tag: overwrite vinyldns/api and vinyldns/portal docker tags\n" -} - -function clean_images { - if (( $CLEAN == 1 )); then - echo "cleaning docker images..." - docker rmi vinyldns/api:$VINYLDNS_VERSION - docker rmi vinyldns/portal:$VINYLDNS_VERSION - fi -} - -function wait_for_api { - echo "Waiting for api..." - URL="$VINYLDNS_API_URL" - wait_for_url -} - -function wait_for_portal { - # check if portal was skipped - if [ "$SERVICE" != "api" ]; then - echo "Waiting for portal..." 
- URL="$VINYLDNS_PORTAL_URL" - wait_for_url - fi -} - -# initial var setup -DIR=$( cd $(dirname $0) ; pwd -P ) -TIMEOUT=60 -DOCKER_COMPOSE_CONFIG="${DIR}/../docker/docker-compose-quick-start.yml" -# empty service starts up all docker services in compose file -SERVICE="" -# when CLEAN is set to 1, existing docker images are deleted so they are re-pulled -CLEAN=0 -# default to latest for docker versions -export VINYLDNS_VERSION=latest - -# source env before parsing args so vars can be overwritten -set -a # Required in order to source docker/.env -# Source customizable env files -source "$DIR"/.env -source "$DIR"/../docker/.env - -# parse args -while [ "$1" != "" ]; do - case "$1" in - -t | --timeout ) TIMEOUT="$2"; shift;; - -a | --api-only ) SERVICE="api";; - -c | --clean ) CLEAN=1;; - -v | --version ) export VINYLDNS_VERSION=$2; shift;; - * ) usage; exit;; - esac - shift -done - -clean_images - -echo "timeout is set to ${TIMEOUT}" -echo "vinyldns version is set to '${VINYLDNS_VERSION}'" - -echo "Starting vinyldns and all dependencies in the background..." 
-docker-compose -f "$DOCKER_COMPOSE_CONFIG" up -d ${SERVICE} - -wait_for_api -wait_for_portal diff --git a/bin/remove-vinyl-containers.sh b/bin/remove-vinyl-containers.sh deleted file mode 100755 index 5f5780564..000000000 --- a/bin/remove-vinyl-containers.sh +++ /dev/null @@ -1,30 +0,0 @@ -#!/usr/bin/env bash -# -# The local vinyldns setup used for testing relies on the -# following docker images: -# mysql:5.7 -# s12v/elasticmq:0.13.8 -# vinyldns/bind9 -# vinyldns/api -# vinyldns/portal -# rroemhild/test-openldap -# localstack/localstack -# -# This script with kill and remove containers associated -# with these names and/or tags -# -# Note: this will not remove the actual images from your -# machine, just the running containers - -IDS=$(docker ps -a | grep -e 'mysql:5.7' -e 's12v/elasticmq:0.13.8' -e 'vinyldns' -e 'flaviovs/mock-smtp' -e 'localstack/localstack' -e 'rroemhild/test-openldap' | awk '{print $1}') - -echo "killing..." -echo $(echo "$IDS" | xargs -I {} docker kill {}) -echo - -echo "removing..." -echo $(echo "$IDS" | xargs -I {} docker rm -v {}) -echo - -echo "pruning network..." 
-docker network prune -f diff --git a/build.sbt b/build.sbt index 15cbf3b13..168cb7306 100644 --- a/build.sbt +++ b/build.sbt @@ -47,7 +47,7 @@ lazy val sharedSettings = Seq( lazy val testSettings = Seq( parallelExecution in Test := true, parallelExecution in IntegrationTest := false, - fork in IntegrationTest := false, + fork in IntegrationTest := true, testOptions in Test += Tests.Argument("-oDNCXEPQRMIK", "-l", "SkipCI"), logBuffered in Test := false, // Hide stack traces in tests @@ -67,13 +67,11 @@ lazy val apiSettings = Seq( ) lazy val apiAssemblySettings = Seq( - assemblyJarName in assembly := "vinyldns.jar", + assemblyOutputPath in assembly := file("assembly/vinyldns.jar"), test in assembly := {}, mainClass in assembly := Some("vinyldns.api.Boot"), mainClass in reStart := Some("vinyldns.api.Boot"), - // there are some odd things from dnsjava including update.java and dig.java that we don't use assemblyMergeStrategy in assembly := { - case "update.class" | "dig.class" => MergeStrategy.discard case PathList("scala", "tools", "nsc", "doc", "html", "resource", "lib", "index.js") => MergeStrategy.discard case PathList("scala", "tools", "nsc", "doc", "html", "resource", "lib", "template.js") => @@ -153,6 +151,7 @@ lazy val coreBuildSettings = Seq( // to write a crypto plugin so that we fall back to a noarg constructor scalacOptions ++= scalacOptionsByV(scalaVersion.value).filterNot(_ == "-Ywarn-unused:params") ) ++ pbSettings + lazy val corePublishSettings = Seq( publishMavenStyle := true, publishArtifact in Test := false, @@ -266,11 +265,11 @@ lazy val portal = (project in file("modules/portal")) }, checkJsHeaders := { import scala.sys.process._ - "./bin/add-license-headers.sh -d=modules/portal/public/lib -f=js -c" ! + "./utils/add-license-headers.sh -d=modules/portal/public/lib -f=js -c" ! }, createJsHeaders := { import scala.sys.process._ - "./bin/add-license-headers.sh -d=modules/portal/public/lib -f=js" ! 
+ "./utils/add-license-headers.sh -d=modules/portal/public/lib -f=js" ! }, // change the name of the output to portal.zip packageName in Universal := "portal" diff --git a/build/docker/api/application.conf b/build/docker/api/application.conf index 09eab0cf6..e3102e0d0 100644 --- a/build/docker/api/application.conf +++ b/build/docker/api/application.conf @@ -9,11 +9,17 @@ vinyldns { settings = { name = "vinyldns" + name = ${?JDBC_DB_NAME} driver = "org.mariadb.jdbc.Driver" - migration-url = "jdbc:mariadb://vinyldns-mysql:3306/?user=root&password=pass" - url = "jdbc:mariadb://vinyldns-mysql:3306/vinyldns?user=root&password=pass" + driver = ${?JDBC_DRIVER} + migration-url = "jdbc:mariadb://vinyldns-integration:19002/?user=root&password=pass" + migration-url = ${?JDBC_MIGRATION_URL} + url = "jdbc:mariadb://vinyldns-integration:19002/vinyldns?user=root&password=pass" + url = ${?JDBC_URL} user = "root" + user = ${?JDBC_USER} password = "pass" + password = ${?JDBC_PASSWORD} # see https://github.com/brettwooldridge/HikariCP connection-timeout-millis = 1000 @@ -50,11 +56,17 @@ vinyldns { # these must be overridden to use MYSQL for production use # assumes a docker or mysql instance running locally name = "vinyldns" + name = ${?JDBC_DB_NAME} driver = "org.mariadb.jdbc.Driver" - migration-url = "jdbc:mariadb://vinyldns-mysql:3306/?user=root&password=pass" - url = "jdbc:mariadb://vinyldns-mysql:3306/vinyldns?user=root&password=pass" + driver = ${?JDBC_DRIVER} + migration-url = "jdbc:mariadb://vinyldns-integration:19002/?user=root&password=pass" + migration-url = ${?JDBC_MIGRATION_URL} + url = "jdbc:mariadb://vinyldns-integration:19002/vinyldns?user=root&password=pass" + url = ${?JDBC_URL} user = "root" + user = ${?JDBC_USER} password = "pass" + password = ${?JDBC_PASSWORD} # see https://github.com/brettwooldridge/HikariCP connection-timeout-millis = 1000 idle-timeout = 10000 @@ -89,15 +101,21 @@ vinyldns { defaultZoneConnection { name = "vinyldns." keyName = "vinyldns." 
+ keyName = ${?DEFAULT_DNS_KEY_NAME} key = "nzisn+4G2ldMn0q1CV3vsg==" - primaryServer = "vinyldns-bind9" + key = ${?DEFAULT_DNS_KEY_SECRET} + primaryServer = "vinyldns-integration:19001" + primaryServer = ${?DEFAULT_DNS_ADDRESS} } defaultTransferConnection { name = "vinyldns." keyName = "vinyldns." + keyName = ${?DEFAULT_DNS_KEY_NAME} key = "nzisn+4G2ldMn0q1CV3vsg==" - primaryServer = "vinyldns-bind9" + key = ${?DEFAULT_DNS_KEY_SECRET} + primaryServer = "vinyldns-integration:19001" + primaryServer = ${?DEFAULT_DNS_ADDRESS} } backends = [ @@ -106,14 +124,20 @@ vinyldns { zone-connection { name = "vinyldns." key-name = "vinyldns." + key-name = ${?DEFAULT_DNS_KEY_NAME} key = "nzisn+4G2ldMn0q1CV3vsg==" - primary-server = "vinyldns-bind9" + key = ${?DEFAULT_DNS_KEY_SECRET} + primary-server = "vinyldns-integration:19001" + primary-server = ${?DEFAULT_DNS_ADDRESS} } transfer-connection { name = "vinyldns." key-name = "vinyldns." + key-name = ${?DEFAULT_DNS_KEY_NAME} key = "nzisn+4G2ldMn0q1CV3vsg==" - primary-server = "vinyldns-bind9" + key = ${?DEFAULT_DNS_KEY_SECRET} + primary-server = "vinyldns-integration:19001" + primary-server = ${?DEFAULT_DNS_ADDRESS} } } ] diff --git a/build/docker/portal/Dockerfile b/build/docker/portal/Dockerfile index 62a0ebb28..fcdde8695 100644 --- a/build/docker/portal/Dockerfile +++ b/build/docker/portal/Dockerfile @@ -1,4 +1,4 @@ -FROM hseeberger/scala-sbt:11.0.8_1.3.13_2.11.12 as builder +FROM vinyldns/build:base-build-portal as builder ARG BRANCH=master ARG VINYLDNS_VERSION @@ -8,16 +8,6 @@ RUN git clone -b ${BRANCH} --single-branch --depth 1 https://github.com/vinyldns # The default jvmopts are huge, meant for running everything, use a paired down version COPY .jvmopts /vinyldns -# Needed for preparePortal -RUN apt-get update \ - && apt-get install -y \ - apt-transport-https \ - curl \ - gnupg \ - && curl -sL https://deb.nodesource.com/setup_12.x | bash - \ - && apt-get install -y nodejs \ - && npm install -g grunt-cli - RUN cd /vinyldns 
; sbt "set version in ThisBuild := \"${VINYLDNS_VERSION}\"" portal/preparePortal universal:packageZipTarball FROM adoptopenjdk/openjdk11:jdk-11.0.8_10-alpine diff --git a/build/docker/portal/application.conf b/build/docker/portal/application.conf index 823a01668..674215541 100644 --- a/build/docker/portal/application.conf +++ b/build/docker/portal/application.conf @@ -21,7 +21,8 @@ LDAP { securityAuthentication = "simple" # Note: The following assumes a purely docker setup, using container_name = vinyldns-ldap - providerUrl = "ldap://vinyldns-ldap:389" + providerUrl = "ldap://vinyldns-ldap:19004" + providerUrl = ${?LDAP_PROVIDER_URL} } # This is only needed if keeping vinyldns user store in sync with ldap (to auto lock out users who left your @@ -41,7 +42,9 @@ http.port = 9000 data-stores = ["mysql"] -portal.vinyldns.backend.url = "http://vinyldns-api:9000" +portal.vinyldns.backend.url = "http://vinyldns-integration:9000" +portal.vinyldns.backend.url = ${?API_URL} + # Note: The default mysql settings assume a local docker compose setup with mysql named vinyldns-mysql # follow the configuration guide to point to your mysql @@ -53,10 +56,14 @@ mysql { # assumes a docker or mysql instance running locally name = "vinyldns" driver = "org.mariadb.jdbc.Driver" - migration-url = "jdbc:mariadb://vinyldns-mysql:3306/?user=root&password=pass" - url = "jdbc:mariadb://vinyldns-mysql:3306/vinyldns?user=root&password=pass" + migration-url = "jdbc:mariadb://vinyldns-integration:19002/?user=root&password=pass" + migration-url = ${?JDBC_MIGRATION_URL} + url = "jdbc:mariadb://vinyldns-integration:19002/vinyldns?user=root&password=pass" + url = ${?JDBC_URL} user = "root" + user = ${?JDBC_USER} password = "pass" + password = ${?JDBC_PASSWORD} # see https://github.com/brettwooldridge/HikariCP connection-timeout-millis = 1000 idle-timeout = 10000 diff --git a/docker/.env b/docker/.env deleted file mode 100644 index 9837a2470..000000000 --- a/docker/.env +++ /dev/null @@ -1,17 +0,0 @@ 
-REST_PORT=9000 -# Do not use quotes around the environment variables. -MYSQL_ROOT_PASSWORD=pass -# This is required as mysql is currently locked down to localhost -MYSQL_ROOT_HOST=% -# Host URL for queue -QUEUE_HOST=vinyldns-elasticmq - -# portal settings -PORTAL_PORT=9001 -PLAY_HTTP_SECRET_KEY=change-this-for-prod -VINYLDNS_BACKEND_URL=http://vinyldns-api:9000 -SQS_ENDPOINT=http://vinyldns-localstack:19007 -MYSQL_ENDPOINT=vinyldns-mysql:3306 -USER_TABLE_NAME=users -USER_CHANGE_TABLE_NAME=userChange -TEST_LOGIN=true diff --git a/docker/.env.quickstart b/docker/.env.quickstart deleted file mode 100644 index 9837a2470..000000000 --- a/docker/.env.quickstart +++ /dev/null @@ -1,17 +0,0 @@ -REST_PORT=9000 -# Do not use quotes around the environment variables. -MYSQL_ROOT_PASSWORD=pass -# This is required as mysql is currently locked down to localhost -MYSQL_ROOT_HOST=% -# Host URL for queue -QUEUE_HOST=vinyldns-elasticmq - -# portal settings -PORTAL_PORT=9001 -PLAY_HTTP_SECRET_KEY=change-this-for-prod -VINYLDNS_BACKEND_URL=http://vinyldns-api:9000 -SQS_ENDPOINT=http://vinyldns-localstack:19007 -MYSQL_ENDPOINT=vinyldns-mysql:3306 -USER_TABLE_NAME=users -USER_CHANGE_TABLE_NAME=userChange -TEST_LOGIN=true diff --git a/docker/api/.dockerignore b/docker/api/.dockerignore deleted file mode 100644 index f4ed141ba..000000000 --- a/docker/api/.dockerignore +++ /dev/null @@ -1,5 +0,0 @@ -.DS_Store -.dockerignore -.git -.gitignore -classes \ No newline at end of file diff --git a/docker/api/Dockerfile b/docker/api/Dockerfile deleted file mode 100644 index 9a0f47df3..000000000 --- a/docker/api/Dockerfile +++ /dev/null @@ -1,17 +0,0 @@ -FROM adoptopenjdk/openjdk11:jdk-11.0.7_10-alpine - -RUN apk add --update --no-cache netcat-openbsd bash - -# install the jar onto the server, asserts this Dockerfile is copied to target/scala-2.12 after a build -COPY vinyldns.jar /app/vinyldns-server.jar -COPY run.sh /app/run.sh -RUN chmod a+x /app/run.sh - -COPY docker.conf /app/docker.conf - 
-EXPOSE 9000 - -# set the entry point for the container to start vinyl, specify the config resource -ENTRYPOINT ["/app/run.sh"] - - diff --git a/docker/api/docker.conf b/docker/api/docker.conf deleted file mode 100644 index 02d4ecdd9..000000000 --- a/docker/api/docker.conf +++ /dev/null @@ -1,333 +0,0 @@ -################################################################################################################ -# This configuration is only used by docker. Environment variables are required in order to start -# up a docker cluster appropriately, so most of the values are passed in here. Defaults assume a local docker compose -# for vinyldns running. -# SQS_ENDPOINT is the SQS endpoint -# SQS_QUEUE_NAME is the queue name for the SQS queue -# SQS_REGION is the service region where the SQS queue lives (e.g. us-east-1) -# AWS_ACCESS_KEY is the AWS access key -# AWS_SECRET_ACCESS_KEY is the AWS secret access key -# JDBC_MIGRATION_URL - the URL for migations in the SQL database -# JDBC_URL - the full URL to the SQL database -# JDBC_USER - the SQL database user -# JDBC_PASSWORD - the SQL database password -# DEFAULT_DNS_ADDRESS - the server (and port if not 53) of the default DNS server -# DEFAULT_DNS_KEY_NAME - the default key name used to connect to the default DNS server -# DEFAULT_DNS_KEY_SECRET - the default key secret used to connect to the default DNS server -################################################################################################################ -vinyldns { - - # configured backend providers - backend { - # Use "default" when dns backend legacy = true - # otherwise, use the id of one of the connections in any of your backends - default-backend-id = "default" - - # this is where we can save additional backends - backend-providers = [ - { - class-name = "vinyldns.route53.backend.Route53BackendProviderLoader" - settings = { - backends = [ - { - id = "r53", - access-key = "test", - access-key = ${?AWS_ACCESS_KEY_ID} - secret-key = "test", 
- secret-key = ${?AWS_SECRET_ACCESS_KEY}, - service-endpoint = "http://vinyldns-localstack:19009", - service-endpoint = ${?AWS_ROUTE53_ENDPOINT}, - signing-region = "us-east-1" - signing-region = ${?AWS_DEFAULT_REGION} - } - ] - } - }, - { - class-name = "vinyldns.api.backend.dns.DnsBackendProviderLoader" - settings = { - legacy = false - backends = [ - { - id = "default" - zone-connection = { - name = "vinyldns." - key-name = "vinyldns." - key-name = ${?DEFAULT_DNS_KEY_NAME} - key = "nzisn+4G2ldMn0q1CV3vsg==" - key = ${?DEFAULT_DNS_KEY_SECRET} - primary-server = "vinyldns-bind9" - primary-server = ${?DEFAULT_DNS_ADDRESS} - } - transfer-connection = { - name = "vinyldns." - key-name = "vinyldns." - key-name = ${?DEFAULT_DNS_KEY_NAME} - key = "nzisn+4G2ldMn0q1CV3vsg==" - key = ${?DEFAULT_DNS_KEY_SECRET} - primary-server = "vinyldns-bind9" - primary-server = ${?DEFAULT_DNS_ADDRESS} - }, - tsig-usage = "always" - }, - { - id = "func-test-backend" - zone-connection = { - name = "vinyldns." - key-name = "vinyldns." - key-name = ${?DEFAULT_DNS_KEY_NAME} - key = "nzisn+4G2ldMn0q1CV3vsg==" - key = ${?DEFAULT_DNS_KEY_SECRET} - primary-server = "vinyldns-bind9" - primary-server = ${?DEFAULT_DNS_ADDRESS} - } - transfer-connection = { - name = "vinyldns." - key-name = "vinyldns." - key-name = ${?DEFAULT_DNS_KEY_NAME} - key = "nzisn+4G2ldMn0q1CV3vsg==" - key = ${?DEFAULT_DNS_KEY_SECRET} - primary-server = "vinyldns-bind9" - primary-server = ${?DEFAULT_DNS_ADDRESS} - }, - tsig-usage = "always" - } - ] - } - } - ] - } - - queue { - class-name = "vinyldns.sqs.queue.SqsMessageQueueProvider" - - messages-per-poll = 10 - polling-interval = 250.millis - - settings { - # AWS access key and secret. - access-key = "test" - access-key = ${?AWS_ACCESS_KEY} - secret-key = "test" - secret-key = ${?AWS_SECRET_ACCESS_KEY} - - # Regional endpoint to make your requests (eg. 'us-west-2', 'us-east-1', etc.). This is the region where your queue is housed. 
- signing-region = "us-east-1" - signing-region = ${?SQS_REGION} - - # Endpoint to access queue - service-endpoint = "http://vinyldns-localstack:19007/" - service-endpoint = ${?SQS_ENDPOINT} - - # Queue name. Should be used in conjunction with service endpoint, rather than using a queue url which is subject to change. - queue-name = "vinyldns" - queue-name = ${?SQS_QUEUE_NAME} - } - } - - rest { - host = "0.0.0.0" - port = 9000 - } - - sync-delay = 10000 - - approved-name-servers = [ - "172.17.42.1.", - "ns1.parent.com." - "ns1.parent.com1." - "ns1.parent.com2." - "ns1.parent.com3." - "ns1.parent.com4." - ] - - crypto { - type = "vinyldns.core.crypto.NoOpCrypto" - } - - data-stores = ["mysql"] - - mysql { - settings { - # JDBC Settings, these are all values in scalikejdbc-config, not our own - # these must be overridden to use MYSQL for production use - # assumes a docker or mysql instance running locally - name = "vinyldns" - driver = "org.mariadb.jdbc.Driver" - migration-url = "jdbc:mariadb://vinyldns-mysql:3306/?user=root&password=pass" - migration-url = ${?JDBC_MIGRATION_URL} - url = "jdbc:mariadb://vinyldns-mysql:3306/vinyldns?user=root&password=pass" - url = ${?JDBC_URL} - user = "root" - user = ${?JDBC_USER} - password = "pass" - password = ${?JDBC_PASSWORD} - # see https://github.com/brettwooldridge/HikariCP - connection-timeout-millis = 1000 - idle-timeout = 10000 - max-lifetime = 600000 - maximum-pool-size = 20 - minimum-idle = 20 - register-mbeans = true - } - # Repositories that use this data store are listed here - repositories { - zone { - # no additional settings for now - } - batch-change { - # no additional settings for now - } - user { - - } - record-set { - - } - group { - - } - membership { - - } - group-change { - - } - zone-change { - - } - record-change { - - } - } - } - - backends = [] - - batch-change-limit = 1000 - - # FQDNs / IPs that cannot be modified via VinylDNS - # regex-list used for all record types except PTR - # ip-list used 
exclusively for PTR records - high-value-domains = { - regex-list = [ - "high-value-domain.*" # for testing - ] - ip-list = [ - # using reverse zones in the vinyldns/bind9 docker image for testing - "192.0.2.252", - "192.0.2.253", - "fd69:27cc:fe91:0:0:0:0:ffff", - "fd69:27cc:fe91:0:0:0:ffff:0" - ] - } - - # FQDNs / IPs / zone names that require manual review upon submission in batch change interface - # domain-list used for all record types except PTR - # ip-list used exclusively for PTR records - manual-review-domains = { - domain-list = [ - "needs-review.*" - ] - ip-list = [ - "192.0.1.254", - "192.0.1.255", - "192.0.2.254", - "192.0.2.255", - "192.0.3.254", - "192.0.3.255", - "192.0.4.254", - "192.0.4.255", - "fd69:27cc:fe91:0:0:0:ffff:1", - "fd69:27cc:fe91:0:0:0:ffff:2", - "fd69:27cc:fe92:0:0:0:ffff:1", - "fd69:27cc:fe92:0:0:0:ffff:2", - "fd69:27cc:fe93:0:0:0:ffff:1", - "fd69:27cc:fe93:0:0:0:ffff:2", - "fd69:27cc:fe94:0:0:0:ffff:1", - "fd69:27cc:fe94:0:0:0:ffff:2" - ] - zone-name-list = [ - "zone.requires.review." - "zone.requires.review1." - "zone.requires.review2." - "zone.requires.review3." - "zone.requires.review4." 
- ] - } - - # FQDNs / IPs that cannot be modified via VinylDNS - # regex-list used for all record types except PTR - # ip-list used exclusively for PTR records - high-value-domains = { - regex-list = [ - "high-value-domain.*" # for testing - ] - ip-list = [ - # using reverse zones in the vinyldns/bind9 docker image for testing - "192.0.1.252", - "192.0.1.253", - "192.0.2.252", - "192.0.2.253", - "192.0.3.252", - "192.0.3.253", - "192.0.4.252", - "192.0.4.253", - "fd69:27cc:fe91:0:0:0:0:ffff", - "fd69:27cc:fe91:0:0:0:ffff:0", - "fd69:27cc:fe92:0:0:0:0:ffff", - "fd69:27cc:fe92:0:0:0:ffff:0", - "fd69:27cc:fe93:0:0:0:0:ffff", - "fd69:27cc:fe93:0:0:0:ffff:0", - "fd69:27cc:fe94:0:0:0:0:ffff", - "fd69:27cc:fe94:0:0:0:ffff:0" - ] - } - - # types of unowned records that users can access in shared zones - shared-approved-types = ["A", "AAAA", "CNAME", "PTR", "TXT"] - - manual-batch-review-enabled = true - - scheduled-changes-enabled = true - - multi-record-batch-change-enabled = true - - global-acl-rules = [ - { - group-ids: ["global-acl-group-id"], - fqdn-regex-list: [".*shared[0-9]{1}."] - }, - { - group-ids: ["another-global-acl-group"], - fqdn-regex-list: [".*ok[0-9]{1}."] - } - ] -} - -akka { - loglevel = "INFO" - loggers = ["akka.event.slf4j.Slf4jLogger"] - logging-filter = "akka.event.slf4j.Slf4jLoggingFilter" - logger-startup-timeout = 30s - - actor { - provider = "akka.actor.LocalActorRefProvider" - } -} - -akka.http { - server { - # The time period within which the TCP binding process must be completed. - # Set to `infinite` to disable. 
- bind-timeout = 5s - - # Show verbose error messages back to the client - verbose-error-messages = on - } - - parsing { - # Spray doesn't like the AWS4 headers - illegal-header-warnings = on - } -} diff --git a/docker/api/logback.xml b/docker/api/logback.xml deleted file mode 100644 index 4146a79c5..000000000 --- a/docker/api/logback.xml +++ /dev/null @@ -1,12 +0,0 @@ - - - - - %d [test] %-5p | \(%logger{4}:%line\) | %msg %n - - - - - - - diff --git a/docker/api/run.sh b/docker/api/run.sh deleted file mode 100755 index 11e0a80eb..000000000 --- a/docker/api/run.sh +++ /dev/null @@ -1,41 +0,0 @@ -#!/usr/bin/env bash - -# gets the docker-ized ip address, sets it to an environment variable -export APP_HOST=`ip addr show eth0 | grep 'inet ' | awk '{print $2}' | cut -f1 -d'/'` - -export MYSQL_ADDRESS="vinyldns-mysql" -export MYSQL_PORT=3306 -export JDBC_USER=root -export JDBC_PASSWORD=pass -export JDBC_URL="jdbc:mariadb://${MYSQL_ADDRESS}:${MYSQL_PORT}/vinyldns?user=${JDBC_USER}&password=${JDBC_PASSWORD}" -export JDBC_MIGRATION_URL="jdbc:mariadb://${MYSQL_ADDRESS}:${MYSQL_PORT}/?user=${JDBC_USER}&password=${JDBC_PASSWORD}" - -# wait until mysql is ready... -echo 'Waiting for MYSQL to be ready...' -DATA="" -RETRY=40 -SLEEP_DURATION=1 -while [ "$RETRY" -gt 0 ] -do - DATA=$(nc -vzw1 ${MYSQL_ADDRESS} ${MYSQL_PORT}) - if [ $? -eq 0 ] - then - break - else - echo "Retrying" >&2 - - let RETRY-=1 - sleep "$SLEEP_DURATION" - - if [ "$RETRY" -eq 0 ] - then - echo "Exceeded retries waiting for MYSQL to be ready, failing" - return 1 - fi - fi -done - -echo "Starting up Vinyl..." 
-sleep 2 -java -Djava.net.preferIPv4Stack=true -Dconfig.file=/app/docker.conf -Dakka.loglevel=INFO -Dlogback.configurationFile=/app/logback.xml -jar /app/vinyldns-server.jar vinyldns.api.Boot - diff --git a/docker/docker-compose-quick-start.yml b/docker/docker-compose-quick-start.yml deleted file mode 100644 index 39fcefda0..000000000 --- a/docker/docker-compose-quick-start.yml +++ /dev/null @@ -1,68 +0,0 @@ -version: "3.0" -services: - mysql: - image: "mysql:5.7" - env_file: - .env.quickstart - container_name: "vinyldns-mysql" - ports: - - "19002:3306" - - bind9: - image: "vinyldns/bind9:0.0.5" - env_file: - .env.quickstart - container_name: "vinyldns-bind9" - ports: - - "19001:53/udp" - - "19001:53" - volumes: - - ./bind9/etc:/var/cache/bind/config - - ./bind9/zones:/var/cache/bind/zones - - localstack: - image: localstack/localstack:0.10.4 - container_name: "vinyldns-localstack" - ports: - - "19006:19006" - - "19007:19007" - - "19009:19009" - environment: - - SERVICES=sns:19006,sqs:19007,route53:19009 - - START_WEB=0 - - HOSTNAME_EXTERNAL=vinyldns-localstack - - ldap: - image: rroemhild/test-openldap - container_name: "vinyldns-ldap" - ports: - - "19008:389" - - api: - image: "vinyldns/api:${VINYLDNS_VERSION}" - env_file: - .env.quickstart - container_name: "vinyldns-api" - ports: - - "${REST_PORT}:${REST_PORT}" - volumes: - - ./api/docker.conf:/opt/docker/conf/application.conf - - ./api/logback.xml:/opt/docker/conf/logback.xml - depends_on: - - mysql - - bind9 - - localstack - - portal: - image: "vinyldns/portal:${VINYLDNS_VERSION}" - env_file: - .env.quickstart - ports: - - "${PORTAL_PORT}:${PORTAL_PORT}" - container_name: "vinyldns-portal" - volumes: - - ./portal/application.ini:/opt/docker/conf/application.ini - - ./portal/application.conf:/opt/docker/conf/application.conf - depends_on: - - api - - ldap diff --git a/docker/docker-compose.yml b/docker/docker-compose.yml deleted file mode 100644 index e11a52fba..000000000 --- a/docker/docker-compose.yml +++ 
/dev/null @@ -1,41 +0,0 @@ -version: "3.0" -services: - mysql: - image: mysql:5.7 - env_file: - .env - ports: - - "19002:3306" - - bind9: - image: vinyldns/bind9:0.0.5 - env_file: - .env - ports: - - "19001:53/udp" - - "19001:53" - volumes: - - ./bind9/etc:/var/cache/bind/config - - ./bind9/zones:/var/cache/bind/zones - - localstack: - image: localstack/localstack:0.10.4 - ports: - - "19006:19006" - - "19007:19007" - - "19009:19009" - environment: - - SERVICES=sns:19006,sqs:19007,route53:19009 - - START_WEB=0 - - mail: - image: flaviovs/mock-smtp:0.0.2 - ports: - - "19025:25" - volumes: - - ./email:/var/lib/mock-smtp - - ldap: - image: rroemhild/test-openldap:latest - ports: - - "19008:389" diff --git a/modules/api/src/it/scala/vinyldns/api/domain/zone/ZoneViewLoaderIntegrationSpec.scala b/modules/api/src/it/scala/vinyldns/api/domain/zone/ZoneViewLoaderIntegrationSpec.scala index 281939398..c04795003 100644 --- a/modules/api/src/it/scala/vinyldns/api/domain/zone/ZoneViewLoaderIntegrationSpec.scala +++ b/modules/api/src/it/scala/vinyldns/api/domain/zone/ZoneViewLoaderIntegrationSpec.scala @@ -1,4 +1,3 @@ - /* * Copyright 2018 Comcast Cable Communications Management, LLC * diff --git a/modules/mysql/src/main/scala/vinyldns/mysql/MySqlConnector.scala b/modules/mysql/src/main/scala/vinyldns/mysql/MySqlConnector.scala index d0bc286e4..25c25b56b 100644 --- a/modules/mysql/src/main/scala/vinyldns/mysql/MySqlConnector.scala +++ b/modules/mysql/src/main/scala/vinyldns/mysql/MySqlConnector.scala @@ -22,6 +22,7 @@ import org.flywaydb.core.Flyway import org.slf4j.LoggerFactory import scala.collection.JavaConverters._ +import scala.util.{Failure, Success, Try} object MySqlConnector { @@ -44,20 +45,21 @@ object MySqlConnector { getDataSource(migrationConnectionSettings).map { migrationDataSource => logger.info("Running migrations to ready the databases") - val migration = new Flyway() - migration.setDataSource(migrationDataSource) + val placeholders = Map("dbName" -> 
config.name) + val migration = Flyway + .configure() + .dataSource(migrationDataSource) + .placeholders(placeholders.asJava) + .schemas(config.name) + // flyway changed the default schema table name in v5.0.0 // this allows to revert to an old naming convention if needed config.migrationSchemaTable.foreach { tableName => - migration.setTable(tableName) + migration.table(tableName) } - val placeholders = Map("dbName" -> config.name) - migration.setPlaceholders(placeholders.asJava) - migration.setSchemas(config.name) - // Runs flyway migrations - migration.migrate() + migration.load().migrate() logger.info("migrations complete") } } @@ -85,6 +87,20 @@ object MySqlConnector { case (k, v) => dsConfig.addDataSourceProperty(k, v) } - new HikariDataSource(dsConfig) + def retry[T](times: Int, delayMs: Int)(op: => T) = + Iterator + .range(0, times) + .map(_ => Try(op)) + .flatMap { + case Success(t) => Some(t) + case Failure(_) => + logger.warn("failed to startup database connection, retrying..") + Thread.sleep(delayMs) + None + } + .toSeq + .head + + retry(60, 1000) { new HikariDataSource(dsConfig) } } } diff --git a/modules/portal/README.md b/modules/portal/README.md index c87d05f80..faa7fb02a 100644 --- a/modules/portal/README.md +++ b/modules/portal/README.md @@ -1,4 +1,4 @@ -# Vinyl Portal +# VinylDNS Portal Supplies a UI for and offers authentication into Vinyl, a DNSaaS offering. # Running Unit Tests @@ -7,18 +7,7 @@ First, startup sbt: `sbt`. Next, you can run all tests by simply running `test`, or you can run an individual test by running `test-only *MySpec` # Running Frontend Tests -The frontend code is tested using Jasmine, spec files are stored in the same directory as the angular js files. -For example, the public/lib/controllers has both the controller files and the specs for those controllers. 
To run -these tests the command is `grunt unit` - -# Running Functional Tests -As of now, we have a functional testing harness that gets things set up, and a single test which tests if the login page -loads successfully. Run the following commands from the vinyl-portal folder as well, we are not using a VM for testing -at this time. - -`./run_all_tests.sh` will run the unit tests (`sbt clean coverage test`), and then set up and run the func tests - -`./run_func_tests.sh` will only set up and run the func tests +The front end tests can be run from the `test/portal/functional` directory by simply running `make`. # Building Locally @@ -42,8 +31,7 @@ available so that you can start the portal locally and test. `sbt -Djavax.net.ssl.trustStore="./private/trustStore.jks"` # Updating the trustStore Certificates -Sometime on or before May 05, 2020 the certificates securing the AD servers will need to be renewed and updated. -When this happens or some other event causes the LDAP lookup to fail because of SSL certificate issues, follow +When some event causes the LDAP lookup to fail because of SSL certificate issues, follow the following steps to update the trustStore with the new certificates. - Get the new certificate with `openssl s_client -connect <host>:<port>`. This will display the certificate on the screen. - Copy everything from `-----BEGIN CERTIFICATE-----` to `-----END CERTIFICATE-----` including the begin and end markers to the clipboard.
- -# Credits - -* [logback-classic](https://github.com/qos-ch/logback) - [Eclipse Public License 1.0](https://www.eclipse.org/legal/epl-v10.html) -* [logback-core](https://github.com/qos-ch/logback) - [Eclipse Public License 1.0](https://www.eclipse.org/legal/epl-v10.html) -* [htmlunit](http://htmlunit.sourceforge.net/) - * [htmlunit-core-js](https://github.com/HtmlUnit/htmlunit-core-js) - [Mozilla Public License v2.0](https://www.mozilla.org/en-US/MPL/2.0/) diff --git a/modules/portal/dist/wait-for-dependencies.sh b/modules/portal/dist/wait-for-dependencies.sh deleted file mode 100755 index 4222ff00c..000000000 --- a/modules/portal/dist/wait-for-dependencies.sh +++ /dev/null @@ -1,32 +0,0 @@ -#!/usr/bin/env bash - -# allow skipping with env var -if [ "$SKIP_MYSQL_WAIT" -eq "1" ]; then - exit 0 -fi - -# the mysql address, default to a local docker setup -MYSQL_ADDRESS=${MYSQL_ADDRESS:-vinyldns-mysql} -MYSQL_PORT=${MYSQL_PORT:-3306} -echo "Waiting for MYSQL to be ready on ${MYSQL_ADDRESS}:${MYSQL_PORT}" -DATA="" -RETRY=30 -while [ "$RETRY" -gt 0 ] -do - DATA=$(nc -vzw1 "$MYSQL_ADDRESS" "$MYSQL_PORT") - if [ $? 
-eq 0 ] - then - break - else - echo "Retrying" >&2 - - let RETRY-=1 - sleep .5 - - if [ "$RETRY" -eq 0 ] - then - echo "Exceeded retries waiting for MYSQL to be ready on ${MYSQL_ADDRESS}:${MYSQL_PORT}, failing" - return 1 - fi - fi -done diff --git a/modules/portal/prepare-portal.sh b/modules/portal/prepare-portal.sh index 757d3e114..83fc6ee9f 100755 --- a/modules/portal/prepare-portal.sh +++ b/modules/portal/prepare-portal.sh @@ -7,6 +7,6 @@ npm install -f npm install grunt -g -f grunt default -$DIR/../../bin/add-license-headers.sh -d=$DIR/public/lib -f=js +$DIR/../../utils/add-license-headers.sh -d=$DIR/public/lib -f=js cd - diff --git a/quickstart/.env b/quickstart/.env new file mode 100644 index 000000000..f58d13e4b --- /dev/null +++ b/quickstart/.env @@ -0,0 +1,17 @@ +REST_PORT=9000 + +# portal settings +PORTAL_PORT=9001 +PLAY_HTTP_SECRET_KEY=change-this-for-prod +VINYLDNS_BACKEND_URL=http://vinyldns-integration:9000 + +SQS_ENDPOINT=http://vinyldns-integration:19003 +MYSQL_ENDPOINT=vinyldns-integration:19002 +TEST_LOGIN=true + +JDBC_DRIVER=org.mariadb.jdbc.Driver +JDBC_URL=jdbc:mariadb://vinyldns-integration:19002/vinyldns?user=root&password=pass +JDBC_MIGRATION_URL=jdbc:mariadb://vinyldns-integration:19002/?user=root&password=pass +JDBC_USER=root +JDBC_PASSWORD=pass +DEFAULT_DNS_ADDRESS=127.0.0.1:19001 diff --git a/docker/bind9/README.md b/quickstart/bind9/README.md similarity index 100% rename from docker/bind9/README.md rename to quickstart/bind9/README.md diff --git a/docker/bind9/etc/_template/named.partition.conf b/quickstart/bind9/etc/_template/named.partition.conf similarity index 100% rename from docker/bind9/etc/_template/named.partition.conf rename to quickstart/bind9/etc/_template/named.partition.conf diff --git a/docker/bind9/etc/named.conf.local b/quickstart/bind9/etc/named.conf.local old mode 100755 new mode 100644 similarity index 100% rename from docker/bind9/etc/named.conf.local rename to quickstart/bind9/etc/named.conf.local diff --git 
a/docker/bind9/etc/named.conf.partition1 b/quickstart/bind9/etc/named.conf.partition1 similarity index 100% rename from docker/bind9/etc/named.conf.partition1 rename to quickstart/bind9/etc/named.conf.partition1 diff --git a/docker/bind9/etc/named.conf.partition2 b/quickstart/bind9/etc/named.conf.partition2 similarity index 100% rename from docker/bind9/etc/named.conf.partition2 rename to quickstart/bind9/etc/named.conf.partition2 diff --git a/docker/bind9/etc/named.conf.partition3 b/quickstart/bind9/etc/named.conf.partition3 similarity index 100% rename from docker/bind9/etc/named.conf.partition3 rename to quickstart/bind9/etc/named.conf.partition3 diff --git a/docker/bind9/etc/named.conf.partition4 b/quickstart/bind9/etc/named.conf.partition4 similarity index 100% rename from docker/bind9/etc/named.conf.partition4 rename to quickstart/bind9/etc/named.conf.partition4 diff --git a/docker/bind9/zones/_template/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa b/quickstart/bind9/zones/_template/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa similarity index 100% rename from docker/bind9/zones/_template/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa rename to quickstart/bind9/zones/_template/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa diff --git a/docker/bind9/zones/_template/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa b/quickstart/bind9/zones/_template/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa similarity index 100% rename from docker/bind9/zones/_template/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa rename to quickstart/bind9/zones/_template/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa diff --git a/docker/bind9/zones/_template/10.10.in-addr.arpa b/quickstart/bind9/zones/_template/10.10.in-addr.arpa similarity index 100% rename from docker/bind9/zones/_template/10.10.in-addr.arpa rename to quickstart/bind9/zones/_template/10.10.in-addr.arpa diff --git a/docker/bind9/zones/_template/192^30.2.0.192.in-addr.arpa b/quickstart/bind9/zones/_template/192^30.2.0.192.in-addr.arpa similarity index 100% rename from 
docker/bind9/zones/_template/192^30.2.0.192.in-addr.arpa rename to quickstart/bind9/zones/_template/192^30.2.0.192.in-addr.arpa diff --git a/docker/bind9/zones/_template/2.0.192.in-addr.arpa b/quickstart/bind9/zones/_template/2.0.192.in-addr.arpa similarity index 100% rename from docker/bind9/zones/_template/2.0.192.in-addr.arpa rename to quickstart/bind9/zones/_template/2.0.192.in-addr.arpa diff --git a/docker/bind9/zones/_template/child.parent.com.hosts b/quickstart/bind9/zones/_template/child.parent.com.hosts similarity index 100% rename from docker/bind9/zones/_template/child.parent.com.hosts rename to quickstart/bind9/zones/_template/child.parent.com.hosts diff --git a/docker/bind9/zones/_template/dskey.example.com.hosts b/quickstart/bind9/zones/_template/dskey.example.com.hosts similarity index 100% rename from docker/bind9/zones/_template/dskey.example.com.hosts rename to quickstart/bind9/zones/_template/dskey.example.com.hosts diff --git a/docker/bind9/zones/_template/dummy.hosts b/quickstart/bind9/zones/_template/dummy.hosts similarity index 100% rename from docker/bind9/zones/_template/dummy.hosts rename to quickstart/bind9/zones/_template/dummy.hosts diff --git a/docker/bind9/zones/_template/example.com.hosts b/quickstart/bind9/zones/_template/example.com.hosts similarity index 100% rename from docker/bind9/zones/_template/example.com.hosts rename to quickstart/bind9/zones/_template/example.com.hosts diff --git a/docker/bind9/zones/_template/invalid-zone.hosts b/quickstart/bind9/zones/_template/invalid-zone.hosts similarity index 100% rename from docker/bind9/zones/_template/invalid-zone.hosts rename to quickstart/bind9/zones/_template/invalid-zone.hosts diff --git a/docker/bind9/zones/_template/list-records.hosts b/quickstart/bind9/zones/_template/list-records.hosts similarity index 100% rename from docker/bind9/zones/_template/list-records.hosts rename to quickstart/bind9/zones/_template/list-records.hosts diff --git 
a/docker/bind9/zones/_template/list-zones-test-searched-1.hosts b/quickstart/bind9/zones/_template/list-zones-test-searched-1.hosts similarity index 100% rename from docker/bind9/zones/_template/list-zones-test-searched-1.hosts rename to quickstart/bind9/zones/_template/list-zones-test-searched-1.hosts diff --git a/docker/bind9/zones/_template/list-zones-test-searched-2.hosts b/quickstart/bind9/zones/_template/list-zones-test-searched-2.hosts similarity index 100% rename from docker/bind9/zones/_template/list-zones-test-searched-2.hosts rename to quickstart/bind9/zones/_template/list-zones-test-searched-2.hosts diff --git a/docker/bind9/zones/_template/list-zones-test-searched-3.hosts b/quickstart/bind9/zones/_template/list-zones-test-searched-3.hosts similarity index 100% rename from docker/bind9/zones/_template/list-zones-test-searched-3.hosts rename to quickstart/bind9/zones/_template/list-zones-test-searched-3.hosts diff --git a/docker/bind9/zones/_template/list-zones-test-unfiltered-1.hosts b/quickstart/bind9/zones/_template/list-zones-test-unfiltered-1.hosts similarity index 100% rename from docker/bind9/zones/_template/list-zones-test-unfiltered-1.hosts rename to quickstart/bind9/zones/_template/list-zones-test-unfiltered-1.hosts diff --git a/docker/bind9/zones/_template/list-zones-test-unfiltered-2.hosts b/quickstart/bind9/zones/_template/list-zones-test-unfiltered-2.hosts similarity index 100% rename from docker/bind9/zones/_template/list-zones-test-unfiltered-2.hosts rename to quickstart/bind9/zones/_template/list-zones-test-unfiltered-2.hosts diff --git a/docker/bind9/zones/_template/non.test.shared.hosts b/quickstart/bind9/zones/_template/non.test.shared.hosts similarity index 100% rename from docker/bind9/zones/_template/non.test.shared.hosts rename to quickstart/bind9/zones/_template/non.test.shared.hosts diff --git a/docker/bind9/zones/_template/not.loaded.hosts b/quickstart/bind9/zones/_template/not.loaded.hosts similarity index 100% rename from 
docker/bind9/zones/_template/not.loaded.hosts rename to quickstart/bind9/zones/_template/not.loaded.hosts diff --git a/docker/bind9/zones/_template/ok.hosts b/quickstart/bind9/zones/_template/ok.hosts similarity index 100% rename from docker/bind9/zones/_template/ok.hosts rename to quickstart/bind9/zones/_template/ok.hosts diff --git a/docker/bind9/zones/_template/old-shared.hosts b/quickstart/bind9/zones/_template/old-shared.hosts similarity index 100% rename from docker/bind9/zones/_template/old-shared.hosts rename to quickstart/bind9/zones/_template/old-shared.hosts diff --git a/docker/bind9/zones/_template/old-vinyldns2.hosts b/quickstart/bind9/zones/_template/old-vinyldns2.hosts similarity index 100% rename from docker/bind9/zones/_template/old-vinyldns2.hosts rename to quickstart/bind9/zones/_template/old-vinyldns2.hosts diff --git a/docker/bind9/zones/_template/old-vinyldns3.hosts b/quickstart/bind9/zones/_template/old-vinyldns3.hosts similarity index 100% rename from docker/bind9/zones/_template/old-vinyldns3.hosts rename to quickstart/bind9/zones/_template/old-vinyldns3.hosts diff --git a/docker/bind9/zones/_template/one-time-shared.hosts b/quickstart/bind9/zones/_template/one-time-shared.hosts similarity index 100% rename from docker/bind9/zones/_template/one-time-shared.hosts rename to quickstart/bind9/zones/_template/one-time-shared.hosts diff --git a/docker/bind9/zones/_template/one-time.hosts b/quickstart/bind9/zones/_template/one-time.hosts similarity index 100% rename from docker/bind9/zones/_template/one-time.hosts rename to quickstart/bind9/zones/_template/one-time.hosts diff --git a/docker/bind9/zones/_template/open.hosts b/quickstart/bind9/zones/_template/open.hosts similarity index 100% rename from docker/bind9/zones/_template/open.hosts rename to quickstart/bind9/zones/_template/open.hosts diff --git a/docker/bind9/zones/_template/parent.com.hosts b/quickstart/bind9/zones/_template/parent.com.hosts similarity index 100% rename from 
docker/bind9/zones/_template/parent.com.hosts rename to quickstart/bind9/zones/_template/parent.com.hosts diff --git a/docker/bind9/zones/_template/shared.hosts b/quickstart/bind9/zones/_template/shared.hosts similarity index 100% rename from docker/bind9/zones/_template/shared.hosts rename to quickstart/bind9/zones/_template/shared.hosts diff --git a/docker/bind9/zones/_template/sync-test.hosts b/quickstart/bind9/zones/_template/sync-test.hosts similarity index 100% rename from docker/bind9/zones/_template/sync-test.hosts rename to quickstart/bind9/zones/_template/sync-test.hosts diff --git a/docker/bind9/zones/_template/system-test-history.hosts b/quickstart/bind9/zones/_template/system-test-history.hosts similarity index 100% rename from docker/bind9/zones/_template/system-test-history.hosts rename to quickstart/bind9/zones/_template/system-test-history.hosts diff --git a/docker/bind9/zones/_template/system-test.hosts b/quickstart/bind9/zones/_template/system-test.hosts similarity index 100% rename from docker/bind9/zones/_template/system-test.hosts rename to quickstart/bind9/zones/_template/system-test.hosts diff --git a/docker/bind9/zones/_template/vinyldns.hosts b/quickstart/bind9/zones/_template/vinyldns.hosts similarity index 100% rename from docker/bind9/zones/_template/vinyldns.hosts rename to quickstart/bind9/zones/_template/vinyldns.hosts diff --git a/docker/bind9/zones/_template/zone.requires.review.hosts b/quickstart/bind9/zones/_template/zone.requires.review.hosts similarity index 100% rename from docker/bind9/zones/_template/zone.requires.review.hosts rename to quickstart/bind9/zones/_template/zone.requires.review.hosts diff --git a/docker/bind9/zones/partition1/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa b/quickstart/bind9/zones/partition1/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa similarity index 100% rename from docker/bind9/zones/partition1/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa rename to 
quickstart/bind9/zones/partition1/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa diff --git a/docker/bind9/zones/partition1/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa b/quickstart/bind9/zones/partition1/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa similarity index 100% rename from docker/bind9/zones/partition1/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa rename to quickstart/bind9/zones/partition1/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa diff --git a/docker/bind9/zones/partition1/10.10.in-addr.arpa b/quickstart/bind9/zones/partition1/10.10.in-addr.arpa similarity index 100% rename from docker/bind9/zones/partition1/10.10.in-addr.arpa rename to quickstart/bind9/zones/partition1/10.10.in-addr.arpa diff --git a/docker/bind9/zones/partition1/192^30.2.0.192.in-addr.arpa b/quickstart/bind9/zones/partition1/192^30.2.0.192.in-addr.arpa similarity index 100% rename from docker/bind9/zones/partition1/192^30.2.0.192.in-addr.arpa rename to quickstart/bind9/zones/partition1/192^30.2.0.192.in-addr.arpa diff --git a/docker/bind9/zones/partition1/2.0.192.in-addr.arpa b/quickstart/bind9/zones/partition1/2.0.192.in-addr.arpa similarity index 100% rename from docker/bind9/zones/partition1/2.0.192.in-addr.arpa rename to quickstart/bind9/zones/partition1/2.0.192.in-addr.arpa diff --git a/docker/bind9/zones/partition1/child.parent.com.hosts b/quickstart/bind9/zones/partition1/child.parent.com.hosts similarity index 100% rename from docker/bind9/zones/partition1/child.parent.com.hosts rename to quickstart/bind9/zones/partition1/child.parent.com.hosts diff --git a/docker/bind9/zones/partition1/dskey.example.com.hosts b/quickstart/bind9/zones/partition1/dskey.example.com.hosts similarity index 100% rename from docker/bind9/zones/partition1/dskey.example.com.hosts rename to quickstart/bind9/zones/partition1/dskey.example.com.hosts diff --git a/docker/bind9/zones/partition1/dummy.hosts b/quickstart/bind9/zones/partition1/dummy.hosts similarity index 100% rename from docker/bind9/zones/partition1/dummy.hosts rename to 
quickstart/bind9/zones/partition1/dummy.hosts diff --git a/docker/bind9/zones/partition1/example.com.hosts b/quickstart/bind9/zones/partition1/example.com.hosts similarity index 100% rename from docker/bind9/zones/partition1/example.com.hosts rename to quickstart/bind9/zones/partition1/example.com.hosts diff --git a/docker/bind9/zones/partition1/invalid-zone.hosts b/quickstart/bind9/zones/partition1/invalid-zone.hosts similarity index 100% rename from docker/bind9/zones/partition1/invalid-zone.hosts rename to quickstart/bind9/zones/partition1/invalid-zone.hosts diff --git a/docker/bind9/zones/partition1/list-records.hosts b/quickstart/bind9/zones/partition1/list-records.hosts similarity index 100% rename from docker/bind9/zones/partition1/list-records.hosts rename to quickstart/bind9/zones/partition1/list-records.hosts diff --git a/docker/bind9/zones/partition1/list-zones-test-searched-1.hosts b/quickstart/bind9/zones/partition1/list-zones-test-searched-1.hosts similarity index 100% rename from docker/bind9/zones/partition1/list-zones-test-searched-1.hosts rename to quickstart/bind9/zones/partition1/list-zones-test-searched-1.hosts diff --git a/docker/bind9/zones/partition1/list-zones-test-searched-2.hosts b/quickstart/bind9/zones/partition1/list-zones-test-searched-2.hosts similarity index 100% rename from docker/bind9/zones/partition1/list-zones-test-searched-2.hosts rename to quickstart/bind9/zones/partition1/list-zones-test-searched-2.hosts diff --git a/docker/bind9/zones/partition1/list-zones-test-searched-3.hosts b/quickstart/bind9/zones/partition1/list-zones-test-searched-3.hosts similarity index 100% rename from docker/bind9/zones/partition1/list-zones-test-searched-3.hosts rename to quickstart/bind9/zones/partition1/list-zones-test-searched-3.hosts diff --git a/docker/bind9/zones/partition1/list-zones-test-unfiltered-1.hosts b/quickstart/bind9/zones/partition1/list-zones-test-unfiltered-1.hosts similarity index 100% rename from 
docker/bind9/zones/partition1/list-zones-test-unfiltered-1.hosts rename to quickstart/bind9/zones/partition1/list-zones-test-unfiltered-1.hosts diff --git a/docker/bind9/zones/partition1/list-zones-test-unfiltered-2.hosts b/quickstart/bind9/zones/partition1/list-zones-test-unfiltered-2.hosts similarity index 100% rename from docker/bind9/zones/partition1/list-zones-test-unfiltered-2.hosts rename to quickstart/bind9/zones/partition1/list-zones-test-unfiltered-2.hosts diff --git a/docker/bind9/zones/partition1/non.test.shared.hosts b/quickstart/bind9/zones/partition1/non.test.shared.hosts similarity index 100% rename from docker/bind9/zones/partition1/non.test.shared.hosts rename to quickstart/bind9/zones/partition1/non.test.shared.hosts diff --git a/docker/bind9/zones/partition1/not.loaded.hosts b/quickstart/bind9/zones/partition1/not.loaded.hosts similarity index 100% rename from docker/bind9/zones/partition1/not.loaded.hosts rename to quickstart/bind9/zones/partition1/not.loaded.hosts diff --git a/docker/bind9/zones/partition1/ok.hosts b/quickstart/bind9/zones/partition1/ok.hosts similarity index 100% rename from docker/bind9/zones/partition1/ok.hosts rename to quickstart/bind9/zones/partition1/ok.hosts diff --git a/docker/bind9/zones/partition1/old-shared.hosts b/quickstart/bind9/zones/partition1/old-shared.hosts similarity index 100% rename from docker/bind9/zones/partition1/old-shared.hosts rename to quickstart/bind9/zones/partition1/old-shared.hosts diff --git a/docker/bind9/zones/partition1/old-vinyldns2.hosts b/quickstart/bind9/zones/partition1/old-vinyldns2.hosts similarity index 100% rename from docker/bind9/zones/partition1/old-vinyldns2.hosts rename to quickstart/bind9/zones/partition1/old-vinyldns2.hosts diff --git a/docker/bind9/zones/partition1/old-vinyldns3.hosts b/quickstart/bind9/zones/partition1/old-vinyldns3.hosts similarity index 100% rename from docker/bind9/zones/partition1/old-vinyldns3.hosts rename to 
quickstart/bind9/zones/partition1/old-vinyldns3.hosts diff --git a/docker/bind9/zones/partition1/one-time-shared.hosts b/quickstart/bind9/zones/partition1/one-time-shared.hosts similarity index 100% rename from docker/bind9/zones/partition1/one-time-shared.hosts rename to quickstart/bind9/zones/partition1/one-time-shared.hosts diff --git a/docker/bind9/zones/partition1/one-time.hosts b/quickstart/bind9/zones/partition1/one-time.hosts similarity index 100% rename from docker/bind9/zones/partition1/one-time.hosts rename to quickstart/bind9/zones/partition1/one-time.hosts diff --git a/docker/bind9/zones/partition1/open.hosts b/quickstart/bind9/zones/partition1/open.hosts similarity index 100% rename from docker/bind9/zones/partition1/open.hosts rename to quickstart/bind9/zones/partition1/open.hosts diff --git a/docker/bind9/zones/partition1/parent.com.hosts b/quickstart/bind9/zones/partition1/parent.com.hosts similarity index 100% rename from docker/bind9/zones/partition1/parent.com.hosts rename to quickstart/bind9/zones/partition1/parent.com.hosts diff --git a/docker/bind9/zones/partition1/shared.hosts b/quickstart/bind9/zones/partition1/shared.hosts similarity index 100% rename from docker/bind9/zones/partition1/shared.hosts rename to quickstart/bind9/zones/partition1/shared.hosts diff --git a/docker/bind9/zones/partition1/sync-test.hosts b/quickstart/bind9/zones/partition1/sync-test.hosts similarity index 100% rename from docker/bind9/zones/partition1/sync-test.hosts rename to quickstart/bind9/zones/partition1/sync-test.hosts diff --git a/docker/bind9/zones/partition1/system-test-history.hosts b/quickstart/bind9/zones/partition1/system-test-history.hosts similarity index 100% rename from docker/bind9/zones/partition1/system-test-history.hosts rename to quickstart/bind9/zones/partition1/system-test-history.hosts diff --git a/docker/bind9/zones/partition1/system-test.hosts b/quickstart/bind9/zones/partition1/system-test.hosts similarity index 100% rename from 
docker/bind9/zones/partition1/system-test.hosts rename to quickstart/bind9/zones/partition1/system-test.hosts diff --git a/docker/bind9/zones/partition1/vinyldns.hosts b/quickstart/bind9/zones/partition1/vinyldns.hosts similarity index 100% rename from docker/bind9/zones/partition1/vinyldns.hosts rename to quickstart/bind9/zones/partition1/vinyldns.hosts diff --git a/docker/bind9/zones/partition1/zone.requires.review.hosts b/quickstart/bind9/zones/partition1/zone.requires.review.hosts similarity index 100% rename from docker/bind9/zones/partition1/zone.requires.review.hosts rename to quickstart/bind9/zones/partition1/zone.requires.review.hosts diff --git a/docker/bind9/zones/partition2/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa b/quickstart/bind9/zones/partition2/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa similarity index 100% rename from docker/bind9/zones/partition2/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa rename to quickstart/bind9/zones/partition2/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa diff --git a/docker/bind9/zones/partition2/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa b/quickstart/bind9/zones/partition2/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa similarity index 100% rename from docker/bind9/zones/partition2/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa rename to quickstart/bind9/zones/partition2/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa diff --git a/docker/bind9/zones/partition2/10.10.in-addr.arpa b/quickstart/bind9/zones/partition2/10.10.in-addr.arpa similarity index 100% rename from docker/bind9/zones/partition2/10.10.in-addr.arpa rename to quickstart/bind9/zones/partition2/10.10.in-addr.arpa diff --git a/docker/bind9/zones/partition2/192^30.2.0.192.in-addr.arpa b/quickstart/bind9/zones/partition2/192^30.2.0.192.in-addr.arpa similarity index 100% rename from docker/bind9/zones/partition2/192^30.2.0.192.in-addr.arpa rename to quickstart/bind9/zones/partition2/192^30.2.0.192.in-addr.arpa diff --git a/docker/bind9/zones/partition2/2.0.192.in-addr.arpa 
b/quickstart/bind9/zones/partition2/2.0.192.in-addr.arpa similarity index 100% rename from docker/bind9/zones/partition2/2.0.192.in-addr.arpa rename to quickstart/bind9/zones/partition2/2.0.192.in-addr.arpa diff --git a/docker/bind9/zones/partition2/child.parent.com.hosts b/quickstart/bind9/zones/partition2/child.parent.com.hosts similarity index 100% rename from docker/bind9/zones/partition2/child.parent.com.hosts rename to quickstart/bind9/zones/partition2/child.parent.com.hosts diff --git a/docker/bind9/zones/partition2/dskey.example.com.hosts b/quickstart/bind9/zones/partition2/dskey.example.com.hosts similarity index 100% rename from docker/bind9/zones/partition2/dskey.example.com.hosts rename to quickstart/bind9/zones/partition2/dskey.example.com.hosts diff --git a/docker/bind9/zones/partition2/dummy.hosts b/quickstart/bind9/zones/partition2/dummy.hosts similarity index 100% rename from docker/bind9/zones/partition2/dummy.hosts rename to quickstart/bind9/zones/partition2/dummy.hosts diff --git a/docker/bind9/zones/partition2/example.com.hosts b/quickstart/bind9/zones/partition2/example.com.hosts similarity index 100% rename from docker/bind9/zones/partition2/example.com.hosts rename to quickstart/bind9/zones/partition2/example.com.hosts diff --git a/docker/bind9/zones/partition2/invalid-zone.hosts b/quickstart/bind9/zones/partition2/invalid-zone.hosts similarity index 100% rename from docker/bind9/zones/partition2/invalid-zone.hosts rename to quickstart/bind9/zones/partition2/invalid-zone.hosts diff --git a/docker/bind9/zones/partition2/list-records.hosts b/quickstart/bind9/zones/partition2/list-records.hosts similarity index 100% rename from docker/bind9/zones/partition2/list-records.hosts rename to quickstart/bind9/zones/partition2/list-records.hosts diff --git a/docker/bind9/zones/partition2/list-zones-test-searched-1.hosts b/quickstart/bind9/zones/partition2/list-zones-test-searched-1.hosts similarity index 100% rename from 
docker/bind9/zones/partition2/list-zones-test-searched-1.hosts
rename to quickstart/bind9/zones/partition2/list-zones-test-searched-1.hosts
diff --git a/docker/bind9/zones/partition2/list-zones-test-searched-2.hosts b/quickstart/bind9/zones/partition2/list-zones-test-searched-2.hosts
similarity index 100%
rename from docker/bind9/zones/partition2/list-zones-test-searched-2.hosts
rename to quickstart/bind9/zones/partition2/list-zones-test-searched-2.hosts
diff --git a/docker/bind9/zones/partition2/list-zones-test-searched-3.hosts b/quickstart/bind9/zones/partition2/list-zones-test-searched-3.hosts
similarity index 100%
rename from docker/bind9/zones/partition2/list-zones-test-searched-3.hosts
rename to quickstart/bind9/zones/partition2/list-zones-test-searched-3.hosts
diff --git a/docker/bind9/zones/partition2/list-zones-test-unfiltered-1.hosts b/quickstart/bind9/zones/partition2/list-zones-test-unfiltered-1.hosts
similarity index 100%
rename from docker/bind9/zones/partition2/list-zones-test-unfiltered-1.hosts
rename to quickstart/bind9/zones/partition2/list-zones-test-unfiltered-1.hosts
diff --git a/docker/bind9/zones/partition2/list-zones-test-unfiltered-2.hosts b/quickstart/bind9/zones/partition2/list-zones-test-unfiltered-2.hosts
similarity index 100%
rename from docker/bind9/zones/partition2/list-zones-test-unfiltered-2.hosts
rename to quickstart/bind9/zones/partition2/list-zones-test-unfiltered-2.hosts
diff --git a/docker/bind9/zones/partition2/non.test.shared.hosts b/quickstart/bind9/zones/partition2/non.test.shared.hosts
similarity index 100%
rename from docker/bind9/zones/partition2/non.test.shared.hosts
rename to quickstart/bind9/zones/partition2/non.test.shared.hosts
diff --git a/docker/bind9/zones/partition2/not.loaded.hosts b/quickstart/bind9/zones/partition2/not.loaded.hosts
similarity index 100%
rename from docker/bind9/zones/partition2/not.loaded.hosts
rename to quickstart/bind9/zones/partition2/not.loaded.hosts
diff --git a/docker/bind9/zones/partition2/ok.hosts b/quickstart/bind9/zones/partition2/ok.hosts
similarity index 100%
rename from docker/bind9/zones/partition2/ok.hosts
rename to quickstart/bind9/zones/partition2/ok.hosts
diff --git a/docker/bind9/zones/partition2/old-shared.hosts b/quickstart/bind9/zones/partition2/old-shared.hosts
similarity index 100%
rename from docker/bind9/zones/partition2/old-shared.hosts
rename to quickstart/bind9/zones/partition2/old-shared.hosts
diff --git a/docker/bind9/zones/partition2/old-vinyldns2.hosts b/quickstart/bind9/zones/partition2/old-vinyldns2.hosts
similarity index 100%
rename from docker/bind9/zones/partition2/old-vinyldns2.hosts
rename to quickstart/bind9/zones/partition2/old-vinyldns2.hosts
diff --git a/docker/bind9/zones/partition2/old-vinyldns3.hosts b/quickstart/bind9/zones/partition2/old-vinyldns3.hosts
similarity index 100%
rename from docker/bind9/zones/partition2/old-vinyldns3.hosts
rename to quickstart/bind9/zones/partition2/old-vinyldns3.hosts
diff --git a/docker/bind9/zones/partition2/one-time-shared.hosts b/quickstart/bind9/zones/partition2/one-time-shared.hosts
similarity index 100%
rename from docker/bind9/zones/partition2/one-time-shared.hosts
rename to quickstart/bind9/zones/partition2/one-time-shared.hosts
diff --git a/docker/bind9/zones/partition2/one-time.hosts b/quickstart/bind9/zones/partition2/one-time.hosts
similarity index 100%
rename from docker/bind9/zones/partition2/one-time.hosts
rename to quickstart/bind9/zones/partition2/one-time.hosts
diff --git a/docker/bind9/zones/partition2/open.hosts b/quickstart/bind9/zones/partition2/open.hosts
similarity index 100%
rename from docker/bind9/zones/partition2/open.hosts
rename to quickstart/bind9/zones/partition2/open.hosts
diff --git a/docker/bind9/zones/partition2/parent.com.hosts b/quickstart/bind9/zones/partition2/parent.com.hosts
similarity index 100%
rename from docker/bind9/zones/partition2/parent.com.hosts
rename to quickstart/bind9/zones/partition2/parent.com.hosts
diff --git a/docker/bind9/zones/partition2/shared.hosts b/quickstart/bind9/zones/partition2/shared.hosts
similarity index 100%
rename from docker/bind9/zones/partition2/shared.hosts
rename to quickstart/bind9/zones/partition2/shared.hosts
diff --git a/docker/bind9/zones/partition2/sync-test.hosts b/quickstart/bind9/zones/partition2/sync-test.hosts
similarity index 100%
rename from docker/bind9/zones/partition2/sync-test.hosts
rename to quickstart/bind9/zones/partition2/sync-test.hosts
diff --git a/docker/bind9/zones/partition2/system-test-history.hosts b/quickstart/bind9/zones/partition2/system-test-history.hosts
similarity index 100%
rename from docker/bind9/zones/partition2/system-test-history.hosts
rename to quickstart/bind9/zones/partition2/system-test-history.hosts
diff --git a/docker/bind9/zones/partition2/system-test.hosts b/quickstart/bind9/zones/partition2/system-test.hosts
similarity index 100%
rename from docker/bind9/zones/partition2/system-test.hosts
rename to quickstart/bind9/zones/partition2/system-test.hosts
diff --git a/docker/bind9/zones/partition2/vinyldns.hosts b/quickstart/bind9/zones/partition2/vinyldns.hosts
similarity index 100%
rename from docker/bind9/zones/partition2/vinyldns.hosts
rename to quickstart/bind9/zones/partition2/vinyldns.hosts
diff --git a/docker/bind9/zones/partition2/zone.requires.review.hosts b/quickstart/bind9/zones/partition2/zone.requires.review.hosts
similarity index 100%
rename from docker/bind9/zones/partition2/zone.requires.review.hosts
rename to quickstart/bind9/zones/partition2/zone.requires.review.hosts
diff --git a/docker/bind9/zones/partition3/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa b/quickstart/bind9/zones/partition3/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
similarity index 100%
rename from docker/bind9/zones/partition3/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
rename to quickstart/bind9/zones/partition3/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
diff --git a/docker/bind9/zones/partition3/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa b/quickstart/bind9/zones/partition3/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
similarity index 100%
rename from docker/bind9/zones/partition3/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
rename to quickstart/bind9/zones/partition3/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
diff --git a/docker/bind9/zones/partition3/10.10.in-addr.arpa b/quickstart/bind9/zones/partition3/10.10.in-addr.arpa
similarity index 100%
rename from docker/bind9/zones/partition3/10.10.in-addr.arpa
rename to quickstart/bind9/zones/partition3/10.10.in-addr.arpa
diff --git a/docker/bind9/zones/partition3/192^30.2.0.192.in-addr.arpa b/quickstart/bind9/zones/partition3/192^30.2.0.192.in-addr.arpa
similarity index 100%
rename from docker/bind9/zones/partition3/192^30.2.0.192.in-addr.arpa
rename to quickstart/bind9/zones/partition3/192^30.2.0.192.in-addr.arpa
diff --git a/docker/bind9/zones/partition3/2.0.192.in-addr.arpa b/quickstart/bind9/zones/partition3/2.0.192.in-addr.arpa
similarity index 100%
rename from docker/bind9/zones/partition3/2.0.192.in-addr.arpa
rename to quickstart/bind9/zones/partition3/2.0.192.in-addr.arpa
diff --git a/docker/bind9/zones/partition3/child.parent.com.hosts b/quickstart/bind9/zones/partition3/child.parent.com.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/child.parent.com.hosts
rename to quickstart/bind9/zones/partition3/child.parent.com.hosts
diff --git a/docker/bind9/zones/partition3/dskey.example.com.hosts b/quickstart/bind9/zones/partition3/dskey.example.com.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/dskey.example.com.hosts
rename to quickstart/bind9/zones/partition3/dskey.example.com.hosts
diff --git a/docker/bind9/zones/partition3/dummy.hosts b/quickstart/bind9/zones/partition3/dummy.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/dummy.hosts
rename to quickstart/bind9/zones/partition3/dummy.hosts
diff --git a/docker/bind9/zones/partition3/example.com.hosts b/quickstart/bind9/zones/partition3/example.com.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/example.com.hosts
rename to quickstart/bind9/zones/partition3/example.com.hosts
diff --git a/docker/bind9/zones/partition3/invalid-zone.hosts b/quickstart/bind9/zones/partition3/invalid-zone.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/invalid-zone.hosts
rename to quickstart/bind9/zones/partition3/invalid-zone.hosts
diff --git a/docker/bind9/zones/partition3/list-records.hosts b/quickstart/bind9/zones/partition3/list-records.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/list-records.hosts
rename to quickstart/bind9/zones/partition3/list-records.hosts
diff --git a/docker/bind9/zones/partition3/list-zones-test-searched-1.hosts b/quickstart/bind9/zones/partition3/list-zones-test-searched-1.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/list-zones-test-searched-1.hosts
rename to quickstart/bind9/zones/partition3/list-zones-test-searched-1.hosts
diff --git a/docker/bind9/zones/partition3/list-zones-test-searched-2.hosts b/quickstart/bind9/zones/partition3/list-zones-test-searched-2.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/list-zones-test-searched-2.hosts
rename to quickstart/bind9/zones/partition3/list-zones-test-searched-2.hosts
diff --git a/docker/bind9/zones/partition3/list-zones-test-searched-3.hosts b/quickstart/bind9/zones/partition3/list-zones-test-searched-3.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/list-zones-test-searched-3.hosts
rename to quickstart/bind9/zones/partition3/list-zones-test-searched-3.hosts
diff --git a/docker/bind9/zones/partition3/list-zones-test-unfiltered-1.hosts b/quickstart/bind9/zones/partition3/list-zones-test-unfiltered-1.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/list-zones-test-unfiltered-1.hosts
rename to quickstart/bind9/zones/partition3/list-zones-test-unfiltered-1.hosts
diff --git a/docker/bind9/zones/partition3/list-zones-test-unfiltered-2.hosts b/quickstart/bind9/zones/partition3/list-zones-test-unfiltered-2.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/list-zones-test-unfiltered-2.hosts
rename to quickstart/bind9/zones/partition3/list-zones-test-unfiltered-2.hosts
diff --git a/docker/bind9/zones/partition3/non.test.shared.hosts b/quickstart/bind9/zones/partition3/non.test.shared.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/non.test.shared.hosts
rename to quickstart/bind9/zones/partition3/non.test.shared.hosts
diff --git a/docker/bind9/zones/partition3/not.loaded.hosts b/quickstart/bind9/zones/partition3/not.loaded.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/not.loaded.hosts
rename to quickstart/bind9/zones/partition3/not.loaded.hosts
diff --git a/docker/bind9/zones/partition3/ok.hosts b/quickstart/bind9/zones/partition3/ok.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/ok.hosts
rename to quickstart/bind9/zones/partition3/ok.hosts
diff --git a/docker/bind9/zones/partition3/old-shared.hosts b/quickstart/bind9/zones/partition3/old-shared.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/old-shared.hosts
rename to quickstart/bind9/zones/partition3/old-shared.hosts
diff --git a/docker/bind9/zones/partition3/old-vinyldns2.hosts b/quickstart/bind9/zones/partition3/old-vinyldns2.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/old-vinyldns2.hosts
rename to quickstart/bind9/zones/partition3/old-vinyldns2.hosts
diff --git a/docker/bind9/zones/partition3/old-vinyldns3.hosts b/quickstart/bind9/zones/partition3/old-vinyldns3.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/old-vinyldns3.hosts
rename to quickstart/bind9/zones/partition3/old-vinyldns3.hosts
diff --git a/docker/bind9/zones/partition3/one-time-shared.hosts b/quickstart/bind9/zones/partition3/one-time-shared.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/one-time-shared.hosts
rename to quickstart/bind9/zones/partition3/one-time-shared.hosts
diff --git a/docker/bind9/zones/partition3/one-time.hosts b/quickstart/bind9/zones/partition3/one-time.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/one-time.hosts
rename to quickstart/bind9/zones/partition3/one-time.hosts
diff --git a/docker/bind9/zones/partition3/open.hosts b/quickstart/bind9/zones/partition3/open.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/open.hosts
rename to quickstart/bind9/zones/partition3/open.hosts
diff --git a/docker/bind9/zones/partition3/parent.com.hosts b/quickstart/bind9/zones/partition3/parent.com.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/parent.com.hosts
rename to quickstart/bind9/zones/partition3/parent.com.hosts
diff --git a/docker/bind9/zones/partition3/shared.hosts b/quickstart/bind9/zones/partition3/shared.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/shared.hosts
rename to quickstart/bind9/zones/partition3/shared.hosts
diff --git a/docker/bind9/zones/partition3/sync-test.hosts b/quickstart/bind9/zones/partition3/sync-test.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/sync-test.hosts
rename to quickstart/bind9/zones/partition3/sync-test.hosts
diff --git a/docker/bind9/zones/partition3/system-test-history.hosts b/quickstart/bind9/zones/partition3/system-test-history.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/system-test-history.hosts
rename to quickstart/bind9/zones/partition3/system-test-history.hosts
diff --git a/docker/bind9/zones/partition3/system-test.hosts b/quickstart/bind9/zones/partition3/system-test.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/system-test.hosts
rename to quickstart/bind9/zones/partition3/system-test.hosts
diff --git a/docker/bind9/zones/partition3/vinyldns.hosts b/quickstart/bind9/zones/partition3/vinyldns.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/vinyldns.hosts
rename to quickstart/bind9/zones/partition3/vinyldns.hosts
diff --git a/docker/bind9/zones/partition3/zone.requires.review.hosts b/quickstart/bind9/zones/partition3/zone.requires.review.hosts
similarity index 100%
rename from docker/bind9/zones/partition3/zone.requires.review.hosts
rename to quickstart/bind9/zones/partition3/zone.requires.review.hosts
diff --git a/docker/bind9/zones/partition4/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa b/quickstart/bind9/zones/partition4/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
similarity index 100%
rename from docker/bind9/zones/partition4/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
rename to quickstart/bind9/zones/partition4/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
diff --git a/docker/bind9/zones/partition4/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa b/quickstart/bind9/zones/partition4/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
similarity index 100%
rename from docker/bind9/zones/partition4/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
rename to quickstart/bind9/zones/partition4/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
diff --git a/docker/bind9/zones/partition4/10.10.in-addr.arpa b/quickstart/bind9/zones/partition4/10.10.in-addr.arpa
similarity index 100%
rename from docker/bind9/zones/partition4/10.10.in-addr.arpa
rename to quickstart/bind9/zones/partition4/10.10.in-addr.arpa
diff --git a/docker/bind9/zones/partition4/192^30.2.0.192.in-addr.arpa b/quickstart/bind9/zones/partition4/192^30.2.0.192.in-addr.arpa
similarity index 100%
rename from docker/bind9/zones/partition4/192^30.2.0.192.in-addr.arpa
rename to quickstart/bind9/zones/partition4/192^30.2.0.192.in-addr.arpa
diff --git a/docker/bind9/zones/partition4/2.0.192.in-addr.arpa b/quickstart/bind9/zones/partition4/2.0.192.in-addr.arpa
similarity index 100%
rename from docker/bind9/zones/partition4/2.0.192.in-addr.arpa
rename to quickstart/bind9/zones/partition4/2.0.192.in-addr.arpa
diff --git a/docker/bind9/zones/partition4/child.parent.com.hosts b/quickstart/bind9/zones/partition4/child.parent.com.hosts
similarity index 100%
rename from docker/bind9/zones/partition4/child.parent.com.hosts
rename to quickstart/bind9/zones/partition4/child.parent.com.hosts
diff --git a/docker/bind9/zones/partition4/dskey.example.com.hosts b/quickstart/bind9/zones/partition4/dskey.example.com.hosts
similarity index 100%
rename from docker/bind9/zones/partition4/dskey.example.com.hosts
rename to quickstart/bind9/zones/partition4/dskey.example.com.hosts
diff --git a/docker/bind9/zones/partition4/dummy.hosts b/quickstart/bind9/zones/partition4/dummy.hosts
similarity index 100%
rename from docker/bind9/zones/partition4/dummy.hosts
rename to quickstart/bind9/zones/partition4/dummy.hosts
diff --git a/docker/bind9/zones/partition4/example.com.hosts b/quickstart/bind9/zones/partition4/example.com.hosts
similarity index 100%
rename from docker/bind9/zones/partition4/example.com.hosts
rename to quickstart/bind9/zones/partition4/example.com.hosts
diff --git a/docker/bind9/zones/partition4/invalid-zone.hosts b/quickstart/bind9/zones/partition4/invalid-zone.hosts
similarity index 100%
rename from docker/bind9/zones/partition4/invalid-zone.hosts
rename to quickstart/bind9/zones/partition4/invalid-zone.hosts
diff --git a/docker/bind9/zones/partition4/list-records.hosts b/quickstart/bind9/zones/partition4/list-records.hosts
similarity index 100%
rename from docker/bind9/zones/partition4/list-records.hosts
rename to quickstart/bind9/zones/partition4/list-records.hosts
diff --git a/docker/bind9/zones/partition4/list-zones-test-searched-1.hosts b/quickstart/bind9/zones/partition4/list-zones-test-searched-1.hosts
similarity index 100%
rename from docker/bind9/zones/partition4/list-zones-test-searched-1.hosts
rename to quickstart/bind9/zones/partition4/list-zones-test-searched-1.hosts
diff --git a/docker/bind9/zones/partition4/list-zones-test-searched-2.hosts b/quickstart/bind9/zones/partition4/list-zones-test-searched-2.hosts
similarity index 100%
rename from docker/bind9/zones/partition4/list-zones-test-searched-2.hosts
rename to quickstart/bind9/zones/partition4/list-zones-test-searched-2.hosts
diff --git a/docker/bind9/zones/partition4/list-zones-test-searched-3.hosts b/quickstart/bind9/zones/partition4/list-zones-test-searched-3.hosts
similarity index 100%
rename from docker/bind9/zones/partition4/list-zones-test-searched-3.hosts
rename to quickstart/bind9/zones/partition4/list-zones-test-searched-3.hosts
diff --git a/docker/bind9/zones/partition4/list-zones-test-unfiltered-1.hosts b/quickstart/bind9/zones/partition4/list-zones-test-unfiltered-1.hosts
similarity index 100%
rename from docker/bind9/zones/partition4/list-zones-test-unfiltered-1.hosts
rename to quickstart/bind9/zones/partition4/list-zones-test-unfiltered-1.hosts
diff --git a/docker/bind9/zones/partition4/list-zones-test-unfiltered-2.hosts b/quickstart/bind9/zones/partition4/list-zones-test-unfiltered-2.hosts
similarity index 100%
rename from docker/bind9/zones/partition4/list-zones-test-unfiltered-2.hosts
rename to quickstart/bind9/zones/partition4/list-zones-test-unfiltered-2.hosts
diff --git a/docker/bind9/zones/partition4/non.test.shared.hosts b/quickstart/bind9/zones/partition4/non.test.shared.hosts
similarity index 100%
rename from docker/bind9/zones/partition4/non.test.shared.hosts
rename to quickstart/bind9/zones/partition4/non.test.shared.hosts
diff --git a/docker/bind9/zones/partition4/not.loaded.hosts b/quickstart/bind9/zones/partition4/not.loaded.hosts
similarity index 100%
rename from docker/bind9/zones/partition4/not.loaded.hosts
rename to quickstart/bind9/zones/partition4/not.loaded.hosts
diff --git a/docker/bind9/zones/partition4/ok.hosts b/quickstart/bind9/zones/partition4/ok.hosts
similarity index 100%
rename from docker/bind9/zones/partition4/ok.hosts
rename to quickstart/bind9/zones/partition4/ok.hosts
diff --git a/docker/bind9/zones/partition4/old-shared.hosts b/quickstart/bind9/zones/partition4/old-shared.hosts
similarity index 100%
rename from docker/bind9/zones/partition4/old-shared.hosts
rename to quickstart/bind9/zones/partition4/old-shared.hosts
diff --git a/docker/bind9/zones/partition4/old-vinyldns2.hosts b/quickstart/bind9/zones/partition4/old-vinyldns2.hosts
similarity index 100%
rename from docker/bind9/zones/partition4/old-vinyldns2.hosts
rename to quickstart/bind9/zones/partition4/old-vinyldns2.hosts
diff --git a/docker/bind9/zones/partition4/old-vinyldns3.hosts b/quickstart/bind9/zones/partition4/old-vinyldns3.hosts
similarity index 100%
rename from docker/bind9/zones/partition4/old-vinyldns3.hosts
rename to quickstart/bind9/zones/partition4/old-vinyldns3.hosts
diff --git a/docker/bind9/zones/partition4/one-time-shared.hosts b/quickstart/bind9/zones/partition4/one-time-shared.hosts
similarity index 100%
rename from docker/bind9/zones/partition4/one-time-shared.hosts
rename to quickstart/bind9/zones/partition4/one-time-shared.hosts
diff --git a/docker/bind9/zones/partition4/one-time.hosts b/quickstart/bind9/zones/partition4/one-time.hosts
similarity index 100%
rename from docker/bind9/zones/partition4/one-time.hosts
rename to quickstart/bind9/zones/partition4/one-time.hosts
diff --git a/docker/bind9/zones/partition4/open.hosts b/quickstart/bind9/zones/partition4/open.hosts
similarity index 100%
rename from docker/bind9/zones/partition4/open.hosts
rename to quickstart/bind9/zones/partition4/open.hosts
diff --git a/docker/bind9/zones/partition4/parent.com.hosts b/quickstart/bind9/zones/partition4/parent.com.hosts
similarity index 100%
rename from docker/bind9/zones/partition4/parent.com.hosts
rename to quickstart/bind9/zones/partition4/parent.com.hosts
diff --git a/docker/bind9/zones/partition4/shared.hosts
b/quickstart/bind9/zones/partition4/shared.hosts similarity index 100% rename from docker/bind9/zones/partition4/shared.hosts rename to quickstart/bind9/zones/partition4/shared.hosts diff --git a/docker/bind9/zones/partition4/sync-test.hosts b/quickstart/bind9/zones/partition4/sync-test.hosts similarity index 100% rename from docker/bind9/zones/partition4/sync-test.hosts rename to quickstart/bind9/zones/partition4/sync-test.hosts diff --git a/docker/bind9/zones/partition4/system-test-history.hosts b/quickstart/bind9/zones/partition4/system-test-history.hosts similarity index 100% rename from docker/bind9/zones/partition4/system-test-history.hosts rename to quickstart/bind9/zones/partition4/system-test-history.hosts diff --git a/docker/bind9/zones/partition4/system-test.hosts b/quickstart/bind9/zones/partition4/system-test.hosts similarity index 100% rename from docker/bind9/zones/partition4/system-test.hosts rename to quickstart/bind9/zones/partition4/system-test.hosts diff --git a/docker/bind9/zones/partition4/vinyldns.hosts b/quickstart/bind9/zones/partition4/vinyldns.hosts similarity index 100% rename from docker/bind9/zones/partition4/vinyldns.hosts rename to quickstart/bind9/zones/partition4/vinyldns.hosts diff --git a/docker/bind9/zones/partition4/zone.requires.review.hosts b/quickstart/bind9/zones/partition4/zone.requires.review.hosts similarity index 100% rename from docker/bind9/zones/partition4/zone.requires.review.hosts rename to quickstart/bind9/zones/partition4/zone.requires.review.hosts diff --git a/quickstart/docker-compose.yml b/quickstart/docker-compose.yml new file mode 100644 index 000000000..2422a916f --- /dev/null +++ b/quickstart/docker-compose.yml @@ -0,0 +1,45 @@ +version: "3.5" + +services: + ldap: + container_name: "vinyldns-ldap" + image: rroemhild/test-openldap + ports: + - "19004:389" + + integration: + container_name: "vinyldns-api-integration" + hostname: "vinyldns-integration" + image: "vinyldns-api-integration" + build: + context: 
../ + dockerfile: test/api/integration/Dockerfile + environment: + RUN_SERVICES: "all tail-logs" + env_file: + .env + ports: + - "9000:9000" + - "19001-19003:19001-19003/tcp" + - "19001:19001/udp" + + portal: + container_name: "vinyldns-portal" + image: "vinyldns/portal:${VINYLDNS_VERSION}" + build: + context: .. + dockerfile: "" + env_file: + .env + ports: + - "${PORTAL_PORT}:${PORTAL_PORT}" + volumes: + - ./portal/application.ini:/opt/docker/conf/application.ini + - ./portal/application.conf:/opt/docker/conf/application.conf + depends_on: + - integration + - ldap + +networks: + default: + name: "vinyldns_net" diff --git a/quickstart/portal/Dockerfile b/quickstart/portal/Dockerfile new file mode 100644 index 000000000..a2b2302fc --- /dev/null +++ b/quickstart/portal/Dockerfile @@ -0,0 +1,34 @@ +FROM vinyldns/build:base-build-portal as builder +ARG VINYLDNS_VERSION="0.0.0-local-dev" + +COPY . /vinyldns + +WORKDIR /vinyldns +RUN cp /build/node_modules.tar.xz /vinyldns/modules/portal && \ + cd /vinyldns/modules/portal && tar Jxvf node_modules.tar.xz && \ + cd /vinyldns + +RUN sbt "set version in ThisBuild := \"${VINYLDNS_VERSION}\"; project portal; preparePortal" +RUN sbt "set version in ThisBuild := \"${VINYLDNS_VERSION}\"; project portal; universal:packageZipTarball" + +FROM adoptopenjdk/openjdk11:jdk-11.0.8_10-alpine + +RUN apk add --update --no-cache netcat-openbsd bash + +COPY --from=builder /vinyldns/modules/portal/target/universal/portal.tgz / + +RUN mkdir -p /opt && \ + tar -xzvf /portal.tgz && \ + mv /portal /opt/docker && \ + mkdir -p /opt/docker/lib_extra + +# This will set the vinyldns version, make sure to have this in config... 
version = ${?VINYLDNS_VERSION} +ENV VINYLDNS_VERSION=$VINYLDNS_VERSION + +# Mount the volume for config file and lib extras +# Note: These volume names are used in the build.sbt +VOLUME ["/opt/docker/lib_extra/", "/opt/docker/conf"] + +EXPOSE 9000 + +ENTRYPOINT ["/opt/docker/bin/portal"] diff --git a/quickstart/portal/Makefile b/quickstart/portal/Makefile new file mode 100644 index 000000000..8f0197073 --- /dev/null +++ b/quickstart/portal/Makefile @@ -0,0 +1,43 @@ +SHELL=bash +IMAGE_NAME=vinyldns/portal +ROOT_DIR:=$(shell dirname $(realpath $(lastword $(MAKEFILE_LIST)))) + +# Check that the required version of make is being used +REQ_MAKE_VER:=3.82 +ifneq ($(REQ_MAKE_VER),$(firstword $(sort $(MAKE_VERSION) $(REQ_MAKE_VER)))) + $(error The version of MAKE $(REQ_MAKE_VER) or higher is required; you are running $(MAKE_VERSION)) +endif + +# Extract arguments for `make run` +EXTRACT_ARGS=true +ifeq (run,$(firstword $(MAKECMDGOALS))) + EXTRACT_ARGS=true +endif +ifeq ($(EXTRACT_ARGS),true) + # use the rest as arguments for "run" + WITH_ARGS ?= $(wordlist 2,$(words $(MAKECMDGOALS)),$(MAKECMDGOALS)) +endif + +%: + @: + +.ONESHELL: + +.PHONY: all build run + +all: build run + +build: + @set -euo pipefail + cd ../.. + docker build -t $(IMAGE_NAME) --build-arg DOCKERFILE_PATH="$(RELATIVE_ROOT_DIR)" -f "$(ROOT_DIR)/Dockerfile" . 
+ +run: + @set -euo pipefail + docker run -it --rm $(DOCKER_PARAMS) -p 9001:9001 $(IMAGE_NAME) -- $(WITH_ARGS) + +run-bg: + @set -euo pipefail + docker stop $(IMAGE_NAME) &> /dev/null || true + docker rm $(IMAGE_NAME) &> /dev/null || true + docker run -td --name $(IMAGE_NAME) --rm $(DOCKER_PARAMS) -p 9001:9001 $(IMAGE_NAME) -- /bin/bash diff --git a/docker/portal/application.conf b/quickstart/portal/application.conf similarity index 92% rename from docker/portal/application.conf rename to quickstart/portal/application.conf index 9b035a971..97347dfb7 100644 --- a/docker/portal/application.conf +++ b/quickstart/portal/application.conf @@ -11,7 +11,7 @@ LDAP { # This will be the name of the LDAP field that carries the user's login id (what they enter in the username in login form) userNameAttribute = "uid" - # For ogranization, leave empty for this demo, the domainName is what matters, and that is the LDAP structure + # For organization, leave empty for this demo, the domainName is what matters, and that is the LDAP structure # to search for users that require login searchBase = [ {organization = "", domainName = "ou=people,dc=planetexpress,dc=com"}, @@ -21,7 +21,8 @@ LDAP { securityAuthentication = "simple" # Note: The following assumes a purely docker setup, using container_name = vinyldns-ldap - providerUrl = "ldap://vinyldns-ldap:389" + providerUrl = "ldap://vinyldns-ldap:19004" + providerUrl = ${?LDAP_PROVIDER_URL} } # This is only needed if keeping vinyldns user store in sync with ldap (to auto lock out users who left your diff --git a/docker/portal/application.ini b/quickstart/portal/application.ini similarity index 100% rename from docker/portal/application.ini rename to quickstart/portal/application.ini diff --git a/test/api/functional/Dockerfile b/test/api/functional/Dockerfile index 5a01aefe1..3ea913b81 100644 --- a/test/api/functional/Dockerfile +++ b/test/api/functional/Dockerfile @@ -6,7 +6,8 @@ COPY . 
/build/ WORKDIR /build ## Run the build if we don't already have a vinyldns.jar -RUN if [ ! -f /opt/vinyldns/vinyldns.jar ]; then \ +RUN if [ -f assembly/vinyldns.jar ]; then cp assembly/vinyldns.jar /opt/vinyldns; fi && \ + if [ ! -f /opt/vinyldns/vinyldns.jar ]; then \ env SBT_OPTS="-XX:+UseConcMarkSweepGC -Xmx4G -Xms1G" \ sbt -Dbuild.scalafmtOnCompile=false -Dbuild.lintOnCompile=fase ";project api;coverageOff;assembly" \ && cp modules/api/target/scala-2.12/vinyldns.jar /opt/vinyldns/; \ diff --git a/test/api/functional/Makefile b/test/api/functional/Makefile index 80c25f552..810205023 100644 --- a/test/api/functional/Makefile +++ b/test/api/functional/Makefile @@ -17,9 +17,9 @@ ifeq (run,$(firstword $(MAKECMDGOALS))) endif ifeq ($(EXTRACT_ARGS),true) # use the rest as arguments for "run" - RUN_ARGS := $(wordlist 2,$(words $(MAKECMDGOALS)),$(MAKECMDGOALS)) + WITH_ARGS := $(wordlist 2,$(words $(MAKECMDGOALS)),$(MAKECMDGOALS)) # ...and turn them into do-nothing targets - $(eval $(RUN_ARGS):;@:) + $(eval $(WITH_ARGS):;@:) endif @@ -38,8 +38,8 @@ build: run: @set -euo pipefail - docker run -it --rm $(DOCKER_PARAMS) -p 9000:9000 -p 19003:19003 -p 19002:19002 -p 19001:19001/tcp -p 19001:19001/udp $(IMAGE_NAME) -- $(RUN_ARGS) + docker run -it --rm $(DOCKER_PARAMS) -p 9000:9000 -p 19001-19003:19001-19003 -p 19001:19001/udp $(IMAGE_NAME) -- $(WITH_ARGS) run-local: @set -euo pipefail - docker run -it --rm $(DOCKER_PARAMS) -p 9000:9000 -p 19003:19003 -p 19002:19002 -p 19001:19001/tcp -p 19001:19001/udp -v "$$(pwd)/test:/functional_test" $(IMAGE_NAME) -- $(RUN_ARGS) + docker run -it --rm $(DOCKER_PARAMS) -p 9000:9000 -p 19001-19003:19001-19003 -p 19001:19001/udp -v "$$(pwd)/test:/functional_test" $(IMAGE_NAME) -- $(WITH_ARGS) diff --git a/test/api/integration/Dockerfile b/test/api/integration/Dockerfile index aa0080dda..0d07e335b 100644 --- a/test/api/integration/Dockerfile +++ b/test/api/integration/Dockerfile @@ -6,7 +6,8 @@ COPY . 
/build/ WORKDIR /build ## Run the build if we don't already have a vinyldns.jar -RUN if [ ! -f /opt/vinyldns/vinyldns.jar ]; then \ +RUN if [ -f assembly/vinyldns.jar ]; then cp assembly/vinyldns.jar /opt/vinyldns; fi && \ + if [ ! -f /opt/vinyldns/vinyldns.jar ]; then \ env SBT_OPTS="-XX:+UseConcMarkSweepGC -Xmx4G -Xms1G" \ sbt -Dbuild.scalafmtOnCompile=false -Dbuild.lintOnCompile=fase ";project api;coverageOff;assembly" \ && cp modules/api/target/scala-2.12/vinyldns.jar /opt/vinyldns/; \ @@ -26,3 +27,5 @@ WORKDIR /build COPY docker/bind9/etc/named.conf.* /etc/bind/ COPY docker/bind9/zones/ /var/bind/ RUN named-checkconf + +ENV RUN_SERVICES="all" diff --git a/test/api/integration/Dockerfile.dockerignore b/test/api/integration/Dockerfile.dockerignore deleted file mode 100644 index e42085f51..000000000 --- a/test/api/integration/Dockerfile.dockerignore +++ /dev/null @@ -1,15 +0,0 @@ -**/.venv* -**/.virtualenv -**/target -**/docs -**/out -**/.log -**/.idea/ -**/.bsp -**/*cache* -**/*.png -**/.git -**/Dockerfile -**/*.dockerignore -**/.github -**/_template diff --git a/test/api/integration/Makefile b/test/api/integration/Makefile index 8817cf335..6e0fd3e0d 100644 --- a/test/api/integration/Makefile +++ b/test/api/integration/Makefile @@ -1,8 +1,7 @@ SHELL=bash -IMAGE_NAME=vinyldns-integraion +IMAGE_NAME=vinyldns-api-integration ROOT_DIR:=$(shell dirname $(realpath $(lastword $(MAKEFILE_LIST)))) RELATIVE_ROOT_DIR:=$(shell realpath --relative-to=../../.. 
$(ROOT_DIR)) -VINYLDNS_JAR_PATH?=modules/api/target/scala-2.12/vinyldns.jar # Check that the required version of make is being used REQ_MAKE_VER:=3.82 @@ -17,7 +16,7 @@ ifeq (run,$(firstword $(MAKECMDGOALS))) endif ifeq ($(EXTRACT_ARGS),true) # use the rest as arguments for "run" - RUN_ARGS ?= $(wordlist 2,$(words $(MAKECMDGOALS)),$(MAKECMDGOALS)) + WITH_ARGS ?= $(wordlist 2,$(words $(MAKECMDGOALS)),$(MAKECMDGOALS)) endif %: @@ -31,21 +30,22 @@ all: build run build: @set -euo pipefail - trap 'if [ -f "$(ROOT_DIR)/vinyldns.jar" ]; then rm $(ROOT_DIR)/vinyldns.jar; fi' EXIT cd ../../.. - if [ -f modules/api/target/scala-2.12/vinyldns.jar ]; then cp modules/api/target/scala-2.12/vinyldns.jar $(ROOT_DIR)/vinyldns.jar; fi docker build -t $(IMAGE_NAME) --build-arg DOCKERFILE_PATH="$(RELATIVE_ROOT_DIR)" -f "$(ROOT_DIR)/Dockerfile" . run: @set -euo pipefail - docker run -it --rm $(DOCKER_PARAMS) -p 9000:9000 -p 19003:19003 -p 19002:19002 -p 19001:19001/tcp -p 19001:19001/udp $(IMAGE_NAME) -- $(RUN_ARGS) + docker run -it --rm $(DOCKER_PARAMS) -p 9000:9000 -p 19001-19003:19001-19003 -p 19001:19001/udp $(IMAGE_NAME) -- $(WITH_ARGS) run-bg: @set -euo pipefail - docker stop vinyldns-integration &> /dev/null || true - docker rm vinyldns-integration &> /dev/null || true - docker run -td --name vinyldns-integration --rm $(DOCKER_PARAMS) -p 9000:9000 -p 19003:19003 -p 19002:19002 -p 19001:19001/tcp -p 19001:19001/udp $(IMAGE_NAME) -- /bin/bash + docker stop $(IMAGE_NAME) &> /dev/null || true + docker run -td --name $(IMAGE_NAME) --rm $(DOCKER_PARAMS) -e RUN_SERVICES="deps-only tail-logs" -p 19001-19003:19001-19003 -p 19001:19001/udp $(IMAGE_NAME) + +stop-bg: + @set -euo pipefail + docker stop $(IMAGE_NAME) &> /dev/null || true run-local: @set -euo pipefail - docker run -it --rm $(DOCKER_PARAMS) -p 9000:9000 -p 19003:19003 -p 19002:19002 -p 19001:19001/tcp -p 19001:19001/udp -v "$(ROOT_DIR)/../../..:/build" $(IMAGE_NAME) -- $(RUN_ARGS) + docker run -it --rm $(DOCKER_PARAMS) -p 
9000:9000 -p 19001-19003:19001-19003 -p 19001:19001/udp -v "$(ROOT_DIR)/../../..:/build" $(IMAGE_NAME) -- $(WITH_ARGS) diff --git a/test/portal/functional/Dockerfile.dockerignore b/test/portal/functional/Dockerfile.dockerignore deleted file mode 100644 index e42085f51..000000000 --- a/test/portal/functional/Dockerfile.dockerignore +++ /dev/null @@ -1,15 +0,0 @@ -**/.venv* -**/.virtualenv -**/target -**/docs -**/out -**/.log -**/.idea/ -**/.bsp -**/*cache* -**/*.png -**/.git -**/Dockerfile -**/*.dockerignore -**/.github -**/_template diff --git a/test/portal/functional/Makefile b/test/portal/functional/Makefile index fe2a57d00..74ada4fd4 100644 --- a/test/portal/functional/Makefile +++ b/test/portal/functional/Makefile @@ -16,9 +16,9 @@ ifeq (run,$(firstword $(MAKECMDGOALS))) endif ifeq ($(EXTRACT_ARGS),true) # use the rest as arguments for "run" - RUN_ARGS := $(wordlist 2,$(words $(MAKECMDGOALS)),$(MAKECMDGOALS)) + WITH_ARGS := $(wordlist 2,$(words $(MAKECMDGOALS)),$(MAKECMDGOALS)) # ...and turn them into do-nothing targets - $(eval $(RUN_ARGS):;@:) + $(eval $(WITH_ARGS):;@:) endif @@ -35,8 +35,8 @@ build: run: @set -euo pipefail - docker run -it --rm $(IMAGE_NAME) -- $(RUN_ARGS) + docker run -it --rm $(IMAGE_NAME) -- $(WITH_ARGS) run-local: @set -euo pipefail - docker run -it --rm -v "$$(pwd)/../../../modules/portal:/functional_test" $(IMAGE_NAME) -- $(RUN_ARGS) + docker run -it --rm -v "$$(pwd)/../../../modules/portal:/functional_test" $(IMAGE_NAME) -- $(WITH_ARGS) diff --git a/bin/add-license-headers.sh b/utils/add-license-headers.sh old mode 100755 new mode 100644 similarity index 100% rename from bin/add-license-headers.sh rename to utils/add-license-headers.sh diff --git a/docker/admin/Dockerfile b/utils/admin/Dockerfile similarity index 100% rename from docker/admin/Dockerfile rename to utils/admin/Dockerfile diff --git a/docker/admin/update-support-user.py b/utils/admin/update-support-user.py similarity index 100% rename from 
docker/admin/update-support-user.py rename to utils/admin/update-support-user.py diff --git a/utils/clean-vinyldns-containers.sh b/utils/clean-vinyldns-containers.sh new file mode 100644 index 000000000..9e2e90199 --- /dev/null +++ b/utils/clean-vinyldns-containers.sh @@ -0,0 +1,27 @@ +#!/usr/bin/env bash +# +# This script will kill and remove containers associated +# with VinylDNS +# +# Note: this will not remove the actual images from your +# machine, just the running containers + +ALL_IDS=$(docker ps -a | grep -e 'vinyldns' -e 'flaviovs/mock-smtp' -e 'rroemhild/test-openldap' | awk '{print $1}') +if [ "${ALL_IDS}" == "" ]; then + echo "Nothing to remove" + exit 0 +fi + +RUNNING_IDS=$(docker ps | grep -e 'vinyldns' -e 'flaviovs/mock-smtp' -e 'rroemhild/test-openldap' | awk '{print $1}') +if [ "${RUNNING_IDS}" != "" ]; then + echo "Killing running containers..." + echo "${RUNNING_IDS}" | xargs docker kill +fi + +ALL_IDS=$(docker ps -a | grep -e 'vinyldns' -e 'flaviovs/mock-smtp' -e 'rroemhild/test-openldap' | awk '{print $1}') +if [ "${ALL_IDS}" != "" ]; then + echo "Removing containers..." + echo "${ALL_IDS}" | xargs docker rm -v +fi + +docker network prune -f diff --git a/bin/func-test-api.sh b/utils/func-test-api.sh old mode 100755 new mode 100644 similarity index 100% rename from bin/func-test-api.sh rename to utils/func-test-api.sh diff --git a/bin/func-test-portal.sh b/utils/func-test-portal.sh old mode 100755 new mode 100644 similarity index 100% rename from bin/func-test-portal.sh rename to utils/func-test-portal.sh diff --git a/utils/quickstart-vinyldns.sh b/utils/quickstart-vinyldns.sh new file mode 100644 index 000000000..37dd5219a --- /dev/null +++ b/utils/quickstart-vinyldns.sh @@ -0,0 +1,141 @@ +#!/usr/bin/env bash +##################################################################################################### +# Starts up the api, portal, and dependent services via +# docker-compose.
The api will be available on localhost:9000 and the +# portal will be on localhost:9001 +# +# Relevant overrides can be found in quickstart/.env +# +# Options: +# -t, --timeout seconds: overwrite ping timeout of 60 +# -a, --api-only: do not start up vinyldns-portal +# -s, --service: specify the service to run +# -c, --clean: re-pull vinyldns/api and vinyldns/portal images from docker hub +# -b, --build: rebuild images when applicable +# -v, --version tag: overwrite vinyldns/api and vinyldns/portal docker tags +##################################################################################################### +set -eo pipefail + +function wait_for_url() { + echo -n "Checking ${URL}..." + RETRY="$TIMEOUT" + while [ "$RETRY" -gt 0 ]; do + if curl -I -s "${URL}" -o /dev/null -w "%{http_code}" &>/dev/null || false; then + echo "Succeeded in connecting to ${URL}!" + break + else + echo -n "." + + ((RETRY -= 1)) + sleep 1 + + if [ "$RETRY" -eq 0 ]; then + echo "Exceeded retries waiting for ${URL} to be ready, failing" + exit 1 + fi + fi + done +} + +function usage() { + printf "usage: quickstart-vinyldns.sh [OPTIONS]\n\n" + printf "Starts up a local VinylDNS installation using docker compose\n\n" + printf "options:\n" + printf "\t-t, --timeout seconds: overwrite ping timeout of 60\n" + printf "\t-a, --api-only: do not start up vinyldns-portal\n" + printf "\t-s, --service: specify the service to run\n" + printf "\t-c, --clean: re-pull vinyldns/api and vinyldns/portal images from docker hub\n" + printf "\t-b, --build: rebuild images when applicable\n" + printf "\t-v, --version tag: overwrite vinyldns/api and vinyldns/portal docker tags\n" +} + +function clean_images() { + if [[ $CLEAN -eq 1 ]]; then + echo "cleaning docker images..." + docker rmi "vinyldns/api:${VINYLDNS_VERSION}" + docker rmi "vinyldns/portal:${VINYLDNS_VERSION}" + fi +} + +function wait_for_api() { + echo "Waiting for api..." 
+ URL="$VINYLDNS_API_URL" + wait_for_url +} + +function wait_for_portal() { + # check if portal was skipped + if [ "$SERVICE" != "integration" ]; then + echo "Waiting for portal..." + URL="$VINYLDNS_PORTAL_URL" + wait_for_url + fi +} + +# initial var setup +DIR=$( + cd "$(dirname "$0")" + pwd -P +) +TIMEOUT=60 +DOCKER_COMPOSE_CONFIG="${DIR}/../quickstart/docker-compose.yml" +# empty service starts up all docker services in compose file +SERVICE="" +# when CLEAN is set to 1, existing docker images are deleted so they are re-pulled +CLEAN=0 +# default to latest for docker versions +export VINYLDNS_VERSION=latest + +# source env before parsing args so vars can be overwritten +set -a # Required in order to source docker/.env +# Source customizable env files +source "$DIR"/../quickstart/.env + +# parse args +BUILD="" +while [[ $# -gt 0 ]]; do + case "$1" in + -t | --timeout) + TIMEOUT="$2" + shift + shift + ;; + -a | --api-only) + SERVICE="integration" + shift + ;; + -s | --service) + SERVICE="$2" + shift + shift + ;; + -c | --clean) + CLEAN=1 + shift + ;; + -b | --build) + BUILD="--build" + shift + ;; + -v | --version) + export VINYLDNS_VERSION=$2 + shift + shift + ;; + *) + usage + exit + ;; + esac +done + +clean_images + +echo "timeout is set to ${TIMEOUT}" +echo "vinyldns version is set to '${VINYLDNS_VERSION}'" + +echo "Starting vinyldns and all dependencies in the background..." +docker-compose -f "$DOCKER_COMPOSE_CONFIG" up ${BUILD} -d "${SERVICE}" + +wait_for_api +wait_for_portal diff --git a/bin/release.sh b/utils/release.sh old mode 100755 new mode 100644 similarity index 89% rename from bin/release.sh rename to utils/release.sh index 002719882..da9ed02f2 --- a/bin/release.sh +++ b/utils/release.sh @@ -36,21 +36,21 @@ if [ "$1" != "skip-tests" ]; then printf "\nrunning api func tests... \n" if ! 
"$DIR"/func-test-api.sh then - printf "\nerror: bin/func-test-api.sh failed \n" + printf "\nerror: utils/func-test-api.sh failed \n" exit 1 fi printf "\nrunning portal func tests... \n" if ! "$DIR"/func-test-portal.sh then - printf "\nerror: bin/func-test-portal.sh failed \n" + printf "\nerror: utils/func-test-portal.sh failed \n" exit 1 fi printf "\nrunning verify... \n" if ! "$DIR"/verify.sh then - printf "\nerror: bin/verify.sh failed \n" + printf "\nerror: utils/verify.sh failed \n" exit 1 fi fi diff --git a/bin/update-support-user.sh b/utils/update-support-user.sh old mode 100755 new mode 100644 similarity index 100% rename from bin/update-support-user.sh rename to utils/update-support-user.sh diff --git a/bin/verify.sh b/utils/verify.sh old mode 100755 new mode 100644 similarity index 83% rename from bin/verify.sh rename to utils/verify.sh index ec4b8f57f..f77cd4a35 --- a/bin/verify.sh +++ b/utils/verify.sh @@ -5,7 +5,7 @@ DIR=$(cd -P -- "$(dirname -- "$0")" && pwd -P) echo 'Running tests...' cd "$DIR/../test/api/integration" -make build && make run -- sbt ";validate;verify" +make build && make run WITH_ARGS="sbt ';validate;verify'" verify_result=$? 
if [ ${verify_result} -eq 0 ]; then From 3e5c179af3b2f03ca013c767e79b6d9ec4bb4a3c Mon Sep 17 00:00:00 2001 From: Ryan Emerle Date: Wed, 20 Oct 2021 09:27:40 -0400 Subject: [PATCH 16/82] Update permissions --- .github/workflows/clean.yml | 0 modules/docs/src/main/mdoc/api/create-group.md | 0 modules/docs/src/main/mdoc/api/create-recordset.md | 0 modules/docs/src/main/mdoc/api/create-zone.md | 0 modules/docs/src/main/mdoc/api/delete-group.md | 0 modules/docs/src/main/mdoc/api/delete-recordset.md | 0 modules/docs/src/main/mdoc/api/delete-zone.md | 0 modules/docs/src/main/mdoc/api/get-group.md | 0 modules/docs/src/main/mdoc/api/get-recordset-change.md | 0 modules/docs/src/main/mdoc/api/get-recordset.md | 0 modules/docs/src/main/mdoc/api/get-zone-by-id.md | 0 modules/docs/src/main/mdoc/api/index.md | 0 modules/docs/src/main/mdoc/api/list-group-activity.md | 0 modules/docs/src/main/mdoc/api/list-group-admins.md | 0 modules/docs/src/main/mdoc/api/list-group-members.md | 0 modules/docs/src/main/mdoc/api/list-groups.md | 0 modules/docs/src/main/mdoc/api/list-recordset-changes.md | 0 modules/docs/src/main/mdoc/api/list-recordsets-by-zone.md | 0 modules/docs/src/main/mdoc/api/list-zone-changes.md | 0 modules/docs/src/main/mdoc/api/list-zones.md | 0 modules/docs/src/main/mdoc/api/membership-model.md | 0 modules/docs/src/main/mdoc/api/recordset-model.md | 0 modules/docs/src/main/mdoc/api/sync-zone.md | 0 modules/docs/src/main/mdoc/api/update-group.md | 0 modules/docs/src/main/mdoc/api/update-recordset.md | 0 modules/docs/src/main/mdoc/api/update-zone.md | 0 modules/docs/src/main/mdoc/api/zone-model.md | 0 modules/docs/src/main/mdoc/faq.md | 0 modules/docs/src/main/mdoc/getting-help.md | 0 modules/docs/src/main/mdoc/tools.md | 0 test/api/functional/test/.gitignore | 0 test/api/functional/test/pytest.sh | 0 test/api/functional/test/run.sh | 0 test/portal/functional/run.sh | 0 utils/add-license-headers.sh | 0 utils/clean-vinyldns-containers.sh | 0 utils/func-test-api.sh | 0 
utils/func-test-portal.sh | 0 utils/quickstart-vinyldns.sh | 0 utils/release.sh | 0 utils/update-support-user.sh | 0 utils/verify.sh | 0 42 files changed, 0 insertions(+), 0 deletions(-) mode change 100644 => 100755 .github/workflows/clean.yml mode change 100755 => 100644 modules/docs/src/main/mdoc/api/create-group.md mode change 100755 => 100644 modules/docs/src/main/mdoc/api/create-recordset.md mode change 100755 => 100644 modules/docs/src/main/mdoc/api/create-zone.md mode change 100755 => 100644 modules/docs/src/main/mdoc/api/delete-group.md mode change 100755 => 100644 modules/docs/src/main/mdoc/api/delete-recordset.md mode change 100755 => 100644 modules/docs/src/main/mdoc/api/delete-zone.md mode change 100755 => 100644 modules/docs/src/main/mdoc/api/get-group.md mode change 100755 => 100644 modules/docs/src/main/mdoc/api/get-recordset-change.md mode change 100755 => 100644 modules/docs/src/main/mdoc/api/get-recordset.md mode change 100755 => 100644 modules/docs/src/main/mdoc/api/get-zone-by-id.md mode change 100755 => 100644 modules/docs/src/main/mdoc/api/index.md mode change 100755 => 100644 modules/docs/src/main/mdoc/api/list-group-activity.md mode change 100755 => 100644 modules/docs/src/main/mdoc/api/list-group-admins.md mode change 100755 => 100644 modules/docs/src/main/mdoc/api/list-group-members.md mode change 100755 => 100644 modules/docs/src/main/mdoc/api/list-groups.md mode change 100755 => 100644 modules/docs/src/main/mdoc/api/list-recordset-changes.md mode change 100755 => 100644 modules/docs/src/main/mdoc/api/list-recordsets-by-zone.md mode change 100755 => 100644 modules/docs/src/main/mdoc/api/list-zone-changes.md mode change 100755 => 100644 modules/docs/src/main/mdoc/api/list-zones.md mode change 100755 => 100644 modules/docs/src/main/mdoc/api/membership-model.md mode change 100755 => 100644 modules/docs/src/main/mdoc/api/recordset-model.md mode change 100755 => 100644 modules/docs/src/main/mdoc/api/sync-zone.md mode change 100755 => 100644 
modules/docs/src/main/mdoc/api/update-group.md mode change 100755 => 100644 modules/docs/src/main/mdoc/api/update-recordset.md mode change 100755 => 100644 modules/docs/src/main/mdoc/api/update-zone.md mode change 100755 => 100644 modules/docs/src/main/mdoc/api/zone-model.md mode change 100755 => 100644 modules/docs/src/main/mdoc/faq.md mode change 100755 => 100644 modules/docs/src/main/mdoc/getting-help.md mode change 100755 => 100644 modules/docs/src/main/mdoc/tools.md mode change 100644 => 100755 test/api/functional/test/.gitignore mode change 100644 => 100755 test/api/functional/test/pytest.sh mode change 100644 => 100755 test/api/functional/test/run.sh mode change 100644 => 100755 test/portal/functional/run.sh mode change 100644 => 100755 utils/add-license-headers.sh mode change 100644 => 100755 utils/clean-vinyldns-containers.sh mode change 100644 => 100755 utils/func-test-api.sh mode change 100644 => 100755 utils/func-test-portal.sh mode change 100644 => 100755 utils/quickstart-vinyldns.sh mode change 100644 => 100755 utils/release.sh mode change 100644 => 100755 utils/update-support-user.sh mode change 100644 => 100755 utils/verify.sh diff --git a/.github/workflows/clean.yml b/.github/workflows/clean.yml old mode 100644 new mode 100755 diff --git a/modules/docs/src/main/mdoc/api/create-group.md b/modules/docs/src/main/mdoc/api/create-group.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/api/create-recordset.md b/modules/docs/src/main/mdoc/api/create-recordset.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/api/create-zone.md b/modules/docs/src/main/mdoc/api/create-zone.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/api/delete-group.md b/modules/docs/src/main/mdoc/api/delete-group.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/api/delete-recordset.md b/modules/docs/src/main/mdoc/api/delete-recordset.md old mode 100755 new mode 100644 diff --git 
a/modules/docs/src/main/mdoc/api/delete-zone.md b/modules/docs/src/main/mdoc/api/delete-zone.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/api/get-group.md b/modules/docs/src/main/mdoc/api/get-group.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/api/get-recordset-change.md b/modules/docs/src/main/mdoc/api/get-recordset-change.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/api/get-recordset.md b/modules/docs/src/main/mdoc/api/get-recordset.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/api/get-zone-by-id.md b/modules/docs/src/main/mdoc/api/get-zone-by-id.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/api/index.md b/modules/docs/src/main/mdoc/api/index.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/api/list-group-activity.md b/modules/docs/src/main/mdoc/api/list-group-activity.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/api/list-group-admins.md b/modules/docs/src/main/mdoc/api/list-group-admins.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/api/list-group-members.md b/modules/docs/src/main/mdoc/api/list-group-members.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/api/list-groups.md b/modules/docs/src/main/mdoc/api/list-groups.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/api/list-recordset-changes.md b/modules/docs/src/main/mdoc/api/list-recordset-changes.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/api/list-recordsets-by-zone.md b/modules/docs/src/main/mdoc/api/list-recordsets-by-zone.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/api/list-zone-changes.md b/modules/docs/src/main/mdoc/api/list-zone-changes.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/api/list-zones.md 
b/modules/docs/src/main/mdoc/api/list-zones.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/api/membership-model.md b/modules/docs/src/main/mdoc/api/membership-model.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/api/recordset-model.md b/modules/docs/src/main/mdoc/api/recordset-model.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/api/sync-zone.md b/modules/docs/src/main/mdoc/api/sync-zone.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/api/update-group.md b/modules/docs/src/main/mdoc/api/update-group.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/api/update-recordset.md b/modules/docs/src/main/mdoc/api/update-recordset.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/api/update-zone.md b/modules/docs/src/main/mdoc/api/update-zone.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/api/zone-model.md b/modules/docs/src/main/mdoc/api/zone-model.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/faq.md b/modules/docs/src/main/mdoc/faq.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/getting-help.md b/modules/docs/src/main/mdoc/getting-help.md old mode 100755 new mode 100644 diff --git a/modules/docs/src/main/mdoc/tools.md b/modules/docs/src/main/mdoc/tools.md old mode 100755 new mode 100644 diff --git a/test/api/functional/test/.gitignore b/test/api/functional/test/.gitignore old mode 100644 new mode 100755 diff --git a/test/api/functional/test/pytest.sh b/test/api/functional/test/pytest.sh old mode 100644 new mode 100755 diff --git a/test/api/functional/test/run.sh b/test/api/functional/test/run.sh old mode 100644 new mode 100755 diff --git a/test/portal/functional/run.sh b/test/portal/functional/run.sh old mode 100644 new mode 100755 diff --git a/utils/add-license-headers.sh b/utils/add-license-headers.sh old mode 100644 new mode 
100755 diff --git a/utils/clean-vinyldns-containers.sh b/utils/clean-vinyldns-containers.sh old mode 100644 new mode 100755 diff --git a/utils/func-test-api.sh b/utils/func-test-api.sh old mode 100644 new mode 100755 diff --git a/utils/func-test-portal.sh b/utils/func-test-portal.sh old mode 100644 new mode 100755 diff --git a/utils/quickstart-vinyldns.sh b/utils/quickstart-vinyldns.sh old mode 100644 new mode 100755 diff --git a/utils/release.sh b/utils/release.sh old mode 100644 new mode 100755 diff --git a/utils/update-support-user.sh b/utils/update-support-user.sh old mode 100644 new mode 100755 diff --git a/utils/verify.sh b/utils/verify.sh old mode 100644 new mode 100755 From 9ce466aa0c683e93304793ed67b62be6fcf342ab Mon Sep 17 00:00:00 2001 From: "Emerle, Ryan" Date: Thu, 21 Oct 2021 11:44:53 -0400 Subject: [PATCH 17/82] Fix microsite broken dependencies and update docs --- AUTHORS.md | 10 +- build.sbt | 12 +- .../test/scala/vinyldns/api/CatsHelpers.scala | 8 +- .../api/backend/dns/DnsBackendSpec.scala | 17 ++- .../src/main/mdoc/api/approve-batchchange.md | 2 +- .../src/main/mdoc/api/batchchange-errors.md | 16 +-- .../src/main/mdoc/api/batchchange-model.md | 10 +- .../src/main/mdoc/api/create-batchchange.md | 14 +- .../docs/src/main/mdoc/api/create-group.md | 4 +- .../src/main/mdoc/api/create-recordset.md | 4 +- modules/docs/src/main/mdoc/api/create-zone.md | 4 +- .../docs/src/main/mdoc/api/delete-group.md | 2 +- .../src/main/mdoc/api/delete-recordset.md | 2 +- modules/docs/src/main/mdoc/api/delete-zone.md | 2 +- .../docs/src/main/mdoc/api/get-batchchange.md | 2 +- modules/docs/src/main/mdoc/api/get-group.md | 2 +- .../src/main/mdoc/api/get-recordset-change.md | 2 +- .../docs/src/main/mdoc/api/get-recordset.md | 2 +- .../docs/src/main/mdoc/api/get-zone-by-id.md | 2 +- .../src/main/mdoc/api/get-zone-by-name.md | 2 +- .../src/main/mdoc/api/list-batchchanges.md | 2 +- .../src/main/mdoc/api/list-group-activity.md | 2 +- .../src/main/mdoc/api/list-group-admins.md 
| 2 +- .../src/main/mdoc/api/list-group-members.md | 2 +- modules/docs/src/main/mdoc/api/list-groups.md | 2 +- .../main/mdoc/api/list-recordset-changes.md | 2 +- .../main/mdoc/api/list-recordsets-by-zone.md | 2 +- .../main/mdoc/api/list-recordsets-global.md | 4 +- .../src/main/mdoc/api/list-zone-changes.md | 2 +- modules/docs/src/main/mdoc/api/list-zones.md | 2 +- .../src/main/mdoc/api/membership-model.md | 4 +- .../docs/src/main/mdoc/api/recordset-model.md | 88 ++++++------ .../src/main/mdoc/api/reject-batchchange.md | 6 +- modules/docs/src/main/mdoc/api/sync-zone.md | 2 +- .../docs/src/main/mdoc/api/update-group.md | 4 +- .../src/main/mdoc/api/update-recordset.md | 4 +- modules/docs/src/main/mdoc/api/update-zone.md | 4 +- modules/docs/src/main/mdoc/api/zone-model.md | 56 ++++---- modules/docs/src/main/mdoc/faq.md | 6 +- modules/docs/src/main/mdoc/getting-help.md | 4 +- modules/docs/src/main/mdoc/index.md | 8 +- .../src/main/mdoc/operator/config-portal.md | 50 +------ modules/docs/src/main/mdoc/operator/pre.md | 136 ++++++++---------- .../docs/src/main/mdoc/operator/setup-api.md | 1 - .../src/main/mdoc/operator/setup-mysql.md | 6 +- .../docs/src/main/mdoc/operator/setup-sqs.md | 2 +- modules/docs/src/main/mdoc/permissions.md | 4 +- .../src/main/mdoc/portal/batch-changes.md | 6 +- .../docs/src/main/mdoc/portal/dns-changes.md | 6 +- .../src/main/mdoc/portal/manage-records.md | 2 +- .../docs/src/main/mdoc/portal/search-zones.md | 2 +- modules/docs/src/main/mdoc/tools.md | 2 +- project/plugins.sbt | 4 +- 53 files changed, 254 insertions(+), 294 deletions(-) diff --git a/AUTHORS.md b/AUTHORS.md index ba5f2dbad..bd5098d89 100644 --- a/AUTHORS.md +++ b/AUTHORS.md @@ -1,14 +1,16 @@ # Authors -This project would not be possible without the generous contributions of many people. -Thank you! If you have contributed in any way, but do not see your name here, please open a PR to add yourself (in alphabetical order by last name)! 
+This project would not be possible without the generous contributions of many people. Thank you! If you have contributed +in any way, but do not see your name here, please open a PR to add yourself (in alphabetical order by last name)! ## DNS SMEs + - Joe Crowe - David Back - Hong Ye ## Contributors + - Mike Ball - Tommy Barker - Robert Barrimond @@ -17,6 +19,7 @@ Thank you! If you have contributed in any way, but do not see your name here, pl - Maulon Byron - Shirlette Chambers - Varsha Chandrashekar +- Paul Cleary - Peter Cline - Kemar Cockburn - Luke Cori @@ -30,6 +33,7 @@ Thank you! If you have contributed in any way, but do not see your name here, pl - Krista Khare - Patrick Lee - Sheree Liu +- Michael Ly - Deepak Mohanakrishnan - Jon Moore - Palash Nigam @@ -41,6 +45,7 @@ Thank you! If you have contributed in any way, but do not see your name here, pl - Timo Schmid - Trent Schmidt - Ghafar Shah +- Rebecca Star - Jess Stodola - Juan Valencia - Anastasia Vishnyakova @@ -48,3 +53,4 @@ Thank you! 
If you have contributed in any way, but do not see your name here, pl - Fei Wan - Andrew Wang - Peter Willis +- Britney Wright diff --git a/build.sbt b/build.sbt index 168cb7306..966b1ced4 100644 --- a/build.sbt +++ b/build.sbt @@ -281,13 +281,13 @@ lazy val docSettings = Seq( micrositeGithubOwner := "vinyldns", micrositeGithubRepo := "vinyldns", micrositeName := "VinylDNS", - micrositeDescription := "DNS Governance", + micrositeDescription := "DNS Automation and Governance", micrositeAuthor := "VinylDNS", - micrositeHomepage := "http://vinyldns.io", + micrositeHomepage := "https://vinyldns.io", micrositeDocumentationUrl := "/api", - micrositeGitterChannelUrl := "vinyldns/Lobby", - micrositeTwitterCreator := "@vinyldns", micrositeDocumentationLabelDescription := "API Documentation", + micrositeHighlightLanguages ++= Seq("json"), + micrositeGitterChannel := false, micrositeExtraMdFiles := Map( file("CONTRIBUTING.md") -> ExtraMdFileConfig( "contributing.md", @@ -300,8 +300,6 @@ lazy val docSettings = Seq( ghpagesNoJekyll := false, fork in mdoc := true, mdocIn := (sourceDirectory in Compile).value / "mdoc", - micrositeCssDirectory := (resourceDirectory in Compile).value / "microsite" / "css", - micrositeCompilingDocsTool := WithMdoc, micrositeFavicons := Seq( MicrositeFavicon("favicon16x16.png", "16x16"), MicrositeFavicon("favicon32x32.png", "32x32") @@ -313,7 +311,7 @@ lazy val docSettings = Seq( ) ), micrositeFooterText := None, - micrositeHighlightTheme := "atom-one-light", + micrositeHighlightTheme := "hybrid", includeFilter in makeSite := "*.html" | "*.css" | "*.png" | "*.jpg" | "*.jpeg" | "*.gif" | "*.js" | "*.swf" | "*.md" | "*.webm" | "*.ico" | "CNAME" | "*.yml" | "*.svg" | "*.json" | "*.csv" ) diff --git a/modules/api/src/test/scala/vinyldns/api/CatsHelpers.scala b/modules/api/src/test/scala/vinyldns/api/CatsHelpers.scala index ee82bd253..c3608c754 100644 --- a/modules/api/src/test/scala/vinyldns/api/CatsHelpers.scala +++ 
b/modules/api/src/test/scala/vinyldns/api/CatsHelpers.scala @@ -32,7 +32,7 @@ trait CatsHelpers { private implicit val cs: ContextShift[IO] = IO.contextShift(scala.concurrent.ExecutionContext.global) - def await[E, T](f: => IO[T], duration: FiniteDuration = 1.second): T = { + def await[E, T](f: => IO[T], duration: FiniteDuration = 60.seconds): T = { val i: IO[Either[E, T]] = f.attempt.map { case Right(ok) => Right(ok.asInstanceOf[T]) case Left(e) => Left(e.asInstanceOf[E]) @@ -43,18 +43,18 @@ trait CatsHelpers { // Waits for the future to complete, then returns the value as an Either[Throwable, T] def awaitResultOf[E, T]( f: => IO[Either[E, T]], - duration: FiniteDuration = 1.second + duration: FiniteDuration = 60.seconds ): Either[E, T] = { val timeOut = IO.sleep(duration) *> IO(new RuntimeException("Timed out waiting for result")) IO.race(timeOut, f).unsafeRunSync().toOption.get } // Assumes that the result of the future operation will be successful, this will fail on a left disjunction - def rightResultOf[E, T](f: => IO[Either[E, T]], duration: FiniteDuration = 1.second): T = + def rightResultOf[E, T](f: => IO[Either[E, T]], duration: FiniteDuration = 60.seconds): T = rightValue(awaitResultOf[E, T](f, duration)) // Assumes that the result of the future operation will fail, this will error on a right disjunction - def leftResultOf[E, T](f: => IO[Either[E, T]], duration: FiniteDuration = 1.second): E = + def leftResultOf[E, T](f: => IO[Either[E, T]], duration: FiniteDuration = 60.seconds): E = leftValue(awaitResultOf(f, duration)) def leftValue[E, T](t: Either[E, T]): E = t match { diff --git a/modules/api/src/test/scala/vinyldns/api/backend/dns/DnsBackendSpec.scala b/modules/api/src/test/scala/vinyldns/api/backend/dns/DnsBackendSpec.scala index d8cc2276e..a095c41f3 100644 --- a/modules/api/src/test/scala/vinyldns/api/backend/dns/DnsBackendSpec.scala +++ b/modules/api/src/test/scala/vinyldns/api/backend/dns/DnsBackendSpec.scala @@ -16,8 +16,6 @@ package 
vinyldns.api.backend.dns -import java.net.{InetAddress, SocketAddress} - import cats.scalatest.EitherMatchers import org.joda.time.DateTime import org.mockito.ArgumentCaptor @@ -36,6 +34,7 @@ import vinyldns.core.domain.record.RecordType._ import vinyldns.core.domain.record._ import vinyldns.core.domain.zone.{Zone, ZoneConnection} +import java.net.{InetAddress, SocketAddress} import scala.collection.JavaConverters._ class DnsBackendSpec @@ -93,7 +92,9 @@ class DnsBackendSpec ): Either[Throwable, DnsQuery] = name match { case "try-again" => - Right(new DnsQuery(new Lookup("try-again.vinyldns.", 0, 0), new Name(testZone.name))) + val lookup = new Lookup("try-again.vinyldns.", 0, 0) + lookup.setResolver(mockResolver) + Right(new DnsQuery(lookup, new Name(testZone.name))) case _ => Right(mockDnsQuery) } } @@ -101,7 +102,9 @@ class DnsBackendSpec override def beforeEach(): Unit = { doReturn(mockMessage).when(mockMessage).clone() - doReturn(new java.util.ArrayList[DNS.Record](0)).when(mockMessage).getSection(DNS.Section.ADDITIONAL) + doReturn(new java.util.ArrayList[DNS.Record](0)) + .when(mockMessage) + .getSection(DNS.Section.ADDITIONAL) doReturn(DNS.Rcode.NOERROR).when(mockMessage).getRcode doReturn(mockMessage).when(mockResolver).send(messageCaptor.capture()) doReturn(DNS.Lookup.SUCCESSFUL).when(mockDnsQuery).result @@ -609,6 +612,12 @@ class DnsBackendSpec "return an error if receiving TRY_AGAIN from lookup error" in { val rsc = addRsChange(rs = testA.copy(name = "try-again")) + val tryAgainMessage = mock[DNS.Message] + val mockHeader = mock[DNS.Header] + doReturn(mockHeader).when(tryAgainMessage).getHeader + doReturn(DNS.Rcode.NOTIMP).when(mockHeader).getRcode + doReturn(tryAgainMessage).when(mockResolver).send(any[DNS.Message]) + underTest .resolve(rsc.recordSet.name, rsc.zone.name, rsc.recordSet.typ) .attempt diff --git a/modules/docs/src/main/mdoc/api/approve-batchchange.md b/modules/docs/src/main/mdoc/api/approve-batchchange.md index 273e3ae74..eeb0d0feb 100644 
--- a/modules/docs/src/main/mdoc/api/approve-batchchange.md +++ b/modules/docs/src/main/mdoc/api/approve-batchchange.md @@ -69,7 +69,7 @@ reviewTimestamp | date-time | The timestamp (UTC) of when the batch change was #### EXAMPLE RESPONSE -``` +```json { "userId": "vinyl", "userName": "vinyl201", diff --git a/modules/docs/src/main/mdoc/api/batchchange-errors.md b/modules/docs/src/main/mdoc/api/batchchange-errors.md index 8935b07f7..dd649eda5 100644 --- a/modules/docs/src/main/mdoc/api/batchchange-errors.md +++ b/modules/docs/src/main/mdoc/api/batchchange-errors.md @@ -26,7 +26,7 @@ in the DNS backend. #### EXAMPLE ERROR RESPONSE BY CHANGE -``` +```json [ { "changeType": "Add", @@ -46,7 +46,7 @@ in the DNS backend. "cname": "test.example.com." }, "errors": [ - "Record with name "duplicate.example.com." is not unique in the batch change. CNAME record cannot use duplicate name." + "Record with name \"duplicate.example.com.\" is not unique in the batch change. CNAME record cannot use duplicate name." ] }, { @@ -60,7 +60,7 @@ in the DNS backend. }, { "changeType": "Add", - "inputName": "bad-ttl-and-invalid-name$.sample.com.”, + "inputName": "bad-ttl-and-invalid-name$.sample.com.", "type": "A", "ttl": 29, "record": { @@ -143,7 +143,7 @@ Zone Discovery Failed: zone for "" does not exist in VinylDNS. If zone ex Given an inputName, VinylDNS will determine the record and zone name for the requested change. For most records, the record names are the same as the zone name (apex), or split at at the first '.', so the inputName 'rname.zone.name.com' will be split into record name 'rname' and zone name 'zone.name.com' (or 'rname.zone.name.com' for both the record and zone name if it's an apex record). -For PTR records, there is logic to determine the appropriate reverse zone from the given IP address. +For `PTR` records, there is logic to determine the appropriate reverse zone from the given IP address. 
If this logic cannot find a matching zone in VinylDNS, you will see this error. In that case, you need to connect to the zone in VinylDNS. @@ -350,7 +350,7 @@ CNAME conflict: CNAME record names must be unique. Existing record with name " @@ -445,7 +445,7 @@ CNAME cannot be the same name as zone "". ##### Details: -CNAME records cannot be `@` or the same name as the zone. +`CNAME` records cannot be `@` or the same name as the zone. ### FULL-REQUEST ERRORS @@ -505,7 +505,7 @@ If there are issues with the JSON provided in a batch change request, errors wil ##### EXAMPLE ERROR MESSAGES: -``` +```json { "errors": [ "Missing BatchChangeInput.changes" ] @@ -522,7 +522,7 @@ If there are issues with the JSON provided in a batch change request, errors wil { "errors": [ - “Invalid RecordType” + "Invalid RecordType" ] } ``` diff --git a/modules/docs/src/main/mdoc/api/batchchange-model.md b/modules/docs/src/main/mdoc/api/batchchange-model.md index 1d902195d..6aff9bffd 100644 --- a/modules/docs/src/main/mdoc/api/batchchange-model.md +++ b/modules/docs/src/main/mdoc/api/batchchange-model.md @@ -20,7 +20,7 @@ Batch change is an alternative to submitting individual [RecordSet](recordset-mo - The ability to accept multiple changes in a single API call. - The ability to include records of multiple record types across multiple zones. -Input names are entered as fully-qualified domain names (or IP addresses for **PTR** records), so users don't have to think in record/zone context. +Input names are entered as fully-qualified domain names (or IP addresses for `PTR` records), so users don't have to think in record/zone context. - All record validations are processed simultaneously. [Fatal errors](batchchange-errors.html#fatal-errors) for any change in the batch will result in a **400** response and none will be applied. - Support for [manual review](../operator/config-api.html#additional-configuration-settings) if enabled in your VinylDNS instance.
@@ -35,7 +35,7 @@ A batch change consists of multiple single changes which can be a combination of To update an existing record, you must delete the record first and add the record again with the updated changes. Batch changes are also susceptible to the following restrictions: -- Current supported record types for batch change are: **A**, **AAAA**, **CNAME**, and **PTR**. +- Current supported record types for batch change are: `A`, `AAAA`, `CNAME`, and `PTR`. - Batch change requests must contain at least one change. - The maximum number of single changes within a batch change depends on the instance of VinylDNS. Contact your VinylDNS administrators to find the batch change limit for your instance. - Access permissions will follow existing rules (admin group or ACL access). Note that an update (delete and add of the same record name, zone and record type combination) requires **Write** access. @@ -74,7 +74,7 @@ name | type | description | ------------ | :------------ | :---------- | changeType | ChangeInputType | Type of change input. Can either be an **Add** or **DeleteRecordSet**. [See more details](#changetype-values) about behavior of `changeType` interaction. | inputName | string | The fully-qualified domain name of the record which was provided in the create batch request. | -type | RecordType | Type of DNS record, supported records for batch changes are currently: **A**, **AAAA**, **CNAME**, and **PTR**. | +type | RecordType | Type of DNS record, supported records for batch changes are currently: `A`, `AAAA`, `CNAME`, and `PTR`. | ttl | long | The time-to-live in seconds. | record | [RecordData](recordset-model.html#record-data) | The data added for this record, which varies by record type. | status | SingleChangeStatus | Status for this change. Can be one of: **Pending**, **Complete**, **Failed**, **NeedsReview** or **Rejected**. 
| @@ -94,7 +94,7 @@ name | type | description | ------------ | :------------ | :---------- | changeType | ChangeInputType | Type of change input. Can either be an **Add** or **DeleteRecordSet**. [See more details](#changetype-values) about behavior of `changeType` interaction. | inputName | string | The fully-qualified domain name of the record which was provided in the create batch request. | -type | RecordType | Type of DNS record, supported records for batch changes are currently: **A**, **AAAA**, **CNAME**, and **PTR**. | +type | RecordType | Type of DNS record, supported records for batch changes are currently: `A`, `AAAA`, `CNAME`, and `PTR`. | record | [RecordData](recordset-model.html#record-data) | Optional. The data deleted for this record, which varies by record type. If not provided, the entire DNS recordset was deleted. | status | SingleChangeStatus | Status for this change. Can be one of: **Pending**, **Complete**, **Failed**, **NeedsReview** or **Rejected**. | recordName | string | The name of the record. Record names for the apex will be match the zone name (including terminating dot). | @@ -124,7 +124,7 @@ There are two valid `changeType`s for a `SingleChange`: **Add** and **DeleteReco Successful batch change response example with a [SingleAddChange](#singleaddchange-attributes) and a [SingleDeleteRRSetChange](#singledeleterrsetchange-attributes). -``` +```json { "userId": "vinyl", "userName": "vinyl201", diff --git a/modules/docs/src/main/mdoc/api/create-batchchange.md b/modules/docs/src/main/mdoc/api/create-batchchange.md index cad5b26f8..258ccc172 100644 --- a/modules/docs/src/main/mdoc/api/create-batchchange.md +++ b/modules/docs/src/main/mdoc/api/create-batchchange.md @@ -8,7 +8,7 @@ section: "api" Creates a batch change with [SingleAddChanges](batchchange-model.html#singleaddchange-attributes) and/or [SingleDeleteRRSetChanges](batchchange-model.html#singledeleterrsetchange-attributes) across different zones. 
A delete and add of the same record will be treated as an update on that record set. Regardless of the input order in the batch change, all deletes for the same recordset will be logically applied before the adds. -Current supported record types for creating a batch change are: **A**, **AAAA**, **CNAME**, **MX**, **PTR**, **TXT**. A batch must contain at least one change and no more than 20 changes. +Current supported record types for creating a batch change are: `A`, `AAAA`, `CNAME`, `MX`, `PTR`, `TXT`. A batch must contain at least one change and no more than 20 changes. Supported record types for records in shared zones may vary. Contact your VinylDNS administrators to find the allowed record types. This does not apply to zone administrators or users with specific ACL access rules. @@ -33,8 +33,8 @@ allowManualReview | boolean | no | Optional override to control wheth name | type | required? | description | ------------ | :------------ | ----------- | :---------- | changeType | ChangeInputType | yes | Type of change input. Must be set to **Add** for *AddChangeInput*. | -inputName | string | yes | The fully qualified domain name of the record being added. For **PTR**, the input name is a valid IPv4 or IPv6 address. | -type | RecordType | yes | Type of DNS record. Supported records for batch changes are currently: **A**, **AAAA**, **CNAME**, and **PTR**. | +inputName | string | yes | The fully qualified domain name of the record being added. For `PTR`, the input name is a valid IPv4 or IPv6 address. | +type | RecordType | yes | Type of DNS record. Supported records for batch changes are currently: `A`, `AAAA`, `CNAME`, and `PTR`. | ttl | long | no | The time-to-live in seconds. The minimum and maximum values are 30 and 2147483647, respectively. If excluded, this will be set to the system default for new adds, or the existing TTL for updates | record | [RecordData](recordset-model.html#record-data) | yes | The data for the record. 
| @@ -44,12 +44,12 @@ name | type | required? | description | ------------ | :------------ | ----------- | :---------- | changeType | ChangeInputType | yes | Type of change input. Must be **DeleteRecordSet** for *DeleteChangeInput*. | inputName | string | yes | The fully qualified domain name of the record being deleted. | -type | RecordType | yes | Type of DNS record. Supported records for batch changes are currently: **A**, **AAAA**, **CNAME**, and **PTR**. | +type | RecordType | yes | Type of DNS record. Supported records for batch changes are currently: `A`, `AAAA`, `CNAME`, and `PTR`. | record | [RecordData](recordset-model.html#record-data) | no | The data for the record. If specified, only this DNS entry for the existing DNS recordset will be deleted; if unspecified, the entire DNS recordset will be deleted. | #### EXAMPLE HTTP REQUEST -``` +```json { "comments": "this is optional", "ownerGroupId": "f42385e4-5675-38c0-b42f-64105e743bfe", @@ -98,7 +98,7 @@ record | [RecordData](recordset-model.html#record-data) | no | Th } ``` -The first two items in the changes list are SingleAddChanges of an **A** record and a **PTR** record. Note that for the **PTR** record, the *inputName* is a valid IP address. The third item is a delete of a **CNAME** record. The last two items represent an update (delete & add) of an **AAAA** record with the fully qualified domain name "update.another.example.com.". +The first two items in the changes list are SingleAddChanges of an `A` record and a `PTR` record. Note that for the `PTR` record, the *inputName* is a valid IP address. The third item is a delete of a `CNAME` record. The last two items represent an update (delete & add) of an `AAAA` record with the fully qualified domain name "update.another.example.com.". 
#### HTTP RESPONSE TYPES @@ -121,7 +121,7 @@ On success, the response from create batch change includes the fields the user i #### EXAMPLE RESPONSE -``` +```json { "userId": "vinyl", "userName": "vinyl201", diff --git a/modules/docs/src/main/mdoc/api/create-group.md b/modules/docs/src/main/mdoc/api/create-group.md index 680d695c3..a535e74ed 100644 --- a/modules/docs/src/main/mdoc/api/create-group.md +++ b/modules/docs/src/main/mdoc/api/create-group.md @@ -24,7 +24,7 @@ admins | Array of User id objects | yes | Set of User ids #### EXAMPLE HTTP REQUEST -``` +```json { "name": "some-group", "email": "test@example.com", @@ -67,7 +67,7 @@ admins | Array of User ID objects | IDs of admins of the group | #### EXAMPLE RESPONSE -``` +```json { "id": "6f8afcda-7529-4cad-9f2d-76903f4b1aca", "name": "some-group", diff --git a/modules/docs/src/main/mdoc/api/create-recordset.md b/modules/docs/src/main/mdoc/api/create-recordset.md index 368e4eed4..ba6781577 100644 --- a/modules/docs/src/main/mdoc/api/create-recordset.md +++ b/modules/docs/src/main/mdoc/api/create-recordset.md @@ -24,7 +24,7 @@ records | array of record data | yes | record data for recordset, see [Re ownerGroupId | string | no | Record ownership assignment, applicable if the recordset is in a [shared zone](zone-model.html#shared-zones) | #### EXAMPLE HTTP REQUEST -``` +```json { "name": "foo", "type": "A", @@ -66,7 +66,7 @@ singleBatchChangeIds | array of SingleBatchChange Id objects | If the recordse #### EXAMPLE RESPONSE -``` +```json { "zone": { "name": "vinyl.", diff --git a/modules/docs/src/main/mdoc/api/create-zone.md b/modules/docs/src/main/mdoc/api/create-zone.md index d30a8f6fa..c97c3da71 100644 --- a/modules/docs/src/main/mdoc/api/create-zone.md +++ b/modules/docs/src/main/mdoc/api/create-zone.md @@ -18,7 +18,7 @@ if no info is provided the default VinylDNS connections will be used **zone fields** - adminGroupId, name, and email are required - refer to [zone model](zone-model.html) | #### EXAMPLE HTTP 
REQUEST -``` +```json { "adminGroupId": "9b22b686-54bc-47fb-a8f8-cdc48e6d04ae", "name": "dummy.", @@ -49,7 +49,7 @@ id | string | The ID of the change. This is not the ID of the #### EXAMPLE RESPONSE -``` +```json { "status": "Pending", "zone": { diff --git a/modules/docs/src/main/mdoc/api/delete-group.md b/modules/docs/src/main/mdoc/api/delete-group.md index 9eaab294c..30cb3a1db 100644 --- a/modules/docs/src/main/mdoc/api/delete-group.md +++ b/modules/docs/src/main/mdoc/api/delete-group.md @@ -37,7 +37,7 @@ admins | Array of User ID objects | IDs of admins of the group | #### EXAMPLE RESPONSE -``` +```json { "id": "6f8afcda-7529-4cad-9f2d-76903f4b1aca", "name": "some-group", diff --git a/modules/docs/src/main/mdoc/api/delete-recordset.md b/modules/docs/src/main/mdoc/api/delete-recordset.md index 1d1242e59..aa4d05c7b 100644 --- a/modules/docs/src/main/mdoc/api/delete-recordset.md +++ b/modules/docs/src/main/mdoc/api/delete-recordset.md @@ -36,7 +36,7 @@ id | string | The ID of the change. 
This is not the ID of the #### EXAMPLE RESPONSE -``` +```json { "zone": { "name": "vinyl.", diff --git a/modules/docs/src/main/mdoc/api/delete-zone.md b/modules/docs/src/main/mdoc/api/delete-zone.md index a88517d65..3295fe2a2 100644 --- a/modules/docs/src/main/mdoc/api/delete-zone.md +++ b/modules/docs/src/main/mdoc/api/delete-zone.md @@ -40,7 +40,7 @@ status | string | The status of the zone change | #### EXAMPLE RESPONSE -``` +```json { "status": "Pending", "zone": { diff --git a/modules/docs/src/main/mdoc/api/get-batchchange.md b/modules/docs/src/main/mdoc/api/get-batchchange.md index f3bd10ded..a26570a48 100644 --- a/modules/docs/src/main/mdoc/api/get-batchchange.md +++ b/modules/docs/src/main/mdoc/api/get-batchchange.md @@ -50,7 +50,7 @@ cancelledTimestamp | date-time | Optional timestamp (UTC) if the batch change wa #### EXAMPLE RESPONSE -``` +```json { "userId": "vinyl", "userName": "vinyl201", diff --git a/modules/docs/src/main/mdoc/api/get-group.md b/modules/docs/src/main/mdoc/api/get-group.md index 9b6f60e0e..fa9725161 100644 --- a/modules/docs/src/main/mdoc/api/get-group.md +++ b/modules/docs/src/main/mdoc/api/get-group.md @@ -35,7 +35,7 @@ admins | Array of User Id objects | Ids of admins of the group | #### EXAMPLE RESPONSE -``` +```json { "id": "6f8afcda-7529-4cad-9f2d-76903f4b1aca", "name": "some-group", diff --git a/modules/docs/src/main/mdoc/api/get-recordset-change.md b/modules/docs/src/main/mdoc/api/get-recordset-change.md index 1ca56de50..a382a4fed 100644 --- a/modules/docs/src/main/mdoc/api/get-recordset-change.md +++ b/modules/docs/src/main/mdoc/api/get-recordset-change.md @@ -37,7 +37,7 @@ singleBatchChangeIds | array of SingleBatchChange ID objects | If the recordse #### EXAMPLE RESPONSE -``` +```json { "zone": { "name": "vinyl.", diff --git a/modules/docs/src/main/mdoc/api/get-recordset.md b/modules/docs/src/main/mdoc/api/get-recordset.md index bfa71aaac..5d1a10e46 100644 --- a/modules/docs/src/main/mdoc/api/get-recordset.md +++ 
b/modules/docs/src/main/mdoc/api/get-recordset.md @@ -42,7 +42,7 @@ ownerGroupName | string | Name of assigned owner group, if found | #### EXAMPLE RESPONSE -``` +```json { "type": "A", "zoneId": "2467dc05-68eb-4498-a9d5-78d24bb0893c", diff --git a/modules/docs/src/main/mdoc/api/get-zone-by-id.md b/modules/docs/src/main/mdoc/api/get-zone-by-id.md index ecb9605db..7d209d766 100644 --- a/modules/docs/src/main/mdoc/api/get-zone-by-id.md +++ b/modules/docs/src/main/mdoc/api/get-zone-by-id.md @@ -29,7 +29,7 @@ zone | map | refer to [zone model](zone-model.html) | #### EXAMPLE RESPONSE -``` +```json { "zone": { "status": "Active", diff --git a/modules/docs/src/main/mdoc/api/get-zone-by-name.md b/modules/docs/src/main/mdoc/api/get-zone-by-name.md index 6a3cdff08..3b8f76c95 100644 --- a/modules/docs/src/main/mdoc/api/get-zone-by-name.md +++ b/modules/docs/src/main/mdoc/api/get-zone-by-name.md @@ -29,7 +29,7 @@ zone | map | refer to [zone model](zone-model.html) | #### EXAMPLE RESPONSE -``` +```json { "zone": { "status": "Active", diff --git a/modules/docs/src/main/mdoc/api/list-batchchanges.md b/modules/docs/src/main/mdoc/api/list-batchchanges.md index c3034436a..fc18ef369 100644 --- a/modules/docs/src/main/mdoc/api/list-batchchanges.md +++ b/modules/docs/src/main/mdoc/api/list-batchchanges.md @@ -60,7 +60,7 @@ approvalStatus | BatchChangeApprovalStatus | Whether the batch change is cu #### EXAMPLE RESPONSE -``` +```json { "batchChanges": [ { diff --git a/modules/docs/src/main/mdoc/api/list-group-activity.md b/modules/docs/src/main/mdoc/api/list-group-activity.md index b05d85650..4821547f6 100644 --- a/modules/docs/src/main/mdoc/api/list-group-activity.md +++ b/modules/docs/src/main/mdoc/api/list-group-activity.md @@ -48,7 +48,7 @@ changeType | string | The type change, either Create, Update, or Delet #### EXAMPLE RESPONSE -``` +```json { "maxItems": 100, "changes": [ diff --git a/modules/docs/src/main/mdoc/api/list-group-admins.md 
b/modules/docs/src/main/mdoc/api/list-group-admins.md index 64a978807..a2b68c48e 100644 --- a/modules/docs/src/main/mdoc/api/list-group-admins.md +++ b/modules/docs/src/main/mdoc/api/list-group-admins.md @@ -28,7 +28,7 @@ admins | Array of Users | refer to [membership model](membership-model.ht #### EXAMPLE RESPONSE -``` +```json { "admins": [ { diff --git a/modules/docs/src/main/mdoc/api/list-group-members.md b/modules/docs/src/main/mdoc/api/list-group-members.md index 4d863838a..e864c7b11 100644 --- a/modules/docs/src/main/mdoc/api/list-group-members.md +++ b/modules/docs/src/main/mdoc/api/list-group-members.md @@ -38,7 +38,7 @@ maxItems | integer | maxItems sent in request, default is 100 | #### EXAMPLE RESPONSE -``` +```json { "members": [ { diff --git a/modules/docs/src/main/mdoc/api/list-groups.md b/modules/docs/src/main/mdoc/api/list-groups.md index b12458f61..43dadac01 100644 --- a/modules/docs/src/main/mdoc/api/list-groups.md +++ b/modules/docs/src/main/mdoc/api/list-groups.md @@ -41,7 +41,7 @@ ignoreAccess | boolean | The ignoreAccess parameter that was sent in the #### EXAMPLE RESPONSE -``` +```json { "maxItems": 100, "groups": [ diff --git a/modules/docs/src/main/mdoc/api/list-recordset-changes.md b/modules/docs/src/main/mdoc/api/list-recordset-changes.md index 2e3cbf999..ebd5c0397 100644 --- a/modules/docs/src/main/mdoc/api/list-recordset-changes.md +++ b/modules/docs/src/main/mdoc/api/list-recordset-changes.md @@ -44,7 +44,7 @@ status | string | The status of the change (Pending, Complete, Fai #### EXAMPLE RESPONSE -``` +```json { "recordSetChanges": [ { diff --git a/modules/docs/src/main/mdoc/api/list-recordsets-by-zone.md b/modules/docs/src/main/mdoc/api/list-recordsets-by-zone.md index cd0818669..329f26d55 100644 --- a/modules/docs/src/main/mdoc/api/list-recordsets-by-zone.md +++ b/modules/docs/src/main/mdoc/api/list-recordsets-by-zone.md @@ -45,7 +45,7 @@ nameSort | string | name sort order sent in request #### EXAMPLE RESPONSE -``` +```json { 
"recordSets": [ { diff --git a/modules/docs/src/main/mdoc/api/list-recordsets-global.md b/modules/docs/src/main/mdoc/api/list-recordsets-global.md index 979348dd2..7ab6bb8a6 100644 --- a/modules/docs/src/main/mdoc/api/list-recordsets-global.md +++ b/modules/docs/src/main/mdoc/api/list-recordsets-global.md @@ -45,7 +45,7 @@ nameSort | string | name sort order sent in request #### EXAMPLE RESPONSE -``` +```json { "recordSets": [ { @@ -66,7 +66,7 @@ nameSort | string | name sort order sent in request "zoneName": "example.com.", "zoneShared": true } - ] + ], "maxItems": 100, "recordNameFilter": "foo*", "recordTypeFilter": [ diff --git a/modules/docs/src/main/mdoc/api/list-zone-changes.md b/modules/docs/src/main/mdoc/api/list-zone-changes.md index 66776d88e..131faaeca 100644 --- a/modules/docs/src/main/mdoc/api/list-zone-changes.md +++ b/modules/docs/src/main/mdoc/api/list-zone-changes.md @@ -40,7 +40,7 @@ maxItems | int | The maxItems parameter that was sent in on the H #### EXAMPLE RESPONSE -``` +```json { "zoneId": "2467dc05-68eb-4498-a9d5-78d24bb0893c", "zoneChanges": [ diff --git a/modules/docs/src/main/mdoc/api/list-zones.md b/modules/docs/src/main/mdoc/api/list-zones.md index dc2ae7c50..ed4c7e4c4 100644 --- a/modules/docs/src/main/mdoc/api/list-zones.md +++ b/modules/docs/src/main/mdoc/api/list-zones.md @@ -41,7 +41,7 @@ ignoreAccess | boolean | The ignoreAccess parameter that was sent in the #### EXAMPLE RESPONSE -``` +```json { "zones": [ { diff --git a/modules/docs/src/main/mdoc/api/membership-model.md b/modules/docs/src/main/mdoc/api/membership-model.md index 878894b8b..78a67498c 100644 --- a/modules/docs/src/main/mdoc/api/membership-model.md +++ b/modules/docs/src/main/mdoc/api/membership-model.md @@ -41,7 +41,7 @@ the group, deleting users from the group, toggling other users' admin statuses ( #### GROUP EXAMPLE -``` +```json { "id": "dc4c7c79-5bbc-41bf-992e-8d6c4ec574c6", "name": "some-group", @@ -80,7 +80,7 @@ To get your access and secret keys, log into 
the VinylDNS portal and then with t #### USER EXAMPLE -``` +```json { "userName": "jdoe201", "firstName": "John", diff --git a/modules/docs/src/main/mdoc/api/recordset-model.md b/modules/docs/src/main/mdoc/api/recordset-model.md index 3cba2f89f..a0f055f23 100644 --- a/modules/docs/src/main/mdoc/api/recordset-model.md +++ b/modules/docs/src/main/mdoc/api/recordset-model.md @@ -18,7 +18,7 @@ field | type | description | ------------ | :---------- | :---------- | zoneId | string | the id of the zone to which this recordset belongs | name | string | The name of the RecordSet | -type | string | Type of DNS record, supported records are currently: A, AAAA, CNAME, DS, MX, NAPTR, NS, PTR, SOA, SRV, TXT, SSHFP, and SPF. Unsupported types will be given the type UNKNOWN | +type | string | Type of DNS record, supported records are currently: `A`, `AAAA`, `CNAME`, `DS`, `MX`, `NAPTR`, `NS`, `PTR`, `SOA`, `SRV`, `TXT`, `SSHFP`, and `SPF`. Unsupported types will be given the type `UNKNOWN` | ttl | long | the TTL in seconds for the recordset | status | string | *Active* - RecordSet is added is created and ready for use, *Inactive* - RecordSet effects are not applied, *Pending* - RecordSet is queued for creation, *PendingUpdate* - RecordSet is queued for update, *PendingDelete* - RecordSet is queued for delete | created | date-time | The timestamp (UTC) when the recordset was created | @@ -29,7 +29,7 @@ account | string | **DEPRECATED** The account that created the Record #### RecordSet EXAMPLE -``` +```json { "type": "A", "zoneId": "8f8f649f-998e-4428-a029-b4ba5f5bd4ca", @@ -54,72 +54,72 @@ account | string | **DEPRECATED** The account that created the Record ``` #### RECORD DATA INFORMATION -Current supported record types are: A, AAAA, CNAME, DS, MX, NAPTR, NS, PTR, SOA, SRV, TXT, SSHFP, and SPF. +Current supported record types are: `A`, `AAAA`, `CNAME`, `DS`, `MX`, `NAPTR`, `NS`, `PTR`, `SOA`, `SRV`, `TXT`, `SSHFP`, and `SPF`. 
Each individual record encodes its data in a record data object, in which each record type has different required attributes

-SOA records and NS origin records (record with the same name as the zone) are currently read-only and cannot be created, updated or deleted. -Non-origin NS records can be created or updated for [approved name servers](../operator/config-api.html#additional-configuration-settings) only. Any non-origin NS record can be deleted. +`SOA` records and `NS` origin records (record with the same name as the zone) are currently read-only and cannot be created, updated or deleted. +Non-origin `NS` records can be created or updated for [approved name servers](../operator/config-api.html#additional-configuration-settings) only. Any non-origin `NS` record can be deleted. record type | attribute | type | ------------ | :---------- | :---------- | -A | address | string | +`A` | `address` | `string` |
| | | -AAAA | address | string | +`AAAA` | `address` | `string` |
| | | -CNAME | cname | string | +`CNAME` | `cname` | `string` |
| | | -DS | keytag | integer | -DS | algorithm | integer | -DS | digesttype | integer | -DS | digest | string | +`DS` | `keytag` | `integer` | +`DS` | `algorithm` | `integer` | +`DS` | `digesttype` | `integer` | +`DS` | `digest` | `string` |
| | | -MX | preference | integer | -MX | exchange | string | +`MX` | `preference` | `integer` | +`MX` | `exchange` | `string` |
| | | -NAPTR | order | integer | -NAPTR | preference | integer | -NAPTR | flags | string | -NAPTR | service | string | -NAPTR | regexp | string | -NAPTR | replacement | string | +`NAPTR` | `order` | `integer` | +`NAPTR` | `preference` | `integer` | +`NAPTR` | `flags` | `string` | +`NAPTR` | `service` | `string` | +`NAPTR` | `regexp` | `string` | +`NAPTR` | `replacement` | `string` |
| | | -NS | nsdname | string | +`NS` | `nsdname` | `string` |
| | | -PTR | ptrdname | string | +`PTR` | `ptrdname` | `string` |
| | | -SOA | mname | string | -SOA | rname | string | -SOA | serial | long | -SOA | refresh | long | -SOA | retry | long | -SOA | expire | long | -SOA | minimum | long | +`SOA` | `mname` | `string` | +`SOA` | `rname` | `string` | +`SOA` | `serial` | `long` | +`SOA` | `refresh` | `long` | +`SOA` | `retry` | `long` | +`SOA` | `expire` | `long` | +`SOA` | `minimum` | `long` |
| | | -SPF | text | string | +`SPF` | `text` | `string` |
| | | -SRV | priority | integer | -SRV | weight | integer | -SRV | port | integer | -SRV | target | string | +`SRV` | `priority` | `integer` | +`SRV` | `weight` | `integer` | +`SRV` | `port` | `integer` | +`SRV` | `target` | `string` |
| | | -SSHFP | algorithm | integer | -SSHFP | type | integer | -SSHFP | fingerprint | string | +`SSHFP` | `algorithm` | `integer` | +`SSHFP` | `type` | `integer` | +`SSHFP` | `fingerprint` | `string` |
| | | -TXT | text | string | +`TXT` | `text` | `string` | #### RECORD DATA EXAMPLE Each record is a map that must include all attributes for the data type, the records are stored in the records field of the RecordSet. The records must be an array of at least one record map. All records in the records array must be of the type stored in the typ field of the RecordSet -Use the *@* symbol to point to the zone origin +Use the `@` symbol to point to the zone origin -**CNAME records cannot point to the zone origin, thus the RecordSet name cannot be @ nor the zone origin** +**`CNAME` records cannot point to the zone origin, thus the RecordSet name cannot be `@` nor the zone origin** -Individual SSHFP record: +Individual `SSHFP` record: -``` +```json { "type": "SSHFP", "zoneId": "8f8f649f-998e-4428-a029-b4ba5f5bd4ca", @@ -139,9 +139,9 @@ Individual SSHFP record: } ``` -Multiple SSHFP records: +Multiple `SSHFP` records: -``` +```json { "type": "SSHFP", "zoneId": "8f8f649f-998e-4428-a029-b4ba5f5bd4ca", diff --git a/modules/docs/src/main/mdoc/api/reject-batchchange.md b/modules/docs/src/main/mdoc/api/reject-batchchange.md index cbd2089f6..d99e08de0 100644 --- a/modules/docs/src/main/mdoc/api/reject-batchchange.md +++ b/modules/docs/src/main/mdoc/api/reject-batchchange.md @@ -7,7 +7,7 @@ section: "api" # Reject Batch Change Manually rejects a batch change in pending review status given the batch change ID, resulting in immediate failure. Only -system administrators (ie. support or super user) can manually review a batch change. +system administrators (i.e., support or super user) can manually review a batch change. Note: If [manual review is disabled](../operator/config-api.html#manual-review) in the VinylDNS instance, users trying to access this endpoint will encounter a **404 Not Found** response since it will not exist. @@ -27,7 +27,7 @@ reviewComment | string | no | Optional rejection explanation. 
| #### EXAMPLE HTTP REQUEST -``` +```json { "reviewComment": "Comments are optional." } @@ -64,7 +64,7 @@ reviewTimestamp | date-time | The timestamp (UTC) of when the batch change was #### EXAMPLE RESPONSE -``` +```json { "userId": "vinyl", "userName": "vinyl201", diff --git a/modules/docs/src/main/mdoc/api/sync-zone.md b/modules/docs/src/main/mdoc/api/sync-zone.md index 5289681d0..74c54c30a 100644 --- a/modules/docs/src/main/mdoc/api/sync-zone.md +++ b/modules/docs/src/main/mdoc/api/sync-zone.md @@ -49,7 +49,7 @@ id | string | The ID of the change. This is not the id of the #### EXAMPLE RESPONSE -``` +```json { "status": "Pending", "zone": { diff --git a/modules/docs/src/main/mdoc/api/update-group.md b/modules/docs/src/main/mdoc/api/update-group.md index 2d0aec938..951e6d2fb 100644 --- a/modules/docs/src/main/mdoc/api/update-group.md +++ b/modules/docs/src/main/mdoc/api/update-group.md @@ -27,7 +27,7 @@ admins | Array of User ID objects | yes | Set of User IDs that #### EXAMPLE HTTP REQUEST -``` +```json { "id": "6f8afcda-7529-4cad-9f2d-76903f4b1aca", "name": "some-group", @@ -76,7 +76,7 @@ admins | Array of User Id objects | Ids of admins of the group | #### EXAMPLE RESPONSE -``` +```json { "id": "6f8afcda-7529-4cad-9f2d-76903f4b1aca", "name": "some-group", diff --git a/modules/docs/src/main/mdoc/api/update-recordset.md b/modules/docs/src/main/mdoc/api/update-recordset.md index ab036a0e9..67cde4eb0 100644 --- a/modules/docs/src/main/mdoc/api/update-recordset.md +++ b/modules/docs/src/main/mdoc/api/update-recordset.md @@ -28,7 +28,7 @@ ownerGroupId | string | sometimes* | Record ownership assignmen *Note: If a recordset has an ownerGroupId you must include that value in the update request, otherwise the update will remove the ownerGroupId value #### EXAMPLE HTTP REQUEST -``` +```json { "id": "dd9c1120-0594-4e61-982e-8ddcbc8b2d21", "name": "already-exists", @@ -72,7 +72,7 @@ singleBatchChangeIds | array of SingleBatchChange ID objects | If the recordse #### 
EXAMPLE RESPONSE -``` +```json { "zone": { "name": "vinyl.", diff --git a/modules/docs/src/main/mdoc/api/update-zone.md b/modules/docs/src/main/mdoc/api/update-zone.md index 18ca2c0d2..ed3b92e51 100644 --- a/modules/docs/src/main/mdoc/api/update-zone.md +++ b/modules/docs/src/main/mdoc/api/update-zone.md @@ -18,7 +18,7 @@ Updates an existing zone that has already been connected to. Used to update the #### EXAMPLE HTTP REQUEST -``` +```json { "name": "vinyl.", "email": "update@update.com", @@ -63,7 +63,7 @@ status | string | The status of the zone change #### EXAMPLE RESPONSE -``` +```json { "zone": { "name": "vinyl.", diff --git a/modules/docs/src/main/mdoc/api/zone-model.md b/modules/docs/src/main/mdoc/api/zone-model.md index 07a613def..7c96ddd8d 100644 --- a/modules/docs/src/main/mdoc/api/zone-model.md +++ b/modules/docs/src/main/mdoc/api/zone-model.md @@ -41,7 +41,7 @@ accessLevel | string | Access level of the user requesting the zone. Curr #### ZONE EXAMPLE -``` +```json { "status": "Active", "updated": "2016-12-16T15:27:28Z", @@ -122,7 +122,7 @@ The priority of ACL Rules in descending precedence:
For conflicting rules, the rule that is more specific will take precedence. For example, if the account *jdoe201* was given Read access to all records in a zone through the rule: -``` +```json { "userId": "", "accessLevel": "Read", @@ -131,7 +131,7 @@ through the rule: and then Write access to only A records through the rule: -``` +```json { "userId": "", "accessLevel": "Write", @@ -141,7 +141,7 @@ and then Write access to only A records through the rule: and then Delete access to only A records that matched the expression \*dev\* through the rule: -``` +```json { "userId": "", "accessLevel": "Delete", @@ -154,10 +154,10 @@ then the rule with the recordMask will take precedence and give Delete access to take precedence and give Write access to all other A records, and the more broad rule will give Read access to all other record types in the zone #### ZONE ACL RULE EXAMPLES -**Grant read/write/delete access to www.* records of type A, AAAA, CNAME to one user** -Under this rule, the user specified will be able to view, create, edit, and delete records in the zone that match the expression `www.*` and are of type A, AAAA, or CNAME. +**Grant read/write/delete access to www.* records of type `A`, `AAAA`, `CNAME` to one user** +Under this rule, the user specified will be able to view, create, edit, and delete records in the zone that match the expression `www.*` and are of type `A`, `AAAA`, or `CNAME`. 
-```
+```json
{ "recordMask": "www.*", "accessLevel": "Delete",
@@ -166,18 +166,18 @@ Under this rule, the user specified will be able to view, create, edit, and dele
}
```
-**Grant read only access to all VinylDNS users to A, AAAA, CNAME records**
+**Grant read only access to all VinylDNS users to `A`, `AAAA`, `CNAME` records**
-```
+```json
{ "accessLevel": "Read", "recordTypes": ["A", "AAAA", "CNAME"] }
```
-**Grant read/write/delete access to records of type A, AAAA, CNAME to one group***
+**Grant read/write/delete access to records of type `A`, `AAAA`, `CNAME` to one group**
-```
+```json
{ "accessLevel": "Delete", "groupId": "",
@@ -187,48 +187,48 @@ Under this rule, the user specified will be able to view, create, edit, and dele
### PTR ACL RULES WITH CIDR MASKS
ACL rules can be applied to specific record types and can include record masks to further narrow down which records they
-apply to. These record masks apply to record names, but because PTR record names are part their reverse zone ip, the use of regular
+apply to. These record masks apply to record names, but because `PTR` record names are part of their reverse zone IP, the use of regular
expressions for record masks are not supported.

-Instead PTR record masks must be CIDR rules, which will denote a range of IP addresses that the rule will apply to. +Instead `PTR` record masks must be CIDR rules, which will denote a range of IP addresses that the rule will apply to. While more information and useful CIDR rule utility tools can be found online, CIDR rules describe how many bits of an ip address' binary representation must be the same for a match. ### PTR ACL RULES WITH CIDR MASKS EXAMPLE The ACL Rule -``` +```json { - recordTypes: ["PTR"], - accessLevel: "Read" + "recordTypes": ["PTR"], + "accessLevel": "Read" } ``` -Will give Read permissions to PTR Record Sets to all users in VinylDNS +Will give Read permissions to `PTR` Record Sets to all users in VinylDNS

The **IPv4** ACL Rule -``` +```json { - recordTypes: ["PTR"], - accessLevel: "Read", - recordMask: "100.100.100.100/16" + "recordTypes": ["PTR"], + "accessLevel": "Read", + "recordMask": "100.100.100.100/16" } ``` -Will give Read permissions to PTR Record Sets 100.100.000.000 to 100.100.255.255, as 16 bits is half of an IPv4 address +Will give Read permissions to `PTR` Record Sets 100.100.000.000 to 100.100.255.255, as 16 bits is half of an IPv4 address

The **IPv6** ACL Rule -``` +```json { - recordTypes: ["PTR"], - accessLevel: "Read", - recordMask: "1000:1000:1000:1000:1000:1000:1000:1000/64" + "recordTypes": ["PTR"], + "accessLevel": "Read", + "recordMask": "1000:1000:1000:1000:1000:1000:1000:1000/64" } ``` -Will give Read permissions to PTR Record Sets 1000:1000:1000:1000:0000:0000:0000:0000 to 1000:1000:1000:1000:FFFF:FFFF:FFFF:FFFF, as 64 bits is half of an IPv6 address. +Will give Read permissions to `PTR` Record Sets 1000:1000:1000:1000:0000:0000:0000:0000 to 1000:1000:1000:1000:FFFF:FFFF:FFFF:FFFF, as 64 bits is half of an IPv6 address. #### ZONE CONNECTION ATTRIBUTES In order for VinylDNS to make updates in DNS, it needs key information for every zone. There are 3 ways to specify that key information; ask your VinylDNS admin which is appropriate for your zone based on the configuration of the service: @@ -250,7 +250,7 @@ key | string | The TSIG secret key used to sign requests when com #### ZONE CONNECTION EXAMPLE -``` +```json { "primaryServer": "127.0.0.1:5301", "keyName": "vinyl.", diff --git a/modules/docs/src/main/mdoc/faq.md b/modules/docs/src/main/mdoc/faq.md index 043dc2512..0a4def4c0 100644 --- a/modules/docs/src/main/mdoc/faq.md +++ b/modules/docs/src/main/mdoc/faq.md @@ -32,9 +32,9 @@ of your zone. This ID is also present in the URL (if on that page it’s the ID To create a record with the same name as your zone, you have to use the special `@` character for the record name when you create your record set. -You cannot create CNAME records with *@* as those are not supported. While some DNS services like -Route 53 support an ALIAS record type that _does_ support a CNAME style *@*, ALIAS are not an official standard yet. -All other record types should be fine using the *@* symbol. +You cannot create `CNAME` records with `@` as those are not supported. 
While some DNS services like +Route 53 support an ALIAS record type that _does_ support a `CNAME` style `@`, ALIAS are not an official standard yet. +All other record types should be fine using the `@` symbol. ### 4. When I try to connect to my zone, I am seeing REFUSED When VinylDNS connects to a zone, it first validates that the zone is suitable diff --git a/modules/docs/src/main/mdoc/getting-help.md b/modules/docs/src/main/mdoc/getting-help.md index b1dcbc21a..86cc5fde0 100644 --- a/modules/docs/src/main/mdoc/getting-help.md +++ b/modules/docs/src/main/mdoc/getting-help.md @@ -6,8 +6,8 @@ position: 7 # Getting Help -- Gitter community: - +- VinylDNS Discussions: + - Contact the VinylDNS Core Team: vinyldns-core@googlegroups.com diff --git a/modules/docs/src/main/mdoc/index.md b/modules/docs/src/main/mdoc/index.md index f06568418..ef8cd8f3f 100644 --- a/modules/docs/src/main/mdoc/index.md +++ b/modules/docs/src/main/mdoc/index.md @@ -21,7 +21,7 @@ VinylDNS helps secure DNS management via: * Recording every change made to DNS records and zones Integration is simple with first-class language support including: -* java -* ruby -* python -* go-lang +* Java +* JavaScript +* Python +* Go diff --git a/modules/docs/src/main/mdoc/operator/config-portal.md b/modules/docs/src/main/mdoc/operator/config-portal.md index 7f119ae08..c61dc99d9 100644 --- a/modules/docs/src/main/mdoc/operator/config-portal.md +++ b/modules/docs/src/main/mdoc/operator/config-portal.md @@ -18,12 +18,9 @@ The portal configuration is much smaller than the API Server. - [Full Example Config](#full-example-config) ## Database Configuration -VinylDNS supports both DynamoDB and MySQL backends (see [API Database Configuration](config-api.html#database-configuration)). - -If using DynamoDB, follow the [AWS DynamoDB Setup Guide](setup-dynamodb.html) first to get the values you need to configure here. 
- -If using MySQL, follow the [MySQL Setup Guide](setup-mysql.html) first to get the values you need to configure here. +VinylDNS supports a MySQL backend (see [API Database Configuration](config-api.html#database-configuration)). +Follow the [MySQL Setup Guide](setup-mysql.html) first to get the values you need to configure here. The Portal uses the following tables: @@ -37,7 +34,7 @@ the same values in both configs: vinyldns { # this list should include only the datastores being used by your portal instance (user and userChange repo) - data-stores = ["dynamodb", "mysql"] + data-stores = ["mysql"] mysql { @@ -102,39 +99,6 @@ vinyldns { } } } - - dynamodb { - - # this is the path to the DynamoDB provider. This should not be edited - # from the default in reference.conf - class-name = "vinyldns.dynamodb.repository.DynamoDBDataStoreProvider" - - settings { - # AWS_ACCESS_KEY, credential needed to access the SQS queue - key = "x" - - # AWS_SECRET_ACCESS_KEY, credential needed to access the SQS queue - secret = "x" - - # DynamoDB url for the region you are running in, this example is in us-east-1 - endpoint = "https://dynamodb.us-east-1.amazonaws.com" - - # DynamoDB region - region = "us-east-1" - } - - repositories { - # all repositories with config sections here will be enabled in dynamodb - user-change { - # Name of the table where user changes are saved - table-name = "userChangeTest" - # Provisioned throughput for reads - provisioned-reads = 30 - # Provisioned throughput for writes - provisioned-writes = 20 - } - } - } } ``` @@ -216,7 +180,7 @@ links = [ title = "API Documentation" # the hyperlink address being linked to - href = "http://vinyldns.io" + href = "https://vinyldns.io" # a fa icon to display icon = "fa fa-file-text-o" @@ -230,7 +194,7 @@ links = [ The play secret must be set to a secret value, and should be an environment variable ```yaml -# See http://www.playframework.com/documentation/latest/ApplicationSecret for more details. 
+# See https://www.playframework.com/documentation/latest/ApplicationSecret for more details. play.http.secret.key = "vinyldnsportal-change-this-for-production" ``` @@ -277,7 +241,7 @@ Allows users to schedule changes to be run sometime in the future # # This must be changed for production, but we recommend not changing it in this file. # -# See http://www.playframework.com/documentation/latest/ApplicationSecret for more details. +# See https://www.playframework.com/documentation/latest/ApplicationSecret for more details. play.http.secret.key = "vinyldnsportal-change-this-for-production" # The application languages @@ -374,7 +338,7 @@ links = [ displayOnSidebar = true displayOnLoginScreen = true title = "API Documentation" - href = "http://vinyldns.io" + href = "https://vinyldns.io" icon = "fa fa-file-text-o" } ] diff --git a/modules/docs/src/main/mdoc/operator/pre.md b/modules/docs/src/main/mdoc/operator/pre.md index a3d1c7dfb..6f5492b48 100644 --- a/modules/docs/src/main/mdoc/operator/pre.md +++ b/modules/docs/src/main/mdoc/operator/pre.md @@ -1,137 +1,125 @@ --- -layout: docs -title: "Pre-requisites" +layout: docs title: "Pre-requisites" section: "operator_menu" --- # VinylDNS Pre-requisites -VinylDNS has the following external requirements that need to be setup so that VinylDNS can operate. Those include: + +VinylDNS has the following external requirements that need to be setup so that VinylDNS can operate. Those include: 1. [DNS](#dns) - your DNS servers VinylDNS will interact with 1. [Database](#database) - the database houses all of VinylDNS information including history, records, zones, and users -1. [Message Queue](#message-queues) - the message queue supports high-availability and throttling of commands to DNS backend servers -1. [LDAP](#ldap) - ldap supports both authentication as well as the source of truth for users that are managed inside the VinylDNS database +1. 
[Message Queue](#message-queues) - the message queue supports high-availability and throttling of commands to DNS + backend servers +1. [LDAP](#ldap) - ldap supports both authentication as well as the source of truth for users that are managed inside + the VinylDNS database ## DNS -VinylDNS is **not a DNS**, rather it integrates with your existing DNS installations to enable DNS self-service and streamline -DNS operations. + +VinylDNS is **not a DNS**, rather it integrates with your existing DNS installations to enable DNS self-service and +streamline DNS operations. VinylDNS communicates to your DNS via: + * `DDNS` - DDNS is used for all record updates * `AXFR` - Zone Transfers are used to load DNS records into the VinylDNS database. -VinylDNS communicates to your DNS using "connections". A connection allows you to specify: +VinylDNS communicates to your DNS using "connections". A connection allows you to specify: + 1. The TSIG key name 1. The TSIG key secret 1. The server (and optionally port) to communicate to DNS with -There are **2** connections, one for DDNS and another for zone transfers. This allows you to use a different DNS server / key -for zone transfers. +There are **2** connections, one for DDNS and another for zone transfers. This allows you to use a different DNS server +/ key for zone transfers. Connections (DDNS and Transfer) can be setup + * `per zone` - every zone can override the global default by specifying its own connections. -* `global default` - assuming you are managing a primary system, you can [configure default zone connections](config-api.html#default-zone-connections). -When no zone connection is specified on a zone, the global defaults will be used. +* `global default` - assuming you are managing a primary system, you + can [configure default zone connections](config-api.html#default-zone-connections). When no zone connection is + specified on a zone, the global defaults will be used. 
## Database + [database]: #database -The VinylDNS database has a `NoSQL` / non-relational design to it. Instead of having a heavily normalized set of SQL tables -that surface in the system, VinylDNS relies on `Repositories` where each `Repository` is independent of each one another. -This allows implementers to best map each `Repository` into the data-store of choice. +The VinylDNS database has a `NoSQL` / non-relational design to it. Instead of having a heavily normalized set of SQL +tables that surface in the system, VinylDNS relies on `Repositories` where each `Repository` is independent of one +another. This allows implementers to best map each `Repository` into the data-store of choice. -As `Repositories` are independent, there are no "transactions" that span repositories. Each `Repository` implementation +As `Repositories` are independent, there are no "transactions" that span repositories. Each `Repository` implementation can choose to use transactions if it maps to multiple tables within itself. -There are **links** across repositories, for example the `RecordSet.id` would be referenced in a `RecordSetChangeRepository`. +There are **links** across repositories, for example the `RecordSet.id` would be referenced in +a `RecordSetChangeRepository`. The following are the repositories presently used by VinylDNS: -* `RecordSetRepository` - Instead of individual DNS records, VinylDNS works at the `RRSet`. The unique key for RecordSet is -the `record name` + `record type` -* `RecordChangeRepository` - The history of all changes to all records in VinylDNS. In general, some kind of pruning strategy -should be implemented otherwise this could get quite large +* `RecordSetRepository` - Instead of individual DNS records, VinylDNS works at the `RRSet`. The unique key for RecordSet + is the `record name` + `record type` +* `RecordChangeRepository` - The history of all changes to all records in VinylDNS.
In general, some kind of pruning + strategy should be implemented otherwise this could get quite large * `ZoneRepository` - DNS Zones and managing access to zones -* `ZoneChangeRepository` - The history of all changes made to _zones_ in VinylDNS. Zone changes can including syncs, -updating ACL rules, changing zone ownership, etc. +* `ZoneChangeRepository` - The history of all changes made to _zones_ in VinylDNS. Zone changes can include syncs, + updating ACL rules, changing zone ownership, etc. * `GroupRepository` - VinylDNS Groups -* `UserRepository` - VinylDNS Users. These users are typically created the first time the user logs into the portal. -The user information will be pulled from LDAP, and inserted into the VinylDNS UserRepository +* `UserRepository` - VinylDNS Users. These users are typically created the first time the user logs into the portal. The + user information will be pulled from LDAP, and inserted into the VinylDNS UserRepository * `MembershipRepository` - Holds a link from users to groups * `GroupChangeRepository` - Holds changes to groups and membership -* `BatchChangeRepository` - VinylDNS allows users to submit multiple record changes _across_ DNS zones at the same time within a `Batch` -The `BatchChangeRepository` holds the batch itself and all individual changes that executed in the batch. +* `BatchChangeRepository` - VinylDNS allows users to submit multiple record changes _across_ DNS zones at the same time + within a `Batch`. + The `BatchChangeRepository` holds the batch itself and all individual changes that executed in the batch. * `UserChangeRepository` - Holds changes to users. Currently only used in the portal. ## Database Types + ### MySQL -VinylDNS has implemented MySQL for all repositories so a MySQL-only instance of VinylDNS is possible. Furthermore, there are two -repositories that have _only_ been implemented in MySQL: - -1. ZoneRepository -1. BatchChangeRepository - -Originally, the `ZoneRepository` lived in DynamoDB.
However, the access controls in VinylDNS made it very difficult -to use DynamoDB as the query interface is limited. A SQL interface with `JOIN`s was required. - -It should also be noted that all of the repositories have also been implemented in MySQL despite most currently running -in DynamoDB in our VinylDNS instance. Review the [Setup MySQL Guide](setup-mysql.html) for more information. -### AWS DynamoDB -VinylDNS has gone through several architecture evolutions. Along the way, DynamoDB was chosen as the data store due to -the volume of data at Comcast. It is an excellent key-value store with extremely high performance characteristics. - -VinylDNS has implemented DynamoDB for the following repositories: - -1. RecordSetRepository -1. RecordChangeRepository -1. ZoneChangeRepository -1. GroupRepository -1. UserRepository -1. MembershipRepository -1. GroupChangeRepository -1. UserChangeRepository - -Currently using DynamoDB would also require the user to either use MySQL for the batch change and zone repositories or also provide -an implementation for those repositories in a different data store. - -Review the [Setup AWS DynamoDB Guide](setup-dynamodb.html) for more information. - ## Message Queues -Most operations that take place in VinylDNS use a message queue. These operations require high-availability, fault-tolerance -with retry, and throttling. The message queue supports these characteristics in VinylDNS. + +Most operations that take place in VinylDNS use a message queue. These operations require high-availability, +fault-tolerance with retry, and throttling. The message queue supports these characteristics in VinylDNS. Some operations do not use the message queue, these include user and group changes as they do not carry the same fault-tolerance and throttling requirements. ## Message Queue Types + ### AWS SQS -Our VinylDNS instance uses AWS SQS to fulfill its message queue service needs. 
SQS has the following characteristics: + +Our VinylDNS instance uses AWS SQS to fulfill its message queue service needs. SQS has the following characteristics: 1. High-Availability -1. Retry - in the event that a message cannot be processed, or if a node fails midstream processing, it will be automatically -made available for another node to process -1. Back-pressure - SQS is a _pull based_ system, meaning that if VinylDNS is currently busy, new messages will not be pulled for processing. -As soon as a node becomes available, the message will be pulled. This is much preferable to a _push_ based system, where -bottlenecks in processing could cause an increase in heap pressure in the API nodes themselves. -1. Price - SQS is very reasonably priced. Comcast operates multiple message queues for different environments (dev, staging, prod, etc). -The price to use SQS is in the single digit dollars per month. VinylDNS can be tuned to run exclusively in the _free tier_. +1. Retry - in the event that a message cannot be processed, or if a node fails midstream processing, it will be + automatically made available for another node to process +1. Back-pressure - SQS is a _pull based_ system, meaning that if VinylDNS is currently busy, new messages will not be + pulled for processing. As soon as a node becomes available, the message will be pulled. This is much preferable to + a _push_ based system, where bottlenecks in processing could cause an increase in heap pressure in the API nodes + themselves. +1. Price - SQS is very reasonably priced. Comcast operates multiple message queues for different environments (dev, + staging, prod, etc). The price to use SQS is in the single digit dollars per month. VinylDNS can be tuned to run + exclusively in the _free tier_. Review the [Setup AWS SQS Guide](setup-sqs.html) for more information. 
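The pull-based back-pressure described above can be sketched with a small in-memory stand-in, using Python's standard `queue` module purely as an illustration (a real deployment talks to the SQS API instead): a worker pulls a message only when it is ready, processes it, and then acknowledges it, which is the analog of deleting the message from SQS.

```python
import queue

# In-memory stand-in for the message queue: DNS change commands wait here
# until a worker is free, so a busy worker naturally applies back-pressure.
q = queue.Queue()
for change in ["create A record", "update CNAME record", "delete TXT record"]:
    q.put(change)

processed = []
while not q.empty():
    msg = q.get()          # the pull happens only when the worker is ready
    processed.append(msg)  # process the DNS change command
    q.task_done()          # acknowledge; the SQS analog is deleting the message

print(processed)           # all three changes, in queued order
```

A push-based design would instead hand messages to the worker as they arrive, which is where the heap-pressure concern above comes from.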
### MySQL -VinylDNS has also implemented a message queue using MySQL, which incorporates the features that we currently utilize through AWS SQS -such as changing visibility timeout and re-queuing operations. + +VinylDNS has also implemented a message queue using MySQL, which incorporates the features that we currently utilize +through AWS SQS such as changing visibility timeout and re-queuing operations. Review the [Setup MySQL Guide](setup-mysql.html) for more information. ## LDAP -VinylDNS uses LDAP in order to authenticate users in the **Portal**. LDAP is **not** used in the API, instead the API uses -its own user and group database for authentication. -When a user first logs into VinylDNS, their user information (first name, last name, user name, email) will be pulled from -LDAP, and stored in the `UserRepository`. Credentials will also be generated for the user and stored encrypted in the `UserRepository`. +VinylDNS uses LDAP in order to authenticate users in the **Portal**. LDAP is **not** used in the API, instead the API +uses its own user and group database for authentication. + +When a user first logs into VinylDNS, their user information (first name, last name, user name, email) will be pulled +from LDAP, and stored in the `UserRepository`. Credentials will also be generated for the user and stored encrypted in +the `UserRepository`. Review the [Setup LDAP Guide](setup-ldap.html) for more information diff --git a/modules/docs/src/main/mdoc/operator/setup-api.md b/modules/docs/src/main/mdoc/operator/setup-api.md index 923b54fe7..a88cc5ce7 100644 --- a/modules/docs/src/main/mdoc/operator/setup-api.md +++ b/modules/docs/src/main/mdoc/operator/setup-api.md @@ -8,7 +8,6 @@ section: "operator_menu" The API Server is the main run-time for VinylDNS. To setup the API server, follow these steps: 1. [Pre-requisites](pre.html) -1. [Setup AWS DynamoDB](setup-dynamodb.html) 1. [Setup MySQL](setup-mysql.html) 1. [Setup AWS SQS](setup-sqs.html) 1. 
[Configure API Server](config-api.html) diff --git a/modules/docs/src/main/mdoc/operator/setup-mysql.md b/modules/docs/src/main/mdoc/operator/setup-mysql.md index f001c4e7d..4bd147923 100644 --- a/modules/docs/src/main/mdoc/operator/setup-mysql.md +++ b/modules/docs/src/main/mdoc/operator/setup-mysql.md @@ -5,11 +5,7 @@ section: "operator_menu" --- # Setup MySQL -Our instance of VinylDNS currently stores some tables in MySQL, though all tables and a queue implementation are available in MySQL. Note -that the `batch_change` and `zone` tables are _only_ available in MySQL. - -The motivation to split databases was due to the query limitations available in AWS DynamoDB. Currently, the following tables are present in -our instance: +Our instance of VinylDNS currently stores data in MySQL. * `zone` - holds zones * `zone_access` - holds user or group identifiers that have access to zones diff --git a/modules/docs/src/main/mdoc/operator/setup-sqs.md b/modules/docs/src/main/mdoc/operator/setup-sqs.md index 3f1e73020..1a5180767 100644 --- a/modules/docs/src/main/mdoc/operator/setup-sqs.md +++ b/modules/docs/src/main/mdoc/operator/setup-sqs.md @@ -14,7 +14,7 @@ You must setup an SQS queue before you can start working with VinylDNS. An [AWS provides the information you need to setup your queue. ## Setting up AWS SQS -As opposed to DynamoDB and MySQL where everything is created when the application starts up, the SQS queue needs to be setup by hand. +As opposed to MySQL where everything is created when the application starts up, the SQS queue needs to be setup by hand. This section goes through those settings that are required. The traffic with AWS SQS is rather low. 
Presently, Comcast operates multiple SQS queues across multiple environments (dev, staging, prod), diff --git a/modules/docs/src/main/mdoc/permissions.md b/modules/docs/src/main/mdoc/permissions.md index 2f0d37ee1..473646439 100644 --- a/modules/docs/src/main/mdoc/permissions.md +++ b/modules/docs/src/main/mdoc/permissions.md @@ -6,7 +6,7 @@ position: 6 # VinylDNS Permissions Guide -Vinyldns is about making DNS self-service _safe_. There are a number of ways that you can govern access to your DNS infrastucture, from extremely restrictive, to extremely lax, and anywhere in between. +Vinyldns is about making DNS self-service _safe_. There are a number of ways that you can govern access to your DNS infrastructure, from extremely restrictive, to extremely lax, and anywhere in between. This guide attempts to explain the various options available for governing access to your VinylDNS installation. @@ -49,7 +49,7 @@ The original way to govern access is via Zone Ownership and Zone ACLs. When con _Zone Owners_ have full rights on a zone. They can manage the zone, abandon it, change connection information, and assign ACLs. -A `Zone ACL Rule` is a record level control that allows VinylDNS users who are **not** Zone Owners privileges to perform certain actions in the zone. For example, you can **grant access to A, AAAA, CNAME records in Zone foo.baz.com to user Josh** +A `Zone ACL Rule` is a record level control that allows VinylDNS users who are **not** Zone Owners privileges to perform certain actions in the zone. For example, you can **grant access to `A`, `AAAA`, `CNAME` records in Zone foo.baz.com to user Josh** ACL rules provide an extremely flexible way to grant access to DNS records. 
Each ACL Rule consists of the following: diff --git a/modules/docs/src/main/mdoc/portal/batch-changes.md b/modules/docs/src/main/mdoc/portal/batch-changes.md index 5cdd08366..1b41d6fb4 100644 --- a/modules/docs/src/main/mdoc/portal/batch-changes.md +++ b/modules/docs/src/main/mdoc/portal/batch-changes.md @@ -8,15 +8,15 @@ section: "portal_menu" Batch Changes is an alternative to submitting individual RecordSet changes and provides the following: * The ability to include records of multiple record types across multiple zones. -* Input names are entered as fully-qualified domain names (or IP addresses for **PTR** records), so users don't have to think in record/zone context. +* Input names are entered as fully-qualified domain names (or IP addresses for `PTR` records), so users don't have to think in record/zone context. #### Access * Access permissions will follow existing rules (admin group or ACL access). Note that an update (delete and add of the same record name, zone and record type combination) requires **Write** or **Delete** access. * **NEW** **Records in shared zones.** All users are permitted to create new records or update unowned records in shared zones. #### Supported record types -* Current supported record types for Batch Change are: **A**, **AAAA**, **CNAME**, **PTR**, **TXT**, and **MX**. -* Additionally, there are **A+PTR** and **AAAA+PTR** types that will be processed as separate A (or AAAA) and PTR changes in the VinylDNS backend. Deletes for **A+PTR** and **AAAA+PTR** require Input Name and Record Data. +* Current supported record types for Batch Change are: `A`, `AAAA`, `CNAME`, `PTR`, `TXT`, and `MX`. +* Additionally, there are `A+PTR` and `AAAA+PTR` types that will be processed as separate `A` (or `AAAA`) and `PTR` changes in the VinylDNS backend. Deletes for `A+PTR` and `AAAA+PTR` require Input Name and Record Data. * Supported record types for records in shared zones may vary. 
Contact your VinylDNS administrators to find the allowed record types. This does not apply to zone administrators or users with specific ACL access rules. diff --git a/modules/docs/src/main/mdoc/portal/dns-changes.md b/modules/docs/src/main/mdoc/portal/dns-changes.md index 0568c1812..5aa556ffc 100644 --- a/modules/docs/src/main/mdoc/portal/dns-changes.md +++ b/modules/docs/src/main/mdoc/portal/dns-changes.md @@ -8,7 +8,7 @@ section: "portal_menu" DNS Changes is an alternative to submitting individual RecordSet changes and provides the following: * The ability to include records of multiple record types across multiple zones. -* Input names are entered as fully-qualified domain names (or IP addresses for **PTR** records), so users don't have to think in record/zone context. +* Input names are entered as fully-qualified domain names (or IP addresses for `PTR` records), so users don't have to think in record/zone context. **Note**: DNS Change is portal-only terminology. The API equivalent is [batch change](../api/batchchange-model.html). @@ -17,8 +17,8 @@ DNS Changes is an alternative to submitting individual RecordSet changes and pro * **NEW** **Records in shared zones.** All users are permitted to create new records or update unowned records in shared zones. #### Supported record types -* Current supported record types for DNS change are: **A**, **AAAA**, **CNAME**, **PTR**, **TXT**, and **MX**. -* Additionally, there are **A+PTR** and **AAAA+PTR** types that will be processed as separate A (or AAAA) and PTR changes in the VinylDNS backend. Deletes for **A+PTR** and **AAAA+PTR** require Input Name and Record Data. +* Current supported record types for DNS change are: `A`, `AAAA`, `CNAME`, `PTR`, `TXT`, and `MX`. +* Additionally, there are `A+PTR` and `AAAA+PTR` types that will be processed as separate `A` (or `AAAA`) and `PTR` changes in the VinylDNS backend. Deletes for `A+PTR` and `AAAA+PTR` require Input Name and Record Data. 
* Supported record types for records in shared zones may vary. Contact your VinylDNS administrators to find the allowed record types. This does not apply to zone administrators or users with specific ACL access rules. diff --git a/modules/docs/src/main/mdoc/portal/manage-records.md b/modules/docs/src/main/mdoc/portal/manage-records.md index cc2e0ffe9..db9101e7a 100644 --- a/modules/docs/src/main/mdoc/portal/manage-records.md +++ b/modules/docs/src/main/mdoc/portal/manage-records.md @@ -10,7 +10,7 @@ There are currently two ways to manage records in the VinylDNS portal. This cove Only zone administrators and users with ACL rules can manage records this way. #### Supported record types -A, AAAA, CNAME, DS, MX, NAPTR, NS, PTR, SRV, SSHFP, and TXT +`A`, `AAAA`, `CNAME`, `DS`, `MX`, `NAPTR`, `NS`, `PTR`, `SRV`, `SSHFP`, and `TXT` --- diff --git a/modules/docs/src/main/mdoc/portal/search-zones.md b/modules/docs/src/main/mdoc/portal/search-zones.md index e8ab8e4c7..1b9777efa 100644 --- a/modules/docs/src/main/mdoc/portal/search-zones.md +++ b/modules/docs/src/main/mdoc/portal/search-zones.md @@ -19,6 +19,6 @@ Search `test*` returns: test.com., test.net. Search `*example` returns: example.com., another.example.com. Search `*e*` returns: another.example.com., example.com., test.com., test.net., xyz.efg. -[![Seach zones My Zones tab](../img/portal/search-zones-my-zones.png){:.screenshot}](../img/portal/search-zones-my-zones.png) +[![Search zones My Zones tab](../img/portal/search-zones-my-zones.png){:.screenshot}](../img/portal/search-zones-my-zones.png) [![Search zones All Zones tab](../img/portal/search-zones-all-zones.png){:.screenshot}](../img/portal/search-zones-all-zones.png) diff --git a/modules/docs/src/main/mdoc/tools.md b/modules/docs/src/main/mdoc/tools.md index 588714566..dbd348831 100644 --- a/modules/docs/src/main/mdoc/tools.md +++ b/modules/docs/src/main/mdoc/tools.md @@ -18,7 +18,7 @@ There are a few existing tools for working with the VinylDNS API. 
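The zone-search wildcard examples above can be approximated with Python's `fnmatch`; the implied-trailing-wildcard behavior here is inferred from the documented examples, not taken from the portal's actual implementation:

```python
import fnmatch

zones = ["another.example.com.", "example.com.", "test.com.", "test.net.", "xyz.efg."]

def search(pattern):
    # Assumed semantics inferred from the examples: '*' matches any run of
    # characters, and a trailing wildcard is implied after the pattern.
    return sorted(z for z in zones if fnmatch.fnmatchcase(z, pattern + "*"))

print(search("test*"))     # ['test.com.', 'test.net.']
print(search("*example"))  # ['another.example.com.', 'example.com.']
print(search("*e*"))       # all five zone names
```

`fnmatchcase` is used instead of `fnmatch` to avoid platform-dependent case normalization.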
## Integrations -- [external-dns](https://github.com/kubernetes-incubator/external-dns) - DNS provider-agnostic syncronization of Cloud Foundry and Kubernetes resources, including VinylDNS +- [external-dns](https://github.com/kubernetes-incubator/external-dns) - DNS provider-agnostic synchronization of Cloud Foundry and Kubernetes resources, including VinylDNS ## Coming Soon - [vinyldns-ansible](https://github.com/vinyldns/vinyldns-ansible) - Ansible integration with VinylDNS diff --git a/project/plugins.sbt b/project/plugins.sbt index d5804b67b..7278e5599 100644 --- a/project/plugins.sbt +++ b/project/plugins.sbt @@ -26,10 +26,10 @@ addSbtPlugin("org.scalameta" % "sbt-scalafmt" % "2.3.4") addSbtPlugin("com.typesafe.sbt" % "sbt-license-report" % "1.2.0") -addSbtPlugin("com.47deg" % "sbt-microsites" % "1.1.5") +addSbtPlugin("com.47deg" % "sbt-microsites" % "1.3.4") addSbtPlugin("org.xerial.sbt" % "sbt-sonatype" % "2.3") addSbtPlugin("io.crashbox" % "sbt-gpg" % "0.2.0") -addSbtPlugin("org.scalameta" % "sbt-mdoc" % "2.2.10" ) +addSbtPlugin("org.scalameta" % "sbt-mdoc" % "2.2.24" ) From 5fe33eee220b896aa5196e081d95e28f1adb04e9 Mon Sep 17 00:00:00 2001 From: "Emerle, Ryan" Date: Thu, 21 Oct 2021 14:21:37 -0400 Subject: [PATCH 18/82] Update docs - Fix broken links - Fix formatting - Add Makefile for running via docker - Move README.md from `modules/docs/src/main/mdoc` to `modules/docs` to be consistent with `modules/portal` --- build.sbt | 4 +- modules/docs/Makefile | 33 +++++++++ modules/docs/README.md | 68 +++++++++++++++++++ modules/docs/src/main/mdoc/README.md | 34 ---------- .../src/main/mdoc/api/approve-batchchange.md | 2 +- .../docs/src/main/mdoc/api/auth-mechanism.md | 25 ++++--- .../src/main/mdoc/api/batchchange-errors.md | 48 ++++++------- .../src/main/mdoc/api/list-group-activity.md | 4 +- modules/docs/src/main/mdoc/operator/pre.md | 3 +- modules/docs/src/main/mdoc/permissions.md | 6 +- modules/docs/src/main/mdoc/tools.md | 4 -- 11 files changed, 149 
insertions(+), 82 deletions(-) create mode 100644 modules/docs/Makefile create mode 100644 modules/docs/README.md delete mode 100644 modules/docs/src/main/mdoc/README.md diff --git a/build.sbt b/build.sbt index 966b1ced4..35b13e408 100644 --- a/build.sbt +++ b/build.sbt @@ -34,10 +34,8 @@ lazy val sharedSettings = Seq( ) else Seq.empty ), - // scala format scalafmtOnCompile := getPropertyFlagOrDefault("build.scalafmtOnCompile", false), - // coverage options coverageMinimum := 85, coverageFailOnMinimum := true, @@ -286,7 +284,7 @@ lazy val docSettings = Seq( micrositeHomepage := "https://vinyldns.io", micrositeDocumentationUrl := "/api", micrositeDocumentationLabelDescription := "API Documentation", - micrositeHighlightLanguages ++= Seq("json"), + micrositeHighlightLanguages ++= Seq("json", "yaml", "bnf", "plaintext"), micrositeGitterChannel := false, micrositeExtraMdFiles := Map( file("CONTRIBUTING.md") -> ExtraMdFileConfig( diff --git a/modules/docs/Makefile b/modules/docs/Makefile new file mode 100644 index 000000000..4756129c2 --- /dev/null +++ b/modules/docs/Makefile @@ -0,0 +1,33 @@ +SHELL=bash +IMAGE_NAME=vinyldns-build-docs +ROOT_DIR:=$(shell dirname $(realpath $(lastword $(MAKEFILE_LIST)))) + +# Check that the required version of make is being used +REQ_MAKE_VER:=3.82 +ifneq ($(REQ_MAKE_VER),$(firstword $(sort $(MAKE_VERSION) $(REQ_MAKE_VER)))) + $(error The version of MAKE $(REQ_MAKE_VER) or higher is required; you are running $(MAKE_VERSION)) +endif + +# Extract arguments for `make run` +EXTRACT_ARGS=true +ifeq (run,$(firstword $(MAKECMDGOALS))) + EXTRACT_ARGS=true +endif +ifeq ($(EXTRACT_ARGS),true) + # use the rest as arguments for "run" + WITH_ARGS := $(wordlist 2,$(words $(MAKECMDGOALS)),$(MAKECMDGOALS)) + # ...and turn them into do-nothing targets + $(eval $(WITH_ARGS):;@:) +endif + + +.ONESHELL: + +.PHONY: all run + +all: run + +run: + @set -euo pipefail + cd ../.. 
+ docker run -it --rm -p "4000:4000" -v "$$(pwd):/build" vinyldns/build:base-build-docs /bin/bash diff --git a/modules/docs/README.md b/modules/docs/README.md new file mode 100644 index 000000000..754e5d278 --- /dev/null +++ b/modules/docs/README.md @@ -0,0 +1,68 @@ +# VinylDNS documentation site + +https://www.vinyldns.io/ + +## Publication + +The VinylDNS documentation is published to the `gh-pages` branch after each successful master branch build. This is +configured through Travis CI. + +## Documentation Structure + +- The documentation site is built with the [sbt-microsites](https://47deg.github.io/sbt-microsites/) plugin. +- The [docs module](https://github.com/vinyldns/vinyldns/tree/master/modules/docs/src/main) contains most content for + the documentation site: + - The text content is in the `src/main/mdoc` directory + - The primary menu is built through setting a position value in the linked file `src/main/mdoc/index.md` + - The sidebar menu is maintained in the `src/main/resources/microsite/data/menu.yml` file + - Images are stored in the `/src/main/resources/microsite/img/` directory + - Custom CSS is stored in the `src/main/resources/microsite/css/custom.css` file +- The [Contributing Guide](https://www.vinyldns.io/contributing.html) is + the [CONTRIBUTING.md](https://github.com/vinyldns/vinyldns/blob/master/CONTRIBUTING.md) file at the root of the + VinylDNS project. +- The sbt-microsite configuration is in the docSettings section of + the [build.sbt](https://github.com/vinyldns/vinyldns/blob/master/build.sbt) in the root of the VinylDNS project. + +## Build with Docker + +To build with Docker, from the `modules/docs` directory you can run `make`. This will provide you with a prompt in a +container that is configured with all of the prerequisites and the `/build` directory will be mapped to the VinylDNS +root directory. From there you can follow the [steps below](#build-locally).
+ +Example: + +```bash +$ make +root@1e7375bec453:/build# sbt +sbt:root> project docs +[info] set current project to docs (in build file:/build/) +sbt:docs> makeMicrosite +``` + +## Build Locally + +To build the documentation site you will need `Jekyll 4.0+` installed. This is installed by default in +the [Docker container](#build-with-docker). + +In the terminal enter: + +1. `sbt` +1. `project docs` +1. `makeMicrosite` + +In a separate tab enter: + +1. `cd modules/docs/target/site` +2. `jekyll serve --host 0.0.0.0` + - By default `jekyll` listens on `127.0.0.1` which will cause problems when using Docker, so we specify that it + should listen on all interfaces by providing `--host 0.0.0.0` +3. View in the browser at http://localhost:4000/ + - Note: port 4000 is mapped to localhost by the Docker container as well + +Tips: + +- If you make any changes to the documentation you'll need to run `makeMicrosite` again. You don't need to restart + Jekyll. +- If you only need to build the microsite once you can run `sbt ";project docs ;makeMicrosite"` then follow the Jekyll + steps from the same tab. +- If you delete files you may need to stop Jekyll and delete the target directory before + running `makeMicrosite` again to see the site as expected locally. diff --git a/modules/docs/src/main/mdoc/README.md b/modules/docs/src/main/mdoc/README.md deleted file mode 100644 index 35c424539..000000000 --- a/modules/docs/src/main/mdoc/README.md +++ /dev/null @@ -1,34 +0,0 @@ -# VinylDNS documentation site - -https://www.vinyldns.io/ - -## Publication -The VinylDNS documentation is published to the `gh-pages` branch after each successful master branch build. This is configured through Travis CI. - -## Documentation Structure -- The documentation site is built with the [sbt-microsites](https://47deg.github.io/sbt-microsites/) plugin.
-- The [docs module](https://github.com/vinyldns/vinyldns/tree/master/modules/docs/src/main) contains most content for the documentation site: - - The text content is in the [docs](https://github.com/vinyldns/vinyldns/tree/master/modules/docs/docs/) directory - - The primary menu is built through setting a position value in the linked file ([example](https://github.com/vinyldns/vinyldns/blob/master/modules/docs/src/main/tut/index.md)) or in [build.sbt](https://github.com/vinyldns/vinyldns/blob/master/build.sbt) if the target link is not a file in the docs module. - - The sidebar menu is maintained in the [menu.yml](https://github.com/vinyldns/vinyldns/tree/master/modules/docs/docs/menu.yml) - - Images are stored in the [img](https://github.com/vinyldns/vinyldns/tree/master/modules/docs/src/main/resources/microsite/img/) directory. - - Custom CSS is stored in the [custom.css](https://github.com/vinyldns/vinyldns/tree/master/modules/docs/src/main/resources/microsite/css/custom.css) file. -- The [Contributing Guide](https://www.vinyldns.io/contributing.html) is the [CONTRIBUTING.md](https://github.com/vinyldns/vinyldns/blob/master/CONTRIBUTING.md) file at the root of the VinylDNS project. -- The sbt-microsite configuration is in the docSettings section of the [build.sbt](https://github.com/vinyldns/vinyldns/blob/master/build.sbt) in the root of the VinylDNS project. - -## Build Locally -In the terminal enter: -1. `sbt` -1. `project docs` -1. `makeMicrosite` - -In a separate tab enter: -1. `cd modules/docs/target/site` -1. `jekyll serve` -1. View in the browser at http://localhost:4000/ - -Tips: -* If you make any changes to the documentation you'll need to run `makeMicrosite` again. -You don't need to restart Jekyll. -* If you only need to build the microsite once you can run `sbt ";project docs ;makeMicrosite"` then follow the jekyll steps from the same tab. 
-* If you delete files you may need to stop Jekyll and delete the target directory before running `makeMicrosite` again to see the site as expected locally. diff --git a/modules/docs/src/main/mdoc/api/approve-batchchange.md b/modules/docs/src/main/mdoc/api/approve-batchchange.md index eeb0d0feb..722c30830 100644 --- a/modules/docs/src/main/mdoc/api/approve-batchchange.md +++ b/modules/docs/src/main/mdoc/api/approve-batchchange.md @@ -29,7 +29,7 @@ reviewComment | string | no | Optional approval explanation. | #### EXAMPLE HTTP REQUEST -``` +```json { "reviewComment": "Comments are optional." } diff --git a/modules/docs/src/main/mdoc/api/auth-mechanism.md b/modules/docs/src/main/mdoc/api/auth-mechanism.md index b41dabd57..0b7b50a45 100644 --- a/modules/docs/src/main/mdoc/api/auth-mechanism.md +++ b/modules/docs/src/main/mdoc/api/auth-mechanism.md @@ -1,22 +1,25 @@ --- -layout: docs +layout: docs title: "Authentication" section: "api" --- # API Authentication -The API Authentication for VinylDNS is modeled after the AWS Signature Version 4 Signing process. The AWS documentation for it can be found -[here](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html). Similar to how the AWS Signature Version 4 signing -process adds authentication information to AWS requests, VinylDNS's API Authenticator also adds authentication information to every API request. - +The API Authentication for VinylDNS is modeled after the AWS Signature Version 4 Signing process. The AWS documentation +for it can be found +[here](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html). Similar to how the AWS Signature Version +4 signing process adds authentication information to AWS requests, VinylDNS's API Authenticator also adds authentication +information to every API request. #### VinylDNS API Authentication Process 1. Retrieve the Authorization HTTP Header (Auth Header) from the HTTP Request Context. -2. 
Parse the retrieved Auth Header into an AWS *[String to Sign](https://docs.aws.amazon.com/general/latest/gr/sigv4-create-string-to-sign.html)* structure which should be in the form: +2. Parse the retrieved Auth Header into an + AWS *[String to Sign](https://docs.aws.amazon.com/general/latest/gr/sigv4-create-string-to-sign.html)* structure + which should be in the form: -``` +```plaintext StringToSign = Algorithm + \n + RequestDateTime + \n + @@ -25,20 +28,22 @@ StringToSign = ``` *String to Sign* Example: -``` + +```plaintext AWS4-HMAC-SHA256 20150830T123600Z 20150830/us-east-1/iam/aws4_request f536975d06c0309214f805bb90ccff089219ecd68b2577efef23edd43b7e1a59 ``` + 3. Extract the access key from the Auth Header and search for the account associated with the access key. 4. Validate the signature of the request. 5. Build the authentication information, which essentially contains all the authorized accounts for the signed in user. - #### Authentication Failure Response -If any these validations fail, a 401 (Unauthorized) or a 403 (Forbidden) error will be thrown; otherwise unanticipated exceptions will simply bubble out and result as 500s or 503s. +If any of these validations fail, a 401 (Unauthorized) or a 403 (Forbidden) error will be thrown; otherwise unanticipated +exceptions will simply bubble out and result as 500s or 503s. 1. If the Auth Header is not found, then a 401 (Unauthorized) error is returned. 2. If the Auth Header cannot be parsed, then a 403 (Forbidden) error is returned. diff --git a/modules/docs/src/main/mdoc/api/batchchange-errors.md b/modules/docs/src/main/mdoc/api/batchchange-errors.md index dd649eda5..a30347d29 100644 --- a/modules/docs/src/main/mdoc/api/batchchange-errors.md +++ b/modules/docs/src/main/mdoc/api/batchchange-errors.md @@ -127,14 +127,14 @@ the VinylDNS instance is configured to have manual review disabled.
- [Missing Owner Group Id](#MissingOwnerGroupId) - [Not a Member of Owner Group](#NotAMemberOfOwnerGroup) - [High Value Domain](#HighValueDomain) -- [CNAME Cannot be the Same Name as Zone Name]("CnameApexError") +- [CNAME Cannot be the Same Name as Zone Name](#CnameApexError) ### Non-Fatal Errors #### Zone Discovery Failed ##### Error Message: -``` +```plaintext Zone Discovery Failed: zone for "" does not exist in VinylDNS. If zone exists, then it must be connected to in VinylDNS. ``` @@ -155,7 +155,7 @@ this error could indicate that a zone needs to be created outside of VinylDNS an ##### Error Message: -``` +```plaintext Record set with name requires manual review. ``` @@ -168,7 +168,7 @@ Based on a [configurable list](../operator/config-api.html#manual-review-domains ##### Error Message: -``` +```plaintext Invalid domain name: "", valid domain names must be letters, numbers, underscores, and hyphens, joined by dots, and terminate with a dot. ``` @@ -180,7 +180,7 @@ They must also be absolute, which means they end with a dot. Syntax: -``` +```bnf ::= | " " ::=