Init: mediaserver

2023-02-08 12:13:28 +01:00
parent 848bc9739c
commit f7c23d4ba9
31914 changed files with 6175775 additions and 0 deletions


@@ -0,0 +1,143 @@
# -*- coding: utf-8 -*-
# Copyright: (c) 2016 Matt Davis, <mdavis@ansible.com>
# Copyright: (c) 2016 Chris Houseknecht, <house@redhat.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
class ModuleDocFragment(object):
# Azure doc fragment
DOCUMENTATION = r'''
options:
ad_user:
description:
- Active Directory username. Use when authenticating with an Active Directory user rather than service
principal.
type: str
password:
description:
- Active Directory user password. Use when authenticating with an Active Directory user rather than service
principal.
type: str
profile:
description:
- Security profile found in ~/.azure/credentials file.
type: str
subscription_id:
description:
- Your Azure subscription ID.
type: str
client_id:
description:
- Azure client ID. Use when authenticating with a Service Principal.
type: str
secret:
description:
- Azure client secret. Use when authenticating with a Service Principal.
type: str
tenant:
description:
- Azure tenant ID. Use when authenticating with a Service Principal.
type: str
cloud_environment:
description:
- For cloud environments other than the US public cloud, the environment name (as defined by Azure Python SDK, eg, C(AzureChinaCloud),
C(AzureUSGovernment)), or a metadata discovery endpoint URL (required for Azure Stack). Can also be set via credential file profile or
the C(AZURE_CLOUD_ENVIRONMENT) environment variable.
type: str
default: AzureCloud
version_added: '0.0.1'
adfs_authority_url:
description:
- Azure AD authority URL. Use when authenticating with username/password and you have your own ADFS authority.
type: str
version_added: '0.0.1'
cert_validation_mode:
description:
- Controls the certificate validation behavior for Azure endpoints. By default, all modules will validate the server certificate, but
when an HTTPS proxy is in use, or against Azure Stack, it may be necessary to disable this behavior by passing C(ignore). Can also be
set via credential file profile or the C(AZURE_CERT_VALIDATION) environment variable.
type: str
choices: [ ignore, validate ]
version_added: '0.0.1'
auth_source:
description:
- Controls the source of the credentials to use for authentication.
- Can also be set via the C(ANSIBLE_AZURE_AUTH_SOURCE) environment variable.
- When set to C(auto) (the default) the precedence is module parameters -> C(env) -> C(credential_file) -> C(cli).
- When set to C(env), the credentials will be read from the environment variables.
- When set to C(credential_file), it will read the profile from C(~/.azure/credentials).
- When set to C(cli), the credentials will be sourced from the Azure CLI profile. C(subscription_id) or the environment variable
C(AZURE_SUBSCRIPTION_ID) can be used to identify the subscription ID if more than one is present; otherwise the default
az cli subscription is used.
- When set to C(msi), the host machine must be an azure resource with an enabled MSI extension. C(subscription_id) or the
environment variable C(AZURE_SUBSCRIPTION_ID) can be used to identify the subscription ID if the resource is granted
access to more than one subscription, otherwise the first subscription is chosen.
- C(msi) support was added in Ansible 2.6.
type: str
default: auto
choices:
- auto
- cli
- credential_file
- env
- msi
version_added: '0.0.1'
api_profile:
description:
- Selects an API profile to use when communicating with Azure services. Default value of C(latest) is appropriate for public clouds;
future values will allow use with Azure Stack.
type: str
default: latest
version_added: '0.0.1'
log_path:
description:
- Parent argument.
type: str
log_mode:
description:
- Parent argument.
type: str
x509_certificate_path:
description:
- Path to the X509 certificate used to create the service principal in PEM format.
- The certificate must be appended to the private key.
- Use when authenticating with a Service Principal.
type: path
version_added: '1.14.0'
thumbprint:
description:
- The thumbprint of the private key specified in I(x509_certificate_path).
- Use when authenticating with a Service Principal.
- Required if I(x509_certificate_path) is defined.
type: str
version_added: '1.14.0'
requirements:
- python >= 2.7
- The host that executes this module must have the azure.azcollection collection installed via galaxy
- All python packages listed in the collection's requirements-azure.txt must be installed via pip on the host that executes modules from azure.azcollection
- Full installation instructions may be found at https://galaxy.ansible.com/azure/azcollection
notes:
- For authentication with Azure you can pass parameters, set environment variables, use a profile stored
in ~/.azure/credentials, or log in before you run your tasks or playbook with C(az login).
- Authentication is also possible using a service principal or Active Directory user.
- To authenticate via service principal, pass subscription_id, client_id, secret and tenant or set environment
variables AZURE_SUBSCRIPTION_ID, AZURE_CLIENT_ID, AZURE_SECRET and AZURE_TENANT.
- To authenticate via Active Directory user, pass ad_user and password, or set AZURE_AD_USER and
AZURE_PASSWORD in the environment.
- "Alternatively, credentials can be stored in ~/.azure/credentials. This is an ini file containing
a [default] section and the following keys: subscription_id, client_id, secret and tenant or
subscription_id, ad_user and password. It is also possible to add additional profiles. Specify the profile
by passing profile or setting AZURE_PROFILE in the environment."
seealso:
- name: Sign in with Azure CLI
link: https://docs.microsoft.com/en-us/cli/azure/authenticate-azure-cli?view=azure-cli-latest
description: How to authenticate using the C(az login) command.
'''
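# Illustrative sketch only (not part of the collection): one way the
# ~/.azure/credentials profile described in the notes above could be read.
# The function name is hypothetical and assumes a Python 3 host.
def _example_read_credential_profile(profile='default'):
    import configparser
    import os.path
    parser = configparser.ConfigParser()
    parser.read(os.path.expanduser('~/.azure/credentials'))
    # expected keys per the notes above: subscription_id, client_id, secret, tenant
    # (or subscription_id, ad_user, password)
    return dict(parser.items(profile)) if parser.has_section(profile) else {}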


@@ -0,0 +1,94 @@
# -*- coding: utf-8 -*-
# Copyright: (c) 2016 Matt Davis, <mdavis@ansible.com>
# Copyright: (c) 2016 Chris Houseknecht, <house@redhat.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
class ModuleDocFragment(object):
# Azure doc fragment
DOCUMENTATION = r'''
options:
plugin:
description: marks this as an instance of the 'azure_rm' plugin
required: true
choices: ['azure_rm', 'azure.azcollection.azure_rm']
include_vm_resource_groups:
description: A list of resource group names to search for virtual machines. '\*' will include all resource
groups in the subscription.
default: ['*']
include_vmss_resource_groups:
description: A list of resource group names to search for virtual machine scale sets (VMSSs). '\*' will
include all resource groups in the subscription.
default: []
fail_on_template_errors:
description: When false, template failures during group and filter processing are silently ignored (eg,
if a filter or group expression refers to an undefined host variable)
choices: [True, False]
default: True
keyed_groups:
description: Creates groups based on the value of a host variable. Requires a list of dictionaries,
defining C(key) (the source dictionary-typed variable), C(prefix) (the prefix to use for the new group
name), and optionally C(separator) (which defaults to C(_))
conditional_groups:
description: A mapping of group names to Jinja2 expressions. When the mapped expression is true, the host
is added to the named group.
hostvar_expressions:
description: A mapping of hostvar names to Jinja2 expressions. The value for each host is the result of the
Jinja2 expression (which may refer to any of the host's existing variables at the time this inventory
plugin runs).
exclude_host_filters:
description: Excludes hosts from the inventory with a list of Jinja2 conditional expressions. Each
expression in the list is evaluated for each host; when the expression is true, the host is excluded
from the inventory.
default: []
batch_fetch:
description: To improve performance, results are fetched using an unsupported batch API. Disabling
C(batch_fetch) uses a much slower serial fetch, resulting in many more round-trips. Generally only
useful for troubleshooting.
default: true
default_host_filters:
description: A default set of filters that is applied in addition to the conditions in
C(exclude_host_filters) to exclude powered-off and not-fully-provisioned hosts. Set this to a different
value or empty list if you need to include hosts in these states.
default: ['powerstate != "running"', 'provisioning_state != "succeeded"']
use_contrib_script_compatible_sanitization:
description:
- By default this plugin uses a general group-name sanitization to create safe and usable group names for use in Ansible.
This option allows you to override that, to ease migration from the old inventory script; it
matches the sanitization of groups when the script's ``replace_dash_in_groups`` option is set to ``False``.
To replicate the behavior of ``replace_dash_in_groups = True`` with constructed groups,
you will need to replace hyphens with underscores via the regex_replace filter for those entries.
- For this to work you should also turn off the TRANSFORM_INVALID_GROUP_CHARS setting,
otherwise the core engine will just use the standard sanitization on top.
- This is not the default, as such names break certain functionality: not all of the characters they contain are valid in the
Python identifiers that group names end up being used as.
type: bool
default: False
version_added: '0.0.1'
plain_host_names:
description:
- By default this plugin will use globally unique host names.
This option allows you to override that, and use the name that matches the old inventory script naming.
- This is not the default, as these names are not truly unique, and can conflict with other hosts.
The default behavior will add extra hashing to the end of the hostname to prevent such conflicts.
type: bool
default: False
version_added: '0.0.1'
hostnames:
description:
- A list of Jinja2 expressions in order of precedence to compose inventory_hostname.
- Ignores expression if result is an empty string or None value.
- By default, inventory_hostname is generated to be globally unique based on the VM host name.
See C(plain_host_names) for more details on the default.
- An expression of 'default' will force using the default hostname generator if no previous hostname expression
resulted in a valid hostname.
- Use ``default_inventory_hostname`` to access the default hostname generator's value in any of the Jinja2 expressions.
type: list
elements: str
default: [default]
'''
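# Illustrative sketch only (not part of the plugin): how a keyed_groups entry
# maps a host variable to a group name, per the option documentation above.
# The helper name is hypothetical.
def _example_keyed_group_name(prefix, value, separator='_'):
    # e.g. _example_keyed_group_name('azure_loc', 'eastus') -> 'azure_loc_eastus'
    return '{0}{1}{2}'.format(prefix, separator, value)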


@@ -0,0 +1,31 @@
# -*- coding: utf-8 -*-
# Copyright: (c) 2016, Matt Davis, <mdavis@ansible.com>
# Copyright: (c) 2016, Chris Houseknecht, <house@redhat.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
class ModuleDocFragment(object):
# Azure doc fragment
DOCUMENTATION = r'''
options:
tags:
description:
- Dictionary of string:string pairs to assign as metadata to the object.
- Metadata tags on the object will be updated with any provided values.
- To remove tags set append_tags option to false.
- Currently, Azure DNS zones and Traffic Manager services do not allow the use of spaces in a tag.
- Azure Front Door doesn't support the use of # in the tag name.
- Azure Automation and Azure CDN only support 15 tags on resources.
type: dict
append_tags:
description:
- Use to control whether the tags field is canonical or just appends to existing tags.
- When canonical, any tags not found in the tags parameter will be removed from the object's metadata.
type: bool
default: yes
'''
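# Illustrative sketch only (not part of the collection): the merge-vs-replace
# semantics described above for I(append_tags). The helper name is hypothetical.
def _example_apply_tags(existing, new, append_tags=True):
    if append_tags:
        merged = dict(existing)   # existing metadata tags are kept...
        merged.update(new or {})  # ...and updated with any provided values
        return merged
    return dict(new or {})        # canonical: tags not provided are removed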


@@ -0,0 +1,649 @@
# Copyright (c) 2018 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = r'''
name: azure_rm
short_description: Azure Resource Manager inventory plugin
extends_documentation_fragment:
- azure.azcollection.azure
- azure.azcollection.azure_rm
- constructed
description:
- Query VM details from Azure Resource Manager
- Requires a YAML configuration file whose name ends with 'azure_rm.(yml|yaml)'
- By default, sets C(ansible_host) to the first public IP address found (preferring the primary NIC). If no
public IPs are found, it uses the first private IP (also preferring the primary NIC). The default may be
overridden via C(hostvar_expressions); see the examples.
'''
EXAMPLES = '''
# The following host variables are always available:
# public_ipv4_addresses: all public IP addresses, with the primary IP config from the primary NIC first
# public_dns_hostnames: all public DNS hostnames, with the primary IP config from the primary NIC first
# private_ipv4_addresses: all private IP addresses, with the primary IP config from the primary NIC first
# id: the VM's Azure resource ID, eg /subscriptions/00000000-0000-0000-1111-1111aaaabb/resourceGroups/my_rg/providers/Microsoft.Compute/virtualMachines/my_vm
# location: the VM's Azure location, eg 'westus', 'eastus'
# name: the VM's resource name, eg 'myvm'
# os_profile: The VM OS properties, a dictionary, only system is currently available, eg 'os_profile.system not in ['linux']'
# powerstate: the VM's current power state, eg: 'running', 'stopped', 'deallocated'
# provisioning_state: the VM's current provisioning state, eg: 'succeeded'
# tags: dictionary of the VM's defined tag values
# resource_type: the VM's resource type, eg: 'Microsoft.Compute/virtualMachines', 'Microsoft.Compute/virtualMachineScaleSets/virtualMachines'
# vmid: the VM's internal SMBIOS ID, eg: '36bca69d-c365-4584-8c06-a62f4a1dc5d2'
# vmss: if the VM is a member of a scaleset (vmss), a dictionary including the id and name of the parent scaleset
# availability_zone: availability zone in which VM is deployed, eg '1','2','3'
#
# The following host variables are sometimes available:
# computer_name: the operating system's hostname. Will not be available if the Azure agent is not installed and reporting it.
# sample 'myazuresub.azure_rm.yaml'
# required for all azure_rm inventory plugin configs
plugin: azure.azcollection.azure_rm
# forces this plugin to use a CLI auth session instead of the automatic auth source selection (eg, prevents the
# presence of 'ANSIBLE_AZURE_RM_X' environment variables from overriding CLI auth)
auth_source: cli
# fetches VMs from an explicit list of resource groups instead of default all (- '*')
include_vm_resource_groups:
- myrg1
- myrg2
# fetches VMs from VMSSs in all resource groups (defaults to no VMSS fetch)
include_vmss_resource_groups:
- '*'
# places a host in the named group if the associated condition evaluates to true
conditional_groups:
# since this will be true for every host, every host sourced from this inventory plugin config will be in the
# group 'all_the_hosts'
all_the_hosts: true
# if the VM's "name" variable contains "dbserver", it will be placed in the 'db_hosts' group
db_hosts: "'dbserver' in name"
# adds variables to each host found by this inventory plugin, whose values are the result of the associated expression
hostvar_expressions:
my_host_var:
# A statically-valued expression has to be both single and double-quoted, or use escaped quotes, since the outer
# layer of quotes will be consumed by YAML. Without the second set of quotes, it interprets 'staticvalue' as a
# variable instead of a string literal.
some_statically_valued_var: "'staticvalue'"
# overrides the default ansible_host value with a custom Jinja2 expression, in this case, the first DNS hostname, or
# if none are found, the first public IP address.
ansible_host: (public_dns_hostnames + public_ipv4_addresses) | first
# change how inventory_hostname is generated. Each item is a jinja2 expression similar to hostvar_expressions.
hostnames:
- tags.vm_name
- default # special var that uses the default hashed name
# places hosts in dynamically-created groups based on a variable value.
keyed_groups:
# places each host in a group named 'tag_(tag name)_(tag value)' for each tag on a VM.
- prefix: tag
key: tags
# places each host in a group named 'azure_loc_(location name)', depending on the VM's location
- prefix: azure_loc
key: location
# places host in a group named 'some_tag_X' using the value of the 'sometag' tag on a VM as X, and defaulting to the
# value 'none' (eg, the group 'some_tag_none') if the 'sometag' tag is not defined for a VM.
- prefix: some_tag
key: tags.sometag | default('none')
# excludes a host from the inventory when any of these expressions is true, can refer to any vars defined on the host
exclude_host_filters:
# excludes hosts in the eastus region
- location in ['eastus']
- tags['tagkey'] is defined and tags['tagkey'] == 'tagkey'
- tags['tagkey2'] is defined and tags['tagkey2'] == 'tagkey2'
# excludes hosts that are powered off
- powerstate != 'running'
'''
# FUTURE: do we need a set of sane default filters, separate from the user-definable ones?
# eg, powerstate==running, provisioning_state==succeeded
import hashlib
import json
import re
import uuid
try:
from queue import Queue, Empty
except ImportError:
from Queue import Queue, Empty
from collections import namedtuple
from ansible import release
from ansible.plugins.inventory import BaseInventoryPlugin, Constructable
from ansible.module_utils.six import iteritems
from ansible_collections.azure.azcollection.plugins.module_utils.azure_rm_common import AzureRMAuth
from ansible.errors import AnsibleParserError, AnsibleError
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.module_utils._text import to_native, to_bytes, to_text
from itertools import chain
try:
from msrest import ServiceClient, Serializer, Deserializer
from msrestazure import AzureConfiguration
from msrestazure.polling.arm_polling import ARMPolling
from msrestazure.tools import parse_resource_id
except ImportError:
AzureConfiguration = object
ARMPolling = object
parse_resource_id = object
ServiceClient = object
Serializer = object
Deserializer = object
pass
class AzureRMRestConfiguration(AzureConfiguration):
def __init__(self, credentials, subscription_id, base_url=None):
if credentials is None:
raise ValueError("Parameter 'credentials' must not be None.")
if subscription_id is None:
raise ValueError("Parameter 'subscription_id' must not be None.")
if not base_url:
base_url = 'https://management.azure.com'
super(AzureRMRestConfiguration, self).__init__(base_url)
self.add_user_agent('ansible-dynamic-inventory/{0}'.format(release.__version__))
self.credentials = credentials
self.subscription_id = subscription_id
UrlAction = namedtuple('UrlAction', ['url', 'api_version', 'handler', 'handler_args'])
# FUTURE: add Cacheable support once we have a sane serialization format
class InventoryModule(BaseInventoryPlugin, Constructable):
NAME = 'azure.azcollection.azure_rm'
def __init__(self):
super(InventoryModule, self).__init__()
self._serializer = Serializer()
self._deserializer = Deserializer()
self._hosts = []
self._filters = None
# FUTURE: use API profiles with defaults
self._compute_api_version = '2017-03-30'
self._network_api_version = '2015-06-15'
self._default_header_parameters = {'Content-Type': 'application/json; charset=utf-8'}
self._request_queue = Queue()
self.azure_auth = None
self._batch_fetch = False
def verify_file(self, path):
'''
:param loader: an ansible.parsing.dataloader.DataLoader object
:param path: the path to the inventory config file
:return the contents of the config file
'''
if super(InventoryModule, self).verify_file(path):
if re.match(r'.{0,}azure_rm\.y(a)?ml$', path):
return True
# display.debug("azure_rm inventory filename must end with 'azure_rm.yml' or 'azure_rm.yaml'")
return False
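# e.g. 'myazuresub.azure_rm.yml' and 'prod_azure_rm.yaml' match the pattern
# above, while 'azure.yml' does not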
def parse(self, inventory, loader, path, cache=True):
super(InventoryModule, self).parse(inventory, loader, path)
self._read_config_data(path)
if self.get_option('use_contrib_script_compatible_sanitization'):
self._sanitize_group_name = self._legacy_script_compatible_group_sanitization
self._batch_fetch = self.get_option('batch_fetch')
self._legacy_hostnames = self.get_option('plain_host_names')
self._filters = self.get_option('exclude_host_filters') + self.get_option('default_host_filters')
try:
self._credential_setup()
self._get_hosts()
except Exception:
raise
def _credential_setup(self):
auth_options = dict(
auth_source=self.get_option('auth_source'),
profile=self.get_option('profile'),
subscription_id=self.get_option('subscription_id'),
client_id=self.get_option('client_id'),
secret=self.get_option('secret'),
tenant=self.get_option('tenant'),
ad_user=self.get_option('ad_user'),
password=self.get_option('password'),
cloud_environment=self.get_option('cloud_environment'),
cert_validation_mode=self.get_option('cert_validation_mode'),
api_profile=self.get_option('api_profile'),
adfs_authority_url=self.get_option('adfs_authority_url')
)
self.azure_auth = AzureRMAuth(**auth_options)
self._clientconfig = AzureRMRestConfiguration(self.azure_auth.azure_credentials, self.azure_auth.subscription_id,
self.azure_auth._cloud_environment.endpoints.resource_manager)
self._client = ServiceClient(self._clientconfig.credentials, self._clientconfig)
def _enqueue_get(self, url, api_version, handler, handler_args=None):
if not handler_args:
handler_args = {}
self._request_queue.put_nowait(UrlAction(url=url, api_version=api_version, handler=handler, handler_args=handler_args))
def _enqueue_vm_list(self, rg='*'):
if not rg or rg == '*':
url = '/subscriptions/{subscriptionId}/providers/Microsoft.Compute/virtualMachines'
else:
url = '/subscriptions/{subscriptionId}/resourceGroups/{rg}/providers/Microsoft.Compute/virtualMachines'
url = url.format(subscriptionId=self._clientconfig.subscription_id, rg=rg)
self._enqueue_get(url=url, api_version=self._compute_api_version, handler=self._on_vm_page_response)
def _enqueue_vmss_list(self, rg=None):
if not rg or rg == '*':
url = '/subscriptions/{subscriptionId}/providers/Microsoft.Compute/virtualMachineScaleSets'
else:
url = '/subscriptions/{subscriptionId}/resourceGroups/{rg}/providers/Microsoft.Compute/virtualMachineScaleSets'
url = url.format(subscriptionId=self._clientconfig.subscription_id, rg=rg)
self._enqueue_get(url=url, api_version=self._compute_api_version, handler=self._on_vmss_page_response)
def _get_hosts(self):
for vm_rg in self.get_option('include_vm_resource_groups'):
self._enqueue_vm_list(vm_rg)
for vmss_rg in self.get_option('include_vmss_resource_groups'):
self._enqueue_vmss_list(vmss_rg)
if self._batch_fetch:
self._process_queue_batch()
else:
self._process_queue_serial()
constructable_config_strict = boolean(self.get_option('fail_on_template_errors'))
constructable_config_compose = self.get_option('hostvar_expressions')
constructable_config_groups = self.get_option('conditional_groups')
constructable_config_keyed_groups = self.get_option('keyed_groups')
constructable_hostnames = self.get_option('hostnames')
for h in self._hosts:
# FUTURE: track hostnames to warn if a hostname is repeated (can happen for legacy and for composed inventory_hostname)
inventory_hostname = self._get_hostname(h, hostnames=constructable_hostnames, strict=constructable_config_strict)
if self._filter_host(inventory_hostname, h.hostvars):
continue
self.inventory.add_host(inventory_hostname)
# FUTURE: configurable default IP list? can already do this via hostvar_expressions
self.inventory.set_variable(inventory_hostname, "ansible_host",
next(chain(h.hostvars['public_ipv4_addresses'], h.hostvars['private_ipv4_addresses']), None))
for k, v in iteritems(h.hostvars):
# FUTURE: configurable hostvar prefix? Makes docs harder...
self.inventory.set_variable(inventory_hostname, k, v)
# constructable delegation
self._set_composite_vars(constructable_config_compose, h.hostvars, inventory_hostname, strict=constructable_config_strict)
self._add_host_to_composed_groups(constructable_config_groups, h.hostvars, inventory_hostname, strict=constructable_config_strict)
self._add_host_to_keyed_groups(constructable_config_keyed_groups, h.hostvars, inventory_hostname, strict=constructable_config_strict)
# FUTURE: fix underlying inventory stuff to allow us to quickly access known groupvars from reconciled host
def _filter_host(self, inventory_hostname, hostvars):
self.templar.available_variables = hostvars
for condition in self._filters:
# FUTURE: should warn/fail if conditional doesn't return True or False
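# e.g. the filter 'powerstate != "running"' is wrapped below as
# '{% if powerstate != "running" %} True {% else %} False {% endif %}' before templating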
conditional = "{{% if {0} %}} True {{% else %}} False {{% endif %}}".format(condition)
try:
if boolean(self.templar.template(conditional)):
return True
except Exception as e:
if boolean(self.get_option('fail_on_template_errors')):
raise AnsibleParserError("Error evaluating filter condition '{0}' for host {1}: {2}".format(condition, inventory_hostname, to_native(e)))
continue
return False
def _get_hostname(self, host, hostnames=None, strict=False):
hostname = None
errors = []
for preference in hostnames:
if preference == 'default':
return host.default_inventory_hostname
try:
hostname = self._compose(preference, host.hostvars)
except Exception as e: # pylint: disable=broad-except
if strict:
raise AnsibleError("Could not compose %s as hostnames - %s" % (preference, to_native(e)))
else:
errors.append(
(preference, str(e))
)
if hostname:
return to_text(hostname)
raise AnsibleError(
'Could not template any hostname for host, errors for each preference: %s' % (
', '.join(['%s: %s' % (pref, err) for pref, err in errors])
)
)
def _process_queue_serial(self):
try:
while True:
item = self._request_queue.get_nowait()
resp = self.send_request(item.url, item.api_version)
item.handler(resp, **item.handler_args)
except Empty:
pass
def _on_vm_page_response(self, response, vmss=None):
next_link = response.get('nextLink')
if next_link:
self._enqueue_get(url=next_link, api_version=self._compute_api_version, handler=self._on_vm_page_response)
if 'value' in response:
for h in response['value']:
# FUTURE: add direct VM filtering by tag here (performance optimization)?
self._hosts.append(AzureHost(h, self, vmss=vmss, legacy_name=self._legacy_hostnames))
def _on_vmss_page_response(self, response):
next_link = response.get('nextLink')
if next_link:
self._enqueue_get(url=next_link, api_version=self._compute_api_version, handler=self._on_vmss_page_response)
# FUTURE: add direct VMSS filtering by tag here (performance optimization)?
for vmss in response['value']:
url = '{0}/virtualMachines'.format(vmss['id'])
# VMSS instances look close enough to regular VMs that we can share the handler impl...
self._enqueue_get(url=url, api_version=self._compute_api_version, handler=self._on_vm_page_response, handler_args=dict(vmss=vmss))
# use the undocumented /batch endpoint to bulk-send up to 500 requests in a single round-trip
#
def _process_queue_batch(self):
while True:
batch_requests = []
batch_item_index = 0
batch_response_handlers = dict()
try:
while batch_item_index < 100:
item = self._request_queue.get_nowait()
name = str(uuid.uuid4())
query_parameters = {'api-version': item.api_version}
req = self._client.get(item.url, query_parameters)
batch_requests.append(dict(httpMethod="GET", url=req.url, name=name))
batch_response_handlers[name] = item
batch_item_index += 1
except Empty:
pass
if not batch_requests:
break
batch_resp = self._send_batch(batch_requests)
key_name = None
if 'responses' in batch_resp:
key_name = 'responses'
elif 'value' in batch_resp:
key_name = 'value'
else:
raise AnsibleError("didn't find expected key responses/value in batch response")
for idx, r in enumerate(batch_resp[key_name]):
status_code = r.get('httpStatusCode')
returned_name = r['name']
result = batch_response_handlers[returned_name]
if status_code != 200:
# FUTURE: error-tolerant operation mode (eg, permissions)
raise AnsibleError("a batched request failed with status code {0}, url {1}".format(status_code, result.url))
# FUTURE: store/handle errors from individual handlers
result.handler(r['content'], **result.handler_args)
def _send_batch(self, batched_requests):
url = '/batch'
query_parameters = {'api-version': '2015-11-01'}
body_obj = dict(requests=batched_requests)
body_content = self._serializer.body(body_obj, 'object')
header = {'x-ms-client-request-id': str(uuid.uuid4())}
header.update(self._default_header_parameters)
request = self._client.post(url, query_parameters)
initial_response = self._client.send(request, header, body_content)
# FUTURE: configurable timeout?
poller = ARMPolling(timeout=2)
poller.initialize(client=self._client,
initial_response=initial_response,
deserialization_callback=lambda r: self._deserializer('object', r))
poller.run()
return poller.resource()
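# Illustrative shape of the batch exchange above (derived from this code, not
# an API reference): the request body is
#   {"requests": [{"httpMethod": "GET", "url": "...", "name": "<uuid>"}, ...]}
# and the polled result carries per-request results under a top-level
# 'responses' (or 'value') key, each with 'name', 'httpStatusCode' and 'content'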
def send_request(self, url, api_version):
query_parameters = {'api-version': api_version}
req = self._client.get(url, query_parameters)
resp = self._client.send(req, self._default_header_parameters, stream=False)
resp.raise_for_status()
content = resp.content
return json.loads(content)
@staticmethod
def _legacy_script_compatible_group_sanitization(name):
# note that while this mirrors what the script used to do, it has many issues with unicode and usability in python
regex = re.compile(r"[^A-Za-z0-9\_\-]")
return regex.sub('_', name)
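# e.g. _legacy_script_compatible_group_sanitization('my-host.example.com')
# returns 'my-host_example_com' (dashes kept, other characters replaced)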
# VM list (all, N resource groups): VM -> InstanceView, N NICs, N PublicIPAddress)
# VMSS VMs (all SS, N specific SS, N resource groups?): SS -> VM -> InstanceView, N NICs, N PublicIPAddress)
class AzureHost(object):
_powerstate_regex = re.compile('^PowerState/(?P<powerstate>.+)$')
def __init__(self, vm_model, inventory_client, vmss=None, legacy_name=False):
self._inventory_client = inventory_client
self._vm_model = vm_model
self._vmss = vmss
self._instanceview = None
self._powerstate = "unknown"
self.nics = []
if legacy_name:
self.default_inventory_hostname = vm_model['name']
else:
# Azure often doesn't provide a globally-unique name, so use resource name + a chunk of ID hash
self.default_inventory_hostname = '{0}_{1}'.format(vm_model['name'], hashlib.sha1(to_bytes(vm_model['id'])).hexdigest()[0:4])
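# e.g. a VM named 'myvm' might become 'myvm_1a2b' (the suffix is the first 4 hex
# chars of the SHA-1 of its resource ID; the value shown is hypothetical)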
self._hostvars = {}
inventory_client._enqueue_get(url="{0}/instanceView".format(vm_model['id']),
api_version=self._inventory_client._compute_api_version,
handler=self._on_instanceview_response)
nic_refs = vm_model['properties']['networkProfile']['networkInterfaces']
for nic in nic_refs:
# single-nic instances don't set primary, so figure it out...
is_primary = nic.get('properties', {}).get('primary', len(nic_refs) == 1)
inventory_client._enqueue_get(url=nic['id'], api_version=self._inventory_client._network_api_version,
handler=self._on_nic_response,
handler_args=dict(is_primary=is_primary))
@property
def hostvars(self):
if self._hostvars != {}:
return self._hostvars
system = "unknown"
if 'osProfile' in self._vm_model['properties']:
if 'linuxConfiguration' in self._vm_model['properties']['osProfile']:
system = 'linux'
if 'windowsConfiguration' in self._vm_model['properties']['osProfile']:
system = 'windows'
else:
osType = self._vm_model['properties']['storageProfile']['osDisk']['osType']
if osType == 'Linux':
system = 'linux'
if osType == 'Windows':
system = 'windows'
av_zone = None
if 'zones' in self._vm_model:
av_zone = self._vm_model['zones']
new_hostvars = dict(
network_interface=[],
mac_address=[],
network_interface_id=[],
security_group_id=[],
security_group=[],
public_ipv4_addresses=[],
public_dns_hostnames=[],
private_ipv4_addresses=[],
id=self._vm_model['id'],
location=self._vm_model['location'],
name=self._vm_model['name'],
computer_name=self._vm_model['properties'].get('osProfile', {}).get('computerName'),
availability_zone=av_zone,
powerstate=self._powerstate,
provisioning_state=self._vm_model['properties']['provisioningState'].lower(),
tags=self._vm_model.get('tags', {}),
resource_type=self._vm_model.get('type', "unknown"),
vmid=self._vm_model['properties']['vmId'],
os_profile=dict(
system=system,
),
vmss=dict(
id=self._vmss['id'],
name=self._vmss['name'],
) if self._vmss else {},
virtual_machine_size=self._vm_model['properties']['hardwareProfile']['vmSize'] if self._vm_model['properties'].get('hardwareProfile') else None,
plan=self._vm_model['properties']['plan']['name'] if self._vm_model['properties'].get('plan') else None,
resource_group=parse_resource_id(self._vm_model['id']).get('resource_group').lower(),
default_inventory_hostname=self.default_inventory_hostname,
)
# set nic-related values from the primary NIC first
for nic in sorted(self.nics, key=lambda n: n.is_primary, reverse=True):
# and from the primary IP config per NIC first
for ipc in sorted(nic._nic_model['properties']['ipConfigurations'], key=lambda i: i['properties'].get('primary', False), reverse=True):
private_ip = ipc['properties'].get('privateIPAddress')
if private_ip:
new_hostvars['private_ipv4_addresses'].append(private_ip)
pip_id = ipc['properties'].get('publicIPAddress', {}).get('id')
if pip_id:
new_hostvars['public_ip_id'] = pip_id
pip = nic.public_ips[pip_id]
new_hostvars['public_ip_name'] = pip._pip_model['name']
new_hostvars['public_ipv4_addresses'].append(pip._pip_model['properties'].get('ipAddress', None))
pip_fqdn = pip._pip_model['properties'].get('dnsSettings', {}).get('fqdn')
if pip_fqdn:
new_hostvars['public_dns_hostnames'].append(pip_fqdn)
new_hostvars['mac_address'].append(nic._nic_model['properties'].get('macAddress'))
new_hostvars['network_interface'].append(nic._nic_model['name'])
new_hostvars['network_interface_id'].append(nic._nic_model['id'])
if nic._nic_model['properties'].get('networkSecurityGroup'):
    nsg_id = nic._nic_model['properties']['networkSecurityGroup']['id']
    new_hostvars['security_group_id'].append(nsg_id)
    new_hostvars['security_group'].append(parse_resource_id(nsg_id)['resource_name'])
# set image and os_disk
new_hostvars['image'] = {}
new_hostvars['os_disk'] = {}
new_hostvars['data_disks'] = []
storageProfile = self._vm_model['properties'].get('storageProfile')
if storageProfile:
imageReference = storageProfile.get('imageReference')
if imageReference:
if imageReference.get('publisher'):
new_hostvars['image'] = dict(
sku=imageReference.get('sku'),
publisher=imageReference.get('publisher'),
version=imageReference.get('version'),
offer=imageReference.get('offer')
)
elif imageReference.get('id'):
new_hostvars['image'] = dict(
id=imageReference.get('id')
)
osDisk = storageProfile.get('osDisk')
new_hostvars['os_disk'] = dict(
name=osDisk.get('name'),
operating_system_type=osDisk.get('osType').lower() if osDisk.get('osType') else None,
id=osDisk.get('managedDisk', {}).get('id')
)
new_hostvars['data_disks'] = [
dict(
name=dataDisk.get('name'),
lun=dataDisk.get('lun'),
id=dataDisk.get('managedDisk', {}).get('id')
) for dataDisk in storageProfile.get('dataDisks', [])
]
self._hostvars = new_hostvars
return self._hostvars
def _on_instanceview_response(self, vm_instanceview_model):
self._instanceview = vm_instanceview_model
self._powerstate = next((self._powerstate_regex.match(s.get('code', '')).group('powerstate')
for s in vm_instanceview_model.get('statuses', []) if self._powerstate_regex.match(s.get('code', ''))), 'unknown')
def _on_nic_response(self, nic_model, is_primary=False):
nic = AzureNic(nic_model=nic_model, inventory_client=self._inventory_client, is_primary=is_primary)
self.nics.append(nic)
class AzureNic(object):
def __init__(self, nic_model, inventory_client, is_primary=False):
self._nic_model = nic_model
self.is_primary = is_primary
self._inventory_client = inventory_client
self.public_ips = {}
if nic_model.get('properties', {}).get('ipConfigurations'):
for ipc in nic_model['properties']['ipConfigurations']:
pip = ipc['properties'].get('publicIPAddress')
if pip:
self._inventory_client._enqueue_get(url=pip['id'], api_version=self._inventory_client._network_api_version, handler=self._on_pip_response)
def _on_pip_response(self, pip_model):
self.public_ips[pip_model['id']] = AzurePip(pip_model)
class AzurePip(object):
def __init__(self, pip_model):
self._pip_model = pip_model


@@ -0,0 +1,200 @@
# Copyright (c) 2022 Hai Cao, <t-haicao@microsoft.com>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
lookup: azure_keyvault_secret
author:
- Hai Cao <t-haicao@microsoft.com>
version_added: '1.12.0'
requirements:
- requests
- azure
- msrest
short_description: Read secret from Azure Key Vault.
description:
- This lookup returns the content of secret saved in Azure Key Vault.
- When the Ansible host is an MSI-enabled Azure VM, the user doesn't need to provide any credentials to access Azure Key Vault.
options:
_terms:
description: Secret name; a version can be included like secret_name/secret_version.
required: True
vault_url:
description: URL of the Azure Key Vault.
required: True
client_id:
description: Client ID of the service principal that has access to the Azure Key Vault.
secret:
description: Secret of the service principal.
tenant_id:
description: Tenant ID of the service principal.
notes:
- If version is not provided, this plugin will return the latest version of the secret.
- If the Ansible host is an Azure Virtual Machine with MSI enabled, client_id, secret and tenant aren't required.
- For enabling MSI on an Azure VM, please refer to this doc https://docs.microsoft.com/en-us/azure/active-directory/managed-service-identity/
- After enabling MSI on an Azure VM, remember to grant the VM access to the Key Vault by adding a new Access Policy in the Azure Portal.
- If MSI is not enabled on the Ansible host, a valid service principal with access to the Key Vault must be provided.
- To use a plugin from a collection, please reference the full namespace, collection name, and lookup plugin name that you want to use.
"""
EXAMPLES = """
- name: Look up secret when ansible host is MSI enabled Azure VM
debug:
msg: "the value of this secret is {{
lookup(
'azure.azcollection.azure_keyvault_secret',
'testSecret/version',
vault_url='https://yourvault.vault.azure.net'
)
}}"
- name: Look up secret when ansible host is general VM
vars:
url: 'https://yourvault.vault.azure.net'
secretname: 'testSecret/version'
client_id: '123456789'
secret: 'abcdefg'
tenant: 'uvwxyz'
debug:
msg: "the value of this secret is {{
lookup(
'azure.azcollection.azure_keyvault_secret',
secretname,
vault_url=url,
client_id=client_id,
secret=secret,
tenant_id=tenant
)
}}"
# Example below creates an Azure Virtual Machine with SSH public key from key vault using 'azure_keyvault_secret' lookup plugin.
- name: Create Azure VM
hosts: localhost
connection: local
no_log: True
vars:
resource_group: myResourceGroup
vm_name: testvm
location: eastus
ssh_key: "{{ lookup('azure.azcollection.azure_keyvault_secret','myssh_key') }}"
- name: Create VM
azure_rm_virtualmachine:
resource_group: "{{ resource_group }}"
name: "{{ vm_name }}"
vm_size: Standard_DS1_v2
admin_username: azureuser
ssh_password_enabled: false
ssh_public_keys:
- path: /home/azureuser/.ssh/authorized_keys
key_data: "{{ ssh_key }}"
network_interfaces: "{{ vm_name }}"
image:
offer: UbuntuServer
publisher: Canonical
sku: 16.04-LTS
version: latest
"""
RETURN = """
_raw:
description: secret content string
"""
from ansible.errors import AnsibleError, AnsibleParserError
from ansible.plugins.lookup import LookupBase
from ansible.utils.display import Display
try:
import requests
import logging
import os
from azure.common.credentials import ServicePrincipalCredentials
from azure.keyvault import KeyVaultClient
from msrest.exceptions import AuthenticationError, ClientRequestError
from azure.keyvault.models.key_vault_error import KeyVaultErrorException
except ImportError:
pass
display = Display()
TOKEN_ACQUIRED = False
token_params = {
'api-version': '2018-02-01',
'resource': 'https://vault.azure.net'
}
token_headers = {
'Metadata': 'true'
}
token = None
try:
token_res = requests.get('http://169.254.169.254/metadata/identity/oauth2/token', params=token_params, headers=token_headers, timeout=(3.05, 27))
if token_res.ok:
token = token_res.json().get("access_token")
if token is not None:
TOKEN_ACQUIRED = True
else:
display.v('Successfully called MSI endpoint, but no token was available. Will use service principal if provided.')
else:
display.v("Unable to query MSI endpoint, Error Code %s. Will use service principal if provided" % token_res.status_code)
except Exception:
display.v('Unable to fetch MSI token. Will use service principal if provided.')
TOKEN_ACQUIRED = False
def lookup_secret_non_msi(terms, vault_url, kwargs):
logging.getLogger('msrestazure.azure_active_directory').addHandler(logging.NullHandler())
logging.getLogger('msrest.service_client').addHandler(logging.NullHandler())
client_id = kwargs['client_id'] if kwargs.get('client_id') else os.environ.get('AZURE_CLIENT_ID')
secret = kwargs['secret'] if kwargs.get('secret') else os.environ.get('AZURE_SECRET')
tenant_id = kwargs['tenant_id'] if kwargs.get('tenant_id') else os.environ.get('AZURE_TENANT')
try:
credentials = ServicePrincipalCredentials(
client_id=client_id,
secret=secret,
tenant=tenant_id
)
client = KeyVaultClient(credentials)
except AuthenticationError:
raise AnsibleError('Invalid credentials provided.')
ret = []
for term in terms:
try:
secret_val = client.get_secret(vault_url, term, '').value
ret.append(secret_val)
except ClientRequestError:
raise AnsibleError('Error occurred in request')
except KeyVaultErrorException:
raise AnsibleError('Failed to fetch secret ' + term + '.')
return ret
class LookupModule(LookupBase):
def run(self, terms, variables, **kwargs):
ret = []
vault_url = kwargs.pop('vault_url', None)
if vault_url is None:
raise AnsibleError('Failed to get valid vault url.')
if TOKEN_ACQUIRED:
secret_params = {'api-version': '2016-10-01'}
secret_headers = {'Authorization': 'Bearer ' + token}
for term in terms:
try:
secret_res = requests.get(vault_url + '/secrets/' + term, params=secret_params, headers=secret_headers)
ret.append(secret_res.json()["value"])
except KeyError:
raise AnsibleError('Failed to fetch secret ' + term + '.')
except Exception:
raise AnsibleError('Failed to fetch secret: ' + term + ' via MSI endpoint.')
return ret
else:
return lookup_secret_non_msi(terms, vault_url, kwargs)


@@ -0,0 +1,215 @@
# Copyright (c) 2019 Zim Kalinowski, (@zikalino)
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible_collections.azure.azcollection.plugins.module_utils.azure_rm_common import AzureRMModuleBase
import re
from ansible.module_utils.common.dict_transformations import _camel_to_snake, _snake_to_camel
from ansible.module_utils.six import string_types
class AzureRMModuleBaseExt(AzureRMModuleBase):
def inflate_parameters(self, spec, body, level):
if isinstance(body, list):
for item in body:
self.inflate_parameters(spec, item, level)
return
for name in spec.keys():
# first check if option was passed
param = body.get(name)
if param is None:
if spec[name].get('purgeIfNone', False):
body.pop(name, None)
continue
# check if pattern needs to be used
pattern = spec[name].get('pattern', None)
if pattern:
if pattern == 'camelize':
param = _snake_to_camel(param, True)
elif isinstance(pattern, list):
normalized = None
for p in pattern:
normalized = self.normalize_resource_id(param, p)
body[name] = normalized
if normalized is not None:
break
else:
param = self.normalize_resource_id(param, pattern)
body[name] = param
disposition = spec[name].get('disposition', '*')
if level == 0 and not disposition.startswith('/'):
continue
if disposition == '/':
disposition = '/*'
parts = disposition.split('/')
if parts[0] == '':
# should fail if level is > 0?
parts.pop(0)
target_dict = body
elem = body.pop(name)
while len(parts) > 1:
target_dict = target_dict.setdefault(parts.pop(0), {})
targetName = parts[0] if parts[0] != '*' else name
target_dict[targetName] = elem
if spec[name].get('options'):
self.inflate_parameters(spec[name].get('options'), target_dict[targetName], level + 1)
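# Illustrative example (hypothetical spec, not from the collection): with
#   spec = {'sku_name': {'disposition': '/sku/name'}}
# and body = {'sku_name': 'S1'}, the parameter is relocated so that
# body == {'sku': {'name': 'S1'}} after inflation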
def normalize_resource_id(self, value, pattern):
'''
Return a proper resource ID string.
:param value: A resource name, a resource ID, or a dict containing parts matching the pattern.
:param pattern: Pattern of the resource ID, just like in Azure Swagger.
'''
value_dict = {}
if isinstance(value, string_types):
value_parts = value.split('/')
if len(value_parts) == 1:
value_dict['name'] = value
else:
pattern_parts = pattern.split('/')
if len(value_parts) != len(pattern_parts):
return None
for i in range(len(value_parts)):
if pattern_parts[i].startswith('{'):
value_dict[pattern_parts[i][1:-1]] = value_parts[i]
elif value_parts[i].lower() != pattern_parts[i].lower():
return None
elif isinstance(value, dict):
value_dict = value
else:
return None
if not value_dict.get('subscription_id'):
value_dict['subscription_id'] = self.subscription_id
if not value_dict.get('resource_group'):
value_dict['resource_group'] = self.resource_group
# check if any extra values passed
for k in value_dict:
if not ('{' + k + '}') in pattern:
return None
# format url
return pattern.format(**value_dict)
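# Illustrative example (hypothetical pattern): given
#   pattern = '/subscriptions/{subscription_id}/resourceGroups/{resource_group}'
#             '/providers/Microsoft.Network/virtualNetworks/{name}'
# a bare name like 'myvnet' is expanded into a full resource ID using the
# module's subscription_id and resource_group, while a value that already
# matches the pattern has its parts extracted and re-formatted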
def idempotency_check(self, old_params, new_params):
'''
Return True if something changed. Function will use fields from module_arg_spec to perform dependency checks.
:param old_params: old parameters dictionary, body from Get request.
:param new_params: new parameters dictionary, unpacked module parameters.
'''
modifiers = {}
result = {}
self.create_compare_modifiers(self.module.argument_spec, '', modifiers)
self.results['modifiers'] = modifiers
return self.default_compare(modifiers, new_params, old_params, '', self.results)
def create_compare_modifiers(self, arg_spec, path, result):
for k in arg_spec.keys():
o = arg_spec[k]
updatable = o.get('updatable', True)
comparison = o.get('comparison', 'default')
disposition = o.get('disposition', '*')
if disposition == '/':
disposition = '/*'
p = (path +
('/' if len(path) > 0 else '') +
disposition.replace('*', k) +
('/*' if o['type'] == 'list' else ''))
if comparison != 'default' or not updatable:
result[p] = {'updatable': updatable, 'comparison': comparison}
if o.get('options'):
self.create_compare_modifiers(o.get('options'), p, result)
def default_compare(self, modifiers, new, old, path, result):
'''
Default dictionary comparison.
This function will work well with most of the Azure resources.
It correctly handles "location" comparison.
Value handling:
- if "new" value is None, it will be taken from "old" dictionary if "incremental_update"
is enabled.
List handling:
- if list contains "name" field it will be sorted by "name" before comparison is done.
- if module has "incremental_update" set, items missing in the new list will be copied
from the old list
Warnings:
If a field is marked as non-updatable, an appropriate warning will be printed out and
the "new" structure will be updated to the old value.
:param modifiers: Optional dictionary of modifiers, where key is the path and value is a dict of modifiers
:param new: New version
:param old: Old version
Returns True if no difference between structures has been detected.
Returns False if difference was detected.
'''
if new is None:
return True
elif isinstance(new, dict):
comparison_result = True
if not isinstance(old, dict):
result['compare'].append('changed [' + path + '] old dict is null')
comparison_result = False
else:
for k in set(new.keys()) | set(old.keys()):
new_item = new.get(k, None)
old_item = old.get(k, None)
if new_item is None:
if isinstance(old_item, dict):
new[k] = old_item
result['compare'].append('new item was empty, using old [' + path + '][ ' + k + ' ]')
elif not self.default_compare(modifiers, new_item, old_item, path + '/' + k, result):
comparison_result = False
return comparison_result
elif isinstance(new, list):
comparison_result = True
if not isinstance(old, list) or len(new) != len(old):
result['compare'].append('changed [' + path + '] length is different or old value is null')
comparison_result = False
elif len(old) > 0:
if isinstance(old[0], dict):
key = None
if 'id' in old[0] and 'id' in new[0]:
key = 'id'
elif 'name' in old[0] and 'name' in new[0]:
key = 'name'
else:
key = next(iter(old[0]))
new = sorted(new, key=lambda x: x.get(key, None))
old = sorted(old, key=lambda x: x.get(key, None))
else:
new = sorted(new)
old = sorted(old)
for i in range(len(new)):
if not self.default_compare(modifiers, new[i], old[i], path + '/*', result):
comparison_result = False
return comparison_result
else:
updatable = modifiers.get(path, {}).get('updatable', True)
comparison = modifiers.get(path, {}).get('comparison', 'default')
if comparison == 'ignore':
return True
elif comparison == 'default' or comparison == 'sensitive':
if isinstance(old, string_types) and isinstance(new, string_types):
new = new.lower()
old = old.lower()
elif comparison == 'location':
if isinstance(old, string_types) and isinstance(new, string_types):
new = new.replace(' ', '').lower()
old = old.replace(' ', '').lower()
if str(new) != str(old):
result['compare'].append('changed [' + path + '] ' + str(new) + ' != ' + str(old) + ' - ' + str(comparison))
if updatable:
return False
else:
self.module.warn("property '" + path + "' cannot be updated (" + str(old) + "->" + str(new) + ")")
return True
else:
return True
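# Illustrative behavior of the comparisons above: under the 'location'
# comparison, 'East US' and 'eastus' are treated as equal (spaces stripped,
# case-folded); under the default comparison, strings are compared
# case-insensitively, and a difference on an updatable path appends an entry
# to result['compare'] and returns False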


@@ -0,0 +1,104 @@
# Copyright (c) 2018 Zim Kalinowski, <zikalino@microsoft.com>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
try:
from ansible.module_utils.ansible_release import __version__ as ANSIBLE_VERSION
except Exception:
ANSIBLE_VERSION = 'unknown'
try:
from msrestazure.azure_exceptions import CloudError
from msrestazure.azure_configuration import AzureConfiguration
from msrest.service_client import ServiceClient
from msrest.pipeline import ClientRawResponse
from msrest.polling import LROPoller
from msrestazure.polling.arm_polling import ARMPolling
import uuid
import json
except ImportError:
# This is handled in azure_rm_common
AzureConfiguration = object
ANSIBLE_USER_AGENT = 'Ansible/{0}'.format(ANSIBLE_VERSION)
class GenericRestClientConfiguration(AzureConfiguration):
def __init__(self, credentials, subscription_id, base_url=None):
if credentials is None:
raise ValueError("Parameter 'credentials' must not be None.")
if subscription_id is None:
raise ValueError("Parameter 'subscription_id' must not be None.")
if not base_url:
base_url = 'https://management.azure.com'
super(GenericRestClientConfiguration, self).__init__(base_url)
self.add_user_agent(ANSIBLE_USER_AGENT)
self.credentials = credentials
self.subscription_id = subscription_id
class GenericRestClient(object):
def __init__(self, credentials, subscription_id, base_url=None):
self.config = GenericRestClientConfiguration(credentials, subscription_id, base_url)
self._client = ServiceClient(self.config.credentials, self.config)
self.models = None
def query(self, url, method, query_parameters, header_parameters, body, expected_status_codes, polling_timeout, polling_interval):
# Construct and send request
operation_config = {}
request = None
if header_parameters is None:
header_parameters = {}
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if method == 'GET':
request = self._client.get(url, query_parameters)
elif method == 'PUT':
request = self._client.put(url, query_parameters)
elif method == 'POST':
request = self._client.post(url, query_parameters)
elif method == 'HEAD':
request = self._client.head(url, query_parameters)
elif method == 'PATCH':
request = self._client.patch(url, query_parameters)
elif method == 'DELETE':
request = self._client.delete(url, query_parameters)
elif method == 'MERGE':
request = self._client.merge(url, query_parameters)
response = self._client.send(request, header_parameters, body, **operation_config)
if response.status_code not in expected_status_codes:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
elif response.status_code == 202 and polling_timeout > 0:
def get_long_running_output(response):
return response
poller = LROPoller(self._client,
ClientRawResponse(None, response),
get_long_running_output,
ARMPolling(polling_interval, **operation_config))
response = self.get_poller_result(poller, polling_timeout)
return response
def get_poller_result(self, poller, timeout):
try:
poller.wait(timeout=timeout)
return poller.result()
except Exception as exc:
raise
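# Illustrative usage sketch only (hypothetical URL and api-version), showing a
# GET issued through the generic client defined above:
#
#     client = GenericRestClient(credentials, subscription_id)
#     response = client.query(
#         url='/subscriptions/{0}/resourceGroups'.format(subscription_id),
#         method='GET',
#         query_parameters={'api-version': '2019-10-01'},
#         header_parameters=None,
#         body=None,
#         expected_status_codes=[200],
#         polling_timeout=0,
#         polling_interval=0)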

Some files were not shown because too many files have changed in this diff.