Init: mediaserver

Commit f7c23d4ba9 (parent 848bc9739c), 2023-02-08 12:13:28 +01:00. 31914 changed files with 6175775 additions and 0 deletions.

===================================================
Netapp E-Series SANtricity Collection Release Notes
===================================================
.. contents:: Topics
v1.4.0
======
Minor Changes
-------------
- netapp_eseries.santricity.na_santricity_iscsi_interface - Add support for iSCSI HIC speed.
- netapp_eseries.santricity.nar_santricity_host - Add support for iSCSI HIC speed.
Bugfixes
--------
- netapp_eseries.santricity.na_santricity_mgmt_interface - Add the ability to configure DNS, NTP and SSH separately from management interfaces.
- netapp_eseries.santricity.nar_santricity_host - Fix default MTU value for NVMe RoCE.
- netapp_eseries.santricity.nar_santricity_management - Add tasks to set DNS, NTP and SSH globally separately from management interfaces.
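As a sketch of how the v1.4.0 HIC speed support might be used (the ``speed`` option name, its value, and the connection parameters shown are assumptions; consult the module documentation for your installed version):

.. code-block:: yaml

    # Hypothetical sketch: configure an iSCSI interface with an explicit HIC speed.
    # The `speed` option and its accepted values are assumed, not verified.
    - name: Configure iSCSI interface with an explicit HIC speed
      netapp_eseries.santricity.na_santricity_iscsi_interface:
        ssid: "1"
        api_url: "https://192.168.1.100:8443/devmgr/v2"
        api_username: admin
        api_password: adminpass
        controller: A
        port: 1
        config_method: static
        address: 192.168.2.100
        speed: 25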
v1.3.1
======
Minor Changes
-------------
- Require Ansible 2.10 or later.
- na_santricity_volume - Add size_tolerance option to tolerate small differences between the requested volume size and the size reported by SANtricity System Manager.
- nar_santricity_common - Utilize provided eseries management information to determine the network to search.
Bugfixes
--------
- na_santricity_mgmt_interface - Fix the default required_if state option for na_santricity_mgmt_interface.
- netapp_eseries.santricity.nar_santricity_host - Fix default MTU value for NVMe RoCE.
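The size_tolerance option above might be used as in the following sketch (the tolerance is assumed to be expressed in the same units as ``size_unit``, and the connection parameters are illustrative; verify against the module documentation):

.. code-block:: yaml

    # Hypothetical sketch: tolerate small size drift before reporting a change.
    - name: Ensure volume exists, allowing minor size differences
      netapp_eseries.santricity.na_santricity_volume:
        ssid: "1"
        api_url: "https://192.168.1.100:8443/devmgr/v2"
        api_username: admin
        api_password: adminpass
        name: data_volume
        storage_pool_name: pool_a
        size: 100
        size_unit: gb
        size_tolerance: 10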
v1.3.0
======
Minor Changes
-------------
- na_santricity_global - Add controller_shelf_id argument to set controller shelf identifier.
- na_santricity_volume - Add flag to control whether volume expansion operations are allowed.
- na_santricity_volume - Add volume write cache mirroring option.
- nar_santricity_host - Add volume write cache mirroring options.
Bugfixes
--------
- santricity_host - Ensure a list of volumes is provided, to prevent the netapp_eseries.santricity.santricity_host lookup's "index is string not integer" exception.
v1.2.13
=======
Bugfixes
--------
- Fix availability of client certificate change.
v1.2.12
=======
Bugfixes
--------
- Prevent host and host port names from being changed to lower case.
v1.2.11
=======
Bugfixes
--------
- Fix login banner message option bytes error in na_santricity_global.
v1.2.10
=======
Minor Changes
-------------
- Add login banner message to na_santricity_global module and nar_santricity_management role.
- Add usable drive option for the na_santricity_storagepool module and nar_santricity_host role, which can be used to choose specific drives for storage pools/volumes or to define a drive selection pattern.
Bugfixes
--------
- Fix PEM certificate/key imports in the na_santricity_server_certificate module.
- Fix na_santricity_mgmt_interface IPv4 and IPv6 form validation.
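A minimal sketch of the login banner feature noted above (the ``login_banner_message`` option name and the connection parameters are assumptions; check the module documentation for your installed version):

.. code-block:: yaml

    # Hypothetical sketch: set a storage system login banner.
    - name: Set a login banner message
      netapp_eseries.santricity.na_santricity_global:
        ssid: "1"
        api_url: "https://192.168.1.100:8443/devmgr/v2"
        api_username: admin
        api_password: adminpass
        login_banner_message: "Authorized access only. Activity is monitored."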
v1.2.9
======
Minor Changes
-------------
- Add eseries_system_old_password variable to facilitate changing the storage system's admin password.
- Add remove_unspecified_user_certificates variable to the client certificates module.
Bugfixes
--------
- Fix missing proxy client and server certificate in management role.
- Fix missing proxy validate_certs and change current proxy password variables.
- Fix server certificate module not forwarding certificate imports to the embedded web services.
v1.2.8
======
Bugfixes
--------
- Fix pkcs8 private key passphrase issue.
- Fix storage system admin password change from web services proxy in na_santricity_auth module.
v1.2.7
======
v1.2.6
======
Bugfixes
--------
- Fix Jinja issue with collecting certificate paths in the nar_santricity_management role.
v1.2.5
======
Bugfixes
--------
- Add missing http(s) proxy username and password parameters to the na_santricity_asup module and nar_santricity_management role.
- Add missing storage pool configuration parameter, criteria_drive_interface_type, to nar_santricity_host role.
v1.2.4
======
v1.2.3
======
Minor Changes
-------------
- Added nvme4k as a drive type interface to the na_santricity_storagepool module.
- Added options for critical and warning threshold setting in na_santricity_storagepool module and nar_santricity_host role.
- Fix dynamic disk pool critical and warning threshold settings.
Bugfixes
--------
- Fix drive firmware upgrade issue that prevented updating firmware when the drive was in use.
v1.2.2
======
v1.2.1
======
Release Summary
---------------
Release 1.2.2 removes the resource-provisioned volumes feature from the collection.
Minor Changes
-------------
- Add IPv6 and FQDN support for NTP.
- Add IPv6 support for DNS.
- Add criteria_drive_max_size option to na_santricity_storagepool and nar_santricity_host role.
- Add resource-provisioned volumes option to globals and nar_santricity_management role.
- Remove resource-provisioned volumes setting from the na_santricity_global module and nar_santricity_management role.
v1.2.0
======
Release Summary
---------------
1.2.0 release of ``netapp_eseries.santricity`` collection on 2021-03-01.
Minor Changes
-------------
- na_santricity_discover - Add support for discovering storage systems directly using the devmgr/v2/storage-systems/1/about endpoint, since the old discovery method is being deprecated.
- na_santricity_facts - Add storage system information to facilitate various protocol configuration in the ``netapp_eseries.host`` collection.
- na_santricity_server_certificate - New module to configure the storage system's web server certificate.
- na_santricity_snapshot - New module to configure NetApp E-Series Snapshot consistency groups containing any number of base volumes.
- na_santricity_volume - Add percentage size unit (pct), which allows volumes to be created based on the total storage pool size.
- nar_santricity_host - Add eseries_storage_pool_configuration list options criteria_volume_count, criteria_reserve_free_capacity_pct, and common_volume_host to facilitate creating volumes sized as percentages of the storage pool or volume group.
- nar_santricity_host - Add support for snapshot group creation.
- nar_santricity_host - Improve host mapping information discovery.
- nar_santricity_host - Improve storage system discovery related error messages.
- nar_santricity_management - Add support for server certificate management.
Bugfixes
--------
- nar_santricity_host - Fix README.md examples.
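The percentage-based sizing introduced in this release might look like the following sketch (the connection parameters and values are illustrative assumptions):

.. code-block:: yaml

    # Hypothetical sketch: create a volume sized as a percentage of its pool.
    - name: Create a volume using 10 percent of the storage pool
      netapp_eseries.santricity.na_santricity_volume:
        ssid: "1"
        api_url: "https://192.168.1.100:8443/devmgr/v2"
        api_username: admin
        api_password: adminpass
        name: data_volume
        storage_pool_name: pool_a
        size: 10
        size_unit: pct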
v1.1.0
======
Release Summary
---------------
This release focused on providing volume details through netapp_volumes_by_initiators in the na_santricity_facts module, improving the nar_santricity_common role's storage system API information, and resolving issues.
Minor Changes
-------------
- Add functionality to remove all inventory configuration in the nar_santricity_host role. Set configuration.eseries_remove_all_configuration=True to remove all storage pool/volume configuration, host, hostgroup, and lun mapping configuration.
- Add host_types, host_port_protocols, host_port_information, hostside_io_interface_protocols to netapp_volumes_by_initiators in the na_santricity_facts module.
- Add storage pool information to the volume_by_initiator facts.
- Add storage system not found exception to the common role's build_info task.
- Add volume_metadata option to na_santricity_volume module, add volume_metadata information to the netapp_volumes_by_initiators dictionary in na_santricity_facts module, and update the nar_santricity_host role with the option.
- Improve nar_santricity_common storage system API determination; attempt to discover the storage system using the information provided in the inventory before searching the subnet.
- Increase storage system discovery connection timeouts to 30 seconds to prevent systems from going undiscovered over slow connections.
- Minimize the facts gathered for the host initiators.
- Update InfiniBand iSER determination to account for changes in firmware 11.60.2.
- Use existing Web Services Proxy storage system identifier when one is already created and one is not provided in the inventory.
- Utilize eseries_iscsi_iqn before searching host for iqn in nar_santricity_host role.
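As a sketch of the volume_metadata option mentioned above (the metadata keys shown are purely illustrative assumptions; the dictionary is surfaced later through netapp_volumes_by_initiators in na_santricity_facts):

.. code-block:: yaml

    # Hypothetical sketch: attach metadata to a volume for later fact gathering.
    - name: Create a volume carrying host-side metadata
      netapp_eseries.santricity.na_santricity_volume:
        ssid: "1"
        api_url: "https://192.168.1.100:8443/devmgr/v2"
        api_username: admin
        api_password: adminpass
        name: app_volume
        storage_pool_name: pool_a
        size: 500
        size_unit: gb
        volume_metadata:          # keys below are examples, not a required schema
          format_type: xfs
          mount_directory: /data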
Bugfixes
--------
- Fix check_port_type method for ib iser when ib is the port type.
- Fix examples in the netapp_e_mgmt_interface module.
- Fix issue with changing host port name.
- Fix na_santricity_lun_mapping unmapping issue; previously mapped volumes failed to be unmapped.

# Contributing
Thank you for your interest in contributing to the E-Series SANtricity Collection! 🎉
We appreciate that you want to take the time to contribute! Please follow these steps before submitting your PR.
## Creating a Pull Request
1. Please search [existing issues](https://github.com/netappeseries/santricity/issues) to determine if an issue already exists for what you intend to contribute.
2. If the issue does not exist, [create a new one](https://github.com/netappeseries/santricity/issues/new) that explains the bug or feature request.
* Let us know in the issue that you plan on creating a pull request for it. This helps us to keep track of the pull request and make sure there isn't duplicate effort.
3. Before creating a pull request, write up a brief proposal in the issue describing what your change would be and how it would work so that others can comment.
* It's better to wait for feedback from someone on NetApp's E-Series SANtricity Collection development team before writing code. We don't have an SLA for our feedback, but we will do our best to respond in a timely manner (at a minimum, to give you an idea if you're on the right track and that you should proceed, or not).
4. Sign and submit [NetApp's Corporate Contributor License Agreement (CCLA)](https://netapp.tap.thinksmart.com/prod/Portal/ShowWorkFlow/AnonymousEmbed/3d2f3aa5-9161-4970-997d-e482b0b033fa).
* From the **Project Name** dropdown select `E-Series SANtricity Collection`.
* For the **Project Website** specify `https://github.com/netappeseries/santricity`
5. If you've made it this far, have written the code that solves your issue, and addressed the review comments, then feel free to create your pull request.
Important: **NetApp will NOT look at the PR or any of the code submitted in the PR if the CCLA is not on file with NetApp Legal.**
## E-Series SANtricity Collection Team's Commitment
While we truly appreciate your efforts on pull requests, we **cannot** commit to including your PR in the E-Series SANtricity Collection project. Here are a few reasons why:
* There are many factors involved in integrating new code into this project, including things like:
* support for a wide variety of NetApp backends
* proper adherence to our existing and/or upcoming architecture
* sufficient functional and/or scenario tests across all backends
* etc.
In other words, while your bug fix or feature may be perfect as a standalone patch, we have to ensure that the changes work in all use cases, configurations, backends and across our support matrix.
* The E-Series SANtricity Collection team must plan our resources to integrate your code into our code base and CI platform, and depending on the complexity of your PR, we may or may not have the resources available to make it happen in a timely fashion. We'll do our best.
* Sometimes a PR doesn't fit into our future plans or conflicts with other items on the roadmap. It's possible that a PR you submit doesn't align with our upcoming plans, thus we won't be able to use it. It's not personal.
Thank you for considering contributing to the E-Series SANtricity Collection project!

GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.

View File

@@ -0,0 +1,57 @@
// Copyright 2022 NetApp, Inc. All Rights Reserved.
// Licensed under the BSD-3-Clause.
// Set up build parameters so any branch can be manually rebuilt with different values.
properties([
parameters([
string(name: 'hubProjectVersion', defaultValue: '', description: 'Set this to force a BlackDuck scan and ' +
'manually tag it to a particular BlackDuck version (e.g. 1.0.1).')
])
])
hubProjectName = 'esg-ansible-santricity-collection'
hubProjectVersion = 'master'
if (params.hubProjectVersion != '') {
// Tag the manually selected version if the hubProjectVersion build parameter is set.
hubProjectVersion = params.hubProjectVersion
}
pipeline {
agent any
options {
timestamps()
timeout(time: 3, unit: 'HOURS')
buildDiscarder(logRotator(artifactNumToKeepStr: '15'))
}
stages {
stage("BlackDuck Scan") {
options {
timeout(time: 60, unit: 'MINUTES')
}
steps {
echo "Performing BlackDuck scanning..."
synopsys_detect detectProperties: """
--detect.project.name=${hubProjectName} \
--detect.project.version.name=${hubProjectVersion} \
--detect.cleanup=false \
--detect.project.code.location.unmap=true \
--detect.detector.search.depth=50 \
--detect.code.location.name=${hubProjectName}_${hubProjectVersion}_code \
--detect.bom.aggregate.name=${hubProjectName}_${hubProjectVersion}_bom \
--detect.excluded.directories=blackduck/ \
--detect.output.path=blackduck
"""
}
post {
success {
archiveArtifacts(artifacts: 'blackduck/runs/**')
}
}
}
}
}

View File

@@ -0,0 +1,36 @@
{
"collection_info": {
"namespace": "netapp_eseries",
"name": "santricity",
"version": "1.4.0",
"authors": [
"Joe McCormick (@iamjoemccormick)",
"Nathan Swartz (@ndswartz)"
],
"readme": "README.md",
"tags": [
"netapp",
"eseries",
"santricity"
],
"description": "Latest content available for NetApp E-Series Ansible automation.",
"license": [
"GPL-3.0-only",
"BSD-3-Clause"
],
"license_file": null,
"dependencies": {},
"repository": "https://www.github.com/netapp-eseries/santricity",
"documentation": "https://www.netapp.com/us/media/tr-4574.pdf",
"homepage": "https://www.github.com/netapp-eseries/santricity",
"issues": "https://github.com/netappeseries/santricity/issues"
},
"file_manifest_file": {
"name": "FILES.json",
"ftype": "file",
"chksum_type": "sha256",
"chksum_sha256": "8ca82e60ff032c6438a4b21aeb0d9cda1b9591adedba932cdc73be72361184c9",
"format": 1
},
"format": 1
}

View File

@@ -0,0 +1,7 @@
[galaxy]
server_list = release_galaxy
[galaxy_server.release_galaxy]
url=https://galaxy.ansible.com/
token=260684515156e5658f2ca685ac392c6e40771bad

View File

@@ -0,0 +1,297 @@
plugins:
become: {}
cache: {}
callback: {}
cliconf: {}
connection: {}
httpapi: {}
inventory: {}
lookup:
santricity_host:
description: Collects host information
name: santricity_host
version_added: null
santricity_host_detail:
description: Expands the host information from santricity_host lookup
name: santricity_host_detail
version_added: null
santricity_storage_pool:
description: Storage pool information
name: santricity_storage_pool
version_added: null
module:
na_santricity_alerts:
description: NetApp E-Series manage email notification settings
name: na_santricity_alerts
namespace: ''
version_added: null
na_santricity_alerts_syslog:
description: NetApp E-Series manage syslog servers receiving storage system
alerts.
name: na_santricity_alerts_syslog
namespace: ''
version_added: null
na_santricity_asup:
description: NetApp E-Series manage auto-support settings
name: na_santricity_asup
namespace: ''
version_added: null
na_santricity_auditlog:
description: NetApp E-Series manage audit-log configuration
name: na_santricity_auditlog
namespace: ''
version_added: null
na_santricity_auth:
description: NetApp E-Series set or update the password for a storage array
device or SANtricity Web Services Proxy.
name: na_santricity_auth
namespace: ''
version_added: null
na_santricity_client_certificate:
description: NetApp E-Series manage remote server certificates.
name: na_santricity_client_certificate
namespace: ''
version_added: null
na_santricity_discover:
description: NetApp E-Series discover E-Series storage systems
name: na_santricity_discover
namespace: ''
version_added: null
na_santricity_drive_firmware:
description: NetApp E-Series manage drive firmware
name: na_santricity_drive_firmware
namespace: ''
version_added: null
na_santricity_facts:
description: NetApp E-Series retrieve facts about NetApp E-Series storage arrays
name: na_santricity_facts
namespace: ''
version_added: null
na_santricity_firmware:
description: NetApp E-Series manage firmware.
name: na_santricity_firmware
namespace: ''
version_added: null
na_santricity_global:
description: NetApp E-Series manage global settings configuration
name: na_santricity_global
namespace: ''
version_added: null
na_santricity_host:
description: NetApp E-Series manage eseries hosts
name: na_santricity_host
namespace: ''
version_added: null
na_santricity_hostgroup:
description: NetApp E-Series manage array host groups
name: na_santricity_hostgroup
namespace: ''
version_added: null
na_santricity_ib_iser_interface:
description: NetApp E-Series manage InfiniBand iSER interface configuration
name: na_santricity_ib_iser_interface
namespace: ''
version_added: null
na_santricity_iscsi_interface:
description: NetApp E-Series manage iSCSI interface configuration
name: na_santricity_iscsi_interface
namespace: ''
version_added: null
na_santricity_iscsi_target:
description: NetApp E-Series manage iSCSI target configuration
name: na_santricity_iscsi_target
namespace: ''
version_added: null
na_santricity_ldap:
description: NetApp E-Series manage LDAP integration to use for authentication
name: na_santricity_ldap
namespace: ''
version_added: null
na_santricity_lun_mapping:
description: NetApp E-Series manage lun mappings
name: na_santricity_lun_mapping
namespace: ''
version_added: null
na_santricity_mgmt_interface:
description: NetApp E-Series manage management interface configuration
name: na_santricity_mgmt_interface
namespace: ''
version_added: null
na_santricity_nvme_interface:
description: NetApp E-Series manage NVMe interface configuration
name: na_santricity_nvme_interface
namespace: ''
version_added: null
na_santricity_proxy_drive_firmware_upload:
description: NetApp E-Series manage proxy drive firmware files
name: na_santricity_proxy_drive_firmware_upload
namespace: ''
version_added: null
na_santricity_proxy_firmware_upload:
description: NetApp E-Series manage proxy firmware uploads.
name: na_santricity_proxy_firmware_upload
namespace: ''
version_added: null
na_santricity_proxy_systems:
description: NetApp E-Series manage SANtricity web services proxy storage arrays
name: na_santricity_proxy_systems
namespace: ''
version_added: null
na_santricity_storagepool:
description: NetApp E-Series manage volume groups and disk pools
name: na_santricity_storagepool
namespace: ''
version_added: null
na_santricity_syslog:
description: NetApp E-Series manage syslog settings
name: na_santricity_syslog
namespace: ''
version_added: null
na_santricity_volume:
description: NetApp E-Series manage storage volumes (standard and thin)
name: na_santricity_volume
namespace: ''
version_added: null
netapp_e_alerts:
description: NetApp E-Series manage email notification settings
name: netapp_e_alerts
namespace: ''
version_added: '2.7'
netapp_e_amg:
description: NetApp E-Series create, remove, and update asynchronous mirror
groups
name: netapp_e_amg
namespace: ''
version_added: '2.2'
netapp_e_amg_role:
description: NetApp E-Series update the role of a storage array within an Asynchronous
Mirror Group (AMG).
name: netapp_e_amg_role
namespace: ''
version_added: '2.2'
netapp_e_amg_sync:
description: NetApp E-Series conduct synchronization actions on asynchronous
mirror groups.
name: netapp_e_amg_sync
namespace: ''
version_added: '2.2'
netapp_e_asup:
description: NetApp E-Series manage auto-support settings
name: netapp_e_asup
namespace: ''
version_added: '2.7'
netapp_e_auditlog:
description: NetApp E-Series manage audit-log configuration
name: netapp_e_auditlog
namespace: ''
version_added: '2.7'
netapp_e_auth:
description: NetApp E-Series set or update the password for a storage array.
name: netapp_e_auth
namespace: ''
version_added: '2.2'
netapp_e_drive_firmware:
description: NetApp E-Series manage drive firmware
name: netapp_e_drive_firmware
namespace: ''
version_added: '2.9'
netapp_e_facts:
description: NetApp E-Series retrieve facts about NetApp E-Series storage arrays
name: netapp_e_facts
namespace: ''
version_added: '2.2'
netapp_e_firmware:
description: NetApp E-Series manage firmware.
name: netapp_e_firmware
namespace: ''
version_added: '2.9'
netapp_e_flashcache:
description: NetApp E-Series manage SSD caches
name: netapp_e_flashcache
namespace: ''
version_added: '2.2'
netapp_e_global:
description: NetApp E-Series manage global settings configuration
name: netapp_e_global
namespace: ''
version_added: '2.7'
netapp_e_host:
description: NetApp E-Series manage eseries hosts
name: netapp_e_host
namespace: ''
version_added: '2.2'
netapp_e_hostgroup:
description: NetApp E-Series manage array host groups
name: netapp_e_hostgroup
namespace: ''
version_added: '2.2'
netapp_e_iscsi_interface:
description: NetApp E-Series manage iSCSI interface configuration
name: netapp_e_iscsi_interface
namespace: ''
version_added: '2.7'
netapp_e_iscsi_target:
description: NetApp E-Series manage iSCSI target configuration
name: netapp_e_iscsi_target
namespace: ''
version_added: '2.7'
netapp_e_ldap:
description: NetApp E-Series manage LDAP integration to use for authentication
name: netapp_e_ldap
namespace: ''
version_added: '2.7'
netapp_e_lun_mapping:
description: NetApp E-Series create, delete, or modify lun mappings
name: netapp_e_lun_mapping
namespace: ''
version_added: '2.2'
netapp_e_mgmt_interface:
description: NetApp E-Series management interface configuration
name: netapp_e_mgmt_interface
namespace: ''
version_added: '2.7'
netapp_e_snapshot_group:
description: NetApp E-Series manage snapshot groups
name: netapp_e_snapshot_group
namespace: ''
version_added: '2.2'
netapp_e_snapshot_images:
description: NetApp E-Series create and delete snapshot images
name: netapp_e_snapshot_images
namespace: ''
version_added: '2.2'
netapp_e_snapshot_volume:
description: NetApp E-Series manage snapshot volumes.
name: netapp_e_snapshot_volume
namespace: ''
version_added: '2.2'
netapp_e_storage_system:
description: NetApp E-Series Web Services Proxy manage storage arrays
name: netapp_e_storage_system
namespace: ''
version_added: '2.2'
netapp_e_storagepool:
description: NetApp E-Series manage volume groups and disk pools
name: netapp_e_storagepool
namespace: ''
version_added: '2.2'
netapp_e_syslog:
description: NetApp E-Series manage syslog settings
name: netapp_e_syslog
namespace: ''
version_added: '2.7'
netapp_e_volume:
description: NetApp E-Series manage storage volumes (standard and thin)
name: netapp_e_volume
namespace: ''
version_added: '2.2'
netapp_e_volume_copy:
description: NetApp E-Series create volume copy pairs
name: netapp_e_volume_copy
namespace: ''
version_added: '2.2'
netconf: {}
shell: {}
strategy: {}
vars: {}
version: 1.4.0

View File

@@ -0,0 +1,271 @@
ancestor: null
releases:
1.1.0:
changes:
bugfixes:
- Fix check_port_type method for ib iser when ib is the port type.
- Fix examples in the netapp_e_mgmt_interface module.
- Fix issue with changing host port name.
- Fix na_santricity_lun_mapping unmapping issue; previously mapped volumes failed
to be unmapped.
minor_changes:
- Add functionality to remove all inventory configuration in the nar_santricity_host
role. Set configuration.eseries_remove_all_configuration=True to remove all
storage pool/volume configuration, host, hostgroup, and lun mapping configuration.
- Add host_types, host_port_protocols, host_port_information, hostside_io_interface_protocols
to netapp_volumes_by_initiators in the na_santricity_facts module.
- Add storage pool information to the volume_by_initiator facts.
- Add storage system not found exception to the common role's build_info task.
- Add volume_metadata option to na_santricity_volume module, add volume_metadata
information to the netapp_volumes_by_initiators dictionary in na_santricity_facts
module, and update the nar_santricity_host role with the option.
- Improve nar_santricity_common storage system api determinations; attempts
to discover the storage system using the information provided in the inventory
before attempting to search the subnet.
- Increased the storage system discovery connection timeouts to 30 seconds to
prevent systems from not being discovered over slow connections.
- Minimize the facts gathered for the host initiators.
- Update ib iser determination to account for changes in firmware 11.60.2.
- Use existing Web Services Proxy storage system identifier when one is already
created and one is not provided in the inventory.
- Utilize eseries_iscsi_iqn before searching host for iqn in nar_santricity_host
role.
release_summary: This release focused on providing volume details to through
the netapp_volumes_by_initiators in the na_santricity_facts module, improving
on the nar_santricity_common role storage system API information and resolving
issues.
fragments:
- 1.0.9.yml
- add_io_communication_protocol_info_to_volume_by_initator_facts.yml
- add_storage_pool_info_to_volume_by_initiator_facts.yml
- add_storage_system_not_found_exception.yml
- add_undo_configuration.yml
- add_volume_metadata_option.yml
- fix_change_host_port.yml
- fix_ib_iser_port_type.yml
- fix_netapp_e_mgmt_interface_examples.yml
- fix_volume_unmapping_issue.yml
- improve_storage_system_api_determinations.yml
- increase_discovery_connection_timeout.yml
- minimize_host_initiator_facts_gathered.yml
- update_ib_iser_determination.yml
- use_existing_proxy_ssid_when_unspecified.yml
- utilize_eseries_iscsi_iqn_before_searching_host.yml
release_date: '2020-09-18'
1.2.0:
changes:
bugfixes:
- nar_santricity_host - Fix README.md examples.
minor_changes:
- na_santricity_discover - Add support for discovering storage systems directly
using devmgr/v2/storage-systems/1/about endpoint since its old method of discover
is being deprecated.
- na_santricity_facts - Add storage system information to facilitate ``netapp_eseries.host``
collection various protocol configuration.
- na_santricity_server_certificate - New module to configure storage system's
web server certificate configuration.
- na_santricity_snapshot - New module to configure NetApp E-Series Snapshot
consistency groups any number of base volumes.
- na_santricity_volume - Add percentage size unit (pct) and which allows the
creates volumes based on the total storage pool size.
- nar_santricity_host - Add eseries_storage_pool_configuration list options,
criteria_volume_count, criteria_reserve_free_capacity_pct, and common_volume_host
to facilitate volumes based on percentages of storage pool or volume group.
- nar_santricity_host - Add support for snapshot group creation.
- nar_santricity_host - Improve host mapping information discovery.
- nar_santricity_host - Improve storage system discovery related error messages.
- nar_santricity_management - Add support for server certificate management.
release_summary: 1.2.0 release of ``netapp_eseries.santricity`` collection on
2021-03-01.
fragments:
- 1.2.0.yml
- error-messages.yml
- host-mapping-information.yml
- hostside-facts.yml
- readme-examples.yml
- server-certificate.yml
- snapshots.yml
- storage-system-discovery.yml
- volume-by-percentage.yml
release_date: '2021-03-30'
1.2.1:
changes:
minor_changes:
- Add IPv6 and FQDN support for NTP
- Add IPv6 support for DNS
- Add criteria_drive_max_size option to na_santricity_storagepool and nar_santricity_host
role.
- Add resource-provisioned volumes option to globals and nar_santricity_management
role.
- Remove resource-provisioned volumes setting from na_santicity_global module
and nar_santricity_management role."
release_summary: Release 1.2.2 simply removes resource-provisioned volumes feature
from collection.
fragments:
- 1.2.2.yml
- criteria_drive_max_size.yml
- fix_dns_ntp.yml
- remove_resource_provisioned_volumes.yml
- resource_provisioned_volume.yml
release_date: '2021-04-12'
1.2.10:
changes:
bugfixes:
- Fix PEM certificate/key imports in the na_santricity_server_certificate module.
- Fix na_santricity_mgmt_interface IPv4 and IPv6 form validation.
minor_changes:
- Add login banner message to na_santricity_global module and nar_santricity_management
role.
- Add usable drive option for na_santricity_storagepool module and nar_santricity_host
role which can be used to choose selected drives for storage pool/volumes
or define a pattern drive selection.
fragments:
- add_login_banner_message.yml
- add_usable_drive_storage_pool_option.yml
- fix_mgmt_ip_address_form_validation.yml
- fix_server_pem_certificate_imports.yml
release_date: '2021-05-26'
1.2.11:
changes:
bugfixes:
- Fix login banner message option bytes error in na_santricity_global.
fragments:
- fix_login_banner.yml
release_date: '2021-06-01'
1.2.12:
changes:
bugfixes:
- Fix host and host port names from being changed to lower case.
fragments:
- fix_host_object_naming_case.yml
release_date: '2021-06-07'
1.2.13:
changes:
bugfixes:
- Fix availability of client certificate change.
fragments:
- fix_client_certificate_availability.yml
release_date: '2021-06-11'
1.2.2:
release_date: '2021-04-13'
1.2.3:
changes:
bugfixes:
- Fix drive firmware upgrade issue that prevented updating firware when drive
was in use.
minor_changes:
- Added nvme4k as a drive type interface to the na_santricity_storagepool module.
- Added options for critical and warning threshold setting in na_santricity_storagepool
module and nar_santricity_host role.
- Fix dynamic disk pool critical and warning threshold settings.
fragments:
- add_nvme_drive_interface.yml
- fix_ddp_threshold_setting.yml
- fix_drive_firmware.yml
release_date: '2021-04-14'
1.2.4:
release_date: '2021-04-14'
1.2.5:
changes:
bugfixes:
- Add missing http(s) proxy username and password parameters from na_santricity_asup
module and nar_santricity_management role."
- Add missing storage pool configuration parameter, criteria_drive_interface_type,
to nar_santricity_host role.
fragments:
- criteria_drive_interface_type.yml
- fix_missing_asup_parameters.yml
release_date: '2021-04-19'
1.2.6:
changes:
bugfixes:
- Fix jinja issue with collecting certificates paths in nar_santricity_management
role.
fragments:
- fix_security_certificates.yml
release_date: '2021-04-19'
1.2.7:
fragments:
- proxy_asup_documentation.yml
release_date: '2021-04-19'
1.2.8:
changes:
bugfixes:
- Fix pkcs8 private key passphrase issue.
- Fix storage system admin password change from web services proxy in na_santricity_auth
module.
fragments:
- fix_pkcs8_cert_issue.yml
- fix_proxy_admin_password_change.yml
release_date: '2021-05-11'
1.2.9:
changes:
bugfixes:
- Fix missing proxy client and server certificate in management role.
- Fix missing proxy validate_certs and change current proxy password variables.
- Fix server certificate module not forwarding certificate imports to the embedded
web services.
minor_changes:
- Add eseries_system_old_password variable to facilitate changing the storage
system's admin password.
- Add remove_unspecified_user_certificates variable to the client certificates
module.
fragments:
- add_eseries_system_old_password_variable_to_change_admin.yml
- fix_certificates.yml
release_date: '2021-05-13'
1.3.0:
changes:
bugfixes:
- santricity_host - Ensure a list of volumes are provided to prevent netapp_eseries.santricity.santricity_host
(lookup) index is string not integer exception.
minor_changes:
- na_santricity_global - Add controller_shelf_id argument to set controller
shelf identifier.
- na_santricity_volume - Add flag to control whether volume expansion operations
are allowed.
- na_santricity_volume - Add volume write cache mirroring option.
- nar_santricity_host - Add volume write cache mirroring options.
fragments:
- add_controller_shelf_id_option.yml
- add_flag_to_allow_volume_expansion.yml
- add_volume_write_cache_mirroring_option.yml
- fix_single_volume_host_mapping_determinations.yml
release_date: '2022-04-05'
1.3.1:
changes:
bugfixes:
- na_santricity_mgmt_interface - Fix default required_if state option for na_santricity_mgmt_interface
- netapp_eseries.santricity.nar_santricity_host - Fix default MTU value for
NVMe RoCE.
minor_changes:
- Require Ansible 2.10 or later.
- na_santricity_volume - Add size_tolerance option to handle the difference
in volume size with SANtricity System Manager.
- nar_santricity_common - Utilize provided eseries management information to
determine the network to search.
fragments:
- add_volume_size_tolerance.yml
- fix_nvme_roce_mtu_default.yml
- fix_required_if_state_option.yml
- improve_system_discovery.yml
- require_ansible_2.10_or_later.yml
release_date: '2022-08-15'
1.4.0:
changes:
bugfixes:
- netapp_eseries.santricity.na_santricity_mgmt_interface - Add the ability to
configure DNS, NTP and SSH separately from management interfaces.
- netapp_eseries.santricity.nar_santricity_host - Fix default MTU value for
NVMe RoCE.
- netapp_eseries.santricity.nar_santricity_management - Add tasks to set DNS,
NTP and SSH globally separately from management interfaces.
minor_changes:
- netapp_eseries.santricity.na_santricity_iscsi_interface - Add support of iSCSI
HIC speed.
- netapp_eseries.santricity.nar_santricity_host - Add support of iSCSI HIC speed.
fragments:
- add_iscsi_hic_speed.yml
- fix_global_management_interface_configuration.yml
- fix_nvme_roce_mtu_default.yml
release_date: '2023-01-30'


@@ -0,0 +1,32 @@
changelog_filename_template: ../CHANGELOG.rst
changelog_filename_version_depth: 0
changes_file: changelog.yaml
changes_format: combined
ignore_other_fragment_extensions: true
keep_fragments: false
mention_ancestor: true
new_plugins_after_name: removed_features
notesdir: fragments
prelude_section_name: release_summary
prelude_section_title: Release Summary
sanitize_changelog: true
sections:
- - major_changes
- Major Changes
- - minor_changes
- Minor Changes
- - breaking_changes
- Breaking Changes / Porting Guide
- - deprecated_features
- Deprecated Features
- - removed_features
- Removed Features (previously deprecated)
- - security_fixes
- Security Fixes
- - bugfixes
- Bugfixes
- - known_issues
- Known Issues
title: Netapp E-Series SANtricity Collection
trivial_section_name: trivial
use_fqcn: true


@@ -0,0 +1,2 @@
---
requires_ansible: '>=2.13'


@@ -0,0 +1,57 @@
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Sumit Kumar <sumit4@netapp.com>, chris Archibald <carchi@netapp.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
class ModuleDocFragment(object):
DOCUMENTATION = r'''
options:
- See respective platform section for more details
requirements:
- See respective platform section for more details
notes:
- "Ansible modules are available for the following NetApp Storage Platforms: E-Series, ONTAP, SolidFire"
'''
# Documentation fragment for E-Series
ESERIES = r'''
options:
api_username:
required: true
type: str
description:
- The username to authenticate with the SANtricity Web Services Proxy or Embedded Web Services API.
api_password:
required: true
type: str
description:
- The password to authenticate with the SANtricity Web Services Proxy or Embedded Web Services API.
api_url:
required: true
type: str
description:
- The url to the SANtricity Web Services Proxy or Embedded Web Services API.
Example https://prod-1.wahoo.acme.com/devmgr/v2
validate_certs:
required: false
default: true
description:
- Should https certificates be validated?
type: bool
ssid:
required: false
type: str
default: 1
description:
- The ID of the array to manage. This value must be unique for each array.
notes:
- The E-Series Ansible modules require either an instance of the Web Services Proxy (WSP), to be available to manage
the storage-system, or an E-Series storage-system that supports the Embedded Web Services API.
- Embedded Web Services is currently available on the E2800, E5700, EF570, and newer hardware models.
- M(netapp_e_storage_system) may be utilized for configuring the systems managed by a WSP instance.
'''


@@ -0,0 +1,90 @@
# -*- coding: utf-8 -*-
# (c) 2020, NetApp, Inc
# BSD-3 Clause (see COPYING or https://opensource.org/licenses/BSD-3-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
class ModuleDocFragment(object):
DOCUMENTATION = r"""
options:
- See respective platform section for more details
requirements:
- See respective platform section for more details
notes:
- "Ansible modules are available for the following NetApp Storage Platforms: E-Series"
"""
# Documentation fragment for E-Series
SANTRICITY_PROXY_DOC = r"""
options:
api_username:
required: true
type: str
description:
- The username to authenticate with the SANtricity Web Services Proxy or Embedded Web Services API.
api_password:
required: true
type: str
description:
- The password to authenticate with the SANtricity Web Services Proxy or Embedded Web Services API.
api_url:
required: true
type: str
description:
- The url to the SANtricity Web Services Proxy or Embedded Web Services API.
- Example https://prod-1.wahoo.acme.com:8443/devmgr/v2
validate_certs:
required: false
default: true
description:
- Should https certificates be validated?
type: bool
notes:
- The E-Series Ansible modules require either an instance of the Web Services Proxy (WSP), to be available to manage
the storage-system, or an E-Series storage-system that supports the Embedded Web Services API.
- Embedded Web Services is currently available on the E2800, E5700, EF570, and newer hardware models.
- M(netapp_e_storage_system) may be utilized for configuring the systems managed by a WSP instance.
"""
# Documentation fragment for E-Series
SANTRICITY_DOC = r"""
options:
api_username:
required: true
type: str
description:
- The username to authenticate with the SANtricity Web Services Proxy or Embedded Web Services API.
api_password:
required: true
type: str
description:
- The password to authenticate with the SANtricity Web Services Proxy or Embedded Web Services API.
api_url:
required: true
type: str
description:
- The url to the SANtricity Web Services Proxy or Embedded Web Services API.
- Example https://prod-1.wahoo.acme.com:8443/devmgr/v2
validate_certs:
required: false
default: true
description:
- Should https certificates be validated?
type: bool
ssid:
required: false
type: str
default: 1
description:
- The ID of the array to manage. This value must be unique for each array.
notes:
- The E-Series Ansible modules require either an instance of the Web Services Proxy (WSP), to be available to manage
the storage-system, or an E-Series storage-system that supports the Embedded Web Services API.
- Embedded Web Services is currently available on the E2800, E5700, EF570, and newer hardware models.
- M(netapp_e_storage_system) may be utilized for configuring the systems managed by a WSP instance.
"""


@@ -0,0 +1,85 @@
# (c) 2020, NetApp, Inc
# BSD-3 Clause (see COPYING or https://opensource.org/licenses/BSD-3-Clause)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
lookup: santricity_hosts
author: Nathan Swartz
short_description: Collects host information
description:
- Collects current host, expected host and host group inventory definitions.
options:
inventory:
description:
- E-Series storage array inventory, hostvars[inventory_hostname].
- Run na_santricity_facts prior to calling
required: True
type: complex
volumes:
description:
- Volume information returned from the santricity_volume lookup plugin, which expands the storage pool volume definitions.
"""
from ansible.errors import AnsibleError
from ansible.plugins.lookup import LookupBase
class LookupModule(LookupBase):
def run(self, inventory, volumes, **kwargs):
if isinstance(inventory, list):
inventory = inventory[0]
if not isinstance(volumes, list):
volumes = [volumes]
if "eseries_storage_pool_configuration" not in inventory:
raise AnsibleError("eseries_storage_pool_configuration must be defined. See nar_santricity_host role documentation.")
if (not isinstance(inventory["eseries_storage_pool_configuration"], list) or
len(inventory["eseries_storage_pool_configuration"]) == 0):
return list()
info = {"current_hosts": {}, "expected_hosts": {}, "host_groups": {}}
groups = []
hosts = []
non_inventory_hosts = []
non_inventory_groups = []
for group in inventory["groups"].keys():
groups.append(group)
hosts.extend(inventory["groups"][group])
if "eseries_host_object" in inventory.keys():
non_inventory_hosts = [host["name"] for host in inventory["eseries_host_object"]]
non_inventory_groups = [host["group"] for host in inventory["eseries_host_object"] if "group" in host]
for volume in volumes:
if volume["state"] == "present" and "host" in volume.keys():
if volume["host"] in groups:
# Add all expected group hosts
for expected_host in inventory["groups"][volume["host"]]:
if "host_type" in volume:
info["expected_hosts"].update({expected_host: {"state": "present",
"host_type": volume["host_type"],
"group": volume["host"]}})
else:
info["expected_hosts"].update({expected_host: {"state": "present",
"group": volume["host"]}})
info["host_groups"].update({volume["host"]: inventory["groups"][volume["host"]]})
elif volume["host"] in hosts:
if "host_type" in volume:
info["expected_hosts"].update({volume["host"]: {"state": "present",
"host_type": volume["host_type"],
"group": None}})
else:
info["expected_hosts"].update({volume["host"]: {"state": "present",
"group": None}})
elif volume["host"] not in non_inventory_hosts and volume["host"] not in non_inventory_groups:
raise AnsibleError("Expected host or host group does not exist in your Ansible inventory and is not specified in"
" eseries_host_object variable! [%s]." % volume["host"])
return [info]
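The host-resolution step above can be sketched in isolation. The following simplified, hypothetical helper (names invented for illustration; error handling and host_type defaults omitted) shows how a volume's `host` value is resolved either to every member of an inventory group or to a single ungrouped host:

```python
def build_expected_hosts(groups, volumes):
    """Simplified sketch of the lookup's host-resolution step: a volume's
    "host" may name an inventory group (expanded to every member) or a
    single host. Error handling and host_type defaults are omitted."""
    expected = {}
    for volume in volumes:
        if volume.get("state") != "present" or "host" not in volume:
            continue
        target = volume["host"]
        if target in groups:
            # Target is a group: every member host is expected on the array.
            for member in groups[target]:
                expected[member] = {"state": "present", "group": target}
        else:
            # Target is a single host with no group association.
            expected[target] = {"state": "present", "group": None}
    return expected

groups = {"webservers": ["web1", "web2"]}
volumes = [{"state": "present", "name": "vol1", "host": "webservers"},
           {"state": "present", "name": "vol2", "host": "db1"}]
print(build_expected_hosts(groups, volumes)["web1"])  # {'state': 'present', 'group': 'webservers'}
```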


@@ -0,0 +1,106 @@
# (c) 2020, NetApp, Inc
# BSD-3 Clause (see COPYING or https://opensource.org/licenses/BSD-3-Clause)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
lookup: santricity_hosts_detail
author: Nathan Swartz
short_description: Expands the host information from santricity_host lookup
description:
- Expands the host information from santricity_host lookup to include system and port information
options:
hosts:
description:
- E-Series storage array inventory, hostvars[inventory_hostname].
- Run na_santricity_facts prior to calling
required: True
type: list
hosts_info:
description:
- The registered results from the setup module from each expected_hosts, hosts_info['results'].
- Collected results from the setup module for each expected_hosts from the results of the santricity_host lookup plugin.
required: True
type: list
host_interface_ports:
description:
- List of dictionaries containing "stdout_lines" which is a list of iqn/wwpns for each expected_hosts from the results of
the santricity_host lookup plugin.
- Register the results from the shell module that is looped over each host in expected_hosts. The command issued should result
in a newline delineated list of iqns, nqns, or wwpns.
required: True
type: list
protocol:
description:
- Storage system interface protocol (iscsi, sas, fc, ib_iser, ib_srp, nvme_ib, nvme_fc, or nvme_roce)
required: True
type: str
"""
import re
from ansible.errors import AnsibleError
from ansible.plugins.lookup import LookupBase
class LookupModule(LookupBase):
def run(self, hosts, hosts_info, host_interface_ports, protocol, **kwargs):
if isinstance(hosts, list):
hosts = hosts[0]
if "expected_hosts" not in hosts:
raise AnsibleError("Invalid argument: hosts must contain the output from santricity_host lookup plugin.")
if not isinstance(hosts_info, list):
raise AnsibleError("Invalid argument: hosts_info must contain the results from the setup module for each"
" expected_hosts found in the output of the santricity_host lookup plugin.")
if not isinstance(host_interface_ports, list):
raise AnsibleError("Invalid argument: host_interface_ports must contain list of dictionaries containing 'stdout_lines' key"
" which is a list of iqns, nqns, or wwpns for each expected_hosts from the results of the santricity_host lookup plugin")
if protocol not in ["iscsi", "sas", "fc", "ib_iser", "ib_srp", "nvme_ib", "nvme_fc", "nvme_roce"]:
raise AnsibleError("Invalid argument: protocol must be one of the following: iscsi, sas, fc, ib_iser, ib_srp, nvme_ib, nvme_fc, nvme_roce.")
for host in hosts["expected_hosts"].keys():
sanitized_hostname = re.sub("[.:-]", "_", host)[:20]
# Add host information to expected host
for info in hosts_info:
if info["item"] == host:
# Determine host type
if "host_type" not in hosts["expected_hosts"][host].keys():
if info["ansible_facts"]["ansible_os_family"].lower() == "windows":
hosts["expected_hosts"][host]["host_type"] = "windows"
elif info["ansible_facts"]["ansible_os_family"].lower() in ["redhat", "debian", "suse"]:
hosts["expected_hosts"][host]["host_type"] = "linux dm-mp"
# Update hosts object
hosts["expected_hosts"][host].update({"sanitized_hostname": sanitized_hostname, "ports": []})
# Add SAS ports
for interface in host_interface_ports:
if interface["item"] == host and "stdout_lines" in interface.keys():
if protocol == "sas":
for index, address in enumerate([base[:-1] + str(index) for base in interface["stdout_lines"] for index in range(8)]):
label = "%s_%s" % (sanitized_hostname, index)
hosts["expected_hosts"][host]["ports"].append({"type": "sas", "label": label, "port": address})
elif protocol == "ib_iser" or protocol == "ib_srp":
for index, address in enumerate(interface["stdout_lines"]):
label = "%s_%s" % (sanitized_hostname, index)
hosts["expected_hosts"][host]["ports"].append({"type": "ib", "label": label, "port": address})
elif protocol in ("nvme_ib", "nvme_fc", "nvme_roce"):
for index, address in enumerate(interface["stdout_lines"]):
label = "%s_%s" % (sanitized_hostname, index)
hosts["expected_hosts"][host]["ports"].append({"type": "nvmeof", "label": label, "port": address})
else:
for index, address in enumerate(interface["stdout_lines"]):
label = "%s_%s" % (sanitized_hostname, index)
hosts["expected_hosts"][host]["ports"].append({"type": protocol, "label": label, "port": address})
return [hosts]
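The SAS branch above turns each base address into eight phy addresses by replacing the final character of the base with the phy index (0-7). A standalone sketch of that expansion (the sample WWPN is made up for illustration):

```python
def expand_sas_phys(base_addresses):
    """Expand each SAS base address into its eight phy addresses (0-7),
    mirroring the list comprehension used for protocol == "sas" above."""
    return [base[:-1] + str(index) for base in base_addresses for index in range(8)]

# Hypothetical SAS base WWPN; real values come from the host's stdout_lines.
ports = expand_sas_phys(["5001234567890ab0"])
print(ports[0], ports[7])  # 5001234567890ab0 5001234567890ab7
```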


@@ -0,0 +1,143 @@
# (c) 2020, NetApp, Inc
# BSD-3 Clause (see COPYING or https://opensource.org/licenses/BSD-3-Clause)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.plugins.lookup import LookupBase
from ansible.errors import AnsibleError
class LookupModule(LookupBase):
def run(self, array_facts, volumes, **kwargs):
if isinstance(array_facts, list):
array_facts = array_facts[0]
if isinstance(volumes, dict): # This means that there is only one volume and volumes was stripped of its list
volumes = [volumes]
if "storage_array_facts" not in array_facts.keys():
# Don't throw exceptions unless you want run to terminate!!!
# raise AnsibleError("Storage array information not available. Collect facts using na_santricity_facts module.")
return list()
# Remove any absent volumes
volumes = [vol for vol in volumes if "state" not in vol or vol["state"] == "present"]
self.array_facts = array_facts["storage_array_facts"]
self.luns_by_target = self.array_facts["netapp_luns_by_target"]
self.access_volume_lun = self.array_facts["netapp_default_hostgroup_access_volume_lun"]
# Search for volumes that have a specified host or host group initiator
mapping_info = list()
for volume in volumes:
if "host" in volume.keys():
# host initiator is already mapped on the storage system
if volume["host"] in self.luns_by_target:
used_luns = [lun for name, lun in self.luns_by_target[volume["host"]]]
for host_group in self.array_facts["netapp_host_groups"]:
if volume["host"] == host_group["name"]: # target is an existing host group
for host in host_group["hosts"]:
used_luns.extend([lun for name, lun in self.luns_by_target[host]])
break
elif volume["host"] in host_group["hosts"]: # target is an existing host in the host group.
used_luns.extend([lun for name, lun in self.luns_by_target[host_group["name"]]])
break
for name, lun in self.luns_by_target[volume["host"]]:
# Check whether volume is mapped to the expected host
if name == volume["name"]:
# Check whether lun option differs from existing lun
if "lun" in volume and volume["lun"] != lun:
self.change_volume_mapping_lun(volume["name"], volume["host"], volume["lun"])
lun = volume["lun"]
if lun in used_luns:
raise AnsibleError("Volume [%s] cannot be mapped to host or host group [%s] using lun number %s!"
% (name, volume["host"], lun))
mapping_info.append({"volume": volume["name"], "target": volume["host"], "lun": lun})
break
# Volume has not been mapped to host initiator
else:
# Check whether lun option has been used
if "lun" in volume:
if volume["lun"] in used_luns:
for target in self.array_facts["netapp_luns_by_target"].keys():
for mapped_volume, mapped_lun in [entry for entry in self.array_facts["netapp_luns_by_target"][target] if entry]:
if volume["lun"] == mapped_lun:
if volume["name"] != mapped_volume:
raise AnsibleError("Volume [%s] cannot be mapped to host or host group [%s] using lun number %s!"
% (volume["name"], volume["host"], volume["lun"]))
else: # volume is being remapped with the same lun number
self.remove_volume_mapping(mapped_volume, target)
lun = volume["lun"]
else:
lun = self.next_available_lun(used_luns)
mapping_info.append({"volume": volume["name"], "target": volume["host"], "lun": lun})
self.add_volume_mapping(volume["name"], volume["host"], lun)
else:
raise AnsibleError("The host or host group [%s] is not defined!" % volume["host"])
else:
mapping_info.append({"volume": volume["name"]})
return mapping_info
def next_available_lun(self, used_luns):
"""Find next available lun numbers."""
if self.access_volume_lun is not None:
used_luns.append(self.access_volume_lun)
lun = 1
while lun in used_luns:
lun += 1
return lun
def add_volume_mapping(self, name, host, lun):
"""Add volume mapping to record table (luns_by_target)."""
# Find associated group and the group's hosts
for host_group in self.array_facts["netapp_host_groups"]:
if host == host_group["name"]:
# add to group
self.luns_by_target[host].append([name, lun])
# add to hosts
for hostgroup_host in host_group["hosts"]:
self.luns_by_target[hostgroup_host].append([name, lun])
break
else:
self.luns_by_target[host].append([name, lun])
def remove_volume_mapping(self, name, host):
"""Remove volume mapping from record table (luns_by_target)."""
# Find associated group and the group's hosts
for host_group in self.array_facts["netapp_host_groups"]:
if host == host_group["name"]:
# Remove from the group itself
self.luns_by_target[host_group["name"]] = [entry for entry in self.luns_by_target[host_group["name"]] if entry[0] != name]
# Remove from each host in the group
for hostgroup_host in host_group["hosts"]:
self.luns_by_target[hostgroup_host] = [entry for entry in self.luns_by_target[hostgroup_host] if entry[0] != name]
break
else:
self.luns_by_target[host] = [entry for entry in self.luns_by_target[host] if entry[0] != name]
def change_volume_mapping_lun(self, name, host, lun):
"""Change the lun number of an existing volume mapping in the record table (luns_by_target)."""
self.remove_volume_mapping(name, host)
self.add_volume_mapping(name, host, lun)
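The LUN selection performed by `next_available_lun` can be illustrated standalone. This copy takes the access volume's LUN as a parameter instead of reading instance state, and picks the lowest unused number starting at 1:

```python
def next_available_lun(used_luns, access_volume_lun=None):
    """Return the lowest LUN number >= 1 that is not already in use.
    Standalone sketch of LookupModule.next_available_lun(); the access
    volume's LUN (when present) is treated as used."""
    used = list(used_luns)
    if access_volume_lun is not None:
        used.append(access_volume_lun)
    lun = 1
    while lun in used:
        lun += 1
    return lun

print(next_available_lun([1, 2, 4]))                       # 3
print(next_available_lun([1, 2, 3], access_volume_lun=4))  # 5
```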


@@ -0,0 +1,80 @@
# (c) 2020, NetApp, Inc
# BSD-3 Clause (see COPYING or https://opensource.org/licenses/BSD-3-Clause)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
lookup: santricity_sp_config
author: Nathan Swartz
short_description: Storage pool information
description:
- Retrieves storage pool information from the inventory
"""
import re
from ansible.plugins.lookup import LookupBase
from ansible.errors import AnsibleError
from itertools import product
class LookupModule(LookupBase):
def run(self, inventory, state, **kwargs):
if isinstance(inventory, list):
inventory = inventory[0]
if ("eseries_storage_pool_configuration" not in inventory or not isinstance(inventory["eseries_storage_pool_configuration"], list) or
len(inventory["eseries_storage_pool_configuration"]) == 0):
return list()
sp_list = list()
for sp_info in inventory["eseries_storage_pool_configuration"]:
if not isinstance(sp_info, dict) or "name" not in sp_info:
raise AnsibleError("eseries_storage_pool_configuration must contain a list of dictionaries containing the necessary information.")
for sp in patternize(sp_info["name"], inventory):
if (("eseries_remove_all_configuration_state" in inventory and inventory["eseries_remove_all_configuration_state"] == "absent") or
("state" in sp_info and sp_info["state"] == "absent") or
("state" not in sp_info and "eseries_storage_pool_state" in inventory and inventory["eseries_storage_pool_state"] == "absent")):
sp_options = {"state": "absent"}
else:
sp_options = {"state": "present"}
for option in sp_info.keys():
sp_options.update({option: sp_info[option]})
sp_options.update({"name": sp})
if sp_options["state"] == state:
sp_list.append(sp_options)
return sp_list
def patternize(pattern, inventory, storage_pool=None):
"""Generate list of strings determined by a pattern"""
if storage_pool:
pattern = pattern.replace("[pool]", storage_pool)
if inventory:
inventory_tokens = re.findall(r"\[[a-zA-Z0-9_]*\]", pattern)
for token in inventory_tokens:
pattern = pattern.replace(token, str(inventory[token[1:-1]]))
tokens = re.findall(r"\[[0-9]-[0-9]\]|\[[a-z]-[a-z]\]|\[[A-Z]-[A-Z]\]", pattern)
segments = "%s".join(re.split(r"\[[0-9]-[0-9]\]|\[[a-z]-[a-z]\]|\[[A-Z]-[A-Z]\]", pattern))
if len(tokens) == 0:
return [pattern]
combinations = []
for token in tokens:
start, stop = token[1:-1].split("-")
try:
start = int(start)
stop = int(stop)
combinations.append([str(number) for number in range(start, stop + 1)])
except ValueError:
combinations.append([chr(number) for number in range(ord(start), ord(stop) + 1)])
return [segments % subset for subset in list(product(*combinations))]
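The bracket-range expansion performed by `patternize` can be exercised on its own. This standalone copy of the range-expansion core (renamed `expand_pattern` here, and omitting the `[pool]` and inventory-token substitution) shows how numeric and alphabetic ranges multiply out:

```python
import re
from itertools import product

def expand_pattern(pattern):
    """Expand bracketed ranges such as [0-2] or [a-c] into every combination.
    Standalone copy of the range-expansion core of patternize()."""
    token_re = r"\[[0-9]-[0-9]\]|\[[a-z]-[a-z]\]|\[[A-Z]-[A-Z]\]"
    tokens = re.findall(token_re, pattern)
    segments = "%s".join(re.split(token_re, pattern))
    if not tokens:
        return [pattern]
    combinations = []
    for token in tokens:
        start, stop = token[1:-1].split("-")
        try:
            # Numeric range, e.g. [0-2] -> "0", "1", "2"
            combinations.append([str(n) for n in range(int(start), int(stop) + 1)])
        except ValueError:
            # Alphabetic range, e.g. [a-c] -> "a", "b", "c"
            combinations.append([chr(n) for n in range(ord(start), ord(stop) + 1)])
    return [segments % subset for subset in product(*combinations)]

print(expand_pattern("pool_[0-2]"))        # ['pool_0', 'pool_1', 'pool_2']
print(expand_pattern("sp[a-b]_vol[1-2]"))  # ['spa_vol1', 'spa_vol2', 'spb_vol1', 'spb_vol2']
```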


@@ -0,0 +1,128 @@
# (c) 2020, NetApp, Inc
# BSD-3 Clause (see COPYING or https://opensource.org/licenses/BSD-3-Clause)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import re
from ansible.plugins.lookup import LookupBase
from ansible.errors import AnsibleError
from itertools import product
class LookupModule(LookupBase):
def run(self, inventory, **kwargs):
if isinstance(inventory, list):
inventory = inventory[0]
if ("eseries_storage_pool_configuration" not in inventory.keys() or not isinstance(inventory["eseries_storage_pool_configuration"], list) or
len(inventory["eseries_storage_pool_configuration"]) == 0):
return list()
vol_list = list()
for sp_info in inventory["eseries_storage_pool_configuration"]:
if "name" not in sp_info.keys():
continue
if "volumes" in sp_info.keys() and ("criteria_volume_count" in sp_info.keys() or "criteria_reserve_free_capacity_pct" in sp_info.keys()):
raise AnsibleError("Incompatible parameters: You cannot specify both volumes with either criteria_volume_count or "
"criteria_reserve_free_capacity for any given eseries_storage_pool_configuration entry.")
if ("common_volume_configuration" in sp_info.keys() and isinstance(sp_info["common_volume_configuration"], dict) and
"size" in sp_info["common_volume_configuration"].keys() and "criteria_reserve_free_capacity_pct" in sp_info.keys()):
raise AnsibleError("Incompatible parameters: You cannot specify both size in common_volume_configuration with "
"criteria_reserve_free_capacity for any given eseries_storage_pool_configuration entry.")
if "volumes" not in sp_info.keys():
if "criteria_volume_count" in sp_info.keys():
if "common_volume_configuration" not in sp_info:
sp_info.update({"common_volume_configuration": {}})
reserve_free_capacity_pct = sp_info["criteria_reserve_free_capacity_pct"] if "criteria_reserve_free_capacity_pct" in sp_info.keys() else 0.0
volume_size = (100.0 - reserve_free_capacity_pct) / sp_info["criteria_volume_count"]
count_digits = len(str(sp_info["criteria_volume_count"]))
if "size" not in sp_info["common_volume_configuration"].keys():
sp_info["common_volume_configuration"].update({"size": volume_size, "size_unit": "pct"})
if "host" not in sp_info["common_volume_configuration"].keys() and "common_volume_host" in sp_info.keys():
sp_info["common_volume_configuration"].update({"host": sp_info["common_volume_host"]})
if (("eseries_remove_all_configuration_state" in inventory and inventory["eseries_remove_all_configuration_state"] == "absent") or
("state" in sp_info and sp_info["state"] == "absent") or
("state" not in sp_info and "eseries_volume_state" in inventory and inventory["eseries_volume_state"] == "absent")):
sp_info["common_volume_configuration"].update({"state": "absent"})
else:
sp_info["common_volume_configuration"].update({"state": "present"})
for count in range(sp_info["criteria_volume_count"]):
if "volumes" not in sp_info.keys():
sp_info.update({"volumes": []})
sp_info["volumes"].append({"name": "[pool]_%0*d" % (count_digits, count)})
else:
continue
elif not isinstance(sp_info["volumes"], list):
raise AnsibleError("Volumes must be a list")
for sp in patternize(sp_info["name"], inventory):
for vol_info in sp_info["volumes"]:
if not isinstance(vol_info, dict):
raise AnsibleError("Volume in the storage pool, %s, must be a dictionary." % sp_info["name"])
for vol in patternize(vol_info["name"], inventory, storage_pool=sp):
vol_options = dict()
# Add common_volume_configuration information
combined_volume_metadata = {}
if "common_volume_configuration" in sp_info:
for option, value in sp_info["common_volume_configuration"].items():
vol_options.update({option: value})
if "volume_metadata" in sp_info["common_volume_configuration"].keys():
combined_volume_metadata.update(sp_info["common_volume_configuration"]["volume_metadata"])
# Add/update volume specific information
for option, value in vol_info.items():
vol_options.update({option: value})
if "volume_metadata" in vol_info.keys():
combined_volume_metadata.update(vol_info["volume_metadata"])
vol_options.update({"volume_metadata": combined_volume_metadata})
if (("eseries_remove_all_configuration_state" in inventory and inventory["eseries_remove_all_configuration_state"] == "absent") or
("state" in sp_info and sp_info["state"] == "absent") or
("state" not in sp_info and "eseries_volume_state" in inventory and inventory["eseries_volume_state"] == "absent")):
vol_options.update({"state": "absent"})
else:
vol_options.update({"state": "present"})
vol_options.update({"name": vol, "storage_pool_name": sp})
vol_list.append(vol_options)
return vol_list
def patternize(pattern, inventory, storage_pool=None):
"""Generate list of strings determined by a pattern"""
if storage_pool:
pattern = pattern.replace("[pool]", storage_pool)
if inventory:
inventory_tokens = re.findall(r"\[[a-zA-Z0-9_]*\]", pattern)
for token in inventory_tokens:
pattern = pattern.replace(token, str(inventory[token[1:-1]]))
tokens = re.findall(r"\[[0-9]-[0-9]\]|\[[a-z]-[a-z]\]|\[[A-Z]-[A-Z]\]", pattern)
segments = "%s".join(re.split(r"\[[0-9]-[0-9]\]|\[[a-z]-[a-z]\]|\[[A-Z]-[A-Z]\]", pattern))
if len(tokens) == 0:
return [pattern]
combinations = []
for token in tokens:
start, stop = token[1:-1].split("-")
try:
start = int(start)
stop = int(stop)
combinations.append([str(number) for number in range(start, stop + 1)])
except ValueError:
combinations.append([chr(number) for number in range(ord(start), ord(stop) + 1)])
return [segments % subset for subset in list(product(*combinations))]


@@ -0,0 +1,746 @@
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright (c) 2017, Sumit Kumar <sumit4@netapp.com>
# Copyright (c) 2017, Michael Price <michael.price@netapp.com>
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import json
import os
import random
import mimetypes
from pprint import pformat
from ansible.module_utils import six
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils.six.moves.urllib.error import HTTPError, URLError
from ansible.module_utils.urls import open_url
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils._text import to_native
try:
from ansible.module_utils.ansible_release import __version__ as ansible_version
except ImportError:
ansible_version = 'unknown'
try:
from netapp_lib.api.zapi import zapi
HAS_NETAPP_LIB = True
except ImportError:
HAS_NETAPP_LIB = False
try:
import requests
HAS_REQUESTS = True
except ImportError:
HAS_REQUESTS = False
import ssl
try:
from urlparse import urlparse, urlunparse
except ImportError:
from urllib.parse import urlparse, urlunparse
HAS_SF_SDK = False
SF_BYTE_MAP = dict(
# Management GUI displays 1024 ** 3 as 1.1 GB, thus use 1000.
bytes=1,
b=1,
kb=1000,
mb=1000 ** 2,
gb=1000 ** 3,
tb=1000 ** 4,
pb=1000 ** 5,
eb=1000 ** 6,
zb=1000 ** 7,
yb=1000 ** 8
)
POW2_BYTE_MAP = dict(
# Here, 1 kb = 1024
bytes=1,
b=1,
kb=1024,
mb=1024 ** 2,
gb=1024 ** 3,
tb=1024 ** 4,
pb=1024 ** 5,
eb=1024 ** 6,
zb=1024 ** 7,
yb=1024 ** 8
)
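SF_BYTE_MAP and POW2_BYTE_MAP encode decimal (SI) and binary (IEC) unit scaling respectively. A minimal, self-contained sketch (the `to_bytes` helper and the trimmed maps below are illustrative, not part of the module) of how a size/unit pair resolves to a byte count with each map:

```python
# Illustrative only: trimmed copies of the two maps defined above.
SF_BYTE_MAP = dict(bytes=1, b=1, kb=1000, mb=1000 ** 2, gb=1000 ** 3)
POW2_BYTE_MAP = dict(bytes=1, b=1, kb=1024, mb=1024 ** 2, gb=1024 ** 3)


def to_bytes(size, unit, byte_map):
    """Convert a (size, unit) pair to a raw byte count using the given map."""
    return size * byte_map[unit.strip().lower()]


print(to_bytes(4, "gb", SF_BYTE_MAP))    # 4000000000
print(to_bytes(4, "gb", POW2_BYTE_MAP))  # 4294967296
```

The same size string therefore yields different byte counts depending on which map a module selects, which is why both are kept.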
try:
from solidfire.factory import ElementFactory
from solidfire.custom.models import TimeIntervalFrequency
from solidfire.models import Schedule, ScheduleInfo
HAS_SF_SDK = True
except Exception:
HAS_SF_SDK = False
def has_netapp_lib():
return HAS_NETAPP_LIB
def has_sf_sdk():
return HAS_SF_SDK
def na_ontap_host_argument_spec():
return dict(
hostname=dict(required=True, type='str'),
username=dict(required=True, type='str', aliases=['user']),
password=dict(required=True, type='str', aliases=['pass'], no_log=True),
https=dict(required=False, type='bool', default=False),
validate_certs=dict(required=False, type='bool', default=True),
http_port=dict(required=False, type='int'),
ontapi=dict(required=False, type='int'),
use_rest=dict(required=False, type='str', default='Auto', choices=['Never', 'Always', 'Auto'])
)
def ontap_sf_host_argument_spec():
return dict(
hostname=dict(required=True, type='str'),
username=dict(required=True, type='str', aliases=['user']),
password=dict(required=True, type='str', aliases=['pass'], no_log=True)
)
def aws_cvs_host_argument_spec():
return dict(
api_url=dict(required=True, type='str'),
validate_certs=dict(required=False, type='bool', default=True),
api_key=dict(required=True, type='str'),
secret_key=dict(required=True, type='str')
)
def create_sf_connection(module, port=None):
hostname = module.params['hostname']
username = module.params['username']
password = module.params['password']
if HAS_SF_SDK and hostname and username and password:
try:
return_val = ElementFactory.create(hostname, username, password, port=port)
return return_val
        except Exception as error:
            raise Exception("Unable to create SF connection. Error [%s]" % to_native(error))
else:
module.fail_json(msg="the python SolidFire SDK module is required")
def setup_na_ontap_zapi(module, vserver=None):
hostname = module.params['hostname']
username = module.params['username']
password = module.params['password']
https = module.params['https']
validate_certs = module.params['validate_certs']
port = module.params['http_port']
version = module.params['ontapi']
if HAS_NETAPP_LIB:
# set up zapi
server = zapi.NaServer(hostname)
server.set_username(username)
server.set_password(password)
if vserver:
server.set_vserver(vserver)
if version:
minor = version
else:
minor = 110
server.set_api_version(major=1, minor=minor)
# default is HTTP
if https:
if port is None:
port = 443
transport_type = 'HTTPS'
# HACK to bypass certificate verification
if validate_certs is False:
if not os.environ.get('PYTHONHTTPSVERIFY', '') and getattr(ssl, '_create_unverified_context', None):
ssl._create_default_https_context = ssl._create_unverified_context
else:
if port is None:
port = 80
transport_type = 'HTTP'
server.set_transport_type(transport_type)
server.set_port(port)
server.set_server_type('FILER')
return server
else:
module.fail_json(msg="the python NetApp-Lib module is required")
def setup_ontap_zapi(module, vserver=None):
hostname = module.params['hostname']
username = module.params['username']
password = module.params['password']
if HAS_NETAPP_LIB:
# set up zapi
server = zapi.NaServer(hostname)
server.set_username(username)
server.set_password(password)
if vserver:
server.set_vserver(vserver)
        # TODO: Replace hard-coded values with configurable parameters.
server.set_api_version(major=1, minor=110)
server.set_port(80)
server.set_server_type('FILER')
server.set_transport_type('HTTP')
return server
else:
module.fail_json(msg="the python NetApp-Lib module is required")
def eseries_host_argument_spec():
"""Retrieve a base argument specification common to all NetApp E-Series modules"""
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
api_username=dict(type='str', required=True),
api_password=dict(type='str', required=True, no_log=True),
api_url=dict(type='str', required=True),
ssid=dict(type='str', required=False, default='1'),
validate_certs=dict(type='bool', required=False, default=True)
))
return argument_spec
class NetAppESeriesModule(object):
"""Base class for all NetApp E-Series modules.
Provides a set of common methods for NetApp E-Series modules, including version checking, mode (proxy, embedded)
verification, http requests, secure http redirection for embedded web services, and logging setup.
Be sure to add the following lines in the module's documentation section:
extends_documentation_fragment:
- netapp.eseries
:param dict(dict) ansible_options: dictionary of ansible option definitions
:param str web_services_version: minimally required web services rest api version (default value: "02.00.0000.0000")
:param bool supports_check_mode: whether the module will support the check_mode capabilities (default=False)
:param list(list) mutually_exclusive: list containing list(s) of mutually exclusive options (optional)
:param list(list) required_if: list containing list(s) containing the option, the option value, and then
a list of required options. (optional)
:param list(list) required_one_of: list containing list(s) of options for which at least one is required. (optional)
:param list(list) required_together: list containing list(s) of options that are required together. (optional)
:param bool log_requests: controls whether to log each request (default: True)
"""
DEFAULT_TIMEOUT = 60
DEFAULT_SECURE_PORT = "8443"
DEFAULT_REST_API_PATH = "devmgr/v2/"
DEFAULT_REST_API_ABOUT_PATH = "devmgr/utils/about"
DEFAULT_HEADERS = {"Content-Type": "application/json", "Accept": "application/json",
"netapp-client-type": "Ansible-%s" % ansible_version}
HTTP_AGENT = "Ansible / %s" % ansible_version
SIZE_UNIT_MAP = dict(bytes=1, b=1, kb=1024, mb=1024**2, gb=1024**3, tb=1024**4,
pb=1024**5, eb=1024**6, zb=1024**7, yb=1024**8)
def __init__(self, ansible_options, web_services_version=None, supports_check_mode=False,
mutually_exclusive=None, required_if=None, required_one_of=None, required_together=None,
log_requests=True):
argument_spec = eseries_host_argument_spec()
argument_spec.update(ansible_options)
self.module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=supports_check_mode,
mutually_exclusive=mutually_exclusive, required_if=required_if,
required_one_of=required_one_of, required_together=required_together)
args = self.module.params
self.web_services_version = web_services_version if web_services_version else "02.00.0000.0000"
self.ssid = args["ssid"]
self.url = args["api_url"]
self.log_requests = log_requests
self.creds = dict(url_username=args["api_username"],
url_password=args["api_password"],
validate_certs=args["validate_certs"])
if not self.url.endswith("/"):
self.url += "/"
self.is_embedded_mode = None
self.is_web_services_valid_cache = None
def _check_web_services_version(self):
"""Verify proxy or embedded web services meets minimum version required for module.
        The minimum required web services version is evaluated against the version supplied through the web services rest
        api. An AnsibleFailJson exception will be raised when the minimum is not met.
        This helper function will update the supplied api url if secure http is not used for embedded web services.
:raise AnsibleFailJson: raised when the contacted api service does not meet the minimum required version.
"""
if not self.is_web_services_valid_cache:
url_parts = urlparse(self.url)
if not url_parts.scheme or not url_parts.netloc:
self.module.fail_json(msg="Failed to provide valid API URL. Example: https://192.168.1.100:8443/devmgr/v2. URL [%s]." % self.url)
if url_parts.scheme not in ["http", "https"]:
self.module.fail_json(msg="Protocol must be http or https. URL [%s]." % self.url)
self.url = "%s://%s/" % (url_parts.scheme, url_parts.netloc)
about_url = self.url + self.DEFAULT_REST_API_ABOUT_PATH
rc, data = request(about_url, timeout=self.DEFAULT_TIMEOUT, headers=self.DEFAULT_HEADERS, ignore_errors=True, **self.creds)
if rc != 200:
self.module.warn("Failed to retrieve web services about information! Retrying with secure ports. Array Id [%s]." % self.ssid)
self.url = "https://%s:8443/" % url_parts.netloc.split(":")[0]
about_url = self.url + self.DEFAULT_REST_API_ABOUT_PATH
try:
rc, data = request(about_url, timeout=self.DEFAULT_TIMEOUT, headers=self.DEFAULT_HEADERS, **self.creds)
except Exception as error:
self.module.fail_json(msg="Failed to retrieve the webservices about information! Array Id [%s]. Error [%s]."
% (self.ssid, to_native(error)))
            major, minor, other, revision = [int(part) for part in data["version"].split(".")]
            minimum_major, minimum_minor, other, minimum_revision = [int(part) for part in self.web_services_version.split(".")]
            if not (major > minimum_major or
                    (major == minimum_major and minor > minimum_minor) or
                    (major == minimum_major and minor == minimum_minor and revision >= minimum_revision)):
self.module.fail_json(msg="Web services version does not meet minimum version required. Current version: [%s]."
" Version required: [%s]." % (data["version"], self.web_services_version))
self.module.log("Web services rest api version met the minimum required version.")
self.is_web_services_valid_cache = True
def is_embedded(self):
"""Determine whether web services server is the embedded web services.
        If the web services about endpoint fails with a URLError, the request will be attempted again using
secure http.
:raise AnsibleFailJson: raised when web services about endpoint failed to be contacted.
:return bool: whether contacted web services is running from storage array (embedded) or from a proxy.
"""
self._check_web_services_version()
if self.is_embedded_mode is None:
about_url = self.url + self.DEFAULT_REST_API_ABOUT_PATH
try:
rc, data = request(about_url, timeout=self.DEFAULT_TIMEOUT, headers=self.DEFAULT_HEADERS, **self.creds)
self.is_embedded_mode = not data["runningAsProxy"]
except Exception as error:
self.module.fail_json(msg="Failed to retrieve the webservices about information! Array Id [%s]. Error [%s]."
% (self.ssid, to_native(error)))
return self.is_embedded_mode
def request(self, path, data=None, method='GET', headers=None, ignore_errors=False):
"""Issue an HTTP request to a url, retrieving an optional JSON response.
:param str path: web services rest api endpoint path (Example: storage-systems/1/graph). Note that when the
full url path is specified then that will be used without supplying the protocol, hostname, port and rest path.
:param data: data required for the request (data may be json or any python structured data)
:param str method: request method such as GET, POST, DELETE.
:param dict headers: dictionary containing request headers.
:param bool ignore_errors: forces the request to ignore any raised exceptions.
"""
self._check_web_services_version()
if headers is None:
headers = self.DEFAULT_HEADERS
        if not isinstance(data, str) and headers.get("Content-Type") == "application/json":
data = json.dumps(data)
if path.startswith("/"):
path = path[1:]
request_url = self.url + self.DEFAULT_REST_API_PATH + path
        if self.log_requests:
self.module.log(pformat(dict(url=request_url, data=data, method=method)))
return request(url=request_url, data=data, method=method, headers=headers, use_proxy=True, force=False, last_mod_time=None,
timeout=self.DEFAULT_TIMEOUT, http_agent=self.HTTP_AGENT, force_basic_auth=True, ignore_errors=ignore_errors, **self.creds)
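The minimum-version gate in `_check_web_services_version` compares the major, minor, and revision fields of a four-part version string and ignores the build field. A standalone sketch of that rule using integer fields (`meets_minimum` is an illustrative name, not module API):

```python
def meets_minimum(current, minimum):
    """Return True when a web services version string such as
    '04.10.0000.0001' satisfies the minimum. The third (build) field
    is ignored, mirroring the module's check."""
    major, minor, _build, revision = [int(part) for part in current.split(".")]
    min_major, min_minor, _min_build, min_revision = [int(part) for part in minimum.split(".")]
    # Lexicographic tuple comparison is equivalent to the chained
    # major/minor/revision conditions in the module.
    return (major, minor, revision) >= (min_major, min_minor, min_revision)


print(meets_minimum("04.10.0000.0001", "02.00.0000.0000"))  # True
print(meets_minimum("01.90.0000.0000", "02.00.0000.0000"))  # False
```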
def create_multipart_formdata(files, fields=None, send_8kb=False):
"""Create the data for a multipart/form request.
:param list(list) files: list of lists each containing (name, filename, path).
:param list(list) fields: list of lists each containing (key, value).
:param bool send_8kb: only sends the first 8kb of the files (default: False).
"""
boundary = "---------------------------" + "".join([str(random.randint(0, 9)) for x in range(27)])
data_parts = list()
data = None
if six.PY2: # Generate payload for Python 2
newline = "\r\n"
if fields is not None:
for key, value in fields:
data_parts.extend(["--%s" % boundary,
'Content-Disposition: form-data; name="%s"' % key,
"",
value])
for name, filename, path in files:
with open(path, "rb") as fh:
value = fh.read(8192) if send_8kb else fh.read()
data_parts.extend(["--%s" % boundary,
'Content-Disposition: form-data; name="%s"; filename="%s"' % (name, filename),
"Content-Type: %s" % (mimetypes.guess_type(path)[0] or "application/octet-stream"),
"",
value])
data_parts.extend(["--%s--" % boundary, ""])
data = newline.join(data_parts)
else:
newline = six.b("\r\n")
if fields is not None:
for key, value in fields:
data_parts.extend([six.b("--%s" % boundary),
six.b('Content-Disposition: form-data; name="%s"' % key),
six.b(""),
six.b(value)])
for name, filename, path in files:
with open(path, "rb") as fh:
value = fh.read(8192) if send_8kb else fh.read()
data_parts.extend([six.b("--%s" % boundary),
six.b('Content-Disposition: form-data; name="%s"; filename="%s"' % (name, filename)),
six.b("Content-Type: %s" % (mimetypes.guess_type(path)[0] or "application/octet-stream")),
six.b(""),
value])
data_parts.extend([six.b("--%s--" % boundary), b""])
data = newline.join(data_parts)
headers = {
"Content-Type": "multipart/form-data; boundary=%s" % boundary,
"Content-Length": str(len(data))}
return headers, data
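`create_multipart_formdata` assembles the multipart/form-data wire format by hand rather than relying on an HTTP library. A hedged, Python-3-only sketch of the same payload layout (`build_multipart` is an illustrative name; the module's version also handles Python 2 via `six` and can truncate files to 8 KB):

```python
import mimetypes
import os
import random
import tempfile


def build_multipart(files, fields=None):
    """Minimal Python-3-only sketch of the multipart/form-data payload
    assembled by create_multipart_formdata above."""
    boundary = "---------------------------" + "".join(str(random.randint(0, 9)) for _ in range(27))
    newline = b"\r\n"
    parts = []
    for key, value in (fields or []):
        parts += [b"--" + boundary.encode(),
                  b'Content-Disposition: form-data; name="%s"' % key.encode(),
                  b"", value.encode()]
    for name, filename, path in files:
        with open(path, "rb") as fh:
            body = fh.read()
        ctype = mimetypes.guess_type(path)[0] or "application/octet-stream"
        parts += [b"--" + boundary.encode(),
                  b'Content-Disposition: form-data; name="%s"; filename="%s"' % (name.encode(), filename.encode()),
                  b"Content-Type: " + ctype.encode(), b"", body]
    parts += [b"--" + boundary.encode() + b"--", b""]
    data = newline.join(parts)
    headers = {"Content-Type": "multipart/form-data; boundary=" + boundary,
               "Content-Length": str(len(data))}
    return headers, data


# Usage: upload-style payload for a local file.
with tempfile.NamedTemporaryFile(suffix=".bin", delete=False) as tmp:
    tmp.write(b"firmware-bytes")
headers, data = build_multipart([("firmwareFile", "fw.bin", tmp.name)])
print(b"firmware-bytes" in data)  # True
os.unlink(tmp.name)
```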
def request(url, data=None, headers=None, method='GET', use_proxy=True,
force=False, last_mod_time=None, timeout=10, validate_certs=True,
url_username=None, url_password=None, http_agent=None, force_basic_auth=True, ignore_errors=False):
"""Issue an HTTP request to a url, retrieving an optional JSON response."""
if headers is None:
headers = {"Content-Type": "application/json", "Accept": "application/json"}
headers.update({"netapp-client-type": "Ansible-%s" % ansible_version})
if not http_agent:
http_agent = "Ansible / %s" % ansible_version
try:
r = open_url(url=url, data=data, headers=headers, method=method, use_proxy=use_proxy,
force=force, last_mod_time=last_mod_time, timeout=timeout, validate_certs=validate_certs,
url_username=url_username, url_password=url_password, http_agent=http_agent,
force_basic_auth=force_basic_auth)
except HTTPError as err:
r = err.fp
try:
raw_data = r.read()
if raw_data:
data = json.loads(raw_data)
else:
raw_data = None
except Exception:
if ignore_errors:
pass
else:
raise Exception(raw_data)
resp_code = r.getcode()
if resp_code >= 400 and not ignore_errors:
raise Exception(resp_code, data)
else:
return resp_code, data
def ems_log_event(source, server, name="Ansible", id="12345", version=ansible_version,
category="Information", event="setup", autosupport="false"):
ems_log = zapi.NaElement('ems-autosupport-log')
# Host name invoking the API.
ems_log.add_new_child("computer-name", name)
# ID of event. A user defined event-id, range [0..2^32-2].
ems_log.add_new_child("event-id", id)
# Name of the application invoking the API.
ems_log.add_new_child("event-source", source)
# Version of application invoking the API.
ems_log.add_new_child("app-version", version)
# Application defined category of the event.
ems_log.add_new_child("category", category)
# Description of event to log. An application defined message to log.
ems_log.add_new_child("event-description", event)
ems_log.add_new_child("log-level", "6")
ems_log.add_new_child("auto-support", autosupport)
server.invoke_successfully(ems_log, True)
def get_cserver_zapi(server):
vserver_info = zapi.NaElement('vserver-get-iter')
query_details = zapi.NaElement.create_node_with_children('vserver-info', **{'vserver-type': 'admin'})
query = zapi.NaElement('query')
query.add_child_elem(query_details)
vserver_info.add_child_elem(query)
result = server.invoke_successfully(vserver_info,
enable_tunneling=False)
attribute_list = result.get_child_by_name('attributes-list')
vserver_list = attribute_list.get_child_by_name('vserver-info')
return vserver_list.get_child_content('vserver-name')
def get_cserver(connection, is_rest=False):
if not is_rest:
return get_cserver_zapi(connection)
params = {'fields': 'type'}
api = "private/cli/vserver"
json, error = connection.get(api, params)
if json is None or error is not None:
# exit if there is an error or no data
return None
vservers = json.get('records')
if vservers is not None:
for vserver in vservers:
if vserver['type'] == 'admin': # cluster admin
return vserver['vserver']
if len(vservers) == 1: # assume vserver admin
return vservers[0]['vserver']
return None
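The REST branch of `get_cserver` prefers the cluster-admin vserver and falls back to a lone record assumed to be a vserver admin. A standalone sketch of that selection (`pick_cserver` is an illustrative name, not module API):

```python
def pick_cserver(records):
    """Mirror of the REST branch of get_cserver above: prefer the
    cluster-admin vserver, fall back to a single vserver-admin record."""
    if records is None:
        return None
    for vserver in records:
        if vserver['type'] == 'admin':  # cluster admin
            return vserver['vserver']
    if len(records) == 1:  # assume vserver admin
        return records[0]['vserver']
    return None


print(pick_cserver([{'type': 'admin', 'vserver': 'cluster1'}]))  # cluster1
```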
class OntapRestAPI(object):
def __init__(self, module, timeout=60):
self.module = module
self.username = self.module.params['username']
self.password = self.module.params['password']
self.hostname = self.module.params['hostname']
self.use_rest = self.module.params['use_rest']
self.verify = self.module.params['validate_certs']
self.timeout = timeout
self.url = 'https://' + self.hostname + '/api/'
self.errors = list()
self.debug_logs = list()
self.check_required_library()
def check_required_library(self):
if not HAS_REQUESTS:
self.module.fail_json(msg=missing_required_lib('requests'))
def send_request(self, method, api, params, json=None, return_status_code=False):
        ''' send http request and process response, including error conditions '''
url = self.url + api
status_code = None
content = None
json_dict = None
json_error = None
error_details = None
def get_json(response):
''' extract json, and error message if present '''
try:
json = response.json()
except ValueError:
return None, None
error = json.get('error')
return json, error
try:
response = requests.request(method, url, verify=self.verify, auth=(self.username, self.password), params=params, timeout=self.timeout, json=json)
content = response.content # for debug purposes
status_code = response.status_code
# If the response was successful, no Exception will be raised
response.raise_for_status()
json_dict, json_error = get_json(response)
except requests.exceptions.HTTPError as err:
__, json_error = get_json(response)
if json_error is None:
self.log_error(status_code, 'HTTP error: %s' % err)
error_details = str(err)
# If an error was reported in the json payload, it is handled below
except requests.exceptions.ConnectionError as err:
self.log_error(status_code, 'Connection error: %s' % err)
error_details = str(err)
except Exception as err:
self.log_error(status_code, 'Other error: %s' % err)
error_details = str(err)
if json_error is not None:
self.log_error(status_code, 'Endpoint error: %d: %s' % (status_code, json_error))
error_details = json_error
self.log_debug(status_code, content)
if return_status_code:
return status_code, error_details
return json_dict, error_details
def get(self, api, params):
method = 'GET'
return self.send_request(method, api, params)
def post(self, api, data, params=None):
method = 'POST'
return self.send_request(method, api, params, json=data)
def patch(self, api, data, params=None):
method = 'PATCH'
return self.send_request(method, api, params, json=data)
def delete(self, api, data, params=None):
method = 'DELETE'
return self.send_request(method, api, params, json=data)
def _is_rest(self, used_unsupported_rest_properties=None):
if self.use_rest == "Always":
if used_unsupported_rest_properties:
error = "REST API currently does not support '%s'" % \
', '.join(used_unsupported_rest_properties)
return True, error
else:
return True, None
if self.use_rest == 'Never' or used_unsupported_rest_properties:
# force ZAPI if requested or if some parameter requires it
return False, None
method = 'HEAD'
api = 'cluster/software'
status_code, __ = self.send_request(method, api, params=None, return_status_code=True)
if status_code == 200:
return True, None
return False, None
def is_rest(self, used_unsupported_rest_properties=None):
''' only return error if there is a reason to '''
use_rest, error = self._is_rest(used_unsupported_rest_properties)
if used_unsupported_rest_properties is None:
return use_rest
return use_rest, error
def log_error(self, status_code, message):
self.errors.append(message)
self.debug_logs.append((status_code, message))
def log_debug(self, status_code, content):
self.debug_logs.append((status_code, content))
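`_is_rest` resolves the `use_rest` policy before probing the cluster. A standalone sketch of the decision table (`decide_use_rest` and `probe_ok` are illustrative names; `probe_ok` stands in for the HEAD request against `cluster/software`):

```python
def decide_use_rest(use_rest, unsupported_props, probe_ok):
    """Return (use_rest, error) following the same precedence as
    OntapRestAPI._is_rest: 'Always' wins (with an error if unsupported
    properties are in use), 'Never' or unsupported properties force
    ZAPI, otherwise the probe result decides."""
    if use_rest == 'Always':
        if unsupported_props:
            return True, "REST API currently does not support '%s'" % ', '.join(unsupported_props)
        return True, None
    if use_rest == 'Never' or unsupported_props:
        return False, None
    return (True, None) if probe_ok else (False, None)


print(decide_use_rest('Auto', None, True))  # (True, None)
```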
class AwsCvsRestAPI(object):
def __init__(self, module, timeout=60):
self.module = module
self.api_key = self.module.params['api_key']
self.secret_key = self.module.params['secret_key']
self.api_url = self.module.params['api_url']
self.verify = self.module.params['validate_certs']
self.timeout = timeout
self.url = 'https://' + self.api_url + '/v1/'
self.check_required_library()
def check_required_library(self):
if not HAS_REQUESTS:
self.module.fail_json(msg=missing_required_lib('requests'))
def send_request(self, method, api, params, json=None):
        ''' send http request and process response, including error conditions '''
url = self.url + api
status_code = None
content = None
json_dict = None
json_error = None
error_details = None
headers = {
'Content-type': "application/json",
'api-key': self.api_key,
'secret-key': self.secret_key,
'Cache-Control': "no-cache",
}
def get_json(response):
''' extract json, and error message if present '''
try:
json = response.json()
except ValueError:
return None, None
success_code = [200, 201, 202]
if response.status_code not in success_code:
error = json.get('message')
else:
error = None
return json, error
try:
response = requests.request(method, url, headers=headers, timeout=self.timeout, json=json)
status_code = response.status_code
# If the response was successful, no Exception will be raised
json_dict, json_error = get_json(response)
except requests.exceptions.HTTPError as err:
__, json_error = get_json(response)
if json_error is None:
error_details = str(err)
except requests.exceptions.ConnectionError as err:
error_details = str(err)
except Exception as err:
error_details = str(err)
if json_error is not None:
error_details = json_error
return json_dict, error_details
def get(self, api, params=None):
method = 'GET'
return self.send_request(method, api, params)
def post(self, api, data, params=None):
method = 'POST'
return self.send_request(method, api, params, json=data)
def patch(self, api, data, params=None):
method = 'PATCH'
return self.send_request(method, api, params, json=data)
def put(self, api, data, params=None):
method = 'PUT'
return self.send_request(method, api, params, json=data)
def delete(self, api, data, params=None):
method = 'DELETE'
return self.send_request(method, api, params, json=data)
    def get_state(self, jobId):
        """ Poll the job until its state reaches 'done' """
        response, error_details = self.get('Jobs/%s' % jobId)
        while str(response['state']) != 'done':
            response, error_details = self.get('Jobs/%s' % jobId)
        return 'done'
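`get_state` above polls the job endpoint in a tight loop with no pause between requests. A hedged sketch of the same loop with a delay between polls and an upper bound (all names here are illustrative, not module API):

```python
import time


def wait_for_job(fetch, poll_seconds=2, max_polls=150):
    """Poll a job-state callable until it reports 'done'.
    `fetch` stands in for self.get('Jobs/<jobId>') and must return the
    job's current state string."""
    for _ in range(max_polls):
        state = fetch()
        if state == 'done':
            return state
        time.sleep(poll_seconds)
    raise TimeoutError("job did not reach 'done' within the poll limit")


# Usage with a canned sequence of states.
states = iter(['running', 'running', 'done'])
print(wait_for_job(lambda: next(states), poll_seconds=0))  # done
```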

# (c) 2020, NetApp, Inc
# BSD-3 Clause (see COPYING or https://opensource.org/licenses/BSD-3-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import json
import random
import mimetypes
from pprint import pformat
from ansible.module_utils import six
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils.six.moves.urllib.error import HTTPError, URLError
from ansible.module_utils.urls import open_url
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils._text import to_native
try:
from ansible.module_utils.ansible_release import __version__ as ansible_version
except ImportError:
ansible_version = 'unknown'
try:
from urlparse import urlparse, urlunparse
except ImportError:
from urllib.parse import urlparse, urlunparse
def eseries_host_argument_spec():
"""Retrieve a base argument specification common to all NetApp E-Series modules"""
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
api_username=dict(type="str", required=True),
api_password=dict(type="str", required=True, no_log=True),
api_url=dict(type="str", required=True),
ssid=dict(type="str", required=False, default="1"),
validate_certs=dict(type="bool", required=False, default=True)
))
return argument_spec
def eseries_proxy_argument_spec():
"""Retrieve a base argument specification common to all NetApp E-Series modules for proxy specific tasks"""
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
api_username=dict(type="str", required=True),
api_password=dict(type="str", required=True, no_log=True),
api_url=dict(type="str", required=True),
validate_certs=dict(type="bool", required=False, default=True)
))
return argument_spec
class NetAppESeriesModule(object):
"""Base class for all NetApp E-Series modules.
Provides a set of common methods for NetApp E-Series modules, including version checking, mode (proxy, embedded)
verification, http requests, secure http redirection for embedded web services, and logging setup.
Be sure to add the following lines in the module's documentation section:
extends_documentation_fragment:
- santricity
:param dict(dict) ansible_options: dictionary of ansible option definitions
:param str web_services_version: minimally required web services rest api version (default value: "02.00.0000.0000")
:param bool supports_check_mode: whether the module will support the check_mode capabilities (default=False)
:param list(list) mutually_exclusive: list containing list(s) of mutually exclusive options (optional)
:param list(list) required_if: list containing list(s) containing the option, the option value, and then a list of required options. (optional)
:param list(list) required_one_of: list containing list(s) of options for which at least one is required. (optional)
:param list(list) required_together: list containing list(s) of options that are required together. (optional)
:param bool log_requests: controls whether to log each request (default: True)
:param bool proxy_specific_task: controls whether ssid is a default option (default: False)
"""
DEFAULT_TIMEOUT = 300
DEFAULT_SECURE_PORT = "8443"
DEFAULT_BASE_PATH = "devmgr/"
DEFAULT_REST_API_PATH = "devmgr/v2/"
DEFAULT_REST_API_ABOUT_PATH = "devmgr/utils/about"
DEFAULT_HEADERS = {"Content-Type": "application/json", "Accept": "application/json",
"netapp-client-type": "Ansible-%s" % ansible_version}
HTTP_AGENT = "Ansible / %s" % ansible_version
SIZE_UNIT_MAP = dict(bytes=1, b=1, kb=1024, mb=1024**2, gb=1024**3, tb=1024**4,
pb=1024**5, eb=1024**6, zb=1024**7, yb=1024**8)
HOST_TYPE_INDEXES = {"aix mpio": 9, "avt 4m": 5, "hp-ux": 15, "linux atto": 24, "linux dm-mp": 28, "linux pathmanager": 25, "solaris 10 or earlier": 2,
"solaris 11 or later": 17, "svc": 18, "ontap": 26, "mac": 22, "vmware": 10, "windows": 1, "windows atto": 23, "windows clustered": 8}
def __init__(self, ansible_options, web_services_version=None, supports_check_mode=False,
mutually_exclusive=None, required_if=None, required_one_of=None, required_together=None,
log_requests=True, proxy_specific_task=False):
if proxy_specific_task:
argument_spec = eseries_proxy_argument_spec()
else:
argument_spec = eseries_host_argument_spec()
argument_spec.update(ansible_options)
self.module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=supports_check_mode,
mutually_exclusive=mutually_exclusive, required_if=required_if,
required_one_of=required_one_of, required_together=required_together)
args = self.module.params
self.web_services_version = web_services_version if web_services_version else "02.00.0000.0000"
if proxy_specific_task:
self.ssid = "0"
else:
self.ssid = args["ssid"]
self.url = args["api_url"]
self.log_requests = log_requests
self.creds = dict(url_username=args["api_username"],
url_password=args["api_password"],
validate_certs=args["validate_certs"])
if not self.url.endswith("/"):
self.url += "/"
self.is_proxy_used_cache = None
self.is_embedded_available_cache = None
self.is_web_services_valid_cache = None
def _check_ssid(self):
"""Verify storage system identifier exist on the proxy and, if not, then update to match storage system name."""
try:
rc, data = self._request(url=self.url + self.DEFAULT_REST_API_ABOUT_PATH, **self.creds)
if data["runningAsProxy"]:
if self.ssid.lower() not in ["proxy", "0"]:
try:
rc, systems = self._request(url=self.url + self.DEFAULT_REST_API_PATH + "storage-systems", **self.creds)
alternates = []
for system in systems:
if system["id"] == self.ssid:
break
elif system["name"] == self.ssid:
alternates.append(system["id"])
else:
if len(alternates) == 1:
self.module.warn("Array Id does not exist on Web Services Proxy Instance! However, there is a storage system with a"
" matching name. Updating Identifier. Array Name: [%s], Array Id [%s]." % (self.ssid, alternates[0]))
self.ssid = alternates[0]
else:
self.module.fail_json(msg="Array identifier does not exist on Web Services Proxy Instance! Array ID [%s]." % self.ssid)
except Exception as error:
self.module.fail_json(msg="Failed to determine Web Services Proxy storage systems! Array [%s]. Error [%s]" % (self.ssid, to_native(error)))
except Exception as error:
            # Don't fail here; if the ssid is wrong, it will fail on the next request. Failing here would cause issues for the na_santricity_auth module.
pass
def _check_web_services_version(self):
"""Verify proxy or embedded web services meets minimum version required for module.
        The minimum required web services version is evaluated against the version supplied through the web services rest
        api. An AnsibleFailJson exception will be raised when the minimum is not met.
        This helper function will update the supplied api url if secure http is not used for embedded web services.
:raise AnsibleFailJson: raised when the contacted api service does not meet the minimum required version.
"""
if not self.is_web_services_valid_cache:
url_parts = urlparse(self.url)
if not url_parts.scheme or not url_parts.netloc:
self.module.fail_json(msg="Failed to provide valid API URL. Example: https://192.168.1.100:8443/devmgr/v2. URL [%s]." % self.url)
if url_parts.scheme not in ["http", "https"]:
self.module.fail_json(msg="Protocol must be http or https. URL [%s]." % self.url)
self.url = "%s://%s/" % (url_parts.scheme, url_parts.netloc)
about_url = self.url + self.DEFAULT_REST_API_ABOUT_PATH
rc, data = request(about_url, timeout=self.DEFAULT_TIMEOUT, headers=self.DEFAULT_HEADERS, ignore_errors=True, force_basic_auth=False, **self.creds)
if rc != 200:
self.module.warn("Failed to retrieve web services about information! Retrying with secure ports. Array Id [%s]." % self.ssid)
self.url = "https://%s:8443/" % url_parts.netloc.split(":")[0]
about_url = self.url + self.DEFAULT_REST_API_ABOUT_PATH
try:
rc, data = request(about_url, timeout=self.DEFAULT_TIMEOUT, headers=self.DEFAULT_HEADERS, **self.creds)
except Exception as error:
self.module.fail_json(msg="Failed to retrieve the webservices about information! Array Id [%s]. Error [%s]."
% (self.ssid, to_native(error)))
if len(data["version"].split(".")) == 4:
                major, minor, other, revision = [int(part) for part in data["version"].split(".")]
                minimum_major, minimum_minor, other, minimum_revision = [int(part) for part in self.web_services_version.split(".")]
                if not (major > minimum_major or
                        (major == minimum_major and minor > minimum_minor) or
                        (major == minimum_major and minor == minimum_minor and revision >= minimum_revision)):
self.module.fail_json(msg="Web services version does not meet minimum version required. Current version: [%s]."
" Version required: [%s]." % (data["version"], self.web_services_version))
self.module.log("Web services rest api version met the minimum required version.")
else:
self.module.warn("Web services rest api version unknown!")
self._check_ssid()
self.is_web_services_valid_cache = True
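The URL handling above reduces whatever the caller supplied to a `scheme://host[:port]/` base before appending the about/REST paths. A minimal standalone sketch of that normalization (the function name is illustrative, not part of the module):

```python
from urllib.parse import urlparse


def normalize_base_url(url):
    """Reduce a Web Services URL to its 'scheme://netloc/' base, as the
    module does before appending the about/REST endpoint paths."""
    parts = urlparse(url)
    if not parts.scheme or not parts.netloc:
        raise ValueError("Failed to provide valid API URL. Example: https://192.168.1.100:8443/devmgr/v2. URL [%s]." % url)
    if parts.scheme not in ("http", "https"):
        raise ValueError("Protocol must be http or https. URL [%s]." % url)
    return "%s://%s/" % (parts.scheme, parts.netloc)
```

For example, `normalize_base_url("https://192.168.1.100:8443/devmgr/v2")` yields `"https://192.168.1.100:8443/"`, which is exactly the base the retry path rebuilds on port 8443 when the first contact attempt fails.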
def is_web_services_version_met(self, version):
"""Determines whether a particular web services version has been satisfied."""
split_version = version.split(".")
if len(split_version) != 4 or not split_version[0].isdigit() or not split_version[1].isdigit() or not split_version[3].isdigit():
self.module.fail_json(msg="Version is not a valid Web Services version. Version [%s]." % version)
url_parts = urlparse(self.url)
if not url_parts.scheme or not url_parts.netloc:
self.module.fail_json(msg="Failed to provide valid API URL. Example: https://192.168.1.100:8443/devmgr/v2. URL [%s]." % self.url)
if url_parts.scheme not in ["http", "https"]:
self.module.fail_json(msg="Protocol must be http or https. URL [%s]." % self.url)
self.url = "%s://%s/" % (url_parts.scheme, url_parts.netloc)
about_url = self.url + self.DEFAULT_REST_API_ABOUT_PATH
rc, data = request(about_url, timeout=self.DEFAULT_TIMEOUT, headers=self.DEFAULT_HEADERS, ignore_errors=True, **self.creds)
if rc != 200:
self.module.warn("Failed to retrieve web services about information! Retrying with secure ports. Array Id [%s]." % self.ssid)
self.url = "https://%s:8443/" % url_parts.netloc.split(":")[0]
about_url = self.url + self.DEFAULT_REST_API_ABOUT_PATH
try:
rc, data = request(about_url, timeout=self.DEFAULT_TIMEOUT, headers=self.DEFAULT_HEADERS, **self.creds)
except Exception as error:
self.module.fail_json(msg="Failed to retrieve the webservices about information! Array Id [%s]. Error [%s]." % (self.ssid, to_native(error)))
if len(data["version"].split(".")) == 4:
major, minor, other, revision = data["version"].split(".")
minimum_major, minimum_minor, other, minimum_revision = split_version
# Compare components numerically; comparing the raw strings would misorder versions such as "10" and "9".
if not (int(major) > int(minimum_major) or
        (int(major) == int(minimum_major) and int(minor) > int(minimum_minor)) or
        (int(major) == int(minimum_major) and int(minor) == int(minimum_minor) and int(revision) >= int(minimum_revision))):
return False
else:
return False
return True
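The component-wise comparison in is_web_services_version_met can be stated more compactly with integer tuples. A sketch under the same rules (only the major, minor, and revision fields participate; the helper name is illustrative):

```python
def version_met(current, minimum):
    """True when a dotted four-part version meets the minimum.
    Mirrors the module's logic: the third ("other") field is ignored,
    and components are compared as integers, not strings."""
    cur = current.split(".")
    req = minimum.split(".")
    if len(cur) != 4 or len(req) != 4:
        return False
    # Tuple comparison on ints orders (major, minor, revision) correctly.
    return (int(cur[0]), int(cur[1]), int(cur[3])) >= (int(req[0]), int(req[1]), int(req[3]))
```

Integer conversion matters here: a lexical string comparison would conclude that "10" is less than "9".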
def is_embedded_available(self):
"""Determine whether the storage array has embedded services available."""
self._check_web_services_version()
if self.is_embedded_available_cache is None:
if self.is_proxy():
if self.ssid == "0" or self.ssid.lower() == "proxy":
self.is_embedded_available_cache = False
else:
try:
rc, bundle = self.request("storage-systems/%s/graph/xpath-filter?query=/sa/saData/extendedSAData/codeVersions[codeModule='bundle']"
% self.ssid)
self.is_embedded_available_cache = False
if bundle:
self.is_embedded_available_cache = True
except Exception as error:
self.module.fail_json(msg="Failed to retrieve information about storage system [%s]. Error [%s]." % (self.ssid, to_native(error)))
else: # Contacted using embedded web services
self.is_embedded_available_cache = True
self.module.log("embedded_available: [%s]" % ("True" if self.is_embedded_available_cache else "False"))
return self.is_embedded_available_cache
def is_embedded(self):
"""Determine whether web services server is the embedded web services."""
return not self.is_proxy()
def is_proxy(self):
"""Determine whether web services server is the proxy web services.
:raise AnsibleFailJson: raised when web services about endpoint failed to be contacted.
:return bool: whether contacted web services is running from storage array (embedded) or from a proxy.
"""
self._check_web_services_version()
if self.is_proxy_used_cache is None:
about_url = self.url + self.DEFAULT_REST_API_ABOUT_PATH
try:
rc, data = request(about_url, timeout=self.DEFAULT_TIMEOUT, headers=self.DEFAULT_HEADERS, force_basic_auth=False, **self.creds)
self.is_proxy_used_cache = data["runningAsProxy"]
self.module.log("proxy: [%s]" % ("True" if self.is_proxy_used_cache else "False"))
except Exception as error:
self.module.fail_json(msg="Failed to retrieve the webservices about information! Array Id [%s]. Error [%s]." % (self.ssid, to_native(error)))
return self.is_proxy_used_cache
def request(self, path, rest_api_path=DEFAULT_REST_API_PATH, rest_api_url=None, data=None, method='GET', headers=None, ignore_errors=False, timeout=None,
force_basic_auth=True, log_request=None, json_response=True):
"""Issue an HTTP request to a url, retrieving an optional JSON response.
:param str path: web services rest api endpoint path (Example: storage-systems/1/graph). Note that when the
full url path is specified then that will be used without supplying the protocol, hostname, port and rest path.
:param str rest_api_path: override the class DEFAULT_REST_API_PATH which is used to build the request URL.
:param str rest_api_url: override the class url member which contains the base url for web services.
:param data: data required for the request (data may be json or any python structured data)
:param str method: request method such as GET, POST, DELETE.
:param dict headers: dictionary containing request headers.
:param bool ignore_errors: forces the request to ignore any raised exceptions.
:param int timeout: duration of seconds before request finally times out.
:param bool force_basic_auth: Ensure that basic authentication is being used.
:param bool log_request: Log the request and response
:param bool json_response: Whether the response should be loaded as JSON, otherwise the response is returned raw.
"""
self._check_web_services_version()
if rest_api_url is None:
rest_api_url = self.url
if headers is None:
headers = self.DEFAULT_HEADERS
if timeout is None:
timeout = self.DEFAULT_TIMEOUT
if log_request is None:
log_request = self.log_requests
if not isinstance(data, str) and "Content-Type" in headers and headers["Content-Type"] == "application/json":
data = json.dumps(data)
if path.startswith("/"):
path = path[1:]
request_url = rest_api_url + rest_api_path + path
if log_request:
self.module.log(pformat(dict(url=request_url, data=data, method=method, headers=headers)))
response = self._request(url=request_url, data=data, method=method, headers=headers, last_mod_time=None, timeout=timeout, http_agent=self.HTTP_AGENT,
force_basic_auth=force_basic_auth, ignore_errors=ignore_errors, json_response=json_response, **self.creds)
if log_request:
self.module.log(pformat(response))
return response
@staticmethod
def _request(url, data=None, headers=None, method='GET', use_proxy=True, force=False, last_mod_time=None, timeout=10, validate_certs=True,
url_username=None, url_password=None, http_agent=None, force_basic_auth=True, ignore_errors=False, json_response=True):
"""Issue an HTTP request to a url, retrieving an optional JSON response."""
if headers is None:
headers = {"Content-Type": "application/json", "Accept": "application/json"}
headers.update({"netapp-client-type": "Ansible-%s" % ansible_version})
if not http_agent:
http_agent = "Ansible / %s" % ansible_version
try:
r = open_url(url=url, data=data, headers=headers, method=method, use_proxy=use_proxy, force=force, last_mod_time=last_mod_time, timeout=timeout,
validate_certs=validate_certs, url_username=url_username, url_password=url_password, http_agent=http_agent,
force_basic_auth=force_basic_auth)
rc = r.getcode()
response = r.read()
if json_response and response:
response = json.loads(response)
except HTTPError as error:
rc = error.code
response = error.fp.read()
try:
if json_response:
response = json.loads(response)
except Exception:
pass
if not ignore_errors:
raise Exception(rc, response)
except ValueError as error:
pass
return rc, response
def create_multipart_formdata(files, fields=None, send_8kb=False):
"""Create the data for a multipart/form request.
:param list(list) files: list of lists each containing (name, filename, path).
:param list(list) fields: list of lists each containing (key, value).
:param bool send_8kb: only sends the first 8kb of the files (default: False).
"""
boundary = "---------------------------" + "".join([str(random.randint(0, 9)) for x in range(27)])
data_parts = list()
data = None
if six.PY2: # Generate payload for Python 2
newline = "\r\n"
if fields is not None:
for key, value in fields:
data_parts.extend(["--%s" % boundary,
'Content-Disposition: form-data; name="%s"' % key,
"",
value])
for name, filename, path in files:
with open(path, "rb") as fh:
value = fh.read(8192) if send_8kb else fh.read()
data_parts.extend(["--%s" % boundary,
'Content-Disposition: form-data; name="%s"; filename="%s"' % (name, filename),
"Content-Type: %s" % (mimetypes.guess_type(path)[0] or "application/octet-stream"),
"",
value])
data_parts.extend(["--%s--" % boundary, ""])
data = newline.join(data_parts)
else:
newline = six.b("\r\n")
if fields is not None:
for key, value in fields:
data_parts.extend([six.b("--%s" % boundary),
six.b('Content-Disposition: form-data; name="%s"' % key),
six.b(""),
six.b(value)])
for name, filename, path in files:
with open(path, "rb") as fh:
value = fh.read(8192) if send_8kb else fh.read()
data_parts.extend([six.b("--%s" % boundary),
six.b('Content-Disposition: form-data; name="%s"; filename="%s"' % (name, filename)),
six.b("Content-Type: %s" % (mimetypes.guess_type(path)[0] or "application/octet-stream")),
six.b(""),
value])
data_parts.extend([six.b("--%s--" % boundary), b""])
data = newline.join(data_parts)
headers = {
"Content-Type": "multipart/form-data; boundary=%s" % boundary,
"Content-Length": str(len(data))}
return headers, data
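The payload layout produced by create_multipart_formdata can be exercised without touching the filesystem or six. A simplified bytes-only sketch that builds the same multipart structure from in-memory content (function and variable names are illustrative):

```python
import mimetypes
import random


def build_multipart(fields, files):
    """Build a multipart/form-data body.
    fields: list of (key, value) string pairs.
    files:  list of (name, filename, content_bytes) tuples."""
    boundary = "---------------------------" + "".join(str(random.randint(0, 9)) for _ in range(27))
    parts = []
    for key, value in fields:
        parts += [b"--" + boundary.encode(),
                  ('Content-Disposition: form-data; name="%s"' % key).encode(),
                  b"",
                  value.encode()]
    for name, filename, content in files:
        content_type = mimetypes.guess_type(filename)[0] or "application/octet-stream"
        parts += [b"--" + boundary.encode(),
                  ('Content-Disposition: form-data; name="%s"; filename="%s"' % (name, filename)).encode(),
                  ("Content-Type: %s" % content_type).encode(),
                  b"",
                  content]
    parts += [b"--" + boundary.encode() + b"--", b""]
    body = b"\r\n".join(parts)
    headers = {"Content-Type": "multipart/form-data; boundary=%s" % boundary,
               "Content-Length": str(len(body))}
    return headers, body
```

Each part is delimited by the random boundary, file parts carry a guessed Content-Type falling back to application/octet-stream, and the body is terminated with the boundary followed by two dashes, matching the format the function above emits.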
def request(url, data=None, headers=None, method='GET', use_proxy=True,
force=False, last_mod_time=None, timeout=10, validate_certs=True,
url_username=None, url_password=None, http_agent=None, force_basic_auth=True, ignore_errors=False):
"""Issue an HTTP request to a url, retrieving an optional JSON response."""
if headers is None:
headers = {"Content-Type": "application/json", "Accept": "application/json"}
headers.update({"netapp-client-type": "Ansible-%s" % ansible_version})
if not http_agent:
http_agent = "Ansible / %s" % ansible_version
try:
r = open_url(url=url, data=data, headers=headers, method=method, use_proxy=use_proxy,
force=force, last_mod_time=last_mod_time, timeout=timeout, validate_certs=validate_certs,
url_username=url_username, url_password=url_password, http_agent=http_agent,
force_basic_auth=force_basic_auth)
except HTTPError as err:
r = err.fp
try:
raw_data = r.read()
if raw_data:
data = json.loads(raw_data)
else:
raw_data = None
except Exception:
if ignore_errors:
pass
else:
raise Exception(raw_data)
resp_code = r.getcode()
if resp_code >= 400 and not ignore_errors:
raise Exception(resp_code, data)
else:
return resp_code, data
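Callers of request() rely on its error contract: an HTTP status of 400 or above raises an exception carrying (code, body) unless ignore_errors=True, in which case the tuple is returned for inspection. A sketch of just that tail-end decision, isolated from the network (helper name is illustrative):

```python
def handle_response(resp_code, data, ignore_errors=False):
    """Mirror the tail of request(): raise on HTTP errors unless the
    caller asked to inspect them, otherwise hand back (code, body)."""
    if resp_code >= 400 and not ignore_errors:
        raise Exception(resp_code, data)
    return resp_code, data
```

This is why module code typically wraps request() in try/except and reports self.module.fail_json with the caught error, while probing code (such as the about-endpoint check) passes ignore_errors=True and branches on the returned status code instead.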


@@ -0,0 +1,253 @@
#!/usr/bin/python
# (c) 2018, NetApp, Inc
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = """
---
module: na_santricity_alerts
short_description: NetApp E-Series manage email notification settings
description:
- Certain E-Series systems have the capability to send email notifications on potentially critical events.
- This module will allow the owner of the system to specify email recipients for these messages.
author: Michael Price (@lmprice)
extends_documentation_fragment:
- netapp_eseries.santricity.santricity.santricity_doc
options:
state:
description:
- Enable/disable the sending of email-based alerts.
type: str
default: enabled
required: false
choices:
- enabled
- disabled
server:
description:
- A fully qualified domain name, IPv4 address, or IPv6 address of a mail server.
- To use a fully qualified domain name, you must configure a DNS server on both controllers using
M(na_santricity_mgmt_interface).
- Required when I(state=enabled).
type: str
required: false
sender:
description:
- This is the sender that the recipient will see. It doesn't necessarily need to be a valid email account.
- Required when I(state=enabled).
type: str
required: false
contact:
description:
- Allows the owner to specify some free-form contact information to be included in the emails.
- This is typically utilized to provide a contact phone number.
type: str
required: false
recipients:
description:
- The email addresses that will receive the email notifications.
- Required when I(state=enabled).
type: list
required: false
test:
description:
- When a change is detected in the configuration, a test email will be sent.
- This may take a few minutes to process.
- Only applicable if I(state=enabled).
type: bool
default: false
notes:
- Check mode is supported.
    - Alertable messages are a subset of messages shown by the Major Event Log (MEL) of the storage system. Examples
of alertable messages include drive failures, failed controllers, loss of redundancy, and other warning/critical
events.
- This API is currently only supported with the Embedded Web Services API v2.0 and higher.
"""
EXAMPLES = """
- name: Enable email-based alerting
na_santricity_alerts:
state: enabled
sender: noreply@example.com
        server: mail.example.com
contact: "Phone: 1-555-555-5555"
recipients:
- name1@example.com
- name2@example.com
api_url: "10.1.1.1:8443"
api_username: "admin"
api_password: "myPass"
- name: Disable alerting
na_santricity_alerts:
state: disabled
api_url: "10.1.1.1:8443"
api_username: "admin"
api_password: "myPass"
"""
RETURN = """
msg:
description: Success message
returned: on success
type: str
sample: The settings have been updated.
"""
import re
from ansible_collections.netapp_eseries.santricity.plugins.module_utils.santricity import NetAppESeriesModule
from ansible.module_utils._text import to_native
class NetAppESeriesAlerts(NetAppESeriesModule):
def __init__(self):
ansible_options = dict(state=dict(type='str', required=False, default='enabled', choices=['enabled', 'disabled']),
server=dict(type='str', required=False),
sender=dict(type='str', required=False),
contact=dict(type='str', required=False),
recipients=dict(type='list', required=False),
test=dict(type='bool', required=False, default=False))
required_if = [['state', 'enabled', ['server', 'sender', 'recipients']]]
super(NetAppESeriesAlerts, self).__init__(ansible_options=ansible_options,
web_services_version="02.00.0000.0000",
required_if=required_if,
supports_check_mode=True)
args = self.module.params
self.alerts = args['state'] == 'enabled'
self.server = args['server']
self.sender = args['sender']
self.contact = args['contact']
self.recipients = args['recipients']
self.test = args['test']
self.check_mode = self.module.check_mode
# Very basic validation on email addresses: xx@yy.zz
email = re.compile(r"[^@]+@[^@]+\.[^@]+")
if self.sender and not email.match(self.sender):
self.module.fail_json(msg="The sender (%s) provided is not a valid email address." % self.sender)
if self.recipients is not None:
for recipient in self.recipients:
if not email.match(recipient):
self.module.fail_json(msg="The recipient (%s) provided is not a valid email address." % recipient)
if len(self.recipients) < 1:
self.module.fail_json(msg="At least one recipient address must be specified.")
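The "very basic" xx@yy.zz pattern above is deliberately loose: it only requires one @ with a dot somewhere in the domain part. A quick standalone check of what it accepts and rejects:

```python
import re

# Same loose pattern the module compiles: something@something.something
EMAIL_PATTERN = re.compile(r"[^@]+@[^@]+\.[^@]+")


def is_plausible_email(address):
    """True when the address matches the module's basic xx@yy.zz shape."""
    return bool(EMAIL_PATTERN.match(address))
```

Addresses with no @, or with no dot after the @, fail the check, which is enough to catch obvious typos without attempting full RFC-level address validation.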
def get_configuration(self):
"""Retrieve the current storage system alert settings."""
if self.is_proxy():
if self.is_embedded_available():
try:
rc, result = self.request("storage-systems/%s/forward/devmgr/v2/storage-systems/1/device-alerts" % self.ssid)
return result
except Exception as err:
self.module.fail_json(msg="Failed to retrieve the alerts configuration! Array Id [%s]. Error [%s]." % (self.ssid, to_native(err)))
else:
self.module.fail_json(msg="Setting SANtricity alerts is only available from SANtricity Web Services Proxy if the storage system has"
" SANtricity Web Services Embedded available. Array [%s]." % self.ssid)
else:
try:
rc, result = self.request("storage-systems/%s/device-alerts" % self.ssid)
return result
except Exception as err:
self.module.fail_json(msg="Failed to retrieve the alerts configuration! Array Id [%s]. Error [%s]." % (self.ssid, to_native(err)))
def update_configuration(self):
"""Update the storage system alert settings."""
config = self.get_configuration()
update = False
body = dict()
if self.alerts:
body = dict(alertingEnabled=True)
if not config['alertingEnabled']:
update = True
body.update(emailServerAddress=self.server)
if config['emailServerAddress'] != self.server:
update = True
body.update(additionalContactInformation=self.contact, sendAdditionalContactInformation=True)
if self.contact and (self.contact != config['additionalContactInformation']
or not config['sendAdditionalContactInformation']):
update = True
body.update(emailSenderAddress=self.sender)
if config['emailSenderAddress'] != self.sender:
update = True
self.recipients.sort()
if config['recipientEmailAddresses']:
config['recipientEmailAddresses'].sort()
body.update(recipientEmailAddresses=self.recipients)
if config['recipientEmailAddresses'] != self.recipients:
update = True
elif config['alertingEnabled']:
body = {"alertingEnabled": False, "emailServerAddress": "", "emailSenderAddress": "", "sendAdditionalContactInformation": False,
"additionalContactInformation": "", "recipientEmailAddresses": []}
update = True
if update and not self.check_mode:
if self.is_proxy() and self.is_embedded_available():
try:
rc, result = self.request("storage-systems/%s/forward/devmgr/v2/storage-systems/1/device-alerts" % self.ssid, method="POST", data=body)
except Exception as err:
self.module.fail_json(msg="Failed to update the alerts configuration! Array Id [%s]. Error [%s]." % (self.ssid, to_native(err)))
else:
try:
rc, result = self.request("storage-systems/%s/device-alerts" % self.ssid, method="POST", data=body)
except Exception as err:
self.module.fail_json(msg="Failed to update the alerts configuration! Array Id [%s]. Error [%s]." % (self.ssid, to_native(err)))
return update
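update_configuration() follows a read-compare-write pattern: fetch the live config, flag an update only where a field differs, and POST at most once. The comparison itself can be sketched in isolation (the field names match the device-alerts payload; the helper name is illustrative):

```python
def alerts_change_required(current, desired):
    """True when the desired alert settings differ from the device's
    current device-alerts configuration."""
    if desired["alertingEnabled"] != current["alertingEnabled"]:
        return True
    if not desired["alertingEnabled"]:
        return False  # both disabled; the remaining fields are irrelevant
    if desired["emailServerAddress"] != current["emailServerAddress"]:
        return True
    if desired["emailSenderAddress"] != current["emailSenderAddress"]:
        return True
    # Recipient order is not significant, matching the module's sort-before-compare.
    return sorted(desired["recipientEmailAddresses"]) != sorted(current["recipientEmailAddresses"])
```

Sorting both recipient lists before comparing is what keeps the module idempotent when the same addresses are supplied in a different order.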
def send_test_email(self):
"""Send a test email to verify that the provided configuration is valid and functional."""
if not self.check_mode:
if self.is_proxy() and self.is_embedded_available():
try:
rc, resp = self.request("storage-systems/%s/forward/devmgr/v2/storage-systems/1/device-alerts/alert-email-test" % self.ssid, method="POST")
if resp['response'] != 'emailSentOK':
self.module.fail_json(msg="The test email failed with status=[%s]! Array Id [%s]." % (resp['response'], self.ssid))
except Exception as err:
self.module.fail_json(msg="We failed to send the test email! Array Id [%s]. Error [%s]." % (self.ssid, to_native(err)))
else:
try:
rc, resp = self.request("storage-systems/%s/device-alerts/alert-email-test" % self.ssid, method="POST")
if resp['response'] != 'emailSentOK':
self.module.fail_json(msg="The test email failed with status=[%s]! Array Id [%s]." % (resp['response'], self.ssid))
except Exception as err:
self.module.fail_json(msg="We failed to send the test email! Array Id [%s]. Error [%s]." % (self.ssid, to_native(err)))
def update(self):
update = self.update_configuration()
if self.test and update:
self.send_test_email()
if self.alerts:
msg = 'Alerting has been enabled using server=%s, sender=%s.' % (self.server, self.sender)
else:
msg = 'Alerting has been disabled.'
self.module.exit_json(msg=msg, changed=update)
def main():
alerts = NetAppESeriesAlerts()
alerts.update()
if __name__ == '__main__':
main()


@@ -0,0 +1,176 @@
#!/usr/bin/python
# (c) 2020, NetApp, Inc
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = """
---
module: na_santricity_alerts_syslog
short_description: NetApp E-Series manage syslog servers receiving storage system alerts.
description:
    - Manage the list of syslog servers that will receive notifications on potentially critical events.
author: Nathan Swartz (@ndswartz)
extends_documentation_fragment:
- netapp_eseries.santricity.santricity.santricity_doc
options:
servers:
description:
- List of dictionaries where each dictionary contains a syslog server entry.
type: list
required: False
suboptions:
address:
description:
- Syslog server address can be a fully qualified domain name, IPv4 address, or IPv6 address.
required: true
port:
description:
- UDP Port must be a numerical value between 0 and 65535. Typically, the UDP Port for syslog is 514.
required: false
default: 514
test:
description:
- This forces a test syslog message to be sent to the stated syslog server.
- Test will only be issued when a change is made.
type: bool
default: false
notes:
- Check mode is supported.
- This API is currently only supported with the Embedded Web Services API v2.12 (bundled with
SANtricity OS 11.40.2) and higher.
"""
EXAMPLES = """
- name: Add two syslog server configurations to NetApp E-Series storage array.
na_santricity_alerts_syslog:
ssid: "1"
api_url: "https://192.168.1.100:8443/devmgr/v2"
api_username: "admin"
api_password: "adminpass"
validate_certs: true
servers:
- address: "192.168.1.100"
- address: "192.168.2.100"
port: 514
- address: "192.168.3.100"
port: 1000
"""
RETURN = """
msg:
description: Success message
returned: on success
type: str
sample: The settings have been updated.
"""
from ansible_collections.netapp_eseries.santricity.plugins.module_utils.santricity import NetAppESeriesModule
from ansible.module_utils._text import to_native
class NetAppESeriesAlertsSyslog(NetAppESeriesModule):
def __init__(self):
ansible_options = dict(servers=dict(type="list", required=False),
test=dict(type="bool", default=False, required=False))
required_if = [["state", "present", ["address"]]]
mutually_exclusive = [["test", "absent"]]
super(NetAppESeriesAlertsSyslog, self).__init__(ansible_options=ansible_options,
web_services_version="02.00.0000.0000",
mutually_exclusive=mutually_exclusive,
required_if=required_if,
supports_check_mode=True)
args = self.module.params
if args["servers"] and len(args["servers"]) > 5:
self.module.fail_json(msg="Maximum number of syslog servers is 5! Array Id [%s]." % self.ssid)
self.servers = {}
if args["servers"] is not None:
for server in args["servers"]:
port = 514
if "port" in server:
port = server["port"]
self.servers.update({server["address"]: port})
self.test = args["test"]
self.check_mode = self.module.check_mode
# Check whether request needs to be forwarded on to the controller web services rest api.
self.url_path_prefix = ""
if not self.is_embedded() and self.ssid != "0" and self.ssid.lower() != "proxy":
self.url_path_prefix = "storage-systems/%s/forward/devmgr/v2/" % self.ssid
def get_current_configuration(self):
"""Retrieve existing alert-syslog configuration."""
try:
rc, result = self.request(self.url_path_prefix + "storage-systems/%s/device-alerts/alert-syslog" % ("1" if self.url_path_prefix else self.ssid))
return result
except Exception as error:
self.module.fail_json(msg="Failed to retrieve syslog configuration! Array Id [%s]. Error [%s]." % (self.ssid, to_native(error)))
def is_change_required(self):
"""Determine whether changes are required."""
current_config = self.get_current_configuration()
# When syslog servers should exist, search for them.
if self.servers:
for entry in current_config["syslogReceivers"]:
if entry["serverName"] not in self.servers.keys() or entry["portNumber"] != self.servers[entry["serverName"]]:
return True
for server, port in self.servers.items():
for entry in current_config["syslogReceivers"]:
if server == entry["serverName"] and port == entry["portNumber"]:
break
else:
return True
return False
elif current_config["syslogReceivers"]:
return True
return False
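The double loop in is_change_required() is testing set equality between the configured receivers and the requested address-to-port map: every configured entry must be requested, and every requested entry must be configured. A dict comparison expresses the same test (helper name is illustrative):

```python
def syslog_change_required(current_receivers, desired_servers):
    """current_receivers: list of {"serverName": ..., "portNumber": ...}
    entries from the device; desired_servers: dict of address -> port.
    True when the two differ in any address or port."""
    current = {entry["serverName"]: entry["portNumber"] for entry in current_receivers}
    return current != desired_servers
```

An empty desired map against a non-empty device list also reports a change, matching the elif branch above that clears leftover receivers.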
def make_request_body(self):
"""Generate the request body."""
body = {"syslogReceivers": [], "defaultFacility": 3, "defaultTag": "StorageArray"}
for server, port in self.servers.items():
body["syslogReceivers"].append({"serverName": server, "portNumber": port})
return body
def test_configuration(self):
"""Send syslog test message to all systems (only option)."""
try:
rc, result = self.request(self.url_path_prefix + "storage-systems/%s/device-alerts/alert-syslog-test"
% ("1" if self.url_path_prefix else self.ssid), method="POST")
except Exception as error:
self.module.fail_json(msg="Failed to send test message! Array Id [%s]. Error [%s]." % (self.ssid, to_native(error)))
def update(self):
"""Update configuration and respond to ansible."""
change_required = self.is_change_required()
if change_required and not self.check_mode:
try:
rc, result = self.request(self.url_path_prefix + "storage-systems/%s/device-alerts/alert-syslog" % ("1" if self.url_path_prefix else self.ssid),
method="POST", data=self.make_request_body())
except Exception as error:
self.module.fail_json(msg="Failed to add syslog server! Array Id [%s]. Error [%s]." % (self.ssid, to_native(error)))
if self.test and self.servers:
self.test_configuration()
self.module.exit_json(msg="The syslog settings have been updated.", changed=change_required)
def main():
settings = NetAppESeriesAlertsSyslog()
settings.update()
if __name__ == '__main__':
main()


@@ -0,0 +1,544 @@
#!/usr/bin/python
# (c) 2020, NetApp, Inc
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = """
---
module: na_santricity_asup
short_description: NetApp E-Series manage auto-support settings
description:
- Allow the auto-support settings to be configured for an individual E-Series storage-system
author:
- Michael Price (@lmprice)
- Nathan Swartz (@ndswartz)
extends_documentation_fragment:
- netapp_eseries.santricity.santricity.santricity_doc
options:
state:
description:
- Enable/disable the E-Series auto-support configuration or maintenance mode.
- When this option is enabled, configuration, logs, and other support-related information will be relayed
to NetApp to help better support your system. No personally identifiable information, passwords, etc, will
be collected.
    - The maintenance state enables the maintenance window, which allows maintenance activities to be performed on the storage array without
      generating support cases.
- Maintenance mode cannot be enabled unless ASUP has previously been enabled.
type: str
default: enabled
choices:
- enabled
- disabled
- maintenance_enabled
- maintenance_disabled
active:
description:
- Enable active/proactive monitoring for ASUP. When a problem is detected by our monitoring systems, it's
possible that the bundle did not contain all of the required information at the time of the event.
Enabling this option allows NetApp support personnel to manually request transmission or re-transmission
        of support data in order to resolve the problem.
- Only applicable if I(state=enabled).
default: true
type: bool
start:
description:
- A start hour may be specified in a range from 0 to 23 hours.
- ASUP bundles will be sent daily between the provided start and end time (UTC).
- I(start) must be less than I(end).
type: int
default: 0
end:
description:
- An end hour may be specified in a range from 1 to 24 hours.
- ASUP bundles will be sent daily between the provided start and end time (UTC).
- I(start) must be less than I(end).
type: int
default: 24
days:
description:
- A list of days of the week that ASUP bundles will be sent. A larger, weekly bundle will be sent on one
of the provided days.
type: list
choices:
- monday
- tuesday
- wednesday
- thursday
- friday
- saturday
- sunday
required: false
aliases:
- schedule_days
- days_of_week
method:
description:
- AutoSupport dispatch delivery method.
choices:
- https
- http
- email
type: str
required: false
default: https
routing_type:
description:
    - AutoSupport routing.
    - Required when I(method==https) or I(method==http).
choices:
- direct
- proxy
- script
type: str
default: direct
required: false
proxy:
description:
- Information particular to the proxy delivery method.
    - Required when I(method==https) or I(method==http), and I(routing_type==proxy).
type: dict
required: false
suboptions:
host:
description:
- Proxy host IP address or fully qualified domain name.
        - Required when I(method==http) or I(method==https), and I(routing_type==proxy).
type: str
required: false
port:
description:
- Proxy host port.
        - Required when I(method==http) or I(method==https), and I(routing_type==proxy).
type: int
required: false
script:
description:
- Path to the AutoSupport routing script file.
        - Required when I(method==http) or I(method==https), and I(routing_type==script).
type: str
required: false
username:
description:
- Username for the proxy.
type: str
required: false
password:
description:
- Password for the proxy.
type: str
required: false
email:
description:
- Information particular to the e-mail delivery method.
- Uses the SMTP protocol.
    - Required when I(method==email).
type: dict
required: false
suboptions:
server:
description:
- Mail server's IP address or fully qualified domain name.
        - Required when I(method==email).
type: str
required: false
sender:
description:
        - Sender's email account.
        - Required when I(method==email).
type: str
required: false
test_recipient:
description:
        - Test verification email address.
        - Required when I(method==email).
type: str
required: false
maintenance_duration:
description:
- The duration of time the ASUP maintenance mode will be active.
- Permittable range is between 1 and 72 hours.
- Required when I(state==maintenance_enabled).
type: int
default: 24
required: false
maintenance_emails:
description:
- List of email addresses for maintenance notifications.
- Required when I(state==maintenance_enabled).
type: list
required: false
validate:
description:
- Validate ASUP configuration.
type: bool
default: false
required: false
notes:
- Check mode is supported.
- Enabling ASUP will allow our support teams to monitor the logs of the storage-system in order to proactively
respond to issues with the system. It is recommended that all ASUP-related options be enabled, but they may be
disabled if desired.
- This API is currently only supported with the Embedded Web Services API v2.0 and higher.
"""
EXAMPLES = """
- name: Enable ASUP and allow pro-active retrieval of bundles
na_santricity_asup:
ssid: "1"
api_url: "https://192.168.1.100:8443/devmgr/v2"
api_username: "admin"
api_password: "adminpass"
validate_certs: true
state: enabled
active: true
days: ["saturday", "sunday"]
start: 17
end: 20
- name: Disable ASUP
na_santricity_asup:
ssid: "1"
api_url: "https://192.168.1.100:8443/devmgr/v2"
api_username: "admin"
api_password: "adminpass"
validate_certs: true
state: disabled
- name: Enable the ASUP maintenance window for 24 hours
na_santricity_asup:
ssid: "1"
api_url: "https://192.168.1.100:8443/devmgr/v2"
api_username: "admin"
api_password: "adminpass"
state: maintenance_enabled
maintenance_duration: 24
maintenance_emails:
- admin@example.com
- name: Disable the ASUP maintenance window
na_santricity_asup:
ssid: "1"
api_url: "https://192.168.1.100:8443/devmgr/v2"
api_username: "admin"
api_password: "adminpass"
validate_certs: true
state: maintenance_disabled
"""
RETURN = """
msg:
description: Success message
returned: on success
type: str
sample: The settings have been updated.
asup:
description:
- True if ASUP is enabled.
returned: on success
sample: true
type: bool
active:
description:
- True if the active option has been enabled.
returned: on success
sample: true
type: bool
cfg:
description:
- Provide the full ASUP configuration.
returned: on success
type: complex
contains:
asupEnabled:
description:
- True if ASUP has been enabled.
type: bool
onDemandEnabled:
description:
- True if ASUP active monitoring has been enabled.
type: bool
daysOfWeek:
description:
- The days of the week that ASUP bundles will be sent.
type: list
"""
import time
from ansible_collections.netapp_eseries.santricity.plugins.module_utils.santricity import NetAppESeriesModule
from ansible.module_utils._text import to_native
class NetAppESeriesAsup(NetAppESeriesModule):
DAYS_OPTIONS = ["sunday", "monday", "tuesday", "wednesday", "thursday", "friday", "saturday"]
def __init__(self):
ansible_options = dict(
state=dict(type="str", required=False, default="enabled", choices=["enabled", "disabled", "maintenance_enabled", "maintenance_disabled"]),
active=dict(type="bool", required=False, default=True),
days=dict(type="list", required=False, aliases=["schedule_days", "days_of_week"], choices=self.DAYS_OPTIONS),
start=dict(type="int", required=False, default=0),
end=dict(type="int", required=False, default=24),
method=dict(type="str", required=False, choices=["https", "http", "email"], default="https"),
routing_type=dict(type="str", required=False, choices=["direct", "proxy", "script"], default="direct"),
proxy=dict(type="dict", required=False, options=dict(host=dict(type="str", required=False),
port=dict(type="int", required=False),
script=dict(type="str", required=False),
username=dict(type="str", required=False),
password=dict(type="str", no_log=True, required=False))),
email=dict(type="dict", required=False, options=dict(server=dict(type="str", required=False),
sender=dict(type="str", required=False),
test_recipient=dict(type="str", required=False))),
maintenance_duration=dict(type="int", required=False, default=24),
maintenance_emails=dict(type="list", required=False),
validate=dict(type="bool", required=False, default=False))
mutually_exclusive = [["host", "script"],
["port", "script"]]
required_if = [["method", "https", ["routing_type"]],
["method", "http", ["routing_type"]],
["method", "email", ["email"]],
["state", "maintenance_enabled", ["maintenance_duration", "maintenance_emails"]]]
super(NetAppESeriesAsup, self).__init__(ansible_options=ansible_options,
web_services_version="02.00.0000.0000",
mutually_exclusive=mutually_exclusive,
required_if=required_if,
supports_check_mode=True)
args = self.module.params
self.state = args["state"]
self.active = args["active"]
self.days = args["days"]
self.start = args["start"]
self.end = args["end"]
self.method = args["method"]
self.routing_type = args["routing_type"] if args["routing_type"] else "none"
self.proxy = args["proxy"]
self.email = args["email"]
self.maintenance_duration = args["maintenance_duration"]
self.maintenance_emails = args["maintenance_emails"]
self.validate = args["validate"]
if self.validate and self.email and "test_recipient" not in self.email.keys():
self.module.fail_json(msg="test_recipient must be provided for validating email delivery method. Array [%s]" % self.ssid)
self.check_mode = self.module.check_mode
if self.start >= self.end:
self.module.fail_json(msg="The value provided for the start time is invalid."
" It must be less than the end time.")
if self.start < 0 or self.start > 23:
self.module.fail_json(msg="The value provided for the start time is invalid. It must be between 0 and 23.")
else:
self.start = self.start * 60
if self.end < 1 or self.end > 24:
self.module.fail_json(msg="The value provided for the end time is invalid. It must be between 1 and 24.")
else:
self.end = min(self.end * 60, 1439)
if self.maintenance_duration < 1 or self.maintenance_duration > 72:
self.module.fail_json(msg="The maintenance duration must be equal to or between 1 and 72 hours.")
if not self.days:
self.days = self.DAYS_OPTIONS
# Check whether request needs to be forwarded on to the controller web services rest api.
self.url_path_prefix = ""
if not self.is_embedded() and self.ssid != "0" and self.ssid.lower() != "proxy":
self.url_path_prefix = "storage-systems/%s/forward/devmgr/v2/" % self.ssid
def get_configuration(self):
try:
rc, result = self.request(self.url_path_prefix + "device-asup")
if not (result["asupCapable"] and result["onDemandCapable"]):
self.module.fail_json(msg="ASUP is not supported on this device. Array Id [%s]." % self.ssid)
return result
except Exception as err:
self.module.fail_json(msg="Failed to retrieve ASUP configuration! Array Id [%s]. Error [%s]." % (self.ssid, to_native(err)))
def in_maintenance_mode(self):
"""Determine whether storage device is currently in maintenance mode."""
results = False
try:
rc, key_values = self.request(self.url_path_prefix + "key-values")
for key_value in key_values:
if key_value["key"] == "ansible_asup_maintenance_email_list":
if not self.maintenance_emails:
self.maintenance_emails = key_value["value"].split(",")
elif key_value["key"] == "ansible_asup_maintenance_stop_time":
if time.time() < float(key_value["value"]):
results = True
except Exception as error:
self.module.fail_json(msg="Failed to retrieve maintenance windows information! Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
return results
def update_configuration(self):
config = self.get_configuration()
update = False
body = dict()
# Build request body
if self.state == "enabled":
body = dict(asupEnabled=True)
if not config["asupEnabled"]:
update = True
if (config["onDemandEnabled"] and config["remoteDiagsEnabled"]) != self.active:
update = True
body.update(dict(onDemandEnabled=self.active,
remoteDiagsEnabled=self.active))
self.days.sort()
config["schedule"]["daysOfWeek"].sort()
body["schedule"] = dict(daysOfWeek=self.days,
dailyMinTime=self.start,
dailyMaxTime=self.end,
weeklyMinTime=self.start,
weeklyMaxTime=self.end)
if self.days != config["schedule"]["daysOfWeek"]:
update = True
if self.start != config["schedule"]["dailyMinTime"] or self.start != config["schedule"]["weeklyMinTime"]:
update = True
elif self.end != config["schedule"]["dailyMaxTime"] or self.end != config["schedule"]["weeklyMaxTime"]:
update = True
if self.method in ["https", "http"]:
if self.routing_type == "direct":
body["delivery"] = dict(method=self.method,
routingType="direct")
elif self.routing_type == "proxy":
body["delivery"] = dict(method=self.method,
proxyHost=self.proxy["host"],
proxyPort=self.proxy["port"],
routingType="proxyServer")
if "username" in self.proxy.keys():
body["delivery"].update({"proxyUserName": self.proxy["username"]})
if "password" in self.proxy.keys():
body["delivery"].update({"proxyPassword": self.proxy["password"]})
elif self.routing_type == "script":
body["delivery"] = dict(method=self.method,
proxyScript=self.proxy["script"],
routingType="proxyScript")
else:
body["delivery"] = dict(method="smtp",
mailRelayServer=self.email["server"],
mailSenderAddress=self.email["sender"],
routingType="none")
# Check whether changes are required.
if config["delivery"]["method"] != body["delivery"]["method"]:
update = True
elif config["delivery"]["method"] in ["https", "http"]:
if config["delivery"]["routingType"] != body["delivery"]["routingType"]:
update = True
elif config["delivery"]["routingType"] == "proxyServer":
if (config["delivery"]["proxyHost"] != body["delivery"]["proxyHost"] or
config["delivery"]["proxyPort"] != body["delivery"]["proxyPort"] or
config["delivery"]["proxyUserName"] != body["delivery"]["proxyUserName"] or
config["delivery"]["proxyPassword"] != body["delivery"]["proxyPassword"]):
update = True
elif config["delivery"]["routingType"] == "proxyScript":
if config["delivery"]["proxyScript"] != body["delivery"]["proxyScript"]:
update = True
elif (config["delivery"]["method"] == "smtp" and
(config["delivery"]["mailRelayServer"] != body["delivery"]["mailRelayServer"] or
config["delivery"]["mailSenderAddress"] != body["delivery"]["mailSenderAddress"])):
update = True
if self.in_maintenance_mode():
update = True
elif self.state == "disabled":
if config["asupEnabled"]:  # Disable ASUP when it is currently enabled.
body = dict(asupEnabled=False)
update = True
else:
if not config["asupEnabled"]:
self.module.fail_json(msg="AutoSupport must be enabled before enabling or disabling maintenance mode. Array [%s]." % self.ssid)
if self.in_maintenance_mode() or self.state == "maintenance_enabled":
update = True
# Apply required changes.
if update and not self.check_mode:
if self.state == "maintenance_enabled":
try:
rc, response = self.request(self.url_path_prefix + "device-asup/maintenance-window", method="POST",
data=dict(maintenanceWindowEnabled=True,
duration=self.maintenance_duration,
emailAddresses=self.maintenance_emails))
except Exception as error:
self.module.fail_json(msg="Failed to enable ASUP maintenance window. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
# Add maintenance information to the key-value store
try:
rc, response = self.request(self.url_path_prefix + "key-values/ansible_asup_maintenance_email_list", method="POST",
data=",".join(self.maintenance_emails))
rc, response = self.request(self.url_path_prefix + "key-values/ansible_asup_maintenance_stop_time", method="POST",
data=str(time.time() + 60 * 60 * self.maintenance_duration))
except Exception as error:
self.module.fail_json(msg="Failed to store maintenance information. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
elif self.state == "maintenance_disabled":
try:
rc, response = self.request(self.url_path_prefix + "device-asup/maintenance-window", method="POST",
data=dict(maintenanceWindowEnabled=False,
emailAddresses=self.maintenance_emails))
except Exception as error:
self.module.fail_json(msg="Failed to disable ASUP maintenance window. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
# Remove maintenance information from the key-value store
try:
rc, response = self.request(self.url_path_prefix + "key-values/ansible_asup_maintenance_email_list", method="DELETE")
rc, response = self.request(self.url_path_prefix + "key-values/ansible_asup_maintenance_stop_time", method="DELETE")
except Exception as error:
self.module.fail_json(msg="Failed to remove maintenance information. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
else:
if body["asupEnabled"] and self.validate:
validate_body = dict(delivery=body["delivery"])
if self.email:
validate_body["mailReplyAddress"] = self.email["test_recipient"]
try:
rc, response = self.request(self.url_path_prefix + "device-asup/verify-config", timeout=600, method="POST", data=validate_body)
except Exception as err:
self.module.fail_json(msg="Failed to validate ASUP configuration! Array Id [%s]. Error [%s]." % (self.ssid, to_native(err)))
try:
rc, response = self.request(self.url_path_prefix + "device-asup", method="POST", data=body)
# This is going to catch cases like a connection failure
except Exception as err:
self.module.fail_json(msg="Failed to change ASUP configuration! Array Id [%s]. Error [%s]." % (self.ssid, to_native(err)))
return update
def apply(self):
update = self.update_configuration()
cfg = self.get_configuration()
if update:
self.module.exit_json(msg="The ASUP settings have been updated.", changed=update, asup=cfg["asupEnabled"], active=cfg["onDemandEnabled"], cfg=cfg)
else:
self.module.exit_json(msg="No ASUP changes required.", changed=update, asup=cfg["asupEnabled"], active=cfg["onDemandEnabled"], cfg=cfg)
def main():
asup = NetAppESeriesAsup()
asup.apply()
if __name__ == "__main__":
main()
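The schedule handling in `NetAppESeriesAsup.__init__` converts whole-hour `start`/`end` bounds into the minutes-since-midnight values the ASUP API expects, capping 24:00 at 1439. A standalone sketch of that validation (hypothetical helper name; the module does this inline):

```python
def convert_schedule(start_hour, end_hour):
    """Convert whole-hour schedule bounds to minutes since midnight,
    mirroring the module's inline start/end validation."""
    if start_hour >= end_hour:
        raise ValueError("The start time must be less than the end time.")
    if not 0 <= start_hour <= 23:
        raise ValueError("The start time must be between 0 and 23.")
    if not 1 <= end_hour <= 24:
        raise ValueError("The end time must be between 1 and 24.")
    # The API rejects 1440, so 24:00 is capped at 23:59 (minute 1439).
    return start_hour * 60, min(end_hour * 60, 1439)
```

For example, a 5 PM to 8 PM window becomes `(1020, 1200)`, and the default full-day window `(0, 24)` becomes `(0, 1439)`.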


@@ -0,0 +1,200 @@
#!/usr/bin/python
# (c) 2020, NetApp, Inc
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = """
---
module: na_santricity_auditlog
short_description: NetApp E-Series manage audit-log configuration
description:
- This module allows an e-series storage system owner to set audit-log configuration parameters.
author: Nathan Swartz (@ndswartz)
extends_documentation_fragment:
- netapp_eseries.santricity.santricity.santricity_doc
options:
max_records:
description:
- The maximum number log messages audit-log will retain.
- Max records must be between and including 100 and 50000.
type: int
default: 50000
log_level:
description: Filters the log messages according to the specified log level selection.
choices:
- all
- writeOnly
type: str
default: writeOnly
full_policy:
description: Specifies what audit-log should do once the number of entries approach the record limit.
choices:
- overWrite
- preventSystemAccess
type: str
default: overWrite
threshold:
description:
- This is the memory full percent threshold that audit-log will start issuing warning messages.
- Percent range must be between and including 60 and 90.
type: int
default: 90
force:
description:
- Forces the audit-log configuration to delete log history when log message fullness causes an immediate
warning or full condition.
- Warning! This will cause any existing audit-log messages to be deleted.
- This is only applicable for I(full_policy=preventSystemAccess).
type: bool
default: false
notes:
- Check mode is supported.
- Use I(ssid="0") or I(ssid="proxy") to configure the SANtricity Web Services Proxy audit-log settings; otherwise the storage system's settings are configured.
"""
EXAMPLES = """
- name: Define audit-log to prevent system access if records exceed 50000 with warnings occurring at 60% capacity.
na_santricity_auditlog:
ssid: "1"
api_url: "https://192.168.1.100:8443/devmgr/v2"
api_username: "admin"
api_password: "adminpass"
validate_certs: true
max_records: 50000
log_level: all
full_policy: preventSystemAccess
threshold: 60
"""
RETURN = """
msg:
description: Success message
returned: on success
type: str
sample: The settings have been updated.
"""
import json
from ansible_collections.netapp_eseries.santricity.plugins.module_utils.santricity import NetAppESeriesModule
from ansible.module_utils._text import to_native
class NetAppESeriesAuditLog(NetAppESeriesModule):
"""Audit-log module configuration class."""
MAX_RECORDS = 50000
def __init__(self):
ansible_options = dict(max_records=dict(type="int", default=50000),
log_level=dict(type="str", default="writeOnly", choices=["all", "writeOnly"]),
full_policy=dict(type="str", default="overWrite", choices=["overWrite", "preventSystemAccess"]),
threshold=dict(type="int", default=90),
force=dict(type="bool", default=False))
super(NetAppESeriesAuditLog, self).__init__(ansible_options=ansible_options,
web_services_version="02.00.0000.0000",
supports_check_mode=True)
args = self.module.params
self.log_level = args["log_level"]
self.force = args["force"]
self.full_policy = args["full_policy"]
self.max_records = args["max_records"]
self.threshold = args["threshold"]
if self.max_records < 100 or self.max_records > self.MAX_RECORDS:
self.module.fail_json(msg="Audit-log max_records count must be between 100 and 50000: [%s]" % self.max_records)
if self.threshold < 60 or self.threshold > 90:
self.module.fail_json(msg="Audit-log percent threshold must be between 60 and 90: [%s]" % self.threshold)
# Append web services proxy forward end point.
self.url_path_prefix = ""
if not self.is_embedded() and self.ssid != "0" and self.ssid.lower() != "proxy":
self.url_path_prefix = "storage-systems/%s/forward/devmgr/v2/" % self.ssid
def get_configuration(self):
"""Retrieve the existing audit-log configurations.
:returns: dictionary containing current audit-log configuration
"""
try:
if self.is_proxy() and (self.ssid == "0" or self.ssid.lower() == "proxy"):
rc, data = self.request("audit-log/config")
else:
rc, data = self.request(self.url_path_prefix + "storage-systems/1/audit-log/config")
return data
except Exception as err:
self.module.fail_json(msg="Failed to retrieve the audit-log configuration! Array Id [%s]. Error [%s]." % (self.ssid, to_native(err)))
def build_configuration(self):
"""Build audit-log expected configuration.
:returns: Tuple containing update boolean value and dictionary of audit-log configuration
"""
config = self.get_configuration()
current = dict(auditLogMaxRecords=config["auditLogMaxRecords"],
auditLogLevel=config["auditLogLevel"],
auditLogFullPolicy=config["auditLogFullPolicy"],
auditLogWarningThresholdPct=config["auditLogWarningThresholdPct"])
body = dict(auditLogMaxRecords=self.max_records,
auditLogLevel=self.log_level,
auditLogFullPolicy=self.full_policy,
auditLogWarningThresholdPct=self.threshold)
update = current != body
return update, body
def delete_log_messages(self):
"""Delete all audit-log messages."""
try:
if self.is_proxy() and (self.ssid == "0" or self.ssid.lower() == "proxy"):
rc, result = self.request("audit-log?clearAll=True", method="DELETE")
else:
rc, result = self.request(self.url_path_prefix + "storage-systems/1/audit-log?clearAll=True", method="DELETE")
except Exception as err:
self.module.fail_json(msg="Failed to delete audit-log messages! Array Id [%s]. Error [%s]." % (self.ssid, to_native(err)))
def update_configuration(self, update=None, body=None, attempt_recovery=True):
"""Update audit-log configuration."""
if update is None or body is None:
update, body = self.build_configuration()
if update and not self.module.check_mode:
try:
if self.is_proxy() and (self.ssid == "0" or self.ssid.lower() == "proxy"):
rc, result = self.request("audit-log/config", data=json.dumps(body), method='POST', ignore_errors=True)
else:
rc, result = self.request(self.url_path_prefix + "storage-systems/1/audit-log/config",
data=json.dumps(body), method='POST', ignore_errors=True)
if rc == 422:
if self.force and attempt_recovery:
self.delete_log_messages()
update = self.update_configuration(update, body, False)
else:
self.module.fail_json(msg="Failed to update audit-log configuration! Array Id [%s]. Error [%s]." % (self.ssid, to_native(result)))
except Exception as error:
self.module.fail_json(msg="Failed to update audit-log configuration! Array Id [%s]. Error [%s]." % (self.ssid, to_native(error)))
return update
def update(self):
"""Update the audit-log configuration."""
update = self.update_configuration()
if update:
self.module.exit_json(msg="Audit-log update complete", changed=update)
else:
self.module.exit_json(msg="No audit-log changes required", changed=update)
def main():
auditlog = NetAppESeriesAuditLog()
auditlog.update()
if __name__ == "__main__":
main()
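The idempotency check in `build_configuration` reduces to comparing a projection of the current configuration against the desired settings. A minimal sketch of that pattern (hypothetical function name, plain dicts in place of the REST response):

```python
def change_required(current_config, desired):
    """Return True when any tracked audit-log field differs from the
    desired value; extra keys in the live config are ignored."""
    tracked = ("auditLogMaxRecords", "auditLogLevel",
               "auditLogFullPolicy", "auditLogWarningThresholdPct")
    # Project the live configuration down to only the fields we manage.
    current = {key: current_config[key] for key in tracked}
    return current != desired
```

This keeps the module from issuing a POST when the array already matches the requested state, which is what makes repeated playbook runs report `changed: false`.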


@@ -0,0 +1,351 @@
#!/usr/bin/python
# (c) 2020, NetApp, Inc
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = """
---
module: na_santricity_auth
short_description: NetApp E-Series set or update the password for a storage array device or SANtricity Web Services Proxy.
description:
- Sets or updates the password for a storage array device or SANtricity Web Services Proxy.
author:
- Nathan Swartz (@ndswartz)
extends_documentation_fragment:
- netapp_eseries.santricity.santricity.santricity_doc
options:
current_admin_password:
description:
- The current admin password.
- When making changes to the embedded web services' login passwords, api_password will be used and current_admin_password will be ignored.
- When making changes to the proxy web services' login passwords, api_password will be used and current_admin_password will be ignored.
- Only required when the password has been set and will be ignored if not set.
type: str
required: false
password:
description:
- The password you would like to set.
- Cannot be more than 30 characters.
type: str
required: false
user:
description:
- The local user account password to update.
- For systems prior to E2800, use admin to change the rw (system password).
- For systems prior to E2800, all choices except admin will be ignored.
type: str
choices: ["admin", "monitor", "support", "security", "storage"]
default: "admin"
required: false
minimum_password_length:
description:
- This option defines the minimum password length.
type: int
required: false
notes:
- Set I(ssid="0") or I(ssid="proxy") when attempting to change the password for the SANtricity Web Services Proxy.
- SANtricity Web Services Proxy storage password will be updated when changing the password on a managed storage system from the proxy; this is only true
when the storage system has been previously contacted.
"""
EXAMPLES = """
- name: Set the initial password
na_santricity_auth:
ssid: 1
api_url: https://192.168.1.100:8443/devmgr/v2
api_username: admin
api_password: adminpass
validate_certs: true
current_admin_password: currentadminpass
password: newpassword123
user: admin
"""
RETURN = """
msg:
description: Success message
returned: success
type: str
sample: "Password Updated Successfully"
"""
from ansible_collections.netapp_eseries.santricity.plugins.module_utils.santricity import NetAppESeriesModule
from ansible.module_utils._text import to_native
from time import sleep
class NetAppESeriesAuth(NetAppESeriesModule):
def __init__(self):
version = "02.00.0000.0000"
ansible_options = dict(current_admin_password=dict(type="str", required=False, no_log=True),
password=dict(type="str", required=False, no_log=True),
user=dict(type="str", choices=["admin", "monitor", "support", "security", "storage"], default="admin", required=False),
minimum_password_length=dict(type="int", required=False, no_log=True))
super(NetAppESeriesAuth, self).__init__(ansible_options=ansible_options, web_services_version=version, supports_check_mode=True)
args = self.module.params
self.current_admin_password = args["current_admin_password"]
self.password = args["password"]
self.user = args["user"]
self.minimum_password_length = args["minimum_password_length"]
self.DEFAULT_HEADERS.update({"x-netapp-password-validate-method": "none"})
self.is_admin_password_set = None
self.current_password_length_requirement = None
def minimum_password_length_change_required(self):
"""Retrieve the current storage array's global configuration."""
change_required = False
try:
if self.is_proxy():
if self.ssid == "0" or self.ssid.lower() == "proxy":
rc, system_info = self.request("local-users/info", force_basic_auth=False)
elif self.is_embedded_available():
rc, system_info = self.request("storage-systems/%s/forward/devmgr/v2/storage-systems/1/local-users/info" % self.ssid,
force_basic_auth=False)
else:
return False # legacy systems without embedded web services.
else:
rc, system_info = self.request("storage-systems/%s/local-users/info" % self.ssid, force_basic_auth=False)
except Exception as error:
self.module.fail_json(msg="Failed to determine minimum password length. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
self.is_admin_password_set = system_info["adminPasswordSet"]
if self.minimum_password_length is not None and self.minimum_password_length != system_info["minimumPasswordLength"]:
change_required = True
if (self.password is not None and ((change_required and self.minimum_password_length > len(self.password)) or
(not change_required and system_info["minimumPasswordLength"] > len(self.password)))):
self.module.fail_json(msg="Password does not meet the length requirement [%s]. Array Id [%s]." % (system_info["minimumPasswordLength"], self.ssid))
return change_required
def update_minimum_password_length(self):
"""Update automatic load balancing state."""
try:
if self.is_proxy():
if self.ssid == "0" or self.ssid.lower() == "proxy":
try:
if not self.is_admin_password_set:
self.creds["url_password"] = "admin"
rc, minimum_password_length = self.request("local-users/password-length", method="POST",
data={"minimumPasswordLength": self.minimum_password_length})
except Exception as error:
if not self.is_admin_password_set:
self.creds["url_password"] = ""
rc, minimum_password_length = self.request("local-users/password-length", method="POST",
data={"minimumPasswordLength": self.minimum_password_length})
elif self.is_embedded_available():
if not self.is_admin_password_set:
self.creds["url_password"] = ""
rc, minimum_password_length = self.request("storage-systems/%s/forward/devmgr/v2/storage-systems/1/local-users/password-length" % self.ssid,
method="POST", data={"minimumPasswordLength": self.minimum_password_length})
else:
if not self.is_admin_password_set:
self.creds["url_password"] = ""
rc, minimum_password_length = self.request("storage-systems/%s/local-users/password-length" % self.ssid, method="POST",
data={"minimumPasswordLength": self.minimum_password_length})
except Exception as error:
self.module.fail_json(msg="Failed to set minimum password length. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
def logout_system(self):
"""Ensure system is logged out. This is required because login test will always succeed if previously logged in."""
try:
if self.is_proxy():
if self.ssid == "0" or self.ssid.lower() == "proxy":
rc, system_info = self.request("utils/login", rest_api_path=self.DEFAULT_BASE_PATH, method="DELETE", force_basic_auth=False)
elif self.is_embedded_available():
rc, system_info = self.request("storage-systems/%s/forward/devmgr/utils/login" % self.ssid, method="DELETE", force_basic_auth=False)
else:
# Nothing to do for legacy systems without embedded web services.
pass
else:
rc, system_info = self.request("utils/login", rest_api_path=self.DEFAULT_BASE_PATH, method="DELETE", force_basic_auth=False)
except Exception as error:
self.module.fail_json(msg="Failed to log out of storage system [%s]. Error [%s]." % (self.ssid, to_native(error)))
def password_change_required(self):
"""Verify whether the current password is expected array password. Works only against embedded systems."""
if self.password is None:
return False
change_required = False
system_info = None
try:
if self.is_proxy():
if self.ssid == "0" or self.ssid.lower() == "proxy":
rc, system_info = self.request("local-users/info", force_basic_auth=False)
elif self.is_embedded_available():
rc, system_info = self.request("storage-systems/%s/forward/devmgr/v2/storage-systems/1/local-users/info" % self.ssid,
force_basic_auth=False)
else:
rc, response = self.request("storage-systems/%s/passwords" % self.ssid, ignore_errors=True)
system_info = {"minimumPasswordLength": 0, "adminPasswordSet": response["adminPasswordSet"]}
else:
rc, system_info = self.request("storage-systems/%s/local-users/info" % self.ssid, force_basic_auth=False)
except Exception as error:
self.module.fail_json(msg="Failed to retrieve information about storage system [%s]. Error [%s]." % (self.ssid, to_native(error)))
self.is_admin_password_set = system_info["adminPasswordSet"]
if not self.is_admin_password_set:
if self.user == "admin" and self.password != "":
change_required = True
# Determine whether user's password needs to be changed
else:
utils_login_used = False
self.logout_system() # This ensures that login test functions correctly. The query onlycheck=true does not work.
if self.is_proxy():
if self.ssid == "0" or self.ssid.lower() == "proxy":
utils_login_used = True
rc, response = self.request("utils/login?uid=%s&pwd=%s&xsrf=false&onlycheck=false" % (self.user, self.password),
rest_api_path=self.DEFAULT_BASE_PATH, log_request=False, ignore_errors=True, force_basic_auth=False)
# elif self.is_embedded_available():
# utils_login_used = True
# rc, response = self.request("storage-systems/%s/forward/devmgr/utils/login?uid=%s&pwd=%s&xsrf=false&onlycheck=false"
# % (self.ssid, self.user, self.password), log_request=False, ignore_errors=True, force_basic_auth=False)
else:
if self.user == "admin":
rc, response = self.request("storage-systems/%s/stored-password/validate" % self.ssid, method="POST", log_request=False,
ignore_errors=True, data={"password": self.password})
if rc == 200:
change_required = not response["isValidPassword"]
elif rc == 404: # endpoint did not exist, old proxy version
if self.is_web_services_version_met("04.10.0000.0000"):
self.module.fail_json(msg="For platforms before E2800 use SANtricity Web Services Proxy 4.1 or later! Array Id [%s]." % self.ssid)
self.module.fail_json(msg="Failed to validate stored password! Array Id [%s]." % self.ssid)
else:
self.module.fail_json(msg="Failed to validate stored password! Array Id [%s]." % self.ssid)
else:
self.module.fail_json(msg="Role based login not available! Only storage system password can be set for storage systems prior to E2800."
" Array Id [%s]." % self.ssid)
else:
utils_login_used = True
rc, response = self.request("utils/login?uid=%s&pwd=%s&xsrf=false&onlycheck=false" % (self.user, self.password),
rest_api_path=self.DEFAULT_BASE_PATH, log_request=False, ignore_errors=True, force_basic_auth=False)
# Check return codes to determine whether a change is required
if utils_login_used:
if rc == 401:
change_required = True
elif rc == 422:
self.module.fail_json(msg="SAML enabled! SAML disables default role based login. Array [%s]" % self.ssid)
return change_required
def set_array_admin_password(self):
"""Set the array's admin password."""
if self.is_proxy():
# Update proxy's local users
if self.ssid == "0" or self.ssid.lower() == "proxy":
self.creds["url_password"] = "admin"
try:
body = {"currentAdminPassword": "", "updates": {"userName": "admin", "newPassword": self.password}}
rc, proxy = self.request("local-users", method="POST", data=body)
except Exception as error:
self.creds["url_password"] = ""
try:
body = {"currentAdminPassword": "", "updates": {"userName": "admin", "newPassword": self.password}}
rc, proxy = self.request("local-users", method="POST", data=body)
except Exception as error:
self.module.fail_json(msg="Failed to set proxy's admin password. Error [%s]." % to_native(error))
self.creds["url_password"] = self.password
# Update password using the passwords endpoint; this will also update the stored password
else:
try:
body = {"currentAdminPassword": "", "newPassword": self.password, "adminPassword": True}
rc, storage_system = self.request("storage-systems/%s/passwords" % self.ssid, method="POST", data=body)
except Exception as error:
self.module.fail_json(msg="Failed to set storage system's admin password. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
# Update embedded local users
else:
self.creds["url_password"] = ""
try:
body = {"currentAdminPassword": "", "updates": {"userName": "admin", "newPassword": self.password}}
rc, proxy = self.request("storage-systems/%s/local-users" % self.ssid, method="POST", data=body)
except Exception as error:
self.module.fail_json(msg="Failed to set embedded storage system's admin password. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
self.creds["url_password"] = self.password
def set_array_password(self):
"""Set the array password."""
if not self.is_admin_password_set:
self.module.fail_json(msg="Admin password not set! Set admin password before changing non-admin user passwords. Array [%s]." % self.ssid)
if self.is_proxy():
# Update proxy's local users
if self.ssid == "0" or self.ssid.lower() == "proxy":
try:
body = {"currentAdminPassword": self.creds["url_password"], "updates": {"userName": self.user, "newPassword": self.password}}
rc, proxy = self.request("local-users", method="POST", data=body)
except Exception as error:
self.module.fail_json(msg="Failed to set proxy password. Error [%s]." % to_native(error))
# Update embedded admin password via proxy passwords endpoint to include updating proxy/unified manager
elif self.user == "admin":
try:
body = {"adminPassword": True, "currentAdminPassword": self.current_admin_password, "newPassword": self.password}
rc, proxy = self.request("storage-systems/%s/passwords" % self.ssid, method="POST", data=body)
except Exception as error:
self.module.fail_json(msg="Failed to set embedded user password. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
# Update embedded non-admin passwords via proxy forward endpoint.
elif self.is_embedded_available():
try:
body = {"currentAdminPassword": self.current_admin_password, "updates": {"userName": self.user, "newPassword": self.password}}
rc, proxy = self.request("storage-systems/%s/forward/devmgr/v2/storage-systems/1/local-users" % self.ssid, method="POST", data=body)
except Exception as error:
self.module.fail_json(msg="Failed to set embedded user password. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
# Update embedded local users
else:
try:
body = {"currentAdminPassword": self.creds["url_password"], "updates": {"userName": self.user, "newPassword": self.password}}
rc, proxy = self.request("storage-systems/%s/local-users" % self.ssid, method="POST", data=body)
except Exception as error:
self.module.fail_json(msg="Failed to set embedded user password. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
def apply(self):
"""Apply any required changes."""
password_change_required = self.password_change_required()
minimum_password_length_change_required = self.minimum_password_length_change_required()
change_required = password_change_required or minimum_password_length_change_required
if change_required and not self.module.check_mode:
if minimum_password_length_change_required:
self.update_minimum_password_length()
if password_change_required:
if not self.is_admin_password_set:
self.set_array_admin_password()
else:
self.set_array_password()
if password_change_required and minimum_password_length_change_required:
self.module.exit_json(msg="'%s' password and required password length has been changed. Array [%s]."
% (self.user, self.ssid), changed=change_required)
elif password_change_required:
self.module.exit_json(msg="'%s' password has been changed. Array [%s]." % (self.user, self.ssid), changed=change_required)
elif minimum_password_length_change_required:
self.module.exit_json(msg="Required password length has been changed. Array [%s]." % self.ssid, changed=change_required)
self.module.exit_json(msg="No changes have been made. Array [%s]." % self.ssid, changed=change_required)
def main():
auth = NetAppESeriesAuth()
auth.apply()
if __name__ == "__main__":
main()
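The `apply()` method above follows the standard Ansible idempotency pattern: detect whether a change is required, mutate only when not in check mode, and always report `changed`. A minimal stand-alone sketch of that pattern (the `SimpleModule` class below is a stand-in for `AnsibleModule`, not part of the collection):

```python
# Sketch of the detect-change / respect-check-mode / report-changed pattern
# used by apply() above. SimpleModule is a hypothetical stand-in for
# AnsibleModule so the idiom can be shown without Ansible installed.

class SimpleModule:
    def __init__(self, check_mode=False):
        self.check_mode = check_mode
        self.result = None

    def exit_json(self, **kwargs):
        # Real AnsibleModule prints JSON and exits; here we just record it.
        self.result = kwargs


def apply_password_change(module, current, desired, setter):
    """Apply a password change only when needed and not in check mode."""
    change_required = current != desired
    if change_required and not module.check_mode:
        setter(desired)
    module.exit_json(changed=change_required)


state = {"password": "old"}
module = SimpleModule(check_mode=True)
apply_password_change(module, state["password"], "new",
                      lambda value: state.update(password=value))
# In check mode the module reports changed=True but leaves state untouched.
```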


@@ -0,0 +1,278 @@
#!/usr/bin/python
# (c) 2020, NetApp, Inc
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = """
module: na_santricity_client_certificate
short_description: NetApp E-Series manage remote server certificates.
description: Manage NetApp E-Series storage array's remote server certificates.
author: Nathan Swartz (@ndswartz)
extends_documentation_fragment:
- netapp_eseries.santricity.santricity.santricity_doc
options:
certificates:
description:
- List of certificate files
- Each item must include the path to the file
type: list
required: false
remove_unspecified_user_certificates:
description:
- Whether to remove user-installed client certificates that are not specified in I(certificates).
type: bool
default: false
required: false
reload_certificates:
description:
- Whether to reload certificates when certificates have been added or removed.
- Certificates will not be available or removed until the servers have been reloaded.
type: bool
default: true
required: false
notes:
- Set I(ssid="0") or I(ssid="proxy") to specifically reference the SANtricity Web Services Proxy.
requirements:
- cryptography
"""
EXAMPLES = """
- name: Upload certificates
na_santricity_client_certificate:
ssid: 1
api_url: https://192.168.1.100:8443/devmgr/v2
api_username: admin
api_password: adminpass
certificates: ["/path/to/certificates.crt", "/path/to/another_certificate.crt"]
- name: Remove all certificates
na_santricity_client_certificate:
ssid: 1
api_url: https://192.168.1.100:8443/devmgr/v2
api_username: admin
api_password: adminpass
"""
RETURN = """
changed:
description: Whether changes have been made.
type: bool
returned: always
sample: true
add_certificates:
description: Any SSL certificates that were added.
type: list
returned: always
sample: ["added_certificate.crt"]
removed_certificates:
description: Any SSL certificates that were removed.
type: list
returned: always
sample: ["removed_certificate.crt"]
"""
import binascii
import os
import re
from time import sleep
from datetime import datetime
from ansible_collections.netapp_eseries.santricity.plugins.module_utils.santricity import NetAppESeriesModule, create_multipart_formdata
from ansible.module_utils._text import to_native
try:
from cryptography import x509
from cryptography.hazmat.backends import default_backend
except ImportError:
HAS_CRYPTOGRAPHY = False
else:
HAS_CRYPTOGRAPHY = True
class NetAppESeriesClientCertificate(NetAppESeriesModule):
RELOAD_TIMEOUT_SEC = 3 * 60
def __init__(self):
ansible_options = dict(certificates=dict(type="list", required=False),
remove_unspecified_user_certificates=dict(type="bool", default=False, required=False),
reload_certificates=dict(type="bool", default=True, required=False))
super(NetAppESeriesClientCertificate, self).__init__(ansible_options=ansible_options,
web_services_version="02.00.0000.0000",
supports_check_mode=True)
args = self.module.params
self.certificates = args["certificates"] if args["certificates"] else []
self.remove_unspecified_user_certificates = args["remove_unspecified_user_certificates"]
self.apply_reload_certificates = args["reload_certificates"]
# Check whether request needs to be forwarded on to the controller web services rest api.
self.url_path_prefix = ""
if self.is_proxy() and self.ssid != "0" and self.ssid.lower() != "proxy":
self.url_path_prefix = "storage-systems/%s/forward/devmgr/v2/" % self.ssid
self.remove_certificates = list()
self.add_certificates = list()
self.certificate_fingerprint_cache = None
self.certificate_info_cache = None
def certificate_info(self, path):
"""Determine the pertinent certificate information: alias, subjectDN, issuerDN, start and expire.
Note: Use only when certificate/remote-server endpoints do not exist. Used to identify certificates through
the sslconfig/ca endpoint.
"""
certificate = None
with open(path, "rb") as fh:
data = fh.read()
try:
certificate = x509.load_pem_x509_certificate(data, default_backend())
except Exception as error:
try:
certificate = x509.load_der_x509_certificate(data, default_backend())
except Exception as error:
self.module.fail_json(msg="Failed to load certificate. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
if not isinstance(certificate, x509.Certificate):
self.module.fail_json(msg="Failed to open certificate file or invalid certificate object type. Array [%s]." % self.ssid)
return dict(start_date=certificate.not_valid_before,
expire_date=certificate.not_valid_after,
subject_dn=[attr.value for attr in certificate.subject],
issuer_dn=[attr.value for attr in certificate.issuer])
def certificate_fingerprint(self, path):
"""Load x509 certificate that is either encoded DER or PEM encoding and return the certificate fingerprint."""
certificate = None
with open(path, "rb") as fh:
data = fh.read()
try:
certificate = x509.load_pem_x509_certificate(data, default_backend())
except Exception as error:
try:
certificate = x509.load_der_x509_certificate(data, default_backend())
except Exception as error:
self.module.fail_json(msg="Failed to determine certificate fingerprint. File [%s]. Array [%s]. Error [%s]."
% (path, self.ssid, to_native(error)))
return binascii.hexlify(certificate.fingerprint(certificate.signature_hash_algorithm)).decode("utf-8")
def determine_changes(self):
"""Search for remote server certificates that match by alias or fingerprint."""
rc, current_certificates = self.request(self.url_path_prefix + "certificates/remote-server", ignore_errors=True)
if rc == 404: # system down or endpoint does not exist
rc, current_certificates = self.request(self.url_path_prefix + "sslconfig/ca?useTruststore=true", ignore_errors=True)
if rc > 299:
self.module.fail_json(msg="Failed to retrieve remote server certificates. Array [%s]." % self.ssid)
user_installed_certificates = [certificate for certificate in current_certificates if certificate["isUserInstalled"]]
existing_certificates = []
for path in self.certificates:
for current_certificate in user_installed_certificates:
info = self.certificate_info(path)
tmp = dict(subject_dn=[re.sub(r".*=", "", item) for item in current_certificate["subjectDN"].split(", ")],
issuer_dn=[re.sub(r".*=", "", item) for item in current_certificate["issuerDN"].split(", ")],
start_date=datetime.strptime(current_certificate["start"].split(".")[0], "%Y-%m-%dT%H:%M:%S"),
expire_date=datetime.strptime(current_certificate["expire"].split(".")[0], "%Y-%m-%dT%H:%M:%S"))
if (all([attr in info["subject_dn"] for attr in tmp["subject_dn"]]) and
all([attr in info["issuer_dn"] for attr in tmp["issuer_dn"]]) and
tmp["start_date"] == info["start_date"] and
tmp["expire_date"] == info["expire_date"]):
existing_certificates.append(current_certificate)
break
else:
self.add_certificates.append(path)
if self.remove_unspecified_user_certificates:
self.remove_certificates = [certificate for certificate in user_installed_certificates if certificate not in existing_certificates]
elif rc > 299:
self.module.fail_json(msg="Failed to retrieve remote server certificates. Array [%s]." % self.ssid)
else:
user_installed_certificates = [certificate for certificate in current_certificates if certificate["isUserInstalled"]]
existing_certificates = []
for path in self.certificates:
fingerprint = self.certificate_fingerprint(path)
for current_certificate in user_installed_certificates:
if current_certificate["sha256Fingerprint"] == fingerprint or current_certificate["shaFingerprint"] == fingerprint:
existing_certificates.append(current_certificate)
break
else:
self.add_certificates.append(path)
if self.remove_unspecified_user_certificates:
self.remove_certificates = [certificate for certificate in user_installed_certificates if certificate not in existing_certificates]
def upload_certificate(self, path):
"""Add or update remote server certificate to the storage array."""
file_name = os.path.basename(path)
headers, data = create_multipart_formdata(files=[("file", file_name, path)])
rc, resp = self.request(self.url_path_prefix + "certificates/remote-server", method="POST", headers=headers, data=data, ignore_errors=True)
if rc == 404:
rc, resp = self.request(self.url_path_prefix + "sslconfig/ca?useTruststore=true", method="POST", headers=headers, data=data, ignore_errors=True)
if rc > 299:
self.module.fail_json(msg="Failed to upload certificate. Array [%s]. Error [%s, %s]." % (self.ssid, rc, resp))
def delete_certificate(self, info):
"""Delete existing remote server certificate in the storage array truststore."""
rc, resp = self.request(self.url_path_prefix + "certificates/remote-server/%s" % info["alias"], method="DELETE", ignore_errors=True)
if rc == 404:
rc, resp = self.request(self.url_path_prefix + "sslconfig/ca/%s?useTruststore=true" % info["alias"], method="DELETE", ignore_errors=True)
if rc > 204:
self.module.fail_json(msg="Failed to delete certificate. Alias [%s]. Array [%s]. Error [%s, %s]." % (info["alias"], self.ssid, rc, resp))
def reload_certificates(self):
"""Reload certificates on both controllers."""
rc, resp = self.request(self.url_path_prefix + "certificates/reload?reloadBoth=true", method="POST", ignore_errors=True)
if rc == 404:
rc, resp = self.request(self.url_path_prefix + "sslconfig/reload?reloadBoth=true", method="POST", ignore_errors=True)
if rc > 202:
self.module.fail_json(msg="Failed to initiate certificate reload on both controllers! Array [%s]." % self.ssid)
# Wait for controller to be online again.
for retry in range(int(self.RELOAD_TIMEOUT_SEC / 3)):
rc, current_certificates = self.request(self.url_path_prefix + "certificates/remote-server", ignore_errors=True)
if rc == 404: # system down or endpoint does not exist
rc, current_certificates = self.request(self.url_path_prefix + "sslconfig/ca?useTruststore=true", ignore_errors=True)
if rc < 300:
break
sleep(3)
else:
self.module.fail_json(msg="Failed to retrieve server certificates. Array [%s]." % self.ssid)
def apply(self):
"""Apply state changes to the storage array's truststore."""
changed = False
self.determine_changes()
if self.remove_certificates or self.add_certificates:
changed = True
if changed and not self.module.check_mode:
for info in self.remove_certificates:
self.delete_certificate(info)
for path in self.add_certificates:
self.upload_certificate(path)
if self.apply_reload_certificates:
self.reload_certificates()
self.module.exit_json(changed=changed, removed_certificates=self.remove_certificates, add_certificates=self.add_certificates)
def main():
client_certs = NetAppESeriesClientCertificate()
client_certs.apply()
if __name__ == "__main__":
main()
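The fingerprint compared in `determine_changes()` is, conceptually, just a hash of the certificate's DER encoding (that is what `cryptography`'s `certificate.fingerprint(...)` computes). A stdlib-only sketch of that idea, stripping the PEM armor, base64-decoding to DER, and hex-encoding a SHA-256 digest; the payload below is a stand-in blob, not a real certificate:

```python
# Stdlib-only sketch of a PEM -> DER -> SHA-256 fingerprint, mirroring
# what certificate_fingerprint() above gets from the cryptography library.
# The base64 payload is a placeholder, not a real X.509 certificate.
import base64
import hashlib


def pem_fingerprint(pem_text):
    """Return the SHA-256 hex fingerprint of the DER bytes in a PEM blob."""
    # Drop the -----BEGIN/END----- armor lines, keep the base64 body.
    body = "".join(line for line in pem_text.splitlines()
                   if line and not line.startswith("-----"))
    der = base64.b64decode(body)
    return hashlib.sha256(der).hexdigest()


fake_der = b"stand-in DER payload"
fake_pem = ("-----BEGIN CERTIFICATE-----\n"
            + base64.b64encode(fake_der).decode("ascii")
            + "\n-----END CERTIFICATE-----\n")
```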


@@ -0,0 +1,332 @@
#!/usr/bin/python
# (c) 2020, NetApp, Inc
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = """
---
module: na_santricity_discover
short_description: NetApp E-Series discover E-Series storage systems
description: Module searches a subnet range and returns any available E-Series storage systems.
author: Nathan Swartz (@ndswartz)
options:
subnet_mask:
description:
- This is the IPv4 search range for discovering E-Series storage arrays.
- IPv4 subnet mask specified in CIDR form. Example 192.168.1.0/24 would search the range 192.168.1.0 to 192.168.1.255.
- Be sure to include all management paths in the search range.
type: str
required: true
ports:
description:
- This option specifies which ports are tested during the discovery process.
- The first usable port will be used in the returned API url.
type: list
default: [8443]
required: false
proxy_url:
description:
- Web Services Proxy REST API URL. Example https://192.168.1.100:8443/devmgr/v2/
type: str
required: false
proxy_username:
description:
- Web Services Proxy username
type: str
required: false
proxy_password:
description:
- Web Services Proxy user password
type: str
required: false
proxy_validate_certs:
description:
- Whether to validate the Web Services Proxy SSL certificate
type: bool
default: true
required: false
prefer_embedded:
description:
- Give preference to Web Services Embedded when an option exists for both Web Services Proxy and Embedded.
- Web Services Proxy will be utilized when available by default.
type: bool
default: false
required: false
notes:
- Only available for platforms E2800 or later (SANtricity Web Services Embedded REST API must be available).
- All E-Series storage systems with SANtricity version 11.62 or later will be discovered.
- Only E-Series storage systems without a set admin password running SANtricity versions prior to 11.62 will be discovered.
- Use SANtricity Web Services Proxy to discover all systems regardless of SANtricity version or password.
requirements:
- ipaddress
"""
EXAMPLES = """
- name: Discover all E-Series storage systems on the network.
na_santricity_discover:
subnet_mask: 192.168.1.0/24
"""
RETURN = """
systems_found:
description: Discovered storage systems keyed by chassis serial number.
returned: on success
type: dict
sample: '{"012341234123": {
"addresses": ["192.168.1.184", "192.168.1.185"],
"api_urls": ["https://192.168.1.184:8443/devmgr/v2/", "https://192.168.1.185:8443/devmgr/v2/"],
"label": "ExampleArray01",
"proxy_ssid": "",
"proxy_required": false},
"012341234567": {
"addresses": ["192.168.1.23", "192.168.1.24"],
"api_urls": ["https://192.168.1.100:8443/devmgr/v2/"],
"label": "ExampleArray02",
"proxy_ssid": "array_ssid",
"proxy_required": true}}'
"""
import json
import multiprocessing
import threading
from time import sleep
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.netapp_eseries.santricity.plugins.module_utils.santricity import request
from ansible.module_utils._text import to_native
try:
import ipaddress
except ImportError:
HAS_IPADDRESS = False
else:
HAS_IPADDRESS = True
try:
import urlparse
except ImportError:
import urllib.parse as urlparse
class NetAppESeriesDiscover:
"""Discover E-Series storage systems."""
MAX_THREAD_POOL_SIZE = 256
CPU_THREAD_MULTIPLE = 32
SEARCH_TIMEOUT = 30
DEFAULT_CONNECTION_TIMEOUT_SEC = 30
DEFAULT_DISCOVERY_TIMEOUT_SEC = 300
def __init__(self):
ansible_options = dict(subnet_mask=dict(type="str", required=True),
ports=dict(type="list", required=False, default=[8443]),
proxy_url=dict(type="str", required=False),
proxy_username=dict(type="str", required=False),
proxy_password=dict(type="str", required=False, no_log=True),
proxy_validate_certs=dict(type="bool", default=True, required=False),
prefer_embedded=dict(type="bool", default=False, required=False))
required_together = [["proxy_url", "proxy_username", "proxy_password"]]
self.module = AnsibleModule(argument_spec=ansible_options, required_together=required_together)
args = self.module.params
self.subnet_mask = args["subnet_mask"]
self.prefer_embedded = args["prefer_embedded"]
self.ports = []
self.proxy_url = args["proxy_url"]
if args["proxy_url"]:
parsed_url = list(urlparse.urlparse(args["proxy_url"]))
parsed_url[2] = "/devmgr/utils/about"
self.proxy_about_url = urlparse.urlunparse(parsed_url)
parsed_url[2] = "/devmgr/v2/"
self.proxy_url = urlparse.urlunparse(parsed_url)
self.proxy_username = args["proxy_username"]
self.proxy_password = args["proxy_password"]
self.proxy_validate_certs = args["proxy_validate_certs"]
for port in args["ports"]:
if str(port).isdigit() and 0 < int(port) < 2 ** 16:
self.ports.append(str(port))
else:
self.module.fail_json(msg="Invalid port! Ports must be integers between 1 and 65535.")
self.systems_found = {}
def check_ip_address(self, systems_found, address):
"""Determine whether an E-Series storage system is available at a specific IP address."""
for port in self.ports:
if port == "8080":
url = "http://%s:%s/" % (address, port)
else:
url = "https://%s:%s/" % (address, port)
try:
rc, about = request(url + "devmgr/v2/storage-systems/1/about", validate_certs=False, force_basic_auth=False, ignore_errors=True)
if about["serialNumber"] in systems_found:
systems_found[about["serialNumber"]]["api_urls"].append(url)
else:
systems_found.update({about["serialNumber"]: {"api_urls": [url], "label": about["name"],
"addresses": [], "proxy_ssid": "", "proxy_required": False}})
break
except Exception as error:
try:
rc, sa_data = request(url + "devmgr/v2/storage-systems/1/symbol/getSAData", validate_certs=False, force_basic_auth=False,
ignore_errors=True)
if rc == 401: # Unauthorized
self.module.warn("Received 401 (Unauthorized); falling back to discovery of storage systems without a set"
" admin password, such as newly deployed systems. Address [%s]." % address)
# Fail over and discover any storage system without a set admin password. This will cover newly deployed systems.
rc, graph = request(url + "graph", validate_certs=False, url_username="admin", url_password="", timeout=self.SEARCH_TIMEOUT)
sa_data = graph["sa"]["saData"]
if sa_data["chassisSerialNumber"] in systems_found:
systems_found[sa_data["chassisSerialNumber"]]["api_urls"].append(url)
else:
systems_found.update({sa_data["chassisSerialNumber"]: {"api_urls": [url], "label": sa_data["storageArrayLabel"],
"addresses": [], "proxy_ssid": "", "proxy_required": False}})
break
except Exception as error:
pass
def no_proxy_discover(self):
"""Discover E-Series storage systems using embedded web services."""
thread_pool_size = min(multiprocessing.cpu_count() * self.CPU_THREAD_MULTIPLE, self.MAX_THREAD_POOL_SIZE)
subnet = list(ipaddress.ip_network(u"%s" % self.subnet_mask))
thread_pool = []
search_count = len(subnet)
for start in range(0, search_count, thread_pool_size):
end = search_count if (search_count - start) < thread_pool_size else start + thread_pool_size
for address in subnet[start:end]:
thread = threading.Thread(target=self.check_ip_address, args=(self.systems_found, address))
thread_pool.append(thread)
thread.start()
for thread in thread_pool:
thread.join()
def verify_proxy_service(self):
"""Verify proxy url points to a web services proxy."""
try:
rc, about = request(self.proxy_about_url, validate_certs=self.proxy_validate_certs)
if not about["runningAsProxy"]:
self.module.fail_json(msg="Web Services is not running as a proxy!")
except Exception as error:
self.module.fail_json(msg="Proxy is not available! Check proxy_url. Error [%s]." % to_native(error))
def test_systems_found(self, systems_found, serial, label, addresses):
"""Verify and build api urls."""
api_urls = []
for address in addresses:
for port in self.ports:
if port == "8080":
url = "http://%s:%s/devmgr/" % (address, port)
else:
url = "https://%s:%s/devmgr/" % (address, port)
try:
rc, response = request(url + "utils/about", validate_certs=False, timeout=self.SEARCH_TIMEOUT)
api_urls.append(url + "v2/")
break
except Exception as error:
pass
systems_found.update({serial: {"api_urls": api_urls,
"label": label,
"addresses": addresses,
"proxy_ssid": "",
"proxy_required": False}})
def proxy_discover(self):
"""Search for arrays by chassis serial number using the Web Services Proxy."""
self.verify_proxy_service()
subnet = ipaddress.ip_network(u"%s" % self.subnet_mask)
try:
rc, request_id = request(self.proxy_url + "discovery", method="POST", validate_certs=self.proxy_validate_certs,
force_basic_auth=True, url_username=self.proxy_username, url_password=self.proxy_password,
data=json.dumps({"startIP": str(subnet[0]), "endIP": str(subnet[-1]),
"connectionTimeout": self.DEFAULT_CONNECTION_TIMEOUT_SEC}))
# Wait for discover to complete
try:
for iteration in range(self.DEFAULT_DISCOVERY_TIMEOUT_SEC):
rc, discovered_systems = request(self.proxy_url + "discovery?requestId=%s" % request_id["requestId"],
validate_certs=self.proxy_validate_certs,
force_basic_auth=True, url_username=self.proxy_username, url_password=self.proxy_password)
if not discovered_systems["discoverProcessRunning"]:
thread_pool = []
for discovered_system in discovered_systems["storageSystems"]:
addresses = []
for controller in discovered_system["controllers"]:
addresses.extend(controller["ipAddresses"])
# Storage systems with embedded web services.
if "https" in discovered_system["supportedManagementPorts"] and self.prefer_embedded:
thread = threading.Thread(target=self.test_systems_found,
args=(self.systems_found, discovered_system["serialNumber"], discovered_system["label"], addresses))
thread_pool.append(thread)
thread.start()
# Storage systems without embedded web services.
else:
self.systems_found.update({discovered_system["serialNumber"]: {"api_urls": [self.proxy_url],
"label": discovered_system["label"],
"addresses": addresses,
"proxy_ssid": "",
"proxy_required": True}})
for thread in thread_pool:
thread.join()
break
sleep(1)
else:
self.module.fail_json(msg="Timeout waiting for array discovery process. Subnet [%s]" % self.subnet_mask)
except Exception as error:
self.module.fail_json(msg="Failed to get the discovery results. Error [%s]." % to_native(error))
except Exception as error:
self.module.fail_json(msg="Failed to initiate array discovery. Error [%s]." % to_native(error))
def update_proxy_with_proxy_ssid(self):
"""Determine the current proxy SSID for all discovered storage systems that require the proxy."""
# Retrieve all storage systems that have been added to the proxy.
systems = []
try:
rc, systems = request(self.proxy_url + "storage-systems", validate_certs=self.proxy_validate_certs,
force_basic_auth=True, url_username=self.proxy_username, url_password=self.proxy_password)
except Exception as error:
self.module.fail_json(msg="Failed to retrieve storage systems added to Web Services Proxy. Error [%s]." % to_native(error))
for system_key, system_info in self.systems_found.items():
if self.systems_found[system_key]["proxy_required"]:
for system in systems:
if system_key == system["chassisSerialNumber"]:
self.systems_found[system_key]["proxy_ssid"] = system["id"]
def discover(self):
"""Discover E-Series storage systems."""
missing_packages = []
if not HAS_IPADDRESS:
missing_packages.append("ipaddress")
if missing_packages:
self.module.fail_json(msg="Python packages are missing! Packages [%s]." % ", ".join(missing_packages))
if self.proxy_url:
self.proxy_discover()
self.update_proxy_with_proxy_ssid()
else:
self.no_proxy_discover()
self.module.exit_json(msg="Discover process complete.", systems_found=self.systems_found, changed=False)
def main():
discover = NetAppESeriesDiscover()
discover.discover()
if __name__ == "__main__":
main()
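`no_proxy_discover()` above enumerates every address in the subnet and probes them in batches sized to the host's CPU count. A stdlib-only sketch of that batching, joining each batch before starting the next (the apparent intent of the chunking), with a recording worker standing in for `check_ip_address()`:

```python
# Sketch of subnet enumeration plus batched worker threads, mirroring
# no_proxy_discover() above. record() is a stand-in for the real probe.
import ipaddress
import threading


def scan_subnet(cidr, worker, batch_size=4):
    """Run worker(address) across a subnet in fixed-size thread batches."""
    addresses = [str(ip) for ip in ipaddress.ip_network(u"%s" % cidr)]
    for start in range(0, len(addresses), batch_size):
        threads = [threading.Thread(target=worker, args=(address,))
                   for address in addresses[start:start + batch_size]]
        for thread in threads:
            thread.start()
        for thread in threads:
            thread.join()
    return addresses


seen = []
lock = threading.Lock()


def record(address):
    # Stand-in worker: just remember which addresses were probed.
    with lock:
        seen.append(address)


scanned = scan_subnet("192.168.1.0/29", record, batch_size=4)
# A /29 yields 8 addresses, probed in two batches of four.
```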


@@ -0,0 +1,209 @@
#!/usr/bin/python
# (c) 2020, NetApp, Inc
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = """
---
module: na_santricity_drive_firmware
short_description: NetApp E-Series manage drive firmware
description:
- Ensure drive firmware version is activated on specified drive model.
author:
- Nathan Swartz (@ndswartz)
extends_documentation_fragment:
- netapp_eseries.santricity.santricity.santricity_doc
options:
firmware:
description:
- List of drive firmware file paths.
- NetApp E-Series drives require special firmware which can be downloaded from https://mysupport.netapp.com/NOW/download/tools/diskfw_eseries/
type: list
required: True
wait_for_completion:
description:
- This flag will cause the module to wait for any upgrade actions to complete.
type: bool
default: false
ignore_inaccessible_drives:
description:
- This flag will determine whether drive firmware upgrade should fail if any affected drives are inaccessible.
type: bool
default: false
upgrade_drives_online:
description:
- This flag will determine whether drive firmware can be upgraded while drives are accepting I/O.
- When I(upgrade_drives_online=false), stop all I/O before running the task.
type: bool
default: true
"""
EXAMPLES = """
- name: Ensure correct firmware versions
na_santricity_drive_firmware:
ssid: "1"
api_url: "https://192.168.1.100:8443/devmgr/v2"
api_username: "admin"
api_password: "adminpass"
validate_certs: true
firmware: "path/to/drive_firmware"
wait_for_completion: true
ignore_inaccessible_drives: false
"""
RETURN = """
msg:
description: Whether any drive firmware was upgraded and whether it is in progress.
type: str
returned: always
sample:
{ changed: True, upgrade_in_process: True }
"""
import os
import re
from time import sleep
from ansible_collections.netapp_eseries.santricity.plugins.module_utils.santricity import NetAppESeriesModule, create_multipart_formdata, request
from ansible.module_utils._text import to_native
class NetAppESeriesDriveFirmware(NetAppESeriesModule):
WAIT_TIMEOUT_SEC = 60 * 15
def __init__(self):
ansible_options = dict(
firmware=dict(type="list", required=True),
wait_for_completion=dict(type="bool", default=False),
ignore_inaccessible_drives=dict(type="bool", default=False),
upgrade_drives_online=dict(type="bool", default=True))
super(NetAppESeriesDriveFirmware, self).__init__(ansible_options=ansible_options,
web_services_version="02.00.0000.0000",
supports_check_mode=True)
args = self.module.params
self.firmware_list = args["firmware"]
self.wait_for_completion = args["wait_for_completion"]
self.ignore_inaccessible_drives = args["ignore_inaccessible_drives"]
self.upgrade_drives_online = args["upgrade_drives_online"]
self.upgrade_list_cache = None
self.upgrade_required_cache = None
self.upgrade_in_progress = False
self.drive_info_cache = None
def upload_firmware(self):
"""Ensure firmware files have been uploaded prior to upgrading."""
for firmware in self.firmware_list:
firmware_name = os.path.basename(firmware)
files = [("file", firmware_name, firmware)]
headers, data = create_multipart_formdata(files)
try:
rc, response = self.request("/files/drive", method="POST", headers=headers, data=data)
except Exception as error:
self.module.fail_json(msg="Failed to upload drive firmware [%s]. Array [%s]. Error [%s]." % (firmware_name, self.ssid, to_native(error)))
def upgrade_list(self):
"""Determine whether firmware is compatible with the specified drives."""
if self.upgrade_list_cache is None:
self.upgrade_list_cache = list()
try:
rc, response = self.request("storage-systems/%s/firmware/drives" % self.ssid)
# Create upgrade list, this ensures only the firmware uploaded is applied
for firmware in self.firmware_list:
filename = os.path.basename(firmware)
for uploaded_firmware in response["compatibilities"]:
if uploaded_firmware["filename"] == filename:
# Determine whether upgrade is required
drive_reference_list = []
for drive in uploaded_firmware["compatibleDrives"]:
try:
rc, drive_info = self.request("storage-systems/%s/drives/%s" % (self.ssid, drive["driveRef"]))
# Add drive references that are supported and differ from current firmware
if (drive_info["firmwareVersion"] != uploaded_firmware["firmwareVersion"] and
uploaded_firmware["firmwareVersion"] in uploaded_firmware["supportedFirmwareVersions"]):
if self.ignore_inaccessible_drives or not drive_info["offline"]:
drive_reference_list.append(drive["driveRef"])
if not drive["onlineUpgradeCapable"] and self.upgrade_drives_online:
self.module.fail_json(msg="Drive is not capable of online upgrade. Array [%s]. Drive [%s]."
% (self.ssid, drive["driveRef"]))
except Exception as error:
self.module.fail_json(msg="Failed to retrieve drive information. Array [%s]. Drive [%s]. Error [%s]."
% (self.ssid, drive["driveRef"], to_native(error)))
if drive_reference_list:
self.upgrade_list_cache.extend([{"filename": filename, "driveRefList": drive_reference_list}])
except Exception as error:
self.module.fail_json(msg="Failed to complete compatibility and health check. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
return self.upgrade_list_cache
def wait_for_upgrade_completion(self):
"""Wait for drive firmware upgrade to complete."""
drive_references = [reference for drive in self.upgrade_list() for reference in drive["driveRefList"]]
last_status = None
for attempt in range(int(self.WAIT_TIMEOUT_SEC / 5)):
try:
rc, response = self.request("storage-systems/%s/firmware/drives/state" % self.ssid)
# Check drive status
for status in response["driveStatus"]:
last_status = status
if status["driveRef"] in drive_references:
if status["status"] == "okay":
continue
elif status["status"] in ["inProgress", "inProgressRecon", "pending", "notAttempted"]:
break
else:
self.module.fail_json(msg="Drive firmware upgrade failed. Array [%s]. Drive [%s]. Status [%s]."
% (self.ssid, status["driveRef"], status["status"]))
else:
self.upgrade_in_progress = False
break
except Exception as error:
self.module.fail_json(msg="Failed to retrieve drive status. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
sleep(5)
else:
self.module.fail_json(msg="Timed out waiting for drive firmware upgrade. Array [%s]. Status [%s]." % (self.ssid, last_status))
def upgrade(self):
"""Apply firmware to applicable drives."""
try:
rc, response = self.request("storage-systems/%s/firmware/drives/initiate-upgrade?onlineUpdate=%s"
% (self.ssid, "true" if self.upgrade_drives_online else "false"), method="POST", data=self.upgrade_list())
self.upgrade_in_progress = True
except Exception as error:
self.module.fail_json(msg="Failed to upgrade drive firmware. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
if self.wait_for_completion:
self.wait_for_upgrade_completion()
def apply(self):
"""Ensure the firmware policy has been enforced on the E-Series storage system."""
self.upload_firmware()
if self.upgrade_list() and not self.module.check_mode:
self.upgrade()
self.module.exit_json(changed=bool(self.upgrade_list()), upgrade_in_process=self.upgrade_in_progress)
def main():
drive_firmware = NetAppESeriesDriveFirmware()
drive_firmware.apply()
if __name__ == '__main__':
main()
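`wait_for_upgrade_completion()` above leans on Python's `for`/`else`: the `else` branch fires only when the loop exhausts without a `break`, which maps naturally onto "timed out". A sketch of that idiom with a fake status source in place of the REST polling:

```python
# Sketch of the poll-until-done / for-else-timeout idiom used by
# wait_for_upgrade_completion() above. The status source is simulated.
import itertools


def wait_for_status(poll, attempts):
    """Poll until poll() returns 'okay'; raise TimeoutError otherwise."""
    for attempt in range(attempts):
        if poll() == "okay":
            break  # success: skip the else branch
    else:
        # Loop exhausted without break: treat as a timeout.
        raise TimeoutError("status never reached 'okay'")
    return attempt


# Simulated controller statuses: two busy polls, then done.
statuses = itertools.chain(["inProgress", "pending", "okay"],
                           itertools.repeat("okay"))
finished_on = wait_for_status(lambda: next(statuses), attempts=5)
```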


@@ -0,0 +1,604 @@
#!/usr/bin/python
# (c) 2020, NetApp, Inc
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = """
---
module: na_santricity_firmware
short_description: NetApp E-Series manage firmware.
description:
- Ensure specific firmware versions are activated on E-Series storage system.
author:
- Nathan Swartz (@ndswartz)
extends_documentation_fragment:
- netapp_eseries.santricity.santricity.santricity_doc
options:
nvsram:
description:
- Path to the NVSRAM file.
- NetApp recommends upgrading the NVSRAM when upgrading firmware.
- Due to concurrency issues, use M(na_santricity_proxy_firmware_upload) to upload firmware and nvsram to SANtricity Web Services Proxy when
upgrading multiple systems at the same time on the same instance of the proxy.
type: str
required: false
firmware:
description:
- Path to the firmware file.
- Due to concurrency issues, use M(na_santricity_proxy_firmware_upload) to upload firmware and nvsram to SANtricity Web Services Proxy when
upgrading multiple systems at the same time on the same instance of the proxy.
type: str
required: True
wait_for_completion:
description:
- This flag will cause the module to wait for any upgrade actions to complete.
- When changes are required to both firmware and nvsram and task is executed against SANtricity Web Services Proxy,
the firmware will have to complete before nvsram can be installed.
type: bool
default: false
clear_mel_events:
description:
            - This flag will force firmware to be activated despite any storage system MEL-event issues.
            - Warning! This will clear all storage system MEL-events. Use at your own risk!
type: bool
default: false
"""
EXAMPLES = """
- name: Ensure correct firmware versions
na_santricity_firmware:
ssid: "1"
api_url: "https://192.168.1.100:8443/devmgr/v2"
api_username: "admin"
api_password: "adminpass"
validate_certs: true
nvsram: "path/to/nvsram"
firmware: "path/to/bundle"
wait_for_completion: true
clear_mel_events: true
- name: Ensure correct firmware versions
na_santricity_firmware:
ssid: "1"
api_url: "https://192.168.1.100:8443/devmgr/v2"
api_username: "admin"
api_password: "adminpass"
validate_certs: true
nvsram: "path/to/nvsram"
firmware: "path/to/firmware"
"""
RETURN = """
msg:
description: Status and version of firmware and NVSRAM.
type: str
returned: always
sample:
"""
import os
import multiprocessing
import threading
from time import sleep
from ansible.module_utils import six
from ansible_collections.netapp_eseries.santricity.plugins.module_utils.santricity import NetAppESeriesModule, create_multipart_formdata, request
from ansible.module_utils._text import to_native
class NetAppESeriesFirmware(NetAppESeriesModule):
COMPATIBILITY_CHECK_TIMEOUT_SEC = 60
REBOOT_TIMEOUT_SEC = 30 * 60
MINIMUM_PROXY_VERSION = "04.10.00.0000"
def __init__(self):
ansible_options = dict(
nvsram=dict(type="str", required=False),
firmware=dict(type="str", required=True),
wait_for_completion=dict(type="bool", default=False),
clear_mel_events=dict(type="bool", default=False))
super(NetAppESeriesFirmware, self).__init__(ansible_options=ansible_options,
web_services_version="02.00.0000.0000",
supports_check_mode=True)
args = self.module.params
self.nvsram = args["nvsram"]
self.firmware = args["firmware"]
self.wait_for_completion = args["wait_for_completion"]
self.clear_mel_events = args["clear_mel_events"]
self.nvsram_name = None
self.firmware_name = None
self.is_bundle_cache = None
self.firmware_version_cache = None
self.nvsram_version_cache = None
self.upgrade_required = False
self.upgrade_in_progress = False
self.module_info = dict()
if self.nvsram:
self.nvsram_name = os.path.basename(self.nvsram)
if self.firmware:
self.firmware_name = os.path.basename(self.firmware)
self.last_known_event = -1
self.is_firmware_activation_started_mel_event_count = 1
self.is_nvsram_download_completed_mel_event_count = 1
self.proxy_wait_for_upgrade_mel_event_count = 1
def is_upgrade_in_progress(self):
"""Determine whether an upgrade is already in progress."""
in_progress = False
if self.is_proxy():
try:
rc, status = self.request("storage-systems/%s/cfw-upgrade" % self.ssid)
in_progress = status["running"]
except Exception as error:
if "errorMessage" in to_native(error):
self.module.warn("Failed to retrieve upgrade status. Array [%s]. Error [%s]." % (self.ssid, error))
in_progress = False
else:
self.module.fail_json(msg="Failed to retrieve upgrade status. Array [%s]. Error [%s]." % (self.ssid, error))
else:
in_progress = False
return in_progress
def is_firmware_bundled(self):
"""Determine whether supplied firmware is bundle."""
if self.is_bundle_cache is None:
with open(self.firmware, "rb") as fh:
signature = fh.read(16).lower()
if b"firmware" in signature:
self.is_bundle_cache = False
elif b"combined_content" in signature:
self.is_bundle_cache = True
else:
self.module.fail_json(msg="Firmware file is invalid. File [%s]. Array [%s]" % (self.firmware, self.ssid))
return self.is_bundle_cache
def firmware_version(self):
"""Retrieve firmware version of the firmware file. Return: bytes string"""
if self.firmware_version_cache is None:
# Search firmware file for bundle or firmware version
with open(self.firmware, "rb") as fh:
line = fh.readline()
while line:
if self.is_firmware_bundled():
if b'displayableAttributeList=' in line:
for item in line[25:].split(b','):
key, value = item.split(b"|")
if key == b'VERSION':
self.firmware_version_cache = value.strip(b"\n")
break
elif b"Version:" in line:
self.firmware_version_cache = line.split()[-1].strip(b"\n")
break
line = fh.readline()
else:
self.module.fail_json(msg="Failed to determine firmware version. File [%s]. Array [%s]." % (self.firmware, self.ssid))
return self.firmware_version_cache
def nvsram_version(self):
"""Retrieve NVSRAM version of the NVSRAM file. Return: byte string"""
if self.nvsram_version_cache is None:
with open(self.nvsram, "rb") as fh:
line = fh.readline()
while line:
if b".NVSRAM Configuration Number" in line:
self.nvsram_version_cache = line.split(b'"')[-2]
break
line = fh.readline()
else:
self.module.fail_json(msg="Failed to determine NVSRAM file version. File [%s]. Array [%s]." % (self.nvsram, self.ssid))
return self.nvsram_version_cache
def check_system_health(self):
"""Ensure E-Series storage system is healthy. Works for both embedded and proxy web services."""
try:
rc, response = self.request("storage-systems/%s/health-check" % self.ssid, method="POST")
return response["successful"]
except Exception as error:
self.module.fail_json(msg="Health check failed! Array Id [%s]. Error[%s]." % (self.ssid, to_native(error)))
def embedded_check_compatibility(self):
"""Verify files are compatible with E-Series storage system."""
if self.nvsram:
self.embedded_check_nvsram_compatibility()
if self.firmware:
self.embedded_check_bundle_compatibility()
def embedded_check_nvsram_compatibility(self):
"""Verify the provided NVSRAM is compatible with E-Series storage system."""
files = [("nvsramimage", self.nvsram_name, self.nvsram)]
headers, data = create_multipart_formdata(files=files)
compatible = {}
try:
rc, compatible = self.request("firmware/embedded-firmware/%s/nvsram-compatibility-check" % self.ssid, method="POST", data=data, headers=headers)
except Exception as error:
self.module.fail_json(msg="Failed to retrieve NVSRAM compatibility results. Array Id [%s]. Error[%s]." % (self.ssid, to_native(error)))
if not compatible["signatureTestingPassed"]:
self.module.fail_json(msg="Invalid NVSRAM file. File [%s]." % self.nvsram)
if not compatible["fileCompatible"]:
self.module.fail_json(msg="Incompatible NVSRAM file. File [%s]." % self.nvsram)
# Determine whether nvsram upgrade is required
for module in compatible["versionContents"]:
if module["bundledVersion"] != module["onboardVersion"]:
self.upgrade_required = True
# Update bundle info
self.module_info.update({module["module"]: {"onboard_version": module["onboardVersion"], "bundled_version": module["bundledVersion"]}})
def embedded_check_bundle_compatibility(self):
"""Verify the provided firmware bundle is compatible with E-Series storage system."""
files = [("files[]", "blob", self.firmware)]
headers, data = create_multipart_formdata(files=files, send_8kb=True)
compatible = {}
try:
rc, compatible = self.request("firmware/embedded-firmware/%s/bundle-compatibility-check" % self.ssid, method="POST", data=data, headers=headers)
except Exception as error:
self.module.fail_json(msg="Failed to retrieve bundle compatibility results. Array Id [%s]. Error[%s]." % (self.ssid, to_native(error)))
# Determine whether valid and compatible firmware
if not compatible["signatureTestingPassed"]:
self.module.fail_json(msg="Invalid firmware bundle file. File [%s]." % self.firmware)
if not compatible["fileCompatible"]:
self.module.fail_json(msg="Incompatible firmware bundle file. File [%s]." % self.firmware)
# Determine whether bundle upgrade is required
for module in compatible["versionContents"]:
bundle_module_version = module["bundledVersion"].split(".")
onboard_module_version = module["onboardVersion"].split(".")
version_minimum_length = min(len(bundle_module_version), len(onboard_module_version))
if bundle_module_version[:version_minimum_length] != onboard_module_version[:version_minimum_length]:
self.upgrade_required = True
# Build the modules information for logging purposes
self.module_info.update({module["module"]: {"onboard_version": module["onboardVersion"], "bundled_version": module["bundledVersion"]}})
def embedded_firmware_activate(self):
"""Activate firmware."""
rc, response = self.request("firmware/embedded-firmware/activate", method="POST", ignore_errors=True, timeout=10)
if rc == "422":
self.module.fail_json(msg="Failed to activate the staged firmware. Array Id [%s]. Error [%s]" % (self.ssid, response))
def embedded_firmware_download(self):
"""Execute the firmware download."""
if self.nvsram:
firmware_url = "firmware/embedded-firmware?nvsram=true&staged=true"
headers, data = create_multipart_formdata(files=[("nvsramfile", self.nvsram_name, self.nvsram),
("dlpfile", self.firmware_name, self.firmware)])
else:
firmware_url = "firmware/embedded-firmware?nvsram=false&staged=true"
headers, data = create_multipart_formdata(files=[("dlpfile", self.firmware_name, self.firmware)])
# Stage firmware and nvsram
try:
rc, response = self.request(firmware_url, method="POST", data=data, headers=headers, timeout=(30 * 60))
except Exception as error:
self.module.fail_json(msg="Failed to stage firmware. Array Id [%s]. Error[%s]." % (self.ssid, to_native(error)))
# Activate firmware
activate_thread = threading.Thread(target=self.embedded_firmware_activate)
activate_thread.start()
self.wait_for_reboot()
def wait_for_reboot(self):
"""Wait for controller A to fully reboot and web services running"""
reboot_started = False
reboot_completed = False
self.module.log("Controller firmware: Reboot commencing. Array Id [%s]." % self.ssid)
while self.wait_for_completion and not (reboot_started and reboot_completed):
try:
rc, response = self.request("storage-systems/%s/symbol/pingController?controller=a&verboseErrorResponse=true"
% self.ssid, method="POST", timeout=10, log_request=False)
if reboot_started and response == "ok":
self.module.log("Controller firmware: Reboot completed. Array Id [%s]." % self.ssid)
reboot_completed = True
sleep(2)
except Exception as error:
if not reboot_started:
self.module.log("Controller firmware: Reboot started. Array Id [%s]." % self.ssid)
reboot_started = True
continue
def firmware_event_logger(self):
"""Determine if firmware activation has started."""
# Determine the last known event
try:
rc, events = self.request("storage-systems/%s/events" % self.ssid)
for event in events:
if int(event["eventNumber"]) > int(self.last_known_event):
self.last_known_event = event["eventNumber"]
except Exception as error:
self.module.fail_json(msg="Failed to determine last known event. Array Id [%s]. Error[%s]." % (self.ssid, to_native(error)))
while True:
try:
rc, events = self.request("storage-systems/%s/events?lastKnown=%s&wait=1" % (self.ssid, self.last_known_event), log_request=False)
for event in events:
if int(event["eventNumber"]) > int(self.last_known_event):
self.last_known_event = event["eventNumber"]
# Log firmware events
if event["eventType"] == "firmwareDownloadEvent":
self.module.log("%s" % event["status"])
if event["status"] == "informational" and event["statusMessage"]:
self.module.log("Controller firmware: %s Array Id [%s]." % (event["statusMessage"], self.ssid))
# When activation is successful, finish thread
if event["status"] == "activate_success":
self.module.log("Controller firmware activated. Array Id [%s]." % self.ssid)
return
except Exception as error:
pass
def wait_for_web_services(self):
"""Wait for web services to report firmware and nvsram upgrade."""
# Wait for system to reflect changes
for count in range(int(self.REBOOT_TIMEOUT_SEC / 5)):
try:
if self.is_firmware_bundled():
firmware_rc, firmware_version = self.request("storage-systems/%s/graph/xpath-filter?query=/controller/"
"codeVersions[codeModule='bundleDisplay']" % self.ssid, log_request=False)
current_firmware_version = six.b(firmware_version[0]["versionString"])
else:
firmware_rc, firmware_version = self.request("storage-systems/%s/graph/xpath-filter?query=/sa/saData/fwVersion"
% self.ssid, log_request=False)
current_firmware_version = six.b(firmware_version[0])
nvsram_rc, nvsram_version = self.request("storage-systems/%s/graph/xpath-filter?query=/sa/saData/nvsramVersion" % self.ssid, log_request=False)
current_nvsram_version = six.b(nvsram_version[0])
if current_firmware_version == self.firmware_version() and (not self.nvsram or current_nvsram_version == self.nvsram_version()):
break
except Exception as error:
pass
sleep(5)
else:
self.module.fail_json(msg="Timeout waiting for Santricity Web Services. Array [%s]" % self.ssid)
# Wait for system to be optimal
for count in range(int(self.REBOOT_TIMEOUT_SEC / 5)):
try:
rc, response = self.request("storage-systems/%s" % self.ssid, log_request=False)
if response["status"] == "optimal":
self.upgrade_in_progress = False
break
except Exception as error:
pass
sleep(5)
else:
self.module.fail_json(msg="Timeout waiting for storage system to return to optimal status. Array [%s]" % self.ssid)
def embedded_upgrade(self):
"""Upload and activate both firmware and NVSRAM."""
download_thread = threading.Thread(target=self.embedded_firmware_download)
event_thread = threading.Thread(target=self.firmware_event_logger)
download_thread.start()
event_thread.start()
download_thread.join()
event_thread.join()
def proxy_check_nvsram_compatibility(self, retries=10):
"""Verify nvsram is compatible with E-Series storage system."""
self.module.log("Checking nvsram compatibility...")
data = {"storageDeviceIds": [self.ssid]}
try:
rc, check = self.request("firmware/compatibility-check", method="POST", data=data)
except Exception as error:
if retries:
sleep(1)
                return self.proxy_check_nvsram_compatibility(retries - 1)
else:
self.module.fail_json(msg="Failed to receive NVSRAM compatibility information. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
for count in range(int(self.COMPATIBILITY_CHECK_TIMEOUT_SEC / 5)):
try:
rc, response = self.request("firmware/compatibility-check?requestId=%s" % check["requestId"])
except Exception as error:
continue
if not response["checkRunning"]:
for result in response["results"][0]["nvsramFiles"]:
if result["filename"] == self.nvsram_name:
return
self.module.fail_json(msg="NVSRAM is not compatible. NVSRAM [%s]. Array [%s]." % (self.nvsram_name, self.ssid))
sleep(5)
self.module.fail_json(msg="Failed to retrieve NVSRAM status update from proxy. Array [%s]." % self.ssid)
def proxy_check_firmware_compatibility(self, retries=10):
"""Verify firmware is compatible with E-Series storage system."""
check = {}
try:
rc, check = self.request("firmware/compatibility-check", method="POST", data={"storageDeviceIds": [self.ssid]})
except Exception as error:
if retries:
sleep(1)
                return self.proxy_check_firmware_compatibility(retries - 1)
else:
self.module.fail_json(msg="Failed to receive firmware compatibility information. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
for count in range(int(self.COMPATIBILITY_CHECK_TIMEOUT_SEC / 5)):
try:
rc, response = self.request("firmware/compatibility-check?requestId=%s" % check["requestId"])
except Exception as error:
continue
if not response["checkRunning"]:
for result in response["results"][0]["cfwFiles"]:
if result["filename"] == self.firmware_name:
return
self.module.fail_json(msg="Firmware bundle is not compatible. firmware [%s]. Array [%s]." % (self.firmware_name, self.ssid))
sleep(5)
self.module.fail_json(msg="Failed to retrieve firmware status update from proxy. Array [%s]." % self.ssid)
def proxy_upload_and_check_compatibility(self):
"""Ensure firmware/nvsram file is uploaded and verify compatibility."""
uploaded_files = []
try:
rc, uploaded_files = self.request("firmware/cfw-files")
except Exception as error:
self.module.fail_json(msg="Failed to retrieve uploaded firmware and nvsram files. Error [%s]" % to_native(error))
if self.firmware:
for uploaded_file in uploaded_files:
if uploaded_file["filename"] == self.firmware_name:
break
else:
fields = [("validate", "true")]
files = [("firmwareFile", self.firmware_name, self.firmware)]
headers, data = create_multipart_formdata(files=files, fields=fields)
try:
rc, response = self.request("firmware/upload", method="POST", data=data, headers=headers)
except Exception as error:
self.module.fail_json(msg="Failed to upload firmware bundle file. File [%s]. Array [%s]. Error [%s]."
% (self.firmware_name, self.ssid, to_native(error)))
self.proxy_check_firmware_compatibility()
if self.nvsram:
for uploaded_file in uploaded_files:
if uploaded_file["filename"] == self.nvsram_name:
break
else:
fields = [("validate", "true")]
files = [("firmwareFile", self.nvsram_name, self.nvsram)]
headers, data = create_multipart_formdata(files=files, fields=fields)
try:
rc, response = self.request("firmware/upload", method="POST", data=data, headers=headers)
except Exception as error:
self.module.fail_json(msg="Failed to upload NVSRAM file. File [%s]. Array [%s]. Error [%s]."
% (self.nvsram_name, self.ssid, to_native(error)))
self.proxy_check_nvsram_compatibility()
def proxy_check_upgrade_required(self):
"""Determine whether the onboard firmware/nvsram version is the same as the file"""
# Verify controller consistency and get firmware versions
if self.firmware:
current_firmware_version = b""
try:
# Retrieve current bundle version
if self.is_firmware_bundled():
rc, response = self.request("storage-systems/%s/graph/xpath-filter?query=/controller/codeVersions[codeModule='bundleDisplay']" % self.ssid)
current_firmware_version = six.b(response[0]["versionString"])
else:
rc, response = self.request("storage-systems/%s/graph/xpath-filter?query=/sa/saData/fwVersion" % self.ssid)
current_firmware_version = six.b(response[0])
except Exception as error:
self.module.fail_json(msg="Failed to retrieve controller firmware information. Array [%s]. Error [%s]" % (self.ssid, to_native(error)))
# Determine whether the current firmware version is the same as the file
new_firmware_version = self.firmware_version()
if current_firmware_version != new_firmware_version:
self.upgrade_required = True
# Build the modules information for logging purposes
self.module_info.update({"bundleDisplay": {"onboard_version": current_firmware_version, "bundled_version": new_firmware_version}})
# Determine current NVSRAM version and whether change is required
if self.nvsram:
try:
rc, response = self.request("storage-systems/%s/graph/xpath-filter?query=/sa/saData/nvsramVersion" % self.ssid)
if six.b(response[0]) != self.nvsram_version():
self.upgrade_required = True
except Exception as error:
self.module.fail_json(msg="Failed to retrieve storage system's NVSRAM version. Array [%s]. Error [%s]" % (self.ssid, to_native(error)))
def proxy_wait_for_upgrade(self):
"""Wait for SANtricity Web Services Proxy to report upgrade complete"""
self.module.log("(Proxy) Waiting for upgrade to complete...")
status = {}
while True:
try:
rc, status = self.request("storage-systems/%s/cfw-upgrade" % self.ssid, log_request=False, ignore_errors=True)
except Exception as error:
self.module.fail_json(msg="Failed to retrieve firmware upgrade status! Array [%s]. Error[%s]." % (self.ssid, to_native(error)))
if "errorMessage" in status:
self.module.warn("Proxy reported an error. Checking whether upgrade completed. Array [%s]. Error [%s]." % (self.ssid, status["errorMessage"]))
self.wait_for_web_services()
break
if not status["running"]:
if status["activationCompletionTime"]:
self.upgrade_in_progress = False
break
else:
self.module.fail_json(msg="Failed to complete upgrade. Array [%s]." % self.ssid)
sleep(5)
def delete_mel_events(self):
"""Clear all mel-events."""
try:
rc, response = self.request("storage-systems/%s/mel-events?clearCache=true&resetMel=true" % self.ssid, method="DELETE")
except Exception as error:
self.module.fail_json(msg="Failed to clear mel-events. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
def proxy_upgrade(self):
"""Activate previously uploaded firmware related files."""
self.module.log("(Proxy) Firmware upgrade commencing...")
body = {"stageFirmware": False, "skipMelCheck": self.clear_mel_events, "cfwFile": self.firmware_name}
if self.nvsram:
body.update({"nvsramFile": self.nvsram_name})
try:
rc, response = self.request("storage-systems/%s/cfw-upgrade" % self.ssid, method="POST", data=body)
except Exception as error:
self.module.fail_json(msg="Failed to initiate firmware upgrade. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
self.upgrade_in_progress = True
if self.wait_for_completion:
self.proxy_wait_for_upgrade()
def apply(self):
"""Upgrade controller firmware."""
if self.is_upgrade_in_progress():
self.module.fail_json(msg="Upgrade is already is progress. Array [%s]." % self.ssid)
if self.is_embedded():
self.embedded_check_compatibility()
else:
if not self.is_web_services_version_met(self.MINIMUM_PROXY_VERSION):
self.module.fail_json(msg="Minimum proxy version %s required!")
self.proxy_check_upgrade_required()
# This will upload the firmware files to the web services proxy but not to the controller
if self.upgrade_required:
self.proxy_upload_and_check_compatibility()
# Perform upgrade
if self.upgrade_required and not self.module.check_mode:
if self.clear_mel_events:
self.delete_mel_events()
if self.is_embedded():
self.embedded_upgrade()
else:
self.proxy_upgrade()
self.module.exit_json(changed=self.upgrade_required, upgrade_in_process=self.upgrade_in_progress, modules_info=self.module_info)
def main():
firmware = NetAppESeriesFirmware()
firmware.apply()
if __name__ == "__main__":
main()
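The `firmware_version()` method above scans the firmware file header for either a bundled `displayableAttributeList=` line or a plain `Version:` line. A minimal standalone sketch of the bundled-header parse; the sample header line is illustrative only, not copied from a real firmware bundle:

```python
def parse_bundle_version(line):
    """Extract the VERSION field from a 'displayableAttributeList=' header line."""
    prefix = b"displayableAttributeList="
    if not line.startswith(prefix):
        return None
    # The attribute list is comma-separated KEY|VALUE pairs.
    for item in line[len(prefix):].split(b","):
        key, _, value = item.partition(b"|")
        if key == b"VERSION":
            return value.strip(b"\n")
    return None


sample = b"displayableAttributeList=VERSION|08.42.50.00,TYPE|bundle\n"
print(parse_bundle_version(sample))  # b'08.42.50.00'
```

The module keeps the parsed value as a byte string so it can be compared directly against versions returned by the web services graph endpoints via `six.b()`.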


@@ -0,0 +1,506 @@
#!/usr/bin/python
# (c) 2020, NetApp, Inc
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = """
---
module: na_santricity_global
short_description: NetApp E-Series manage global settings configuration
description:
- Allow the user to configure several of the global settings associated with an E-Series storage-system
author:
- Michael Price (@lmprice)
- Nathan Swartz (@ndswartz)
extends_documentation_fragment:
- netapp_eseries.santricity.santricity.santricity_doc
options:
name:
description:
- Set the name of the E-Series storage-system
- This label/name doesn't have to be unique.
- May be up to 30 characters in length.
type: str
aliases:
- label
cache_block_size:
description:
- Size of the cache's block size.
- All volumes on the storage system share the same cache space; therefore, the volumes can have only one cache block size.
- See M(na_santricity_facts) for available sizes.
type: int
required: False
cache_flush_threshold:
description:
- This is the percentage threshold of the amount of unwritten data that is allowed to remain on the storage array's cache before flushing.
type: int
required: False
default_host_type:
description:
- Default host type for the storage system.
            - One of the following names can be specified (Linux DM-MP, VMWare, Windows, Windows Clustered), or a
              host type index which can be found in M(na_santricity_facts).
type: str
required: False
automatic_load_balancing:
description:
- Enable automatic load balancing to allow incoming traffic from the hosts to be dynamically managed and balanced across both controllers.
- Automatic load balancing requires host connectivity reporting to be enabled.
type: str
choices:
- enabled
- disabled
required: False
host_connectivity_reporting:
description:
- Enable host connectivity reporting to allow host connections to be monitored for connection and multipath driver problems.
            - When I(automatic_load_balancing==enabled) then I(host_connectivity_reporting) must be enabled.
type: str
choices:
- enabled
- disabled
required: False
login_banner_message:
description:
- Text message that appears prior to the login page.
- I(login_banner_message=="") will delete any existing banner message.
type: str
required: False
controller_shelf_id:
description:
- This is the identifier for the drive enclosure containing the controllers.
type: int
required: false
default: 0
notes:
- Check mode is supported.
- This module requires Web Services API v1.3 or newer.
"""
EXAMPLES = """
- name: Set the storage-system name
na_santricity_global:
ssid: "1"
api_url: "https://192.168.1.100:8443/devmgr/v2"
api_username: "admin"
api_password: "adminpass"
validate_certs: true
name: myArrayName
cache_block_size: 32768
cache_flush_threshold: 80
automatic_load_balancing: enabled
default_host_type: Linux DM-MP
- name: Set the storage-system name
na_santricity_global:
ssid: "1"
api_url: "https://192.168.1.100:8443/devmgr/v2"
api_username: "admin"
api_password: "adminpass"
validate_certs: true
name: myOtherArrayName
cache_block_size: 8192
cache_flush_threshold: 60
automatic_load_balancing: disabled
default_host_type: 28
"""
RETURN = """
changed:
description: Whether global settings were changed
returned: on success
type: bool
sample: true
array_name:
description: Current storage array's name
returned: on success
type: str
sample: arrayName
automatic_load_balancing:
description: Whether automatic load balancing feature has been enabled
returned: on success
type: str
sample: enabled
host_connectivity_reporting:
description: Whether host connectivity reporting feature has been enabled
returned: on success
type: str
sample: enabled
cache_settings:
description: Current cache block size and flushing threshold values
returned: on success
type: dict
sample: {"cache_block_size": 32768, "cache_flush_threshold": 80}
default_host_type_index:
description: Current default host type index
returned: on success
type: int
sample: 28
login_banner_message:
description: Current banner message
returned: on success
type: str
sample: "Banner message here!"
controller_shelf_id:
description: Identifier for the drive enclosure containing the controllers.
returned: on success
type: int
sample: 99
"""
import random
import sys
from ansible_collections.netapp_eseries.santricity.plugins.module_utils.santricity import NetAppESeriesModule, create_multipart_formdata
from ansible.module_utils import six
from ansible.module_utils._text import to_native
try:
from ansible.module_utils.ansible_release import __version__ as ansible_version
except ImportError:
ansible_version = 'unknown'
class NetAppESeriesGlobalSettings(NetAppESeriesModule):
MAXIMUM_LOGIN_BANNER_SIZE_BYTES = 5 * 1024
LAST_AVAILABLE_CONTROLLER_SHELF_ID = 99
def __init__(self):
version = "02.00.0000.0000"
        ansible_options = dict(cache_block_size=dict(type="int", required=False),
                               cache_flush_threshold=dict(type="int", required=False),
                               default_host_type=dict(type="str", required=False),
automatic_load_balancing=dict(type="str", choices=["enabled", "disabled"], required=False),
host_connectivity_reporting=dict(type="str", choices=["enabled", "disabled"], required=False),
name=dict(type='str', required=False, aliases=['label']),
login_banner_message=dict(type='str', required=False),
controller_shelf_id=dict(type="int", required=False, default=0))
super(NetAppESeriesGlobalSettings, self).__init__(ansible_options=ansible_options,
web_services_version=version,
supports_check_mode=True)
args = self.module.params
self.name = args["name"]
self.cache_block_size = args["cache_block_size"]
self.cache_flush_threshold = args["cache_flush_threshold"]
self.host_type_index = args["default_host_type"]
self.controller_shelf_id = args["controller_shelf_id"]
self.login_banner_message = None
if args["login_banner_message"] is not None:
self.login_banner_message = args["login_banner_message"].rstrip("\n")
self.autoload_enabled = None
if args["automatic_load_balancing"]:
self.autoload_enabled = args["automatic_load_balancing"] == "enabled"
self.host_connectivity_reporting_enabled = None
if args["host_connectivity_reporting"]:
self.host_connectivity_reporting_enabled = args["host_connectivity_reporting"] == "enabled"
elif self.autoload_enabled:
self.host_connectivity_reporting_enabled = True
if self.autoload_enabled and not self.host_connectivity_reporting_enabled:
self.module.fail_json(msg="Option automatic_load_balancing requires host_connectivity_reporting to be enabled. Array [%s]." % self.ssid)
self.current_configuration_cache = None
def get_current_configuration(self, update=False):
"""Retrieve the current storage array's global configuration."""
if self.current_configuration_cache is None or update:
self.current_configuration_cache = dict()
# Get the storage array's capabilities and available options
try:
rc, capabilities = self.request("storage-systems/%s/capabilities" % self.ssid)
self.current_configuration_cache["autoload_capable"] = "capabilityAutoLoadBalancing" in capabilities["productCapabilities"]
self.current_configuration_cache["cache_block_size_options"] = capabilities["featureParameters"]["cacheBlockSizes"]
except Exception as error:
self.module.fail_json(msg="Failed to retrieve storage array capabilities. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
try:
rc, host_types = self.request("storage-systems/%s/host-types" % self.ssid)
self.current_configuration_cache["host_type_options"] = dict()
for host_type in host_types:
self.current_configuration_cache["host_type_options"].update({host_type["code"].lower(): host_type["index"]})
except Exception as error:
self.module.fail_json(msg="Failed to retrieve storage array host options. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
# Get the current cache settings
try:
rc, settings = self.request("storage-systems/%s/graph/xpath-filter?query=/sa" % self.ssid)
self.current_configuration_cache["cache_settings"] = {"cache_block_size": settings[0]["cache"]["cacheBlkSize"],
"cache_flush_threshold": settings[0]["cache"]["demandFlushThreshold"]}
self.current_configuration_cache["default_host_type_index"] = settings[0]["defaultHostTypeIndex"]
except Exception as error:
self.module.fail_json(msg="Failed to retrieve cache settings. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
try:
rc, array_info = self.request("storage-systems/%s" % self.ssid)
self.current_configuration_cache["autoload_enabled"] = array_info["autoLoadBalancingEnabled"]
self.current_configuration_cache["host_connectivity_reporting_enabled"] = array_info["hostConnectivityReportingEnabled"]
self.current_configuration_cache["name"] = array_info['name']
except Exception as error:
self.module.fail_json(msg="Failed to determine current configuration. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
try:
rc, login_banner_message = self.request("storage-systems/%s/login-banner?asFile=false" % self.ssid, ignore_errors=True, json_response=False,
headers={"Accept": "application/octet-stream", "netapp-client-type": "Ansible-%s" % ansible_version})
self.current_configuration_cache["login_banner_message"] = login_banner_message.decode("utf-8").rstrip("\n")
except Exception as error:
self.module.fail_json(msg="Failed to determine current login banner message. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
try:
rc, hardware_inventory = self.request("storage-systems/%s/hardware-inventory" % self.ssid)
self.current_configuration_cache["controller_shelf_reference"] = hardware_inventory["trays"][0]["trayRef"]
self.current_configuration_cache["controller_shelf_id"] = hardware_inventory["trays"][0]["trayId"]
self.current_configuration_cache["used_shelf_ids"] = [tray["trayId"] for tray in hardware_inventory["trays"]]
except Exception as error:
self.module.fail_json(msg="Failed to retrieve controller shelf identifier. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
return self.current_configuration_cache
def change_cache_block_size_required(self):
"""Determine whether cache block size change is required."""
if self.cache_block_size is None:
return False
current_configuration = self.get_current_configuration()
current_available_block_sizes = current_configuration["cache_block_size_options"]
if self.cache_block_size not in current_available_block_sizes:
self.module.fail_json(msg="Invalid cache block size. Array [%s]. Available cache block sizes [%s]." % (self.ssid, current_available_block_sizes))
return self.cache_block_size != current_configuration["cache_settings"]["cache_block_size"]
def change_cache_flush_threshold_required(self):
"""Determine whether cache flush percentage change is required."""
if self.cache_flush_threshold is None:
return False
current_configuration = self.get_current_configuration()
if self.cache_flush_threshold <= 0 or self.cache_flush_threshold >= 100:
self.module.fail_json(msg="Invalid cache flushing threshold. It must be an integer between 0 and 100, exclusive. Array [%s]." % self.ssid)
return self.cache_flush_threshold != current_configuration["cache_settings"]["cache_flush_threshold"]
def change_host_type_required(self):
"""Determine whether default host type change is required."""
if self.host_type_index is None:
return False
current_configuration = self.get_current_configuration()
current_available_host_types = current_configuration["host_type_options"]
if isinstance(self.host_type_index, str):
self.host_type_index = self.host_type_index.lower()
if self.host_type_index in self.HOST_TYPE_INDEXES.keys():
self.host_type_index = self.HOST_TYPE_INDEXES[self.host_type_index]
elif self.host_type_index in current_available_host_types.keys():
self.host_type_index = current_available_host_types[self.host_type_index]
if self.host_type_index not in current_available_host_types.values():
self.module.fail_json(msg="Invalid host type index! Array [%s]. Available host options [%s]." % (self.ssid, current_available_host_types))
return int(self.host_type_index) != current_configuration["default_host_type_index"]
def change_autoload_enabled_required(self):
"""Determine whether automatic load balancing state change is required."""
if self.autoload_enabled is None:
return False
change_required = False
current_configuration = self.get_current_configuration()
if self.autoload_enabled and not current_configuration["autoload_capable"]:
self.module.fail_json(msg="Automatic load balancing is not available. Array [%s]." % self.ssid)
if self.autoload_enabled:
if not current_configuration["autoload_enabled"] or not current_configuration["host_connectivity_reporting_enabled"]:
change_required = True
elif current_configuration["autoload_enabled"]:
change_required = True
return change_required
def change_host_connectivity_reporting_enabled_required(self):
"""Determine whether host connectivity reporting state change is required."""
if self.host_connectivity_reporting_enabled is None:
return False
current_configuration = self.get_current_configuration()
return self.host_connectivity_reporting_enabled != current_configuration["host_connectivity_reporting_enabled"]
def change_name_required(self):
"""Determine whether storage array name change is required."""
if self.name is None:
return False
current_configuration = self.get_current_configuration()
if self.name and len(self.name) > 30:
self.module.fail_json(msg="The provided name is invalid. It must be 30 characters or fewer in length. Array [%s]." % self.ssid)
return self.name != current_configuration["name"]
def change_login_banner_message_required(self):
"""Determine whether login banner message change is required."""
if self.login_banner_message is None:
return False
current_configuration = self.get_current_configuration()
if self.login_banner_message and sys.getsizeof(self.login_banner_message) > self.MAXIMUM_LOGIN_BANNER_SIZE_BYTES:
self.module.fail_json(msg="The banner message is too long! It must be no more than %s bytes. Array [%s]" % (self.MAXIMUM_LOGIN_BANNER_SIZE_BYTES, self.ssid))
return self.login_banner_message != current_configuration["login_banner_message"]
def change_controller_shelf_id_required(self):
"""Determine whether storage array tray identifier change is required."""
current_configuration = self.get_current_configuration()
if self.controller_shelf_id is not None and self.controller_shelf_id != current_configuration["controller_shelf_id"]:
if self.controller_shelf_id in current_configuration["used_shelf_ids"]:
self.module.fail_json(msg="The controller_shelf_id is already used by another shelf. Used identifiers: [%s]. Array [%s]." % (", ".join(str(shelf_id) for shelf_id in current_configuration["used_shelf_ids"]), self.ssid))
if self.controller_shelf_id < 0 or self.controller_shelf_id > self.LAST_AVAILABLE_CONTROLLER_SHELF_ID:
self.module.fail_json(msg="The controller_shelf_id must be between 0 and %s and not already used by another shelf. Used identifiers: [%s]. Array [%s]." % (self.LAST_AVAILABLE_CONTROLLER_SHELF_ID, ", ".join(str(shelf_id) for shelf_id in current_configuration["used_shelf_ids"]), self.ssid))
return True
return False
def update_cache_settings(self):
"""Update cache block size and/or flushing threshold."""
current_configuration = self.get_current_configuration()
block_size = self.cache_block_size if self.cache_block_size else current_configuration["cache_settings"]["cache_block_size"]
threshold = self.cache_flush_threshold if self.cache_flush_threshold else current_configuration["cache_settings"]["cache_flush_threshold"]
try:
rc, cache_settings = self.request("storage-systems/%s/symbol/setSACacheParams?verboseErrorResponse=true" % self.ssid, method="POST",
data={"cacheBlkSize": block_size, "demandFlushAmount": threshold, "demandFlushThreshold": threshold})
except Exception as error:
self.module.fail_json(msg="Failed to set cache settings. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
def update_host_type(self):
"""Update default host type."""
try:
rc, default_host_type = self.request("storage-systems/%s/symbol/setStorageArrayProperties?verboseErrorResponse=true" % self.ssid, method="POST",
data={"settings": {"defaultHostTypeIndex": self.host_type_index}})
except Exception as error:
self.module.fail_json(msg="Failed to set default host type. Array [%s]. Error [%s]" % (self.ssid, to_native(error)))
def update_autoload(self):
"""Update automatic load balancing state."""
current_configuration = self.get_current_configuration()
if self.autoload_enabled and not current_configuration["host_connectivity_reporting_enabled"]:
try:
rc, host_connectivity_reporting = self.request("storage-systems/%s/symbol/setHostConnectivityReporting?verboseErrorResponse=true" % self.ssid,
method="POST", data={"enableHostConnectivityReporting": self.autoload_enabled})
except Exception as error:
self.module.fail_json(msg="Failed to enable host connectivity reporting, which is required for automatic load balancing."
" Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
try:
rc, autoload = self.request("storage-systems/%s/symbol/setAutoLoadBalancing?verboseErrorResponse=true" % self.ssid,
method="POST", data={"enableAutoLoadBalancing": self.autoload_enabled})
except Exception as error:
self.module.fail_json(msg="Failed to set automatic load balancing state. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
def update_host_connectivity_reporting_enabled(self):
"""Update host connectivity reporting state."""
try:
rc, host_connectivity_reporting = self.request("storage-systems/%s/symbol/setHostConnectivityReporting?verboseErrorResponse=true" % self.ssid,
method="POST", data={"enableHostConnectivityReporting": self.host_connectivity_reporting_enabled})
except Exception as error:
self.module.fail_json(msg="Failed to enable host connectivity reporting. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
def update_name(self):
"""Update storage array's name."""
try:
rc, result = self.request("storage-systems/%s/configuration" % self.ssid, method="POST", data={"name": self.name})
except Exception as err:
self.module.fail_json(msg="Failed to set the storage array name! Array Id [%s]. Error [%s]." % (self.ssid, to_native(err)))
def update_login_banner_message(self):
"""Update storage login banner message."""
if self.login_banner_message:
boundary = "---------------------------" + "".join([str(random.randint(0, 9)) for x in range(27)])
data_parts = list()
data = None
if six.PY2: # Generate payload for Python 2
newline = "\r\n"
data_parts.extend(["--%s" % boundary,
'Content-Disposition: form-data; name="file"; filename="banner.txt"',
"Content-Type: text/plain",
"",
self.login_banner_message])
data_parts.extend(["--%s--" % boundary, ""])
data = newline.join(data_parts)
else:
newline = six.b("\r\n")
data_parts.extend([six.b("--%s" % boundary),
six.b('Content-Disposition: form-data; name="file"; filename="banner.txt"'),
six.b("Content-Type: text/plain"),
six.b(""),
six.b(self.login_banner_message)])
data_parts.extend([six.b("--%s--" % boundary), b""])
data = newline.join(data_parts)
headers = {"Content-Type": "multipart/form-data; boundary=%s" % boundary, "Content-Length": str(len(data))}
try:
rc, result = self.request("storage-systems/%s/login-banner" % self.ssid, method="POST", headers=headers, data=data)
except Exception as err:
self.module.fail_json(msg="Failed to set the storage system login banner message! Array Id [%s]. Error [%s]." % (self.ssid, to_native(err)))
else:
try:
rc, result = self.request("storage-systems/%s/login-banner" % self.ssid, method="DELETE")
except Exception as err:
self.module.fail_json(msg="Failed to clear the storage system login banner message! Array Id [%s]. Error [%s]." % (self.ssid, to_native(err)))
def update_controller_shelf_id(self):
"""Update controller shelf tray identifier."""
current_configuration = self.get_current_configuration()
try:
rc, tray = self.request("storage-systems/%s/symbol/updateTray?verboseErrorResponse=true" % self.ssid, method="POST",
data={"ref": current_configuration["controller_shelf_reference"], "trayID": self.controller_shelf_id})
except Exception as error:
self.module.fail_json(msg="Failed to update controller shelf identifier. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
def update(self):
"""Ensure the storage array's global settings are correctly set."""
change_required = False
if (self.change_autoload_enabled_required() or self.change_cache_block_size_required() or self.change_cache_flush_threshold_required() or
self.change_host_type_required() or self.change_name_required() or self.change_host_connectivity_reporting_enabled_required() or
self.change_login_banner_message_required() or self.change_controller_shelf_id_required()):
change_required = True
if change_required and not self.module.check_mode:
if self.change_autoload_enabled_required():
self.update_autoload()
if self.change_host_connectivity_reporting_enabled_required():
self.update_host_connectivity_reporting_enabled()
if self.change_cache_block_size_required() or self.change_cache_flush_threshold_required():
self.update_cache_settings()
if self.change_host_type_required():
self.update_host_type()
if self.change_name_required():
self.update_name()
if self.change_login_banner_message_required():
self.update_login_banner_message()
if self.change_controller_shelf_id_required():
self.update_controller_shelf_id()
current_configuration = self.get_current_configuration(update=True)
self.module.exit_json(changed=change_required,
cache_settings=current_configuration["cache_settings"],
default_host_type_index=current_configuration["default_host_type_index"],
automatic_load_balancing="enabled" if current_configuration["autoload_enabled"] else "disabled",
host_connectivity_reporting="enabled" if current_configuration["host_connectivity_reporting_enabled"] else "disabled",
array_name=current_configuration["name"],
login_banner_message=current_configuration["login_banner_message"],
controller_shelf_id=current_configuration["controller_shelf_id"])
def main():
global_settings = NetAppESeriesGlobalSettings()
global_settings.update()
if __name__ == "__main__":
main()

#!/usr/bin/python
# (c) 2020, NetApp, Inc
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = """
---
module: na_santricity_host
short_description: NetApp E-Series manage eseries hosts
description: Create, update, remove hosts on NetApp E-series storage arrays
author:
- Kevin Hulquest (@hulquest)
- Nathan Swartz (@ndswartz)
extends_documentation_fragment:
- netapp_eseries.santricity.santricity.santricity_doc
options:
name:
description:
- If the host doesn't yet exist, the label/name to assign at creation time.
- If the host already exists, this is used to uniquely identify the host when making any required changes.
type: str
required: True
aliases:
- label
state:
description:
- Set to absent to remove an existing host
- Set to present to modify or create a new host definition
type: str
choices:
- absent
- present
default: present
host_type:
description:
- Host type includes operating system and multipath considerations.
- If not specified, the default host type will be utilized. Default host type can be set using M(netapp_eseries.santricity.na_santricity_global).
- For storage array specific options see M(netapp_eseries.santricity.na_santricity_facts).
- All values are case-insensitive.
- AIX MPIO - The Advanced Interactive Executive (AIX) OS and the native MPIO driver
- AVT 4M - Silicon Graphics, Inc. (SGI) proprietary multipath driver
- HP-UX - The HP-UX OS with native multipath driver
- Linux ATTO - The Linux OS and the ATTO Technology, Inc. driver (must use ATTO FC HBAs)
- Linux DM-MP - The Linux OS and the native DM-MP driver
- Linux Pathmanager - The Linux OS and the SGI proprietary multipath driver
- Mac - The Mac OS and the ATTO Technology, Inc. driver
- ONTAP - FlexArray
- Solaris 11 or later - The Solaris 11 or later OS and the native MPxIO driver
- Solaris 10 or earlier - The Solaris 10 or earlier OS and the native MPxIO driver
- SVC - IBM SAN Volume Controller
- VMware - ESXi OS
- Windows - Windows Server OS and Windows MPIO with a DSM driver
- Windows Clustered - Clustered Windows Server OS and Windows MPIO with a DSM driver
- Windows ATTO - Windows OS and the ATTO Technology, Inc. driver
type: str
required: False
aliases:
- host_type_index
ports:
description:
- A list of host ports you wish to associate with the host.
- Host ports are uniquely identified by their WWN or IQN. Their assignment to a particular host is
identified by a label, which must also be unique.
type: list
required: False
suboptions:
type:
description:
- The interface type of the port to define.
- Acceptable choices depend on the capabilities of the target hardware/software platform.
required: true
choices:
- iscsi
- sas
- fc
- ib
- nvmeof
label:
description:
- A unique label to assign to this port assignment.
required: true
port:
description:
- The WWN or IQN of the hostPort to assign to this port definition.
required: true
force_port:
description:
- Allow ports that are already assigned to be re-assigned to your current host
required: false
type: bool
"""
EXAMPLES = """
- name: Define or update an existing host named "Host1"
na_santricity_host:
ssid: "1"
api_url: "https://192.168.1.100:8443/devmgr/v2"
api_username: "admin"
api_password: "adminpass"
validate_certs: true
name: "Host1"
state: present
host_type_index: Linux DM-MP
ports:
- type: "iscsi"
label: "PORT_1"
port: "iqn.1996-04.de.suse:01:56f86f9bd1fe"
- type: "fc"
label: "FC_1"
port: "10:00:FF:7C:FF:FF:FF:01"
- type: "fc"
label: "FC_2"
port: "10:00:FF:7C:FF:FF:FF:00"
- name: Ensure a host named "Host2" doesn't exist
na_santricity_host:
ssid: "1"
api_url: "https://192.168.1.100:8443/devmgr/v2"
api_username: "admin"
api_password: "adminpass"
validate_certs: true
name: "Host2"
state: absent
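# The following task is illustrative only; the IQN below is a placeholder value.
# With force_port enabled, a port already assigned to another host is reassigned
# to this host instead of causing the module to fail.
- name: Reassign an existing iSCSI port to a host named "Host3"
na_santricity_host:
ssid: "1"
api_url: "https://192.168.1.100:8443/devmgr/v2"
api_username: "admin"
api_password: "adminpass"
validate_certs: true
name: "Host3"
state: present
host_type: "Linux DM-MP"
force_port: true
ports:
- type: "iscsi"
label: "PORT_2"
port: "iqn.1996-04.de.suse:01:aabbccddee00"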
"""
RETURN = """
msg:
description:
- A user-readable description of the actions performed.
returned: on success
type: str
sample: The host has been created.
id:
description:
- the unique identifier of the host on the E-Series storage-system
returned: on success when state=present
type: str
sample: 00000000600A098000AAC0C3003004700AD86A52
ssid:
description:
- the unique identifier of the E-Series storage-system with the current API
returned: on success
type: str
sample: 1
api_url:
description:
- the URL of the API that this request was processed by
returned: on success
type: str
sample: https://webservices.example.com:8443
"""
import re
from ansible.module_utils._text import to_native
from ansible_collections.netapp_eseries.santricity.plugins.module_utils.santricity import NetAppESeriesModule
class NetAppESeriesHost(NetAppESeriesModule):
PORT_TYPES = ["iscsi", "sas", "fc", "ib", "nvmeof"]
def __init__(self):
ansible_options = dict(state=dict(type="str", default="present", choices=["absent", "present"]),
ports=dict(type="list", required=False),
force_port=dict(type="bool", default=False),
name=dict(type="str", required=True, aliases=["label"]),
host_type=dict(type="str", required=False, aliases=["host_type_index"]))
super(NetAppESeriesHost, self).__init__(ansible_options=ansible_options,
web_services_version="02.00.0000.0000",
supports_check_mode=True)
self.check_mode = self.module.check_mode
args = self.module.params
self.ports = args["ports"]
self.force_port = args["force_port"]
self.name = args["name"]
self.state = args["state"]
self.post_body = dict()
self.all_hosts = list()
self.host_obj = dict()
self.new_ports = list()
self.ports_for_update = list()
self.ports_for_removal = list()
# Update host type with the corresponding index
host_type = args["host_type"]
if host_type:
host_type = host_type.lower()
if host_type in [key.lower() for key in list(self.HOST_TYPE_INDEXES.keys())]:
self.host_type_index = self.HOST_TYPE_INDEXES[host_type]
elif host_type.isdigit():
self.host_type_index = int(args["host_type"])
else:
self.module.fail_json(msg="host_type must be either a host type name or a host type index found in the documentation.")
else:
self.host_type_index = None
if not self.url.endswith("/"):
self.url += "/"
# Fix port representation if they are provided with colons
if self.ports is not None:
for port in self.ports:
port["type"] = port["type"].lower()
port["port"] = port["port"].lower()
if port["type"] not in self.PORT_TYPES:
self.module.fail_json(msg="Invalid port type! Port interface type must be one of [%s]." % ", ".join(self.PORT_TYPES))
# Determine whether address is 16-byte WWPN and, if so, remove
if re.match(r"^(0x)?[0-9a-f]{16}$", port["port"].replace(":", "")):
port["port"] = port["port"].replace(":", '').replace("0x", "")
if port["type"] == "ib":
port["port"] = "0" * (32 - len(port["port"])) + port["port"]
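# Illustration of the normalization above (hypothetical values): a WWPN such as
# "10:00:FF:7C:FF:FF:FF:01" becomes "1000ff7cffffff01", and a bare 16-hex-digit
# InfiniBand GUID such as "0002c90300a1b2c3" is zero-padded to the 32-character
# form "00000000000000000002c90300a1b2c3".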
@property
def default_host_type(self):
"""Return the default host type index."""
try:
rc, default_index = self.request("storage-systems/%s/graph/xpath-filter?query=/sa/defaultHostTypeIndex" % self.ssid)
return default_index[0]
except Exception as error:
self.module.fail_json(msg="Failed to retrieve default host type index. Array [%s]. Error [%s]." % (self.ssid, to_native(error)))
@property
def valid_host_type(self):
host_types = None
try:
rc, host_types = self.request("storage-systems/%s/host-types" % self.ssid)
except Exception as err:
self.module.fail_json(msg="Failed to get host types. Array Id [%s]. Error [%s]." % (self.ssid, to_native(err)))
try:
match = list(filter(lambda host_type: host_type["index"] == self.host_type_index, host_types))[0]
return True
except IndexError:
self.module.fail_json(msg="There is no host type with index %s" % self.host_type_index)
def check_port_types(self):
"""Check to see whether the port interface types are available on storage system."""
try:
rc, interfaces = self.request("storage-systems/%s/interfaces?channelType=hostside" % self.ssid)
for port in self.ports:
for interface in interfaces:
# Check for IB iSER
if port["type"] == "ib" and "iqn" in port["port"]:
if ((interface["ioInterfaceTypeData"]["interfaceType"] == "iscsi" and
interface["ioInterfaceTypeData"]["iscsi"]["interfaceData"]["type"] == "infiniband" and
interface["ioInterfaceTypeData"]["iscsi"]["interfaceData"]["infinibandData"]["isIser"]) or
(interface["ioInterfaceTypeData"]["interfaceType"] == "ib" and
interface["ioInterfaceTypeData"]["ib"]["isISERSupported"])):
port["type"] = "iscsi"
break
# Check for NVMe
elif (port["type"] == "nvmeof" and "commandProtocolPropertiesList" in interface and
"commandProtocolProperties" in interface["commandProtocolPropertiesList"] and
interface["commandProtocolPropertiesList"]["commandProtocolProperties"]):
if interface["commandProtocolPropertiesList"]["commandProtocolProperties"][0]["commandProtocol"] == "nvme":
break
# Check SAS, FC, iSCSI
elif ((port["type"] == "fc" and interface["ioInterfaceTypeData"]["interfaceType"] == "fibre") or
(port["type"] == interface["ioInterfaceTypeData"]["interfaceType"])):
break
else:
# self.module.fail_json(msg="Invalid port type! Type [%s]. Port [%s]." % (port["type"], port["label"]))
self.module.warn("Port type not found in hostside interfaces! Type [%s]. Port [%s]." % (port["type"], port["label"]))
except Exception as error:
# For older versions of web services
for port in self.ports:
# Convert every IB-iSER port to iSCSI; breaking after the first match would skip the remaining ports.
if port["type"] == "ib" and "iqn" in port["port"]:
port["type"] = "iscsi"
def assigned_host_ports(self, apply_unassigning=False):
"""Determine if the hostPorts requested have already been assigned and return list of required used ports."""
used_host_ports = {}
for host in self.all_hosts:
if host["label"].lower() != self.name.lower():
for host_port in host["hostSidePorts"]:
# Compare expected ports with those from other hosts definitions.
for port in self.ports:
if port["port"] == host_port["address"] or port["label"].lower() == host_port["label"].lower():
if not self.force_port:
self.module.fail_json(msg="Port label or address is already used and force_port option is set to false!")
else:
# Determine port reference
port_ref = [port["hostPortRef"] for port in host["ports"]
if port["hostPortName"] == host_port["address"]]
port_ref.extend([port["initiatorRef"] for port in host["initiators"]
if port["nodeName"]["iscsiNodeName"] == host_port["address"]])
# Create dictionary of hosts containing list of port references
if host["hostRef"] not in used_host_ports.keys():
used_host_ports.update({host["hostRef"]: port_ref})
else:
used_host_ports[host["hostRef"]].extend(port_ref)
# Unassign assigned ports
if apply_unassigning:
for host_ref in used_host_ports.keys():
try:
rc, resp = self.request("storage-systems/%s/hosts/%s" % (self.ssid, host_ref), method="POST",
data={"portsToRemove": used_host_ports[host_ref]})
except Exception as err:
self.module.fail_json(msg="Failed to unassign host port. Host Id [%s]. Array Id [%s]. Ports [%s]. Error [%s]."
% (self.host_obj["id"], self.ssid, used_host_ports[host_ref], to_native(err)))
@property
def host_exists(self):
"""Determine if the requested host exists
As a side effect, set the full list of defined hosts in "all_hosts", and the target host in "host_obj".
"""
match = False
all_hosts = list()
try:
rc, all_hosts = self.request("storage-systems/%s/hosts" % self.ssid)
except Exception as err:
self.module.fail_json(msg="Failed to determine host existence. Array Id [%s]. Error [%s]." % (self.ssid, to_native(err)))
# Augment the host objects
for host in all_hosts:
for port in host["hostSidePorts"]:
port["type"] = port["type"].lower()
port["address"] = port["address"].lower()
# Augment hostSidePorts with their ID (this is an omission in the API)
ports = dict((port["label"], port["id"]) for port in host["ports"])
ports.update(dict((port["label"], port["id"]) for port in host["initiators"]))
for host_side_port in host["hostSidePorts"]:
if host_side_port["label"] in ports:
host_side_port["id"] = ports[host_side_port["label"]]
if host["label"].lower() == self.name.lower():
self.host_obj = host
match = True
self.all_hosts = all_hosts
return match
@property
def needs_update(self):
"""Determine whether we need to update the host object.
As a side effect, set the ports that need updating (ports_for_update) and the ports that need
to be added (new_ports) on self.
"""
changed = False
if self.host_obj["hostTypeIndex"] != self.host_type_index:
changed = True
current_host_ports = dict((port["id"], {"type": port["type"], "port": port["address"], "label": port["label"]})
for port in self.host_obj["hostSidePorts"])
if self.ports:
for port in self.ports:
for current_host_port_id in current_host_ports.keys():
if port == current_host_ports[current_host_port_id]:
current_host_ports.pop(current_host_port_id)
break
elif port["port"] == current_host_ports[current_host_port_id]["port"]:
if self.port_on_diff_host(port) and not self.force_port:
self.module.fail_json(msg="The port you specified [%s] is associated with a different host."
" Specify force_port as True or try a different port spec" % port)
if (port["label"] != current_host_ports[current_host_port_id]["label"] or
port["type"] != current_host_ports[current_host_port_id]["type"]):
current_host_ports.pop(current_host_port_id)
self.ports_for_update.append({"portRef": current_host_port_id, "port": port["port"],
"label": port["label"], "hostRef": self.host_obj["hostRef"]})
break
else:
self.new_ports.append(port)
self.ports_for_removal = list(current_host_ports.keys())
changed = any([self.new_ports, self.ports_for_update, self.ports_for_removal, changed])
return changed
def port_on_diff_host(self, arg_port):
""" Checks to see if a passed in port arg is present on a different host"""
for host in self.all_hosts:
# Only check "other" hosts
if host["name"].lower() != self.name.lower():
for port in host["hostSidePorts"]:
# Check if the port label is found in the port dict list of each host
if arg_port["label"].lower() == port["label"].lower() or arg_port["port"].lower() == port["address"].lower():
return True
return False
def update_host(self):
self.post_body = {"name": self.name, "hostType": {"index": self.host_type_index}}
# Remove ports that need reassigning from their current host.
if self.ports:
self.assigned_host_ports(apply_unassigning=True)
self.post_body["portsToUpdate"] = self.ports_for_update
self.post_body["portsToRemove"] = self.ports_for_removal
self.post_body["ports"] = self.new_ports
if not self.check_mode:
try:
rc, self.host_obj = self.request("storage-systems/%s/hosts/%s" % (self.ssid, self.host_obj["id"]), method="POST",
data=self.post_body, ignore_errors=True)
except Exception as err:
self.module.fail_json(msg="Failed to update host. Array Id [%s]. Error [%s]." % (self.ssid, to_native(err)))
self.module.exit_json(changed=True)
def create_host(self):
# Remove ports that need reassigning from their current host.
self.assigned_host_ports(apply_unassigning=True)
post_body = dict(name=self.name,
hostType=dict(index=self.host_type_index))
if self.ports:
post_body.update(ports=self.ports)
if not self.host_exists:
if not self.check_mode:
try:
rc, self.host_obj = self.request("storage-systems/%s/hosts" % self.ssid, method="POST", data=post_body, ignore_errors=True)
except Exception as err:
self.module.fail_json(msg="Failed to create host. Array Id [%s]. Error [%s]." % (self.ssid, to_native(err)))
else:
payload = self.build_success_payload(self.host_obj)
self.module.exit_json(changed=False, msg="Host already exists. Array Id [%s]. Host [%s]." % (self.ssid, self.name), **payload)
payload = self.build_success_payload(self.host_obj)
self.module.exit_json(changed=True, msg="Host created.", **payload)
def remove_host(self):
try:
rc, resp = self.request("storage-systems/%s/hosts/%s" % (self.ssid, self.host_obj["id"]), method="DELETE")
except Exception as err:
self.module.fail_json(msg="Failed to remove host. Host[%s]. Array Id [%s]. Error [%s]." % (self.host_obj["id"], self.ssid, to_native(err)))
def build_success_payload(self, host=None):
keys = [] # ["id"]
if host:
result = dict((key, host[key]) for key in keys)
else:
result = dict()
result["ssid"] = self.ssid
result["api_url"] = self.url
return result
def apply(self):
if self.state == "present":
if self.host_type_index is None:
self.host_type_index = self.default_host_type
self.check_port_types()
if self.host_exists:
if self.needs_update and self.valid_host_type:
self.update_host()
else:
payload = self.build_success_payload(self.host_obj)
self.module.exit_json(changed=False, msg="Host already present; no changes required.", **payload)
elif self.valid_host_type:
self.create_host()
else:
payload = self.build_success_payload()
if self.host_exists:
self.remove_host()
self.module.exit_json(changed=True, msg="Host removed.", **payload)
else:
self.module.exit_json(changed=False, msg="Host already absent.", **payload)
def main():
host = NetAppESeriesHost()
host.apply()
if __name__ == "__main__":
main()

#!/usr/bin/python
# (c) 2020, NetApp, Inc
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = """
---
module: na_santricity_hostgroup
short_description: NetApp E-Series manage array host groups
author:
- Kevin Hulquest (@hulquest)
- Nathan Swartz (@ndswartz)
description: Create, update or destroy host groups on a NetApp E-Series storage array.
extends_documentation_fragment:
- netapp_eseries.santricity.santricity.santricity_doc
options:
state:
description:
- Whether the specified host group should exist or not.
type: str
choices: ["present", "absent"]
default: present
name:
description:
- Name of the host group to manage.
type: str
required: true
hosts:
description:
- List of host names/labels to add to the group
type: list
required: false
"""
EXAMPLES = """
- name: Configure Hostgroup
na_santricity_hostgroup:
ssid: "1"
api_url: "https://192.168.1.100:8443/devmgr/v2"
api_username: "admin"
api_password: "adminpass"
validate_certs: true
state: present
name: example_hostgroup
hosts:
- host01
- host02
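# Illustrative only: remove the host group while leaving its member hosts
# defined on the array (they are simply unassigned from the group).
- name: Remove the example host group
na_santricity_hostgroup:
ssid: "1"
api_url: "https://192.168.1.100:8443/devmgr/v2"
api_username: "admin"
api_password: "adminpass"
validate_certs: true
state: absent
name: example_hostgroup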
"""
RETURN = """
clusterRef:
description: The unique identification value for this object. Other objects may use this reference value to refer to the cluster.
returned: always except when state is absent
type: str
sample: "3233343536373839303132333100000000000000"
confirmLUNMappingCreation:
description: If true, indicates that creation of LUN-to-volume mappings should require careful confirmation from the end-user, since such a mapping
will alter the volume access rights of other clusters, in addition to this one.
returned: always
type: bool
sample: false
hosts:
description: A list of the hosts that are part of the host group after all operations.
returned: always except when state is absent
type: list
sample: ["HostA","HostB"]
id:
description: The id number of the hostgroup
returned: always except when state is absent
type: str
sample: "3233343536373839303132333100000000000000"
isSAControlled:
description: If true, indicates that I/O accesses from this cluster are subject to the storage array's default LUN-to-volume mappings. If false,
indicates that I/O accesses from the cluster are subject to cluster-specific LUN-to-volume mappings.
returned: always except when state is absent
type: bool
sample: false
label:
description: The user-assigned, descriptive label string for the cluster.
returned: always
type: str
sample: "MyHostGroup"
name:
description: same as label
returned: always except when state is absent
type: str
sample: "MyHostGroup"
protectionInformationCapableAccessMethod:
description: This field is true if the host has a PI capable access method.
returned: always except when state is absent
type: bool
sample: true
"""
from ansible.module_utils._text import to_native
from ansible_collections.netapp_eseries.santricity.plugins.module_utils.santricity import NetAppESeriesModule, create_multipart_formdata, request
class NetAppESeriesHostGroup(NetAppESeriesModule):
EXPANSION_TIMEOUT_SEC = 10
DEFAULT_DISK_POOL_MINIMUM_DISK_COUNT = 11
def __init__(self):
version = "02.00.0000.0000"
ansible_options = dict(
state=dict(choices=["present", "absent"], type="str", default="present"),
name=dict(required=True, type="str"),
hosts=dict(required=False, type="list"))
super(NetAppESeriesHostGroup, self).__init__(ansible_options=ansible_options,
web_services_version=version,
supports_check_mode=True)
args = self.module.params
self.state = args["state"]
self.name = args["name"]
self.hosts_list = args["hosts"]
self.current_host_group = None
self.hosts_cache = None
@property
def hosts(self):
"""Retrieve a list of host reference identifiers that should be associated with the host group."""
if self.hosts_cache is None:
self.hosts_cache = []
existing_hosts = []
if self.hosts_list:
try:
rc, existing_hosts = self.request("storage-systems/%s/hosts" % self.ssid)
except Exception as error:
self.module.fail_json(msg="Failed to retrieve hosts information. Array id [%s]. Error[%s]."
% (self.ssid, to_native(error)))
for host in self.hosts_list:
for existing_host in existing_hosts:
if host in existing_host["id"] or host.lower() in existing_host["name"].lower():
self.hosts_cache.append(existing_host["id"])
break
else:
self.module.fail_json(msg="Expected host does not exist. Array id [%s]. Host [%s]." % (self.ssid, host))
self.hosts_cache.sort()
return self.hosts_cache
@property
def host_groups(self):
"""Retrieve a list of existing host groups."""
host_groups = []
hosts = []
try:
rc, host_groups = self.request("storage-systems/%s/host-groups" % self.ssid)
rc, hosts = self.request("storage-systems/%s/hosts" % self.ssid)
except Exception as error:
self.module.fail_json(msg="Failed to retrieve host group information. Array id [%s]. Error[%s]."
% (self.ssid, to_native(error)))
host_groups = [{"id": group["clusterRef"], "name": group["name"]} for group in host_groups]
for group in host_groups:
hosts_ids = []
for host in hosts:
if group["id"] == host["clusterRef"]:
hosts_ids.append(host["hostRef"])
group.update({"hosts": hosts_ids})
return host_groups
@property
def current_hosts_in_host_group(self):
"""Retrieve the current hosts associated with the current hostgroup."""
current_hosts = []
for group in self.host_groups:
if group["name"] == self.name:
current_hosts = group["hosts"]
break
return current_hosts
def unassign_hosts(self, host_list=None):
"""Unassign hosts from host group."""
if host_list is None:
host_list = self.current_host_group["hosts"]
for host_id in host_list:
try:
rc, resp = self.request("storage-systems/%s/hosts/%s/move" % (self.ssid, host_id),
method="POST", data={"group": "0000000000000000000000000000000000000000"})
except Exception as error:
self.module.fail_json(msg="Failed to unassign hosts from host group. Array id [%s]. Host id [%s]."
" Error[%s]." % (self.ssid, host_id, to_native(error)))
def delete_host_group(self, unassign_hosts=True):
"""Delete host group"""
if unassign_hosts:
self.unassign_hosts()
try:
rc, resp = self.request("storage-systems/%s/host-groups/%s" % (self.ssid, self.current_host_group["id"]), method="DELETE")
except Exception as error:
self.module.fail_json(msg="Failed to delete host group. Array id [%s]. Error[%s]." % (self.ssid, to_native(error)))
def create_host_group(self):
"""Create host group."""
data = {"name": self.name, "hosts": self.hosts}
response = None
try:
rc, response = self.request("storage-systems/%s/host-groups" % self.ssid, method="POST", data=data)
except Exception as error:
self.module.fail_json(msg="Failed to create host group. Array id [%s]. Error[%s]." % (self.ssid, to_native(error)))
return response
def update_host_group(self):
"""Update host group."""
data = {"name": self.name, "hosts": self.hosts}
# unassign hosts that should not be part of the hostgroup
desired_host_ids = self.hosts
for host in self.current_hosts_in_host_group:
if host not in desired_host_ids:
self.unassign_hosts([host])
update_response = None
try:
rc, update_response = self.request("storage-systems/%s/host-groups/%s" % (self.ssid, self.current_host_group["id"]), method="POST", data=data)
except Exception as error:
self.module.fail_json(msg="Failed to create host group. Array id [%s]. Error[%s]." % (self.ssid, to_native(error)))
return update_response
def apply(self):
"""Apply desired host group state to the storage array."""
changes_required = False
# Search for existing host group match
for group in self.host_groups:
if group["name"] == self.name:
self.current_host_group = group
self.current_host_group["hosts"].sort()
break
# Determine whether changes are required
if self.state == "present":
if self.current_host_group:
if self.hosts and self.hosts != self.current_host_group["hosts"]:
changes_required = True
else:
if not self.name:
self.module.fail_json(msg="The option name must be supplied when creating a new host group. Array id [%s]." % self.ssid)
changes_required = True
elif self.current_host_group:
changes_required = True
# Apply any necessary changes
msg = ""
if changes_required and not self.module.check_mode:
msg = "No changes required."
if self.state == "present":
if self.current_host_group:
if self.hosts != self.current_host_group["hosts"]:
msg = self.update_host_group()
else:
msg = self.create_host_group()
elif self.current_host_group:
self.delete_host_group()
msg = "Host group deleted. Array Id [%s]. Host group [%s]." % (self.ssid, self.current_host_group["name"])
self.module.exit_json(msg=msg, changed=changes_required)
def main():
hostgroup = NetAppESeriesHostGroup()
hostgroup.apply()
if __name__ == "__main__":
main()
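For reference, the membership comparison that drives apply() above (sort both host-ID lists, change only on mismatch) can be sketched in isolation. The host reference IDs below are hypothetical placeholders, not values from a real array:

```python
# Standalone sketch of the idempotency check used by apply():
# host group membership only changes when the sorted ID lists differ.
def membership_changed(desired_hosts, current_hosts):
    """Return True when the host group membership must be updated."""
    return sorted(desired_hosts) != sorted(current_hosts)

# Hypothetical host reference IDs, for illustration only.
print(membership_changed(["8400A0", "8400B0"], ["8400B0", "8400A0"]))  # False: same members
print(membership_changed(["8400A0"], ["8400A0", "8400B0"]))            # True: member added
```

Sorting before comparison is why apply() calls `self.hosts_cache.sort()` and sorts the current group's hosts: it makes the check order-insensitive.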


@@ -0,0 +1,257 @@
#!/usr/bin/python
# (c) 2020, NetApp, Inc
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = """
---
module: na_santricity_ib_iser_interface
short_description: NetApp E-Series manage InfiniBand iSER interface configuration
description:
- Configure settings of an E-Series InfiniBand iSER interface IPv4 address configuration.
author:
- Michael Price (@lmprice)
- Nathan Swartz (@ndswartz)
extends_documentation_fragment:
- netapp_eseries.santricity.santricity.santricity_doc
options:
controller:
description:
- The controller that owns the port you want to configure.
- Controller names are presented alphabetically, with the first controller as A, the second as B, and so on.
- Current hardware models have either 1 or 2 available controllers, but that is not a guaranteed hard limitation and could change in the future.
type: str
required: true
choices:
- A
- B
channel:
description:
- The InfiniBand HCA port you wish to modify.
      - Ports are numbered left to right, starting with 1.
type: int
required: true
address:
description:
- The IPv4 address to assign to the interface.
- Should be specified in xx.xx.xx.xx form.
type: str
required: true
notes:
- Check mode is supported.
"""
EXAMPLES = """
- name: Configure the first port on the A controller with a static IPv4 address
na_santricity_ib_iser_interface:
ssid: "1"
api_url: "https://192.168.1.100:8443/devmgr/v2"
api_username: "admin"
api_password: "adminpass"
validate_certs: true
controller: "A"
channel: "1"
address: "192.168.1.100"
"""
RETURN = """
msg:
description: Success message
returned: on success
type: str
sample: The interface settings have been updated.
enabled:
description:
- Indicates whether IPv4 connectivity has been enabled or disabled.
    - This does not necessarily indicate connectivity. If DHCP was enabled without a DHCP server, for instance,
      it is unlikely that the configuration will actually be valid.
returned: on success
sample: True
type: bool
"""
import re
from ansible_collections.netapp_eseries.santricity.plugins.module_utils.santricity import NetAppESeriesModule
from ansible.module_utils._text import to_native
class NetAppESeriesIbIserInterface(NetAppESeriesModule):
def __init__(self):
ansible_options = dict(controller=dict(type="str", required=True, choices=["A", "B"]),
channel=dict(type="int"),
address=dict(type="str", required=True))
super(NetAppESeriesIbIserInterface, self).__init__(ansible_options=ansible_options,
web_services_version="02.00.0000.0000",
supports_check_mode=True)
args = self.module.params
self.controller = args["controller"]
self.channel = args["channel"]
self.address = args["address"]
self.check_mode = self.module.check_mode
self.get_target_interface_cache = None
# A relatively primitive regex to validate that the input is formatted like a valid ip address
address_regex = re.compile(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$")
if self.address and not address_regex.match(self.address):
self.module.fail_json(msg="An invalid ip address was provided for address.")
def get_interfaces(self):
"""Retrieve and filter all hostside interfaces for IB iSER."""
ifaces = []
try:
rc, ifaces = self.request("storage-systems/%s/interfaces?channelType=hostside" % self.ssid)
except Exception as err:
self.module.fail_json(msg="Failed to retrieve defined host interfaces. Array Id [%s]. Error [%s]." % (self.ssid, to_native(err)))
# Filter out non-ib-iser interfaces
ib_iser_ifaces = []
for iface in ifaces:
if ((iface["ioInterfaceTypeData"]["interfaceType"] == "iscsi" and
iface["ioInterfaceTypeData"]["iscsi"]["interfaceData"]["type"] == "infiniband" and
iface["ioInterfaceTypeData"]["iscsi"]["interfaceData"]["infinibandData"]["isIser"]) or
(iface["ioInterfaceTypeData"]["interfaceType"] == "ib" and
iface["ioInterfaceTypeData"]["ib"]["isISERSupported"])):
ib_iser_ifaces.append(iface)
if not ib_iser_ifaces:
self.module.fail_json(msg="Failed to detect any InfiniBand iSER interfaces! Array [%s] - %s." % self.ssid)
return ib_iser_ifaces
def get_controllers(self):
"""Retrieve a mapping of controller labels to their references
{
'A': '070000000000000000000001',
'B': '070000000000000000000002',
}
:return: the controllers defined on the system
"""
controllers = list()
try:
rc, controllers = self.request("storage-systems/%s/graph/xpath-filter?query=/controller/id" % self.ssid)
except Exception as err:
self.module.fail_json(msg="Failed to retrieve controller list! Array Id [%s]. Error [%s]."
% (self.ssid, to_native(err)))
controllers.sort()
controllers_dict = {}
i = ord('A')
for controller in controllers:
label = chr(i)
controllers_dict[label] = controller
i += 1
return controllers_dict
def get_ib_link_status(self):
"""Determine the infiniband link status. Returns dictionary keyed by interface reference number."""
link_statuses = {}
try:
rc, result = self.request("storage-systems/%s/hardware-inventory" % self.ssid)
for link in result["ibPorts"]:
link_statuses.update({link["channelPortRef"]: link["linkState"]})
except Exception as error:
self.module.fail_json(msg="Failed to retrieve ib link status information! Array Id [%s]. Error [%s]."
% (self.ssid, to_native(error)))
return link_statuses
def get_target_interface(self):
"""Search for the selected IB iSER interface"""
if self.get_target_interface_cache is None:
ifaces = self.get_interfaces()
ifaces_status = self.get_ib_link_status()
controller_id = self.get_controllers()[self.controller]
controller_ifaces = []
for iface in ifaces:
if iface["ioInterfaceTypeData"]["interfaceType"] == "iscsi" and iface["controllerRef"] == controller_id:
controller_ifaces.append([iface["ioInterfaceTypeData"]["iscsi"]["channel"], iface,
ifaces_status[iface["ioInterfaceTypeData"]["iscsi"]["channelPortRef"]]])
elif iface["ioInterfaceTypeData"]["interfaceType"] == "ib" and iface["controllerRef"] == controller_id:
controller_ifaces.append([iface["ioInterfaceTypeData"]["ib"]["channel"], iface,
iface["ioInterfaceTypeData"]["ib"]["linkState"]])
sorted_controller_ifaces = sorted(controller_ifaces)
if self.channel < 1 or self.channel > len(controller_ifaces):
status_msg = ", ".join(["%s (link %s)" % (index + 1, values[2])
for index, values in enumerate(sorted_controller_ifaces)])
self.module.fail_json(msg="Invalid controller %s HCA channel. Available channels: %s, Array Id [%s]."
% (self.controller, status_msg, self.ssid))
self.get_target_interface_cache = sorted_controller_ifaces[self.channel - 1][1]
return self.get_target_interface_cache
def is_change_required(self):
"""Determine whether change is required."""
changed_required = False
iface = self.get_target_interface()
if (iface["ioInterfaceTypeData"]["interfaceType"] == "iscsi" and
iface["ioInterfaceTypeData"]["iscsi"]["ipv4Data"]["ipv4AddressData"]["ipv4Address"] != self.address):
changed_required = True
elif iface["ioInterfaceTypeData"]["interfaceType"] == "ib" and iface["ioInterfaceTypeData"]["ib"]["isISERSupported"]:
for properties in iface["commandProtocolPropertiesList"]["commandProtocolProperties"]:
if (properties["commandProtocol"] == "scsi" and
properties["scsiProperties"]["scsiProtocolType"] == "iser" and
properties["scsiProperties"]["iserProperties"]["ipv4Data"]["ipv4AddressData"]["ipv4Address"] != self.address):
changed_required = True
return changed_required
def make_request_body(self):
iface = self.get_target_interface()
body = {"iscsiInterface": iface["ioInterfaceTypeData"][iface["ioInterfaceTypeData"]["interfaceType"]]["id"],
"settings": {"tcpListenPort": [],
"ipv4Address": [self.address],
"ipv4SubnetMask": [],
"ipv4GatewayAddress": [],
"ipv4AddressConfigMethod": [],
"maximumFramePayloadSize": [],
"ipv4VlanId": [],
"ipv4OutboundPacketPriority": [],
"ipv4Enabled": [],
"ipv6Enabled": [],
"ipv6LocalAddresses": [],
"ipv6RoutableAddresses": [],
"ipv6PortRouterAddress": [],
"ipv6AddressConfigMethod": [],
"ipv6OutboundPacketPriority": [],
"ipv6VlanId": [],
"ipv6HopLimit": [],
"ipv6NdReachableTime": [],
"ipv6NdRetransmitTime": [],
"ipv6NdStaleTimeout": [],
"ipv6DuplicateAddressDetectionAttempts": [],
"maximumInterfaceSpeed": []}}
return body
def update(self):
"""Make any necessary updates."""
update_required = self.is_change_required()
if update_required and not self.check_mode:
try:
rc, result = self.request("storage-systems/%s/symbol/setIscsiInterfaceProperties"
% self.ssid, method="POST", data=self.make_request_body())
except Exception as error:
self.module.fail_json(msg="Failed to modify the interface! Array Id [%s]. Error [%s]."
% (self.ssid, to_native(error)))
self.module.exit_json(msg="The interface settings have been updated.", changed=update_required)
self.module.exit_json(msg="No changes were required.", changed=update_required)
def main():
ib_iser = NetAppESeriesIbIserInterface()
ib_iser.update()
if __name__ == "__main__":
main()
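The module above validates `address` with what its own comment calls a "relatively primitive" regex, which also matches out-of-range octets such as 999.1.1.1. A stricter standalone check could lean on the standard library's ipaddress module; the following is a sketch for comparison, not part of the shipped module:

```python
import ipaddress
import re

# Same shape as the module's check: four dot-separated 1-3 digit groups.
PRIMITIVE_IPV4 = re.compile(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$")

def is_valid_ipv4(address):
    """Reject malformed addresses and out-of-range octets."""
    try:
        ipaddress.IPv4Address(address)
        return True
    except ipaddress.AddressValueError:
        return False

print(bool(PRIMITIVE_IPV4.match("999.1.1.1")))  # True: the regex accepts it
print(is_valid_ipv4("999.1.1.1"))               # False: octet out of range
print(is_valid_ipv4("192.168.1.100"))           # True
```

The regex is cheap and has no dependencies, which is likely why the module uses it; the ipaddress variant trades a few lines for full range checking.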
