File: //opt/saltstack/salt/lib/python3.10/site-packages/salt/states/__pycache__/pcs.cpython-310.pyc
Management of Pacemaker/Corosync clusters with PCS
==================================================
A state module to manage Pacemaker/Corosync clusters
with the Pacemaker/Corosync configuration system (PCS).

.. versionadded:: 2016.11.0

:depends: pcs
Walkthrough of a complete PCS cluster setup:
http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/
Requirements:
    PCS is installed, the pcsd service is started, and
    the password for the hacluster user is set and known.
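These prerequisites can themselves be managed with ordinary Salt states. A minimal sketch, assuming the package is named ``pcs``, the service is ``pcsd``, and a pre-hashed password is supplied (all of which vary by distribution):

.. code-block:: yaml

    pcs_installed:
      pkg.installed:
        - name: pcs

    pcsd_running:
      service.running:
        - name: pcsd
        - enable: True
        - require:
          - pkg: pcs_installed

    hacluster_password:
      user.present:
        - name: hacluster
        - password: '$6$...'   # placeholder: shadow-style hash of the real password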
Remark on the cibname variable used in the examples:

The use of the cibname variable is optional.
Use it only if you want to deploy your changes into a cibfile first and then push it.
This only makes sense if you want to deploy multiple changes (which depend on each other) to the cluster at once.
At first the cibfile must be created:

.. code-block:: yaml

    mysql_pcs__cib_present_cib_for_galera:
      pcs.cib_present:
        - cibname: cib_for_galera
        - scope: None
        - extra_args: None
Then the cibfile can be modified by creating resources (only one resource is created here for demonstration; see also step 7):

.. code-block:: yaml

    mysql_pcs__resource_present_galera:
      pcs.resource_present:
        - resource_id: galera
        - resource_type: "ocf:heartbeat:galera"
        - resource_options:
          - 'wsrep_cluster_address=gcomm://node1.example.org,node2.example.org,node3.example.org'
          - '--master'
        - cibname: cib_for_galera
After modifying the cibfile, it can be pushed to the live CIB in the cluster:

.. code-block:: yaml

    mysql_pcs__cib_pushed_cib_for_galera:
      pcs.cib_pushed:
        - cibname: cib_for_galera
        - scope: None
        - extra_args: None
Create a cluster from scratch:

1. Authorize the nodes to each other. This probably won't work on Ubuntu, as
   Ubuntu rolls out a default cluster that needs to be destroyed before a
   new cluster can be created. That is a little complicated, so in most cases
   it's best to just run the cluster_setup below:

   .. code-block:: yaml

       pcs_auth__auth:
         pcs.auth:
           - nodes:
             - node1.example.com
             - node2.example.com
           - pcsuser: hacluster
           - pcspasswd: hoonetorg
2. Do the initial cluster setup:

   .. code-block:: yaml

       pcs_setup__setup:
         pcs.cluster_setup:
           - nodes:
             - node1.example.com
             - node2.example.com
           - pcsclustername: pcscluster
           - extra_args:
             - '--start'
             - '--enable'
           - pcsuser: hacluster
           - pcspasswd: hoonetorg
3. Optional: Set cluster properties:

   .. code-block:: yaml

       pcs_properties__prop_has_value_no-quorum-policy:
         pcs.prop_has_value:
           - prop: no-quorum-policy
           - value: ignore
           - cibname: cib_for_cluster_settings
4. Optional: Set resource defaults:

   .. code-block:: yaml

       pcs_properties__resource_defaults_to_resource-stickiness:
         pcs.resource_defaults_to:
           - default: resource-stickiness
           - value: 100
           - cibname: cib_for_cluster_settings
5. Optional: Set resource op defaults:

   .. code-block:: yaml

       pcs_properties__resource_op_defaults_to_monitor-interval:
         pcs.resource_op_defaults_to:
           - op_default: monitor-interval
           - value: 60s
           - cibname: cib_for_cluster_settings
6. Configure fencing (often not optional on a production-ready cluster!):

   .. code-block:: yaml

       pcs_stonith__created_eps_fence:
         pcs.stonith_present:
           - stonith_id: eps_fence
           - stonith_device_type: fence_eps
           - stonith_device_options:
             - 'pcmk_host_map=node1.example.org:01;node2.example.org:02'
             - 'ipaddr=myepsdevice.example.org'
             - 'power_wait=5'
             - 'verbose=1'
             - 'debug=/var/log/pcsd/eps_fence.log'
             - 'login=hidden'
             - 'passwd=hoonetorg'
           - cibname: cib_for_stonith
7. Add resources to your cluster:

   .. code-block:: yaml

       mysql_pcs__resource_present_galera:
         pcs.resource_present:
           - resource_id: galera
           - resource_type: "ocf:heartbeat:galera"
           - resource_options:
             - 'wsrep_cluster_address=gcomm://node1.example.org,node2.example.org,node3.example.org'
             - '--master'
           - cibname: cib_for_galera
8. Optional: Add constraints (locations, colocations, orders):

   .. code-block:: yaml

       haproxy_pcs__constraint_present_colocation-vip_galera-haproxy-clone-INFINITY:
         pcs.constraint_present:
           - constraint_id: colocation-vip_galera-haproxy-clone-INFINITY
           - constraint_type: colocation
           - constraint_options:
             - 'add'
             - 'vip_galera'
             - 'with'
             - 'haproxy-clone'
           - cibname: cib_for_haproxy

.. versionadded:: 2016.3.0
import logging
import os

import salt.utils.files
import salt.utils.path
import salt.utils.stringutils

log = logging.getLogger(__name__)


def __virtual__():
    """
    Only load if pcs package is installed
    """
    if salt.utils.path.which("pcs"):
        return "pcs"
    return (False, "Unable to locate command: pcs")


def _file_read(path):
    """
    Read a file and return content
    """
    content = False
    if os.path.exists(path):
        with salt.utils.files.fopen(path, "r+") as fp_:
            content = salt.utils.stringutils.to_unicode(fp_.read())
        fp_.close()
    return content


def _file_write(path, content):
    """
    Write content to a file
    """
    with salt.utils.files.fopen(path, "w+") as fp_:
        fp_.write(salt.utils.stringutils.to_str(content))
    fp_.close()


def _get_cibpath():
    """
    Get the path to the directory on the minion where CIB's are saved
    """
    cibpath = os.path.join(__opts__["cachedir"], "pcs", __env__)
    log.trace("cibpath: %s", cibpath)
    return cibpath


def _get_cibfile(cibname):
    """
    Get the full path of a cached CIB-file with the name of the CIB
    """
    cibfile = os.path.join(_get_cibpath(), "{}.{}".format(cibname, "cib"))
    log.trace("cibfile: %s", cibfile)
    return cibfile


def _get_cibfile_tmp(cibname):
    """
    Get the full path of a temporary CIB-file with the name of the CIB
    """
    cibfile_tmp = "{}.tmp".format(_get_cibfile(cibname))
    log.trace("cibfile_tmp: %s", cibfile_tmp)
    return cibfile_tmp


def _get_cibfile_cksum(cibname):
    """
    Get the full path of the file containing a checksum of a CIB-file with the name of the CIB
    """
    cibfile_cksum = "{}.cksum".format(_get_cibfile(cibname))
    log.trace("cibfile_cksum: %s", cibfile_cksum)
    return cibfile_cksum
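Taken together, these helpers derive every cached-CIB path from a single CIB name. A standalone sketch of that naming scheme (the cache directory and saltenv values are assumptions for illustration; on a real minion they come from ``__opts__["cachedir"]`` and ``__env__``):

```python
import os

# Assumed values standing in for the minion's __opts__["cachedir"] and __env__.
CACHEDIR = "/var/cache/salt/minion"
SALTENV = "base"


def cib_paths(cibname):
    """Mirror the module's naming scheme for a cached CIB and its companion files."""
    cibpath = os.path.join(CACHEDIR, "pcs", SALTENV)                  # like _get_cibpath()
    cibfile = os.path.join(cibpath, "{}.{}".format(cibname, "cib"))   # like _get_cibfile()
    return cibfile, cibfile + ".tmp", cibfile + ".cksum"


cibfile, tmp, cksum = cib_paths("cib_for_galera")
print(cibfile)  # /var/cache/salt/minion/pcs/base/cib_for_galera.cib
```

So a state run that targets ``cibname: cib_for_galera`` works against one ``.cib`` file, stages changes in the ``.tmp`` sibling, and detects drift via the ``.cksum`` sibling.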