# /opt/saltstack/salt/lib/python3.10/site-packages/salt/states/glusterfs.py
"""
Manage GlusterFS pool.
"""

import logging

import salt.utils.cloud as suc
import salt.utils.network
from salt.exceptions import SaltCloudException

# __salt__ and __opts__ are injected into this module by the Salt loader.

log = logging.getLogger(__name__)

RESULT_CODES = [
    "Peer {0} added successfully.",
    "Probe on localhost not needed",
    "Host {0} is already in the peer group",
    "Host {0} is already part of another cluster",
    "Volume on {0} conflicts with existing volumes",
    "UUID of {0} is the same as local uuid",
    '{0} responded with "unknown peer". This could happen if {0} doesn\'t have localhost defined',
    "Failed to add peer. Information on {0}'s logs",
    "Cluster quorum is not met. Changing peers is not allowed.",
    "Failed to update list of missed snapshots from {0}",
    "Conflict comparing list of snapshots from {0}",
    "Peer is already being detached from cluster.",
]


def __virtual__():
    """
    Only load this module if the gluster command exists
    """
    if "glusterfs.list_volumes" in __salt__:
        return "glusterfs"
    return (False, "glusterfs module could not be loaded")


def peered(name):
    """
    Check if node is peered.

    name
        The remote host with which to peer.

    .. code-block:: yaml

        peer-cluster:
          glusterfs.peered:
            - name: two

        peer-clusters:
          glusterfs.peered:
            - names:
              - one
              - two
              - three
              - four
    """
    ret = {"name": name, "changes": {}, "comment": "", "result": False}

    try:
        suc.check_name(name, "a-zA-Z0-9._-")
    except SaltCloudException:
        ret["comment"] = "Invalid characters in peer name."
        return ret

    # If the name resolves to one of this host's own addresses (or to a
    # loopback address), peering is a no-op.
    name_ips = salt.utils.network.host_to_ips(name)
    if name_ips is not None:
        this_ips = set(salt.utils.network.ip_addrs())
        this_ips.update(salt.utils.network.ip_addrs6())
        if any(
            salt.utils.network.is_loopback(addr) for addr in name_ips
        ) or this_ips.intersection(name_ips):
            ret["result"] = True
            ret["comment"] = "Peering with localhost is not needed"
            return ret

    peers = __salt__["glusterfs.peer_status"]()

    if peers and any(name in v["hostnames"] for v in peers.values()):
        ret["result"] = True
        ret["comment"] = f"Host {name} already peered"
        return ret

    if __opts__["test"]:
        ret["comment"] = f"Peer {name} will be added."
        ret["result"] = None
        return ret

    if not __salt__["glusterfs.peer"](name):
        ret["comment"] = "Failed to peer with {}, please check logs for errors".format(
            name
        )
        return ret

    # Double-check that the new peer actually shows up in the peer list.
    newpeers = __salt__["glusterfs.peer_status"]()
    if newpeers and any(name in v["hostnames"] for v in newpeers.values()):
        ret["result"] = True
        ret["comment"] = f"Host {name} successfully peered"
        ret["changes"] = {"new": newpeers, "old": peers}
    else:
        ret["comment"] = (
            "Host {} was successfully peered but did not appear in the list of"
            " peers".format(name)
        )
    return ret


def volume_present(
    name,
    bricks,
    stripe=False,
    replica=False,
    device_vg=False,
    transport="tcp",
    start=False,
    force=False,
    arbiter=False,
):
    """
    Ensure that the volume exists

    name
        name of the volume

    bricks
        list of brick paths

    replica
        replica count for volume

    arbiter
        use every third brick as arbiter (metadata only)

        .. versionadded:: 2019.2.0

    start
        ensure that the volume is also started

    .. code-block:: yaml

        myvolume:
          glusterfs.volume_present:
            - bricks:
                - host1:/srv/gluster/drive1
                - host2:/srv/gluster/drive2

        Replicated Volume:
          glusterfs.volume_present:
            - name: volume2
            - bricks:
              - host1:/srv/gluster/drive2
              - host2:/srv/gluster/drive3
            - replica: 2
            - start: True

        Replicated Volume with arbiter brick:
          glusterfs.volume_present:
            - name: volume3
            - bricks:
              - host1:/srv/gluster/drive2
              - host2:/srv/gluster/drive3
              - host3:/srv/gluster/drive4
            - replica: 3
            - arbiter: True
            - start: True
    """
    ret = {"name": name, "changes": {}, "comment": "", "result": False}

    if suc.check_name(name, "a-zA-Z0-9._-"):
        ret["comment"] = "Invalid characters in volume name."
        return ret

    volumes = __salt__["glusterfs.list_volumes"]()
    if name not in volumes:
        if __opts__["test"]:
            comment = f"Volume {name} will be created"
            if start:
                comment += " and started"
            ret["comment"] = comment
            ret["result"] = None
            return ret

        vol_created = __salt__["glusterfs.create_volume"](
            name, bricks, stripe, replica, device_vg, transport, start, force, arbiter
        )

        if not vol_created:
            ret["comment"] = f"Creation of volume {name} failed"
            return ret

        old_volumes = volumes
        volumes = __salt__["glusterfs.list_volumes"]()
        if name in volumes:
            ret["changes"] = {"new": volumes, "old": old_volumes}
            ret["comment"] = f"Volume {name} is created"
    else:
        ret["comment"] = f"Volume {name} already exists"

    if start:
        if __opts__["test"]:
            ret["comment"] = ret["comment"] + " and will be started"
            ret["result"] = None
            return ret
        if int(__salt__["glusterfs.info"]()[name]["status"]) == 1:
            ret["result"] = True
            ret["comment"] = ret["comment"] + " and is started"
        else:
            vol_started = __salt__["glusterfs.start_volume"](name)
            if vol_started:
                ret["result"] = True
                ret["comment"] = ret["comment"] + " and is now started"
                if not ret["changes"]:
                    ret["changes"] = {"new": "started", "old": "stopped"}
            else:
                ret["comment"] = (
                    ret["comment"]
                    + " but failed to start. Check logs for further information"
                )
                return ret

    if __opts__["test"]:
        ret["result"] = None
    else:
        ret["result"] = True
    return ret


def started(name):
    """
    Check if volume has been started

    name
        name of the volume

    .. code-block:: yaml

        mycluster:
          glusterfs.started: []
    """
    ret = {"name": name, "changes": {}, "comment": "", "result": False}

    volinfo = __salt__["glusterfs.info"]()
    if name not in volinfo:
        ret["result"] = False
        ret["comment"] = f"Volume {name} does not exist"
        return ret

    if int(volinfo[name]["status"]) == 1:
        ret["comment"] = f"Volume {name} is already started"
        ret["result"] = True
        return ret
    elif __opts__["test"]:
        ret["comment"] = f"Volume {name} will be started"
        ret["result"] = None
        return ret

    vol_started = __salt__["glusterfs.start_volume"](name)
    if vol_started:
        ret["result"] = True
        ret["comment"] = f"Volume {name} is started"
        ret["changes"] = {"new": "started", "old": "stopped"}
    else:
        ret["result"] = False
        ret["comment"] = f"Failed to start volume {name}"

    return ret


def add_volume_bricks(name, bricks):
    """
    Add brick(s) to an existing volume

    name
        Volume name

    bricks
        List of bricks to add to the volume

    .. code-block:: yaml

        myvolume:
          glusterfs.add_volume_bricks:
            - bricks:
                - host1:/srv/gluster/drive1
                - host2:/srv/gluster/drive2

        Replicated Volume:
          glusterfs.add_volume_bricks:
            - name: volume2
            - bricks:
              - host1:/srv/gluster/drive2
              - host2:/srv/gluster/drive3
    """
    ret = {"name": name, "changes": {}, "comment": "", "result": False}

    volinfo = __salt__["glusterfs.info"]()
    if name not in volinfo:
        ret["comment"] = f"Volume {name} does not exist"
        return ret

    if int(volinfo[name]["status"]) != 1:
        ret["comment"] = f"Volume {name} is not started"
        return ret

    current_bricks = [brick["path"] for brick in volinfo[name]["bricks"].values()]
    if not set(bricks) - set(current_bricks):
        ret["result"] = True
        ret["comment"] = f"Bricks already added in volume {name}"
        return ret

    bricks_added = __salt__["glusterfs.add_volume_bricks"](name, bricks)
    if bricks_added:
        ret["result"] = True
        ret["comment"] = f"Bricks successfully added to volume {name}"
        new_bricks = [
            brick["path"]
            for brick in __salt__["glusterfs.info"]()[name]["bricks"].values()
        ]
        ret["changes"] = {"new": new_bricks, "old": current_bricks}
        return ret

    ret["comment"] = f"Adding bricks to volume {name} failed"
    return ret


def op_version(name, version):
    """
    .. versionadded:: 2019.2.0

    Set the glusterfs volume op-version

    name
        Volume name

    version
        Version to which the cluster.op-version should be set

    .. code-block:: yaml

        myvolume:
          glusterfs.op_version:
            - name: volume1
            - version: 30707
    """
    ret = {"name": name, "changes": {}, "comment": "", "result": False}

    try:
        current = int(__salt__["glusterfs.get_op_version"](name))
    except TypeError:
        # get_op_version returns an (error, message) tuple on failure
        ret["result"] = False
        ret["comment"] = __salt__["glusterfs.get_op_version"](name)[1]
        return ret

    if current == version:
        ret["comment"] = "Glusterfs cluster.op-version for {} already set to {}".format(
            name, version
        )
        ret["result"] = True
        return ret
    elif __opts__["test"]:
        ret["comment"] = (
            "An attempt would be made to set the cluster.op-version for {} to"
            " {}.".format(name, version)
        )
        ret["result"] = None
        return ret

    result = __salt__["glusterfs.set_op_version"](version)
    if result[0] is False:
        ret["comment"] = result[1]
        return ret

    ret["comment"] = result
    ret["changes"] = {"old": current, "new": version}
    ret["result"] = True
    return ret


def max_op_version(name):
    """
    .. versionadded:: 2019.2.0

    Set the glusterfs volume op-version to the cluster's max-op-version

    name
        Volume name

    .. code-block:: yaml

        myvolume:
          glusterfs.max_op_version:
            - name: volume1
    """
    ret = {"name": name, "changes": {}, "comment": "", "result": False}

    try:
        current = int(__salt__["glusterfs.get_op_version"](name))
    except TypeError:
        ret["result"] = False
        ret["comment"] = __salt__["glusterfs.get_op_version"](name)[1]
        return ret

    try:
        max_version = int(__salt__["glusterfs.get_max_op_version"]())
    except TypeError:
        ret["result"] = False
        ret["comment"] = __salt__["glusterfs.get_max_op_version"]()[1]
        return ret

    if current == max_version:
        ret["comment"] = (
            "The cluster.op-version is already set to the cluster.max-op-version"
            " of {}".format(max_version)
        )
        ret["result"] = True
        return ret
    elif __opts__["test"]:
        ret["comment"] = "An attempt would be made to set the cluster.op-version to {}.".format(
            max_version
        )
        ret["result"] = None
        return ret

    result = __salt__["glusterfs.set_op_version"](max_version)
    if result[0] is False:
        ret["comment"] = result[1]
        return ret

    ret["comment"] = result
    ret["changes"] = {"old": current, "new": max_version}
    ret["result"] = True
    return ret