HEX
Server: Apache
System: Linux server2.voipitup.com.au 4.18.0-553.111.1.lve.el8.x86_64 #1 SMP Fri Mar 13 13:42:17 UTC 2026 x86_64
User: posscale (1027)
PHP: 8.2.30
Disabled: exec,passthru,shell_exec,system
File: /opt/saltstack/salt/lib/python3.10/site-packages/salt/pillar/__pycache__/s3.cpython-310.pyc
[Compiled CPython 3.10 bytecode (.pyc): magic number and marshalled code-object preamble; not recoverable as source. The module docstring, the only human-readable content, follows.]
Copy pillar data from a bucket in Amazon S3

The S3 pillar can be configured in the master config file with the following
options

.. code-block:: yaml

    ext_pillar:
      - s3:
          bucket: my.fancy.pillar.bucket
          keyid: KASKFJWAKJASJKDAJKSD
          key: ksladfDLKDALSFKSD93q032sdDasdfasdflsadkf
          multiple_env: False
          environment: base
          prefix: somewhere/overthere
          verify_ssl: True
          service_url: s3.amazonaws.com
          kms_keyid: 01234567-89ab-cdef-0123-4567890abcde
          s3_cache_expire: 30
          s3_sync_on_update: True
          path_style: False
          https_enable: True

The ``bucket`` parameter specifies the target S3 bucket. It is required.

The ``keyid`` parameter specifies the key ID to use when accessing the S3
bucket. If it is not provided, an attempt will be made to fetch it from EC2
instance metadata.

The ``key`` parameter specifies the key to use when accessing the S3 bucket. If
it is not provided, an attempt will be made to fetch it from EC2 instance
metadata.

The ``multiple_env`` parameter defaults to False. It specifies whether the
pillar should interpret top-level folders as pillar environments (see the mode
section below).

The ``environment`` parameter defaults to 'base'. It specifies which environment
the bucket represents when in single environment mode (see the mode section
below). It is ignored if ``multiple_env`` is True.

The ``prefix`` parameter defaults to ``''``. It specifies a key prefix to use
when searching the bucket for pillar data, and applies whether ``multiple_env``
is True or False. Essentially it tells ext_pillar to look for your pillar data
in a 'subdirectory' of your S3 bucket.
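
As a concrete illustration: given ``prefix: somewhere/overthere`` in single
environment mode, pillar files are looked up under that key prefix (the bucket
name is taken from the example above; file names are illustrative):

.. code-block:: text

    s3://my.fancy.pillar.bucket/somewhere/overthere/top.sls
    s3://my.fancy.pillar.bucket/somewhere/overthere/mypillar.sls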

The ``verify_ssl`` parameter defaults to True. It specifies whether to check for
valid S3 SSL certificates. *NOTE* If you use bucket names containing periods,
this must be set to False or an invalid certificate error will be thrown (issue
#12200).

The ``service_url`` parameter defaults to 's3.amazonaws.com'. It specifies the
base url to use for accessing S3.

The ``kms_keyid`` parameter is optional. It specifies the ID of the Key
Management Service (KMS) master key that was used to encrypt the object.

The ``s3_cache_expire`` parameter defaults to 30 seconds. It specifies the
expiration time of the S3 metadata cache file.

The ``s3_sync_on_update`` parameter defaults to True. It specifies whether the
cache is synced on update rather than just in time (on first access).

The ``path_style`` parameter defaults to False. It specifies whether to use
path-style requests or DNS-style requests.

The ``https_enable`` parameter defaults to True. It specifies whether to use
the HTTPS protocol or the HTTP protocol.
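
For S3-compatible storage that is not hosted on AWS, ``service_url``,
``path_style``, and ``https_enable`` are typically combined as below (the
endpoint hostname is a placeholder, not a real service):

.. code-block:: yaml

    ext_pillar:
      - s3:
          bucket: my.fancy.pillar.bucket
          service_url: s3.example.internal
          path_style: True
          https_enable: False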

This pillar can operate in two modes, single environment per bucket or multiple
environments per bucket.

Single environment mode must have this bucket structure:

.. code-block:: text

    s3://<bucket name>/<prefix>/<files>

Multiple environment mode must have this bucket structure:

.. code-block:: text

    s3://<bucket name>/<prefix>/<environment>/<files>
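
For example, with ``multiple_env: True`` a bucket laid out as follows provides
the pillar environments ``base`` and ``dev`` (file names are illustrative):

.. code-block:: text

    s3://my.fancy.pillar.bucket/somewhere/overthere/base/top.sls
    s3://my.fancy.pillar.bucket/somewhere/overthere/dev/top.sls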

If you wish to define your pillar data entirely within S3, it is recommended
that you use the ``prefix`` parameter and specify one ``ext_pillar`` entry per
environment rather than setting ``multiple_env``. This is due to issue #22471
(https://github.com/saltstack/salt/issues/22471).
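
Following that recommendation, one ``ext_pillar`` entry per environment might
look like this (the bucket name and prefixes are illustrative):

.. code-block:: yaml

    ext_pillar:
      - s3:
          bucket: my.fancy.pillar.bucket
          prefix: pillar/base
          environment: base
      - s3:
          bucket: my.fancy.pillar.bucket
          prefix: pillar/dev
          environment: dev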
[Remainder of the compiled bytecode: marshalled code objects, not recoverable as source. The recoverable string constants identify an ``S3Credentials`` class (attributes: key, keyid, kms_keyid, bucket, service_url, verify_ssl, location, path_style, https_enable) and the functions ``ext_pillar``, ``_init``, ``_get_cache_dir``, ``_get_cached_file_name``, ``_get_buckets_cache_filename``, ``_refresh_buckets_cache_file``, ``_read_buckets_cache_file``, ``_find_files``, ``_find_file_meta``, and ``_get_file_from_s3``, plus log messages for syncing and caching the S3 pillar data. The original source path is /opt/saltstack/salt/lib/python3.10/site-packages/salt/pillar/s3.py.]