# Current File : /usr/lib/python3.6/site-packages/sos/collector/transports/__pycache__/oc.cpython-36.pyc

import json
import tempfile
import os

from sos.collector.transports import RemoteTransport
from sos.utilities import (is_executable, sos_get_command_output,
                           SoSTimeoutError)


class OCTransport(RemoteTransport):
    """This transport leverages the execution of commands via a locally
    available and configured ``oc`` binary for OCPv4 environments.

    The location of the oc binary MUST be in the $PATH used by the locally
    loaded SoS policy. Specifically this means that the binary cannot be in
    the running user's home directory, such as ~/.local/bin.

    OCPv4 clusters generally discourage the use of SSH, so this transport may
    be used to remove our use of SSH in favor of the environment provided
    method of connecting to nodes and executing commands via debug pods.

    The debug pod created will be a privileged pod that mounts the host's
    filesystem internally so that sos report collections reflect the host,
    and not the container in which it runs.

    This transport will execute within a temporary 'sos-collect-tmp' project
    created by the OCP cluster profile. The project will be removed at the
    end of execution.

    In the event of failures due to a misbehaving OCP API or oc binary, it is
    recommended to fallback to the control_persist transport by manually
    setting the --transport option.
    """

    name = 'oc'
    project = 'sos-collect-tmp'

    def run_oc(self, cmd, **kwargs):
        """Format and run a command with `oc` in the project defined for our
        execution
        """
        return sos_get_command_output(f"oc -n {self.project} {cmd}", **kwargs)

    @property
    def connected(self):
        up = self.run_oc(
            f"wait --timeout=0s --for=condition=ready pod/{self.pod_name}"
        )
        return up['status'] == 0

    def get_node_pod_config(self):
        """Based on our template for the debug container, add the
        node-specific items so that we can deploy one of these on each node
        we're collecting from
        """
        return {
            "kind": "Pod",
            "apiVersion": "v1",
            "metadata": {
                "name": f"{self.address.split('.')[0]}-sos-collector",
                "namespace": self.project
            },
            "priorityClassName": "system-cluster-critical",
            "spec": {
                "volumes": [
                    {
                        "name": "host",
                        "hostPath": {
                            "path": "/",
                            "type": "Directory"
                        }
                    },
                    {
                        "name": "run",
                        "hostPath": {
                            "path": "/run",
                            "type": "Directory"
                        }
                    },
                    {
                        "name": "varlog",
                        "hostPath": {
                            "path": "/var/log",
                            "type": "Directory"
                        }
                    },
                    {
                        "name": "machine-id",
                        "hostPath": {
                            "path": "/etc/machine-id",
                            "type": "File"
                        }
                    }
                ],
                "containers": [
                    {
                        "name": "sos-collector-tmp",
                        "image": "registry.redhat.io/rhel8/support-tools",
                        "command": [
                            "/bin/bash"
                        ],
                        "env": [
                            {
                                "name": "HOST",
                                "value": "/host"
                            }
                        ],
                        "resources": {},
                        "volumeMounts": [
                            {
                                "name": "host",
                                "mountPath": "/host"
                            },
                            {
                                "name": "run",
                                "mountPath": "/run"
                            },
                            {
                                "name": "varlog",
                                "mountPath": "/var/log"
                            },
                            {
                                "name": "machine-id",
                                "mountPath": "/etc/machine-id"
                            }
                        ],
                        "securityContext": {
                            "privileged": True,
                            "runAsUser": 0
                        },
                        "stdin": True,
                        "stdinOnce": True,
                        "tty": True
                    }
                ],
                "imagePullPolicy":
                    "Always" if self.opts.force_pull_image else "IfNotPresent",
                "restartPolicy": "Never",
                "nodeName": self.address,
                "hostNetwork": True,
                "hostPID": True,
                "hostIPC": True
            }
        }

    def _connect(self, password):
        # the oc binary must be _locally_ available for this to work
        if not is_executable('oc'):
            return False

        # deploy the debug container we will exec into
        podconf = self.get_node_pod_config()
        self.pod_name = podconf['metadata']['name']
        fd, self.pod_tmp_conf = tempfile.mkstemp(dir=self.tmpdir)
        with open(fd, 'w', encoding='utf-8') as cfile:
            json.dump(podconf, cfile)
        self.log_debug(f"Starting sos collector container '{self.pod_name}'")
        # this specifically does not need to run within the project
        out = sos_get_command_output(f"oc create -f {self.pod_tmp_conf}")
        if (out['status'] != 0 or
                f"pod/{self.pod_name} created" not in out['output']):
            self.log_error("Unable to deploy sos collect pod")
            self.log_debug(f"Debug pod deployment failed: {out['output']}")
            return False
        self.log_debug(f"Pod '{self.pod_name}' successfully deployed, "
                       "waiting for pod to enter ready state")

        # wait for the pod to report as running
        try:
            up = self.run_oc(
                f"wait --for=condition=Ready pod/{self.pod_name} "
                "--timeout=30s",
                # a local timeout slightly longer than the oc --timeout flag
                timeout=40
            )
            if not up['status'] == 0:
                self.log_error("Pod not available after 30 seconds")
                return False
        except SoSTimeoutError:
            self.log_error("Timeout while polling for pod readiness")
            return False
        except Exception as err:
            self.log_error(f"Error while waiting for pod to be ready: {err}")
            return False

        return True
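The connect flow above serializes the generated pod definition to a temporary file before handing it to `oc create -f`. A minimal standalone sketch of just that serialization step, using a trimmed manifest and a hypothetical `write_pod_manifest` helper (not part of sos):

```python
import json
import tempfile


def write_pod_manifest(manifest, tmpdir=None):
    """Serialize a pod manifest dict to a temp file, mirroring how the
    transport prepares input for 'oc create -f <file>'."""
    fd, path = tempfile.mkstemp(dir=tmpdir, suffix=".json")
    # mkstemp returns an open file descriptor; open() takes ownership of it
    with open(fd, 'w', encoding='utf-8') as cfile:
        json.dump(manifest, cfile)
    return path


# Trimmed example manifest with the same top-level shape as the
# get_node_pod_config() template above
manifest = {
    "kind": "Pod",
    "apiVersion": "v1",
    "metadata": {
        "name": "node1-sos-collector",
        "namespace": "sos-collect-tmp",
    },
}

path = write_pod_manifest(manifest)
```

The real transport would then shell out to `oc create -f <path>` and check both the exit status and the `pod/<name> created` confirmation in the output before waiting on pod readiness.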