code stringlengths 114 1.05M | path stringlengths 3 312 | quality_prob float64 0.5 0.99 | learning_prob float64 0.2 1 | filename stringlengths 3 168 | kind stringclasses 1
value |
|---|---|---|---|---|---|
from .version import __version__ as VERSION
from .sikulixregion import *
from .sikulixapp import *
from .sikuliximagepath import *
from .sikulixsettings import *
from .sikulixdebug import *
@library(scope='GLOBAL', version=VERSION)
class SikuliXLibrary(SikuliXRegion,
SikuliXApp,
SikuliXImagePath,
SikuliXSettings,
SikuliXDebug):
''' The all new, modern, SikuliX Robot Framework library for Python 3.x, based on JPype or Py4J Python modules.
It can be enabled to use by choice any of the JPype or Py4J modules. This is done by creating SIKULI_PY4J environment variable
and setting to 1 for using Py4J. When not defined or set to 0, JPype is used instead.
Please note that on MacOS, only Py4J can be used, while on Windows or Ubuntu, any of them is working.
So far, the only approach to use SikuliX Java library within Robot Framework was through Remote library and Jython 2.7.
The existing ``robotframework-SikuliLibrary`` and other known custom implementations (e.g. mostly based on old
http://blog.mykhailo.com/2011/02/how-to-sikuli-and-robot-framework.html) are using Remote library approach only, which is now obsolete.
In addition, also other popular libraries like ``ImageHorizonLibrary`` (built on top of pyautoguy), that is used currently due easier
usage in comparison with previous SikuliX remote server implementations, can now be easily switched to this new library.
With the help of this new library, SikuliX implementation can be used now natively with Robot Framework and Python 3.x:
- robotremoteserver and Remote library are not needed anymore
- debugging with some RF supporting tools
- very easy to extend the library with new keywords, or overwrite existing keywords and methods by extending the main class, e.g.
| class ImageHorizonLibraryMigration(SikuliXLibrary):
| def click_image(self, reference_image):
| self.region_click(target, 0, 0, False)
|
| class SikuliLibraryMigration(SikuliXLibrary):
| def click(self, image, xOffset, yOffset):
| self.region_click(image, xOffset, yOffset, False)
|
| class SikuliXCustomLibrary(SikuliXLibrary):
| def _passed(self, msg):
| logger.info('MY PASS MESSAGE: ' + msg)
This library is using:
| [https://github.com/RaiMan/SikuliX1]
| [https://github.com/jpype-project/jpype]
| [https://github.com/bartdag/py4j]
The keywords are matching as much as possible the original SikuliX functions so that it is easier to understand them from
the official documentation: https://sikulix-2014.readthedocs.io/en/latest/index.html
E.g. ``SikuliX class Region.find(PS)`` function is translated into Python and Robot keyword as
``region_find(target, onScreen)``
``region_find = Region.find(PS)``, where PS is a Pattern or String that define the path to an image file
Pattern will need the following parameters, provided as arguments to this keyword
- target - a string naming an image file from known image paths (with or without .png extension)
- similar - minimum similarity. If not given, the default is used. Can be set as ``img=similarity``
- mask - an image with transparent or black parts or 0 for default masked black parts. Should be set as img:mask, img:0, img:mask=similarity or img:0=similarity
- onScreen - reset the region to the whole screen, otherwise it will search on a region defined previously with set parameters keywords
e.g. `Region SetRect` where the parameters can be from a previous match or known dimension, etc.
Compared with other libraries, the import parameter ``centerMode`` will allow using click coordinates relative to center of the image,
otherwise the click coordinates are relative to upper left corner (default).
With this approach, it is very easy to capture a screenshot, open it e.g. in Paint in Windows and the coordinates shown in the lower left
corner are the click coordinates that should be given to the click keyword:
``region_click = Region.click(PSMRL[, modifiers])``, where PSMRL is a pattern, a string, a match, a region or a location that evaluates to a click point.
Currently only String, together with parameters that define a pattern will be accepted.
Pattern will need the following parameters, provided as arguments to this keyword
- target - a string naming an image file from known image paths (with or without .png extension)
- similar - minimum similarity. If not given, the default is used. Can be set as img=similarity
- mask - an image with transparent or black parts or 0 for default masked black parts. Should be set as img:mask, img:0, img:mask=similarity or img:0=similarity
- dx, dy - define click point, either relative to center or relative to upper left corner (default with set_offsetCenterMode)
Note: within RF, coordinates can be given both as string or numbers, for any keyword that needs coordinates, e.g.:
``Region Click 10 10`` or ``Region Click ${10} ${10}``
- useLastMatch - if True, will assume the LastMatch can be used otherwise SikuliX will do a find on the target image and click in the center of it.
If implicit find operation is needed, assume the region is the whole screen.
Region Click with no arguments will either click the center of the last used Region or the lastMatch, if any is available.
= Debugging =
When writing test cases and keywords it is important to understand the precise effect of the code written.
The following tools can help to understand what's going on, in order of detail level:
- Robot Framework's own
[https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#log-levels|`Set Log Level`]
- Vizualisation tools offered by SikuliXLibrary as `Settings Set Show Actions` and `Region Highlight`
- Additional logging of the SikuliX core engine, enabled by the keyword `Set Debug`.
- Once logging of the SikuliX core engine is enabled, more logging sections can be enabled using the
`DebugLogs`, `ProfileLogs` and `TraceLogs` switches, see `Settings Set`.
'''
@not_keyword
def __init__(self, sikuli_path='', image_path='', logImages=True, centerMode=False):
'''
| sikuli_path | Path to sikulix.jar file. If empty, it will try to use SIKULI_HOME environment variable. |
| image_path | Initial path to image library. More paths can be added later with the keyword `ImagePath Add` |
| logImages | Default True, if screen captures of found images and whole screen if not found, are logged in the final result log.html file |
| centerMode | Default False, if should calculate the click offset relative to center of the image or relative to upper left corner. |
'''
SikuliXJClass.__init__(self, sikuli_path)
SikuliXImagePath.__init__(self, image_path)
SikuliXRegion.__init__(self, logImages, centerMode) | /robotframework_sikulixlibrary-2.0.0-py3-none-any.whl/SikuliXLibrary/sikulixlibrary.py | 0.763219 | 0.388444 | sikulixlibrary.py | pypi |
import time
import warnings
import functools
import robot.utils
with warnings.catch_warnings():
warnings.filterwarnings("ignore", category=DeprecationWarning)
from pysnmp.carrier.asynsock.dispatch import AsynsockDispatcher
from pysnmp.carrier.asynsock.dgram import udp
from pysnmp.proto.api import decodeMessageVersion, v2c, protoVersion2c
from pyasn1.codec.ber import decoder
from . import utils
def _generic_trap_filter(domain, sock, pdu, **kwargs):
snmpTrapOID = (1, 3, 6, 1, 6, 3, 1, 1, 4, 1, 0)
if 'host' in kwargs and kwargs['host']:
if sock[0] != kwargs['host']:
return False
for oid, val in v2c.apiPDU.getVarBindList(pdu):
if 'oid' in kwargs and kwargs['oid']:
if oid == snmpTrapOID:
if val[0][0][2] != v2c.ObjectIdentifier(kwargs['oid']):
return False
return True
def _trap_receiver(trap_filter, host, port, timeout):
started = time.time()
def _trap_timer_cb(now):
if now - started > timeout:
raise AssertionError('No matching trap received in %s.' %
robot.utils.secs_to_timestr(timeout))
def _trap_receiver_cb(transport, domain, sock, msg):
if decodeMessageVersion(msg) != protoVersion2c:
raise RuntimeError('Only SNMP v2c traps are supported.')
req, msg = decoder.decode(msg, asn1Spec=v2c.Message())
pdu = v2c.apiMessage.getPDU(req)
# ignore any non trap PDUs
if not pdu.isSameTypeWith(v2c.TrapPDU()):
return
# Stop the receiver if the trap we are looking for was received.
if trap_filter(domain, sock, pdu):
transport.jobFinished(1)
dispatcher = AsynsockDispatcher()
dispatcher.registerRecvCbFun(_trap_receiver_cb)
dispatcher.registerTimerCbFun(_trap_timer_cb)
transport = udp.UdpSocketTransport().openServerMode((host, port))
dispatcher.registerTransport(udp.domainName, transport)
# we'll never finish, except through an exception
dispatcher.jobStarted(1)
try:
dispatcher.runDispatcher()
finally:
dispatcher.closeDispatcher()
class _Traps:
def __init__(self):
self._trap_filters = dict()
def new_trap_filter(self, name, host=None, oid=None):
"""Defines a new SNMP trap filter.
At the moment, you can only filter on the sending host and on the trap
OID.
"""
trap_filter = functools.partial(_generic_trap_filter,
host=host,
oid=utils.parse_oid(oid))
self._trap_filters[name] = trap_filter
def wait_until_trap_is_received(self, trap_filter_name, timeout=5.0,
host='0.0.0.0', port=1620):
"""Wait until the first matching trap is received."""
if trap_filter_name not in self._trap_filters:
raise RuntimeError('Trap filter "%s" not found.' % trap_filter_name)
trap_filter = self._trap_filters[trap_filter_name]
timeout = robot.utils.timestr_to_secs(timeout)
_trap_receiver(trap_filter, host, port, timeout) | /robotframework-snmplibrary-0.2.2.tar.gz/robotframework-snmplibrary-0.2.2/src/SnmpLibrary/traps.py | 0.4917 | 0.157363 | traps.py | pypi |
import os
from keywords import *
from version import VERSION
__version__ = VERSION
class AppiumLibrary(
_LoggingKeywords,
_RunOnFailureKeywords,
_ElementKeywords,
_ScreenshotKeywords,
_ApplicationManagementKeywords,
_WaitingKeywords,
_TouchKeywords,
_KeyeventKeywords,
_AndroidUtilsKeywords,
):
"""AppiumLibrary is a App testing library for Robot Framework.
*Locating elements*
All keywords in AppiumLibrary that need to find an element on the app
take an argument, `locator`. By default, when a locator value is provided,
it is matched against the key attributes of the particular element type.
For example, `id` and `name` are key attributes to all elements, and
locating elements is easy using just the `id` as a `locator`. For example:
``Click Element my_element``
Appium additionally supports some of the _Mobile JSON Wire Protocol_
(https://code.google.com/p/selenium/source/browse/spec-draft.md?repo=mobile) locator strategies
It is also possible to specify the approach AppiumLibrary should take
to find an element by specifying a lookup strategy with a locator
prefix. Supported strategies are:
| *Strategy* | *Example* | *Description* |
| identifier | Click Element `|` identifier=my_element | Matches by @id or @name attribute |
| id | Click Element `|` id=my_element | Matches by @id attribute |
| name | Click Element `|` name=my_element | Matches by @name attribute |
| xpath | Click Element `|` xpath=//UIATableView/UIATableCell/UIAButton | Matches with arbitrary XPath |
| class | Click Element `|` class=UIAPickerWheel | Matches by class |
| accessibility_id | Click Element `|` accessibility_id=t | Accessibility options utilize. |
| android | Click Element `|` android=UiSelector().description('Apps') | Matches by Android UI Automator |
| ios | Click Element `|` ios=.buttons().withName('Apps') | Matches by iOS UI Automation |
| css | Click Element `|` css=.green_button | Matches by css in webview |
"""
ROBOT_LIBRARY_SCOPE = 'GLOBAL'
ROBOT_LIBRARY_VERSION = VERSION
def __init__(self, timeout=5, run_on_failure='Capture Page Screenshot'):
"""AppiumLibrary can be imported with optional arguments.
`timeout` is the default timeout used to wait for all waiting actions.
It can be later set with `Set Appium Timeout`.
`run_on_failure` specifies the name of a keyword (from any available
libraries) to execute when a AppiumLibrary keyword fails. By default
`Capture Page Screenshot` will be used to take a screenshot of the current page.
Using the value `No Operation` will disable this feature altogether. See
`Register Keyword To Run On Failure` keyword for more information about this
functionality.
Examples:
| Library | AppiumLibrary | 10 | # Sets default timeout to 10 seconds |
| Library | AppiumLibrary | timeout=10 | run_on_failure=No Operation | # Sets default timeout to 10 seconds and does nothing on failure |
"""
for base in AppiumLibrary.__bases__:
base.__init__(self)
self.set_appium_timeout(timeout)
self.register_keyword_to_run_on_failure(run_on_failure) | /robotframework-sofrecomappiumlibrary-1.0.1.zip/robotframework-sofrecomappiumlibrary-1.0.1/src/AppiumLibrary/__init__.py | 0.724675 | 0.330431 | __init__.py | pypi |
from appium.webdriver.common.touch_action import TouchAction
from AppiumLibrary.locators import ElementFinder
from keywordgroup import KeywordGroup
class _TouchKeywords(KeywordGroup):
def __init__(self):
self._element_finder = ElementFinder()
# Public, element lookups
def zoom(self, locator, percent="200%", steps=1):
"""
Zooms in on an element a certain amount.
"""
driver = self._current_application()
element = self._element_find(locator, True, True)
driver.zoom(element=element, percent=percent, steps=steps)
def pinch(self, locator, percent="200%", steps=1):
"""
Pinch in on an element a certain amount.
"""
driver = self._current_application()
element = self._element_find(locator, True, True)
driver.pinch(element=element, percent=percent, steps=steps)
def swipe(self, start_x, start_y, end_x, end_y, duration=1000):
"""
Swipe from one point to another point, for an optional duration.
"""
driver = self._current_application()
driver.swipe(start_x, start_y, end_x, end_y, duration)
def scroll(self, start_locator, end_locator):
"""
Scrolls from one element to another
Key attributes for arbitrary elements are `id` and `name`. See
`introduction` for details about locating elements.
"""
el1 = self._element_find(start_locator, True, True)
el2 = self._element_find(end_locator, True, True)
driver = self._current_application()
driver.scroll(el1, el2)
def scroll_to(self, locator):
"""Scrolls to element"""
driver = self._current_application()
element = self._element_find(locator, True, True)
driver.execute_script("mobile: scrollTo", {"element": element.id})
def long_press(self, locator):
""" Long press the element """
driver = self._current_application()
element = self._element_find(locator, True, True)
long_press = TouchAction(driver).long_press(element)
long_press.perform()
def tap(self, locator):
""" Tap on element """
driver = self._current_application()
el = self._element_find(locator, True, True)
action = TouchAction(driver)
action.tap(el).perform()
def click_a_point(self, x=0, y=0, duration=100):
""" Click on a point"""
self._info("Clicking on a point (%s,%s)." % (x,y))
driver = self._current_application()
action = TouchAction(driver)
try:
action.press(x=float(x), y=float(y)).wait(float(duration)).release().perform()
except:
assert False, "Can't click on a point at (%s,%s)" % (x,y)
def click_element_at_coordinates(self, coordinate_X, coordinate_Y):
""" click element at a certain coordinate """
self._info("Pressing at (%s, %s)." % (coordinate_X, coordinate_Y))
driver = self._current_application()
action = TouchAction(driver)
action.press(x=coordinate_X, y=coordinate_Y).release().perform() | /robotframework-sofrecomappiumlibrary-1.0.1.zip/robotframework-sofrecomappiumlibrary-1.0.1/src/AppiumLibrary/keywords/_touch.py | 0.780495 | 0.308307 | _touch.py | pypi |
import base64
from keywordgroup import KeywordGroup
from appium.webdriver.connectiontype import ConnectionType
class _AndroidUtilsKeywords(KeywordGroup):
# Public
def get_network_connection_status(self):
"""Returns an integer bitmask specifying the network connection type.
Android only.
See `set network connection status` for more details.
"""
driver = self._current_application()
return driver.network_connection
def set_network_connection_status(self, connectionStatus):
"""Sets the network connection Status.
Android only.
Possible values:
| =Value= | =Alias= | =Data= | =Wifi= | =Airplane Mode= |
| 0 | (None) | 0 | 0 | 0 |
| 1 | (Airplane Mode) | 0 | 0 | 1 |
| 2 | (Wifi only) | 0 | 1 | 0 |
| 4 | (Data only) | 1 | 0 | 0 |
| 6 | (All network on) | 1 | 1 | 0 |
"""
driver = self._current_application()
return driver.set_network_connection(int(connectionStatus))
def pull_file(self, path, decode=False):
"""Retrieves the file at `path` and return it's content.
Android only.
- _path_ - the path to the file on the device
- _decode_ - True/False decode the data (base64) before returning it (default=False)
"""
driver = self._current_application()
theFile = driver.pull_file(path)
if decode:
theFile = base64.b64decode(theFile)
return theFile
def pull_folder(self, path, decode=False):
"""Retrieves a folder at `path`. Returns the folder's contents zipped.
Android only.
- _path_ - the path to the folder on the device
- _decode_ - True/False decode the data (base64) before returning it (default=False)
"""
driver = self._current_application()
theFolder = driver.pull_folder(path)
if decode:
theFolder = base64.b64decode(theFolder)
return theFolder
def push_file(self, path, data, encode=False):
"""Puts the data in the file specified as `path`.
Android only.
- _path_ - the path on the device
- _data_ - data to be written to the file
- _encode_ - True/False encode the data as base64 before writing it to the file (default=False)
"""
driver = self._current_application()
if encode:
data = base64.b64encode(data)
driver.push_file(path, data) | /robotframework-sofrecomappiumlibrary-1.0.1.zip/robotframework-sofrecomappiumlibrary-1.0.1/src/AppiumLibrary/keywords/_android_utils.py | 0.692642 | 0.233248 | _android_utils.py | pypi |
from robot.libraries import BuiltIn
from keywordgroup import KeywordGroup
BUILTIN = BuiltIn.BuiltIn()
class _RunOnFailureKeywords(KeywordGroup):
def __init__(self):
self._run_on_failure_keyword = None
self._running_on_failure_routine = False
# Public
def register_keyword_to_run_on_failure(self, keyword):
"""Sets the keyword to execute when a AppiumLibrary keyword fails.
`keyword_name` is the name of a keyword (from any available
libraries) that will be executed if a AppiumLibrary keyword fails.
It is not possible to use a keyword that requires arguments.
Using the value "Nothing" will disable this feature altogether.
The initial keyword to use is set in `importing`, and the
keyword that is used by default is `Capture Page Screenshot`.
Taking a screenshot when something failed is a very useful
feature, but notice that it can slow down the execution.
This keyword returns the name of the previously registered
failure keyword. It can be used to restore the original
value later.
Example:
| Register Keyword To Run On Failure | Log Source | # Run `Log Source` on failure. |
| ${previous kw}= | Register Keyword To Run On Failure | Nothing | # Disables run-on-failure functionality and stores the previous kw name in a variable. |
| Register Keyword To Run On Failure | ${previous kw} | # Restore to the previous keyword. |
This run-on-failure functionality only works when running tests on Python/Jython 2.4
or newer and it does not work on IronPython at all.
"""
old_keyword = self._run_on_failure_keyword
old_keyword_text = old_keyword if old_keyword is not None else "No keyword"
new_keyword = keyword if keyword.strip().lower() != "nothing" else None
new_keyword_text = new_keyword if new_keyword is not None else "No keyword"
self._run_on_failure_keyword = new_keyword
self._info('%s will be run on failure.' % new_keyword_text)
return old_keyword_text
# Private
def _run_on_failure(self):
if self._run_on_failure_keyword is None:
return
if self._running_on_failure_routine:
return
self._running_on_failure_routine = True
try:
BUILTIN.run_keyword(self._run_on_failure_keyword)
except Exception, err:
self._run_on_failure_error(err)
finally:
self._running_on_failure_routine = False
def _run_on_failure_error(self, err):
err = "Keyword '%s' could not be run on failure: %s" % (self._run_on_failure_keyword, err)
if hasattr(self, '_warn'):
self._warn(err)
return
raise Exception(err) | /robotframework-sofrecomappiumlibrary-1.0.1.zip/robotframework-sofrecomappiumlibrary-1.0.1/src/AppiumLibrary/keywords/_runonfailure.py | 0.806815 | 0.19521 | _runonfailure.py | pypi |
import time
import robot
from keywordgroup import KeywordGroup
class _WaitingKeywords(KeywordGroup):
def wait_until_page_contains(self, text, timeout=None, error=None):
"""Waits until `text` appears on current page.
Fails if `timeout` expires before the text appears. See
`introduction` for more information about `timeout` and its
default value.
`error` can be used to override the default error message.
See also `Wait Until Page Does Not Contain`,
`Wait Until Page Contains Element`,
`Wait Until Page Does Not Contain Element` and
BuiltIn keyword `Wait Until Keyword Succeeds`.
"""
if not error:
error = "Text '%s' did not appear in <TIMEOUT>" % text
self._wait_until(timeout, error, self._is_text_present, text)
def wait_until_page_does_not_contain(self, text, timeout=None, error=None):
"""Waits until `text` disappears from current page.
Fails if `timeout` expires before the `text` disappears. See
`introduction` for more information about `timeout` and its
default value.
`error` can be used to override the default error message.
See also `Wait Until Page Contains`,
`Wait Until Page Contains Element`,
`Wait Until Page Does Not Contain Element` and
BuiltIn keyword `Wait Until Keyword Succeeds`.
"""
def check_present():
present = self._is_text_present(text)
if not present:
return
else:
return error or "Text '%s' did not disappear in %s" % (text, self._format_timeout(timeout))
self._wait_until_no_error(timeout, check_present)
def wait_until_page_contains_element(self, locator, timeout=None, error=None):
"""Waits until element specified with `locator` appears on current page.
Fails if `timeout` expires before the element appears. See
`introduction` for more information about `timeout` and its
default value.
`error` can be used to override the default error message.
See also `Wait Until Page Contains`,
`Wait Until Page Does Not Contain`
`Wait Until Page Does Not Contain Element`
and BuiltIn keyword `Wait Until Keyword Succeeds`.
"""
if not error:
error = "Element '%s' did not appear in <TIMEOUT>" % locator
self._wait_until(timeout, error, self._is_element_present, locator)
def wait_until_page_does_not_contain_element(self, locator, timeout=None, error=None):
"""Waits until element specified with `locator` disappears from current page.
Fails if `timeout` expires before the element disappears. See
`introduction` for more information about `timeout` and its
default value.
`error` can be used to override the default error message.
See also `Wait Until Page Contains`,
`Wait Until Page Does Not Contain`,
`Wait Until Page Contains Element` and
BuiltIn keyword `Wait Until Keyword Succeeds`.
"""
def check_present():
present = self._is_element_present(locator)
if not present:
return
else:
return error or "Element '%s' did not disappear in %s" % (locator, self._format_timeout(timeout))
self._wait_until_no_error(timeout, check_present)
# Private
def _wait_until(self, timeout, error, function, *args):
error = error.replace('<TIMEOUT>', self._format_timeout(timeout))
def wait_func():
return None if function(*args) else error
self._wait_until_no_error(timeout, wait_func)
def _wait_until_no_error(self, timeout, wait_func, *args):
timeout = robot.utils.timestr_to_secs(timeout) if timeout is not None else self._timeout_in_secs
maxtime = time.time() + timeout
while True:
timeout_error = wait_func(*args)
if not timeout_error:
return
if time.time() > maxtime:
self.log_source()
raise AssertionError(timeout_error)
time.sleep(0.2)
def _format_timeout(self, timeout):
timeout = robot.utils.timestr_to_secs(timeout) if timeout is not None else self._timeout_in_secs
return robot.utils.secs_to_timestr(timeout) | /robotframework-sofrecomappiumlibrary-1.0.1.zip/robotframework-sofrecomappiumlibrary-1.0.1/src/AppiumLibrary/keywords/_waiting.py | 0.746971 | 0.180251 | _waiting.py | pypi |
import os
import yaml
from robot.api import logger
class SQLessKeywords(object):
ROBOT_LIBRARY_SCOPE = 'Global'
def __init__(self, schema_path='schema.yml', db_config_path=None):
logger.console(db_config_path)
self.schema = self._read_schema(schema_path)
self.database_config = self._read_config(db_config_path)
self.adaptor = self._get_adaptor()
def _get_adaptor(self):
"""
Helper method to get the correct adaptor
"""
if self.database_config['dbms'] == 'sqlite':
from SQLess.adapters.sqlite import SQLiteAdapter
adaptor = SQLiteAdapter
elif self.database_config['dbms'] == 'mysql':
from SQLess.adapters.mysql import MysqlAdapter
adaptor = MysqlAdapter
elif self.database_config['dbms'] == 'postgres':
from SQLess.adapters.postgres import PostgresqlAdapter
adaptor = PostgresqlAdapter
elif self.database_config['dbms'] == 'oracle':
from SQLess.adapters.oracle import OracleAdapter
adaptor = OracleAdapter
return adaptor(**self.database_config)
def _read_config(self, db_config_path):
"""
Reads the config from the config file
:returns: dict
"""
with open(db_config_path) as file:
database_config = yaml.load(file, Loader=yaml.FullLoader)
return database_config
def _read_schema(self, schema_path):
"""
Reads the schema from the schema defintion file
:returns: dict
"""
with open(schema_path) as file:
schema_defintion = yaml.load(file, Loader=yaml.FullLoader)
return schema_defintion
def _get_tablename_and_fields(self, identifier):
tablename = self.schema.get(identifier.lower())['tablename']
fields = self.schema.get(identifier.lower())['fields']
return (tablename, fields)
def execute_sql_string(self, query):
"""
Passes the query to the adaptor and returns the result
"""
return self.adaptor.execute_sql(query)
def get_all(self, identifier):
"""
Returns all rows from the table identified by the `identifier`.
Keyword usage example:
${users} Get All Users
The `identifier` must match a table defintion in the schema defintion file.
:returns: list of dicts
example:
[
{
'id': 1,
'username': 'TestUser1'
},
...
]
"""
tablename, fields = self._get_tablename_and_fields(identifier)
return self.adaptor.get_all(tablename, fields)
def get_by_filter(self, identifier, **filters):
"""
Returns the rows from the table identified by the `identifier`, where the filter matches.
Keyword usage example:
${users} Get By Filter Users email=someothername@someotherdomain.tld
The `identifier` must match a table defintion in the schema defintion file and the filter keys must
match row names in the schema defintion file.
:returns: list of dicts
example:
[
{
'id': 1,
'username': 'TestUser1'
},
...
]
"""
tablename, fields = self._get_tablename_and_fields(identifier)
return self.adaptor.get_by_filter(tablename, fields, **filters)
def count(self, identifier, **filters):
"""
Counts the matching rows and returns.
:returns: integer
"""
tablename, _ = self._get_tablename_and_fields(identifier)
return self.adaptor.count(tablename, **filters)
def create(self, identifier, **attributes):
"""
Creates a row in the database identified by the `identifier`.
Keyword usage:
${user} Create Users username=AnotherUser
:returns: dict
example:
{
'id': 1,
'username': 'TestUser1'
}
"""
tablename, fields = self._get_tablename_and_fields(identifier)
return self.adaptor.create(tablename, fields, **attributes)
def delete_all(self, identifier):
"""
Deletes all rows in the database identified by the `identifier`.
Keyword usage:
${amount} Delete All Users
:returns: None
"""
tablename, fields = self._get_tablename_and_fields(identifier)
return self.adaptor.delete_all(tablename)
def delete_by_filter(self, identifier, **filters):
"""
Deletes all rows in the database identified by the `identifier`.
Keyword usage:
${amount} Delete By Filter Users username=TestUser1
:returns: None
"""
tablename, fields = self._get_tablename_and_fields(identifier)
return self.adaptor.delete_by_filter(tablename, **filters)
def update_all(self, identifier, **attributes):
"""
Updates all rows in the database identified by the `identifier`
with the passed attributes.
Keyword usage:
Update All Songs in_collection=1
:returns: None
"""
tablename, fields = self._get_tablename_and_fields(identifier)
return self.adaptor.update_all(tablename, **attributes)
def update_by_filter(self, identifier, filters, **attributes):
"""
Updates the rows in the database identified by the `identifier`
and filtered by the passed filters with the passed attributes.
The filters must be a dict containing the keys and values to identify rows.
Keyword usage:
${filter} Create Dictionary artist=Nightwish
Update By Filter Songs ${filter} album=Decades: Live in Buenos Aires
:returns: None
"""
tablename, fields = self._get_tablename_and_fields(identifier)
return self.adaptor.update_by_filter(tablename, filters, **attributes) | /robotframework_sqless-0.1.0-py3-none-any.whl/SQLess/SQLessKeywords.py | 0.557364 | 0.271068 | SQLessKeywords.py | pypi |
class BaseAdapter:
"""
Base class which defines the minimal set of functions, that all
inherhting classes have to implement and some common methods.
"""
def __init__(self, **config):
raise NotImplementedError()
@staticmethod
def make_list(result, fieldnames):
result_list = []
for item in result:
result_list.append(dict(zip(fieldnames, item)))
return result_list
@staticmethod
def make_update_partial(tablename, **attributes):
settings = ", ".join([f"{key}='{value}'" for key, value in attributes.items()])
return f"UPDATE {tablename} SET {settings}"
@staticmethod
def make_select_partial(tablename, fields):
return "SELECT %s FROM %s" % (', '.join(fields.keys()) ,tablename)
@staticmethod
def make_single_filter_partial(filters):
filter = " AND ".join(f"{key}='{value}'" for key, value in filters.items())
return filter
@staticmethod
def make_delete_partial(tablename):
return "DELETE FROM %s" % (tablename)
def make_where_partial(self, filters):
where_partial = ""
if filters:
if isinstance(filters, dict):
filter = self.make_single_filter_partial(filters)
if isinstance(filters, list):
clauses = []
for clause in filters:
clauses.append(self.make_single_filter_partial(clause))
filter = " OR ".join(clauses)
where_partial = f"WHERE {filter}"
return where_partial
def execute_sql(self, query):
with self.database_cursor(self.database_settings) as cursor:
cursor.execute(query)
result = cursor.fetchall()
return result
def get_all(self, tablename, fields):
with self.database_cursor(self.database_settings) as cursor:
query = self.make_select_partial(tablename, fields)
cursor.execute(query)
result = self.make_list(cursor.fetchall(), fields.keys())
return result
def get_by_filter(self, tablename, fields, **filters):
select_partial = self.make_select_partial(tablename, fields)
where_partial = self.make_where_partial(filters)
query = f"{select_partial} {where_partial}"
with self.database_cursor(self.database_settings) as cursor:
cursor.execute(query)
result = self.make_list(cursor.fetchall(), fields.keys())
return result
def count(self, tablename, **filters):
count_partial = self.make_count_partial(tablename)
where_partial = self.make_where_partial(filters)
query = f"{count_partial} {where_partial}"
with self.database_cursor(self.database_settings) as cursor:
cursor.execute(query)
result = cursor.fetchone()
return result[0]
def create(self, tablename, fields, **attributes):
raise NotImplementedError()
def delete_all(self, tablename, **filters):
with self.database_cursor(self.database_settings) as cursor:
query = self.make_delete_partial(tablename, **filters)
cursor.execute(query)
def delete_by_filter(self, tablename, **filters):
with self.database_cursor(self.database_settings) as cursor:
delete_partial = self.make_delete_partial(tablename)
where_partial = self.make_where_partial(filters)
cursor.execute(f"{delete_partial} {where_partial}")
def update_all(self, tablename, **attributes):
with self.database_cursor(self.database_settings) as cursor:
update_partial = self.make_update_partial(tablename, **attributes)
cursor.execute(update_partial)
def update_by_filter(self, tablename, filters, **attributes):
with self.database_cursor(self.database_settings) as cursor:
update_partial = self.make_update_partial(tablename, **attributes)
where_partial = self.make_where_partial(filters)
cursor.execute(f"{update_partial} {where_partial}") | /robotframework_sqless-0.1.0-py3-none-any.whl/SQLess/adapters/adapter.py | 0.783575 | 0.444263 | adapter.py | pypi |
try:
from robot.api import logger
except ImportError:
logger = None
from robot.utils import ConnectionCache
from .abstractclient import SSHClientException
from .client import SSHClient
from .config import (Configuration, IntegerEntry, LogLevelEntry, NewlineEntry,
StringEntry, TimeEntry)
from .version import VERSION
__version__ = VERSION
plural_or_not = lambda count: '' if count == 1 else 's'
class SSHLibrary(object):
"""Robot Framework test library for SSH and SFTP.
The library has the following main usages:
- Executing commands on the remote machine, either with blocking or
non-blocking behaviour (see `Execute Command` and `Start Command`,
respectively).
- Writing and reading in an interactive shell (e.g. `Read` and `Write`).
- Transferring files and directories over SFTP (e.g. `Get File` and
`Put Directory`).
- Ensuring that files or directories exist on the remote machine
(e.g. `File Should Exist` and `Directory Should Not Exist`).
This library works both with Python and Jython, but uses different
tools internally depending on the interpreter. See
[http://code.google.com/p/robotframework-sshlibrary/wiki/InstallationInstructions|installation instructions]
for more details about the dependencies. IronPython is unfortunately not
supported.
== Table of contents ==
- `Connections and login`
- `Configuration`
- `Executing commands`
- `Interactive shells`
- `Pattern matching`
- `Example`
- `Importing`
- `Shortcuts`
- `Keywords`
= Connections and login =
The library supports multiple connections to different hosts.
New connections are opened with `Open Connection`.
Logging into the host is done either with username and password
(`Login`) or with public/private key pair (`Login With Public key`).
Only one connection can be active at a time. This means that most of the
keywords only affect the active connection. Active connection can be
changed with `Switch Connection`.
= Configuration =
Default settings for all the upcoming connections can be configured on
`library importing` or later with `Set Default Configuration`.
All the settings are listed further below.
Using `Set Default Configuration` does not affect the already open
connections. Settings of the current connection can be configured
with `Set Client Configuration`. Settings of another, non-active connection,
can be configured by first using `Switch Connection` and then
`Set Client Configuration`.
Most of the defaults can be overridden per connection by defining them
as arguments to `Open Connection`. Otherwise the defaults are used.
== Configurable per connection ==
=== Default prompt ===
Argument `prompt` defines the character sequence used by `Read Until Prompt`
and must be set before that keyword can be used.
If you know the prompt on the remote machine, it is recommended to set it
to ease reading output from the server after using `Write`. In addition to
that, `Login` and `Login With Public Key` can read the server output more
efficiently when the prompt is set.
=== Default encoding ===
Argument `encoding` defines the
[http://docs.python.org/2/library/codecs.html#standard-encodings|
character encoding] of input and output sequences.
Starting from SSHLibrary 2.0, the default value is `UTF-8`.
=== Default path separator ===
Argument `path_separator` must be set to the one known by the operating
system and the SSH server on the remote machine. The path separator is
used by keywords `Get File`, `Put File`, `Get Directory` and
`Put Directory` for joining paths correctly on the remote host.
The default path separator is forward slash (`/`) which works on
Unix-like machines. On Windows the path separator to use depends on
the SSH server. Some servers use forward slash and others backslash,
and users need to configure the `path_separator` accordingly. Notice
that using a backslash in Robot Framework test data requires doubling
it like `\\\\`.
Configuring the library and connection specific path separator is a new
feature in SSHLibrary 2.0. Prior to it `Get File` and `Put File` had
their own `path_separator` arguments. These keyword specific arguments
were deprecated in 2.0 and will be removed in the future.
=== Default timeout ===
Argument `timeout` is used by `Read Until` variants. The default value is
`3 seconds`.
Value must be in Robot Framework's time format, e.g. `3`, `4.5s`, `1 minute`
and `2 min 3 s` are all accepted. See section `Time Format` in the
Robot Framework User Guide for details.
=== Default newline ===
Argument `newline` is the line break sequence used by `Write` keyword and
must be set according to the operating system on the remote machine.
The default value is `LF` (same as `\\n`) which is used on Unix-like
operating systems. With Windows remote machines, you need to set this to
`CRLF` (`\\r\\n`).
=== Default terminal settings ===
Argument `term_type` defines the virtual terminal type, and arguments
`width` and `height` can be used to control its virtual size.
== Not configurable per connection ==
=== Default loglevel ===
Argument `loglevel` sets the log level used to log the output read by
`Read`, `Read Until`, `Read Until Prompt`, `Read Until Regexp`, `Write`,
`Write Until Expected Output`, `Login` and `Login With Public Key`.
The default level is `INFO`.
`loglevel` is not configurable per connection but can be overridden by
passing it as an argument to the most of the mentioned keywords.
Possible argument values are `TRACE`, `DEBUG`, `INFO` and `WARN`.
= Executing commands =
For executing commands on the remote machine, there are two possibilities:
- `Execute Command` and `Start Command`.
The command is executed in a new shell on the remote machine,
which means that possible changes to the environment
(e.g. changing working directory, setting environment variables, etc.)
are not visible to the subsequent keywords.
- `Write`, `Write Bare`, `Write Until Expected Output`, `Read`,
`Read Until`, `Read Until Prompt` and `Read Until Regexp`.
These keywords operate in an interactive shell, which means that changes
to the environment are visible to the subsequent keywords.
= Interactive shells =
`Write`, `Write Bare`, `Write Until Expected Output`, `Read`,
`Read Until`, `Read Until Prompt` and `Read Until Regexp` can be used
to interact with the server within the same shell.
== Consumed output ==
All of these keywords, except `Write Bare`, consume the read or the written
text from the server output before returning. In practice this means that
the text is removed from the server output, i.e. subsequent calls to
`Read` keywords do not return text that was already read. This is
illustrated by the example below.
| `Write` | echo hello | | # consumes written `echo hello` |
| ${stdout}= | `Read Until` | hello | # consumes read `hello` and everything before it |
| `Should Contain` | ${stdout} | hello |
| ${stdout}= | `Read` | | # consumes everything available |
| `Should Not Contain` | ${stdout} | hello | # `hello` was already consumed earlier |
The consumed text is logged by the keywords and their argument `loglevel`
can be used to override [#Default loglevel|the default log level].
`Login` and `Login With Public Key` consume everything on the server output
or if [#Default prompt|the prompt is set], everything until the prompt.
== Reading ==
`Read`, `Read Until`, `Read Until Prompt` and `Read Until Regexp` can be
used to read from the server. The read text is also consumed from
the server output.
`Read` reads everything available on the server output, thus clearing it.
`Read Until` variants read output up until and *including* `expected` text.
These keywords will fail if [#Default timeout|the timeout] expires before
`expected` is found.
== Writing ==
`Write` and `Write Until Expected Output` consume the written text
from the server output while `Write Bare` does not.
These keywords do not return any output triggered by the written text.
To get the output, one of the `Read` keywords must be explicitly used.
= Pattern matching =
Some keywords allow their arguments to be specified as _glob patterns_
where:
| * | matches anything, even an empty string |
| ? | matches any single character |
| [chars] | matches any character inside square brackets (e.g. `[abc]` matches either `a`, `b` or `c`) |
| [!chars] | matches any character not inside square brackets |
Pattern matching is case-sensitive regardless the local or remote
operating system. Matching is implemented using Python's
[http://docs.python.org/library/fnmatch.html|fnmatch module].
= Example =
| ***** Settings *****
| Documentation This example demonstrates executing commands on a remote machine
| ... and getting their output and the return code.
| ...
| ... Notice how connections are handled as part of the suite setup and
| ... teardown. This saves some time when executing several test cases.
|
| Library `SSHLibrary`
| Suite Setup `Open Connection And Log In`
| Suite Teardown `Close All Connections`
|
| ***** Variables *****
| ${HOST} localhost
| ${USERNAME} test
| ${PASSWORD} test
|
| ***** Test Cases *****
| Execute Command And Verify Output
| [Documentation] `Execute Command` can be used to ran commands on the remote machine.
| ... The keyword returns the standard output by default.
| ${output}= `Execute Command` echo Hello SSHLibrary!
| `Should Be Equal` ${output} Hello SSHLibrary!
|
| Execute Command And Verify Return Code
| [Documentation] Often getting the return code of the command is enough.
| ... This behaviour can be adjusted as `Execute Command` arguments.
| ${rc}= `Execute Command` echo Success guaranteed. return_stdout=False return_rc=True
| `Should Be Equal` ${rc} ${0}
|
| Executing Commands In An Interactive Session
| [Documentation] `Execute Command` always executes the command in a new shell.
| ... This means that changes to the environment are not persisted
| ... between subsequent `Execute Command` keyword calls.
| ... `Write` and `Read Until` variants can be used to operate in the same shell.
| `Write` cd ..
| `Write` echo Hello from the parent directory!
| ${output}= `Read Until` directory!
| `Should End With` ${output} Hello from the parent directory!
|
| ***** Keywords *****
| Open Connection And Log In
| `Open Connection` ${HOST}
| `Login` ${USERNAME} ${PASSWORD}
Save the content as file `executing_command.txt` and run:
| pybot executing_commands.txt
You may want to override the variables from commandline to try this out on
your remote machine:
| pybot -v HOST:my.server.com -v USERNAME:johndoe -v PASSWORD:secretpasswd executing_commands.txt
"""
ROBOT_LIBRARY_SCOPE = 'GLOBAL'
ROBOT_LIBRARY_VERSION = __version__
DEFAULT_TIMEOUT = '3 seconds'
DEFAULT_NEWLINE = 'LF'
DEFAULT_PROMPT = None
DEFAULT_LOGLEVEL = 'INFO'
DEFAULT_TERM_TYPE = 'vt100'
DEFAULT_TERM_WIDTH = 80
DEFAULT_TERM_HEIGHT = 24
DEFAULT_PATH_SEPARATOR = '/'
DEFAULT_ENCODING = 'UTF-8'
def __init__(self,
timeout=DEFAULT_TIMEOUT,
newline=DEFAULT_NEWLINE,
prompt=DEFAULT_PROMPT,
loglevel=DEFAULT_LOGLEVEL,
term_type=DEFAULT_TERM_TYPE,
width=DEFAULT_TERM_WIDTH,
height=DEFAULT_TERM_HEIGHT,
path_separator=DEFAULT_PATH_SEPARATOR,
encoding=DEFAULT_ENCODING):
"""SSHLibrary allows some import time `configuration`.
If the library is imported without any arguments, the library
defaults are used:
| Library | SSHLibrary |
Only arguments that are given are changed. In this example,
[#Default timeout|the timeout] is changed to `10 seconds` but
other settings are left to the library defaults:
| Library | SSHLibrary | 10 seconds |
[#Default prompt|Prompt] does not have a default value and
must be explicitly set to use `Read Until Prompt`.
In this example, the prompt is set to `$`:
| Library | SSHLibrary | prompt=$ |
Multiple settings are possible. In this example, the library is brought
into use with [#Default newline|newline] and [#Default path separator|
path_separator] known by Windows:
| Library | SSHLibrary | newline=CRLF | path_separator=\\\\ |
Arguments [#Default terminal settings|`term_type`],
[#Default terminal settings|`width`],
[#Default terminal settings|`height`],
[#Default path separator|`path separator`] and
[#Default encoding|`encoding`]
were added in SSHLibrary 2.0.
"""
self._connections = ConnectionCache()
self._config = _DefaultConfiguration(
timeout or self.DEFAULT_TIMEOUT,
newline or self.DEFAULT_NEWLINE,
prompt or self.DEFAULT_PROMPT,
loglevel or self.DEFAULT_LOGLEVEL,
term_type or self.DEFAULT_TERM_TYPE,
width or self.DEFAULT_TERM_WIDTH,
height or self.DEFAULT_TERM_HEIGHT,
path_separator or self.DEFAULT_PATH_SEPARATOR,
encoding or self.DEFAULT_ENCODING
)
@property
def current(self):
return self._connections.current
def set_default_configuration(self, timeout=None, newline=None, prompt=None,
loglevel=None, term_type=None, width=None,
height=None, path_separator=None,
encoding=None):
"""Update the default `configuration`.
Please note that using this keyword does not affect the already
open connections. Use `Set Client Configuration` to configure the
active connection.
Only parameters whose value is other than `None` are updated.
This example sets [#Default prompt|`prompt`] to `$`:
| Set Default Configuration | prompt=$ |
This example sets [#Default newline|`newline`] and [#Default path
separator| `path_separator`] to the ones known by Windows:
| Set Default Configuration | newline=CRLF | path_separator=\\\\ |
Sometimes you might want to use longer [#Default timeout|`timeout`]
for all the subsequent connections without affecting the existing ones:
| Set Default Configuration | timeout=5 seconds |
| Open Connection | local.server.com |
| Set Default Configuration | timeout=20 seconds |
| Open Connection | emea.server.com |
| Open Connection | apac.server.com |
| ${local} | ${emea} | ${apac}= | Get Connections |
| Should Be Equal As Integers | ${local.timeout} | 5 |
| Should Be Equal As Integers | ${emea.timeout} | 20 |
| Should Be Equal As Integers | ${apac.timeout} | 20 |
Arguments [#Default terminal settings|`term_type`],
[#Default terminal settings|`width`],
[#Default terminal settings|`height`],
[#Default path separator|`path_separator`] and
[#Default encoding|`encoding`]
were added in SSHLibrary 2.0.
"""
self._config.update(timeout=timeout, newline=newline, prompt=prompt,
loglevel=loglevel, term_type=term_type, width=width,
height=height, path_separator=path_separator,
encoding=encoding)
def set_client_configuration(self, timeout=None, newline=None, prompt=None,
term_type=None, width=None, height=None,
path_separator=None, encoding=None):
"""Update the `configuration` of the current connection.
Only parameters whose value is other than `None` are updated.
In the following example, [#Default prompt|`prompt`] is set for
the current connection. Other settings are left intact:
| Open Connection | my.server.com |
| Set Client Configuration | prompt=$ |
| ${myserver}= | Get Connection |
| Should Be Equal | ${myserver.prompt} | $ |
Using keyword does not affect the other connections:
| Open Connection | linux.server.com | |
| Set Client Configuration | prompt=$ | | # Only linux.server.com affected |
| Open Connection | windows.server.com | |
| Set Client Configuration | prompt=> | | # Only windows.server.com affected |
| ${linux} | ${windows}= | Get Connections |
| Should Be Equal | ${linux.prompt} | $ |
| Should Be Equal | ${windows.prompt} | > |
Multiple settings are possible. This example updates [#Default terminal
settings|the terminal settings] of the current connection:
| Open Connection | 192.168.1.1 |
| Set Client Configuration | term_type=ansi | width=40 |
Arguments [#Default path separator|`path_separator`] and
[#Default encoding|`encoding`]
were added in SSHLibrary 2.0.
"""
self.current.config.update(timeout=timeout, newline=newline,
prompt=prompt, term_type=term_type,
width=width, height=height,
path_separator=path_separator,
encoding=encoding)
def enable_ssh_logging(self, logfile):
"""Enables logging of SSH protocol output to given `logfile`.
All the existing and upcoming connections are logged onwards from
the moment the keyword was called.
`logfile` is path to a file that is writable by the current local user.
If the file already exists, it will be overwritten.
*Note:* This keyword only works with Python, i.e. when executing tests
with `pybot`.
Example:
| Open Connection | my.server.com | # Not logged |
| Enable SSH Logging | myserver.log |
| Login | johndoe | secretpasswd |
| Open Connection | build.local.net | # Logged |
| # Do something with the connections |
| # Check myserver.log for detailed debug information |
"""
if SSHClient.enable_logging(logfile):
self._log('SSH log is written to <a href="%s">file</a>.' % logfile,
'HTML')
def open_connection(self, host, alias=None, port=22, timeout=None,
newline=None, prompt=None, term_type=None, width=None,
height=None, path_separator=None, encoding=None):
"""Opens a new SSH connection to the given `host` and `port`.
The new connection is made active. Possible existing connections
are left open in the background.
Note that on Jython this keyword actually opens a connection and
will fail immediately on unreachable hosts. On Python the actual
connection attempt will not be done until `Login` is called.
This keyword returns the index of the new connection which can be used
later to switch back to it. Indices start from `1` and are reset
when `Close All Connections` is used.
Optional `alias` can be given for the connection and can be used for
switching between connections, similarly as the index.
See `Switch Connection` for more details.
Connection parameters, like [#Default timeout|`timeout`] and
[#Default newline|`newline`] are documented in `configuration`.
If they are not defined as arguments, [#Configuration|the library
defaults] are used for the connection.
All the arguments, except `host`, `alias` and `port`
can be later updated with `Set Client Configuration`.
Starting from SSHLibrary 1.1, a shell is automatically opened
by this keyword.
Port `22` is assumed by default:
| ${index}= | Open Connection | my.server.com |
Non-standard port may be given as an argument:
| ${index}= | Open Connection | 192.168.1.1 | port=23 |
Aliases are handy, if you need to switch back to the connection later:
| Open Connection | my.server.com | alias=myserver |
| # Do something with my.server.com |
| Open Connection | 192.168.1.1 |
| Switch Connection | myserver | | # Back to my.server.com |
Settings can be overridden per connection, otherwise the ones set on
`library importing` or with `Set Default Configuration` are used:
| Open Connection | 192.168.1.1 | timeout=1 hour | newline=CRLF |
| # Do something with the connection |
| Open Connection | my.server.com | # Default timeout | # Default line breaks |
[#Default terminal settings|The terminal settings] are also configurable
per connection:
| Open Connection | 192.168.1.1 | term_type=ansi | width=40 |
Arguments [#Default path separator|`path_separator`] and
[#Default encoding|`encoding`]
were added in SSHLibrary 2.0.
"""
timeout = timeout or self._config.timeout
newline = newline or self._config.newline
prompt = prompt or self._config.prompt
term_type = term_type or self._config.term_type
width = width or self._config.width
height = height or self._config.height
path_separator = path_separator or self._config.path_separator
encoding = encoding or self._config.encoding
client = SSHClient(host, alias, port, timeout, newline, prompt,
term_type, width, height, path_separator, encoding)
connection_index = self._connections.register(client, alias)
client.config.update(index=connection_index)
return connection_index
def switch_connection(self, index_or_alias):
"""Switches the active connection by index or alias.
`index_or_alias` is either connection index (an integer) or alias
(a string). Index is got as the return value of `Open Connection`.
Alternatively, both index and alias can queried as attributes
of the object returned by `Get Connection`.
This keyword returns the index of the previous active connection,
which can be used to switch back to that connection later.
Example:
| ${myserver}= | Open Connection | my.server.com |
| Login | johndoe | secretpasswd |
| Open Connection | build.local.net | alias=Build |
| Login | jenkins | jenkins |
| Switch Connection | ${myserver} | | # Switch using index |
| ${username}= | Execute Command | whoami | # Executed on my.server.com |
| Should Be Equal | ${username} | johndoe |
| Switch Connection | Build | | # Switch using alias |
| ${username}= | Execute Command | whoami | # Executed on build.local.net |
| Should Be Equal | ${username} | jenkins |
"""
old_index = self._connections.current_index
if index_or_alias is None:
self.close_connection()
else:
self._connections.switch(index_or_alias)
return old_index
def close_connection(self):
"""Closes the current connection.
No other connection is made active by this keyword. Manually use
`Switch Connection` to switch to another connection.
Example:
| Open Connection | my.server.com |
| Login | johndoe | secretpasswd |
| Get File | results.txt | /tmp |
| Close Connection |
| # Do something with /tmp/results.txt |
"""
self.current.close()
self._connections.current = self._connections._no_current
def close_all_connections(self):
"""Closes all open connections.
This keyword is ought to be used either in test or suite teardown to
make sure all the connections are closed before the test execution
finishes.
After this keyword, the connection indices returned by `Open Connection`
are reset and start from `1`.
Example:
| Open Connection | my.server.com |
| Open Connection | build.local.net |
| # Do something with the connections |
| [Teardown] | Close all connections |
"""
self._connections.close_all()
def get_connection(self, index_or_alias=None, index=False, host=False,
alias=False, port=False, timeout=False, newline=False,
prompt=False, term_type=False, width=False, height=False,
encoding=False):
"""Return information about the connection.
Connection is not changed by this keyword, use `Switch Connection` to
change the active connection.
If `index_or_alias` is not given, the information of the current
connection is returned.
This keyword returns an object that has the following attributes:
| = Name = | = Type = | = Explanation = |
| index | integer | Number of the connection. Numbering starts from `1`. |
| host | string | Destination hostname. |
| alias | string | An optional alias given when creating the connection. |
| port | integer | Destination port. |
| timeout | string | [#Default timeout|Timeout] length in textual representation. |
| newline | string | [#Default newline|The line break sequence] used by `Write` keyword. |
| prompt | string | [#Default prompt|Prompt character sequence] for `Read Until Prompt`. |
| term_type | string | Type of the [#Default terminal settings|virtual terminal]. |
| width | integer | Width of the [#Default terminal settings|virtual terminal]. |
| height | integer | Height of the [#Default terminal settings|virtual terminal]. |
| path_separator | string | [#Default path separator|The path separator] used on the remote host. |
| encoding | string | [#Default encoding|The encoding] used for inputs and outputs. |
If there is no connection, an object having `index` and `host` as `None`
is returned, rest of its attributes having their values as configuration
defaults.
If you want the information for all the open connections, use
`Get Connections`.
Getting connection information of the current connection:
| Open Connection | far.server.com |
| Open Connection | near.server.com | prompt=>> | # Current connection |
| ${nearhost}= | Get Connection | |
| Should Be Equal | ${nearhost.host} | near.server.com |
| Should Be Equal | ${nearhost.index} | 2 |
| Should Be Equal | ${nearhost.prompt} | >> |
| Should Be Equal | ${nearhost.term_type} | vt100 | # From defaults |
Getting connection information using an index:
| Open Connection | far.server.com |
| Open Connection | near.server.com | # Current connection |
| ${farhost}= | Get Connection | 1 |
| Should Be Equal | ${farhost.host} | far.server.com |
Getting connection information using an alias:
| Open Connection | far.server.com | alias=far |
| Open Connection | near.server.com | # Current connection |
| ${farhost}= | Get Connection | far |
| Should Be Equal | ${farhost.host} | far.server.com |
| Should Be Equal | ${farhost.alias} | far |
This keyword can also return plain connection attributes instead of
the whole connection object. This can be adjusted using the boolean
arguments `index`, `host`, `alias`, and so on, that correspond to
the attribute names of the object. If such arguments are given, and
they evaluate to true (e.g. any non-empty string except `false` or
`False`), only the respective connection attributes are returned.
Note that attributes are always returned in the same order arguments
are specified in the signature.
| Open Connection | my.server.com | alias=example |
| ${host}= | Get Connection | host=True |
| Should Be Equal | ${host} | my.server.com |
| ${host} | ${alias}= | Get Connection | host=yes | alias=please |
| Should Be Equal | ${host} | my.server.com |
| Should Be Equal | ${alias} | example |
Getting only certain attributes is especially useful when using this
library via the Remote library interface. This interface does not
support returning custom objects, but individual attributes can be
returned just fine.
This keyword logs the connection information with log level `INFO`.
New in SSHLibrary 2.0.
"""
if not index_or_alias:
index_or_alias = self._connections.current_index
try:
config = self._connections.get_connection(index_or_alias).config
except RuntimeError:
config = SSHClient(None).config
self._info(str(config))
return_values = tuple(self._get_config_values(config, index, host,
alias, port, timeout,
newline, prompt,
term_type, width, height,
encoding))
if not return_values:
return config
if len(return_values) == 1:
return return_values[0]
return return_values
def _info(self, msg):
self._log(msg, 'INFO')
def _log(self, msg, level=None):
level = self._active_loglevel(level)
msg = msg.strip()
if not msg:
return
if logger:
logger.write(msg, level)
else:
print '*%s* %s' % (level, msg)
def _active_loglevel(self, level):
if level is None:
return self._config.loglevel
if isinstance(level, basestring) and \
level.upper() in ['TRACE', 'DEBUG', 'INFO', 'WARN', 'HTML']:
return level.upper()
raise AssertionError("Invalid log level '%s'." % level)
def _get_config_values(self, config, index, host, alias, port, timeout,
newline, prompt, term_type, width, height, encoding):
if self._output_wanted(index):
yield config.index
if self._output_wanted(host):
yield config.host
if self._output_wanted(alias):
yield config.alias
if self._output_wanted(port):
yield config.port
if self._output_wanted(timeout):
yield config.timeout
if self._output_wanted(newline):
yield config.newline
if self._output_wanted(prompt):
yield config.prompt
if self._output_wanted(term_type):
yield config.term_type
if self._output_wanted(width):
yield config.width
if self._output_wanted(height):
yield config.height
if self._output_wanted(encoding):
yield config.encoding
def _output_wanted(self, value):
return value and str(value).lower() != 'false'
def get_connections(self):
"""Return information about all the open connections.
This keyword returns a list of objects that are identical to the ones
returned by `Get Connection`.
Example:
| Open Connection | near.server.com | timeout=10s |
| Open Connection | far.server.com | timeout=5s |
| ${nearhost} | ${farhost}= | Get Connections |
| Should Be Equal | ${nearhost.host} | near.server.com |
| Should Be Equal As Integers | ${nearhost.timeout} | 10 |
| Should Be Equal As Integers | ${farhost.port} | 22 |
| Should Be Equal As Integers | ${farhost.timeout} | 5 |
This keyword logs the information of connections with log level `INFO`.
"""
configs = [c.config for c in self._connections._connections]
for c in configs:
self._info(str(c))
return configs
def login(self, username, password, delay='0.5 seconds'):
"""Logs into the SSH server with the given `username` and `password`.
Connection must be opened before using this keyword.
This keyword reads, returns and logs the server output after logging in.
If the [#Default prompt|prompt is set], everything until the prompt
is read. Otherwise the output is read using the `Read` keyword with
the given `delay`. The output is logged using the [#Default loglevel|
default log level].
Example that logs in and returns the output:
| Open Connection | linux.server.com |
| ${output}= | Login | johndoe | secretpasswd |
| Should Contain | ${output} | Last login at |
Example that logs in and returns everything until the prompt:
| Open Connection | linux.server.com | prompt=$ |
| ${output}= | Login | johndoe | secretpasswd |
| Should Contain | ${output} | johndoe@linux:~$ |
Argument `delay` was added in SSHLibrary 2.0.
"""
return self._login(self.current.login, username, password, delay)
def login_with_public_key(self, username, keyfile=None, password='',
delay='0.5 seconds'):
"""Logs into the SSH server using key-based authentication.
Connection must be opened before using this keyword.
`username` is the username on the remote machine.
`keyfile` is a path to a valid OpenSSH private key file on the local
filesystem.
`password` is used to unlock the `keyfile` if unlocking is required.
This keyword reads, returns and logs the server output after logging in.
If the [#Default prompt|prompt is set], everything until the prompt
is read. Otherwise the output is read using the `Read` keyword with
the given `delay`. The output is logged using the [#Default loglevel|
default log level].
Example that logs in using a private key and returns the output:
| Open Connection | linux.server.com |
| ${output}= | Login With Public Key | johndoe | /home/johndoe/.ssh/id_rsa |
| Should Contain | ${motd} | Last login at |
With locked private keys, the keyring `password` is required:
| Open Connection | linux.server.com |
| Login With Public Key | johndoe | /home/johndoe/.ssh/id_dsa | keyringpasswd |
Argument `delay` was added in SSHLibrary 2.0.
"""
return self._login(self.current.login_with_public_key, username,
keyfile, password, delay)
def _login(self, login_method, username, *args):
self._info("Logging into '%s:%s' as '%s'."
% (self.current.config.host, self.current.config.port,
username))
try:
login_output = login_method(username, *args)
self._log('Read output: %s' % login_output)
return login_output
except SSHClientException, e:
raise RuntimeError(e)
def execute_command(self, command, return_stdout=True, return_stderr=False,
return_rc=False):
"""Executes `command` on the remote machine and returns its outputs.
This keyword executes the `command` and returns after the execution
has been finished. Use `Start Command` if the command should be
started on the background.
By default, only the standard output is returned:
| ${stdout}= | Execute Command | echo 'Hello John!' |
| Should Contain | ${stdout} | Hello John! |
Arguments `return_stdout`, `return_stderr` and `return_rc` are used
to specify, what is returned by this keyword.
If several arguments evaluate to true, multiple values are returned.
Non-empty strings, except `false` and `False`, evaluate to true.
If errors are needed as well, set the respective argument value to true:
| ${stdout} | ${stderr}= | Execute Command | echo 'Hello John!' | return_stderr=True |
| Should Be Empty | ${stderr} |
Often checking the return code is enough:
| ${rc}= | Execute Command | echo 'Hello John!' | return_stdout=False | return_rc=True |
| Should Be Equal As Integers | ${rc} | 0 | # succeeded |
The `command` is always executed in a new shell. Thus possible changes
to the environment (e.g. changing working directory) are not visible
to the later keywords:
| ${pwd}= | Execute Command | pwd |
| Should Be Equal | ${pwd} | /home/johndoe |
| Execute Command | cd /tmp |
| ${pwd}= | Execute Command | pwd |
| Should Be Equal | ${pwd} | /home/johndoe |
`Write` and `Read` can be used for
[#Interactive shells|running multiple commands in the same shell].
This keyword logs the executed command and its exit status with
log level `INFO`.
"""
self._info("Executing command '%s'." % command)
opts = self._legacy_output_options(return_stdout, return_stderr,
return_rc)
stdout, stderr, rc = self.current.execute_command(command)
return self._return_command_output(stdout, stderr, rc, *opts)
def start_command(self, command):
"""Starts execution of the `command` on the remote machine and returns immediately.
This keyword returns nothing and does not wait for the `command`
execution to be finished. If waiting for the output is required,
use `Execute Command` instead.
This keyword does not return any output generated by the started
`command`. Use `Read Command Output` to read the output:
| Start Command | echo 'Hello John!' |
| ${stdout}= | Read Command Output |
| Should Contain | ${stdout} | Hello John! |
The `command` is always executed in a new shell, similarly as with
`Execute Command`. Thus possible changes to the environment
(e.g. changing working directory) are not visible to the later keywords:
| Start Command | pwd |
| ${pwd}= | Read Command Output |
| Should Be Equal | ${pwd} | /home/johndoe |
| Start Command | cd /tmp |
| Start Command | pwd |
| ${pwd}= | Read Command Output |
| Should Be Equal | ${pwd} | /home/johndoe |
`Write` and `Read` can be used for
[#Interactive shells|running multiple commands in the same shell].
This keyword logs the started command with log level `INFO`.
"""
self._info("Starting command '%s'." % command)
self._last_command = command
self.current.start_command(command)
def read_command_output(self, return_stdout=True, return_stderr=False,
return_rc=False):
"""Returns outputs of the most recent started command.
At least one command must have been started using `Start Command`
before this keyword can be used.
By default, only the standard output of the started command is returned:
| Start Command | echo 'Hello John!' |
| ${stdout}= | Read Command Output |
| Should Contain | ${stdout} | Hello John! |
Arguments `return_stdout`, `return_stderr` and `return_rc` are used
to specify, what is returned by this keyword.
If several arguments evaluate to true, multiple values are returned.
Non-empty strings, except `false` and `False`, evaluate to true.
If errors are needed as well, set the argument value to true:
| Start Command | echo 'Hello John!' |
| ${stdout} | ${stderr}= | Read Command Output | return_stderr=True |
| Should Be Empty | ${stderr} |
Often checking the return code is enough:
| Start Command | echo 'Hello John!' |
| ${rc}= | Read Command Output | return_stdout=False | return_rc=True |
| Should Be Equal As Integers | ${rc} | 0 | # succeeded |
Using `Start Command` and `Read Command Output` follows
'last in, first out' (LIFO) policy, meaning that `Read Command Output`
operates on the most recent started command, after which that command
is discarded and its output cannot be read again.
If several commands have been started, the output of the last started
command is returned. After that, a subsequent call will return the
output of the new last (originally the second last) command:
| Start Command | echo 'HELLO' |
| Start Command | echo 'SECOND' |
| ${stdout}= | Read Command Output |
| Should Contain | ${stdout} | 'SECOND' |
| ${stdout}= | Read Command Output |
| Should Contain | ${stdout} | 'HELLO' |
This keyword logs the read command with log level `INFO`.
"""
self._info("Reading output of command '%s'." % self._last_command)
opts = self._legacy_output_options(return_stdout, return_stderr,
return_rc)
try:
stdout, stderr, rc = self.current.read_command_output()
except SSHClientException, msg:
raise RuntimeError(msg)
return self._return_command_output(stdout, stderr, rc, *opts)
def _legacy_output_options(self, stdout, stderr, rc):
if not isinstance(stdout, basestring):
return stdout, stderr, rc
stdout = stdout.lower()
if stdout == 'stderr':
return False, True, rc
if stdout == 'both':
return True, True, rc
return stdout, stderr, rc
def _return_command_output(self, stdout, stderr, rc, return_stdout,
return_stderr, return_rc):
self._info("Command exited with return code %d." % rc)
ret = []
if self._output_wanted(return_stdout):
ret.append(stdout.rstrip('\n'))
if self._output_wanted(return_stderr):
ret.append(stderr.rstrip('\n'))
if self._output_wanted(return_rc):
ret.append(rc)
if len(ret) == 1:
return ret[0]
return ret
def write(self, text, loglevel=None):
"""Writes the given `text` on the remote machine and appends a newline.
Appended [#Default newline|newline] can be configured.
This keyword returns and [#Interactive shells|consumes] the written
`text` (including the appended newline) from the server output.
The written `text` is logged. `loglevel` can be used to override
the [#Default loglevel|default log level].
Example:
| ${written}= | Write | su |
| Should Contain | ${written} | su | # Returns the consumed output |
| ${output}= | Read |
| Should Not Contain | ${output} | ${written} | # Was consumed from the output |
| Should Contain | ${output} | Password: |
| Write | invalidpasswd |
| ${output}= | Read |
| Should Contain | ${output} | su: Authentication failure |
See also `Write Bare`.
"""
self._write(text, add_newline=True)
return self._read_and_log(loglevel, self.current.read_until_newline)
def write_bare(self, text):
"""Writes the given `text` on the remote machine without appending a newline.
Unlike `Write`, this keyword returns and [#Interactive shells|consumes]
nothing.
Example:
| Write Bare | su\\n |
| ${output}= | Read |
| Should Contain | ${output} | su | # Was not consumed from output |
| Should Contain | ${output} | Password: |
| Write Bare | invalidpasswd\\n |
| ${output}= | Read |
| Should Contain | ${output} | su: Authentication failure |
See also `Write`.
"""
self._write(text)
def _write(self, text, add_newline=False):
try:
self.current.write(text, add_newline)
except SSHClientException, e:
raise RuntimeError(e)
def write_until_expected_output(self, text, expected, timeout,
retry_interval, loglevel=None):
"""Writes the given `text` repeatedly until `expected` appears in the server output.
This keyword returns nothing.
`text` is written without appending a newline and is
[#Interactive shells|consumed] from the server output before
`expected` is read.
If `expected` does not appear in output within `timeout`, this keyword
fails. `retry_interval` defines the time before writing `text` again.
Both `timeout` and `retry_interval` must be given in Robot Framework's
time format (e.g. `5`, `1 minute`, `2 min 3 s`, `4.5`).
The written `text` is logged. `loglevel` can be used to override
the [#Default loglevel|default log level].
This example will write `lsof -c python26\\n` (list all files
currently opened by python 2.6), until `myscript.py` appears in the
output. The command is written every 0.5 seconds. The keyword fails if
`myscript.py` does not appear in the server output in 5 seconds:
| Write Until Expected Output | lsof -c python26\\n | expected=myscript.py | timeout=5s | retry_interval=0.5s |
"""
self._read_and_log(loglevel, self.current.write_until_expected, text,
expected, timeout, retry_interval)
def read(self, loglevel=None, delay=None):
"""Consumes and returns everything available on the server output.
If `delay` is given, this keyword waits that amount of time and reads
output again. This wait-read cycle is repeated as long as further reads
return more output or the [#Default timeout|timeout] expires.
`delay` must be given in Robot Framework's time format (e.g. `5`,
`4.5s`, `3 minutes`, `2 min 3 sec`) that is explained in detail in
the User Guide.
This keyword is most useful for reading everything from
the server output, thus clearing it.
The read output is logged. `loglevel` can be used to override
the [#Default loglevel|default log level].
Example:
| Open Connection | my.server.com |
| Login | johndoe | secretpasswd |
| Write | sudo su - | |
| ${output}= | Read | delay=0.5s |
| Should Contain | ${output} | [sudo] password for johndoe: |
| Write | secretpasswd | |
| ${output}= | Read | loglevel=WARN | # Shown in the console due to loglevel |
| Should Contain | ${output} | root@ |
See `interactive shells` for more information about writing and reading
in general.
Argument `delay` was added in SSHLibrary 2.0.
"""
return self._read_and_log(loglevel, self.current.read, delay)
def read_until(self, expected, loglevel=None):
"""Consumes and returns the server output until `expected` is encountered.
Text up until and including the `expected` will be returned.
If [#Default timeout|the timeout] expires before the match is found,
this keyword fails.
The read output is logged. `loglevel` can be used to override
the [#Default loglevel|default log level].
Example:
| Open Connection | my.server.com |
| Login | johndoe | ${PASSWORD} |
| Write | sudo su - | |
| ${output}= | Read Until | : |
| Should Contain | ${output} | [sudo] password for johndoe: |
| Write | ${PASSWORD} | |
| ${output}= | Read Until | @ |
| Should End With | ${output} | root@ |
See also `Read Until Prompt` and `Read Until Regexp` keywords. For more
details about reading and writing in general, see `interactive shells`
section.
"""
return self._read_and_log(loglevel, self.current.read_until, expected)
def read_until_prompt(self, loglevel=None):
"""Consumes and returns the server output until the prompt is found.
Text up and until prompt is returned. [#Default prompt|The prompt must
be set] before this keyword is used.
If [#Default timeout|the timeout] expires before the match is found,
this keyword fails.
This keyword is useful for reading output of a single command when
output of previous command has been read and that command does not
produce prompt characters in its output.
The read output is logged. `loglevel` can be used to override
the [#Default loglevel|default log level].
Example:
| Open Connection | my.server.com | prompt=$ |
| Login | johndoe | ${PASSWORD} |
| Write | sudo su - | |
| Write | ${PASSWORD} | |
| Set Client Configuration | prompt=# | # For root, the prompt is # |
| ${output}= | Read Until Prompt | |
| Should End With | ${output} | root@myserver:~# |
See also `Read Until` and `Read Until Regexp` keywords. For more
details about reading and writing in general, see `interactive shells`
section.
"""
return self._read_and_log(loglevel, self.current.read_until_prompt)
def read_until_regexp(self, regexp, loglevel=None):
"""Consumes and returns the server output until a match to `regexp` is found.
`regexp` can be a pattern or a compiled regexp object.
Text up until and including the `regexp` will be returned.
Regular expression check is implemented using the Python
[http://docs.python.org/2/library/re.html|re module]. Python's regular
expression syntax is derived from Perl, and it is thus also very
similar to the syntax used, for example, in Java, Ruby and .NET.
Things to note about the `regexp` syntax:
- Backslash is an escape character in the test data, and possible
backslashes in the pattern must thus be escaped with another backslash
(e.g. '\\\\d\\\\w+').
- Possible flags altering how the expression is parsed (e.g.
re.IGNORECASE, re.MULTILINE) can be set by prefixing the pattern with
the '(?iLmsux)' group (e.g. '(?im)pattern'). The available flags are
'IGNORECASE': 'i', 'MULTILINE': 'm', 'DOTALL': 's', 'VERBOSE': 'x',
'UNICODE': 'u', and 'LOCALE': 'L'.
If [#Default timeout|the timeout] expires before the match is found,
this keyword fails.
The read output is logged. `loglevel` can be used to override
the [#Default loglevel|default log level].
Example:
| Open Connection | my.server.com |
| Login | johndoe | ${PASSWORD} |
| Write | sudo su - | |
| ${output}= | Read Until Regexp | \\\\[.*\\\\].*: |
| Should Contain | ${output} | [sudo] password for johndoe: |
| Write | ${PASSWORD} | |
| ${output}= | Read Until Regexp | .*@ |
| Should Contain | ${output} | root@ |
See also `Read Until` and `Read Until Prompt` keywords. For more
details about reading and writing in general, see `interactive shells`
section.
"""
return self._read_and_log(loglevel, self.current.read_until_regexp,
regexp)
def _read_and_log(self, loglevel, reader, *args):
try:
output = reader(*args)
except SSHClientException, e:
raise RuntimeError(e)
self._log(output, loglevel)
return output
def get_file(self, source, destination='.'):
"""Downloads file(s) from the remote machine to the local machine.
`source` is a path on the remote machine. Both absolute paths and
paths relative to the current working directory are supported.
If the source contains wildcards explained in `pattern matching`,
all files matching it are downloaded. In this case `destination`
must always be a directory.
`destination` is the target path on the local machine. Both absolute
paths and paths relative to the current working directory are supported.
`path_separator` was *removed* in SSHLibrary 2.1. Use [#Default
path separator|the library or the connection specific setting] instead.
Examples:
| Get File | /var/log/auth.log | /tmp/ |
| Get File | /tmp/example.txt | C:\\\\temp\\\\new_name.txt |
| Get File | /path/to/*.txt |
The local `destination` is created using the rules explained below:
1. If the `destination` is an existing file, the `source` file is
downloaded over it.
2. If the `destination` is an existing directory, the `source` file is
downloaded into it. Possible file with the same name is overwritten.
3. If the `destination` does not exist and it ends with the path
separator of the local operating system, it is considered a
directory. The directory is then created and the `source` file is
downloaded into it. Possible missing intermediate directories
are also created.
4. If the `destination` does not exist and does not end with the local
path separator, it is considered a file. The `source` file is
downloaded and saved using that file name, and possible missing
intermediate directories are also created.
5. If `destination` is not given, the current working directory on
the local machine is used as the destination. This is typically
the directory where the test execution was started and thus
accessible using built-in `${EXECDIR}` variable.
Argument `path_separator` was deprecated in SSHLibrary 2.0.
See also `Get Directory`.
"""
return self._run_sftp_command(self.current.get_file, source,
destination)
def get_directory(self, source, destination='.', recursive=False):
"""Downloads a directory, including its content, from the remote machine to the local machine.
`source` is a path on the remote machine. Both absolute paths and
paths relative to the current working directory are supported.
`destination` is the target path on the local machine. Both absolute
paths and paths relative to the current working directory are supported.
`recursive` specifies, whether to recursively download all
subdirectories inside `source`. Subdirectories are downloaded if
the argument value evaluates to true.
Examples:
| Get Directory | /var/logs | /tmp |
| Get Directory | /var/logs | /tmp/non/existing |
| Get Directory | /var/logs |
| Get Directory | /var/logs | recursive=True |
The local `destination` is created as following:
1. If `destination` is an existing path on the local machine,
`source` directory is downloaded into it.
2. If `destination` does not exist on the local machine, it is created
and the content of `source` directory is downloaded into it.
3. If `destination` is not given, `source` directory is downloaded into
the current working directory on the local machine. This is typically
the directory where the test execution was started and thus
accessible using built-in `${EXECDIR}` variable.
New in SSHLibrary 2.0.
See also `Get File`.
"""
return self._run_sftp_command(self.current.get_directory, source,
destination, recursive)
def put_file(self, source, destination='.', mode='0744', newline=''):
"""Uploads file(s) from the local machine to the remote machine.
`source` is the path on the local machine. Both absolute paths and
paths relative to the current working directory are supported.
If the source contains wildcards explained in `pattern matching`,
all files matching it are uploaded. In this case `destination`
must always be a directory.
`destination` is the target path on the remote machine. Both absolute
paths and paths relative to the current working directory are supported.
`mode` can be used to set the target file permission.
Numeric values are accepted. The default value is `0744` (-rwxr--r--).
`newline` can be used to force the line break characters that are
written to the remote files. Valid values are `LF` and `CRLF`.
`path_separator` was *removed* in SSHLibrary 2.1. Use [#Default
path separator|the library or the connection specific setting] instead.
Examples:
| Put File | /path/to/*.txt |
| Put File | /path/to/*.txt | /home/groups/robot | mode=0770 |
| Put File | /path/to/*.txt | newline=CRLF |
The remote `destination` is created as following:
1. If `destination` is an existing file, `source` file is uploaded
over it.
2. If `destination` is an existing directory, `source` file is
uploaded into it. Possible file with same name is overwritten.
3. If `destination` does not exist and it ends with [#Default path
separator|the path separator], it is considered a directory.
The directory is then created and `source` file uploaded into it.
Possibly missing intermediate directories are also created.
4. If `destination` does not exist and it does not end with [#Default
path separator|the path separator], it is considered a file.
If the path to the file does not exist, it is created.
5. If `destination` is not given, the user's home directory
on the remote machine is used as the destination.
See also `Put Directory`.
"""
return self._run_sftp_command(self.current.put_file, source,
destination, mode, newline)
def put_directory(self, source, destination='.', mode='0744', newline='',
recursive=False):
"""Uploads a directory, including its content, from the local machine to the remote machine.
`source` is the path on the local machine. Both absolute paths and
paths relative to the current working directory are supported.
`destination` is the target path on the remote machine. Both absolute
paths and paths relative to the current working directory are supported.
`mode` can be used to set the target file permission.
Numeric values are accepted. The default value is `0744` (-rwxr--r--).
`newline` can be used to force the line break characters that are
written to the remote files. Valid values are `LF` and `CRLF`.
`recursive` specifies, whether to recursively upload all
subdirectories inside `source`. Subdirectories are uploaded if the
argument value evaluates to true.
Examples:
| Put Directory | /var/logs | /tmp |
| Put Directory | /var/logs | /tmp/non/existing |
| Put Directory | /var/logs |
| Put Directory | /var/logs | recursive=True |
| Put Directory | /var/logs | /home/groups/robot | mode=0770 |
| Put Directory | /var/logs | newline=CRLF |
The remote `destination` is created as following:
1. If `destination` is an existing path on the remote machine,
`source` directory is uploaded into it.
2. If `destination` does not exist on the remote machine, it is
created and the content of `source` directory is uploaded into it.
3. If `destination` is not given, `source` directory is typically
uploaded to user's home directory on the remote machine.
New in SSHLibrary 2.0.
See also `Put File`.
"""
return self._run_sftp_command(self.current.put_directory, source,
destination, mode, newline, recursive)
def _run_sftp_command(self, command, *args):
try:
files = command(*args)
except SSHClientException, e:
raise RuntimeError(e)
for src, dst in files:
self._info("'%s' -> '%s'" % (src, dst))
def file_should_exist(self, path):
"""Fails if the given `path` does NOT point to an existing file.
Example:
| File Should Exist | /boot/initrd.img |
Note that symlinks are followed:
| File Should Exist | /initrd.img | # Points to boot/initrd.img |
New in SSHLibrary 2.0.
"""
if not self.current.is_file(path):
raise AssertionError("File '%s' does not exist." % path)
def file_should_not_exist(self, path):
"""Fails if the given `path` points to an existing file.
Example:
| File Should Not Exist | /non/existing |
Note that this keyword follows symlinks. Thus the example fails if
`/non/existing` is a link that points an existing file.
New in SSHLibrary 2.0.
"""
if self.current.is_file(path):
raise AssertionError("File '%s' exists." % path)
def directory_should_exist(self, path):
"""Fails if the given `path` does not point to an existing directory.
Example:
| Directory Should Exist | /usr/share/man |
Note that symlinks are followed:
| Directory Should Exist | /usr/local/man | # Points to /usr/share/man/ |
New in SSHLibrary 2.0.
"""
if not self.current.is_dir(path):
raise AssertionError("Directory '%s' does not exist." % path)
def directory_should_not_exist(self, path):
"""Fails if the given `path` points to an existing directory.
Example:
| Directory Should Not Exist | /non/existing |
Note that this keyword follows symlinks. Thus the example fails if
`/non/existing` is a link that points to an existing directory.
New in SSHLibrary 2.0.
"""
if self.current.is_dir(path):
raise AssertionError("Directory '%s' exists." % path)
def list_directory(self, path, pattern=None, absolute=False):
"""Returns and logs items in the remote `path`, optionally filtered with `pattern`.
`path` is a path on the remote machine. Both absolute paths and
paths relative to the current working directory are supported.
If `path` is a symlink, it is followed.
Item names are returned in case-sensitive alphabetical order,
e.g. ['A Name', 'Second', 'a lower case name', 'one more'].
Implicit directories `.` and `..` are not returned. The returned items
are automatically logged.
By default, the item names are returned relative to the given
remote path (e.g. `file.txt`). If you want them be returned in the
absolute format (e.g. `/home/johndoe/file.txt`), set the `absolute`
argument to any non-empty string.
If `pattern` is given, only items matching it are returned. The pattern
matching syntax is explained in `pattern matching`.
Examples (using also other `List Directory` variants):
| @{items}= | List Directory | /home/johndoe |
| @{files}= | List Files In Directory | /tmp | *.txt | absolute=True |
If you are only interested in directories or files,
use `List Files In Directory` or `List Directories In Directory`,
respectively.
New in SSHLibrary 2.0.
"""
try:
items = self.current.list_dir(path, pattern, absolute)
except SSHClientException, msg:
raise RuntimeError(msg)
self._info('%d item%s:\n%s' % (len(items), plural_or_not(items),
'\n'.join(items)))
return items
def list_files_in_directory(self, path, pattern=None, absolute=False):
"""A wrapper for `List Directory` that returns only files.
New in SSHLibrary 2.0.
"""
try:
files = self.current.list_files_in_dir(path, pattern, absolute)
except SSHClientException, msg:
raise RuntimeError(msg)
files = self.current.list_files_in_dir(path, pattern, absolute)
self._info('%d file%s:\n%s' % (len(files), plural_or_not(files),
'\n'.join(files)))
return files
def list_directories_in_directory(self, path, pattern=None, absolute=False):
"""A wrapper for `List Directory` that returns only directories.
New in SSHLibrary 2.0.
"""
try:
dirs = self.current.list_dirs_in_dir(path, pattern, absolute)
except SSHClientException, msg:
raise RuntimeError(msg)
self._info('%d director%s:\n%s' % (len(dirs),
'y' if len(dirs) == 1 else 'ies',
'\n'.join(dirs)))
return dirs
class _DefaultConfiguration(Configuration):
def __init__(self, timeout, newline, prompt, loglevel, term_type, width,
height, path_separator, encoding):
super(_DefaultConfiguration, self).__init__(
timeout=TimeEntry(timeout),
newline=NewlineEntry(newline),
prompt=StringEntry(prompt),
loglevel=LogLevelEntry(loglevel),
term_type=StringEntry(term_type),
width=IntegerEntry(width),
height=IntegerEntry(height),
path_separator=StringEntry(path_separator),
encoding=StringEntry(encoding)
) | /robotframework-sshlibrary-forwardagent-2.1.3.1.tar.gz/robotframework-sshlibrary-forwardagent-2.1.3.1/src/SSHLibrary/library.py | 0.798737 | 0.511229 | library.py | pypi |
from robot import utils
class ConfigurationException(Exception):
"""Raised when creating, updating or accessing a Configuration entry fails.
"""
pass
class Configuration(object):
"""A simple configuration class.
Configuration is defined with keyword arguments, in which the value must
be an instance of :py:class:`Entry`. Different subclasses of `Entry` can
be used to handle common types and conversions.
Example::
cfg = Configuration(name=StringEntry('initial'),
age=IntegerEntry('42'))
assert cfg.name == initial
assert cfg.age == 42
cfg.update(name='John Doe')
assert cfg.name == 'John Doe'
"""
def __init__(self, **entries):
self._config = entries
def __str__(self):
return '\n'.join(['%s=%s' % (k, v) for k, v in self._config.items()])
def update(self, **entries):
"""Update configuration entries.
:param entries: entries to be updated, keyword argument names must
match existing entry names. If any value in `**entries` is None,
the corresponding entry is *not* updated.
See `__init__` for an example.
"""
for name, value in entries.items():
if value is not None:
self._config[name].set(value)
def get(self, name):
"""Return entry corresponding to name."""
return self._config[name]
def __getattr__(self, name):
if name in self._config:
return self._config[name].value
msg = "Configuration parameter '%s' is not defined." % name
raise ConfigurationException(msg)
class Entry(object):
"""A base class for values stored in :py:class:`Configuration`.
:param:`initial` the initial value of this entry.
"""
def __init__(self, initial=None):
self._value = self._create_value(initial)
def __str__(self):
return str(self._value)
@property
def value(self):
return self._value
def set(self, value):
self._value = self._parse_value(value)
def _create_value(self, value):
if value is None:
return None
return self._parse_value(value)
class StringEntry(Entry):
"""String value to be stored in :py:class:`Configuration`."""
def _parse_value(self, value):
return str(value)
class IntegerEntry(Entry):
"""Integer value to be stored in stored in :py:class:`Configuration`.
Given value is converted to string using `int()`.
"""
def _parse_value(self, value):
return int(value)
class TimeEntry(Entry):
"""Time string to be stored in :py:class:`Configuration`.
Given time string will be converted to seconds using
:py:func:`robot.utils.timestr_to_secs`.
"""
def _parse_value(self, value):
value = str(value)
return utils.timestr_to_secs(value) if value else None
def __str__(self):
return utils.secs_to_timestr(self._value)
class LogLevelEntry(Entry):
"""Log level to be stored in :py:class:`Configuration`.
Given string must be on of 'TRACE', 'DEBUG', 'INFO' or 'WARN', case
insensitively.
"""
LEVELS = ('TRACE', 'DEBUG', 'INFO', 'WARN')
def _parse_value(self, value):
value = str(value).upper()
if value not in self.LEVELS:
raise ConfigurationException("Invalid log level '%s'." % value)
return value
class NewlineEntry(Entry):
"""New line sequence to be stored in :py:class:`Configuration`.
Following conversion are performed on the given string:
* 'LF' -> '\n'
* 'CR' -> '\r'
"""
def _parse_value(self, value):
value = str(value).upper()
return value.replace('LF', '\n').replace('CR', '\r') | /robotframework-sshlibrary-forwardagent-2.1.3.1.tar.gz/robotframework-sshlibrary-forwardagent-2.1.3.1/src/SSHLibrary/config.py | 0.942915 | 0.561455 | config.py | pypi |
from __future__ import print_function
import re
import os
from .deco import keyword
try:
from robot.api import logger
except ImportError:
logger = None
from .sshconnectioncache import SSHConnectionCache
from .abstractclient import SSHClientException
from .client import SSHClient
from .config import (Configuration, IntegerEntry, LogLevelEntry, NewlineEntry,
StringEntry, TimeEntry)
from .utils import ConnectionCache, is_string, is_truthy, plural_or_not
from .version import VERSION
__version__ = VERSION
class SSHLibrary(object):
"""SSHLibrary is a Robot Framework test library for SSH and SFTP.
This document explains how to use keywords provided by SSHLibrary.
For information about installation, support, and more please visit the
[https://github.com/robotframework/SSHLibrary|project page].
For more information about Robot Framework, see http://robotframework.org.
The library has the following main usages:
- Executing commands on the remote machine, either with blocking or
non-blocking behaviour (see `Execute Command` and `Start Command`,
respectively).
- Writing and reading in an interactive shell (e.g. `Read` and `Write`).
- Transferring files and directories over SFTP (e.g. `Get File` and
`Put Directory`).
- Ensuring that files or directories exist on the remote machine
(e.g. `File Should Exist` and `Directory Should Not Exist`).
This library works both with Python and Jython, but uses different
SSH modules internally depending on the interpreter. See
[http://robotframework.org/SSHLibrary/#installation|installation instructions]
for more details about the dependencies. IronPython is unfortunately
not supported. Python 3 is supported starting from SSHLibrary 3.0.0.
== Table of contents ==
- `Connections and login`
- `Configuration`
- `Executing commands`
- `Interactive shells`
- `Pattern matching`
- `Example`
- `Importing`
- `Time format`
- `Boolean arguments`
- `Shortcuts`
- `Keywords`
= Connections and login =
SSHLibrary supports multiple connections to different hosts.
New connections are opened with `Open Connection`.
Login into the host is done either with username and password
(`Login`) or with public/private key pair (`Login With Public key`).
Only one connection can be active at a time. This means that most of the
keywords only affect the active connection. Active connection can be
changed with `Switch Connection`.
= Configuration =
Default settings for all the upcoming connections can be configured on
`library importing` or later with `Set Default Configuration`.
Using `Set Default Configuration` does not affect the already open
connections. Settings of the current connection can be configured
with `Set Client Configuration`. Settings of another, non-active connection,
can be configured by first using `Switch Connection` and then
`Set Client Configuration`.
Most of the defaults can be overridden per connection by defining them
as arguments to `Open Connection`. Otherwise the defaults are used.
== Configurable per connection ==
=== Prompt ===
Argument ``prompt`` defines the character sequence used by `Read Until Prompt`
and must be set before that keyword can be used.
If you know the prompt on the remote machine, it is recommended to set it
to ease reading output from the server after using `Write`. In addition to
that, `Login` and `Login With Public Key` can read the server output more
efficiently when the prompt is set.
Prompt can be specified either as a normal string or as a regular expression.
The latter is especially useful if the prompt changes as a result of
the executed commands. Prompt can be set to be a regular expression by
giving the prompt argument a value starting with ``REGEXP:`` followed by
the actual regular expression like ``prompt=REGEXP:[$#]``. See the
`Regular expressions` section for more details about the syntax.
The support for regular expressions is new in SSHLibrary 3.0.0.
=== Encoding ===
Argument ``encoding`` defines the
[https://docs.python.org/3/library/codecs.html#standard-encodings|
character encoding] of input and output sequences. The default encoding
is UTF-8.
It is also possible to configure the error handler to use if encoding or
decoding characters fails. Accepted values are the same that encode/decode
functions in Python strings accept. In practice the following values
are the most useful:
- ``ignore``: ignore characters that cannot be decoded
- ``strict``: fail if characters cannot be decoded
- ``replace``: replace characters that cannot be decoded with replacement
character
By default ``encoding_errors`` is set to ``strict``. ``encoding_errors``
is new in SSHLibrary 3.7.0.
=== Path separator ===
Argument ``path_separator`` must be set to the one known by the operating
system and the SSH server on the remote machine. The path separator is
used by keywords `Get File`, `Put File`, `Get Directory` and
`Put Directory` for joining paths correctly on the remote host.
The default path separator is forward slash ``/`` which works on
Unix-like machines. On Windows the path separator to use depends on
the SSH server. Some servers use forward slash and others backslash,
and users need to configure the ``path_separator`` accordingly. Notice
that using a backslash in Robot Framework test data requires doubling
it like ``\\\\``.
The path separator can be configured on `library importing` or later,
using `Set Default Configuration`, `Set Client Configuration` and `Open
Connection`.
=== Timeout ===
Argument ``timeout`` is used by `Read Until` variants. The default value
is ``3 seconds``. See `time format` below for supported timeout syntax.
=== Newline ===
Argument ``newline`` is the line break sequence used by `Write` keyword
and must be set according to the operating system on the remote machine.
The default value is ``LF`` (same as ``\\n``) which is used on Unix-like
operating systems. With Windows remote machines, you need to set this to
``CRLF`` (``\\r\\n``).
=== Terminal settings ===
Argument ``term_type`` defines the virtual terminal type, and arguments
``width`` and ``height`` can be used to control its virtual size.
=== Escape ansi sequneces ===
Argument ``escape_ansi`` is a parameter used in order to escape ansi
sequences that appear in the output when the remote machine has
Windows as operating system.
== Not configurable per connection ==
=== Loglevel ===
Argument ``loglevel`` sets the log level used to log the output read by
`Read`, `Read Until`, `Read Until Prompt`, `Read Until Regexp`, `Write`,
`Write Until Expected Output`, `Login` and `Login With Public Key`.
The default level is ``INFO``.
``loglevel`` is not configurable per connection but can be overridden by
passing it as an argument to the most of the aforementioned keywords.
Possible argument values are ``TRACE``, ``DEBUG``, ``INFO``, ``WARN``
and ``NONE`` (no logging).
= Executing commands =
For executing commands on the remote machine, there are two possibilities:
- `Execute Command` and `Start Command`.
The command is executed in a new shell on the remote machine,
which means that possible changes to the environment
(e.g. changing working directory, setting environment variables, etc.)
are not visible to the subsequent keywords.
- `Write`, `Write Bare`, `Write Until Expected Output`, `Read`,
`Read Until`, `Read Until Prompt` and `Read Until Regexp`.
These keywords operate in an interactive shell, which means that changes
to the environment are visible to the subsequent keywords.
= Interactive shells =
`Write`, `Write Bare`, `Write Until Expected Output`, `Read`,
`Read Until`, `Read Until Prompt` and `Read Until Regexp` can be used
to interact with the server within the same shell.
== Consumed output ==
All of these keywords, except `Write Bare`, consume the read or the written
text from the server output before returning. In practice this means that
the text is removed from the server output, i.e. subsequent calls to
`Read` keywords do not return text that was already read. This is
illustrated by the example below.
| `Write` | echo hello | | # Consumes written ``echo hello`` |
| ${stdout}= | `Read Until` | hello | # Consumes read ``hello`` and everything before it |
| `Should Contain` | ${stdout} | hello |
| ${stdout}= | `Read` | | # Consumes everything available |
| `Should Not Contain` | ${stdout} | hello | # ``hello`` was already consumed earlier |
The consumed text is logged by the keywords and their argument
``loglevel`` can be used to override the default `log level`.
`Login` and `Login With Public Key` consume everything on the server
output or if the `prompt` is set, everything until the prompt.
== Reading ==
`Read`, `Read Until`, `Read Until Prompt` and `Read Until Regexp` can be
used to read from the server. The read text is also consumed from
the server output.
`Read` reads everything available on the server output, thus clearing it.
`Read Until` variants read output up until and *including* ``expected``
text. These keywords will fail if the `timeout` expires before
``expected`` is found.
== Writing ==
`Write` and `Write Until Expected Output` consume the written text
from the server output while `Write Bare` does not.
These keywords do not return any output triggered by the written text.
To get the output, one of the `Read` keywords must be explicitly used.
= Pattern matching =
== Glob patterns ==
Some keywords allow their arguments to be specified as _glob patterns_
where:
| * | matches anything, even an empty string |
| ? | matches any single character |
| [chars] | matches any character inside square brackets (e.g. ``[abc]`` matches either ``a``, ``b`` or ``c``) |
| [!chars] | matches any character not inside square brackets |
Pattern matching is case-sensitive regardless the local or remote
operating system. Matching is implemented using Python's
[https://docs.python.org/3/library/fnmatch.html|fnmatch module].
== Regular expressions ==
Some keywords support pattern matching using regular expressions, which
are more powerful but also more complicated than `glob patterns`. This
library uses Python's regular expressions, which are introduced in the
[https://docs.python.org/3/howto/regex.html|Regular Expression HOWTO].
Remember that in Robot Framework data the backslash that is used a lot
in regular expressions is an escape character and needs to be doubled
to get a literal backslash. For example, ``\\\\d\\\\d\\\\s`` matches
two digits followed by a whitespace character.
Possible flags altering how the expression is parsed (e.g.
``re.IGNORECASE``, ``re.MULTILINE``) can be set by prefixing the pattern
with the ``(?iLmsux)`` group. The available flags are ``IGNORECASE``:
``i``, ``MULTILINE``: ``m``, ``DOTALL``: ``s``, ``VERBOSE``: ``x``,
``UNICODE``: ``u``, and ``LOCALE``: ``L``. For example, ``(?is)pat.ern``
uses ``IGNORECASE`` and ``DOTALL`` flags.
= Example =
| ***** Settings *****
| Documentation This example demonstrates executing commands on a remote machine
| ... and getting their output and the return code.
| ...
| ... Notice how connections are handled as part of the suite setup and
| ... teardown. This saves some time when executing several test cases.
|
| Library `SSHLibrary`
| Suite Setup `Open Connection And Log In`
| Suite Teardown `Close All Connections`
|
| ***** Variables *****
| ${HOST} localhost
| ${USERNAME} test
| ${PASSWORD} test
|
| ***** Test Cases *****
| Execute Command And Verify Output
| [Documentation] Execute Command can be used to ran commands on the remote machine.
| ... The keyword returns the standard output by default.
| ${output}= `Execute Command` echo Hello SSHLibrary!
| `Should Be Equal` ${output} Hello SSHLibrary!
|
| Execute Command And Verify Return Code
| [Documentation] Often getting the return code of the command is enough.
| ... This behaviour can be adjusted as Execute Command arguments.
| ${rc}= `Execute Command` echo Success guaranteed. return_stdout=False return_rc=True
| `Should Be Equal` ${rc} ${0}
|
| Executing Commands In An Interactive Session
| [Documentation] Execute Command always executes the command in a new shell.
| ... This means that changes to the environment are not persisted
| ... between subsequent Execute Command keyword calls.
| ... Write and Read Until variants can be used to operate in the same shell.
| `Write` cd ..
| `Write` echo Hello from the parent directory!
| ${output}= `Read Until` directory!
| `Should End With` ${output} Hello from the parent directory!
|
| ***** Keywords *****
| Open Connection And Log In
| `Open Connection` ${HOST}
| `Login` ${USERNAME} ${PASSWORD}
Save the content as file ``executing_command.txt`` and run:
``robot executing_commands.txt``
You may want to override the variables from commandline to try this out on
your remote machine:
``robot -v HOST:my.server.com -v USERNAME:johndoe -v PASSWORD:secretpasswd executing_commands.txt``
== Time format ==
All timeouts, delays or retry intervals can be given as numbers considered seconds
(e.g. ``0.5`` or ``42``) or in Robot Framework's time syntax
(e.g. ``1.5 seconds`` or ``1 min 30 s``). For more information about
the time syntax see the
[http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#time-format|Robot Framework User Guide].
= Boolean arguments =
Some keywords accept arguments that are handled as Boolean values true or
false. If such an argument is given as a string, it is considered false if
it is either an empty string or case-insensitively equal to ``false``,
``none`` or ``no``. Other strings are considered true regardless
their value, and other argument types are tested using the same
[http://docs.python.org/2/library/stdtypes.html#truth-value-testing|rules
as in Python].
True examples:
| `List Directory` | ${path} | recursive=True | # Strings are generally true. |
| `List Directory` | ${path} | recursive=yes | # Same as the above. |
| `List Directory` | ${path} | recursive=${TRUE} | # Python ``True`` is true. |
| `List Directory` | ${path} | recursive=${42} | # Numbers other than 0 are true. |
False examples:
| `List Directory` | ${path} | recursive=False | # String ``false`` is false. |
| `List Directory` | ${path} | recursive=no | # Also string ``no`` is false. |
| `List Directory` | ${path} | recursive=${EMPTY} | # Empty string is false. |
| `List Directory` | ${path} | recursive=${FALSE} | # Python ``False`` is false. |
Prior to SSHLibrary 3.1.0, all non-empty strings, including ``no`` and ``none``
were considered to be true. Considering ``none`` false is new in Robot Framework 3.0.3.
= Transfer files with SCP =
Secure Copy Protocol (SCP) is a way of secure transfer of files between hosts. It is based
on SSH protocol. The advantage it brings over SFTP is transfer speed. SFTP however can also be
used for directory listings and even editing files while transferring.
SCP can be enabled on keywords used for file transfer: `Get File`, `Get Directory`, `Put File`,
`Put Directory` by setting the ``scp`` value to ``TRANSFER`` or ``ALL``.
| OFF | Transfer is done using SFTP only. This is the default value |
| TRANSFER | Directory listings (needed for logging) will be done using SFTP. Actual file transfer is done with SCP. |
| ALL | Only SCP is used for file transfer. No logging available. |
There are some limitations to the current SCP implementation::
- When using SCP, files cannot be altered during transfer and ``newline`` argument does not work.
- If ``scp=ALL`` only ``source`` and ``destination`` arguments will work on the keywords. The directories are
transferred recursively. Also, when running with Jython `Put Directory` and `Get Directory` won't work due to
current Trilead implementation.
- If running with Jython you can encounter some encoding issues when transferring files with non-ascii characters.
SCP transfer was introduced in SSHLibrary 3.3.0.
== Preserving original times ==
SCP allows some configuration when transferring files and directories. One of this configuration is whether to
preserve the original modify time and access time of transferred files and directories. This is done using the
``scp_preserve_times`` argument. This argument works only when ``scp`` argument is set to ``TRANSFER`` or ``ALL``.
When moving directory with ``scp`` set to ``TRANSFER`` and ``scp_preserve_times`` is enabled only the files inside
the director will keep their original timestamps. Also, when running with Jython ``scp_preserve_times`` won't work
due to current current Trilead implementation.
``scp_preserve_times`` was introduced in SSHLibrary 3.6.0.
= Aliases =
SSHLibrary allows the use of an alias when opening a new connection using the parameter ``alias``.
| `Open Connection` | alias=connection1 |
These aliases can later be used with other keywords like `Get Connection` or `Switch Connection` in order to
get information respectively to switch to a certain connection that has that alias.
When a connection is closed, it is no longer possible to switch or get information about the other connections that
have the same alias as the closed one. If the same ``alias`` is used for more connections, keywords
`Switch Connection` and `Get Connection` will switch/get information only about the last opened connection with
that ``alias``.
| `Open Connection` | my.server.com | alias=conn |
| `Open Connection` | my.server.com | alias=conn |
| `Open Connection` | my.server.com | alias=conn2 |
| ${conn_info}= | `Get Connection` | conn |
| `Should Be Equal As Integers` | ${conn_info.index} | 2 |
| `Switch Connection` | conn |
| ${current_conn}= | `Get Connection` | conn |
| `Should Be Equal As Integers` | ${current_conn.index} | 2 |
Note that if a connection that has the same alias as other connections is closed trying to switch or get information
about the other connections that have the same alias is impossible.
| 'Open Connection` | my.server.com | alias=conn |
| 'Open Connection` | my.server.com | alias=conn |
| `Close Connection` |
| `Switch Connection` | conn |
| `Run Keyword And Expect Error` | Non-existing index or alias 'conn'. | `Switch Connection` | conn |
"""
ROBOT_LIBRARY_SCOPE = 'GLOBAL'
ROBOT_LIBRARY_VERSION = __version__
DEFAULT_TIMEOUT = '3 seconds'
DEFAULT_NEWLINE = 'LF'
DEFAULT_PROMPT = None
DEFAULT_LOGLEVEL = 'INFO'
DEFAULT_TERM_TYPE = 'vt100'
DEFAULT_TERM_WIDTH = 80
DEFAULT_TERM_HEIGHT = 24
DEFAULT_PATH_SEPARATOR = '/'
DEFAULT_ENCODING = 'UTF-8'
DEFAULT_ESCAPE_ANSI = False
DEFAULT_ENCODING_ERRORS = 'strict'
def __init__(self,
timeout=DEFAULT_TIMEOUT,
newline=DEFAULT_NEWLINE,
prompt=DEFAULT_PROMPT,
loglevel=DEFAULT_LOGLEVEL,
term_type=DEFAULT_TERM_TYPE,
width=DEFAULT_TERM_WIDTH,
height=DEFAULT_TERM_HEIGHT,
path_separator=DEFAULT_PATH_SEPARATOR,
encoding=DEFAULT_ENCODING,
escape_ansi=DEFAULT_ESCAPE_ANSI,
encoding_errors=DEFAULT_ENCODING_ERRORS):
"""SSHLibrary allows some import time `configuration`.
If the library is imported without any arguments, the library
defaults are used:
| Library | SSHLibrary |
Only arguments that are given are changed. In this example the
`timeout` is changed to ``10 seconds`` but other settings are left
to the library defaults:
| Library | SSHLibrary | 10 seconds |
The `prompt` does not have a default value and
must be explicitly set to be able to use `Read Until Prompt`.
Since SSHLibrary 3.0.0, the prompt can also be a regular expression:
| Library | SSHLibrary | prompt=REGEXP:[$#] |
Multiple settings are also possible. In the example below, the library
is brought into use with `newline` and `path separator` known by
Windows:
| Library | SSHLibrary | newline=CRLF | path_separator=\\\\ |
"""
self._connections = SSHConnectionCache()
self._config = _DefaultConfiguration(
timeout or self.DEFAULT_TIMEOUT,
newline or self.DEFAULT_NEWLINE,
prompt or self.DEFAULT_PROMPT,
loglevel or self.DEFAULT_LOGLEVEL,
term_type or self.DEFAULT_TERM_TYPE,
width or self.DEFAULT_TERM_WIDTH,
height or self.DEFAULT_TERM_HEIGHT,
path_separator or self.DEFAULT_PATH_SEPARATOR,
encoding or self.DEFAULT_ENCODING,
escape_ansi or self.DEFAULT_ESCAPE_ANSI,
encoding_errors or self.DEFAULT_ENCODING_ERRORS
)
self._last_commands = dict()
@property
def current(self):
return self._connections.current
@keyword(types=None)
def set_default_configuration(self, timeout=None, newline=None, prompt=None,
loglevel=None, term_type=None, width=None,
height=None, path_separator=None,
encoding=None, escape_ansi=None, encoding_errors=None):
"""Update the default `configuration`.
Please note that using this keyword does not affect the already
opened connections. Use `Set Client Configuration` to configure the
active connection.
Only parameters whose value is other than ``None`` are updated.
This example sets `prompt` to ``$``:
| `Set Default Configuration` | prompt=$ |
This example sets `newline` and `path separator` to the ones known
by Windows:
| `Set Default Configuration` | newline=CRLF | path_separator=\\\\ |
Sometimes you might want to use longer `timeout` for all the
subsequent connections without affecting the existing ones:
| `Set Default Configuration` | timeout=5 seconds |
| `Open Connection ` | local.server.com |
| `Set Default Configuration` | timeout=20 seconds |
| `Open Connection` | emea.server.com |
| `Open Connection` | apac.server.com |
| ${local} | ${emea} | ${apac}= | `Get Connections` |
| `Should Be Equal As Integers` | ${local.timeout} | 5 |
| `Should Be Equal As Integers` | ${emea.timeout} | 20 |
| `Should Be Equal As Integers` | ${apac.timeout} | 20 |
"""
self._config.update(timeout=timeout, newline=newline, prompt=prompt,
loglevel=loglevel, term_type=term_type, width=width,
height=height, path_separator=path_separator,
encoding=encoding, escape_ansi=escape_ansi, encoding_errors=encoding_errors)
def set_client_configuration(self, timeout=None, newline=None, prompt=None,
term_type=None, width=None, height=None,
path_separator=None, encoding=None, escape_ansi=None, encoding_errors=None):
"""Update the `configuration` of the current connection.
Only parameters whose value is other than ``None`` are updated.
In the following example, `prompt` is set for
the current connection. Other settings are left intact:
| `Open Connection ` | my.server.com |
| `Set Client Configuration` | prompt=$ |
| ${myserver}= | `Get Connection` |
| `Should Be Equal` | ${myserver.prompt} | $ |
Using keyword does not affect the other connections:
| `Open Connection` | linux.server.com | |
| `Set Client Configuration` | prompt=$ | | # Only linux.server.com affected |
| `Open Connection ` | windows.server.com | |
| `Set Client Configuration` | prompt=> | | # Only windows.server.com affected |
| ${linux} | ${windows}= | `Get Connections` |
| `Should Be Equal` | ${linux.prompt} | $ |
| `Should Be Equal` | ${windows.prompt} | > |
Multiple settings are possible. This example updates the
`terminal settings` of the current connection:
| `Open Connection` | 192.168.1.1 |
| `Set Client Configuration` | term_type=ansi | width=40 |
*Note:* Setting ``width`` and ``height`` does not work when using Jython.
"""
self.current.config.update(timeout=timeout, newline=newline,
prompt=prompt, term_type=term_type,
width=width, height=height,
path_separator=path_separator,
encoding=encoding, escape_ansi=escape_ansi,
encoding_errors=encoding_errors)
def enable_ssh_logging(self, logfile):
"""Enables logging of SSH protocol output to given ``logfile``.
All the existing and upcoming connections are logged onwards from
the moment the keyword was called.
``logfile`` is path to a file that is writable by the current local
user. If the file already exists, it will be overwritten.
Example:
| `Open Connection` | my.server.com | # Not logged |
| `Enable SSH Logging` | myserver.log |
| `Login` | johndoe | secretpasswd |
| `Open Connection` | build.local.net | # Logged |
| # Do something with the connections |
| # Check myserver.log for detailed debug information |
*Note:* This keyword does not work when using Jython.
"""
if SSHClient.enable_logging(logfile):
self._log('SSH log is written to <a href="%s">file</a>.' % logfile,
'HTML')
def open_connection(self, host, alias=None, port=22, timeout=None,
newline=None, prompt=None, term_type=None, width=None,
height=None, path_separator=None, encoding=None, escape_ansi=None, encoding_errors=None):
"""Opens a new SSH connection to the given ``host`` and ``port``.
The new connection is made active. Possible existing connections
are left open in the background.
Note that on Jython this keyword actually opens a connection and
will fail immediately on unreachable hosts. On Python the actual
connection attempt will not be done until `Login` is called.
This keyword returns the index of the new connection which can be used
later to switch back to it. Indices start from ``1`` and are reset
when `Close All Connections` is used.
Optional ``alias`` can be given for the connection and can be used for
switching between connections, similarly as the index. Multiple
connections with the same ``alias`` are allowed.
See `Switch Connection` for more details.
Connection parameters, like `timeout` and `newline` are documented in
`configuration`. If they are not defined as arguments, the library
defaults are used for the connection.
All the arguments, except ``host``, ``alias`` and ``port``
can be later updated with `Set Client Configuration`.
Port ``22`` is assumed by default:
| ${index}= | `Open Connection` | my.server.com |
Non-standard port may be given as an argument:
| ${index}= | `Open Connection` | 192.168.1.1 | port=23 |
Aliases are handy, if you need to switch back to the connection later:
| `Open Connection` | my.server.com | alias=myserver |
| # Do something with my.server.com |
| `Open Connection` | 192.168.1.1 |
| `Switch Connection` | myserver | | # Back to my.server.com |
Settings can be overridden per connection, otherwise the ones set on
`library importing` or with `Set Default Configuration` are used:
| Open Connection | 192.168.1.1 | timeout=1 hour | newline=CRLF |
| # Do something with the connection |
| `Open Connection` | my.server.com | # Default timeout | # Default line breaks |
The `terminal settings` are also configurable per connection:
| `Open Connection` | 192.168.1.1 | term_type=ansi | width=40 |
Starting with version 3.3.0, SSHLibrary understands ``Host`` entries from
``~/.ssh/config``. For instance, if the config file contains:
| Host | my_custom_hostname |
| | Hostname my.server.com |
The connection to the server can also be made like this:
| `Open connection` | my_custom_hostname |
``Host`` entries are not read from config file when running with Jython.
"""
timeout = timeout or self._config.timeout
newline = newline or self._config.newline
prompt = prompt or self._config.prompt
term_type = term_type or self._config.term_type
width = width or self._config.width
height = height or self._config.height
path_separator = path_separator or self._config.path_separator
encoding = encoding or self._config.encoding
escape_ansi = escape_ansi or self._config.escape_ansi
encoding_errors = encoding_errors or self._config.encoding_errors
client = SSHClient(host, alias, port, timeout, newline, prompt,
term_type, width, height, path_separator, encoding, escape_ansi, encoding_errors)
connection_index = self._connections.register(client, alias)
client.config.update(index=connection_index)
return connection_index
def switch_connection(self, index_or_alias):
"""Switches the active connection by index or alias.
``index_or_alias`` is either connection index (an integer) or alias
(a string). Index is got as the return value of `Open Connection`.
Alternatively, both index and alias can queried as attributes
of the object returned by `Get Connection`. If there exists more
connections with the same alias the keyword will switch to the last
opened connection that has that alias.
This keyword returns the index of the previous active connection,
which can be used to switch back to that connection later.
Example:
| ${myserver}= | `Open Connection` | my.server.com |
| `Login` | johndoe | secretpasswd |
| `Open Connection` | build.local.net | alias=Build |
| `Login` | jenkins | jenkins |
| `Switch Connection` | ${myserver} | | # Switch using index |
| ${username}= | `Execute Command` | whoami | # Executed on my.server.com |
| `Should Be Equal` | ${username} | johndoe |
| `Switch Connection` | Build | | # Switch using alias |
| ${username}= | `Execute Command` | whoami | # Executed on build.local.net |
| `Should Be Equal` | ${username} | jenkins |
"""
old_index = self._connections.current_index
if index_or_alias is None:
self.close_connection()
else:
self._connections.switch(index_or_alias)
return old_index
def close_connection(self):
"""Closes the current connection.
No other connection is made active by this keyword. Manually use
`Switch Connection` to switch to another connection.
Example:
| `Open Connection` | my.server.com |
| `Login` | johndoe | secretpasswd |
| `Get File` | results.txt | /tmp |
| `Close Connection` |
| # Do something with /tmp/results.txt |
"""
connections = self._connections
connections.close_current()
def close_all_connections(self):
"""Closes all open connections.
This keyword is ought to be used either in test or suite teardown to
make sure all the connections are closed before the test execution
finishes.
After this keyword, the connection indices returned by
`Open Connection` are reset and start from ``1``.
Example:
| `Open Connection` | my.server.com |
| `Open Connection` | build.local.net |
| # Do something with the connections |
| [Teardown] | `Close all connections` |
"""
self._connections.close_all()
def get_connection(self, index_or_alias=None, index=False, host=False,
alias=False, port=False, timeout=False, newline=False,
prompt=False, term_type=False, width=False, height=False,
encoding=False, escape_ansi=False):
"""Returns information about the connection.
Connection is not changed by this keyword, use `Switch Connection` to
change the active connection.
If ``index_or_alias`` is not given, the information of the current
connection is returned. If there exists more connections with the same alias
the keyword will return last opened connection that has that alias.
This keyword returns an object that has the following attributes:
| = Name = | = Type = | = Explanation = |
| index | integer | Number of the connection. Numbering starts from ``1``. |
| host | string | Destination hostname. |
| alias | string | An optional alias given when creating the connection. |
| port | integer | Destination port. |
| timeout | string | `Timeout` length in textual representation. |
| newline | string | The line break sequence used by `Write` keyword. See `newline`. |
| prompt | string | `Prompt` character sequence for `Read Until Prompt`. |
| term_type | string | Type of the virtual terminal. See `terminal settings`. |
| width | integer | Width of the virtual terminal. See `terminal settings`. |
| height | integer | Height of the virtual terminal. See `terminal settings`. |
| path_separator | string | The `path separator` used on the remote host. |
| encoding | string | The `encoding` used for inputs and outputs. |
If there is no connection, an object having ``index`` and ``host``
as ``None`` is returned, rest of its attributes having their values
as configuration defaults.
If you want the information for all the open connections, use
`Get Connections`.
Getting connection information of the current connection:
| `Open Connection` | far.server.com |
| `Open Connection` | near.server.com | prompt=>> | # Current connection |
| ${nearhost}= | `Get Connection` | |
| `Should Be Equal` | ${nearhost.host} | near.server.com |
| `Should Be Equal` | ${nearhost.index} | 2 |
| `Should Be Equal` | ${nearhost.prompt} | >> |
| `Should Be Equal` | ${nearhost.term_type} | vt100 | # From defaults |
Getting connection information using an index:
| `Open Connection` | far.server.com |
| `Open Connection` | near.server.com | # Current connection |
| ${farhost}= | `Get Connection` | 1 |
| `Should Be Equal` | ${farhost.host} | far.server.com |
Getting connection information using an alias:
| `Open Connection` | far.server.com | alias=far |
| `Open Connection` | near.server.com | # Current connection |
| ${farhost}= | `Get Connection` | far |
| `Should Be Equal` | ${farhost.host} | far.server.com |
| `Should Be Equal` | ${farhost.alias} | far |
This keyword can also return plain connection attributes instead of
the whole connection object. This can be adjusted using the boolean
arguments ``index``, ``host``, ``alias``, and so on, that correspond
to the attribute names of the object. If such arguments are given, and
they evaluate to true (see `Boolean arguments`), only the respective
connection attributes are returned. Note that attributes are always
returned in the same order arguments are specified in the signature.
| `Open Connection` | my.server.com | alias=example |
| ${host}= | `Get Connection` | host=True |
| `Should Be Equal` | ${host} | my.server.com |
| ${host} | ${alias}= | `Get Connection` | host=yes | alias=please |
| `Should Be Equal` | ${host} | my.server.com |
| `Should Be Equal` | ${alias} | example |
Getting only certain attributes is especially useful when using this
library via the Remote library interface. This interface does not
support returning custom objects, but individual attributes can be
returned just fine.
This keyword logs the connection information with log level ``INFO``.
"""
if not index_or_alias:
index_or_alias = self._connections.current_index
try:
config = self._connections.get_connection(index_or_alias).config
except RuntimeError:
config = SSHClient(None).config
except AttributeError:
config = SSHClient(None).config
self._log(str(config), self._config.loglevel)
return_values = tuple(self._get_config_values(config, index, host,
alias, port, timeout,
newline, prompt,
term_type, width, height,
encoding, escape_ansi))
if not return_values:
return config
if len(return_values) == 1:
return return_values[0]
return return_values
def _log(self, msg, level='INFO'):
level = self._active_loglevel(level)
if level != 'NONE':
msg = msg.strip()
if not msg:
return
if logger:
logger.write(msg, level)
else:
print('*%s* %s' % (level, msg))
def _active_loglevel(self, level):
if level is None:
return self._config.loglevel
if is_string(level) and \
level.upper() in ['TRACE', 'DEBUG', 'INFO', 'WARN', 'HTML', 'NONE']:
return level.upper()
raise AssertionError("Invalid log level '%s'." % level)
def _get_config_values(self, config, index, host, alias, port, timeout,
newline, prompt, term_type, width, height, encoding, escape_ansi):
if is_truthy(index):
yield config.index
if is_truthy(host):
yield config.host
if is_truthy(alias):
yield config.alias
if is_truthy(port):
yield config.port
if is_truthy(timeout):
yield config.timeout
if is_truthy(newline):
yield config.newline
if is_truthy(prompt):
yield config.prompt
if is_truthy(term_type):
yield config.term_type
if is_truthy(width):
yield config.width
if is_truthy(height):
yield config.height
if is_truthy(encoding):
yield config.encoding
if is_truthy(escape_ansi):
yield config.escape_ansi
def get_connections(self):
"""Returns information about all the open connections.
This keyword returns a list of objects that are identical to the ones
returned by `Get Connection`.
Example:
| `Open Connection` | near.server.com | timeout=10s |
| `Open Connection` | far.server.com | timeout=5s |
| ${nearhost} | ${farhost}= | `Get Connections` |
| `Should Be Equal` | ${nearhost.host} | near.server.com |
| `Should Be Equal As Integers` | ${nearhost.timeout} | 10 |
| `Should Be Equal As Integers` | ${farhost.port} | 22 |
| `Should Be Equal As Integers` | ${farhost.timeout} | 5 |
This keyword logs the information of connections with log level
``INFO``.
"""
configs = [c.config for c in self._connections._connections if c]
for c in configs:
self._log(str(c), self._config.loglevel)
return configs
def login(self, username=None, password=None, allow_agent=False, look_for_keys=False, delay='0.5 seconds',
proxy_cmd=None, read_config=False, jumphost_index_or_alias=None, keep_alive_interval='0 seconds'):
"""Logs into the SSH server with the given ``username`` and ``password``.
Connection must be opened before using this keyword.
This keyword reads, returns and logs the server output after logging
in. If the `prompt` is set, everything until the prompt is read.
Otherwise the output is read using the `Read` keyword with the given
``delay``. The output is logged using the default `log level`.
``proxy_cmd`` is used to connect through a SSH proxy.
``jumphost_index_or_alias`` is used to connect through an intermediary
SSH connection that has been assigned an Index or Alias. Note that
this requires a Connection that has been logged in prior to use.
*Note:* ``proxy_cmd`` and ``jumphost_index_or_alias`` are mutually
exclusive SSH features. If you wish to use them both, create the
jump-host's Connection using the proxy_cmd first, then use jump-host
for secondary Connection.
``allow_agent`` enables the connection to the SSH agent.
``look_for_keys`` enables the searching for discoverable private key files in ``~/.ssh/``.
``read_config`` reads or ignores entries from ``~/.ssh/config`` file. This parameter will read the hostname,
port number, username and proxy command.
``read_config`` is new in SSHLibrary 3.7.0.
``keep_alive_interval`` is used to specify after which idle interval of time a
``keepalive`` packet will be sent to remote host. By default ``keep_alive_interval`` is
set to ``0``, which means sending the ``keepalive`` packet is disabled.
``keep_alive_interval`` is new in SSHLibrary 3.7.0.
*Note:* ``allow_agent``, ``look_for_keys``, ``proxy_cmd``, ``jumphost_index_or_alias``,
``read_config`` and ``keep_alive_interval`` do not work when using Jython.
Example that logs in and returns the output:
| `Open Connection` | linux.server.com |
| ${output}= | `Login` | johndoe | secretpasswd |
| `Should Contain` | ${output} | Last login at |
Example that logs in and returns everything until the prompt:
| `Open Connection` | linux.server.com | prompt=$ |
| ${output}= | `Login` | johndoe | secretpasswd |
| `Should Contain` | ${output} | johndoe@linux:~$ |
Example that logs in a remote server (linux.server.com) through a proxy server (proxy.server.com)
| `Open Connection` | linux.server.com |
| ${output}= | `Login` | johndoe | secretpasswd | \
proxy_cmd=ssh -l user -i keyfile -W linux.server.com:22 proxy.server.com |
| `Should Contain` | ${output} | Last login at |
Example usage of SSH Agent:
First, add the key to the authentication agent with: ``ssh-add /path/to/keyfile``.
| `Open Connection` | linux.server.com |
| `Login` | johndoe | allow_agent=True |
"""
jumphost_connection_conf = self.get_connection(index_or_alias=jumphost_index_or_alias) \
if jumphost_index_or_alias else None
jumphost_connection = self._connections.connections[jumphost_connection_conf.index-1] \
if jumphost_connection_conf and jumphost_connection_conf.index else None
return self._login(self.current.login, username, password, is_truthy(allow_agent),
is_truthy(look_for_keys), delay, proxy_cmd, is_truthy(read_config),
jumphost_connection, keep_alive_interval)
def login_with_public_key(self, username=None, keyfile=None, password='',
allow_agent=False, look_for_keys=False,
delay='0.5 seconds', proxy_cmd=None,
jumphost_index_or_alias=None,
read_config=False, keep_alive_interval='0 seconds'):
"""Logs into the SSH server using key-based authentication.
Connection must be opened before using this keyword.
``username`` is the username on the remote machine.
``keyfile`` is a path to a valid OpenSSH private key file on the local
filesystem.
``password`` is used to unlock the ``keyfile`` if needed. If the keyfile is
invalid a username-password authentication will be attempted.
``proxy_cmd`` is used to connect through a SSH proxy.
``jumphost_index_or_alias`` is used to connect through an intermediary
SSH connection that has been assigned an Index or Alias. Note that
this requires a Connection that has been logged in prior to use.
*Note:* ``proxy_cmd`` and ``jumphost_index_or_alias`` are mutually
exclusive SSH features. If you wish to use them both, create the
jump-host's Connection using the proxy_cmd first, then use jump-host
for secondary Connection.
This keyword reads, returns and logs the server output after logging
in. If the `prompt` is set, everything until the prompt is read.
Otherwise the output is read using the `Read` keyword with the given
``delay``. The output is logged using the default `log level`.
Example that logs in using a private key and returns the output:
| `Open Connection` | linux.server.com |
| ${output}= | `Login With Public Key` | johndoe | /home/johndoe/.ssh/id_rsa |
| `Should Contain` | ${motd} | Last login at |
With locked private keys, the keyring ``password`` is required:
| `Open Connection` | linux.server.com |
| `Login With Public Key` | johndoe | /home/johndoe/.ssh/id_dsa | keyringpasswd |
``allow_agent`` enables the connection to the SSH agent.
``look_for_keys`` enables the searching for discoverable private key
files in ``~/.ssh/``.
``read_config`` reads or ignores entries from ``~/.ssh/config`` file. This parameter will read the hostname,
port number, username, identity file and proxy command.
``read_config`` is new in SSHLibrary 3.7.0.
``keep_alive_interval`` is used to specify after which idle interval of time a
``keepalive`` packet will be sent to remote host. By default ``keep_alive_interval`` is
set to ``0``, which means sending the ``keepalive`` packet is disabled.
``keep_alive_interval`` is new in SSHLibrary 3.7.0.
*Note:* ``allow_agent``, ``look_for_keys``, ``proxy_cmd``, ``jumphost_index_or_alias``,
``read_config`` and ``keep_alive_interval`` do not work when using Jython.
"""
if proxy_cmd and jumphost_index_or_alias:
raise ValueError("`proxy_cmd` and `jumphost_connection` are mutually exclusive SSH features.")
jumphost_connection_conf = self.get_connection(index_or_alias=jumphost_index_or_alias) if jumphost_index_or_alias else None
jumphost_connection = self._connections.connections[jumphost_connection_conf.index-1] if jumphost_connection_conf and jumphost_connection_conf.index else None
return self._login(self.current.login_with_public_key, username,
keyfile, password, is_truthy(allow_agent),
is_truthy(look_for_keys), delay, proxy_cmd,
jumphost_connection, is_truthy(read_config), keep_alive_interval)
def _login(self, login_method, username, *args):
self._log("Logging into '%s:%s' as '%s'."
% (self.current.config.host, self.current.config.port,
username), self._config.loglevel)
try:
login_output = login_method(username, *args)
if is_truthy(self.current.config.escape_ansi):
login_output = self._escape_ansi_sequences(login_output)
self._log('Read output: %s' % login_output, self._config.loglevel)
return login_output
except SSHClientException as e:
raise RuntimeError(e)
def get_pre_login_banner(self, host=None, port=22):
"""Returns the banner supplied by the server upon connect.
There are 2 ways of getting banner information.
1. Independent of any connection:
| ${banner} = | `Get Pre Login Banner` | ${HOST} |
| `Should Be Equal` | ${banner} | Testing pre-login banner |
The argument ``host`` is mandatory for getting banner key without
an open connection.
2. From the current connection:
| `Open Connection` | ${HOST} | prompt=${PROMPT} |
| `Login` | ${USERNAME} | ${PASSWORD} |
| ${banner} = | `Get Pre Login Banner` |
| `Should Be Equal` | ${banner} | Testing pre-login banner |
New in SSHLibrary 3.0.0.
*Note:* This keyword does not work when using Jython.
"""
if host:
banner = SSHClient.get_banner_without_login(host, port)
elif self.current:
banner = self.current.get_banner()
else:
raise RuntimeError("'host' argument is mandatory if there is no open connection.")
return banner.decode(self.DEFAULT_ENCODING)
def execute_command(self, command, return_stdout=True, return_stderr=False,
return_rc=False, sudo=False, sudo_password=None, timeout=None, output_during_execution=False,
output_if_timeout=False, invoke_subsystem=False, forward_agent=False):
"""Executes ``command`` on the remote machine and returns its outputs.
This keyword executes the ``command`` and returns after the execution
has been finished. Use `Start Command` if the command should be
started in the background.
By default, only the standard output is returned:
| ${stdout}= | `Execute Command` | echo 'Hello John!' |
| `Should Contain` | ${stdout} | Hello John! |
Arguments ``return_stdout``, ``return_stderr`` and ``return_rc`` are
used to specify, what is returned by this keyword.
If several arguments evaluate to a true value (see `Boolean arguments`),
multiple values are returned.
If errors are needed as well, set the respective argument value to
true:
| ${stdout} | ${stderr}= | `Execute Command` | echo 'Hello John!' | return_stderr=True |
| `Should Be Empty` | ${stderr} |
Often checking the return code is enough:
| ${rc}= | `Execute Command` | echo 'Hello John!' | return_stdout=False | return_rc=True |
| `Should Be Equal As Integers` | ${rc} | 0 | # succeeded |
Arguments ``sudo`` and ``sudo_password`` are used for executing
commands within a sudo session. Due to different permission elevation
in Cygwin, these two arguments will not work when using it.
| `Execute Command` | pwd | sudo=True | sudo_password=test |
The ``command`` is always executed in a new shell. Thus possible
changes to the environment (e.g. changing working directory) are not
visible to the later keywords:
| ${pwd}= | `Execute Command` | pwd |
| `Should Be Equal` | ${pwd} | /home/johndoe |
| `Execute Command` | cd /tmp |
| ${pwd}= | `Execute Command` | pwd |
| `Should Be Equal` | ${pwd} | /home/johndoe |
`Write` and `Read` can be used for running multiple commands in the
same shell. See `interactive shells` section for more information.
This keyword logs the executed command and its exit status with
log level ``INFO``.
If the `timeout` expires before the command is executed, this keyword fails.
``invoke_subsystem`` will request a subsystem on the server, given by the
``command`` argument. If the server allows it, the channel will then be
directly connected to the requested subsystem.
``forward_agent`` determines whether to forward the local SSH Agent process to the process being executed.
This assumes that there is an agent in use (i.e. `eval $(ssh-agent)`). Setting ``forward_agent`` does not
work with Jython.
| `Execute Command` | ssh-add -L | forward_agent=True |
``invoke_subsystem`` and ``forward_agent`` are new in SSHLibrary 3.4.0.
``output_during_execution`` enable logging the output of the command as it is generated, into the console.
``output_if_timeout`` if the executed command doesn't end before reaching timeout, the parameter will log the
output of the command at the moment of timeout.
``output_during_execution`` and ``output_if_timeout`` are not working with Jython. New in SSHLibrary 3.5.0.
"""
if not is_truthy(sudo):
self._log("Executing command '%s'." % command, self._config.loglevel)
else:
self._log("Executing command 'sudo %s'." % command, self._config.loglevel)
opts = self._legacy_output_options(return_stdout, return_stderr,
return_rc)
stdout, stderr, rc = self.current.execute_command(command, sudo, sudo_password,
timeout, output_during_execution, output_if_timeout,
is_truthy(invoke_subsystem), forward_agent)
return self._return_command_output(stdout, stderr, rc, *opts)
def start_command(self, command, sudo=False, sudo_password=None, invoke_subsystem=False, forward_agent=False):
"""Starts execution of the ``command`` on the remote machine and returns immediately.
This keyword returns nothing and does not wait for the ``command``
execution to be finished. If waiting for the output is required,
use `Execute Command` instead.
This keyword does not return any output generated by the started
``command``. Use `Read Command Output` to read the output:
| `Start Command` | echo 'Hello John!' |
| ${stdout}= | `Read Command Output` |
| `Should Contain` | ${stdout} | Hello John! |
The ``command`` is always executed in a new shell, similarly as with
`Execute Command`. Thus possible changes to the environment (e.g.
changing working directory) are not visible to the later keywords:
| `Start Command` | pwd |
| ${pwd}= | `Read Command Output` |
| `Should Be Equal` | ${pwd} | /home/johndoe |
| `Start Command` | cd /tmp |
| `Start Command` | pwd |
| ${pwd}= | `Read Command Output` |
| `Should Be Equal` | ${pwd} | /home/johndoe |
Arguments ``sudo`` and ``sudo_password`` are used for executing
commands within a sudo session. Due to different permission elevation
in Cygwin, these two arguments will not when using it.
| `Start Command` | pwd | sudo=True | sudo_password=test |
`Write` and `Read` can be used for running multiple commands in the
same shell. See `interactive shells` section for more information.
This keyword logs the started command with log level ``INFO``.
``invoke_subsystem`` argument behaves similarly as with `Execute Command` keyword.
``forward_agent`` argument behaves similarly as with `Execute Command` keyword.
``invoke_subsystem`` is new in SSHLibrary 3.4.0.
"""
if not is_truthy(sudo):
self._log("Starting command '%s'." % command, self._config.loglevel)
else:
self._log("Starting command 'sudo %s'." % command, self._config.loglevel)
if self.current.config.index not in self._last_commands.keys():
self._last_commands[self.current.config.index] = command
else:
temp_dict = {self.current.config.index: command}
self._last_commands.update(temp_dict)
self.current.start_command(command, sudo, sudo_password, is_truthy(invoke_subsystem), is_truthy(forward_agent))
def read_command_output(self, return_stdout=True, return_stderr=False,
return_rc=False, timeout=None):
"""Returns outputs of the most recent started command.
At least one command must have been started using `Start Command`
before this keyword can be used.
By default, only the standard output of the started command is
returned:
| `Start Command` | echo 'Hello John!' |
| ${stdout}= | `Read Command Output` |
| `Should Contain` | ${stdout} | Hello John! |
Arguments ``return_stdout``, ``return_stderr`` and ``return_rc`` are
used to specify, what is returned by this keyword.
If several arguments evaluate to a true value (see `Boolean arguments`),
multiple values are returned.
If errors are needed as well, set the argument value to true:
| `Start Command` | echo 'Hello John!' |
| ${stdout} | ${stderr}= | `Read Command Output` | return_stderr=True |
| `Should Be Empty` | ${stderr} |
Often checking the return code is enough:
| `Start Command` | echo 'Hello John!' |
| ${rc}= | `Read Command Output` | return_stdout=False | return_rc=True |
| `Should Be Equal As Integers` | ${rc} | 0 | # succeeded |
Using `Start Command` and `Read Command Output` follows
LIFO (last in, first out) policy, meaning that `Read Command Output`
operates on the most recent started command, after which that command
is discarded and its output cannot be read again.
If several commands have been started, the output of the last started
command is returned. After that, a subsequent call will return the
output of the new last (originally the second last) command:
| `Start Command` | echo 'HELLO' |
| `Start Command` | echo 'SECOND' |
| ${stdout}= | `Read Command Output` |
| `Should Contain` | ${stdout} | 'SECOND' |
| ${stdout}= | `Read Command Output` |
| `Should Contain` | ${stdout} | 'HELLO' |
This keyword logs the read command with log level ``INFO``.
"""
self._log("Reading output of command '%s'." % self._last_commands.get(self.current.config.index), self._config.loglevel)
opts = self._legacy_output_options(return_stdout, return_stderr,
return_rc)
try:
stdout, stderr, rc = self.current.read_command_output(timeout=timeout)
except SSHClientException as msg:
raise RuntimeError(msg)
return self._return_command_output(stdout, stderr, rc, *opts)
def create_local_ssh_tunnel(self, local_port, remote_host, remote_port=22, bind_address=None):
"""
The keyword uses the existing connection to set up local port forwarding
(the openssh -L option) from a local port through a tunneled
connection to a destination reachable from the SSH server machine.
The example below illustrates the forwarding from the local machine, of
the connection on port 80 of an inaccessible server (secure.server.com)
by connecting to a remote SSH server (remote.server.com) that has access
to the secure server, and makes it available locally, on the port 9191:
| `Open Connection` | remote.server.com | prompt=$ |
| `Login` | johndoe | secretpasswd |
| `Create Local SSH Tunnel` | 9191 | secure.server.com | 80 |
The tunnel is active as long as the connection is open.
The default ``remote_port`` is 22.
By default, anyone can connect on the specified port on the SSH client
because the local machine listens on all interfaces. Access can be
restricted by specifying a ``bind_address``. Setting ``bind_address``
does not work with Jython.
Example:
| `Create Local SSH Tunnel` | 9191 | secure.server.com | 80 | bind_address=127.0.0.1 |
``bind_address`` is new in SSHLibrary 3.3.0.
"""
self.current.create_local_ssh_tunnel(local_port, remote_host, remote_port, bind_address)
def _legacy_output_options(self, stdout, stderr, rc):
if not is_string(stdout):
return stdout, stderr, rc
stdout = stdout.lower()
if stdout == 'stderr':
return False, True, rc
if stdout == 'both':
return True, True, rc
return stdout, stderr, rc
def _return_command_output(self, stdout, stderr, rc, return_stdout,
return_stderr, return_rc):
self._log("Command exited with return code %d." % rc, self._config.loglevel)
ret = []
if is_truthy(return_stdout):
ret.append(stdout.rstrip('\n'))
if is_truthy(return_stderr):
ret.append(stderr.rstrip('\n'))
if is_truthy(return_rc):
ret.append(rc)
if len(ret) == 1:
return ret[0]
return ret
def write(self, text, loglevel=None):
"""Writes the given ``text`` on the remote machine and appends a newline.
Appended `newline` can be configured.
This keyword returns and consumes the written ``text``
(including the appended newline) from the server output. See the
`Interactive shells` section for more information.
The written ``text`` is logged. ``loglevel`` can be used to override
the default `log level`.
Example:
| ${written}= | `Write` | su |
| `Should Contain` | ${written} | su | # Returns the consumed output |
| ${output}= | `Read` |
| `Should Not Contain` | ${output} | ${written} | # Was consumed from the output |
| `Should Contain` | ${output} | Password: |
| `Write` | invalidpasswd |
| ${output}= | `Read` |
| `Should Contain` | ${output} | su: Authentication failure |
See also `Write Bare`.
"""
self._write(text, add_newline=True)
return self._read_and_log(loglevel, self.current.read_until_newline)
def write_bare(self, text):
"""Writes the given ``text`` on the remote machine without appending a newline.
Unlike `Write`, this keyword returns and consumes nothing. See the
`Interactive shells` section for more information.
Example:
| `Write Bare` | su\\n |
| ${output}= | `Read` |
| `Should Contain` | ${output} | su | # Was not consumed from output |
| `Should Contain` | ${output} | Password: |
| `Write Bare` | invalidpasswd\\n |
| ${output}= | `Read` |
| `Should Contain` | ${output} | su: Authentication failure |
See also `Write`.
"""
self._write(text)
def _write(self, text, add_newline=False):
try:
self.current.write(text, is_truthy(add_newline))
except SSHClientException as e:
raise RuntimeError(e)
def write_until_expected_output(self, text, expected, timeout,
retry_interval, loglevel=None):
"""Writes the given ``text`` repeatedly until ``expected`` appears in the server output.
This keyword returns nothing.
``text`` is written without appending a newline and is consumed from
the server output before ``expected`` is read. See more information
on the `Interactive shells` section.
If ``expected`` does not appear in output within ``timeout``, this
keyword fails. ``retry_interval`` defines the time before writing
``text`` again. Both ``timeout`` and ``retry_interval`` must be given
in Robot Framework's `time format`.
The written ``text`` is logged. ``loglevel`` can be used to override
the default `log level`.
This example will write ``lsof -c python27\\n`` (list all files
currently opened by Python 2.7), until ``myscript.py`` appears in the
output. The command is written every 0.5 seconds. The keyword fails if
``myscript.py`` does not appear in the server output in 5 seconds:
| `Write Until Expected Output` | lsof -c python27\\n | expected=myscript.py | timeout=5s | retry_interval=0.5s |
"""
self._read_and_log(loglevel, self.current.write_until_expected, text,
expected, timeout, retry_interval)
def read(self, loglevel=None, delay=None):
"""Consumes and returns everything available on the server output.
If ``delay`` is given, this keyword waits that amount of time and
reads output again. This wait-read cycle is repeated as long as
further reads return more output or the default `timeout` expires.
``delay`` must be given in Robot Framework's `time format`.
This keyword is most useful for reading everything from
the server output, thus clearing it.
The read output is logged. ``loglevel`` can be used to override
the default `log level`.
Example:
| `Open Connection` | my.server.com |
| `Login` | johndoe | secretpasswd |
| `Write` | sudo su - | |
| ${output}= | `Read` | delay=0.5s |
| `Should Contain` | ${output} | [sudo] password for johndoe: |
| `Write` | secretpasswd | |
| ${output}= | `Read` | loglevel=WARN | # Shown in the console due to loglevel |
| `Should Contain` | ${output} | root@ |
See `interactive shells` for more information about writing and
reading in general.
"""
return self._read_and_log(loglevel, self.current.read, delay)
def read_until(self, expected, loglevel=None):
"""Consumes and returns the server output until ``expected`` is encountered.
Text up until and including the ``expected`` will be returned.
If the `timeout` expires before the match is found, this keyword fails.
The read output is logged. ``loglevel`` can be used to override
the default `log level`.
Example:
| `Open Connection` | my.server.com |
| `Login` | johndoe | ${PASSWORD} |
| `Write` | sudo su - | |
| ${output}= | `Read Until` | : |
| `Should Contain` | ${output} | [sudo] password for johndoe: |
| `Write` | ${PASSWORD} | |
| ${output}= | `Read Until` | @ |
| `Should End With` | ${output} | root@ |
See also `Read Until Prompt` and `Read Until Regexp` keywords. For
more details about reading and writing in general, see the
`Interactive shells` section.
"""
return self._read_and_log(loglevel, self.current.read_until, expected)
def read_until_prompt(self, loglevel=None, strip_prompt=False):
"""Consumes and returns the server output until the prompt is found.
Text up and until prompt is returned. The `prompt` must be set before
this keyword is used.
If the `timeout` expires before the match is found, this keyword fails.
This keyword is useful for reading output of a single command when
output of previous command has been read and that command does not
produce prompt characters in its output.
The read output is logged. ``loglevel`` can be used to override
the default `log level`.
Example:
| `Open Connection` | my.server.com | prompt=$ |
| `Login` | johndoe | ${PASSWORD} |
| `Write` | sudo su - | |
| `Write` | ${PASSWORD} | |
| `Set Client Configuration` | prompt=# | # For root, the prompt is # |
| ${output}= | `Read Until Prompt` | |
| `Should End With` | ${output} | root@myserver:~# |
See also `Read Until` and `Read Until Regexp` keywords. For more
details about reading and writing in general, see the `Interactive
shells` section.
If you want to exclude the prompt from the returned output, set ``strip_prompt``
to a true value (see `Boolean arguments`). If your prompt is a regular expression,
make sure that the expression spans the whole prompt, because only the part of the
output that matches the regular expression is stripped away.
``strip_prompt`` argument is new in SSHLibrary 3.2.0.
"""
return self._read_and_log(loglevel, self.current.read_until_prompt, is_truthy(strip_prompt))
def read_until_regexp(self, regexp, loglevel=None):
"""Consumes and returns the server output until a match to ``regexp`` is found.
``regexp`` can be a regular expression pattern or a compiled regular
expression object. See the `Regular expressions` section for more
details about the syntax.
Text up until and including the ``regexp`` will be returned.
If the `timeout` expires before the match is found, this keyword fails.
The read output is logged. ``loglevel`` can be used to override
the default `log level`.
Example:
| `Open Connection` | my.server.com |
| `Login` | johndoe | ${PASSWORD} |
| `Write` | sudo su - | |
| ${output}= | `Read Until Regexp` | \\\\[.*\\\\].*: |
| `Should Contain` | ${output} | [sudo] password for johndoe: |
| `Write` | ${PASSWORD} | |
| ${output}= | `Read Until Regexp` | .*@ |
| `Should Contain` | ${output} | root@ |
See also `Read Until` and `Read Until Prompt` keywords. For more
details about reading and writing in general, see the `Interactive
shells` section.
"""
return self._read_and_log(loglevel, self.current.read_until_regexp,
regexp)
def _read_and_log(self, loglevel, reader, *args):
try:
output = reader(*args)
except SSHClientException as e:
if is_truthy(self.current.config.escape_ansi):
message = self._escape_ansi_sequences(e.args[0])
raise RuntimeError(message)
raise RuntimeError(e)
if is_truthy(self.current.config.escape_ansi):
output = self._escape_ansi_sequences(output)
self._log(output, loglevel)
return output
@staticmethod
def _escape_ansi_sequences(output):
ansi_escape = re.compile(r'(?:\x1B[@-_]|[\x80-\x9F])[0-?]*[ -/]*[@-~]', flags=re.IGNORECASE)
output = ansi_escape.sub('', output)
return ("%r" % output)[1:-1].encode().decode('unicode-escape')
def get_file(self, source, destination='.', scp='OFF', scp_preserve_times=False):
"""Downloads file(s) from the remote machine to the local machine.
``source`` is a path on the remote machine. Both absolute paths and
paths relative to the current working directory are supported.
If the source contains wildcards explained in `glob patterns`,
all files matching it are downloaded. In this case ``destination``
must always be a directory.
``destination`` is the target path on the local machine. Both
absolute paths and paths relative to the current working directory
are supported.
``scp`` enables the use of scp (secure copy protocol) for
the file transfer. See `Transfer files with SCP` for more details.
``scp_preserve_times`` preserve modification time and access time
of transferred files and directories. It is ignored when running with Jython.
Examples:
| `Get File` | /var/log/auth.log | /tmp/ |
| `Get File` | /tmp/example.txt | C:\\\\temp\\\\new_name.txt |
| `Get File` | /path/to/*.txt |
The local ``destination`` is created using the rules explained below:
1. If the ``destination`` is an existing file, the ``source`` file is
downloaded over it.
2. If the ``destination`` is an existing directory, the ``source``
file is downloaded into it. Possible file with the same name is
overwritten.
3. If the ``destination`` does not exist and it ends with the path
separator of the local operating system, it is considered a
directory. The directory is then created and the ``source`` file
is downloaded into it. Possible missing intermediate directories
are also created.
4. If the ``destination`` does not exist and does not end with the
local path separator, it is considered a file. The ``source`` file
is downloaded and saved using that file name, and possible missing
intermediate directories are also created.
5. If ``destination`` is not given, the current working directory on
the local machine is used as the destination. This is typically
the directory where the test execution was started and thus
accessible using built-in ``${EXECDIR}`` variable.
See also `Get Directory`.
``scp_preserve_times`` is new in SSHLibrary 3.6.0.
"""
return self._run_command(self.current.get_file, source,
destination, scp, scp_preserve_times)
def get_directory(self, source, destination='.', recursive=False,
scp='OFF', scp_preserve_times=False):
"""Downloads a directory, including its content, from the remote machine to the local machine.
``source`` is a path on the remote machine. Both absolute paths and
paths relative to the current working directory are supported.
``destination`` is the target path on the local machine. Both
absolute paths and paths relative to the current working directory
are supported.
``recursive`` specifies whether to recursively download all
subdirectories inside ``source``. Subdirectories are downloaded if
the argument value evaluates to true (see `Boolean arguments`).
``scp`` enables the use of scp (secure copy protocol) for
the file transfer. See `Transfer files with SCP` for more details.
``scp_preserve_times`` preserve modification time and access time
of transferred files and directories. It is ignored when running with Jython.
Examples:
| `Get Directory` | /var/logs | /tmp |
| `Get Directory` | /var/logs | /tmp/non/existing |
| `Get Directory` | /var/logs |
| `Get Directory` | /var/logs | recursive=True |
The local ``destination`` is created as following:
1. If ``destination`` is an existing path on the local machine,
``source`` directory is downloaded into it.
2. If ``destination`` does not exist on the local machine, it is
created and the content of ``source`` directory is downloaded
into it.
3. If ``destination`` is not given, ``source`` directory is
downloaded into the current working directory on the local
<machine. This is typically the directory where the test execution
was started and thus accessible using the built-in ``${EXECDIR}``
variable.
See also `Get File`.
``scp_preserve_times`` is new in SSHLibrary 3.6.0.
"""
return self._run_command(self.current.get_directory, source,
destination, is_truthy(recursive), scp, scp_preserve_times)
def put_file(self, source, destination='.', mode='0744', newline='',
scp='OFF', scp_preserve_times=False):
"""Uploads file(s) from the local machine to the remote machine.
``source`` is the path on the local machine. Both absolute paths and
paths relative to the current working directory are supported.
If the source contains wildcards explained in `glob patterns`,
all files matching it are uploaded. In this case ``destination``
must always be a directory.
``destination`` is the target path on the remote machine. Both
absolute paths and paths relative to the current working directory
are supported.
``mode`` can be used to set the target file permission.
Numeric values are accepted. The default value is ``0744``
(``-rwxr--r--``). If None value is provided, setting modes
will be skipped.
``newline`` can be used to force the line break characters that are
written to the remote files. Valid values are ``LF`` and ``CRLF``.
Does not work if ``scp`` is enabled.
``scp`` enables the use of scp (secure copy protocol) for
the file transfer. See `Transfer files with SCP` for more details.
``scp_preserve_times`` preserve modification time and access time
of transferred files and directories. It is ignored when running with Jython.
Examples:
| `Put File` | /path/to/*.txt |
| `Put File` | /path/to/*.txt | /home/groups/robot | mode=0770 |
| `Put File` | /path/to/*.txt | /home/groups/robot | mode=None |
| `Put File` | /path/to/*.txt | newline=CRLF |
The remote ``destination`` is created as following:
1. If ``destination`` is an existing file, ``source`` file is uploaded
over it.
2. If ``destination`` is an existing directory, ``source`` file is
uploaded into it. Possible file with same name is overwritten.
3. If ``destination`` does not exist and it ends with the
`path separator`, it is considered a directory. The directory is
then created and ``source`` file uploaded into it.
Possibly missing intermediate directories are also created.
4. If ``destination`` does not exist and it does not end with
the `path separator`, it is considered a file.
If the path to the file does not exist, it is created.
5. If ``destination`` is not given, the user's home directory
on the remote machine is used as the destination.
See also `Put Directory`.
``scp_preserve_times`` is new in SSHLibrary 3.6.0.
"""
return self._run_command(self.current.put_file, source,
destination, mode, newline, scp, scp_preserve_times)
def put_directory(self, source, destination='.', mode='0744', newline='',
recursive=False, scp='OFF', scp_preserve_times=False):
"""Uploads a directory, including its content, from the local machine to the remote machine.
``source`` is the path on the local machine. Both absolute paths and
paths relative to the current working directory are supported.
``destination`` is the target path on the remote machine. Both
absolute paths and paths relative to the current working directory
are supported.
``mode`` can be used to set the target file permission.
Numeric values are accepted. The default value is ``0744``
(``-rwxr--r--``).
``newline`` can be used to force the line break characters that are
written to the remote files. Valid values are ``LF`` and ``CRLF``.
Does not work if ``scp`` is enabled.
``recursive`` specifies whether to recursively upload all
subdirectories inside ``source``. Subdirectories are uploaded if the
argument value evaluates to true (see `Boolean arguments`).
``scp`` enables the use of scp (secure copy protocol) for
the file transfer. See `Transfer files with SCP` for more details.
``scp_preserve_times`` preserve modification time and access time
of transferred files and directories. It is ignored when running with Jython.
Examples:
| `Put Directory` | /var/logs | /tmp |
| `Put Directory` | /var/logs | /tmp/non/existing |
| `Put Directory` | /var/logs |
| `Put Directory` | /var/logs | recursive=True |
| `Put Directory` | /var/logs | /home/groups/robot | mode=0770 |
| `Put Directory` | /var/logs | newline=CRLF |Get File With SCP (transfer) And Preserve Time
The remote ``destination`` is created as following:
1. If ``destination`` is an existing path on the remote machine,
``source`` directory is uploaded into it.
2. If ``destination`` does not exist on the remote machine, it is
created and the content of ``source`` directory is uploaded into
it.
3. If ``destination`` is not given, ``source`` directory is typically
uploaded to user's home directory on the remote machine.
See also `Put File`.
``scp_preserve_times`` is new in SSHLibrary 3.6.0.
"""
return self._run_command(self.current.put_directory, source,
destination, mode, newline,
is_truthy(recursive), scp, scp_preserve_times)
def _run_command(self, command, *args):
try:
files = command(*args)
except SSHClientException as e:
raise RuntimeError(e)
if files:
for src, dst in files:
self._log("'%s' -> '%s'" % (src, dst), self._config.loglevel)
def file_should_exist(self, path):
"""Fails if the given ``path`` does NOT point to an existing file.
Supports wildcard expansions described in `glob patterns`.
Example:
| `File Should Exist` | /boot/initrd.img |
| `File Should Exist` | /boot/*.img |
Note that symlinks are followed:
| `File Should Exist` | /initrd.img | # Points to /boot/initrd.img |
"""
if not self.current.is_file(path):
raise AssertionError("File '%s' does not exist." % path)
def file_should_not_exist(self, path):
"""Fails if the given ``path`` points to an existing file.
Supports wildcard expansions described in `glob patterns`.
Example:
| `File Should Not Exist` | /non/existing |
| `File Should Not Exist` | /non/* |
Note that this keyword follows symlinks. Thus the example fails if
``/non/existing`` is a link that points an existing file.
"""
if self.current.is_file(path):
raise AssertionError("File '%s' exists." % path)
def directory_should_exist(self, path):
"""Fails if the given ``path`` does not point to an existing directory.
Supports wildcard expansions described in `glob patterns`, but only on the current directory.
Example:
| `Directory Should Exist` | /usr/share/man |
| `Directory Should Exist` | /usr/share/* |
Note that symlinks are followed:
| `Directory Should Exist` | /usr/local/man | # Points to /usr/share/man/ |
"""
if not self.current.is_dir(path):
raise AssertionError("Directory '%s' does not exist." % path)
def directory_should_not_exist(self, path):
"""Fails if the given ``path`` points to an existing directory.
Supports wildcard expansions described in `glob patterns`, but only on the current directory.
Example:
| `Directory Should Not Exist` | /non/existing |
| `Directory Should Not Exist` | /non/* |
Note that this keyword follows symlinks. Thus the example fails if
``/non/existing`` is a link that points to an existing directory.
"""
if self.current.is_dir(path):
raise AssertionError("Directory '%s' exists." % path)
def list_directory(self, path, pattern=None, absolute=False):
"""Returns and logs items in the remote ``path``, optionally filtered with ``pattern``.
``path`` is a path on the remote machine. Both absolute paths and
paths relative to the current working directory are supported.
If ``path`` is a symlink, it is followed.
Item names are returned in case-sensitive alphabetical order,
e.g. ``['A Name', 'Second', 'a lower case name', 'one more']``.
Implicit directories ``.`` and ``..`` are not returned. The returned
items are automatically logged.
By default, the item names are returned relative to the given
remote path (e.g. ``file.txt``). If you want them be returned in the
absolute format (e.g. ``/home/johndoe/file.txt``), set the
``absolute`` argument to any non-empty string.
If ``pattern`` is given, only items matching it are returned. The
pattern is a glob pattern and its syntax is explained in the
`Pattern matching` section.
Examples (using also other `List Directory` variants):
| @{items}= | `List Directory` | /home/johndoe |
| @{files}= | `List Files In Directory` | /tmp | *.txt | absolute=True |
If you are only interested in directories or files,
use `List Files In Directory` or `List Directories In Directory`,
respectively.
"""
try:
items = self.current.list_dir(path, pattern, is_truthy(absolute))
except SSHClientException as msg:
raise RuntimeError(msg)
self._log('%d item%s:\n%s' % (len(items), plural_or_not(items),
'\n'.join(items)), self._config.loglevel)
return items
def list_files_in_directory(self, path, pattern=None, absolute=False):
"""A wrapper for `List Directory` that returns only files."""
absolute = is_truthy(absolute)
try:
files = self.current.list_files_in_dir(path, pattern, absolute)
except SSHClientException as msg:
raise RuntimeError(msg)
files = self.current.list_files_in_dir(path, pattern, absolute)
self._log('%d file%s:\n%s' % (len(files), plural_or_not(files),
'\n'.join(files)), self._config.loglevel)
return files
def list_directories_in_directory(self, path, pattern=None, absolute=False):
"""A wrapper for `List Directory` that returns only directories."""
try:
dirs = self.current.list_dirs_in_dir(path, pattern, is_truthy(absolute))
except SSHClientException as msg:
raise RuntimeError(msg)
self._log('%d director%s:\n%s' % (len(dirs),
'y' if len(dirs) == 1 else 'ies',
'\n'.join(dirs)), self._config.loglevel)
return dirs
class _DefaultConfiguration(Configuration):
def __init__(self, timeout, newline, prompt, loglevel, term_type, width,
height, path_separator, encoding, escape_ansi, encoding_errors):
super(_DefaultConfiguration, self).__init__(
timeout=TimeEntry(timeout),
newline=NewlineEntry(newline),
prompt=StringEntry(prompt),
loglevel=LogLevelEntry(loglevel),
term_type=StringEntry(term_type),
width=IntegerEntry(width),
height=IntegerEntry(height),
path_separator=StringEntry(path_separator),
encoding=StringEntry(encoding),
escape_ansi=StringEntry(escape_ansi),
encoding_errors=StringEntry(encoding_errors)
) | /robotframework-sshlibrary-3.8.1rc1.tar.gz/robotframework-sshlibrary-3.8.1rc1/src/SSHLibrary/library.py | 0.828904 | 0.401306 | library.py | pypi |
from .utils import is_bytes, secs_to_timestr, timestr_to_secs
class ConfigurationException(Exception):
"""Raised when creating, updating or accessing a Configuration entry fails.
"""
pass
class Configuration(object):
"""A simple configuration class.
Configuration is defined with keyword arguments, in which the value must
be an instance of :py:class:`Entry`. Different subclasses of `Entry` can
be used to handle common types and conversions.
Example::
cfg = Configuration(name=StringEntry('initial'),
age=IntegerEntry('42'))
assert cfg.name == initial
assert cfg.age == 42
cfg.update(name='John Doe')
assert cfg.name == 'John Doe'
"""
def __init__(self, **entries):
self._config = entries
def __str__(self):
return '\n'.join('%s=%s' % (k, v) for k, v in self._config.items())
def update(self, **entries):
"""Update configuration entries.
:param entries: entries to be updated, keyword argument names must
match existing entry names. If any value in `**entries` is None,
the corresponding entry is *not* updated.
See `__init__` for an example.
"""
for name, value in entries.items():
if value is not None:
self._config[name].set(value)
def get(self, name):
"""Return entry corresponding to name."""
return self._config[name]
def __getattr__(self, name):
if name in self._config:
return self._config[name].value
msg = "Configuration parameter '%s' is not defined." % name
raise ConfigurationException(msg)
class Entry(object):
"""A base class for values stored in :py:class:`Configuration`.
:param:`initial` the initial value of this entry.
"""
def __init__(self, initial=None):
self._value = self._create_value(initial)
def __str__(self):
return str(self._value)
@property
def value(self):
return self._value
def set(self, value):
self._value = self._parse_value(value)
def _parse_value(self, value):
raise NotImplementedError
def _create_value(self, value):
if value is None:
return None
return self._parse_value(value)
class StringEntry(Entry):
"""String value to be stored in :py:class:`Configuration`."""
def _parse_value(self, value):
return str(value)
class IntegerEntry(Entry):
"""Integer value to be stored in stored in :py:class:`Configuration`.
Given value is converted to string using `int()`.
"""
def _parse_value(self, value):
return int(value)
class TimeEntry(Entry):
"""Time string to be stored in :py:class:`Configuration`.
Given time string will be converted to seconds using
:py:func:`robot.utils.timestr_to_secs`.
"""
def _parse_value(self, value):
value = str(value)
return timestr_to_secs(value) if value else None
def __str__(self):
return secs_to_timestr(self._value)
class LogLevelEntry(Entry):
"""Log level to be stored in :py:class:`Configuration`.
Given string must be one of 'TRACE', 'DEBUG', 'INFO', 'WARN' or 'NONE' case
insensitively.
"""
LEVELS = ('TRACE', 'DEBUG', 'INFO', 'WARN', 'NONE')
def _parse_value(self, value):
value = str(value).upper()
if value not in self.LEVELS:
raise ConfigurationException("Invalid log level '%s'." % value)
return value
class NewlineEntry(Entry):
"""New line sequence to be stored in :py:class:`Configuration`.
Following conversion are performed on the given string:
* 'LF' -> '\n'
* 'CR' -> '\r'
"""
def _parse_value(self, value):
if is_bytes(value):
value = value.decode('ASCII')
value = value.upper()
return value.replace('LF', '\n').replace('CR', '\r') | /robotframework-sshlibrary-3.8.1rc1.tar.gz/robotframework-sshlibrary-3.8.1rc1/src/SSHLibrary/config.py | 0.928417 | 0.557303 | config.py | pypi |
# robotframework-stacktrace
A listener for RF >= 4.0 that prints a Stack Trace to console to faster find the code section where the failure appears.
## Installation
```shell
pip install robotframework-stacktrace
```
## Usage
```shell
robot --listener RobotStackTracer <your file.robot>
```
### Example
Old Console Output:
```commandline
❯ robot -d logs TestCases/14_Browser/01_CarConfig.robot
==============================================================================
01 CarConfig
==============================================================================
Configure Car with Pass | FAIL |
TimeoutError: page.selectOption: Timeout 3000ms exceeded.
=========================== logs ===========================
waiting for selector ""Basismodell" >> ../.. >> select"
selector resolved to visible <select _ngcontent-c7="" class="maxWidth ng-untouched ng…>…</select>
selecting specified option(s)
did not find some options - waiting...
============================================================
Note: use DEBUG=pw:api environment variable to capture Playwright logs.
------------------------------------------------------------------------------
Configure Car with wrong Acc | FAIL |
TimeoutError: page.check: Timeout 3000ms exceeded.
=========================== logs ===========================
waiting for selector "//span[contains(text(),'aABS')]/../input"
============================================================
Note: use DEBUG=pw:api environment variable to capture Playwright logs.
------------------------------------------------------------------------------
Configure Car with car Acc | FAIL |
TimeoutError: page.click: Timeout 3000ms exceeded.
=========================== logs ===========================
waiting for selector "[href="/config/summary/wrong"]"
============================================================
Note: use DEBUG=pw:api environment variable to capture Playwright logs.
------------------------------------------------------------------------------
01 CarConfig | FAIL |
3 tests, 0 passed, 3 failed
==============================================================================
Output: /Source/RF-Schulung/02_RobotFiles/logs/output.xml
Log: /Source/RF-Schulung/02_RobotFiles/logs/log.html
Report: /Source/RF-Schulung/02_RobotFiles/logs/report.html
```
New Stack Trace Output
```commandline
❯ robot -d logs --listener RobotStackTracer TestCases/14_Browser/01_CarConfig.robot
==============================================================================
01 CarConfig
==============================================================================
Configure Car with Pass ...
Traceback (most recent call last):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
File /Source/RF-Schulung/02_RobotFiles/TestCases/14_Browser/01_CarConfig.robot:23
T: Configure Car with Pass
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
File /Source/RF-Schulung/02_RobotFiles/TestCases/14_Browser/01_CarConfig.robot:28
Select aMinigolf as model
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
File /Source/RF-Schulung/02_RobotFiles/TestCases/14_Browser/functional_keywords.resource:14
Select Options By ${select_CarBaseModel} text ${basemodel}
| ${select_CarBaseModel} = "Basismodell" >> ../.. >> select (str)
| ${basemodel} = aMinigolf (str)
______________________________________________________________________________
Configure Car with Pass | FAIL |
TimeoutError: page.selectOption: Timeout 3000ms exceeded.
=========================== logs ===========================
waiting for selector ""Basismodell" >> ../.. >> select"
selector resolved to visible <select _ngcontent-c7="" class="maxWidth ng-untouched ng…>…</select>
selecting specified option(s)
did not find some options - waiting...
============================================================
Note: use DEBUG=pw:api environment variable to capture Playwright logs.
------------------------------------------------------------------------------
Configure Car with wrong Acc ....
Traceback (most recent call last):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
File /Source/RF-Schulung/02_RobotFiles/TestCases/14_Browser/01_CarConfig.robot:38
T: Configure Car with wrong Acc
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
File /Source/RF-Schulung/02_RobotFiles/TestCases/14_Browser/01_CarConfig.robot:43
Select Accessory aABS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
File /Source/RF-Schulung/02_RobotFiles/TestCases/14_Browser/functional_keywords.resource:38
Check Checkbox //span[contains(text(),'${accessory}')]/../input
| //span[contains(text(),'${accessory}')]/../input = //span[contains(text(),'aABS')]/../input (str)
______________________________________________________________________________
Configure Car with wrong Acc | FAIL |
TimeoutError: page.check: Timeout 3000ms exceeded.
=========================== logs ===========================
waiting for selector "//span[contains(text(),'aABS')]/../input"
============================================================
Note: use DEBUG=pw:api environment variable to capture Playwright logs.
------------------------------------------------------------------------------
Configure Car with car Acc ..
Traceback (most recent call last):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
File /Source/RF-Schulung/02_RobotFiles/TestCases/14_Browser/01_CarConfig.robot:51
T: Configure Car with car Acc
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
File /Source/RF-Schulung/02_RobotFiles/TestCases/14_Browser/01_CarConfig.robot:62
Set wrong Car Name ${car}
| ${car} = My New Car (str)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
File /Source/RF-Schulung/02_RobotFiles/TestCases/14_Browser/functional_keywords.resource:53
Click ${car_name}
| ${car_name} = [href="/config/summary/wrong"] (str)
______________________________________________________________________________
Configure Car with car Acc | FAIL |
TimeoutError: page.click: Timeout 3000ms exceeded.
=========================== logs ===========================
waiting for selector "[href="/config/summary/wrong"]"
============================================================
Note: use DEBUG=pw:api environment variable to capture Playwright logs.
------------------------------------------------------------------------------
01 CarConfig | FAIL |
3 tests, 0 passed, 3 failed
==============================================================================
Output: /Source/RF-Schulung/02_RobotFiles/logs/output.xml
Log: /Source/RF-Schulung/02_RobotFiles/logs/log.html
Report: /Source/RF-Schulung/02_RobotFiles/logs/report.html
``` | /robotframework-stacktrace-0.4.1.tar.gz/robotframework-stacktrace-0.4.1/README.md | 0.478773 | 0.703257 | README.md | pypi |
from enum import IntEnum
from os import path
from robot.errors import VariableError
from robot.libraries.BuiltIn import BuiltIn
from robot.utils import cut_long_message
__version__ = "0.4.1"
bi = BuiltIn()
muting_keywords = [
"Run Keyword And Ignore Error",
"Run Keyword And Expect Error",
"Run Keyword And Return Status",
"Run Keyword And Warn On Failure",
"Wait Until Keyword Succeeds",
]
class Kind(IntEnum):
Suite = 0
Test = 1
Keyword = 2
class StackElement:
def __init__(
self, file, source, lineno, name, args=None, kind: Kind = Kind.Keyword
):
self.file = file
self.source = source
self.lineno = lineno
self.name = name
self.args = args or []
self.kind = kind
def resolve_args(self):
for arg in self.args:
try:
resolved = bi.replace_variables(arg)
if resolved != arg:
yield str(arg), f"{resolved} ({type(resolved).__name__})"
except VariableError:
yield str(arg), "<Unable to define variable value>"
class RobotStackTracer:
ROBOT_LISTENER_API_VERSION = 2
def __init__(self):
self.StackTrace = []
self.SuiteTrace = []
self.new_error = True
self.errormessage = ""
self.mutings = []
self.lib_files = {}
def start_suite(self, name, attrs):
self.SuiteTrace.append(attrs["source"])
def library_import(self, name, attrs):
self.lib_files[name] = attrs.get("source")
def resource_import(self, name, attrs):
self.lib_files[name] = attrs.get("source")
def start_test(self, name, attrs):
self.StackTrace = [
StackElement(
self.SuiteTrace[-1],
self.SuiteTrace[-1],
attrs["lineno"],
name,
kind=Kind.Test,
)
]
def start_keyword(self, name, attrs):
source = attrs.get(
"source",
self.StackTrace[-1].file if self.StackTrace else self.SuiteTrace[-1],
)
file = self.lib_files.get(attrs.get("libname"), source)
self.StackTrace.append(
StackElement(
file,
self.fix_source(source),
attrs.get("lineno", None),
attrs["kwname"],
attrs["args"],
)
)
if attrs["kwname"] in muting_keywords:
self.mutings.append(attrs["kwname"])
self.new_error = True
def fix_source(self, source):
if (
source
and path.isdir(source)
and path.isfile(path.join(source, "__init__.robot"))
):
return path.join(source, "__init__.robot")
else:
return source
def end_keyword(self, name, attrs):
if self.mutings and attrs["kwname"] == self.mutings[-1]:
self.mutings.pop()
if attrs["status"] == "FAIL" and self.new_error and not self.mutings:
print("\n".join(self._create_stacktrace_text()))
self.StackTrace.pop()
self.new_error = False
def _create_stacktrace_text(self) -> str:
error_text = [f" "]
error_text += [" Traceback (most recent call last):"]
call: StackElement
for index, call in enumerate(self.StackTrace):
if call.kind >= Kind.Test:
kind = "T:" if call.kind == Kind.Test else ""
path = (
f"{call.source}:{call.lineno}"
if call.lineno and call.lineno > 0
else f"{call.source}:0"
)
error_text += [f' {"~" * 74}']
error_text += [f" File {path}"]
error_text += [
f' {kind} {call.name} {" ".join(call.args or [])}'
]
for var, value in call.resolve_args():
error_text += [f" | {var} = {cut_long_message(value)}"]
error_text += [f'{"_" * 78}']
return error_text
def end_test(self, name, attrs):
self.StackTrace = []
def end_suite(self, name, attrs):
self.SuiteTrace.pop()
def log_message(self, message):
if message["level"] == "FAIL":
self.errormessage = message["message"] # may be relevant / Not used | /robotframework-stacktrace-0.4.1.tar.gz/robotframework-stacktrace-0.4.1/src/RobotStackTracer/__init__.py | 0.482185 | 0.174762 | __init__.py | pypi |
from robot.api import logger
from .store import StateMachineStore
from .state_machine import StateMachine, State
from .utils import is_string, is_dictionary, get_keword, build_callback, dict_merge, keyword_should_exist
from .exceptions import StateMachineNotFoundError
class StateMachineFacade:
"""Facade provides api to create and manage state machine."""
def __init__(self) -> None:
self._store = StateMachineStore()
def create_state_machine(self, name: str) -> None:
"""Creates state machine object."""
if not is_string(name):
raise RuntimeError('Name of state machine must be a string.')
if self._store.get(name) is not None:
logger.warn('State machine named {} has been overwritten'.format(name))
sm = StateMachine(name)
self._store.add(name, sm)
logger.debug("State machine with name '{}' was created.".format(name))
def add_state(self, state: str, on_update: str, sm: str) -> None:
"""Adds single state to state machine."""
if not is_string(state):
raise RuntimeError('State parameter should be name of keyword with main state procedure.')
if not is_string(on_update):
raise RuntimeError('On update parameter should be name of keyword with transition procedure.')
if not is_string(sm):
raise RuntimeError('Name of state machine must be a string.')
sm_instance = self._get_state_machine_or_raise_error(sm)
keyword_should_exist(state)
keyword_should_exist(on_update)
state_keyword = get_keword(state)
on_update_keyword = get_keword(on_update)
run_callback = build_callback(state_keyword)
on_update_callback = build_callback(on_update_keyword)
state_instance = State(name=state, run_callback=run_callback, on_update_callback=on_update_callback)
sm_instance.add_state(state_instance)
logger.debug("State with '{}' run keyword "
"and '{}' update keyword was add to '{}' state machine.".format(state, on_update, sm))
def go_to_state(self, state: str, sm: str) -> None:
"""Jumps to specified state."""
if not is_string(state):
raise RuntimeError('State parameter should be name of keyword.')
if not is_string(sm):
raise RuntimeError('Sm parameter should be name of created state machine.')
sm_instance = self._get_state_machine_or_raise_error(sm)
sm_instance.go_to_state(state)
def update_state(self, sm: str) -> None:
"""Goes to next state."""
if not is_string(sm):
raise RuntimeError('Sm parameter should be name of created state machine.')
sm_instance = self._get_state_machine_or_raise_error(sm)
sm_instance.update()
def get_context(self, sm: str) -> dict:
"""Returns context for given state machine."""
if not is_string(sm):
raise RuntimeError('Sm parameter should be name of created state machine.')
sm_instance = self._get_state_machine_or_raise_error(sm)
return sm_instance.context
def set_context(self, sm: str, context: dict) -> None:
"""Overwrites context for given state machine."""
if not is_string(sm):
raise RuntimeError('Sm parameter should be name of created state machine.')
if not is_dictionary(context):
raise RuntimeError('Context parameter should be a dictionary.')
sm_instance = self._get_state_machine_or_raise_error(sm)
sm_instance.context = context
def update_context(self, sm: str, item: dict) -> None:
"""Updates context for given state machine."""
if not is_string(sm):
raise RuntimeError('Sm parameter should be name of created state machine.')
if not is_dictionary(item):
raise RuntimeError('Item parameter should be a dictionary.')
sm_instance = self._get_state_machine_or_raise_error(sm)
sm_instance.context = dict_merge(sm_instance.context, item)
def destroy_state_machine(self, name: str) -> None:
"""Destroys state machine object."""
if not is_string(name):
raise RuntimeError('Name of state machine must be a string.')
if self._store.get(name) is None:
logger.warn("State machine named '{}' does not exist.".format(name))
else:
self._store.remove(name)
logger.debug("State machine with name '{}' was destroyed.".format(name))
def _get_state_machine_or_raise_error(self, sm: str) -> StateMachine:
"""
Gets state machine with passed name or raises error if state machine does not exist.
:param sm: name of state machine
:return: state machine object
"""
state_machine = self._store.get(sm)
if state_machine is None:
logger.debug('All created state machines:\n' + ',\n'.join(self._store.get_all()))
missing_state_machine_message = "There is no state machine named '{}'.\n" \
"Call keyword 'Create State Machine' to create it."
raise StateMachineNotFoundError(missing_state_machine_message.format(sm))
return state_machine | /robotframework-statemachinelibrary-1.0.2.tar.gz/robotframework-statemachinelibrary-1.0.2/src/StateMachineLibrary/facade.py | 0.866288 | 0.271222 | facade.py | pypi |
import copy
from typing import Callable
from robot.api import logger
from robot.running.model import Keyword
from robot.running.context import EXECUTION_CONTEXTS
from robot.libraries.BuiltIn import RobotNotRunningError
from robot.errors import DataError
from robot.running.usererrorhandler import UserErrorHandler
def get_keword(name: str) -> Keyword:
"""
Returns Keyword object for given name.
:param name: name of keyword
:return: found keyword
"""
return Keyword(name)
def get_robot_context(top: bool = False) -> EXECUTION_CONTEXTS:
"""
Returns current robot context or raises RobotNotRunningError error.
:param top: top context will be returned if it is True
:return: robot context
"""
ctx = EXECUTION_CONTEXTS.current if not top else EXECUTION_CONTEXTS.top
if ctx is None:
raise RobotNotRunningError('Cannot access execution context')
return ctx
def keyword_should_exist(name: str) -> None:
"""
Checks if keyword was defined.
:param name: name of keyword
:return: None
"""
ctx = get_robot_context()
try:
runner = ctx.namespace.get_runner(name)
except DataError as error:
raise AssertionError(error.message)
if isinstance(runner, UserErrorHandler):
raise AssertionError(runner.error.message)
def is_string(item: object) -> bool:
"""
Checks if passed argument is a string object
:param item: item for verify
:return: Ture if item is string, otherwise False
"""
return isinstance(item, str)
def is_dictionary(item: object) -> bool:
"""
Checks if passed argument is a dictionary object
:param item: item for verify
:return: Ture if item is dictionary, otherwise False
"""
return isinstance(item, dict)
def build_callback(keyword: Keyword) -> Callable:
"""
Return function callback for state_machine.State class.
:param keyword: keyword which 'run' method will be executed in callback
:return: callback function
"""
def callback():
context = get_robot_context()
keyword.run(context)
return callback
def dict_merge(a: dict, b: dict) -> dict:
"""
Merges second dictionary to first. It does not modify any of dictionaries.
:param a: first dictionary
:param b: second dictionary
:return: merged dictionary
"""
if not is_dictionary(a):
raise ValueError("Argument 'a' should be a dictionary")
if not is_dictionary(b):
return b
result = copy.deepcopy(a)
for k, v in b.items():
if k in result and isinstance(result[k], dict):
result[k] = dict_merge(result[k], v)
else:
result[k] = copy.deepcopy(v)
return result
def log_state_machine_states(sm) -> None:
"""
Logs all created state machines.
:param sm: State machine instnace
:return: None
"""
logger.debug('States added to state machine:\n' + ',\n'.join(sm.states.keys())) | /robotframework-statemachinelibrary-1.0.2.tar.gz/robotframework-statemachinelibrary-1.0.2/src/StateMachineLibrary/utils.py | 0.82963 | 0.372363 | utils.py | pypi |
from .facade import StateMachineFacade
class StateMachineLibrary:
"""Interface provides only necessary methods for robot framework."""
def __init__(self) -> None:
self._facade = StateMachineFacade()
def create_state_machine(self, name) -> None:
"""
Creates state machine object. State machine can control flow in your test or task definition.
Example of flow:
state A -> state B -> state C -> state A -> state B
`-> state C
:param name: name of state machine given by you
:return: None
"""
self._facade.create_state_machine(name)
def add_state(self, state, on_update, sm) -> None:
"""
Adds single state to state machine. To creates state you have to define state keyword
which contains main code of state to execute. In addition you have to define a method that contains
the transition logic to the next state.
:param state: name of keyword with main state logic
:param on_update: name of keyword with transition logic to the next state
:param sm: name of state machine
:return: None
"""
self._facade.add_state(state, on_update, sm)
def go_to_state(self, state, sm) -> None:
"""
Jumps to specified state. State is identified by name of main keyword.
.. warning::
You can not use it in main keyword for given state.
:param state: name of next keyword
:param sm: name of state machine
:return: None
"""
self._facade.go_to_state(state, sm)
def update_state(self, sm) -> None:
"""
Goes to next state. It calls keyword with transition logic to the next state and main keyword from next state.
Next state is indicated in keyword with transition logic for current state.
:param sm: name of state machine
:return: None
"""
self._facade.update_state(sm)
def get_context(self, sm: str) -> dict:
"""
Returns context for given state machine. Context is dictionary where you can store common resources
for state machine.
:param sm: name of state machine
:return: context as dictionary
"""
return self._facade.get_context(sm)
def set_context(self, sm: str, context: dict) -> None:
"""
Overwrites context for given state machine. All common resources for state machine will be overwritten.
:param sm: name of state machine
:param context: context to set
:return: None
"""
self._facade.set_context(sm, context)
def update_context(self, sm: str, item: dict) -> None:
"""
Updates context for given state machine. It merges passed dictionary item with existing context.
Example:
+---------------------+------------------------------------+------------------------------------+
| existing context | item | result |
+=====================+====================================+====================================+
| {'result':{}} | {'result':{'status':'FINISHED'}} | {'result':{'status':'FINISHED'}} |
+---------------------+------------------------------------+------------------------------------+
| {'time':'00:00'} | {'user':'Leo'} | {'time':'00:00','user':'Leo'} |
+---------------------+------------------------------------+------------------------------------+
| {'errors':0} | {'errors':1} | {'errors':1} |
+---------------------+------------------------------------+------------------------------------+
:param sm: name of state machine
:param item: object to add to context
:return: None
"""
self._facade.update_context(sm, item)
def destroy_state_machine(self, name) -> None:
"""
Destroys state machine object.
:param name: name of created state machine
:return: None
"""
self._facade.destroy_state_machine(name) | /robotframework-statemachinelibrary-1.0.2.tar.gz/robotframework-statemachinelibrary-1.0.2/src/StateMachineLibrary/interface.py | 0.923541 | 0.557905 | interface.py | pypi |
import json
import re
class RequestedParams:
def __init__(self, cookies, body, content_type,
files, headers, query_params):
self.cookies = cookies
self.body = body
self.content_type = content_type
self.files = files
self.headers = headers
self.query_params = query_params
class HeaderDoesNotExist:
def __repr__(self):
return "<HEADER DOES NOT EXIST>"
class Statistic:
def __init__(self, method, url):
self.method = method
self.url = url
self.requests= []
self._current_request_index = None
self._number_of_requests_not_specify = True
self._error_messages = ["Expect that server was requested with [{0}] {1}.".format(method.upper(),url)]
@property
def requested_times(self):
return len(self.requests)
def exactly_once(self):
return self.exactly_1_times()
def exactly_twice(self):
return self.exactly_2_times()
def for_the_first_time(self):
return self.for_the_1_time()
def for_the_second_time(self):
return self.for_the_2_time()
def __getattr__(self, item):
exactly_times_pattern = r"^exactly_(?P<number>\d+)_times$"
exactly_times_result = re.match(exactly_times_pattern, item)
if exactly_times_result:
number = int(exactly_times_result.groupdict()["number"])
return self._exactly_times(number)
for_the_time_pattern = r"^for_the_(?P<number>\d+)_time$"
for_the_time_result = re.match(for_the_time_pattern, item)
if for_the_time_result:
number = int(for_the_time_result.groupdict()["number"])
return self._for_the_time(number)
raise AttributeError("'Statistic' object has no attribute '{0}'".format(item))
def _exactly_times(self, expected_requested_times):
self._number_of_requests_not_specify = False
if expected_requested_times != self.requested_times:
self._error_messages.append(" {0} times.\nBut server was requested {0} times."
.format(expected_requested_times,self.requested_times))
self._raise_assertion()
return lambda: self
def _for_the_time(self, times):
if self.requested_times < times:
self._error_messages.append(
" At least {0} times.\nBut server was requested {1} times."
.format(times,self.requested_times))
self._raise_assertion()
else:
self._current_request_index = times - 1
return lambda: self
def with_cookies(self, cookies):
actual_cookies = self.get_current_request().cookies
if cookies != actual_cookies:
requested_time = self._current_request_index + 1
self._error_messages.append("\nFor the {0} time: with cookies {1}.\nBut for the {0} time: cookies was {2}."
.format(requested_time,cookies,actual_cookies))
return self
def with_body(self, body):
actual_body = self.get_current_request().body.decode("utf-8", errors="skip")
if body != actual_body:
requested_time = self._current_request_index + 1
self._error_messages.append(
"\nFor the {0} time: with body {1}.\nBut for the {0} time: body was {2}."
.format(requested_time,body.__repr__(),actual_body.__repr__()))
return self
def with_json(self, json_dict):
body = json.dumps(json_dict, sort_keys=True)
actual_body = self.get_current_request().body.decode("utf-8", errors="skip")
try:
actual_json_dict = json.loads(actual_body)
except json.JSONDecodeError:
requested_time = self._current_request_index + 1
self._error_messages.append(
"\nFor the {0} time: with json {1}.\nBut for the {0} time: json was corrupted {2}."
.format(requested_time,body,actual_body.__repr__()))
return self
actual_body = json.dumps(actual_json_dict, sort_keys=True)
if body != actual_body:
requested_time = self._current_request_index + 1
self._error_messages.append(
"\nFor the {0} time: with json {1}.\nBut for the {0} time: json was {2}."
.format(requested_time,body,actual_body))
return self
def with_content_type(self, content_type):
actual_content_type = self.get_current_request().content_type
if content_type != actual_content_type:
requested_time = self._current_request_index + 1
self._error_messages.append(
"\nFor the {0} time: with content type {1}.\nBut for the {0} time: content type was {2}."
.format(requested_time,content_type.__repr__(),actual_content_type.__repr__())
)
return self
def with_files(self, files):
actual_files = self.get_current_request().files
if files != actual_files:
requested_time = self._current_request_index + 1
self._error_messages.append(
"\nFor the {0} time: with files {1}.\nBut for the {0} time: files was {2}."
.format(requested_time,files,actual_files))
return self
def with_headers(self, headers):
actual_headers = self.get_current_request().headers
expected_headers = {name.upper(): value for name, value in headers.items()}
headers_diff = self._get_headers_diff(expected_headers, actual_headers)
if headers_diff:
requested_time = self._current_request_index + 1
self._error_messages.append(
"\nFor the {0} time: with headers contain {1}.\nBut for the {0} time: headers contained {2}."
.format(requested_time,expected_headers,headers_diff))
return self
@staticmethod
def _get_headers_diff(expected_headers, actual_headers):
headers_diff = {}
for header_name, header_value in expected_headers.items():
if header_name in actual_headers and header_value != actual_headers[header_name]:
headers_diff[header_name] = actual_headers[header_name]
elif header_name not in actual_headers:
headers_diff[header_name] = HeaderDoesNotExist()
return headers_diff
def with_query_params(self, query_params):
actual_query_params = self.get_current_request().query_params
if query_params != actual_query_params:
requested_time = self._current_request_index + 1
self._error_messages.append(
"\nFor the {0} time: with query params {1}.\n"
"But for the {0} time: query params was {2}."
.format(requested_time,query_params,actual_query_params))
return self
def get_current_request(self):
if self._current_request_index is None:
raise AttributeError("You should specify concrete request for check with 'for_the_<any_number>_time'")
return self.requests[self._current_request_index]
def check(self):
if not self.requested_times and self._number_of_requests_not_specify:
self._error_messages.append("\nBut server was requested 0 times.")
if self.errors_exist:
self._raise_assertion()
else:
self._clean_state()
return True
@property
def errors_exist(self):
return len(self._error_messages) > 1
def _raise_assertion(self):
error_message = "".join(self._error_messages)
self._clean_state()
raise AssertionError(error_message)
def _clean_state(self):
self._error_messages = self._error_messages[0:1]
self._number_of_requests_not_specify = True
self._current_request_index = None | /robotframework-stublibrary-0.1.4.tar.gz/robotframework-stublibrary-0.1.4/src/StubLibrary/statistic.py | 0.619586 | 0.240234 | statistic.py | pypi |
import os,psutil
import hashlib
from .robotlibcore import keyword
from allpairspy import AllPairs
class Commons(object):
@keyword
def create_testcases(self,parameters,**kwargs):
"""
Using parameters to create testcases by pairwise method
Optionally, you can specify:
filter_func - https://github.com/thombashi/allpairspy/blob/master/examples/example2.1.py
n - https://github.com/thombashi/allpairspy/blob/master/examples/example1.2.py
previously_tested - https://github.com/thombashi/allpairspy/blob/master/examples/example1.3.py
in kwargs to customize the generared testcases set
Example usage:
| ${l1} | Create List | 1 | 2 | 3 |
| ${l2} | Create List | a | b |
| ${l} | Create List | ${l1} | ${l2} |
| ${x} | Create Testcases | ${l} |
| ${x} | Create Testcases | ${l} | n=1 |
"""
if kwargs.has_key('n'):
kwargs['n']=int(kwargs['n'])
l=list(AllPairs(parameters, **kwargs))
return l
@keyword(name='MD5 Sum')
def md5sum(self,fname):
"""
Calculating digital fingerprint of a file's 128-bit MD5 hashes
"""
hash_md5 = hashlib.md5()
with open(fname, "rb") as f:
for chunk in iter(lambda: f.read(4096), b""):
hash_md5.update(chunk)
return hash_md5.hexdigest()
@keyword
def kill_process(self,name):
"""
Kill Processes by name
"""
for proc in psutil.process_iter():
if proc.name() == name:
proc.kill()
return True
return False
@keyword
def set_hosts(self,address,names,type='ipv4'):
'''add item to system hosts file'''
from python_hosts import Hosts, HostsEntry
hosts = Hosts()
if isinstance(names,str):
names=[names]
new_entry = HostsEntry(entry_type=type, address=address, names=names)
hosts.add([new_entry])
hosts.write() | /robotframework-stublibrary-0.1.4.tar.gz/robotframework-stublibrary-0.1.4/src/StubLibrary/commons.py | 0.612889 | 0.299566 | commons.py | pypi |
from suds import WebFault
from .utils import *
import socket
class RawSoapMessage(object):
def __init__(self, string):
if isinstance(string, str):
self.message = string.encode('UTF-8')
else:
self.message = str(string)
def __str__(self):
return self.message.decode('UTF-8')
def __unicode__(self):
return self.message
class _ProxyKeywords(object):
def call_soap_method(self, name, *args):
"""Calls the SOAP method with the given ``name`` and ``args``.
Returns a Python object graph or SOAP envelope as a XML string
depending on the client options.
"""
return self._call(None, None, False, name, *args)
def specific_soap_call(self, service, port, name, *args):
"""Calls the SOAP method overriding client settings.
If there is only one service specified then ``service`` is ignored.
``service`` and ``port`` can be either by name or index. If only `port` or
``service`` need to be specified, leave the other one ${None} or
${EMPTY}. The index is the order of appearence in the WSDL starting
with 0.
Returns a Python object graph or SOAP envelope as a XML string
depending on the client options.
"""
return self._call(service, port, False, name, *args)
def call_soap_method_expecting_fault(self, name, *args):
"""Calls the SOAP method expecting the server to raise a fault.
Fails if the server does not raise a fault. Returns a Python object
graph or SOAP envelope as a XML string depending on the client
options.
A fault has the following attributes:\n
| faultcode | required |
| faultstring | required |
| faultactor | optional |
| detail | optional |
"""
return self._call(None, None, True, name, *args)
def create_raw_soap_message(self, message):
"""Returns an object that can used in lieu of SOAP method arguments.
`message` should be an entire SOAP message as a string. The object
returned can be used in lieu of *args for `Call Soap Method`, `Call
Soap Method Expecting Fault`, and `Specific Soap Call`.
Example:\n
| ${message}= | `Create Raw Soap Message` | <SOAP-ENV:Envelope ...</ns2:Body></SOAP-ENV:Envelope> |
| `Call Soap Method` | addContact | ${message} |
"""
return RawSoapMessage(message)
# private
def _call(self, service, port, expect_fault, name, *args):
client = self._client()
self._backup_options()
if service or (service == 0):
client.set_options(service=parse_index(service))
if port or (port == 0):
client.set_options(port=parse_index(port))
method = getattr(client.service, name)
try:
if len(args) == 1 and isinstance(args[0], RawSoapMessage):
received = method(__inject={'msg': args[0].message})
else:
received = method(*args)
if expect_fault:
raise AssertionError('The server did not raise a fault.')
except WebFault as e:
if not expect_fault:
raise e
received = e.fault
finally:
self._restore_options()
return_xml = self._get_external_option("return_xml", False)
if return_xml:
received = self.get_last_received().decode('utf-8')
return received
# private
def _backup_options(self):
options = self._client().options
self._old_options = dict([[n, getattr(options, n)] for n in ('service', 'port')])
if self._global_timeout:
self._old_timeout = socket.getdefaulttimeout()
def _restore_options(self):
self._client().set_options(**self._old_options)
# restore the default socket timeout because suds does not
if self._global_timeout:
socket.setdefaulttimeout(self._old_timeout) | /robotframework-suds2library-1.1.1.tar.gz/robotframework-suds2library-1.1.1/src/Suds2Library/proxy.py | 0.729134 | 0.225619 | proxy.py | pypi |
from robot.utils import ConnectionCache
from .version import VERSION
from .factory import _FactoryKeywords
from .clientmanagement import _ClientManagementKeywords
from .options import _OptionsKeywords
from .proxy import _ProxyKeywords
from .soaplogging import _SoapLoggingKeywords
from .wsse import _WsseKeywords
from suds import null
from robot.api import logger
from robot.libraries.BuiltIn import BuiltIn
import weakref
__version__ = VERSION
class Suds2Library(_ClientManagementKeywords, _FactoryKeywords,
_OptionsKeywords, _ProxyKeywords, _SoapLoggingKeywords,
_WsseKeywords):
"""Suds2Library is a library for functional testing of SOAP-based web
services. It is a full Python 3 port of
[https://github.com/ombre42/robotframework-sudslibrary|SudsLibrary]
using [https://github.com/cackharot/suds-py3|suds-py3] as a SOAP client.
Suds2Library is based on [https://fedorahosted.org/suds/|Suds], a dynamic
SOAP 1.1 client.
== Case Sensitivity in Suds2Library ==
Many things in the world of SOAP are case-sensitive. This includes method
names, WSDL object names and attributes, and service or port names.
== Creating and Configuring a Client ==
If necessary, use keywords `Bind Schema To Location` or `Add Doctor
Import`. These are rarely needed. Next, `Create Soap Client` to create a Suds
client. The output from this keyword contains useful information including
available types and methods. Next, use other keywords to configure the
client as necessary. `Set Location` is the most commonly needed keyword.
== Working with WSDL Objects ==
When Suds digests a WSDL, it creates dynamic types to represent the complex
types defined by a WSDL or its imports. These types are listed in the
output of `Create Soap Client`. WSDL objects are used as method arguments,
attribute values of other WSDL objects, and return values. `Create Wsdl
Object` is used to create instances of WSDL object types. To see what the
structure of a WSDL object is, you can do this:
| ${obj}= | `Create Wsdl Object` | someObject |
| ${obj as str}= | `Convert To String` | ${obj} |
| `Log` | ${obj as str} | |
The same technique can be used to analyze a response object. It may also
help to use a tool such as Eclipse or SoapUI to comprehend the structures.
=== Getting WSDL Object Attributes ===
Getting a WSDL object's attribute value may be done with `Get Wsdl Object
Attribute` or extended variable syntax*. Keywords from other libraries, such
as _BuiltIn_ and _Collections_ may be used to verify attribute values.
Examples:
| ${name}= | `Get Wsdl Object Attribute` | ${person} | name |
| `Should Be Equal` | ${person.name} | Bob | |
=== Setting WSDL Object Attributes ===
Setting a WSDL object's attribute value may be done with `Set Wsdl Object
Attribute` or extended variable syntax*. `Set Wsdl Object Attribute`
verifies the argument is an object of the correct type and the attribute
exists.
| `Set Wsdl Object Attribute` | ${person} | name | Tia |
| ${person.name}= | `Set Variable` | Tia | |
* In order to use extended variable syntax, the attribute name must consist
of only letters, numbers, and underscores.
== Example Test ==
The following simple example demonstrates verifying the return value using
keywords in this library and in the `BuiltIn` and `Collections` libraries.
You can run this test because it uses a public web service.
| `Create Soap Client` | http://www.webservicex.net/Statistics.asmx?WSDL | | |
| ${dbl array}= | `Create Wsdl Object` | ArrayOfDouble | |
| `Append To List` | ${dbl array.double} | 2.0 | |
| `Append To List` | ${dbl array.double} | 3.0 | |
| ${result}= | `Call Soap Method` | GetStatistics | ${dbl array} |
| `Should Be Equal As Numbers` | ${result.Average} | 2.5 | |
The definition of type ArrayOfDouble:
| <s:complexType name="ArrayOfDouble">
| <s:sequence>
| <s:element minOccurs="0" maxOccurs="unbounded" name="double" type="s:double"/>
| </s:sequence>
| </s:complexType>
Note that the attribute name on the ArrayOfDouble-type that is the list of
numbers is the singular "double". Outside of the WSDL, the structure can
also be seen in the output of Create Wsdl Object:
| ${dbl array} = (ArrayOfDouble){
| double[] = <empty>
| }
The relevant part of the WSDL defining the parameters to the method:
| <s:element name="GetStatistics">
| <s:complexType>
| <s:sequence>
| <s:element minOccurs="0" maxOccurs="1" name="X" type="tns:ArrayOfDouble"/>
| </s:sequence>
| </s:complexType>
| </s:element>
The definition of this method appears in the output of Create Soap Client
as:
| GetStatistics(ArrayOfDouble X, )
== Passing Explicit NULL Values ==
If you have a service that takes ``NULL`` values for required parameters or
you want to pass ``NULL`` for optional object attributes, you simply need to
set the value to ``${SUDS_NULL}``. You need to use ``${SUDS_NULL}`` instead of
``${None}`` because ``None`` is interpreted by the marshaller as not having a
value. The soap message will contain an empty (and ``xsi:nil="true"`` if node
defined as nillable). ``${SUDS_NULL}`` is defined during library
initialization, so editors like RIDE will not show it as defined.
== Extending Suds2Library ==
There may be times where Suds/Suds2Library does not work using the library
keywords alone. Extending the library instead of writing a custom one will
allow you to use the existing keywords in Suds2Library.
There are two methods useful for extending Suds2Library:
| _client()
| _add_client(client, alias=None)
The first can be used to access the current instance of
suds.client.Client. The second can be used to put a client into the client
cache that you have instantiated.
Here is an example demonstrating how to implement a keyword that adds a
MessagePlugin to the current Suds client (based on the [https://fedorahosted.org/suds/wiki/Documentation#MessagePlugin|Suds documentation]):
| from robot.libraries.BuiltIn import BuiltIn
| from suds.plugin import MessagePlugin
|
| class _MyPlugin(MessagePlugin):
| def marshalled(self, context):
| body = context.envelope.getChild('Body')
| foo = body[0]
| foo.set('id', '12345')
| foo.set('version', '2.0')
|
| class Suds2LibraryExtensions(object):
| def attach_my_plugin(self):
| client = BuiltIn().get_library_instance("Suds2Library")._client()
| # prepend so Suds2Library's plugin is left in place
| plugins = client.options.plugins
| if any(isinstance(x, _MyPlugin) for x in plugins):
| return
| plugins.insert(0, _MyPlugin())
| client.set_options(plugins=plugins)
"""
ROBOT_LIBRARY_VERSION = VERSION
ROBOT_LIBRARY_SCOPE = "GLOBAL"
ROBOT_LIBRARY_DOC_FORMAT = "ROBOT"
def __init__(self):
self._cache = ConnectionCache(no_current_msg='No current client')
self._imports = []
self._logger = logger
self._global_timeout = True
self._external_options = weakref.WeakKeyDictionary()
try: # exception if Robot is not running
BuiltIn().set_global_variable("${SUDS_NULL}", null())
except:
pass | /robotframework-suds2library-1.1.1.tar.gz/robotframework-suds2library-1.1.1/src/Suds2Library/__init__.py | 0.716615 | 0.47098 | __init__.py | pypi |
from suds.sudsobject import Object as SudsObject
class _FactoryKeywords(object):
def set_wsdl_object_attribute(self, object, name, value):
"""Sets the attribute of a WSDL object.
Example:
| ${order search request}= | `Create Wsdl Object` | OrderSearchRequest | |
| `Set Wsdl Object Attribute` | ${order search request} | id | 4065 |
"""
self._assert_is_suds_object(object)
getattr(object, name)
setattr(object, name, value)
def get_wsdl_object_attribute(self, object, name):
"""Gets the attribute of a WSDL object.
Extendend variable syntax may be used to access attributes; however,
some WSDL objects may have attribute names that are illegal in Python,
necessitating this keyword.
Example:
| ${sale record}= | `Call Soap Method` | getLastSale | |
| ${price}= | `Get Wsdl Object Attribute` | ${sale record} | Price |
"""
self._assert_is_suds_object(object)
return getattr(object, name)
def create_wsdl_object(self, type, *name_value_pairs):
"""Creates a WSDL object of the specified ``type``.
Requested ``type`` must be defined in the WSDL, in an import specified
by the WSDL, or with `Add Doctor Import`. ``type`` is case sensitive.
Example:
| ${contact}= | `Create Wsdl Object` | Contact | |
| `Set Wsdl Object Attribute` | ${contact} | Name | Kelly Newman |
Attribute values can be set by passing the attribute name and value in
pairs. This is equivalent to the two lines above:
| ${contact}= | `Create Wsdl Object` | Contact | Name | Kelly Newman |
"""
if len(name_value_pairs) % 2 != 0:
raise ValueError("Creating a WSDL object failed. There should be "
"an even number of name-value pairs.")
obj = self._client().factory.create(type)
for i in range(0, len(name_value_pairs), 2):
self.set_wsdl_object_attribute(obj, name_value_pairs[i], name_value_pairs[i + 1])
return obj
# private
def _assert_is_suds_object(self, object):
if not isinstance(object, SudsObject):
raise ValueError("Object must be a WSDL object (suds.sudsobject.Object).") | /robotframework-suds2library-1.1.1.tar.gz/robotframework-suds2library-1.1.1/src/Suds2Library/factory.py | 0.771327 | 0.396477 | factory.py | pypi |
import logging
import os
from suds.xsd.doctor import ImportDoctor
from urllib.parse import urlparse
from urllib.request import pathname2url
from suds.client import Client
from .utils import *
class _ClientManagementKeywords(object):
def create_soap_client(self, url_or_path, alias=None, autoblend=False, timeout='90 seconds', username=None,
password=None, auth_type='STANDARD'):
"""Loads a WSDL from the given URL/path and creates a Suds SOAP client.
Returns the index of this client instance which can be used later to
switch back to it. See `Switch Soap Client` for example.
Optional alias is an alias for the client instance and it can be used
for switching between clients (just as index can be used). See `Switch
Soap Client` for more details.
``username`` and ``password`` are needed if the WSDL is on a server
requiring basic authentication. ``auth_type`` selects the authentication
scheme to use. See `Set Http Authentication` for more information.
Autoblend ensures that the schema(s) defined within the WSDL import
each other.
``timeout`` sets the timeout for SOAP requests and must be given in
Robot Framework's time format (e.g. ``1 minute``, ``2 min 3 s``, ``4.5``).
Examples:
| `Create Soap Client` | http://localhost:8080/ws/Billing.asmx?WSDL |
| `Create Soap Client` | ${CURDIR}/../wsdls/tracking.wsdl |
"""
url = self._get_url(url_or_path)
autoblend = to_bool(autoblend)
kwargs = {'autoblend': autoblend}
if username:
password = password if password is not None else ""
transport = self._get_transport(auth_type, username, password)
kwargs['transport'] = transport
imports = self._imports
if imports:
self._log_imports()
kwargs['doctor'] = ImportDoctor(*imports)
client = Client(url, **kwargs)
logging.getLogger('suds.client').disabled = True
index = self._add_client(client, alias)
self._set_soap_timeout(timeout)
return index
def switch_soap_client(self, index_or_alias):
"""Switches between clients using index or alias.
Index is returned from `Create Soap Client` and alias can be given to
it.
Example:
| `Create Soap Client` | http://localhost:8080/Billing?wsdl | Billing |
| `Create Soap Client` | http://localhost:8080/Marketing?wsdl | Marketing |
| `Call Soap Method` | sendSpam | |
| `Switch Soap Client` | Billing | # alias |
| `Call Soap Method` | sendInvoices | |
| `Switch Soap Client` | 2 | # index |
Above example expects that there was no other clients created when
creating the first one because it used index ``1`` when switching to it
later. If you aren't sure about that you can store the index into
a variable as below.
| ${id} = | `Create Soap Client` | ... |
| # Do something ... | | |
| `Switch Soap Client` | ${id} | |
"""
self._cache.switch(index_or_alias)
def close_connection(self):
"""Closes the current connection from soap client.
Previous connection is made active by this keyword. Manually use
`Switch Connection` to switch to another connection.
Example:
| ${id} = | `Create Soap Client` | ... |
| # Do something ... | | |
| `Close Connection` |
"""
index = self._cache.current_index
if index is None:
raise RuntimeError("No open connection.")
self._cache._connections[index-1] = self._cache.current = self._cache._no_current
self._cache._connections.pop()
try:
self._cache.current=self._cache.get_connection(index-1)
except RuntimeError:
pass
def close_all_connections(self):
"""Closes all open connections.
This keyword is ought to be used either in test or suite teardown to
make sure all the connections are closed before the test execution
finishes.
After this keyword, the connection indices returned by
`Create Soap Client` are reset and start from ``1``.
Example:
| ${id} = | `Create Soap Client` |
| ${id} = | `Create Soap Client |
| # Do something with the connections |
| [Teardown] | `Close All Connections` |
"""
self._cache.empty_cache()
# PyAPI
def _client(self):
"""Returns the current suds.client.Client instance."""
return self._cache.current
def _add_client(self, client, alias=None):
"""Puts a client into the cache and returns the index.
The added client becomes the current one."""
client.set_options(faults=True)
self._logger.info('Using WSDL at %s%s' % (client.wsdl.url, client))
self._imports = []
index = self._cache.register(client, alias)
self.set_soap_logging(True)
return index
# private
def _log_imports(self):
if self._imports:
msg = "Using Imports for ImportDoctor:"
for imp in self._imports:
msg += "\n Namespace: '%s' Location: '%s'" % (imp.ns, imp.location)
for ns in imp.filter.tns:
msg += "\n Filtering for namespace '%s'" % ns
self._logger.info(msg)
def _get_url(self, url_or_path):
if not len(urlparse(url_or_path).scheme) > 1:
if not os.path.isfile(url_or_path):
raise IOError("File '%s' not found." % url_or_path)
url_or_path = 'file:' + pathname2url(url_or_path)
return url_or_path | /robotframework-suds2library-1.1.1.tar.gz/robotframework-suds2library-1.1.1/src/Suds2Library/clientmanagement.py | 0.6705 | 0.307345 | clientmanagement.py | pypi |
from suds import WebFault
from suds.sax.text import Raw
from .utils import *
import socket
class RawSoapMessage(object):
def __init__(self, string):
if isinstance(string, bytes):
self.message = string.decode()
else:
self.message = str(string)
def __str__(self):
return self.message
def __unicode__(self):
return self.message
class _ProxyKeywords(object):
def call_soap_method(self, name, *args):
"""Calls the SOAP method with the given `name` and `args`.
Returns a Python object graph or SOAP envelope as a XML string
depending on the client options.
"""
return self._call(None, None, False, name, *args)
def specific_soap_call(self, service, port, name, *args):
"""Calls the SOAP method overriding client settings.
If there is only one service specified then `service` is ignored.
`service` and `port` can be either by name or index. If only `port` or
`service` need to be specified, leave the other one ${None} or
${EMPTY}. The index is the order of appearence in the WSDL starting
with 0.
Returns a Python object graph or SOAP envelope as a XML string
depending on the client options.
"""
return self._call(service, port, False, name, *args)
def call_soap_method_expecting_fault(self, name, *args):
"""Calls the SOAP method expecting the server to raise a fault.
Fails if the server does not raise a fault. Returns a Python object
graph or SOAP envelope as a XML string depending on the client
options.
A fault has the following attributes:\n
| faultcode | required |
| faultstring | required |
| faultactor | optional |
| detail | optional |
"""
return self._call(None, None, True, name, *args)
def create_raw_soap_message(self, message):
"""Returns an object that can used in lieu of SOAP method arguments.
`message` should be an entire SOAP message as a string. The object
returned can be used in lieu of *args for `Call Soap Method`, `Call
Soap Method Expecting Fault`, and `Specific Soap Call`.
Example:\n
| ${message}= | Create Raw Soap Message | <SOAP-ENV:Envelope ...</ns2:Body></SOAP-ENV:Envelope> |
| Call Soap Method | addContact | ${message} |
"""
return RawSoapMessage(message)
# private
def _call(self, service, port, expect_fault, name, *args):
client = self._client()
self._backup_options()
if service or (service == 0):
client.set_options(service=parse_index(service))
if port or (port == 0):
client.set_options(port=parse_index(port))
method = getattr(client.service, name)
received = None
try:
if len(args) == 1 and isinstance(args[0], RawSoapMessage):
received = method(__inject={'msg': args[0].message})
else:
received = method(*args)
if expect_fault:
raise AssertionError('The server did not raise a fault.')
except WebFault as e:
if not expect_fault:
raise e
received = e.fault
finally:
self._restore_options()
return_xml = self._get_external_option("return_xml", False)
if return_xml:
received = self.get_last_received()
return received
# private
def _backup_options(self):
options = self._client().options
self._old_options = dict([[n, getattr(options, n)] for n in ('service', 'port')])
if self._global_timeout:
self._old_timeout = socket.getdefaulttimeout()
def _restore_options(self):
self._client().set_options(**self._old_options)
# restore the default socket timeout because suds does not
if self._global_timeout:
socket.setdefaulttimeout(self._old_timeout) | /robotframework-sudslibrary-aljcalandra-1.1.4.tar.gz/robotframework-sudslibrary-aljcalandra-1.1.4/src/SudsLibrary/proxy.py | 0.725357 | 0.259356 | proxy.py | pypi |
from robot.utils import ConnectionCache
from .version import VERSION
from .monkeypatches import *
from .factory import _FactoryKeywords
from .clientmanagement import _ClientManagementKeywords
from .options import _OptionsKeywords
from .proxy import _ProxyKeywords
from .soaplogging import _SoapLoggingKeywords
from .wsse import _WsseKeywords
from suds import null
from robot.api import logger
from robot.libraries.BuiltIn import BuiltIn
import traceback
import weakref
__version__ = VERSION
class SudsLibrary(_ClientManagementKeywords, _FactoryKeywords,
_OptionsKeywords, _ProxyKeywords, _SoapLoggingKeywords,
_WsseKeywords):
"""SudsLibrary is a library for functional testing of SOAP-based web
services.
SudsLibrary is based on [https://fedorahosted.org/suds/|Suds], a dynamic
SOAP 1.1 client.
== Case Sensitivy in SudsLibrary ==
Many things in the world of SOAP are case-sensitive. This includes method
names, WSDL object names and attributes, and service or port names.
== Creating and Configuring a Client ==
If necessary, use keywords `Bind Schema To Location` or `Add Doctor
Import`. These are rarely needed. Next, `Create Soap Client` to create a Suds
client. The output from this keyword contains useful information including
available types and methods. Next, use other keywords to configure the
client as necessary. `Set Location` is the most commonly needed keyword.
== Working with WSDL Objects ==
When Suds digests a WSDL, it creates dynamic types to represent the complex
types defined by a WSDL or its imports. These types are listed in the
output of `Create Soap Client`. WSDL objects are used as method arguments,
attribute values of other WSDL objects, and return values. `Create Wsdl
Object` is used to create instances of WSDL object types. To see what the
structure of a WSDL object is, you can do this:
| ${obj}= | Create Wsdl Object | someObject |
| ${obj as str}= | Convert To String | ${obj} |
| Log | ${obj as str} | |
The same technique can be used to analyze a response object. It may also
help to use a tool such as Eclipse or SoapUI to comprehend the structures.
=== Getting WSDL Object Attributes ===
Getting a WSDL object's attribute value may be done with `Get Wsdl Object
Attribute` or extended variable syntax*. Keywords from other libraries, such
as _BuiltIn_ and _Collections_ may be used to verify attribute values.
Examples:
| ${name}= | Get Wsdl Object Attribute | ${person} | name |
| Should Be Equal | ${person.name} | Bob | |
=== Setting WSDL Object Attributes ===
Setting a WSDL object's attribute value may be done with `Set Wsdl Object
Attribute` or extended variable syntax*. `Set Wsdl Object Attribute`
verifies the argument is an object of the correct type and the attribute
exists.
| Set Wsdl Object Attribute | ${person} | name | Tia |
| ${person.name}= | Set Variable | Tia | |
* In order to use extended variable syntax, the attribute name must consist
of only letters, numbers, and underscores.
== Example Test ==
The following simple example demonstrates verifying the return value using
keywords in this library and in the `BuiltIn` and `Collections` libraries.
You can run this test because it uses a public web service.
| Create Soap Client | http://www.webservicex.net/Statistics.asmx?WSDL | | |
| ${dbl array}= | Create Wsdl Object | ArrayOfDouble | |
| Append To List | ${dbl array.double} | 2.0 | |
| Append To List | ${dbl array.double} | 3.0 | |
| ${result}= | Call Soap Method | GetStatistics | ${dbl array} |
| Should Be Equal As Numbers | ${result.Average} | 2.5 | |
The definition of type ArrayOfDouble:
| <s:complexType name="ArrayOfDouble">
| <s:sequence>
| <s:element minOccurs="0" maxOccurs="unbounded" name="double" type="s:double"/>
| </s:sequence>
| </s:complexType>
Note that the attribute name on the ArrayOfDouble-type that is the list of
numbers is the singular "double". Outside of the WSDL, the structure can
also be seen in the output of Create Wsdl Object:
| ${dbl array} = (ArrayOfDouble){
| double[] = <empty>
| }
The relevant part of the WSDL defining the parameters to the method:
| <s:element name="GetStatistics">
| <s:complexType>
| <s:sequence>
| <s:element minOccurs="0" maxOccurs="1" name="X" type="tns:ArrayOfDouble"/>
| </s:sequence>
| </s:complexType>
| </s:element>
The definition of this method appears in the output of Create Soap Client
as:
| GetStatistics(ArrayOfDouble X, )
== Passing Explicit NULL Values ==
If you have a service that takes NULL values for required parameters or
you want to pass NULL for optional object attributes, you simply need to
set the value to ${SUDS_NULL}. You need to use ${SUDS_NULL} instead of
${None} because None is interpreted by the marshaller as not having a
value. The soap message will contain an empty (and xsi:nil="true" if node
defined as nillable). ${SUDS_NULL} is defined during library
initialization, so editors like RIDE will not show it as defined.
== Extending SudsLibrary ==
There may be times where Suds/SudsLibrary does not work using the library
keywords alone. Extending the library instead of writing a custom one will
allow you to use the existing keywords in SudsLibrary.
There are two methods useful for extending SudsLibrary:
| _client()
| _add_client(client, alias=None)
The first can be used to access the current instance of
suds.client.Client. The second can be used to put a client into the client
cache that you have instantiated.
Here is an example demonstrating how to implement a keyword that adds a
MessagePlugin to the current Suds client (based on the [https://fedorahosted.org/suds/wiki/Documentation#MessagePlugin|Suds documentation]):
| from robot.libraries.BuiltIn import BuiltIn
| from suds.plugin import MessagePlugin
|
| class _MyPlugin(MessagePlugin):
| def marshalled(self, context):
| body = context.envelope.getChild('Body')
| foo = body[0]
| foo.set('id', '12345')
| foo.set('version', '2.0')
|
| class SudsLibraryExtensions(object):
| def attach_my_plugin(self):
| client = BuiltIn().get_library_instance("SudsLibrary")._client()
| # prepend so SudsLibrary's plugin is left in place
| plugins = client.options.plugins
| if any(isinstance(x, _MyPlugin) for x in plugins):
| return
| plugins.insert(0, _MyPlugin())
| client.set_options(plugins=plugins)
"""
ROBOT_LIBRARY_VERSION = VERSION
ROBOT_LIBRARY_SCOPE = "GLOBAL"
ROBOT_LIBRARY_DOC_FORMAT = "ROBOT"
def __init__(self):
self._cache = ConnectionCache(no_current_msg='No current client')
self._imports = []
self._logger = logger
self._global_timeout = True
self._external_options = weakref.WeakKeyDictionary()
try: # exception if Robot is not running
BuiltIn().set_global_variable("${SUDS_NULL}", null())
except:
pass | /robotframework-sudslibrary-aljcalandra-1.1.4.tar.gz/robotframework-sudslibrary-aljcalandra-1.1.4/src/SudsLibrary/__init__.py | 0.7011 | 0.338063 | __init__.py | pypi |
from suds.sudsobject import Object as SudsObject
class _FactoryKeywords(object):
def set_wsdl_object_attribute(self, object, name, value):
"""Sets the attribute of a WSDL object.
Example:
| ${order search request}= | Create Wsdl Object | OrderSearchRequest | |
| Set Wsdl Object Attribute | ${order search request} | id | 4065 |
"""
self._assert_is_suds_object(object)
getattr(object, name)
setattr(object, name, value)
def get_wsdl_object_attribute(self, object, name):
"""Gets the attribute of a WSDL object.
Extendend variable syntax may be used to access attributes; however,
some WSDL objects may have attribute names that are illegal in Python,
necessitating this keyword.
Example:
| ${sale record}= | Call Soap Method | getLastSale | |
| ${price}= | Get Wsdl Object Attribute | ${sale record} | Price |
"""
self._assert_is_suds_object(object)
return getattr(object, name)
def create_wsdl_object(self, type, *name_value_pairs):
"""Creates a WSDL object of the specified `type`.
Requested `type` must be defined in the WSDL, in an import specified
by the WSDL, or with `Add Doctor Import`. `type` is case sensitive.
Example:
| ${contact}= | Create Wsdl Object | Contact | |
| Set Wsdl Object Attribute | ${contact} | Name | Kelly Newman |
Attribute values can be set by passing the attribute name and value in
pairs. This is equivalent to the two lines above:
| ${contact}= | Create Wsdl Object | Contact | Name | Kelly Newman |
"""
if len(name_value_pairs) % 2 != 0:
raise ValueError("Creating a WSDL object failed. There should be "
"an even number of name-value pairs.")
obj = self._client().factory.create(type)
for i in range(0, len(name_value_pairs), 2):
self.set_wsdl_object_attribute(obj, name_value_pairs[i], name_value_pairs[i + 1])
return obj
# private
def _assert_is_suds_object(self, object):
if not isinstance(object, SudsObject):
raise ValueError("Object must be a WSDL object (suds.sudsobject.Object).") | /robotframework-sudslibrary-aljcalandra-1.1.4.tar.gz/robotframework-sudslibrary-aljcalandra-1.1.4/src/SudsLibrary/factory.py | 0.73173 | 0.359898 | factory.py | pypi |
import os
import urllib
from suds.xsd.doctor import ImportDoctor
from suds.transport.http import HttpAuthenticated
from urllib.parse import urlparse
from suds.client import Client
from .utils import *
class _ClientManagementKeywords(object):
def create_soap_client(self, url_or_path, alias=None, autoblend=False, timeout='90 seconds', username=None,
password=None, auth_type='STANDARD'):
"""Loads a WSDL from the given URL/path and creates a Suds SOAP client.
Returns the index of this client instance which can be used later to
switch back to it. See `Switch Soap Client` for example.
Optional alias is an alias for the client instance and it can be used
for switching between clients (just as index can be used). See `Switch
Soap Client` for more details.
`username` and `password` are needed if the WSDL is on a server
requiring basic authentication. `auth_type` selects the authentication
scheme to use. See `Set Http Authentication` for more information.
Autoblend ensures that the schema(s) defined within the WSDL import
each other.
`timeout` sets the timeout for SOAP requests and must be given in
Robot Framework's time format (e.g. '1 minute', '2 min 3 s', '4.5').
Examples:
| Create Soap Client | http://localhost:8080/ws/Billing.asmx?WSDL |
| Create Soap Client | ${CURDIR}/../wsdls/tracking.wsdl |
"""
url = self._get_url(url_or_path)
autoblend = to_bool(autoblend)
kwargs = {'autoblend': autoblend}
if username:
password = password if password is not None else ""
transport = self._get_transport(auth_type, username, password)
kwargs['transport'] = transport
imports = self._imports
if imports:
self._log_imports()
kwargs['doctor'] = ImportDoctor(*imports)
client = Client(url, **kwargs)
index = self._add_client(client, alias)
self._set_soap_timeout(timeout)
return index
def switch_soap_client(self, index_or_alias):
"""Switches between clients using index or alias.
Index is returned from `Create Soap Client` and alias can be given to
it.
Example:
| Create Soap Client | http://localhost:8080/Billing?wsdl | Billing |
| Create Soap Client | http://localhost:8080/Marketing?wsdl | Marketing |
| Call Soap Method | sendSpam | |
| Switch Soap Client | Billing | # alias |
| Call Soap Method | sendInvoices | |
| Switch Soap Client | 2 | # index |
Above example expects that there was no other clients created when
creating the first one because it used index '1' when switching to it
later. If you aren't sure about that you can store the index into
a variable as below.
| ${id} = | Create Soap Client | ... |
| # Do something ... | | |
| Switch Soap Client | ${id} | |
"""
self._cache.switch(index_or_alias)
# PyAPI
def _client(self):
"""Returns the current suds.client.Client instance."""
return self._cache.current
def _add_client(self, client, alias=None):
"""Puts a client into the cache and returns the index.
The added client becomes the current one."""
client.set_options(faults=True)
self._logger.info('Using WSDL at %s%s' % (client.wsdl.url, client))
self._imports = []
index = self._cache.register(client, alias)
self.set_soap_logging(True)
return index
# private
def _log_imports(self):
if self._imports:
msg = "Using Imports for ImportDoctor:"
for imp in self._imports:
msg += "\n Namespace: '%s' Location: '%s'" % (imp.ns, imp.location)
for ns in imp.filter.tns:
msg += "\n Filtering for namespace '%s'" % ns
self._logger.info(msg)
def _get_url(self, url_or_path):
if not len(urlparse(url_or_path).scheme) > 1:
if not os.path.isfile(url_or_path):
raise IOError("File '%s' not found." % url_or_path)
url_or_path = 'file:' + urllib.pathname2url(url_or_path)
return url_or_path | /robotframework-sudslibrary-aljcalandra-1.1.4.tar.gz/robotframework-sudslibrary-aljcalandra-1.1.4/src/SudsLibrary/clientmanagement.py | 0.69233 | 0.272937 | clientmanagement.py | pypi |
from suds import WebFault
from suds.sax.text import Raw
from .utils import *
import socket
class RawSoapMessage(object):
def __init__(self, string):
if isinstance(string, unicode):
self.message = string.encode('UTF-8')
else:
self.message = str(string)
def __str__(self):
return self.message
def __unicode__(self):
return self.message.decode('UTF-8')
class _ProxyKeywords(object):
def call_soap_method(self, name, *args):
"""Calls the SOAP method with the given `name` and `args`.
Returns a Python object graph or SOAP envelope as a XML string
depending on the client options.
"""
return self._call(None, None, False, name, *args)
def specific_soap_call(self, service, port, name, *args):
"""Calls the SOAP method overriding client settings.
If there is only one service specified then `service` is ignored.
`service` and `port` can be either by name or index. If only `port` or
`service` need to be specified, leave the other one ${None} or
${EMPTY}. The index is the order of appearence in the WSDL starting
with 0.
Returns a Python object graph or SOAP envelope as a XML string
depending on the client options.
"""
return self._call(service, port, False, name, *args)
def call_soap_method_expecting_fault(self, name, *args):
"""Calls the SOAP method expecting the server to raise a fault.
Fails if the server does not raise a fault. Returns a Python object
graph or SOAP envelope as a XML string depending on the client
options.
A fault has the following attributes:\n
| faultcode | required |
| faultstring | required |
| faultactor | optional |
| detail | optional |
"""
return self._call(None, None, True, name, *args)
def create_raw_soap_message(self, message):
"""Returns an object that can used in lieu of SOAP method arguments.
`message` should be an entire SOAP message as a string. The object
returned can be used in lieu of *args for `Call Soap Method`, `Call
Soap Method Expecting Fault`, and `Specific Soap Call`.
Example:\n
| ${message}= | Create Raw Soap Message | <SOAP-ENV:Envelope ...</ns2:Body></SOAP-ENV:Envelope> |
| Call Soap Method | addContact | ${message} |
"""
return RawSoapMessage(message)
# private
def _call(self, service, port, expect_fault, name, *args):
client = self._client()
self._backup_options()
if service or (service == 0):
client.set_options(service=parse_index(service))
if port or (port == 0):
client.set_options(port=parse_index(port))
method = getattr(client.service, name)
retxml = client.options.retxml
received = None
try:
if len(args) == 1 and isinstance(args[0], RawSoapMessage):
received = method(__inject={'msg': args[0].message})
else:
received = method(*args)
# client does not raise fault when retxml=True, this will cause it to be raised
if retxml:
binding = method.method.binding.input
binding.get_reply(method.method, received)
if expect_fault:
raise AssertionError('The server did not raise a fault.')
except WebFault, e:
if not expect_fault:
raise e
if not retxml:
received = e.fault
finally:
self._restore_options()
return received
# private
def _backup_options(self):
options = self._client().options
self._old_options = dict([[n, getattr(options, n)] for n in ('service', 'port')])
if self._global_timeout:
self._old_timeout = socket.getdefaulttimeout()
def _restore_options(self):
self._client().set_options(**self._old_options)
# restore the default socket timeout because suds does not
if self._global_timeout:
socket.setdefaulttimeout(self._old_timeout) | /robotframework-sudslibrary-0.8.tar.gz/robotframework-sudslibrary-0.8/src/SudsLibrary/proxy.py | 0.716814 | 0.22658 | proxy.py | pypi |
from robot.utils import ConnectionCache
from .version import VERSION
from .monkeypatches import *
from .factory import _FactoryKeywords
from .clientmanagement import _ClientManagementKeywords
from .options import _OptionsKeywords
from .proxy import _ProxyKeywords
from .soaplogging import _SoapLoggingKeywords
from .wsse import _WsseKeywords
from suds import null
from robot.api import logger
from robot.libraries.BuiltIn import BuiltIn
import urllib2
import traceback
__version__ = VERSION
class SudsLibrary(_ClientManagementKeywords, _FactoryKeywords,
_OptionsKeywords, _ProxyKeywords, _SoapLoggingKeywords,
_WsseKeywords):
"""SudsLibrary is a library for functional testing of SOAP-based web
services.
SudsLibrary is based on [https://fedorahosted.org/suds/|Suds], a dynamic
SOAP 1.1 client.
== Case Sensitivy in SudsLibrary ==
Many things in the world of SOAP are case-sensitive. This includes method
names, WSDL object names and attributes, and service or port names.
== Creating and Configuring a Client ==
If necessary, use keywords `Bind Schema To Location` or `Add Doctor
Import`. These are rarely needed. Next, `Create Soap Client` to create a Suds
client. The output from this keyword contains useful information including
available types and methods. Next, use other keywords to configure the
client as necessary. `Set Location` is the most commonly needed keyword.
== Working with WSDL Objects ==
When Suds digests a WSDL, it creates dynamic types to represent the complex
types defined by a WSDL or its imports. These types are listed in the
output of `Create Soap Client`. WSDL objects are used as method arguments,
attribute values of other WSDL objects, and return values. `Create Wsdl
Object` is used to create instances of WSDL object types. To see what the
structure of a WSDL object is, you can do this:
| ${obj}= | Create Wsdl Object | someObject |
| ${obj as str}= | Convert To String | ${obj} |
| Log | ${obj as str} | |
The same technique can be used to analyze a response object. It may also
help to use a tool such as Eclipse or SoapUI to comprehend the structures.
=== Getting WSDL Object Attributes ===
Getting a WSDL object's attribute value may be done with `Get Wsdl Object
Attribute` or extended variable syntax*. Keywords from other libraries, such
as _BuiltIn_ and _Collections_ may be used to verify attribute values.
Examples:
| ${name}= | Get Wsdl Object Attribute | ${person} | name |
| Should Be Equal | ${person.name} | Bob | |
=== Setting WSDL Object Attributes ===
Setting a WSDL object's attribute value may be done with `Set Wsdl Object
Attribute` or extended variable syntax*. `Set Wsdl Object Attribute`
verifies the argument is an object of the correct type and the attribute
exists.
| Set Wsdl Object Attribute | ${person} | name | Tia |
| ${person.name}= | Set Variable | Tia | |
* In order to use extended variable syntax, the attribute name must consist
of only letters, numbers, and underscores.
== Example Test ==
The following simple example demonstrates verifying the return value using
keywords in this library and in the `BuiltIn` and `Collections` libraries.
You can run this test because it uses a public web service.
| Create Soap Client | http://www.webservicex.net/Statistics.asmx?WSDL | | |
| ${dbl array}= | Create Wsdl Object | ArrayOfDouble | |
| Append To List | ${dbl array.double} | 2.0 | |
| Append To List | ${dbl array.double} | 3.0 | |
| ${result}= | Call Soap Method | GetStatistics | ${dbl array} |
| Should Be Equal As Numbers | ${result.Average} | 2.5 | |
The definition of type ArrayOfDouble:
| <s:complexType name="ArrayOfDouble">
| <s:sequence>
| <s:element minOccurs="0" maxOccurs="unbounded" name="double" type="s:double"/>
| </s:sequence>
| </s:complexType>
Note that the attribute name on the ArrayOfDouble-type that is the list of
numbers is the singular "double". Outside of the WSDL, the structure can
also be seen in the output of Create Wsdl Object:
| ${dbl array} = (ArrayOfDouble){
| double[] = <empty>
| }
The relevant part of the WSDL defining the parameters to the method:
| <s:element name="GetStatistics">
| <s:complexType>
| <s:sequence>
| <s:element minOccurs="0" maxOccurs="1" name="X" type="tns:ArrayOfDouble"/>
| </s:sequence>
| </s:complexType>
| </s:element>
The definition of this method appears in the output of Create Soap Client
as:
| GetStatistics(ArrayOfDouble X, )
== Passing Explicit NULL Values ==
If you have a service that takes NULL values for required parameters or
you want to pass NULL for optional object attributes, you simply need to
set the value to ${SUDS_NULL}. You need to use ${SUDS_NULL} instead of
${None} because None is interpreted by the marshaller as not having a
value. The soap message will contain an empty (and xsi:nil="true" if node
defined as nillable). ${SUDS_NULL} is defined during library
initialization, so editors like RIDE will not show it as defined.
== Extending SudsLibrary ==
There may be times where Suds/SudsLibrary does not work using the library
keywords alone. Extending the library instead of writing a custom one will
allow you to use the existing keywords in SudsLibrary.
There are two methods useful for extending SudsLibrary:
| _client()
| _add_client(client, alias=None)
The first can be used to access the current instance of
suds.client.Client. The second can be used to put a client into the client
cache that you have instantiated.
Here is an example demonstrating how to implement a keyword that adds a
MessagePlugin to the current Suds client (based on the [https://fedorahosted.org/suds/wiki/Documentation#MessagePlugin|Suds documentation]):
| from robot.libraries.BuiltIn import BuiltIn
| from suds.plugin import MessagePlugin
|
| class _MyPlugin(MessagePlugin):
| def marshalled(self, context):
| body = context.envelope.getChild('Body')
| foo = body[0]
| foo.set('id', '12345')
| foo.set('version', '2.0')
|
| class SudsLibraryExtensions(object):
| def attach_my_plugin(self):
| client = BuiltIn().get_library_instance("SudsLibrary")._client()
| # prepend so SudsLibrary's plugin is left in place
| plugins = client.options.plugins
| if any(isinstance(x, _MyPlugin) for x in plugins):
| return
| plugins.insert(0, _MyPlugin())
| client.set_options(plugins=plugins)
"""
ROBOT_LIBRARY_VERSION = VERSION
ROBOT_LIBRARY_SCOPE = "GLOBAL"
ROBOT_LIBRARY_DOC_FORMAT = "ROBOT"
def __init__(self):
self._cache = ConnectionCache(no_current_msg='No current client')
self._imports = []
self._logger = logger
self._global_timeout = True
try:
part = urllib2.__version__.split('.', 1)
n = float('.'.join(part))
if n >= 2.6:
self._global_timeout = False
except Exception, e:
raise e
self._logger.warn("Failed to get urllib2's version")
self._logger.debug(traceback.format_exc())
try: # exception if Robot is not running
BuiltIn().set_global_variable("${SUDS_NULL}", null())
except:
pass | /robotframework-sudslibrary-0.8.tar.gz/robotframework-sudslibrary-0.8/src/SudsLibrary/__init__.py | 0.693992 | 0.412234 | __init__.py | pypi |
from suds.xsd.doctor import Import
from suds.xsd.sxbasic import Import as BasicImport
from suds import ServiceNotFound
from suds.transport.https import HttpAuthenticated
from suds.transport.https import WindowsHttpAuthenticated
from suds.transport.http import HttpAuthenticated as AlwaysSendTransport
from .utils import *
import robot
class _OptionsKeywords(object):
def set_service(self, service):
"""Sets the `service` to use in future requests.
`service` should be the name or the index of the service as it appears in the WSDL.
"""
service = parse_index(service)
self._client().set_options(service=service)
def set_port(self, port):
"""Sets the `port` to use in future requests.
`port` should be the name or the index of the port as it appears in the WSDL.
"""
port = parse_index(port)
self._client().set_options(port=port)
def set_proxies(self, *protocol_url_pairs):
"""Sets the http proxy settings.
| Set Proxy | http | localhost:5000 | https | 10.0.4.23:80 |
"""
if len(protocol_url_pairs) % 2 != 0:
raise ValueError("There should be an even number of protocol-url pairs.")
proxy = {}
for i in range(0, len(protocol_url_pairs), 2):
proxy[protocol_url_pairs[i]] = protocol_url_pairs[i + 1]
self._client().set_options(proxy=proxy)
def set_headers(self, *dict_or_key_value_pairs):
"""Sets _extra_ http headers to send in future requests.
For HTTP headers; not to be confused with the SOAP header element.
Example:
| Set Headers | X-Requested-With | autogen | # using key-value pairs |
or using a dictionary:
| ${headers}= | Create Dictionary | X-Requested-With | autogen |
| Set Headers | ${headers} | | # using a dictionary |
"""
length = len(dict_or_key_value_pairs)
if length == 1:
headers = dict_or_key_value_pairs[0]
elif length % 2 == 0:
headers = {}
for i in range(0, len(dict_or_key_value_pairs), 2):
headers[dict_or_key_value_pairs[i]] = dict_or_key_value_pairs[i + 1]
else:
raise ValueError("There should be an even number of name-value pairs.")
self._client().set_options(headers=headers)
def set_soap_headers(self, *headers):
"""Sets SOAP headers to send in future requests.
Example:
| ${auth header}= | Create Wsdl Object | AuthHeader | |
| Set Wsdl Object Attribute | ${auth header} | UserID | gcarlson |
| Set Wsdl Object Attribute | ${auth header} | Password | heyOh |
| Set Soap Headers | ${auth header} | # using WSDL object | |
or using a dictionary:
| ${auth dict}= | Create Dictionary | UserName | gcarlson | Password | heyOh |
| Set Soap Headers | ${auth dict} | # using a dictionary | | | |
For setting WS-Security elements in the SOAP header, see
`Apply Username Token` and `Apply Security Timestamp`.
"""
self._client().set_options(soapheaders=headers)
def set_return_xml(self, return_xml):
"""Sets whether to return XML in future requests.
The default value is _False_. If `return_xml` is _True_, then return
the SOAP envelope as a string in future requests. Otherwise, return a
Python object graph. `Get Last Received` returns the XML received
regardless of this setting.
See also `Call Soap Method`, `Call Soap Method Expecting Fault`, and
`Specific Soap Call`.
Example:
| ${old value}= | Set Return Xml | True |
"""
old_value = self._client().options.retxml
self._set_boolean_option('retxml', return_xml)
return old_value
def set_http_authentication(self, username, password, type='STANDARD'):
"""Sets http authentication type and credentials.
Available types are STANDARD, ALWAYS_SEND, and NTLM. Type STANDARD
will only send credentials to the server upon request (HTTP/1.0 401
Authorization Required) by the server only. Type ALWAYS_SEND will
cause an Authorization header to be sent in every request. Type NTLM
requires the python-ntlm package to be installed, which is not
packaged with Suds or SudsLibrary.
"""
classes = {
'STANDARD': HttpAuthenticated,
'ALWAYS_SEND': AlwaysSendTransport,
'NTLM': WindowsHttpAuthenticated
}
try:
_class = classes[type.upper()]
except KeyError:
raise ValueError("'%s' is not a supported type." % type)
transport = _class(username=username, password=password)
self._client().set_options(transport=transport)
def set_location(self, url, service=None, names=None):
"""Sets location to use in future requests.
This is for when the location(s) specified in the WSDL are not correct.
`service` is the name or index of the service to change and ignored
unless there is more than one service. `names` should be either a
comma-delimited list of methods names or an iterable (e.g. a list). If
no methods names are given, then sets the location for all methods of
the service(s).
Example:
| Set Location | http://localhost:8080/myWS |
"""
wsdl = self._client().wsdl
service_count = len(wsdl.services)
if (service_count == 1):
service = 0
elif not service is None:
service = parse_index(service)
if isinstance(names, basestring):
names = names.split(",")
if service is None:
for svc in wsdl.services:
svc.setlocation(url, names)
elif isinstance(service, int):
wsdl.services[service].setlocation(url, names)
else:
for svc in wsdl.services:
if svc.name == service:
svc.setlocation(url, names)
return
raise ServiceNotFound(service)
def add_doctor_import(self, import_namespace, location=None, filters=None):
"""Adds an import be used in the next client.
Doctor imports are applied to the _next_ client created with
`Create Soap Client`. Doctor imports are necessary when references are
made in one schema to named objects defined in another schema without
importing it. Use `location` to specify the location to download the
schema file. `filters` should be either a comma-delimited list of
namespaces or an iterable (e.g. a list).
The following example would import the SOAP encoding schema into only
the namespace http://some/namespace/A if it is not already imported:
| Add Doctor Import | http://schemas.xmlsoap.org/soap/encoding/ | filters=http://some/namespace/A |
"""
if isinstance(filters, basestring):
filters = filters.split(",")
imp = Import(import_namespace, location)
if not filters is None:
for filter in filters:
imp.filter.add(filter)
self._imports.append(imp)
def bind_schema_to_location(self, namespace, location):
"""Sets the `location` for the given `namespace` of a schema.
This is for when an import statement specifies a schema but not its
location. If the schemaLocation is present and incorrect, this will
not override that. Bound schemas are shared amongst all instances of
SudsLibrary. Schemas should be bound if necessary before `Add Doctor
Import` or `Create Soap Client` where appropriate.
"""
BasicImport.bind(namespace, location)
def set_soap_timeout(self, timeout):
"""Sets the timeout for SOAP requests.
`timeout` must be given in Robot Framework's time format (e.g.
'1 minute', '2 min 3 s', '4.5'). The default timeout is 90 seconds.
Example:
| Set Soap Timeout | 3 min |
"""
self._set_soap_timeout(timeout)
timestr = format_robot_time(timeout)
self._logger.info("SOAP timeout set to %s" % timestr)
# private
def _set_boolean_option(self, name, value):
value = to_bool(value)
self._client().set_options(**{name: value})
def _set_soap_timeout(self, timeout):
timeout_in_secs = robot.utils.timestr_to_secs(timeout)
self._client().set_options(timeout=timeout_in_secs) | /robotframework-sudslibrary-0.8.tar.gz/robotframework-sudslibrary-0.8/src/SudsLibrary/options.py | 0.710226 | 0.328637 | options.py | pypi |
from suds.sudsobject import Object as SudsObject
class _FactoryKeywords(object):
def set_wsdl_object_attribute(self, object, name, value):
"""Sets the attribute of a WSDL object.
Example:
| ${order search request}= | Create Wsdl Object | OrderSearchRequest | |
| Set Wsdl Object Attribute | ${order search request} | id | 4065 |
"""
self._assert_is_suds_object(object)
getattr(object, name)
setattr(object, name, value)
def get_wsdl_object_attribute(self, object, name):
"""Gets the attribute of a WSDL object.
Extendend variable syntax may be used to access attributes; however,
some WSDL objects may have attribute names that are illegal in Python,
necessitating this keyword.
Example:
| ${sale record}= | Call Soap Method | getLastSale | |
| ${price}= | Get Wsdl Object Attribute | ${sale record} | Price |
"""
self._assert_is_suds_object(object)
return getattr(object, name)
def create_wsdl_object(self, type, *name_value_pairs):
"""Creates a WSDL object of the specified `type`.
Requested `type` must be defined in the WSDL, in an import specified
by the WSDL, or with `Add Doctor Import`. `type` is case sensitive.
Example:
| ${contact}= | Create Wsdl Object | Contact | |
| Set Wsdl Object Attribute | ${contact} | Name | Kelly Newman |
Attribute values can be set by passing the attribute name and value in
pairs. This is equivalent to the two lines above:
| ${contact}= | Create Wsdl Object | Contact | Name | Kelly Newman |
"""
if len(name_value_pairs) % 2 != 0:
raise ValueError("Creating a WSDL object failed. There should be "
"an even number of name-value pairs.")
obj = self._client().factory.create(type)
for i in range(0, len(name_value_pairs), 2):
self.set_wsdl_object_attribute(obj, name_value_pairs[i], name_value_pairs[i + 1])
return obj
# private
def _assert_is_suds_object(self, object):
if not isinstance(object, SudsObject):
raise ValueError("Object must be a WSDL object (suds.sudsobject.Object).") | /robotframework-sudslibrary-0.8.tar.gz/robotframework-sudslibrary-0.8/src/SudsLibrary/factory.py | 0.73173 | 0.359898 | factory.py | pypi |
import os
import urllib
from suds.xsd.doctor import ImportDoctor
from suds.transport.http import HttpAuthenticated
from urlparse import urlparse
from suds.client import Client
from .utils import *
class _ClientManagementKeywords(object):
def create_soap_client(self, url_or_path, alias=None, autoblend=False, timeout='90 seconds'):
"""Loads a WSDL from the given URL/path and creates a Suds SOAP client.
Returns the index of this client instance which can be used later to
switch back to it. See `Switch Soap Client` for example.
Optional alias is an alias for the client instance and it can be used
for switching between clients (just as index can be used). See `Switch
Soap Client` for more details.
Autoblend ensures that the schema(s) defined within the WSDL import
each other.
`timeout` sets the timeout for SOAP requests and must be given in
Robot Framework's time format (e.g. '1 minute', '2 min 3 s', '4.5').
Examples:
| Create Soap Client | http://localhost:8080/ws/Billing.asmx?WSDL |
| Create Soap Client | ${CURDIR}/../wsdls/tracking.wsdl |
"""
url = self._get_url(url_or_path)
autoblend = to_bool(autoblend)
kwargs = {'autoblend': autoblend}
imports = self._imports
if imports:
self._log_imports()
kwargs['doctor'] = ImportDoctor(*imports)
client = Client(url, **kwargs)
index = self._add_client(client, alias)
self._set_soap_timeout(timeout)
return index
def switch_soap_client(self, index_or_alias):
"""Switches between clients using index or alias.
Index is returned from `Create Soap Client` and alias can be given to
it.
Example:
| Create Soap Client | http://localhost:8080/Billing?wsdl | Billing |
| Create Soap Client | http://localhost:8080/Marketing?wsdl | Marketing |
| Call Soap Method | sendSpam | |
| Switch Soap Client | Billing | # alias |
| Call Soap Method | sendInvoices | |
| Switch Soap Client | 2 | # index |
Above example expects that there was no other clients created when
creating the first one because it used index '1' when switching to it
later. If you aren't sure about that you can store the index into
a variable as below.
| ${id} = | Create Soap Client | ... |
| # Do something ... | | |
| Switch Soap Client | ${id} | |
"""
self._cache.switch(index_or_alias)
# PyAPI
def _client(self):
"""Returns the current suds.client.Client instance."""
return self._cache.current
def _add_client(self, client, alias=None):
"""Puts a client into the cache and returns the index.
The added client becomes the current one."""
client.set_options(faults=True)
self._logger.info('Using WSDL at %s%s' % (client.wsdl.url, client))
self._imports = []
index = self._cache.register(client, alias)
self.set_soap_logging(True)
return index
# private
def _log_imports(self):
if self._imports:
msg = "Using Imports for ImportDoctor:"
for imp in self._imports:
msg += "\n Namespace: '%s' Location: '%s'" % (imp.ns, imp.location)
for ns in imp.filter.tns:
msg += "\n Filtering for namespace '%s'" % ns
self._logger.info(msg)
def _get_url(self, url_or_path):
if not len(urlparse(url_or_path).scheme) > 1:
if not os.path.isfile(url_or_path):
raise IOError("File '%s' not found." % url_or_path)
url_or_path = 'file:' + urllib.pathname2url(url_or_path)
return url_or_path | /robotframework-sudslibrary-0.8.tar.gz/robotframework-sudslibrary-0.8/src/SudsLibrary/clientmanagement.py | 0.690455 | 0.276342 | clientmanagement.py | pypi |
from suds import WebFault
from suds.sax.text import Raw
from .utils import *
import socket
class RawSoapMessage(object):
def __init__(self, string):
if isinstance(string, unicode):
self.message = string.encode('UTF-8')
else:
self.message = str(string)
def __str__(self):
return self.message
def __unicode__(self):
return self.message.decode('UTF-8')
class _ProxyKeywords(object):
def call_soap_method(self, name, *args):
"""Calls the SOAP method with the given `name` and `args`.
Returns a Python object graph or SOAP envelope as a XML string
depending on the client options.
"""
return self._call(None, None, False, name, *args)
def specific_soap_call(self, service, port, name, *args):
"""Calls the SOAP method overriding client settings.
If there is only one service specified then `service` is ignored.
`service` and `port` can be either by name or index. If only `port` or
`service` need to be specified, leave the other one ${None} or
${EMPTY}. The index is the order of appearence in the WSDL starting
with 0.
Returns a Python object graph or SOAP envelope as a XML string
depending on the client options.
"""
return self._call(service, port, False, name, *args)
def call_soap_method_expecting_fault(self, name, *args):
"""Calls the SOAP method expecting the server to raise a fault.
Fails if the server does not raise a fault. Returns a Python object
graph or SOAP envelope as a XML string depending on the client
options.
A fault has the following attributes:\n
| faultcode | required |
| faultstring | required |
| faultactor | optional |
| detail | optional |
"""
return self._call(None, None, True, name, *args)
def create_raw_soap_message(self, message):
"""Returns an object that can used in lieu of SOAP method arguments.
`message` should be an entire SOAP message as a string. The object
returned can be used in lieu of *args for `Call Soap Method`, `Call
Soap Method Expecting Fault`, and `Specific Soap Call`.
Example:\n
| ${message}= | Create Raw Soap Message | <SOAP-ENV:Envelope ...</ns2:Body></SOAP-ENV:Envelope> |
| Call Soap Method | addContact | ${message} |
"""
return RawSoapMessage(message)
# private
def _call(self, service, port, expect_fault, name, *args):
client = self._client()
self._backup_options()
if service or (service == 0):
client.set_options(service=parse_index(service))
if port or (port == 0):
client.set_options(port=parse_index(port))
method = getattr(client.service, name)
retxml = client.options.retxml
received = None
try:
if len(args) == 1 and isinstance(args[0], RawSoapMessage):
received = method(__inject={'msg': args[0].message})
else:
received = method(*args)
# client does not raise fault when retxml=True, this will cause it to be raised
if retxml:
binding = method.method.binding.input
binding.get_reply(method.method, received)
if expect_fault:
raise AssertionError('The server did not raise a fault.')
except WebFault as e:
if not expect_fault:
raise e
if not retxml:
received = e.fault
finally:
self._restore_options()
return received
# private
def _backup_options(self):
options = self._client().options
self._old_options = dict([[n, getattr(options, n)] for n in ('service', 'port')])
if self._global_timeout:
self._old_timeout = socket.getdefaulttimeout()
def _restore_options(self):
self._client().set_options(**self._old_options)
# restore the default socket timeout because suds does not
if self._global_timeout:
socket.setdefaulttimeout(self._old_timeout) | /robotframework_sudslibrary3-1.0-py3-none-any.whl/SudsLibrary/proxy.py | 0.716715 | 0.226655 | proxy.py | pypi |
from .utils import *
from suds.wsse import Security
from suds.wsse import Token
from suds.wsse import Timestamp
from suds.wsse import UsernameToken
from suds.sax.element import Element
from random import random
from hashlib import sha1
import base64
import re
from datetime import timedelta
import robot
from logging import getLogger
from suds import *
from suds.xsd import *
import time
import datetime as dt
TEXT_TYPE = 'http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText'
DIGEST_TYPE = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordDigest"
BASE64_ENC_TYPE = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary"
WSSENS = \
('wsse',
'http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd')
WSUNS = \
('wsu',
'http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd')
class Date:
"""
An XML date object.
Supported formats:
- YYYY-MM-DD
- YYYY-MM-DD(z|Z)
- YYYY-MM-DD+06:00
- YYYY-MM-DD-06:00
@ivar date: The object value.
@type date: B{datetime}.I{date}
"""
def __init__(self, date):
"""
@param date: The value of the object.
@type date: (date|str)
@raise ValueError: When I{date} is invalid.
"""
if isinstance(date, dt.date):
self.date = date
return
if isinstance(date, str):
self.date = self.__parse(date)
return
raise ValueError(type(date))
def year(self):
"""
Get the I{year} component.
@return: The year.
@rtype: int
"""
return self.date.year
def month(self):
"""
Get the I{month} component.
@return: The month.
@rtype: int
"""
return self.date.month
def day(self):
"""
Get the I{day} component.
@return: The day.
@rtype: int
"""
return self.date.day
def __parse(self, s):
"""
Parse the string date.
Supported formats:
- YYYY-MM-DD
- YYYY-MM-DD(z|Z)
- YYYY-MM-DD+06:00
- YYYY-MM-DD-06:00
Although, the TZ is ignored because it's meaningless
without the time, right?
@param s: A date string.
@type s: str
@return: A date object.
@rtype: I{date}
"""
try:
year, month, day = s[:10].split('-', 2)
year = int(year)
month = int(month)
day = int(day)
return dt.date(year, month, day)
except:
log.debug(s, exec_info=True)
raise ValueError('Invalid format "%s"' % s)
def __str__(self):
return unicode(self)
def __unicode__(self):
return self.date.isoformat()
class Time:
"""
An XML time object.
Supported formats:
- HH:MI:SS
- HH:MI:SS(z|Z)
- HH:MI:SS.ms
- HH:MI:SS.ms(z|Z)
- HH:MI:SS(+|-)06:00
- HH:MI:SS.ms(+|-)06:00
@ivar tz: The timezone
@type tz: L{Timezone}
@ivar date: The object value.
@type date: B{datetime}.I{time}
"""
def __init__(self, time, adjusted=True):
"""
@param time: The value of the object.
@type time: (time|str)
@param adjusted: Adjust for I{local} Timezone.
@type adjusted: boolean
@raise ValueError: When I{time} is invalid.
"""
self.tz = Timezone()
if isinstance(time, dt.time):
self.time = time
return
if isinstance(time, str):
self.time = self.__parse(time)
if adjusted:
self.__adjust()
return
raise ValueError(type(time))
def hour(self):
"""
Get the I{hour} component.
@return: The hour.
@rtype: int
"""
return self.time.hour
def minute(self):
"""
Get the I{minute} component.
@return: The minute.
@rtype: int
"""
return self.time.minute
def second(self):
"""
Get the I{seconds} component.
@return: The seconds.
@rtype: int
"""
return self.time.second
def microsecond(self):
"""
Get the I{microsecond} component.
@return: The microsecond.
@rtype: int
"""
return self.time.microsecond
def __adjust(self):
"""
Adjust for TZ offset.
"""
if hasattr(self, 'offset'):
today = dt.date.today()
delta = self.tz.adjustment(self.offset)
d = dt.datetime.combine(today, self.time)
d = ( d + delta )
self.time = d.time()
def __parse(self, s):
"""
Parse the string date.
Patterns:
- HH:MI:SS
- HH:MI:SS(z|Z)
- HH:MI:SS.ms
- HH:MI:SS.ms(z|Z)
- HH:MI:SS(+|-)06:00
- HH:MI:SS.ms(+|-)06:00
@param s: A time string.
@type s: str
@return: A time object.
@rtype: B{datetime}.I{time}
"""
try:
offset = None
part = Timezone.split(s)
hour, minute, second = part[0].split(':', 2)
hour = int(hour)
minute = int(minute)
second, ms = self.__second(second)
if len(part) == 2:
self.offset = self.__offset(part[1])
if ms is None:
return dt.time(hour, minute, second)
else:
return dt.time(hour, minute, second, ms)
except:
log.debug(s, exec_info=True)
raise ValueError('Invalid format "%s"' % s)
def __second(self, s):
"""
Parse the seconds and microseconds.
The microseconds are truncated to 999999 due to a restriction in
the python datetime.datetime object.
@param s: A string representation of the seconds.
@type s: str
@return: Tuple of (sec,ms)
@rtype: tuple.
"""
part = s.split('.')
if len(part) > 1:
return (int(part[0]), int(part[1][:6]))
else:
return (int(part[0]), None)
def __offset(self, s):
"""
Parse the TZ offset.
@param s: A string representation of the TZ offset.
@type s: str
@return: The signed offset in hours.
@rtype: str
"""
if len(s) == len('-00:00'):
return int(s[:3])
if len(s) == 0:
return self.tz.local
if len(s) == 1:
return 0
raise Exception()
def __str__(self):
return unicode(self)
def __unicode__(self):
time = self.time.isoformat()
if self.tz.local:
return '%s%+.2d:00' % (time, self.tz.local)
else:
return '%sZ' % time
class DateTime(Date,Time):
"""
An XML time object.
Supported formats:
- YYYY-MM-DDB{T}HH:MI:SS
- YYYY-MM-DDB{T}HH:MI:SS(z|Z)
- YYYY-MM-DDB{T}HH:MI:SS.ms
- YYYY-MM-DDB{T}HH:MI:SS.ms(z|Z)
- YYYY-MM-DDB{T}HH:MI:SS(+|-)06:00
- YYYY-MM-DDB{T}HH:MI:SS.ms(+|-)06:00
@ivar datetime: The object value.
@type datetime: B{datetime}.I{datedate}
"""
def __init__(self, date):
"""
@param date: The value of the object.
@type date: (datetime|str)
@raise ValueError: When I{tm} is invalid.
"""
if isinstance(date, dt.datetime):
Date.__init__(self, date.date())
Time.__init__(self, date.time())
self.datetime = \
dt.datetime.combine(self.date, self.time)
return
if isinstance(date, str):
part = date.split('T')
Date.__init__(self, part[0])
Time.__init__(self, part[1], 0)
self.datetime = \
dt.datetime.combine(self.date, self.time)
self.__adjust()
return
raise ValueError(type(date))
def __adjust(self):
"""
Adjust for TZ offset.
"""
if not hasattr(self, 'offset'):
return
delta = self.tz.adjustment(self.offset)
try:
d = ( self.datetime + delta )
self.datetime = d
self.date = d.date()
self.time = d.time()
except OverflowError:
log.warn('"%s" caused overflow, not-adjusted', self.datetime)
def __str__(self):
return unicode(self)
def __unicode__(self):
s = []
s.append(Date.__unicode__(self))
s.append(Time.__unicode__(self))
return 'T'.join(s)
class UTC(DateTime):
"""
Represents current UTC time.
"""
def __init__(self, date=None):
if date is None:
date = dt.datetime.utcnow()
DateTime.__init__(self, date)
self.tz.local = 0
class AutoTimestamp(Timestamp):
def __init__(self, validity=None):
Token.__init__(self)
self.validity = validity
def xml(self):
self.created = Token.utc()
root = Element("Timestamp", ns=WSUNS)
created = Element('Created', ns=WSUNS)
created.setText(self._trim_to_ms(str(UTC(self.created))))
root.append(created)
if self.validity is not None:
self.expires = self.created + timedelta(seconds=self.validity)
expires = Element('Expires', ns=WSUNS)
expires.setText(self._trim_to_ms(str(UTC(self.expires))))
root.append(expires)
return root
def _trim_to_ms(self, datetime):
return re.sub(r'(?<=\.\d{3})\d+', '', datetime)
class AutoUsernameToken(UsernameToken):
def __init__(self, username=None, password=None, setcreated=False,
setnonce=False, digest=False):
UsernameToken.__init__(self, username, password)
self.autosetcreated = setcreated
self.autosetnonce = setnonce
self.digest = digest
def setnonce(self, text=None):
if text is None:
hash = sha1()
hash.update(str(random()))
hash.update(str(UTC()))
self.nonce = hash.hexdigest()
else:
self.nonce = text
def xml(self):
if self.digest and self.password is None:
raise RuntimeError("Cannot generate password digest without the password.")
if self.autosetnonce:
self.setnonce()
if self.autosetcreated:
self.setcreated()
root = Element('UsernameToken', ns=WSSENS)
u = Element('Username', ns=WSSENS)
u.setText(self.username)
root.append(u)
if self.password is not None:
password = self.password
if self.digest:
password = self.get_digest()
p = Element('Password', ns=WSSENS)
p.setText(password)
p.set('Type', DIGEST_TYPE if self.digest else TEXT_TYPE)
root.append(p)
if self.nonce is not None:
n = Element('Nonce', ns=WSSENS)
n.setText(base64.encodestring(self.nonce)[:-1])
n.set('EncodingType', BASE64_ENC_TYPE)
root.append(n)
if self.created:
c = Element('Created', ns=WSUNS)
c.setText(str(UTC(self.created)))
root.append(c)
return root
def get_digest(self):
nonce = str(self.nonce) if self.nonce else ""
created = str(UTC(self.created)) if self.created else ""
password = str(self.password)
message = nonce + created + password
return base64.encodestring(sha1(message).digest())[:-1]
class _WsseKeywords(object):
def apply_security_timestamp(self, duration=None):
"""Applies a Timestamp element to future requests valid for the given `duration`.
The SOAP header will contain a Timestamp element as specified in the
WS-Security extension. The Created and Expires values are updated
every time a request is made. If `duration` is None, the Expires
element will be absent.
`duration` must be given in Robot Framework's time format (e.g.
'1 minute', '2 min 3 s', '4.5').
Example:
| Apply Security Timestamp | 5 min |
"""
if duration is not None:
duration = robot.utils.timestr_to_secs(duration)
wsse = self._get_wsse()
wsse.tokens = [x for x in wsse.tokens if not isinstance(x, Timestamp)]
wsse.tokens.insert(0, AutoTimestamp(duration))
self._client().set_options(wsse=wsse)
def apply_username_token(self, username, password=None, setcreated=False,
setnonce=False, digest=False):
"""Applies a UsernameToken element to future requests.
The SOAP header will contain a UsernameToken element as specified in
Username Token Profile 1.1 that complies with Basic Security Profile
1.1. The Created and Nonce values, if enabled, are generated
automatically and updated every time a request is made. If `digest` is
True, a digest derived from the password is sent.
Example:
| Apply Username Token | ying | myPa$$word |
"""
setcreated = to_bool(setcreated)
setnonce = to_bool(setnonce)
digest = to_bool(digest)
if digest and password is None:
raise RuntimeError("Password is required when digest is True.")
token = AutoUsernameToken(username, password, setcreated, setnonce,
digest)
wsse = self._get_wsse()
wsse.tokens = [x for x in wsse.tokens if not isinstance(x, UsernameToken)]
wsse.tokens.append(token)
self._client().set_options(wsse=wsse)
# private
def _get_wsse(self, create=True):
wsse = self._client().options.wsse
if wsse is None and create:
wsse = Security()
wsse.mustUnderstand = '1'
return wsse | /robotframework_sudslibrary3-1.0-py3-none-any.whl/SudsLibrary/wsse.py | 0.58747 | 0.176388 | wsse.py | pypi |
from robot.utils import ConnectionCache
from .version import VERSION
from .monkeypatches import *
from .factory import _FactoryKeywords
from .clientmanagement import _ClientManagementKeywords
from .options import _OptionsKeywords
from .proxy import _ProxyKeywords
from .soaplogging import _SoapLoggingKeywords
from .wsse import _WsseKeywords
from suds import null
from robot.api import logger
from robot.libraries.BuiltIn import BuiltIn
import traceback
try:
import urllib2
except ImportError:
import urllib
__version__ = VERSION
class SudsLibrary(_ClientManagementKeywords, _FactoryKeywords,
_OptionsKeywords, _ProxyKeywords, _SoapLoggingKeywords,
_WsseKeywords):
"""SudsLibrary is a library for functional testing of SOAP-based web
services.
SudsLibrary is based on [https://fedorahosted.org/suds/|Suds], a dynamic
SOAP 1.1 client.
== Case Sensitivy in SudsLibrary ==
Many things in the world of SOAP are case-sensitive. This includes method
names, WSDL object names and attributes, and service or port names.
== Creating and Configuring a Client ==
If necessary, use keywords `Bind Schema To Location` or `Add Doctor
Import`. These are rarely needed. Next, `Create Soap Client` to create a Suds
client. The output from this keyword contains useful information including
available types and methods. Next, use other keywords to configure the
client as necessary. `Set Location` is the most commonly needed keyword.
== Working with WSDL Objects ==
When Suds digests a WSDL, it creates dynamic types to represent the complex
types defined by a WSDL or its imports. These types are listed in the
output of `Create Soap Client`. WSDL objects are used as method arguments,
attribute values of other WSDL objects, and return values. `Create Wsdl
Object` is used to create instances of WSDL object types. To see what the
structure of a WSDL object is, you can do this:
| ${obj}= | Create Wsdl Object | someObject |
| ${obj as str}= | Convert To String | ${obj} |
| Log | ${obj as str} | |
The same technique can be used to analyze a response object. It may also
help to use a tool such as Eclipse or SoapUI to comprehend the structures.
=== Getting WSDL Object Attributes ===
Getting a WSDL object's attribute value may be done with `Get Wsdl Object
Attribute` or extended variable syntax*. Keywords from other libraries, such
as _BuiltIn_ and _Collections_ may be used to verify attribute values.
Examples:
| ${name}= | Get Wsdl Object Attribute | ${person} | name |
| Should Be Equal | ${person.name} | Bob | |
=== Setting WSDL Object Attributes ===
Setting a WSDL object's attribute value may be done with `Set Wsdl Object
Attribute` or extended variable syntax*. `Set Wsdl Object Attribute`
verifies the argument is an object of the correct type and the attribute
exists.
| Set Wsdl Object Attribute | ${person} | name | Tia |
| ${person.name}= | Set Variable | Tia | |
* In order to use extended variable syntax, the attribute name must consist
of only letters, numbers, and underscores.
== Example Test ==
The following simple example demonstrates verifying the return value using
keywords in this library and in the `BuiltIn` and `Collections` libraries.
You can run this test because it uses a public web service.
| Create Soap Client | http://www.webservicex.net/Statistics.asmx?WSDL | | |
| ${dbl array}= | Create Wsdl Object | ArrayOfDouble | |
| Append To List | ${dbl array.double} | 2.0 | |
| Append To List | ${dbl array.double} | 3.0 | |
| ${result}= | Call Soap Method | GetStatistics | ${dbl array} |
| Should Be Equal As Numbers | ${result.Average} | 2.5 | |
The definition of type ArrayOfDouble:
| <s:complexType name="ArrayOfDouble">
| <s:sequence>
| <s:element minOccurs="0" maxOccurs="unbounded" name="double" type="s:double"/>
| </s:sequence>
| </s:complexType>
Note that the attribute name on the ArrayOfDouble-type that is the list of
numbers is the singular "double". Outside of the WSDL, the structure can
also be seen in the output of Create Wsdl Object:
| ${dbl array} = (ArrayOfDouble){
| double[] = <empty>
| }
The relevant part of the WSDL defining the parameters to the method:
| <s:element name="GetStatistics">
| <s:complexType>
| <s:sequence>
| <s:element minOccurs="0" maxOccurs="1" name="X" type="tns:ArrayOfDouble"/>
| </s:sequence>
| </s:complexType>
| </s:element>
The definition of this method appears in the output of Create Soap Client
as:
| GetStatistics(ArrayOfDouble X, )
== Passing Explicit NULL Values ==
If you have a service that takes NULL values for required parameters or
you want to pass NULL for optional object attributes, you simply need to
set the value to ${SUDS_NULL}. You need to use ${SUDS_NULL} instead of
${None} because None is interpreted by the marshaller as not having a
value. The soap message will contain an empty (and xsi:nil="true" if node
defined as nillable). ${SUDS_NULL} is defined during library
initialization, so editors like RIDE will not show it as defined.
== Extending SudsLibrary ==
There may be times where Suds/SudsLibrary does not work using the library
keywords alone. Extending the library instead of writing a custom one will
allow you to use the existing keywords in SudsLibrary.
There are two methods useful for extending SudsLibrary:
| _client()
| _add_client(client, alias=None)
The first can be used to access the current instance of
suds.client.Client. The second can be used to put a client into the client
cache that you have instantiated.
Here is an example demonstrating how to implement a keyword that adds a
MessagePlugin to the current Suds client (based on the [https://fedorahosted.org/suds/wiki/Documentation#MessagePlugin|Suds documentation]):
| from robot.libraries.BuiltIn import BuiltIn
| from suds.plugin import MessagePlugin
|
| class _MyPlugin(MessagePlugin):
| def marshalled(self, context):
| body = context.envelope.getChild('Body')
| foo = body[0]
| foo.set('id', '12345')
| foo.set('version', '2.0')
|
| class SudsLibraryExtensions(object):
| def attach_my_plugin(self):
| client = BuiltIn().get_library_instance("SudsLibrary")._client()
| # prepend so SudsLibrary's plugin is left in place
| plugins = client.options.plugins
| if any(isinstance(x, _MyPlugin) for x in plugins):
| return
| plugins.insert(0, _MyPlugin())
| client.set_options(plugins=plugins)
"""
ROBOT_LIBRARY_VERSION = VERSION
ROBOT_LIBRARY_SCOPE = "GLOBAL"
ROBOT_LIBRARY_DOC_FORMAT = "ROBOT"
def __init__(self):
self._cache = ConnectionCache(no_current_msg='No current client')
self._imports = []
self._logger = logger
self._global_timeout = False
try: # exception if Robot is not running
BuiltIn().set_global_variable("${SUDS_NULL}", null())
except:
pass | /robotframework_sudslibrary3-1.0-py3-none-any.whl/SudsLibrary/__init__.py | 0.69368 | 0.430028 | __init__.py | pypi |
from suds.xsd.doctor import Import
from suds.xsd.sxbasic import Import as BasicImport
from suds import ServiceNotFound
from suds.transport.https import HttpAuthenticated
from suds.transport.https import WindowsHttpAuthenticated
from suds.transport.http import HttpAuthenticated as AlwaysSendTransport
from .utils import *
import robot
class _OptionsKeywords(object):
def set_service(self, service):
"""Sets the `service` to use in future requests.
`service` should be the name or the index of the service as it appears in the WSDL.
"""
service = parse_index(service)
self._client().set_options(service=service)
def set_port(self, port):
"""Sets the `port` to use in future requests.
`port` should be the name or the index of the port as it appears in the WSDL.
"""
port = parse_index(port)
self._client().set_options(port=port)
def set_proxies(self, *protocol_url_pairs):
"""Sets the http proxy settings.
| Set Proxy | http | localhost:5000 | https | 10.0.4.23:80 |
"""
if len(protocol_url_pairs) % 2 != 0:
raise ValueError("There should be an even number of protocol-url pairs.")
proxy = {}
for i in range(0, len(protocol_url_pairs), 2):
proxy[protocol_url_pairs[i]] = protocol_url_pairs[i + 1]
self._client().set_options(proxy=proxy)
def set_headers(self, *dict_or_key_value_pairs):
"""Sets _extra_ http headers to send in future requests.
For HTTP headers; not to be confused with the SOAP header element.
Example:
| Set Headers | X-Requested-With | autogen | # using key-value pairs |
or using a dictionary:
| ${headers}= | Create Dictionary | X-Requested-With | autogen |
| Set Headers | ${headers} | | # using a dictionary |
"""
length = len(dict_or_key_value_pairs)
if length == 1:
headers = dict_or_key_value_pairs[0]
elif length % 2 == 0:
headers = {}
for i in range(0, len(dict_or_key_value_pairs), 2):
headers[dict_or_key_value_pairs[i]] = dict_or_key_value_pairs[i + 1]
else:
raise ValueError("There should be an even number of name-value pairs.")
self._client().set_options(headers=headers)
def set_soap_headers(self, *headers):
"""Sets SOAP headers to send in future requests.
Example:
| ${auth header}= | Create Wsdl Object | AuthHeader | |
| Set Wsdl Object Attribute | ${auth header} | UserID | gcarlson |
| Set Wsdl Object Attribute | ${auth header} | Password | heyOh |
| Set Soap Headers | ${auth header} | # using WSDL object | |
or using a dictionary:
| ${auth dict}= | Create Dictionary | UserName | gcarlson | Password | heyOh |
| Set Soap Headers | ${auth dict} | # using a dictionary | | | |
For setting WS-Security elements in the SOAP header, see
`Apply Username Token` and `Apply Security Timestamp`.
"""
self._client().set_options(soapheaders=headers)
def set_return_xml(self, return_xml):
"""Sets whether to return XML in future requests.
The default value is _False_. If `return_xml` is _True_, then return
the SOAP envelope as a string in future requests. Otherwise, return a
Python object graph. `Get Last Received` returns the XML received
regardless of this setting.
See also `Call Soap Method`, `Call Soap Method Expecting Fault`, and
`Specific Soap Call`.
Example:
| ${old value}= | Set Return Xml | True |
"""
old_value = self._client().options.retxml
self._set_boolean_option('retxml', return_xml)
return old_value
def set_http_authentication(self, username, password, type='STANDARD'):
"""Sets http authentication type and credentials.
Available types are STANDARD, ALWAYS_SEND, and NTLM. Type STANDARD
will only send credentials to the server upon request (HTTP/1.0 401
Authorization Required) by the server only. Type ALWAYS_SEND will
cause an Authorization header to be sent in every request. Type NTLM
requires the python-ntlm package to be installed, which is not
packaged with Suds or SudsLibrary.
"""
classes = {
'STANDARD': HttpAuthenticated,
'ALWAYS_SEND': AlwaysSendTransport,
'NTLM': WindowsHttpAuthenticated
}
try:
_class = classes[type.upper()]
except KeyError:
raise ValueError("'%s' is not a supported type." % type)
transport = _class(username=username, password=password)
self._client().set_options(transport=transport)
def set_location(self, url, service=None, names=None):
"""Sets location to use in future requests.
This is for when the location(s) specified in the WSDL are not correct.
`service` is the name or index of the service to change and ignored
unless there is more than one service. `names` should be either a
comma-delimited list of methods names or an iterable (e.g. a list). If
no methods names are given, then sets the location for all methods of
the service(s).
Example:
| Set Location | http://localhost:8080/myWS |
"""
wsdl = self._client().wsdl
service_count = len(wsdl.services)
if (service_count == 1):
service = 0
elif not service is None:
service = parse_index(service)
if isinstance(names, str):
names = names.split(",")
if service is None:
for svc in wsdl.services:
svc.setlocation(url, names)
elif isinstance(service, int):
wsdl.services[service].setlocation(url, names)
else:
for svc in wsdl.services:
if svc.name == service:
svc.setlocation(url, names)
return
raise ServiceNotFound(service)
def add_doctor_import(self, import_namespace, location=None, filters=None):
"""Adds an import be used in the next client.
Doctor imports are applied to the _next_ client created with
`Create Soap Client`. Doctor imports are necessary when references are
made in one schema to named objects defined in another schema without
importing it. Use `location` to specify the location to download the
schema file. `filters` should be either a comma-delimited list of
namespaces or an iterable (e.g. a list).
The following example would import the SOAP encoding schema into only
the namespace http://some/namespace/A if it is not already imported:
| Add Doctor Import | http://schemas.xmlsoap.org/soap/encoding/ | filters=http://some/namespace/A |
"""
if isinstance(filters, str):
filters = filters.split(",")
imp = Import(import_namespace, location)
if not filters is None:
for filter in filters:
imp.filter.add(filter)
self._imports.append(imp)
def bind_schema_to_location(self, namespace, location):
"""Sets the `location` for the given `namespace` of a schema.
This is for when an import statement specifies a schema but not its
location. If the schemaLocation is present and incorrect, this will
not override that. Bound schemas are shared amongst all instances of
SudsLibrary. Schemas should be bound if necessary before `Add Doctor
Import` or `Create Soap Client` where appropriate.
"""
BasicImport.bind(namespace, location)
def set_soap_timeout(self, timeout):
"""Sets the timeout for SOAP requests.
`timeout` must be given in Robot Framework's time format (e.g.
'1 minute', '2 min 3 s', '4.5'). The default timeout is 90 seconds.
Example:
| Set Soap Timeout | 3 min |
"""
self._set_soap_timeout(timeout)
timestr = format_robot_time(timeout)
self._logger.info("SOAP timeout set to %s" % timestr)
# private
def _set_boolean_option(self, name, value):
value = to_bool(value)
self._client().set_options(**{name: value})
def _set_soap_timeout(self, timeout):
timeout_in_secs = robot.utils.timestr_to_secs(timeout)
self._client().set_options(timeout=timeout_in_secs) | /robotframework_sudslibrary3-1.0-py3-none-any.whl/SudsLibrary/options.py | 0.740456 | 0.312291 | options.py | pypi |
from suds.sudsobject import Object as SudsObject
class _FactoryKeywords(object):
def set_wsdl_object_attribute(self, object, name, value):
"""Sets the attribute of a WSDL object.
Example:
| ${order search request}= | Create Wsdl Object | OrderSearchRequest | |
| Set Wsdl Object Attribute | ${order search request} | id | 4065 |
"""
self._assert_is_suds_object(object)
getattr(object, name)
setattr(object, name, value)
def get_wsdl_object_attribute(self, object, name):
"""Gets the attribute of a WSDL object.
Extendend variable syntax may be used to access attributes; however,
some WSDL objects may have attribute names that are illegal in Python,
necessitating this keyword.
Example:
| ${sale record}= | Call Soap Method | getLastSale | |
| ${price}= | Get Wsdl Object Attribute | ${sale record} | Price |
"""
self._assert_is_suds_object(object)
return getattr(object, name)
def create_wsdl_object(self, type, *name_value_pairs):
"""Creates a WSDL object of the specified `type`.
Requested `type` must be defined in the WSDL, in an import specified
by the WSDL, or with `Add Doctor Import`. `type` is case sensitive.
Example:
| ${contact}= | Create Wsdl Object | Contact | |
| Set Wsdl Object Attribute | ${contact} | Name | Kelly Newman |
Attribute values can be set by passing the attribute name and value in
pairs. This is equivalent to the two lines above:
| ${contact}= | Create Wsdl Object | Contact | Name | Kelly Newman |
"""
if len(name_value_pairs) % 2 != 0:
raise ValueError("Creating a WSDL object failed. There should be "
"an even number of name-value pairs.")
obj = self._client().factory.create(type)
for i in range(0, len(name_value_pairs), 2):
self.set_wsdl_object_attribute(obj, name_value_pairs[i], name_value_pairs[i + 1])
return obj
# private
def _assert_is_suds_object(self, object):
if not isinstance(object, SudsObject):
raise ValueError("Object must be a WSDL object (suds.sudsobject.Object).") | /robotframework_sudslibrary3-1.0-py3-none-any.whl/SudsLibrary/factory.py | 0.73173 | 0.359898 | factory.py | pypi |
import os
import urllib
from suds.xsd.doctor import ImportDoctor
from suds.transport.http import HttpAuthenticated
from suds.client import Client
from .utils import *
try:
from urllib.parse import urlparse
except ImportError:
from urlparse import urlparse
class _ClientManagementKeywords(object):
def create_soap_client(self, url_or_path, alias=None, autoblend=False, timeout='90 seconds'):
"""Loads a WSDL from the given URL/path and creates a Suds SOAP client.
Returns the index of this client instance which can be used later to
switch back to it. See `Switch Soap Client` for example.
Optional alias is an alias for the client instance and it can be used
for switching between clients (just as index can be used). See `Switch
Soap Client` for more details.
Autoblend ensures that the schema(s) defined within the WSDL import
each other.
`timeout` sets the timeout for SOAP requests and must be given in
Robot Framework's time format (e.g. '1 minute', '2 min 3 s', '4.5').
Examples:
| Create Soap Client | http://localhost:8080/ws/Billing.asmx?WSDL |
| Create Soap Client | ${CURDIR}/../wsdls/tracking.wsdl |
"""
url = self._get_url(url_or_path)
autoblend = to_bool(autoblend)
kwargs = {'autoblend': autoblend}
imports = self._imports
if imports:
self._log_imports()
kwargs['doctor'] = ImportDoctor(*imports)
client = Client(url, **kwargs)
index = self._add_client(client, alias)
self._set_soap_timeout(timeout)
return index
def switch_soap_client(self, index_or_alias):
"""Switches between clients using index or alias.
Index is returned from `Create Soap Client` and alias can be given to
it.
Example:
| Create Soap Client | http://localhost:8080/Billing?wsdl | Billing |
| Create Soap Client | http://localhost:8080/Marketing?wsdl | Marketing |
| Call Soap Method | sendSpam | |
| Switch Soap Client | Billing | # alias |
| Call Soap Method | sendInvoices | |
| Switch Soap Client | 2 | # index |
Above example expects that there was no other clients created when
creating the first one because it used index '1' when switching to it
later. If you aren't sure about that you can store the index into
a variable as below.
| ${id} = | Create Soap Client | ... |
| # Do something ... | | |
| Switch Soap Client | ${id} | |
"""
self._cache.switch(index_or_alias)
# PyAPI
def _client(self):
"""Returns the current suds.client.Client instance."""
return self._cache.current
def _add_client(self, client, alias=None):
"""Puts a client into the cache and returns the index.
The added client becomes the current one."""
client.set_options(faults=True)
self._logger.info('Using WSDL at %s%s' % (client.wsdl.url, client))
self._imports = []
index = self._cache.register(client, alias)
self.set_soap_logging(True)
return index
# private
def _log_imports(self):
if self._imports:
msg = "Using Imports for ImportDoctor:"
for imp in self._imports:
msg += "\n Namespace: '%s' Location: '%s'" % (imp.ns, imp.location)
for ns in imp.filter.tns:
msg += "\n Filtering for namespace '%s'" % ns
self._logger.info(msg)
def _get_url(self, url_or_path):
if not len(urlparse(url_or_path).scheme) > 1:
if not os.path.isfile(url_or_path):
raise IOError("File '%s' not found." % url_or_path)
url_or_path = 'file:' + urllib.pathname2url(url_or_path)
return url_or_path | /robotframework_sudslibrary3-1.0-py3-none-any.whl/SudsLibrary/clientmanagement.py | 0.66072 | 0.266497 | clientmanagement.py | pypi |
from robot.api.deco import library, keyword
import sysrepo
import libyang
@library(scope='GLOBAL')
class SysrepoLibrary(object):
"""
SysrepoLibrary is a Robot Framework library for Sysrepo.
"""
def __init__(self):
self.conns = {}
self.sessions = {}
@keyword("Open Sysrepo Connection")
def open_connection(self):
"""
Opens a Sysrepo connection.
:returns:
the connection ID of an opened connection
"""
conn = sysrepo.SysrepoConnection()
connID = 0
if len(self.conns.keys()) != 0:
connID = max(self.conns.keys()) + 1
self.conns[connID] = conn
self.sessions[connID] = dict()
return connID
@keyword("Close Sysrepo Connection")
def close_connection(self, connID):
"""
Closes a Sysrepo Connection.
:arg connID:
An opened connection ID.
"""
if connID not in self.conns.keys():
raise RuntimeError(f"Non-existing index {connID}")
self.conns[connID].disconnect()
del self.sessions[connID]
@keyword("Open Datastore Session")
def open_session(self, connID, datastore):
"""
Opens a Sysrepo datastore session.
:arg connID:
An opened connection ID.
:arg datastore:
Specifies which datastore to open a session to.
Example: "running"
:returns:
An open session ID.
"""
if connID not in self.conns.keys():
raise RuntimeError(f"Non-existing index {connID}")
sess = self.conns[connID].start_session(datastore)
sessID = 0
if len(self.sessions[connID].keys()) != 0:
sessID = max(self.sessions[connID].keys()) + 1
self.sessions[connID][sessID] = sess
return sessID
@keyword("Close Datastore Session")
def close_session(self, connID, sessID):
"""
Closes a Sysrepo datastore session.
:arg connID:
An opened connection ID.
:arg sessID:
An opened session ID, corrseponding to the connection ID.
"""
if connID not in self.conns.keys():
raise RuntimeError(f"Non-existing connection index {connID}")
if sessID not in self.sessions[connID]:
raise RuntimeError(f"Non-existing session index {sessID}")
self.sessions[connID][sessID].stop()
del self.sessions[connID][sessID]
@keyword("Close All Sysrepo Connections And Sessions")
def close_all_connections_and_sessions(self):
"""
Closes all open connections and sessions.
Example: for usage with `Suite Teardown`
"""
# force a key copy, avoid runtime error for dicitionary len change
for connID in tuple(self.conns):
for sessID in tuple(self.sessions[connID]):
self.close_session(connID, sessID)
self.close_connection(connID)
@keyword("Get Datastore Data")
def get_datastore_data(self, connID, sessID, xpath, fmt):
"""
Get a datastore's data.
:arg connID:
An opened connection ID.
:arg sessID:
An opened session ID, corresponding to the connection.
:arg xpath:
The datastore's XML path
:arg fmt:
Format of the returned data.
Example: xml
:returns:
The datastore's data in the specified format.
"""
if connID not in self.conns.keys():
raise RuntimeError(f"Non-existing connection index {connID}")
if sessID not in self.sessions[connID]:
raise RuntimeError(f"Non-existing session index {sessID}")
with self.sessions[connID][sessID].get_data_ly(xpath) as data:
return data.print_mem(fmt, pretty=False, with_siblings=True)
@keyword("Edit Datastore Config")
def edit_config(self, connID, sessID, data, fmt):
"""
Edit a datastore's config file.
:arg connID:
An opened connection ID.
:arg sessID:
An opened session ID, corresponding to the connection.
:arg data:
The new config data
:arg fmt:
Format of the returned data.
Example: xml
"""
if connID not in self.conns.keys():
raise RuntimeError(f"Non-existing connection index {connID}")
if sessID not in self.sessions[connID]:
raise RuntimeError(f"Non-existing session index {sessID}")
with self.conns[connID].get_ly_ctx() as ctx:
yangData = ctx.parse_data_mem(data,
fmt,
no_state=True,
strict=True)
self.sessions[connID][sessID].edit_batch_ly(yangData)
self.sessions[connID][sessID].apply_changes()
yangData.free()
@keyword("Edit Datastore Config By File")
def edit_config_by_file(self, connID, sessID, fpath, fmt):
"""
Edit a datastore's config file by a file's contents.
:arg connID:
An opened connection ID.
:arg sessID:
An opened session ID, corresponding to the connection.
:arg fpath:
Path to the file containing the data.
:arg fmt:
Format of the returned data.
Example: xml
"""
if connID not in self.conns.keys():
raise RuntimeError(f"Non-existing connection index {connID}")
if sessID not in self.sessions[connID]:
raise RuntimeError(f"Non-existing session index {sessID}")
try:
with open(fpath, "r") as f:
data = f.read().strip()
self.edit_config(connID, sessID, data, fmt)
except IOError:
raise RuntimeError(f"Non-existing file {fpath}")
@keyword("Send RPC")
def send_rpc(self, connID, rpc, fmt):
"""
Send a RPC.
:arg connID:
An opened connection ID.
:arg rpc:
Rpc to send.
:arg fmt:
Format of the returned data.
Example: xml
:returns:
The data in the specified format.
"""
if connID not in self.conns.keys():
raise RuntimeError(f"Non-existing connection index {connID}")
with self.conns[connID].get_ly_ctx() as ctx:
dnode = ctx.parse_op_mem(fmt, rpc, libyang.DataType.RPC_YANG)
data = dnode.print(fmt, out_type=libyang.IOType.MEMORY)
dnode.free()
return data
@keyword("Send RPC By File")
def send_rpc_by_file(self, connID, fpath, fmt):
"""
Send a RPC by a file's contents.
:arg connID:
An opened connection ID.
:arg fpath:
Path to the file containing the data.
:arg fmt:
Format of the returned data.
Example: xml
:returns:
The data in the specified format.
"""
if connID not in self.conns.keys():
raise RuntimeError(f"Non-existing connection index {connID}")
try:
with open(fpath, "r") as f:
rpc = f.read().strip()
return self.send_rpc(connID, rpc, fmt)
except IOError:
raise RuntimeError(f"Non-existing file {fpath}") | /robotframework_sysrepolibrary-0.1.1-py3-none-any.whl/SysrepoLibrary/SysrepoLibrary.py | 0.572245 | 0.2182 | SysrepoLibrary.py | pypi |
from typing import Any, List, Optional, Tuple, Union
from robot.api import logger
from robot.utils import ConnectionCache
from tarantool import Connection, response
import codecs
class TarantoolLibrary(object):
"""
Robot Framework library for working with Tarantool DB.
== Dependencies ==
| tarantool | https://pypi.org/project/tarantool/ | version > 0.5 |
| robot framework | http://robotframework.org |
"""
ROBOT_LIBRARY_SCOPE = 'GLOBAL'
def __init__(self) -> None:
"""Library initialization.
Robot Framework ConnectionCache() class is prepared for working with concurrent connections."""
self._connection: Optional[Connection] = None
self._cache = ConnectionCache()
@property
def connection(self) -> Connection:
""" Property method for getting existence DB connection object.
*Returns:*\n
DB connection
"""
if not self._connection:
raise AttributeError('No database connection found.')
return self._connection
def _modify_key_type(self, key: Any, key_type: str) -> Union[int, str]:
"""
Convert key to the required tarantool data type.
Tarantool data types corresponds to the following Python types:
STR - str
NUM, NUM64 - int
*Args:*\n
_key_: key to modify;\n
_key_type_: key type: STR, NUM, NUM64;\n
*Returns:*\n
modified key.
"""
key_type = key_type.upper()
if key_type == "STR":
if isinstance(key, bytes):
return codecs.decode(key)
return str(key)
if key_type in ["NUM", "NUM64"]:
return int(key)
raise Exception(f"Wrong key type for conversation: {key_type}. Allowed ones are STR, NUM and NUM64")
def connect_to_tarantool(self, host: str, port: Union[int, str], user: str = None, password: str = None,
alias: str = None) -> int:
"""
Connection to Tarantool DB.
*Args:*\n
_host_ - host for db connection;\n
_port_ - port for db connection;\n
_user_ - username for db connection;\n
_password_ - password for db connection;\n
_alias_ - connection alias, used for switching between open connections;\n
*Returns:*\n
Returns ID of the new connection. The connection is set as active.
*Example:*\n
| Connect To Tarantool | 127.0.0.1 | 3301 |
"""
logger.debug(f'Connecting to the Tarantool DB using host={host}, port={port}, user={user}')
try:
self._connection = Connection(host=host, port=int(port), user=user, password=password)
return self._cache.register(self.connection, alias)
except Exception as exc:
raise Exception("Logon to Tarantool error:", str(exc))
def close_all_tarantool_connections(self) -> None:
"""
Close all Tarantool connections that were opened.
After calling this keyword connection index returned by opening new connections [#Connect To Tarantool |Connect To Tarantool],
starts from 1.
*Example:*\n
| Connect To Tarantool | 192.168.0.1 | 3031 | user | password | alias=trnt_1 |
| Connect To Tarantool | 192.168.0.2 | 3031 | user | password | alias=trnt_2 |
| Switch Tarantool Connection | trnt_1 |
| @{data1}= | Select | space1 | key1 |
| Switch Tarantool Connection | trnt_2 |
| @{data2}= | Select | space2 | key2 |
| Close All Tarantool Connections |
"""
self._cache.close_all()
self._connection = None
def switch_tarantool_connection(self, index_or_alias: Union[int, str]) -> int:
"""
Switch to another existing Tarantool connection using its index or alias.\n
The connection index is obtained on creating connection.
Connection alias is optional and can be set at connecting to DB [#Connect To Tarantool|Connect To Tarantool].
*Args:*\n
_index_or_alias_ - connection index or alias assigned to connection;
*Returns:*\n
Index of the previous connection.
*Example:* (switch by alias)\n
| Connect To Tarantool | 192.168.0.1 | 3031 | user | password | alias=trnt_1 |
| Connect To Tarantool | 192.168.0.2 | 3031 | user | password | alias=trnt_2 |
| Switch Tarantool Connection | trnt_1 |
| @{data1}= | Select | space1 | key1 |
| Switch Tarantool Connection | trnt_2 |
| @{data2}= | Select | space2 | key2 |
| Close All Tarantool Connections |
*Example:* (switch by connection index)\n
| ${trnt_index1}= | Connect To Tarantool | 192.168.0.1 | 3031 | user | password |
| ${trnt_index2}= | Connect To Tarantool | 192.168.0.2 | 3031 | user | password |
| @{data1}= | Select | space1 | key1 |
| ${previous_index}= | Switch Tarantool Connection | ${trnt_index1} |
| @{data2}= | Select | space2 | key2 |
| Switch Tarantool Connection | ${previous_index} |
| @{data3}= | Select | space1 | key1 |
| Close All Tarantool Connections |
"""
logger.debug(f'Switching to tarantool connection with alias/index {index_or_alias}')
old_index = self._cache.current_index
self._connection = self._cache.switch(index_or_alias)
return old_index
def select(self, space_name: Union[int, str], key: Any, offset: int = 0, limit: int = 0xffffffff,
index: Union[int, str] = 0, key_type: str = None, **kwargs: Any) -> response.Response:
"""
Select and retrieve data from the database.
*Args:*\n
_space_name_: space id to insert a record;\n
_key_: values to search over the index;\n
_offset_: offset in the resulting tuple set;\n
_limit_: limits the total number of returned tuples. Deafult is max of unsigned int32;\n
_index_: specifies which index to use. Default is 0 which means that the primary index will be used;\n
_key_type_: type of the key;\n
_kwargs_: additional params;\n
*Returns:*\n
Tarantool server response.
*Example:*\n
| ${data_from_trnt}= | Select | space_name=some_space_name | key=0 | key_type=NUM |
| Set Test Variable | ${key} | ${data_from_trnt[0][0]} |
| Set Test Variable | ${data_from_field} | ${data_from_trnt[0][1]} |
"""
logger.debug(f'Select data from space {space_name} by key {key}')
if key_type:
key = self._modify_key_type(key=key, key_type=key_type)
return self.connection.select(
space_name=space_name,
key=key,
offset=offset,
limit=limit,
index=index,
**kwargs
)
def insert(self, space_name: Union[int, str], values: Tuple[Union[int, str], ...]) -> response.Response:
"""
Execute insert request.
*Args:*\n
_space_name_: space id to insert a record;\n
_values_: record to be inserted. The tuple must contain only scalar (integer or strings) values;\n
*Returns:*\n
Tarantool server response
*Example:*\n
| ${data_to_insert}= | Create List | 1 | ${data} |
| ${response}= | Insert | space_name=${SPACE_NAME} | values=${data_to_insert} |
| Set Test Variable | ${key} | ${response[0][0]} |
"""
logger.debug(f'Insert values {values} in space {space_name}')
return self.connection.insert(space_name=space_name, values=values)
def create_operation(self, operation: str, field: int, arg: Any) -> Tuple:
"""
Check and prepare operation tuple.
*Allowed operations:*;\n
'+' for addition (values must be numeric);\n
'-' for subtraction (values must be numeric);\n
'&' for bitwise AND (values must be unsigned numeric);\n
'|' for bitwise OR (values must be unsigned numeric);\n
'^' for bitwise XOR (values must be unsigned numeric);\n
':' for string splice (you must provide 'offset', 'count' and 'value'
for this operation);\n
'!' for insertion (provide any element to insert);\n
'=' for assignment (provide any element to assign);\n
'#' for deletion (provide count of fields to delete);\n
*Args:*\n
_operation_: operation sign;\n
_field_: field number, to apply operation to;\n
_arg_: depending on operation argument or list of arguments;\n
*Returns:*\n
Sequence of the operation parameters.
*Example:*\n
| ${list_to_append}= | Create List | ${offset} | ${count} | ${value} |
| ${operation}= | Create Operation | operation=: | field=${1} | arg=${list_to_append} |
"""
if operation not in ('+', '-', '&', '|', '^', ':', '!', '=', '#'):
raise Exception(f'Unsupported operation: {operation}')
if isinstance(arg, (list, tuple)):
op_field_list: List[Union[int, str]] = [operation, field]
op_field_list.extend(arg)
return tuple(op_field_list)
else:
return operation, field, arg
def update(self, space_name: Union[int, str], key: Any, op_list: Union[Tuple, List[Tuple]],
key_type: str = None, **kwargs: Any) -> response.Response:
"""
Execute update request.
Update accepts both operation and list of operations for the argument op_list.
*Args:*\n
_space_name_: space number or name to update a record;\n
_key_: key that identifies a record;\n
_op_list_: operation or list of operations. Each operation is tuple of three (or more) values;\n
_key_type_: type of the key;\n
_kwargs_: additional params;\n
*Returns:*\n
Tarantool server response.
*Example:* (list of operations)\n
| ${operation1}= | Create Operation | operation== | field=${1} | arg=NEW DATA |
| ${operation2}= | Create Operation | operation== | field=${2} | arg=ANOTHER NEW DATA |
| ${op_list}= | Create List | ${operation1} | ${operation2} |
| Update | space_name=${SPACE_NAME} | key=${key} | op_list=${op_list} |
*Example:* (one operation)\n
| ${list_to_append}= | Create List | ${offset} | ${count} | ${value} |
| ${operation}= | Create Operation | operation== | field=${1} | arg=NEW DATA |
| Update | space_name=${SPACE_NAME} | key=${key} | op_list=${operation} |
"""
logger.debug(f'Update data in space {space_name} with key {key} with operations {op_list}')
if key_type:
key = self._modify_key_type(key=key, key_type=key_type)
if isinstance(op_list[0], (list, tuple)):
return self.connection.update(space_name=space_name, key=key, op_list=op_list, **kwargs)
else:
return self.connection.update(space_name=space_name, key=key, op_list=[op_list], **kwargs)
def delete(self, space_name: Union[int, str], key: Any, key_type: str = None, **kwargs: Any) -> response.Response:
"""
Execute delete request.
*Args:*\n
_space_name_: space number or name to delete a record;\n
_key_: key that identifies a record;\n
_key_type_: type of the key;\n
_kwargs_: additional params;\n
*Returns:*\n
Tarantool server response.
*Example:*\n
| Delete | space_name=${SPACE_NAME}| key=${key} |
"""
logger.debug(f'Delete data in space {space_name} by key {key}')
if key_type:
key = self._modify_key_type(key=key, key_type=key_type)
return self.connection.delete(space_name=space_name, key=key, **kwargs) | /robotframework-tarantoollibrary-2.0.0.tar.gz/robotframework-tarantoollibrary-2.0.0/src/TarantoolLibrary.py | 0.913414 | 0.21032 | TarantoolLibrary.py | pypi |
import json
from jinja2 import Environment, BaseLoader, select_autoescape
from robot.api import logger
from robot.errors import VariableError
from robot.libraries.BuiltIn import BuiltIn
class TemplatedData:
def __init__(
self,
default_empty="",
jinja_template=False,
return_type="text",
ignore_missing=False,
):
self.default_empty = default_empty
self.jinja_template = jinja_template
self.return_type = return_type
self.ignore_missing = ignore_missing
@staticmethod
def normalize(value):
return value.lower()
def get_templated_data(self, template, **kwargs):
default_empty = kwargs.pop("default_empty", self.default_empty)
jinja_template = kwargs.pop("jinja_template", self.jinja_template)
return_type = kwargs.pop("return_type", self.return_type)
ignore_missing = kwargs.pop("ignore_missing", self.ignore_missing)
logger.debug(f"Template:\n{template}")
overwrite_values = {self.normalize(arg): value for arg, value in kwargs.items()}
templated_vars = {}
elements = _search_variables(template, default_empty, ignore_missing)
template = resolve(elements, overwrite_values, jinja_template, templated_vars)
if jinja_template:
r_template = Environment(loader=BaseLoader(), autoescape=select_autoescape(['html', 'htm', 'xml'])).from_string(template)
replaced_data = r_template.render(templated_vars=templated_vars)
else:
replaced_data = template
logger.debug(f"Rendered template:\n{replaced_data}")
return self.return_data_with_type(replaced_data, return_type)
def get_templated_data_from_path(self, path, encoding="utf-8", **kwargs):
with open(path, encoding=encoding) as f:
template = f.read()
return self.get_templated_data(template, **kwargs)
@staticmethod
def return_data_with_type(data, data_type):
if data_type == "json":
return json.loads(data)
return data
class Variable:
def __init__(self, string, default_empty, ignore_missing):
self.raw_name = f"${{{string}}}"
self.ignore_missing = ignore_missing
elements = _search_variables(string, default_empty, ignore_missing)
for index, elem in enumerate(elements):
if isinstance(elem, str):
if ":" in elem:
pre, suf = elem.split(":", maxsplit=1)
self.value = elements[:index]
self.default = elements[index + 1 :]
if pre:
self.value.append(pre)
if suf:
self.default.insert(0, suf)
break
else:
self.value = elements
self.default = default_empty
def resolve(self, overwrite_values, jinja_template, templated_vars):
value = resolve(self.value, overwrite_values, jinja_template, templated_vars)
raw_name, *attrs = value.split(".", maxsplit=1)
if raw_name in overwrite_values:
default = overwrite_values[raw_name]
else:
built_in = BuiltIn()
if self.ignore_missing:
name = built_in._get_var_name(f"${{{raw_name}}}")
try:
default = built_in._variables.replace_scalar(name)
except VariableError:
default = self.raw_name
else:
default_value = resolve(
self.default, overwrite_values, jinja_template, templated_vars
)
default = built_in.get_variable_value(
f"${{{raw_name}}}", default=default_value
)
templated_vars[raw_name] = default
if jinja_template:
var_replaced = f'templated_vars["{raw_name}"]'
if attrs:
var_replaced += "." + attrs[0]
else:
var_replaced = str(default)
return var_replaced
def resolve(elements, overwrite_values, jinja_template, templated_vars):
new_elements = []
for elem in elements:
if isinstance(elem, Variable):
elem = elem.resolve(overwrite_values, jinja_template, templated_vars)
new_elements.append(elem)
return "".join(new_elements)
def _search_variables(string, default_empty, ignore_missing):
"""return list in form of [string, Variable, string, Variable.. ] .
Following string: "my value is ${value}." will return: ["my value is", Variable("${value}"), .]
"""
elements = []
if not string:
return elements
while True:
var_start = string.find("${")
if var_start < 0:
if string:
elements.append(string)
break
if var_start:
elements.append(string[:var_start])
string = string[var_start + 2 :]
bracket = 1
for index, char in enumerate(string):
if char == "{":
bracket += 1
elif char == "}":
bracket -= 1
if not bracket:
elements.append(Variable(string[:index], default_empty, ignore_missing))
string = string[index + 1 :]
break
else:
if string:
elements.append(string)
break
return elements | /robotframework-templateddata-1.4.0.tar.gz/robotframework-templateddata-1.4.0/TemplatedData/__init__.py | 0.441191 | 0.265654 | __init__.py | pypi |
# Copyright (c) 2015 Lingaro
import traceback
import socket
from robot.api import logger
from .utils import get_netloc_and_path
from .rally import WrappedRally
class ConnectionManager(object):
"""
Connection Manager handles the connection & disconnection to the Rally server.
"""
RALLY_CONNECTION_CLASS = WrappedRally
INVALIND_CREDENTIALS_MESSAGE = u'Invalid credentials'
def __init__(self):
"""
Initializes _rally_connection to None.
"""
self._rally_connection = None
def _check_connection(self):
"""
Checks if connection is initialized.
"""
return not self._rally_connection is None
def _assert_connection(self):
"""
Checks that connection is initialized and raises a ValueError and log some info if not.
"""
if not self._check_connection():
logger.warn("connection have not established yet, probably you have to call connect to rally first")
raise ValueError("connection have not established yet")
def _get_rally_connection(self):
"""
Safe rally connection getter.
:return: rally connection object
"""
self._assert_connection()
return self._rally_connection
def _create_rally_connection(self, *args, **kwargs):
logger.debug("timeout before really connection creation: {0}".format(socket.getdefaulttimeout()))
old_timeout = socket.getdefaulttimeout()
try:
return self.RALLY_CONNECTION_CLASS(*args, **kwargs)
finally:
logger.debug("timeout changed to: {0}".format(socket.getdefaulttimeout()))
socket.setdefaulttimeout(old_timeout)
logger.debug("Back to previous value: {0}".format(socket.getdefaulttimeout()))
def _detect_wrong_credentials(self, err):
"""
Detects if error is related to invalid credentials passed by the user. This check is useful to not retry and to
not block the user account.
Args:
err:
Returns: True if error is related to invalid credentials and False otherwise
"""
return self.INVALIND_CREDENTIALS_MESSAGE in err.message
def connect_to_rally(self, server_url, user, password, workspace, project=None, number_of_retries=3, log_file=None):
"""
Establishes connection to the rally server using the provided parameters: `server`, `user` and `password`.
You have to specify a `workspace` parameter to set the correct workspace environment. You may set `project`
parameter, but it is optional (default None means to search in all projects in workspace).
You may provided `number_of_retries` parameter witch indicate how many times we should try to establish
connections. Default value is 3. If `number_of_retries` is reach the last exception is thrown. All exceptions
occurred in previous tries are swallowed.
Method can enable rally logging. You can provide optional parameter `log_file` to point file of your choice.
Default `log_file` parameter value is None, witch indicates that logging is disabled.
Example usage:
| # explicitly specifies all property values |
| Connect To Rally | SERVER_URL | USER | PASSWORD | SOME-WORKSPACE | SOME-PROJECT | NUMBER-OF-RETRIES | PATH-TO-LOG-FILE |
| # minimal property values set |
| Connect To Rally | SERVER_URL | USER | PASSWORD | WORKSPACE |
| # minimal with logging logging enabled |
| Connect To Rally | SERVER_URL | USER | PASSWORD | WORKSPACE | log_file=False |
"""
logger.info(u"Try to connect to rally using: server={server}, workspace={workspace}, project={project}".format(
server=server_url,
workspace=workspace,
project=project
))
server = get_netloc_and_path(server_url)
kwargs = {}
if project:
kwargs['project'] = project
if workspace:
kwargs['workspace'] = workspace
tries_counter = 0
number_of_retries = int(number_of_retries)
while True:
tries_counter += 1
try:
self._rally_connection = self._create_rally_connection(server, user, password, **kwargs)
break
except Exception as e:
if self._detect_wrong_credentials(e):
logger.error(self.INVALIND_CREDENTIALS_MESSAGE)
raise e
elif number_of_retries <= tries_counter:
logger.warn("An error occurred. Maximum number of tries reached.")
raise e
else:
logger.warn("An error occurred. Try again.")
logger.warn(traceback.format_exc())
logger.info("Connection to {server} established.".format(server=server))
if log_file:
self._rally_connection.enableLogging(str(log_file))
logger.info(u"Logging to {0}".format(log_file))
else:
logger.info(u"Logging is disabled")
self._rally_connection.disableLogging()
def _reset_rally_connection(self):
self._rally_connection = None
def disconnect_from_rally(self):
"""
Disconnects from the rally server.
For example:
| Disconnect From Rally | # disconnects from current connection to the rally |
"""
if not self._check_connection():
logger.info("connection doesn't exist so can't be disconnected")
else:
logger.info("resetting connection")
self._reset_rally_connection() | /robotframework-testmanagement-0.1.12.tar.gz/robotframework-testmanagement-0.1.12/src/TestManagementLibrary/connection_manager.py | 0.750827 | 0.21655 | connection_manager.py | pypi |
# Copyright (c) 2015 Lingaro
import traceback
from robot.api import logger
class Operator(unicode):
pass
Operator.EQUAL = Operator('=')
Operator.NOT_EQUAL = Operator('!=')
Operator.CONTAINS = Operator('contains')
Operator.NOT_CONTAINS = Operator('!contains')
class RallyQueryParameter(object):
NAME = None
DEFAULT_OPERATOR = Operator.EQUAL
@classmethod
def from_string(cls, value):
#@TODO: so far only default operator is supported
return cls(value)
@classmethod
def is_default_value(cls, value):
return not bool(value)
def __init__(self, value, operator=None):
self._value = value
if operator is None:
operator = self.DEFAULT_OPERATOR
self._operator = operator
@property
def name(self):
if self.NAME is None:
raise ValueError(u"subclass of RallyQueryParameter should provide NAME property")
return self.NAME
def construct(self):
return u"{name} {operator} \"{value}\"".format(
name=unicode(self.name),
operator=unicode(self._operator),
value=unicode(self._value)
)
def is_valid(self):
return True
class ObjectIDParameter(RallyQueryParameter):
NAME = u"ObjectID"
DEFAULT_OPERATOR = Operator.EQUAL
class FormattedIDParameter(RallyQueryParameter):
NAME = u'FormattedID'
DEFAULT_OPERATOR = Operator.CONTAINS
class NotesParameter(RallyQueryParameter):
NAME = u'Notes'
DEFAULT_OPERATOR = Operator.CONTAINS
class ProjectParameter(RallyQueryParameter):
NAME = u'Project'
DEFAULT_OPERATOR = Operator.EQUAL
class DescriptionParameter(RallyQueryParameter):
NAME = u'Description'
DEFAULT_OPERATOR = Operator.CONTAINS
class NameParameter(RallyQueryParameter):
NAME = u'Name'
DEFAULT_OPERATOR = Operator.CONTAINS
class DisplayNameParameter(RallyQueryParameter):
NAME = u'DisplayName'
DEFAULT_OPERATOR = Operator.CONTAINS
class UserNameParameter(RallyQueryParameter):
NAME = u'UserName'
DEFAULT_OPERATOR = Operator.CONTAINS
class RallyQueryJoinMethod(unicode):
PATTERN = u"({arg1}) {oper} ({arg2})"
@classmethod
def from_string(cls, value):
if value == cls.AND:
return cls.AND
elif value == cls.OR:
return cls.OR
else:
raise ValueError(u"Unsupported join method {0}".format(value))
def join_params(self, params):
result = u""
if len(params) == 1:
result = params[0].construct()
elif len(params) > 1:
result = self.PATTERN.format(
arg1=params[0].construct(),
oper=self,
arg2=params[1].construct()
)
for param in params[2:]:
result = self.PATTERN.format(
arg1=result,
oper=self,
arg2=param.construct()
)
return result
RallyQueryJoinMethod.AND = RallyQueryJoinMethod('AND')
RallyQueryJoinMethod.OR = RallyQueryJoinMethod('OR')
class RallyQuery(object):
def __init__(self, join_method=None):
self._params = []
if join_method:
join_method = RallyQueryJoinMethod.from_string(join_method)
else:
join_method = RallyQueryJoinMethod.AND
self._join_method = join_method
def add_parameter(self, parameter):
if not parameter.is_valid():
raise ValueError(u"Invalid parameter {0}".format(unicode(parameter)))
if parameter in self._params:
raise ValueError(u"Duplicated parameter value {0}".format(unicode(parameter)))
self._params.append(parameter)
def construct(self):
return self._join_method.join_params(self._params)
class QueryManager(object):
QUERY_PARAMETER_REGISTRY = dict(
object_id=ObjectIDParameter,
formatted_id=FormattedIDParameter,
project=ProjectParameter,
name=NameParameter,
description=DescriptionParameter,
notes=NotesParameter,
)
@classmethod
def _build_query(cls, **kwargs):
"""
Builder design pattern method witch construct the RallyQuery object base on method parameters.
If parameter is evaluated to False, it will be omitted in the query.
:return: query object with parameters.
"""
query = RallyQuery(join_method=kwargs.pop(u'param_join_method', None))
for name, value in kwargs.iteritems():
if name in cls.QUERY_PARAMETER_REGISTRY:
param_class = cls.QUERY_PARAMETER_REGISTRY.get(name)
if not param_class.is_default_value(value):
param = param_class.from_string(value)
logger.info(u"{0} provided: {1}".format(param.name, unicode(value)))
query.add_parameter(param)
else:
logger.warn(u"Unregistered parameter class for key {0}".format(name))
return query
def _execute_rally_query(self, object_type, query, fetch=True, **kwargs):
"""
Calls Rally Rest API with given query
:param query: RallyQuery object
:return:
"""
connection = self._get_rally_connection()
query_str = query.construct()
logger.info(u"Constructed query: {0}".format(query_str))
try:
result = connection.get(object_type, fetch=fetch, query=query_str, **kwargs)
except Exception as e:
logger.warn(u"An error occurred during getting data")
logger.warn(traceback.format_exc())
raise e
if result.errors:
logger.warn(u"Some errors occurred when executed query: {0}".format(u"\n".join(result.errors)))
return result
def _get_object_by_id(self, object_type, id_param, **kwargs):
"""
Fetch exactly one object of given type.
:param object_type: object type
:param id_param: an parameter that should identify exactly one object
:param kwargs: extra params
:return: request object from Rally
"""
query = RallyQuery()
query.add_parameter(id_param)
result = list(self._execute_rally_query(object_type, query, **kwargs))
if not result:
logger.warn(u"No result found for {0}".format(id_param.construct()))
raise ValueError(u"No result found")
elif len(result) > 1:
logger.warn(u"Found {0} but only one was expected, truncate extra data".format(len(result)))
return result[0] | /robotframework-testmanagement-0.1.12.tar.gz/robotframework-testmanagement-0.1.12/src/TestManagementLibrary/query.py | 0.457621 | 0.188287 | query.py | pypi |
import json
import re
import requests
import os
from typing import Any, Dict, List, Optional, Union
from robot.api import logger
from TestRailAPIClient import JsonDict, TestRailAPIClient
__author__ = "Dmitriy.Zverev"
__license__ = "Apache License, Version 2.0"
class TestRailListener(object):
"""Fixing of testing results and update test case in [ http://www.gurock.com/testrail/ | TestRail ].
== Dependencies ==
| past | https://pypi.org/project/past/ |
| requests | https://pypi.python.org/pypi/requests |
| robot framework | http://robotframework.org |
| TestRailAPIClient |
== Preconditions ==
1. [ http://docs.gurock.com/testrail-api2/introduction | Enable TestRail API] \n
2. Create custom field "case_description" with type "text", which corresponds to the Robot Framework's test case documentation.
== Example ==
1. Create test case in TestRail with case_id = 10\n
2. Add it to test run with id run_id = 20\n
3. Create autotest in Robot Framework
| *** Settings ***
| *** Test Cases ***
| Autotest name
| [Documentation] Autotest documentation
| [Tags] testrailid=10 defects=BUG-1, BUG-2 references=REF-3, REF-4
| Fail Test fail message
4. Run Robot Framework with listener:\n
| set ROBOT_SYSLOG_FILE=syslog.txt
| robot --listener TestRailListener.py:testrail_server_name:tester_user_name:tester_user_password:20:https:update autotest.robot
5. Test with case_id=10 will be marked as failed in TestRail with message "Test fail message" and defects "BUG-1, BUG-2".
Also title, description and references of this test will be updated in TestRail. Parameter "update" is optional.
"""
ROBOT_LISTENER_API_VERSION = 2
ELAPSED_KEY = 'elapsed'
TESTRAIL_CASE_TYPE_ID_AUTOMATED = 1
TESTRAIL_TEST_STATUS_ID_PASSED = 1
TESTRAIL_TEST_STATUS_ID_FAILED = 5
def __init__(self, server: str, user: str, password: str, run_id: str, protocol: str = 'http',
juggler_disable: str = None, update: str = None) -> None:
"""Listener initialization.
*Args:*\n
_server_ - name of TestRail server;\n
_user_ - name of TestRail user;\n
_password_ - password of TestRail user;\n
_run_id_ - ID of the test run;\n
_protocol_ - connecting protocol to TestRail server: http or https;\n
_juggler_disable_ - indicator to disable juggler logic; if exist, then juggler logic will be disabled;\n
_update_ - indicator to update test case in TestRail; if exist, then test will be updated.
"""
testrail_url = '{protocol}://{server}/'.format(protocol=protocol, server=server)
self._url = testrail_url + 'index.php?/api/v2/'
self._user = user
self._password = password
self.run_id = run_id
self.juggler_disable = juggler_disable
self.update = update
self.tr_client = TestRailAPIClient(server, user, password, run_id, protocol)
self._vars_for_report_link: Optional[Dict[str, str]] = None
logger.info('[TestRailListener] url: {testrail_url}'.format(testrail_url=testrail_url))
logger.info('[TestRailListener] user: {user}'.format(user=user))
logger.info('[TestRailListener] the ID of the test run: {run_id}'.format(run_id=run_id))
def end_test(self, name: str, attributes: JsonDict) -> None:
""" Update test case in TestRail.
*Args:* \n
_name_ - name of test case in Robot Framework;\n
_attributes_ - attributes of test case in Robot Framework.
"""
tags_value = self._get_tags_value(attributes['tags'])
case_id = tags_value['testrailid']
if not case_id:
logger.warn(f"[TestRailListener] No case_id presented for test_case {name}.")
return
if 'skipped' in [tag.lower() for tag in attributes['tags']]:
logger.warn(f"[TestRailListener] SKIPPED test case \"{name}\" with testrailId={case_id} "
"will not be posted to Testrail")
return
# Update test case
if self.update:
references = tags_value['references']
self._update_case_description(attributes, case_id, name, references)
# Send test results
defects = tags_value['defects']
old_test_status_id = self.tr_client.get_test_status_id_by_case_id(self.run_id, case_id)
test_result = self._prepare_test_result(attributes, defects, old_test_status_id, case_id)
try:
self.tr_client.add_result_for_case(self.run_id, case_id, test_result)
except requests.HTTPError as error:
logger.error(f"[TestRailListener] http error on case_id = {case_id}\n{error}")
def _update_case_description(self, attributes: JsonDict, case_id: str, name: str,
references: Optional[str]) -> None:
""" Update test case description in TestRail
*Args:* \n
_attributes_ - attributes of test case in Robot Framework;\n
_case_id_ - case id;\n
_name_ - test case name;\n
_references_ - test references.
"""
logger.info(f"[TestRailListener] update of test {case_id} in TestRail")
description = f"{attributes['doc']}\nPath to test: {attributes['longname']}"
request_fields: Dict[str, Union[str, int, None]] = {
'title': name, 'type_id': self.TESTRAIL_CASE_TYPE_ID_AUTOMATED,
'custom_case_description': description, 'refs': references}
try:
json_result = self.tr_client.update_case(case_id, request_fields)
result = json.dumps(json_result, sort_keys=True, indent=4)
logger.info(f"[TestRailListener] result for method update_case: {result}")
except requests.HTTPError as error:
logger.error(f"[TestRailListener] http error, while execute request:\n{error}")
def _prepare_test_result(self, attributes: JsonDict, defects: Optional[str], old_test_status_id: Optional[int],
case_id: str) -> Dict[str, Union[str, int]]:
"""Create json with test result information.
*Args:* \n
_attributes_ - attributes of test case in Robot Framework;\n
_defects_ - list of defects (in string, comma-separated);\n
_old_test_status_id_ - old test status id;\n
_case_id_ - test case ID.
*Returns:*\n
Dictionary with test results.
"""
link_to_report = self._get_url_report_by_case_id(case_id)
test_time = float(attributes['elapsedtime']) / 1000
comment = f"Autotest name: {attributes['longname']}\nMessage: {attributes['message']}\nTest time:" \
f" {test_time:.3f} s"
if link_to_report:
comment += f'\nLink to Report: {link_to_report}'
if self.juggler_disable:
if attributes['status'] == 'PASS':
new_test_status_id = self.TESTRAIL_TEST_STATUS_ID_PASSED
else:
new_test_status_id = self.TESTRAIL_TEST_STATUS_ID_FAILED
else:
new_test_status_id = self._prepare_new_test_status_id(attributes['status'], old_test_status_id)
test_result: Dict[str, Union[str, int]] = {
'status_id': new_test_status_id,
'comment': comment,
}
elapsed_time = TestRailListener._time_span_format(test_time)
if elapsed_time:
test_result[TestRailListener.ELAPSED_KEY] = elapsed_time
if defects:
test_result['defects'] = defects
return test_result
def _prepare_new_test_status_id(self, new_test_status: str, old_test_status_id: Optional[int]) -> int:
"""Prepare new test status id by new test status and old test status id.
Alias of this method is "juggler".
If new test status is "PASS", new test status id is "passed".
If new test status is "FAIL" and old test status id is null or "passed" or "failed",
new test status id is "failed".
In all other cases new test status id is equal to old test status id.
*Args:* \n
_new_test_status_ - new test status;\n
_old_test_status_id_ - old test status id.
*Returns:*\n
New test status id.
"""
old_statuses_to_fail = (self.TESTRAIL_TEST_STATUS_ID_PASSED, self.TESTRAIL_TEST_STATUS_ID_FAILED, None)
if new_test_status == 'PASS':
new_test_status_id = self.TESTRAIL_TEST_STATUS_ID_PASSED
elif new_test_status == 'FAIL' and old_test_status_id in old_statuses_to_fail:
new_test_status_id = self.TESTRAIL_TEST_STATUS_ID_FAILED
else:
assert old_test_status_id is not None
new_test_status_id = old_test_status_id
return new_test_status_id
@staticmethod
def _get_tags_value(tags: List[str]) -> Dict[str, Optional[str]]:
""" Get value from robot framework's tags for TestRail.
*Args:* \n
_tags_ - list of tags.
*Returns:* \n
Dict with attributes.
"""
attributes: Dict[str, Optional[str]] = dict()
matchers = ['testrailid', 'defects', 'references']
for matcher in matchers:
for tag in tags:
match = re.match(matcher, tag)
if match:
split_tag = tag.split('=')
tag_value = split_tag[1]
attributes[matcher] = tag_value
break
else:
attributes[matcher] = None
return attributes
@staticmethod
def _time_span_format(seconds: Any) -> str:
""" Format seconds to time span format.
*Args:*\n
_seconds_ - time in seconds.
*Returns:*\n
Time formatted in Time span.
"""
if isinstance(seconds, float):
seconds = int(seconds)
elif not isinstance(seconds, int):
seconds = 0
if seconds <= 0:
return ''
s = seconds % 60
res = "{}s".format(s)
seconds -= s
if seconds >= 60:
m = (seconds % 60 ** 2) // 60
res = "{}m {}".format(m, res)
seconds -= m * 60
if seconds >= 60 ** 2:
h = seconds // 60 ** 2
res = "{}h {}".format(h, res)
return res
@staticmethod
def _get_vars_for_report_link() -> Dict[str, str]:
"""" Getting value from environment variables for prepare link to report.
If test cases are started by means of CI, then must define the environment variables
in the CI configuration settings to getting url to the test case report.
The following variables are used:
for Teamcity - TEAMCITY_HOST_URL, TEAMCITY_BUILDTYPE_ID, TEAMCITY_BUILD_ID,
REPORT_ARTIFACT_PATH, TORS_REPORT,
for Jenkins - JENKINS_BUILD_URL.
If these variables are not found, then the link to report will not be formed.
== Example ==
1. for Teamcity
| Changing build configuration settings
| REPORT_ARTIFACT_PATH output
| TORS_REPORT report.html
| TEAMCITY_BUILD_ID %teamcity.build.id%
| TEAMCITY_BUILDTYPE_ID %system.teamcity.buildType.id%
| TEAMCITY_HOST_URL https://teamcity.billing.ru
2. for Jenkins
| add to the shell the execution of the docker container parameter
| -e "JENKINS_BUILD_URL = ${BUILD_URL}"
*Returns:*\n
Dictionary with environment variables results.
"""
variables: Dict[str, str] = {}
env_var = os.environ.copy()
if 'TEAMCITY_HOST_URL' in env_var:
try:
teamcity_vars = {'TEAMCITY_HOST_URL',
'TEAMCITY_BUILDTYPE_ID',
'TEAMCITY_BUILD_ID',
'REPORT_ARTIFACT_PATH'}
variables = {var: env_var[var] for var in teamcity_vars}
except KeyError:
logger.error("[TestRailListener] There are no variables for getting a link to the report by tests.")
if env_var.get('TORS_REPORT', '').strip():
variables['TORS_REPORT'] = env_var['TORS_REPORT']
elif 'JENKINS_BUILD_URL' in env_var:
variables = {'JENKINS_BUILD_URL': env_var['JENKINS_BUILD_URL']}
return variables
@property
def vars_for_report_link(self) -> Dict[str, str]:
"""Get variables for report link.
Saves environment variables information once and then returns cached values.
*Returns:*\n
Cached variables for report link.
"""
if not self._vars_for_report_link:
self._vars_for_report_link = self._get_vars_for_report_link()
return self._vars_for_report_link
def _get_url_report_by_case_id(self, case_id: Union[str, int]) -> Optional[str]:
"""" Getting url for Report by id test case.
*Args:* \n
_case_id_ - test case ID.
*Returns:*\n
Report URL.
"""
build_url = ''
report_filename = self.vars_for_report_link.get('TORS_REPORT', 'report.html')
report_uri = f'{report_filename}#search?include=testrailid={case_id}'
if 'TEAMCITY_HOST_URL' in self.vars_for_report_link:
vars = self.vars_for_report_link
base_hostname = vars.get('TEAMCITY_HOST_URL')
buildtype_id = vars.get('TEAMCITY_BUILDTYPE_ID')
build_id = vars.get('TEAMCITY_BUILD_ID')
report_artifact_path = vars.get('REPORT_ARTIFACT_PATH')
build_url = f'{base_hostname}/repository/download/{buildtype_id}/{build_id}:id/{report_artifact_path}'
elif 'JENKINS_BUILD_URL' in self.vars_for_report_link:
build_url = self.vars_for_report_link['JENKINS_BUILD_URL'] + 'robot/report'
return f'{build_url}/{report_uri}' if build_url else None | /robotframework-testrail-correct-link-1.1.tar.gz/robotframework-testrail-correct-link-1.1/src/TestRailListener.py | 0.759047 | 0.331931 | TestRailListener.py | pypi |
from requests import post, get
from typing import Any, cast, Dict, List, Optional, Sequence, Union
DEFAULT_TESTRAIL_HEADERS = {'Content-Type': 'application/json'}
TESTRAIL_STATUS_ID_PASSED = 1
# custom types
JsonDict = Dict[str, Any] # noqa: E993
JsonList = List[JsonDict] # noqa: E993
Id = Union[str, int] # noqa: E993
class TestRailAPIClient(object):
"""Library for working with [http://www.gurock.com/testrail/ | TestRail].
== Dependencies ==
| requests | https://pypi.python.org/pypi/requests |
== Preconditions ==
1. [ http://docs.gurock.com/testrail-api2/introduction | Enable TestRail API]
"""
def __init__(self, server: str, user: str, password: str, run_id: Id, protocol: str = 'http') -> None:
"""Create TestRailAPIClient instance.
*Args:*\n
_server_ - name of TestRail server;\n
_user_ - name of TestRail user;\n
_password_ - password of TestRail user;\n
_run_id_ - ID of the test run;\n
_protocol_ - connecting protocol to TestRail server: http or https.
"""
self._url = '{protocol}://{server}/index.php?/api/v2/'.format(protocol=protocol, server=server)
self._user = user
self._password = password
self.run_id = run_id
def _send_post(self, uri: str, data: Dict[str, Any]) -> Union[JsonList, JsonDict]:
"""Perform post request to TestRail.
*Args:* \n
_uri_ - URI for test case;\n
_data_ - json with test result.
*Returns:* \n
Request result in json format.
"""
url = self._url + uri
response = post(url, json=data, auth=(self._user, self._password), verify=False)
response.raise_for_status()
return response.json()
def _send_get(self, uri: str, headers: Dict[str, str] = None,
params: Dict[str, Any] = None) -> Union[JsonList, JsonDict]:
"""Perform get request to TestRail.
*Args:* \n
_uri_ - URI for test case;\n
_headers_ - headers for http-request;\n
_params_ - parameters for http-request.
*Returns:* \n
Request result in json format.
"""
url = self._url + uri
response = get(url, headers=headers, params=params, auth=(self._user, self._password), verify=False)
response.raise_for_status()
return response.json()
def get_tests(self, run_id: Id, status_ids: Union[str, Sequence[int]] = None) -> JsonList:
"""Get tests from TestRail test run by run_id.
*Args:* \n
_run_id_ - ID of the test run;\n
_status_ids_ - list of the required test statuses.
*Returns:* \n
Tests information in json format.
"""
uri = 'get_tests/{run_id}'.format(run_id=run_id)
if status_ids:
status_ids = ','.join(str(status_id) for status_id in status_ids)
params = {
'status_id': status_ids
}
response = self._send_get(uri=uri, headers=DEFAULT_TESTRAIL_HEADERS, params=params)
return cast(JsonList, response)
def get_results_for_case(self, run_id: Id, case_id: Id, limit: int = None) -> JsonList:
"""Get results for case by run_id and case_id.
*Args:* \n
_run_id_ - ID of the test run;\n
_case_id_ - ID of the test case;\n
_limit_ - limit of case results.
*Returns:* \n
Cases results in json format.
"""
uri = 'get_results_for_case/{run_id}/{case_id}'.format(run_id=run_id, case_id=case_id)
params = {
'limit': limit
}
response = self._send_get(uri=uri, headers=DEFAULT_TESTRAIL_HEADERS, params=params)
return cast(JsonList, response)
def add_result_for_case(self, run_id: Id, case_id: Id,
test_result_fields: Dict[str, Union[str, int]]) -> None:
"""Add results for case in TestRail test run by run_id and case_id.
*Supported request fields for test result:*\n
| *Name* | *Type* | *Description* |
| status_id | int | The ID of the test status |
| comment | string | The comment / description for the test result |
| version | string | The version or build you tested against |
| elapsed | timespan | The time it took to execute the test, e.g. "30s" or "1m 45s" |
| defects | string | A comma-separated list of defects to link to the test result |
| assignedto_id | int | The ID of a user the test should be assigned to |
| Custom fields are supported as well and must be submitted with their system name, prefixed with 'custom_' |
*Args:* \n
_run_id_ - ID of the test run;\n
_case_id_ - ID of the test case;\n
_test_result_fields_ - result of the test fields dictionary.
*Example:*\n
| Add Result For Case | run_id=321 | case_id=123| test_result={'status_id': 3, 'comment': 'This test is untested', 'defects': 'DEF-123'} |
"""
uri = 'add_result_for_case/{run_id}/{case_id}'.format(run_id=run_id, case_id=case_id)
self._send_post(uri, test_result_fields)
def get_statuses(self) -> JsonList:
"""Get test statuses information from TestRail.
*Returns:* \n
Statuses information in json format.
"""
uri = 'get_statuses'
response = self._send_get(uri=uri, headers=DEFAULT_TESTRAIL_HEADERS)
return cast(JsonList, response)
def update_case(self, case_id: Id, request_fields: Dict[str, Union[str, int, None]]) -> JsonDict:
"""Update an existing test case in TestRail.
*Supported request fields:*\n
| *Name* | *Type* | *Description* |
| title | string | The title of the test case (required) |
| template_id | int | The ID of the template (field layout) (requires TestRail 5.2 or later) |
| type_id | int | The ID of the case type |
| priority_id | int | The ID of the case priority |
| estimate | timespan | The estimate, e.g. "30s" or "1m 45s" |
| milestone_id | int | The ID of the milestone to link to the test case |
| refs | string | A comma-separated list of references/requirements |
| Custom fields are supported as well and must be submitted with their system name, prefixed with 'custom_' |
*Args:* \n
_case_id_ - ID of the test case;\n
_request_fields_ - request fields dictionary.
*Returns:* \n
Case information in json format.
*Example:*\n
| Update Case | case_id=213 | request_fields={'title': name, 'type_id': 1, 'custom_case_description': description, 'refs': references} |
"""
uri = 'update_case/{case_id}'.format(case_id=case_id)
response = self._send_post(uri, request_fields)
return cast(JsonDict, response)
def get_status_id_by_status_label(self, status_label: str) -> int:
"""Get test status id by status label.
*Args:* \n
_status_label_ - status label of the tests.
*Returns:* \n
Test status ID.
"""
statuses_info = self.get_statuses()
for status in statuses_info:
if status['label'].lower() == status_label.lower():
return status['id']
raise Exception(u"There is no status with label \'{}\' in TestRail".format(status_label))
def get_test_status_id_by_case_id(self, run_id: Id, case_id: Id) -> Optional[int]:
"""Get test last status id by case id.
If there is no last test result returns None.
*Args:* \n
_run_id_ - ID of the test run;\n
_case_id_ - ID of the test case.
*Returns:* \n
Test status ID.
"""
last_case_result = self.get_results_for_case(run_id=run_id, case_id=case_id, limit=1)
return last_case_result[0]['status_id'] if last_case_result else None
def get_project(self, project_id: Id) -> JsonDict:
"""Get project info by project id.
*Args:* \n
_project_id_ - ID of the project.
*Returns:* \n
Request result in json format.
"""
uri = 'get_project/{project_id}'.format(project_id=project_id)
response = self._send_get(uri=uri, headers=DEFAULT_TESTRAIL_HEADERS)
return cast(JsonDict, response)
def get_suite(self, suite_id: Id) -> JsonDict:
"""Get suite info by suite id.
*Args:* \n
_suite_id_ - ID of the test suite.
*Returns:* \n
Request result in json format.
"""
uri = 'get_suite/{suite_id}'.format(suite_id=suite_id)
response = self._send_get(uri=uri, headers=DEFAULT_TESTRAIL_HEADERS)
return cast(JsonDict, response)
def get_section(self, section_id: Id) -> JsonDict:
"""Get section info by section id.
*Args:* \n
_section_id_ - ID of the section.
*Returns:* \n
Request result in json format.
"""
uri = 'get_section/{section_id}'.format(section_id=section_id)
response = self._send_get(uri=uri, headers=DEFAULT_TESTRAIL_HEADERS)
return cast(JsonDict, response)
def add_section(self, project_id: Id, name: str, suite_id: Id = None, parent_id: Id = None,
description: str = None) -> JsonDict:
"""Creates a new section.
*Args:* \n
_project_id_ - ID of the project;\n
_name_ - name of the section;\n
_suite_id_ - ID of the test suite(ignored if the project is operating in single suite mode);\n
_parent_id_ - ID of the parent section (to build section hierarchies);\n
_description_ - description of the section.
*Returns:* \n
New section information.
"""
uri = 'add_section/{project_id}'.format(project_id=project_id)
data: Dict[str, Union[int, str]] = {'name': name}
if suite_id is not None:
data['suite_id'] = suite_id
if parent_id is not None:
data['parent_id'] = parent_id
if description is not None:
data['description'] = description
response = self._send_post(uri=uri, data=data)
return cast(JsonDict, response)
def get_sections(self, project_id: Id, suite_id: Id) -> JsonList:
"""Returns existing sections.
*Args:* \n
_project_id_ - ID of the project;\n
_suite_id_ - ID of the test suite.
*Returns:* \n
Information about section.
"""
uri = 'get_sections/{project_id}&suite_id={suite_id}'.format(project_id=project_id, suite_id=suite_id)
response = self._send_get(uri=uri, headers=DEFAULT_TESTRAIL_HEADERS)
return cast(JsonList, response)
def get_case(self, case_id: Id) -> JsonDict:
"""Get case info by case id.
*Args:* \n
_case_id_ - ID of the test case.
*Returns:* \n
Request result in json format.
"""
uri = 'get_case/{case_id}'.format(case_id=case_id)
response = self._send_get(uri=uri, headers=DEFAULT_TESTRAIL_HEADERS)
return cast(JsonDict, response)
def get_cases(self, project_id: Id, suite_id: Id = None, section_id: Id = None) -> JsonList:
"""Returns a list of test cases for a test suite or specific section in a test suite.
*Args:* \n
_project_id_ - ID of the project;\n
_suite_id_ - ID of the test suite (optional if the project is operating in single suite mode);\n
_section_id_ - ID of the section (optional).
*Returns:* \n
Information about test cases in section.
"""
uri = 'get_cases/{project_id}'.format(project_id=project_id)
params = {'project_id': project_id}
if suite_id is not None:
params['suite_id'] = suite_id
if section_id is not None:
params['section_id'] = section_id
response = self._send_get(uri=uri, headers=DEFAULT_TESTRAIL_HEADERS, params=params)
return cast(JsonList, response)
def add_case(self, section_id: Id, title: str, steps: List[Dict[str, str]], description: str, refs: str,
type_id: Id, priority_id: Id, **additional_data: Any) -> JsonDict:
"""Creates a new test case.
*Args:* \n
_section_id_ - ID of the section;\n
_title_ - title of the test case;\n
_steps_ - test steps;\n
_description_ - test description;\n
_refs_ - comma-separated list of references;\n
_type_id_ - ID of the case type;\n
_priority_id_ - ID of the case priority;\n
_additional_data_ - additional parameters.
*Returns:* \n
Information about new test case.
"""
uri = 'add_case/{section_id}'.format(section_id=section_id)
data = {
'title': title,
'custom_case_description': description,
'custom_steps_separated': steps,
'refs': refs,
'type_id': type_id,
'priority_id': priority_id
}
for key in additional_data:
data[key] = additional_data[key]
response = self._send_post(uri=uri, data=data)
return cast(JsonDict, response) | /robotframework-testrail-correct-link-1.1.tar.gz/robotframework-testrail-correct-link-1.1/src/TestRailAPIClient.py | 0.919755 | 0.192179 | TestRailAPIClient.py | pypi |
from typing import List, Optional
from concurrent.futures import ThreadPoolExecutor, as_completed, TimeoutError, Future
from requests.exceptions import RequestException
from robot.api import SuiteVisitor, TestSuite
from robot.output import LOGGER
from TestRailAPIClient import TestRailAPIClient, TESTRAIL_STATUS_ID_PASSED
CONNECTION_TIMEOUT = 60 # Value in seconds of timeout connection with testrail for one request
class TestRailPreRunModifier(SuiteVisitor):
"""Pre-run modifier for starting test cases from a certain test run in [http://www.gurock.com/testrail/ | TestRail].
== Dependencies ==
| robot framework | http://robotframework.org |
| TestRailAPIClient |
== Preconditions ==
1. [ http://docs.gurock.com/testrail-api2/introduction | Enable TestRail API] \n
== Example ==
1. Create test cases in TestRail with case_id: 10,11,12. \n
2. Add test cases with case_id:10,12 into test run with run_id = 20. \n
3. Create robot_suite in Robot Framework:
| *** Test Cases ***
| Autotest name 1
| [Documentation] Autotest 1 documentation
| [Tags] testrailid=10
| Fail Test fail message
| Autotest name 2
| [Documentation] Autotest 2 documentation
| [Tags] testrailid=11
| Fail Test fail message
| Autotest name 3
| [Documentation] Autotest 3 documentation
| [Tags] testrailid=12
| Fail Test fail message
4. Run Robot Framework with pre-run modifier:
| robot --prerunmodifier TestRailPreRunModifier:testrail_server_name:tester_user_name:tester_user_password:20:http:0 robot_suite.robot
5. Test cases "Autotest name 1" and "Autotest name 3" will be executed. Test case "Autotest name 2" will be skipped.
6. To execute tests from TestRail test run only with a certain status, for example "failed" and "blocked":
| robot --prerunmodifier TestRailPreRunModifier:testrail_server_name:tester_user_name:tester_user_password:20:http:0:failed:blocked robot_suite.robot
6. To execute stable tests from TestRail test run with run analysis depth = 5:
| robot --prerunmodifier TestRailPreRunModifier:testrail_server_name:tester_user_name:tester_user_password:20:http:5 robot_suite.robot
"""
def __init__(self, server: str, user: str, password: str, run_id: str, protocol: str, # noqa: E951
results_depth: str, *status_names: str) -> None:
"""Pre-run modifier initialization.
*Args:*\n
_server_ - name of TestRail server;\n
_user_ - name of TestRail user;\n
_password_ - password of TestRail user;\n
_run_id_ - ID of the test run;\n
_protocol_ - connecting protocol to TestRail server: http or https;\n
_results_depth_ - analysis depth of run results;\n
_status_names_ - name of test statuses in TestRail.
"""
self.run_id = run_id
self.status_names = status_names
self.tr_client = TestRailAPIClient(server, user, password, run_id, protocol)
self.results_depth = int(results_depth) if str(results_depth).isdigit() else 0
self._tr_tags_list: Optional[List[str]] = None
self._tr_stable_tags_list: Optional[List[str]] = None
LOGGER.register_syslog()
@property
def tr_stable_tags_list(self) -> List[str]:
"""Gets list of 'testrailid' tags of the stable test cases.
Returns:
List of tags.
"""
if self._tr_stable_tags_list is None:
self._tr_stable_tags_list = self._get_tr_stable_tags_list()
return self._tr_stable_tags_list
@property
def tr_tags_list(self) -> List[str]:
"""Gets 'testrailid' tags.
Returns:
List of tags.
"""
if self._tr_tags_list is None:
self._tr_tags_list = self._get_tr_tags_list()
return self._tr_tags_list
def _log_to_parent_suite(self, suite: TestSuite, message: str) -> None:
"""Log message to the parent suite.
*Args:*\n
_suite_ - Robot Framework test suite object.
_message_ - message.
"""
if suite.parent is None:
LOGGER.error("{suite}: {message}".format(suite=suite, message=message))
def _get_tr_tags_list(self) -> List[str]:
"""Get list of 'testrailid' tags.
If required test statuses from the test run are passed to modifier,
a request is made to the TestRail to obtain information about all the statuses.
Theirs identifiers will be retrieved from list of all the statuses.
This identifiers will be used to receive test tags in the required status.
If statuses aren't passed to modifier,
the tags of all tests in the test run will be obtained regardless of their status.
Returns:
List of tags.
"""
status_ids = None
if self.status_names:
status_ids = [self.tr_client.get_status_id_by_status_label(name) for name in self.status_names]
tests_info = self.tr_client.get_tests(run_id=self.run_id, status_ids=status_ids)
return ['testrailid={}'.format(test["case_id"]) for test in tests_info if test["case_id"] is not None]
def _get_tr_stable_tags_list(self) -> List[str]:
"""Get list of 'testrailid' tags of the stable test cases.
If analysis depth of the run results is passed to modifier and its value greater than zero,
a request is made to the TestRail to receive information about test cases whose last result is 'passed'.
Based on the information received, the results of the latest runs for these test cases are analyzed,
on the basis of which the tags of stable test cases will be received.
Returns:
List of stable tags.
"""
stable_case_ids_list = list()
catched_exceptions = list()
passed_tests_info = self.tr_client.get_tests(run_id=self.run_id, status_ids=[TESTRAIL_STATUS_ID_PASSED])
case_ids = [test["case_id"] for test in passed_tests_info if test["case_id"] is not None]
def future_handler(future: Future) -> None:
"""Get result from future with try/except block and to list.
Args:
future: future object.
"""
case_id = futures[future]
try:
case_results = future.result()
except RequestException as exception:
catched_exceptions.append(exception)
else:
passed_list = [result for result in case_results if
result['status_id'] == TESTRAIL_STATUS_ID_PASSED]
if len(passed_list) == int(self.results_depth):
stable_case_ids_list.append(case_id)
with ThreadPoolExecutor() as executor:
futures = {executor.submit(self.tr_client.get_results_for_case, self.run_id, case_id, self.results_depth):
case_id for case_id in case_ids}
for future in as_completed(futures, timeout=CONNECTION_TIMEOUT):
future_handler(future)
if catched_exceptions:
raise catched_exceptions[0]
return ['testrailid={}'.format(case_id) for case_id in stable_case_ids_list]
def start_suite(self, suite: TestSuite) -> None:
"""Form list of tests for the Robot Framework test suite that are included in the TestRail test run.
If analysis depth of the run results is greater than zero, when first suite is launched
a list of 'testrailid' tags of stable test cases is obtained.
After that the list of tags is written to the class attribute and for subsequent suites the obtaining is not happening.
If analysis depth of the run results is zero, when the first suite is launched
a list of 'testrailid' tags of all test cases in the given status is obtained.
After that the list of tags is written to the class attribute and for subsequent suites the obtaining is not happening.
*Args:*\n
_suite_ - Robot Framework test suite object.
"""
tests = suite.tests
suite.tests = None
try:
if self.results_depth > 0:
suite.tests = [t for t in tests if (set(t.tags) & set(self.tr_stable_tags_list))]
else:
suite.tests = [t for t in tests if (set(t.tags) & set(self.tr_tags_list))]
except (RequestException, TimeoutError) as error:
self._log_to_parent_suite(suite, str(error))
def end_suite(self, suite: TestSuite) -> None:
"""Removing test suites that are empty after excluding tests that are not part of the TestRail test run.
*Args:*\n
_suite_ - Robot Framework test suite object.
"""
suite.suites = [s for s in suite.suites if s.test_count > 0]
if not suite.suites:
self._log_to_parent_suite(suite, "No tests to execute after using TestRail pre-run modifier.") | /robotframework-testrail-correct-link-1.1.tar.gz/robotframework-testrail-correct-link-1.1/src/TestRailPreRunModifier.py | 0.855157 | 0.469824 | TestRailPreRunModifier.py | pypi |
import logging
import pprint
import testrail
from argparse import ArgumentParser, FileType
from datetime import datetime
from pathlib import Path
from lxml import etree
def configure_logging():
"""
This method configures the logging
"""
log_format = '%(asctime)-15s %(levelname)-10s %(message)s'
Path("./output/log").mkdir(parents=True, exist_ok=True)
logging.basicConfig(filename='./output/log/robotframework2testrail.log', format=log_format, level=logging.DEBUG)
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)
console_handler.setFormatter(logging.Formatter('%(message)s'))
logging.getLogger().addHandler(console_handler)
class RobotTestrailReporter(object):
def __init__(self):
configure_logging()
self.rf_to_tr_status = {
"PASS": 1,
"FAIL": 5,
}
self.tr_to_rf_status = {
1: "PASS",
5: "FAIL"
}
self.time_format = '%Y%m%d %H:%M:%S.%f'
self.comment_size_limit = 1000
def find_tests_tagged_by(self, input_file, tag):
"""
This method parses the output xml generated by robotframework
defined by ``path`` and finds all test nodes with starting tag ``tag``.
Returns a list of dictionaries with following keys:
['rf_test_name', 'test_id , 'status_id, 'comment', 'elapsed']
- ``rf_test_name`` is the robotframework test name
- ``test_id`` is the numeric value contained in tag
- ``status_id`` is the conversion of test status to testrail standards
- ``comment`` is the test status message
- ``elapsed`` is the test execution time in s, (min 1 second)
Args:
``input_file`` is the robotframework xml output file
``tag`` is the tag to be found in robotframework tests, it
must contain a numeric part which specifies testcase run id,
example: test_case_id=123
"""
tree = etree.parse(input_file)
tests = tree.xpath(".//tags/tag[starts-with(., '%s')]/ancestor::test" % tag)
results = []
for test in tests:
test_id = test.xpath("tags/tag[starts-with(., '%s')]" % tag)[0].text
test_id = ''.join([d if d.isdigit() else '' for d in test_id])
status = test.find('status')
elapsed = self.get_elapsed(status)
comment = self.get_comment(status)
result = {
'rf_test_name': test.get('name'),
'test_case_id': int(test_id),
'test_id': None,
'status_id': self.rf_to_tr_status[status.get('status')],
'comment': comment,
'elapsed': elapsed
}
results.append(result)
return results
@staticmethod
def publish_results(api, run_id, results):
"""
This method publishes robotframework results to a testrail run
Args:
``api`` is an instance of testrail.client.API already logged in
``run_id`` is the id of the run to update
``results`` is a list of dictionaries that contains test results
"""
api.add_results(results, run_id)
@staticmethod
def replace_caseid_with_testid_by_run(api, results, run_id, ignore_blocked: bool):
"""
This method parse all results generated by find_tests_tagged_by method
and replace all testcases ids with specific testruns ids in ``test_id`` key
of each result
Args:
``api`` is an instance of testrail.client.API already logged in
``results`` is a list of dictionaries that contains test results
``run_id`` is the id of the run to update
"""
tests = api.tests(run_id)
mapping = {}
unwanted = []
for test in tests:
mapping[test['case_id']] = test
for result in results:
if result['test_case_id'] in mapping:
if mapping[result['test_case_id']]['status_id'] == 2 and ignore_blocked:
unwanted.append(result)
else:
result['test_id'] = mapping[result['test_case_id']]['id']
else:
unwanted.append(result)
logging.error("Test Case: %s with tag: %s not present in Run: %s" % (
result['rf_test_name'], result['test_case_id'], run_id))
return [e for e in results if e not in unwanted]
@staticmethod
def replace_caseid_with_testid_by_plan(api, results, plan_id, ignore_blocked: bool):
"""
This method parse all results generated by find_tests_tagged_by method
and replace all testcases ids with specific testruns ids found by ``plan_id``
Args:
``api`` is an instance of testrail.client.API already logged in
``results`` is a list of dictionaries that contains test results
``plan_id`` is the id of plan to scan for test runs to update
"""
runs = []
results_to_return = {} # Dict that contains results grouped by run id
test_cases_ids = [result['test_case_id'] for result in results] # All test cases ids found in output xml
suites = api.plan_with_id(plan_id=plan_id, with_entries=True) # Test Suites found by test plan id
for suite in suites['entries']:
for run in suite['runs']:
runs.append(run['id']) # Array of all run ids
for run in runs:
mapping = {} # Dictionary with K,V => test_case_id, test_run_id
tests = api.tests(run) # Tests included in a specific run
for test in tests:
if test['case_id'] in test_cases_ids:
mapping[test['case_id']] = test # Add mapping only if test is present in output xml
results_to_return[run] = [] # Initialize list of results of specific run
for result in results:
if result['test_case_id'] in mapping:
if mapping[result['test_case_id']]['status_id'] == 2 and ignore_blocked: # Ignore blocked test
pass
else:
result_copy = dict(result)
result_copy['test_id'] = mapping[result['test_case_id']]['id']
results_to_return[run].append(result_copy)
all_found = [r for r in results_to_return.values() if r]
# all_found contains all executed tests found in test runs
all_found = [found['test_case_id'] for found_dict in all_found for found in found_dict]
not_found = set(test_cases_ids) - set(all_found)
if not_found:
logging.error("Following Test Cases are not present in in Runs: %s: %s" % (runs, not_found))
return results_to_return
def get_elapsed(self, status):
"""
This method calculates the elapsed test execution time which is
endtime - starttime in test ``status`` node of output XML.
Returns a string of test execution seconds. Example: '5s'
Args:
``status`` is the status node found via xpath selector with lxml
"""
start_time = status.get('starttime')
end_time = status.get('endtime')
elapsed = datetime.strptime(end_time + '000', self.time_format) - datetime.strptime(start_time + '000',
self.time_format)
elapsed = round(elapsed.total_seconds())
elapsed = 1 if (elapsed < 1) else elapsed # TestRail API doesn't manage msec (min value=1s)
elapsed = "%ss" % elapsed
return elapsed
def get_comment(self, status):
"""
This method returns the test execution ``status`` message which is the
text attribute of test status node in output xml, if no text is found
it retuns an empty string. Comment maximum length is defined by
comment_size_limit variable.
Args:
``status`` is the status node found via xpath selector with lxml
"""
comment = '' if status.text is None else status.text
if comment:
comment = "# Robot Framework result: #\n " + comment[:self.comment_size_limit].replace('\n', '\n ')
comment += '\n...\nLog truncated' if len(str(comment)) > self.comment_size_limit else ''
return comment
def main():
"""
Main method to execute, it reads following arguments from cli:
-f --input => xml file to parse
-u --url => testrail url
-e --email => testrail email
-k --key => testrail password or apikey
-p --planid => testrail planid containing test runs to update
-r --runid => testrail runid to update
-t --tag => tag to be found in robotframework tests, default="test_case_id="
-i --ignore_blocked => Specifies if blocked test statuses shouldn't be updated
"""
parser = ArgumentParser()
parser.add_argument("-f",
"--input",
action="store",
type=FileType('r', encoding='UTF-8'),
dest="input_file",
help="Path to the XML report to parse",
required=True)
parser.add_argument("-u",
"--url",
action="store",
type=str,
dest="url",
help="Testrail url",
required=True)
parser.add_argument("-k",
"--key",
action="store",
type=str,
dest="key",
help="Password or apikey",
required=True)
parser.add_argument("-e",
"--email",
action="store",
type=str,
dest="email",
help="Email address",
required=True)
group_id = parser.add_mutually_exclusive_group(required=True)
group_id.add_argument("-r",
"--runid",
action="store",
type=str,
dest="runid",
help="Testrail run id")
group_id.add_argument("-p",
"--planid",
action="store",
type=str,
dest="planid",
help="Testrail run id")
parser.add_argument("-t",
"--tag",
action="store",
type=str,
dest="tag",
help="Tag prefix to insert in RF tests",
default="test_case_id=")
parser.add_argument("-i",
"--ignore_blocked",
action="store_true",
dest="ignore_blocked",
help="Specifies if blocked test statuses shouldn't be updated")
args = parser.parse_args()
url = args.url
email = args.email
key = args.key
tr_run_id = args.runid
tr_plan_id = args.planid
input_file = args.input_file
tag = args.tag
ignore_blocked = args.ignore_blocked
pp = pprint.PrettyPrinter()
tr_api = testrail.client.API(email=email, key=key, url=url)
reporter = RobotTestrailReporter()
res = reporter.find_tests_tagged_by(input_file, tag)
logging.debug("Tests found in RF XML:\n")
if args.runid is not None:
res = reporter.replace_caseid_with_testid_by_run(api=tr_api, results=res, run_id=tr_run_id, ignore_blocked=ignore_blocked)
if res:
logging.debug("Tests found in RF XML mapped with run test ids:\n")
logging.debug(pp.pformat(res))
reporter.publish_results(api=tr_api, run_id=tr_run_id, results=res)
logging.info("Results for test run: %s\n" % tr_run_id)
res_to_print = ["Test name: %s, Test id: %s, Status: %s" %
(result['rf_test_name'], result['test_id'], reporter.tr_to_rf_status[result['status_id']])
for
result in res]
logging.info("\n".join(res_to_print))
else:
res = reporter.replace_caseid_with_testid_by_plan(api=tr_api, results=res, plan_id=tr_plan_id, ignore_blocked=ignore_blocked)
for runid, results in res.items():
if results:
reporter.publish_results(api=tr_api, run_id=runid, results=results)
logging.debug("Tests found in RF XML mapped with run test id: %s\n" % runid)
logging.debug(pp.pformat(results))
logging.info("\nResults for test run: %s\n" % runid)
res_to_print = ["Test name: %s, Test id: %s, Status: %s" %
(result['rf_test_name'], result['test_id'],
reporter.tr_to_rf_status[result['status_id']])
for
result in results]
logging.info("\n".join(res_to_print))
if __name__ == '__main__':
main() | /robotframework-testrail-reporter-0.0.17.tar.gz/robotframework-testrail-reporter-0.0.17/src/RobotTestrailReporter/RobotTestrailReporter.py | 0.595257 | 0.254162 | RobotTestrailReporter.py | pypi |
import json
import re
import requests
import os
from typing import Any, Dict, List, Optional, Union
from robot.api import logger
from TestRailAPIClient import JsonDict, TestRailAPIClient
__author__ = "Dmitriy.Zverev"
__license__ = "Apache License, Version 2.0"
class TestRailListener(object):
"""Fixing of testing results and update test case in [ http://www.gurock.com/testrail/ | TestRail ].
== Dependencies ==
| past | https://pypi.org/project/past/ |
| requests | https://pypi.python.org/pypi/requests |
| robot framework | http://robotframework.org |
| TestRailAPIClient |
== Preconditions ==
1. [ http://docs.gurock.com/testrail-api2/introduction | Enable TestRail API] \n
2. Create custom field "case_description" with type "text", which corresponds to the Robot Framework's test case documentation.
== Example ==
1. Create test case in TestRail with case_id = 10\n
2. Add it to test run with id run_id = 20\n
3. Create autotest in Robot Framework
| *** Settings ***
| *** Test Cases ***
| Autotest name
| [Documentation] Autotest documentation
| [Tags] testrailid=10 defects=BUG-1, BUG-2 references=REF-3, REF-4
| Fail Test fail message
4. Run Robot Framework with listener:\n
| set ROBOT_SYSLOG_FILE=syslog.txt
| robot --listener TestRailListener.py:testrail_server_name:tester_user_name:tester_user_password:20:https:update autotest.robot
5. Test with case_id=10 will be marked as failed in TestRail with message "Test fail message" and defects "BUG-1, BUG-2".
Also title, description and references of this test will be updated in TestRail. Parameter "update" is optional.
"""
ROBOT_LISTENER_API_VERSION = 2
ELAPSED_KEY = 'elapsed'
TESTRAIL_CASE_TYPE_ID_AUTOMATED = 1
TESTRAIL_TEST_STATUS_ID_PASSED = 1
TESTRAIL_TEST_STATUS_ID_FAILED = 5
def __init__(self, server: str, user: str, password: str, run_id: str, protocol: str = 'http',
juggler_disable: str = None, update: str = None) -> None:
"""Listener initialization.
*Args:*\n
_server_ - name of TestRail server;\n
_user_ - name of TestRail user;\n
_password_ - password of TestRail user;\n
_run_id_ - ID of the test run;\n
_protocol_ - connecting protocol to TestRail server: http or https;\n
_juggler_disable_ - indicator to disable juggler logic; if exist, then juggler logic will be disabled;\n
_update_ - indicator to update test case in TestRail; if exist, then test will be updated.
"""
testrail_url = '{protocol}://{server}/testrail/'.format(protocol=protocol, server=server)
self._url = testrail_url + 'index.php?/api/v2/'
self._user = user
self._password = password
self.run_id = run_id
self.juggler_disable = juggler_disable
self.update = update
self.tr_client = TestRailAPIClient(server, user, password, run_id, protocol)
self._vars_for_report_link: Optional[Dict[str, str]] = None
logger.info('[TestRailListener] url: {testrail_url}'.format(testrail_url=testrail_url))
logger.info('[TestRailListener] user: {user}'.format(user=user))
logger.info('[TestRailListener] the ID of the test run: {run_id}'.format(run_id=run_id))
def end_test(self, name: str, attributes: JsonDict) -> None:
""" Update test case in TestRail.
*Args:* \n
_name_ - name of test case in Robot Framework;\n
_attributes_ - attributes of test case in Robot Framework.
"""
tags_value = self._get_tags_value(attributes['tags'])
case_id = tags_value['testrailid']
if not case_id:
logger.warn(f"[TestRailListener] No case_id presented for test_case {name}.")
return
if 'skipped' in [tag.lower() for tag in attributes['tags']]:
logger.warn(f"[TestRailListener] SKIPPED test case \"{name}\" with testrailId={case_id} "
"will not be posted to Testrail")
return
# Update test case
if self.update:
references = tags_value['references']
self._update_case_description(attributes, case_id, name, references)
# Send test results
defects = tags_value['defects']
old_test_status_id = self.tr_client.get_test_status_id_by_case_id(self.run_id, case_id)
test_result = self._prepare_test_result(attributes, defects, old_test_status_id, case_id)
try:
self.tr_client.add_result_for_case(self.run_id, case_id, test_result)
except requests.HTTPError as error:
logger.error(f"[TestRailListener] http error on case_id = {case_id}\n{error}")
def _update_case_description(self, attributes: JsonDict, case_id: str, name: str,
references: Optional[str]) -> None:
""" Update test case description in TestRail
*Args:* \n
_attributes_ - attributes of test case in Robot Framework;\n
_case_id_ - case id;\n
_name_ - test case name;\n
_references_ - test references.
"""
logger.info(f"[TestRailListener] update of test {case_id} in TestRail")
description = f"{attributes['doc']}\nPath to test: {attributes['longname']}"
request_fields: Dict[str, Union[str, int, None]] = {
'title': name, 'type_id': self.TESTRAIL_CASE_TYPE_ID_AUTOMATED,
'custom_case_description': description, 'refs': references}
try:
json_result = self.tr_client.update_case(case_id, request_fields)
result = json.dumps(json_result, sort_keys=True, indent=4)
logger.info(f"[TestRailListener] result for method update_case: {result}")
except requests.HTTPError as error:
logger.error(f"[TestRailListener] http error, while execute request:\n{error}")
def _prepare_test_result(self, attributes: JsonDict, defects: Optional[str], old_test_status_id: Optional[int],
case_id: str) -> Dict[str, Union[str, int]]:
"""Create json with test result information.
*Args:* \n
_attributes_ - attributes of test case in Robot Framework;\n
_defects_ - list of defects (in string, comma-separated);\n
_old_test_status_id_ - old test status id;\n
_case_id_ - test case ID.
*Returns:*\n
Dictionary with test results.
"""
link_to_report = self._get_url_report_by_case_id(case_id)
test_time = float(attributes['elapsedtime']) / 1000
comment = f"Autotest name: {attributes['longname']}\nMessage: {attributes['message']}\nTest time:" \
f" {test_time:.3f} s"
if link_to_report:
comment += f'\nLink to Report: {link_to_report}'
if self.juggler_disable:
if attributes['status'] == 'PASS':
new_test_status_id = self.TESTRAIL_TEST_STATUS_ID_PASSED
else:
new_test_status_id = self.TESTRAIL_TEST_STATUS_ID_FAILED
else:
new_test_status_id = self._prepare_new_test_status_id(attributes['status'], old_test_status_id)
test_result: Dict[str, Union[str, int]] = {
'status_id': new_test_status_id,
'comment': comment,
}
elapsed_time = TestRailListener._time_span_format(test_time)
if elapsed_time:
test_result[TestRailListener.ELAPSED_KEY] = elapsed_time
if defects:
test_result['defects'] = defects
return test_result
def _prepare_new_test_status_id(self, new_test_status: str, old_test_status_id: Optional[int]) -> int:
"""Prepare new test status id by new test status and old test status id.
Alias of this method is "juggler".
If new test status is "PASS", new test status id is "passed".
If new test status is "FAIL" and old test status id is null or "passed" or "failed",
new test status id is "failed".
In all other cases new test status id is equal to old test status id.
*Args:* \n
_new_test_status_ - new test status;\n
_old_test_status_id_ - old test status id.
*Returns:*\n
New test status id.
"""
old_statuses_to_fail = (self.TESTRAIL_TEST_STATUS_ID_PASSED, self.TESTRAIL_TEST_STATUS_ID_FAILED, None)
if new_test_status == 'PASS':
new_test_status_id = self.TESTRAIL_TEST_STATUS_ID_PASSED
elif new_test_status == 'FAIL' and old_test_status_id in old_statuses_to_fail:
new_test_status_id = self.TESTRAIL_TEST_STATUS_ID_FAILED
else:
assert old_test_status_id is not None
new_test_status_id = old_test_status_id
return new_test_status_id
@staticmethod
def _get_tags_value(tags: List[str]) -> Dict[str, Optional[str]]:
""" Get value from robot framework's tags for TestRail.
*Args:* \n
_tags_ - list of tags.
*Returns:* \n
Dict with attributes.
"""
attributes: Dict[str, Optional[str]] = dict()
matchers = ['testrailid', 'defects', 'references']
for matcher in matchers:
for tag in tags:
match = re.match(matcher, tag)
if match:
split_tag = tag.split('=')
tag_value = split_tag[1]
attributes[matcher] = tag_value
break
else:
attributes[matcher] = None
return attributes
@staticmethod
def _time_span_format(seconds: Any) -> str:
""" Format seconds to time span format.
*Args:*\n
_seconds_ - time in seconds.
*Returns:*\n
Time formatted in Time span.
"""
if isinstance(seconds, float):
seconds = int(seconds)
elif not isinstance(seconds, int):
seconds = 0
if seconds <= 0:
return ''
s = seconds % 60
res = "{}s".format(s)
seconds -= s
if seconds >= 60:
m = (seconds % 60 ** 2) // 60
res = "{}m {}".format(m, res)
seconds -= m * 60
if seconds >= 60 ** 2:
h = seconds // 60 ** 2
res = "{}h {}".format(h, res)
return res
@staticmethod
def _get_vars_for_report_link() -> Dict[str, str]:
"""" Getting value from environment variables for prepare link to report.
If test cases are started by means of CI, then must define the environment variables
in the CI configuration settings to getting url to the test case report.
The following variables are used:
for Teamcity - TEAMCITY_HOST_URL, TEAMCITY_BUILDTYPE_ID, TEAMCITY_BUILD_ID,
REPORT_ARTIFACT_PATH, TORS_REPORT,
for Jenkins - JENKINS_BUILD_URL.
If these variables are not found, then the link to report will not be formed.
== Example ==
1. for Teamcity
| Changing build configuration settings
| REPORT_ARTIFACT_PATH output
| TORS_REPORT report.html
| TEAMCITY_BUILD_ID %teamcity.build.id%
| TEAMCITY_BUILDTYPE_ID %system.teamcity.buildType.id%
| TEAMCITY_HOST_URL https://teamcity.billing.ru
2. for Jenkins
| add to the shell the execution of the docker container parameter
| -e "JENKINS_BUILD_URL = ${BUILD_URL}"
*Returns:*\n
Dictionary with environment variables results.
"""
variables: Dict[str, str] = {}
env_var = os.environ.copy()
if 'TEAMCITY_HOST_URL' in env_var:
try:
teamcity_vars = {'TEAMCITY_HOST_URL',
'TEAMCITY_BUILDTYPE_ID',
'TEAMCITY_BUILD_ID',
'REPORT_ARTIFACT_PATH'}
variables = {var: env_var[var] for var in teamcity_vars}
except KeyError:
logger.error("[TestRailListener] There are no variables for getting a link to the report by tests.")
if env_var.get('TORS_REPORT', '').strip():
variables['TORS_REPORT'] = env_var['TORS_REPORT']
elif 'JENKINS_BUILD_URL' in env_var:
variables = {'JENKINS_BUILD_URL': env_var['JENKINS_BUILD_URL']}
return variables
@property
def vars_for_report_link(self) -> Dict[str, str]:
"""Get variables for report link.
Saves environment variables information once and then returns cached values.
*Returns:*\n
Cached variables for report link.
"""
if not self._vars_for_report_link:
self._vars_for_report_link = self._get_vars_for_report_link()
return self._vars_for_report_link
def _get_url_report_by_case_id(self, case_id: Union[str, int]) -> Optional[str]:
"""" Getting url for Report by id test case.
*Args:* \n
_case_id_ - test case ID.
*Returns:*\n
Report URL.
"""
build_url = ''
report_filename = self.vars_for_report_link.get('TORS_REPORT', 'report.html')
report_uri = f'{report_filename}#search?include=testrailid={case_id}'
if 'TEAMCITY_HOST_URL' in self.vars_for_report_link:
vars = self.vars_for_report_link
base_hostname = vars.get('TEAMCITY_HOST_URL')
buildtype_id = vars.get('TEAMCITY_BUILDTYPE_ID')
build_id = vars.get('TEAMCITY_BUILD_ID')
report_artifact_path = vars.get('REPORT_ARTIFACT_PATH')
build_url = f'{base_hostname}/repository/download/{buildtype_id}/{build_id}:id/{report_artifact_path}'
elif 'JENKINS_BUILD_URL' in self.vars_for_report_link:
build_url = self.vars_for_report_link['JENKINS_BUILD_URL'] + 'robot/report'
return f'{build_url}/{report_uri}' if build_url else None | /robotframework-testrail-2.0.1.tar.gz/robotframework-testrail-2.0.1/src/TestRailListener.py | 0.763131 | 0.345519 | TestRailListener.py | pypi |
from requests import post, get
from typing import Any, cast, Dict, List, Optional, Sequence, Union
DEFAULT_TESTRAIL_HEADERS = {'Content-Type': 'application/json'}
TESTRAIL_STATUS_ID_PASSED = 1
# custom types
JsonDict = Dict[str, Any] # noqa: E993
JsonList = List[JsonDict] # noqa: E993
Id = Union[str, int] # noqa: E993
class TestRailAPIClient(object):
"""Library for working with [http://www.gurock.com/testrail/ | TestRail].
== Dependencies ==
| requests | https://pypi.python.org/pypi/requests |
== Preconditions ==
1. [ http://docs.gurock.com/testrail-api2/introduction | Enable TestRail API]
"""
def __init__(self, server: str, user: str, password: str, run_id: Id, protocol: str = 'http') -> None:
"""Create TestRailAPIClient instance.
*Args:*\n
_server_ - name of TestRail server;\n
_user_ - name of TestRail user;\n
_password_ - password of TestRail user;\n
_run_id_ - ID of the test run;\n
_protocol_ - connecting protocol to TestRail server: http or https.
"""
self._url = '{protocol}://{server}/testrail/index.php?/api/v2/'.format(protocol=protocol, server=server)
self._user = user
self._password = password
self.run_id = run_id
def _send_post(self, uri: str, data: Dict[str, Any]) -> Union[JsonList, JsonDict]:
"""Perform post request to TestRail.
*Args:* \n
_uri_ - URI for test case;\n
_data_ - json with test result.
*Returns:* \n
Request result in json format.
"""
url = self._url + uri
response = post(url, json=data, auth=(self._user, self._password), verify=False)
response.raise_for_status()
return response.json()
def _send_get(self, uri: str, headers: Dict[str, str] = None,
params: Dict[str, Any] = None) -> Union[JsonList, JsonDict]:
"""Perform get request to TestRail.
*Args:* \n
_uri_ - URI for test case;\n
_headers_ - headers for http-request;\n
_params_ - parameters for http-request.
*Returns:* \n
Request result in json format.
"""
url = self._url + uri
response = get(url, headers=headers, params=params, auth=(self._user, self._password), verify=False)
response.raise_for_status()
return response.json()
def get_tests(self, run_id: Id, status_ids: Union[str, Sequence[int]] = None) -> JsonList:
"""Get tests from TestRail test run by run_id.
*Args:* \n
_run_id_ - ID of the test run;\n
_status_ids_ - list of the required test statuses.
*Returns:* \n
Tests information in json format.
"""
uri = 'get_tests/{run_id}'.format(run_id=run_id)
if status_ids:
status_ids = ','.join(str(status_id) for status_id in status_ids)
params = {
'status_id': status_ids
}
response = self._send_get(uri=uri, headers=DEFAULT_TESTRAIL_HEADERS, params=params)
return cast(JsonList, response)
def get_results_for_case(self, run_id: Id, case_id: Id, limit: int = None) -> JsonList:
"""Get results for case by run_id and case_id.
*Args:* \n
_run_id_ - ID of the test run;\n
_case_id_ - ID of the test case;\n
_limit_ - limit of case results.
*Returns:* \n
Cases results in json format.
"""
uri = 'get_results_for_case/{run_id}/{case_id}'.format(run_id=run_id, case_id=case_id)
params = {
'limit': limit
}
response = self._send_get(uri=uri, headers=DEFAULT_TESTRAIL_HEADERS, params=params)
return cast(JsonList, response)
def add_result_for_case(self, run_id: Id, case_id: Id,
test_result_fields: Dict[str, Union[str, int]]) -> None:
"""Add results for case in TestRail test run by run_id and case_id.
*Supported request fields for test result:*\n
| *Name* | *Type* | *Description* |
| status_id | int | The ID of the test status |
| comment | string | The comment / description for the test result |
| version | string | The version or build you tested against |
| elapsed | timespan | The time it took to execute the test, e.g. "30s" or "1m 45s" |
| defects | string | A comma-separated list of defects to link to the test result |
| assignedto_id | int | The ID of a user the test should be assigned to |
| Custom fields are supported as well and must be submitted with their system name, prefixed with 'custom_' |
*Args:* \n
_run_id_ - ID of the test run;\n
_case_id_ - ID of the test case;\n
_test_result_fields_ - result of the test fields dictionary.
*Example:*\n
| Add Result For Case | run_id=321 | case_id=123| test_result={'status_id': 3, 'comment': 'This test is untested', 'defects': 'DEF-123'} |
"""
uri = 'add_result_for_case/{run_id}/{case_id}'.format(run_id=run_id, case_id=case_id)
self._send_post(uri, test_result_fields)
def get_statuses(self) -> JsonList:
"""Get test statuses information from TestRail.
*Returns:* \n
Statuses information in json format.
"""
uri = 'get_statuses'
response = self._send_get(uri=uri, headers=DEFAULT_TESTRAIL_HEADERS)
return cast(JsonList, response)
def update_case(self, case_id: Id, request_fields: Dict[str, Union[str, int, None]]) -> JsonDict:
"""Update an existing test case in TestRail.
*Supported request fields:*\n
| *Name* | *Type* | *Description* |
| title | string | The title of the test case (required) |
| template_id | int | The ID of the template (field layout) (requires TestRail 5.2 or later) |
| type_id | int | The ID of the case type |
| priority_id | int | The ID of the case priority |
| estimate | timespan | The estimate, e.g. "30s" or "1m 45s" |
| milestone_id | int | The ID of the milestone to link to the test case |
| refs | string | A comma-separated list of references/requirements |
| Custom fields are supported as well and must be submitted with their system name, prefixed with 'custom_' |
*Args:* \n
_case_id_ - ID of the test case;\n
_request_fields_ - request fields dictionary.
*Returns:* \n
Case information in json format.
*Example:*\n
| Update Case | case_id=213 | request_fields={'title': name, 'type_id': 1, 'custom_case_description': description, 'refs': references} |
"""
uri = 'update_case/{case_id}'.format(case_id=case_id)
response = self._send_post(uri, request_fields)
return cast(JsonDict, response)
def get_status_id_by_status_label(self, status_label: str) -> int:
"""Get test status id by status label.
*Args:* \n
_status_label_ - status label of the tests.
*Returns:* \n
Test status ID.
"""
statuses_info = self.get_statuses()
for status in statuses_info:
if status['label'].lower() == status_label.lower():
return status['id']
raise Exception(u"There is no status with label \'{}\' in TestRail".format(status_label))
def get_test_status_id_by_case_id(self, run_id: Id, case_id: Id) -> Optional[int]:
"""Get test last status id by case id.
If there is no last test result returns None.
*Args:* \n
_run_id_ - ID of the test run;\n
_case_id_ - ID of the test case.
*Returns:* \n
Test status ID.
"""
last_case_result = self.get_results_for_case(run_id=run_id, case_id=case_id, limit=1)
return last_case_result[0]['status_id'] if last_case_result else None
def get_project(self, project_id: Id) -> JsonDict:
"""Get project info by project id.
*Args:* \n
_project_id_ - ID of the project.
*Returns:* \n
Request result in json format.
"""
uri = 'get_project/{project_id}'.format(project_id=project_id)
response = self._send_get(uri=uri, headers=DEFAULT_TESTRAIL_HEADERS)
return cast(JsonDict, response)
def get_suite(self, suite_id: Id) -> JsonDict:
"""Get suite info by suite id.
*Args:* \n
_suite_id_ - ID of the test suite.
*Returns:* \n
Request result in json format.
"""
uri = 'get_suite/{suite_id}'.format(suite_id=suite_id)
response = self._send_get(uri=uri, headers=DEFAULT_TESTRAIL_HEADERS)
return cast(JsonDict, response)
def get_section(self, section_id: Id) -> JsonDict:
"""Get section info by section id.
*Args:* \n
_section_id_ - ID of the section.
*Returns:* \n
Request result in json format.
"""
uri = 'get_section/{section_id}'.format(section_id=section_id)
response = self._send_get(uri=uri, headers=DEFAULT_TESTRAIL_HEADERS)
return cast(JsonDict, response)
def add_section(self, project_id: Id, name: str, suite_id: Id = None, parent_id: Id = None,
description: str = None) -> JsonDict:
"""Creates a new section.
*Args:* \n
_project_id_ - ID of the project;\n
_name_ - name of the section;\n
_suite_id_ - ID of the test suite(ignored if the project is operating in single suite mode);\n
_parent_id_ - ID of the parent section (to build section hierarchies);\n
_description_ - description of the section.
*Returns:* \n
New section information.
"""
uri = 'add_section/{project_id}'.format(project_id=project_id)
data: Dict[str, Union[int, str]] = {'name': name}
if suite_id is not None:
data['suite_id'] = suite_id
if parent_id is not None:
data['parent_id'] = parent_id
if description is not None:
data['description'] = description
response = self._send_post(uri=uri, data=data)
return cast(JsonDict, response)
def get_sections(self, project_id: Id, suite_id: Id) -> JsonList:
"""Returns existing sections.
*Args:* \n
_project_id_ - ID of the project;\n
_suite_id_ - ID of the test suite.
*Returns:* \n
Information about section.
"""
uri = 'get_sections/{project_id}&suite_id={suite_id}'.format(project_id=project_id, suite_id=suite_id)
response = self._send_get(uri=uri, headers=DEFAULT_TESTRAIL_HEADERS)
return cast(JsonList, response)
def get_case(self, case_id: Id) -> JsonDict:
"""Get case info by case id.
*Args:* \n
_case_id_ - ID of the test case.
*Returns:* \n
Request result in json format.
"""
uri = 'get_case/{case_id}'.format(case_id=case_id)
response = self._send_get(uri=uri, headers=DEFAULT_TESTRAIL_HEADERS)
return cast(JsonDict, response)
def get_cases(self, project_id: Id, suite_id: Id = None, section_id: Id = None) -> JsonList:
"""Returns a list of test cases for a test suite or specific section in a test suite.
*Args:* \n
_project_id_ - ID of the project;\n
_suite_id_ - ID of the test suite (optional if the project is operating in single suite mode);\n
_section_id_ - ID of the section (optional).
*Returns:* \n
Information about test cases in section.
"""
uri = 'get_cases/{project_id}'.format(project_id=project_id)
params = {'project_id': project_id}
if suite_id is not None:
params['suite_id'] = suite_id
if section_id is not None:
params['section_id'] = section_id
response = self._send_get(uri=uri, headers=DEFAULT_TESTRAIL_HEADERS, params=params)
return cast(JsonList, response)
def add_case(self, section_id: Id, title: str, steps: List[Dict[str, str]], description: str, refs: str,
type_id: Id, priority_id: Id, **additional_data: Any) -> JsonDict:
"""Creates a new test case.
*Args:* \n
_section_id_ - ID of the section;\n
_title_ - title of the test case;\n
_steps_ - test steps;\n
_description_ - test description;\n
_refs_ - comma-separated list of references;\n
_type_id_ - ID of the case type;\n
_priority_id_ - ID of the case priority;\n
_additional_data_ - additional parameters.
*Returns:* \n
Information about new test case.
"""
uri = 'add_case/{section_id}'.format(section_id=section_id)
data = {
'title': title,
'custom_case_description': description,
'custom_steps_separated': steps,
'refs': refs,
'type_id': type_id,
'priority_id': priority_id
}
for key in additional_data:
data[key] = additional_data[key]
response = self._send_post(uri=uri, data=data)
return cast(JsonDict, response) | /robotframework-testrail-2.0.1.tar.gz/robotframework-testrail-2.0.1/src/TestRailAPIClient.py | 0.922137 | 0.219463 | TestRailAPIClient.py | pypi |
from typing import List, Optional
from concurrent.futures import ThreadPoolExecutor, as_completed, TimeoutError, Future
from requests.exceptions import RequestException
from robot.api import SuiteVisitor, TestSuite
from robot.output import LOGGER
from TestRailAPIClient import TestRailAPIClient, TESTRAIL_STATUS_ID_PASSED
CONNECTION_TIMEOUT = 60 # Value in seconds of timeout connection with testrail for one request
class TestRailPreRunModifier(SuiteVisitor):
"""Pre-run modifier for starting test cases from a certain test run in [http://www.gurock.com/testrail/ | TestRail].
== Dependencies ==
| robot framework | http://robotframework.org |
| TestRailAPIClient |
== Preconditions ==
1. [ http://docs.gurock.com/testrail-api2/introduction | Enable TestRail API] \n
== Example ==
1. Create test cases in TestRail with case_id: 10,11,12. \n
2. Add test cases with case_id:10,12 into test run with run_id = 20. \n
3. Create robot_suite in Robot Framework:
| *** Test Cases ***
| Autotest name 1
| [Documentation] Autotest 1 documentation
| [Tags] testrailid=10
| Fail Test fail message
| Autotest name 2
| [Documentation] Autotest 2 documentation
| [Tags] testrailid=11
| Fail Test fail message
| Autotest name 3
| [Documentation] Autotest 3 documentation
| [Tags] testrailid=12
| Fail Test fail message
4. Run Robot Framework with pre-run modifier:
| robot --prerunmodifier TestRailPreRunModifier:testrail_server_name:tester_user_name:tester_user_password:20:http:0 robot_suite.robot
5. Test cases "Autotest name 1" and "Autotest name 3" will be executed. Test case "Autotest name 2" will be skipped.
6. To execute tests from TestRail test run only with a certain status, for example "failed" and "blocked":
| robot --prerunmodifier TestRailPreRunModifier:testrail_server_name:tester_user_name:tester_user_password:20:http:0:failed:blocked robot_suite.robot
6. To execute stable tests from TestRail test run with run analysis depth = 5:
| robot --prerunmodifier TestRailPreRunModifier:testrail_server_name:tester_user_name:tester_user_password:20:http:5 robot_suite.robot
"""
def __init__(self, server: str, user: str, password: str, run_id: str, protocol: str, # noqa: E951
results_depth: str, *status_names: str) -> None:
"""Pre-run modifier initialization.
*Args:*\n
_server_ - name of TestRail server;\n
_user_ - name of TestRail user;\n
_password_ - password of TestRail user;\n
_run_id_ - ID of the test run;\n
_protocol_ - connecting protocol to TestRail server: http or https;\n
_results_depth_ - analysis depth of run results;\n
_status_names_ - name of test statuses in TestRail.
"""
self.run_id = run_id
self.status_names = status_names
self.tr_client = TestRailAPIClient(server, user, password, run_id, protocol)
self.results_depth = int(results_depth) if str(results_depth).isdigit() else 0
self._tr_tags_list: Optional[List[str]] = None
self._tr_stable_tags_list: Optional[List[str]] = None
LOGGER.register_syslog()
@property
def tr_stable_tags_list(self) -> List[str]:
"""Gets list of 'testrailid' tags of the stable test cases.
Returns:
List of tags.
"""
if self._tr_stable_tags_list is None:
self._tr_stable_tags_list = self._get_tr_stable_tags_list()
return self._tr_stable_tags_list
@property
def tr_tags_list(self) -> List[str]:
"""Gets 'testrailid' tags.
Returns:
List of tags.
"""
if self._tr_tags_list is None:
self._tr_tags_list = self._get_tr_tags_list()
return self._tr_tags_list
def _log_to_parent_suite(self, suite: TestSuite, message: str) -> None:
"""Log message to the parent suite.
*Args:*\n
_suite_ - Robot Framework test suite object.
_message_ - message.
"""
if suite.parent is None:
LOGGER.error("{suite}: {message}".format(suite=suite, message=message))
def _get_tr_tags_list(self) -> List[str]:
"""Get list of 'testrailid' tags.
If required test statuses from the test run are passed to modifier,
a request is made to the TestRail to obtain information about all the statuses.
Theirs identifiers will be retrieved from list of all the statuses.
This identifiers will be used to receive test tags in the required status.
If statuses aren't passed to modifier,
the tags of all tests in the test run will be obtained regardless of their status.
Returns:
List of tags.
"""
status_ids = None
if self.status_names:
status_ids = [self.tr_client.get_status_id_by_status_label(name) for name in self.status_names]
tests_info = self.tr_client.get_tests(run_id=self.run_id, status_ids=status_ids)
return ['testrailid={}'.format(test["case_id"]) for test in tests_info if test["case_id"] is not None]
def _get_tr_stable_tags_list(self) -> List[str]:
"""Get list of 'testrailid' tags of the stable test cases.
If analysis depth of the run results is passed to modifier and its value greater than zero,
a request is made to the TestRail to receive information about test cases whose last result is 'passed'.
Based on the information received, the results of the latest runs for these test cases are analyzed,
on the basis of which the tags of stable test cases will be received.
Returns:
List of stable tags.
"""
stable_case_ids_list = list()
catched_exceptions = list()
passed_tests_info = self.tr_client.get_tests(run_id=self.run_id, status_ids=[TESTRAIL_STATUS_ID_PASSED])
case_ids = [test["case_id"] for test in passed_tests_info if test["case_id"] is not None]
def future_handler(future: Future) -> None:
"""Get result from future with try/except block and to list.
Args:
future: future object.
"""
case_id = futures[future]
try:
case_results = future.result()
except RequestException as exception:
catched_exceptions.append(exception)
else:
passed_list = [result for result in case_results if
result['status_id'] == TESTRAIL_STATUS_ID_PASSED]
if len(passed_list) == int(self.results_depth):
stable_case_ids_list.append(case_id)
with ThreadPoolExecutor() as executor:
futures = {executor.submit(self.tr_client.get_results_for_case, self.run_id, case_id, self.results_depth):
case_id for case_id in case_ids}
for future in as_completed(futures, timeout=CONNECTION_TIMEOUT):
future_handler(future)
if catched_exceptions:
raise catched_exceptions[0]
return ['testrailid={}'.format(case_id) for case_id in stable_case_ids_list]
def start_suite(self, suite: TestSuite) -> None:
"""Form list of tests for the Robot Framework test suite that are included in the TestRail test run.
If analysis depth of the run results is greater than zero, when first suite is launched
a list of 'testrailid' tags of stable test cases is obtained.
After that the list of tags is written to the class attribute and for subsequent suites the obtaining is not happening.
If analysis depth of the run results is zero, when the first suite is launched
a list of 'testrailid' tags of all test cases in the given status is obtained.
After that the list of tags is written to the class attribute and for subsequent suites the obtaining is not happening.
*Args:*\n
_suite_ - Robot Framework test suite object.
"""
tests = suite.tests
suite.tests = None
try:
if self.results_depth > 0:
suite.tests = [t for t in tests if (set(t.tags) & set(self.tr_stable_tags_list))]
else:
suite.tests = [t for t in tests if (set(t.tags) & set(self.tr_tags_list))]
except (RequestException, TimeoutError) as error:
self._log_to_parent_suite(suite, str(error))
def end_suite(self, suite: TestSuite) -> None:
"""Removing test suites that are empty after excluding tests that are not part of the TestRail test run.
*Args:*\n
_suite_ - Robot Framework test suite object.
"""
suite.suites = [s for s in suite.suites if s.test_count > 0]
if not suite.suites:
self._log_to_parent_suite(suite, "No tests to execute after using TestRail pre-run modifier.") | /robotframework-testrail-2.0.1.tar.gz/robotframework-testrail-2.0.1/src/TestRailPreRunModifier.py | 0.855157 | 0.469824 | TestRailPreRunModifier.py | pypi |
#to generate libdoc documentation run:
# python -m robot.libdoc TftpLibrary TftpLibrary.html
import os
from datetime import datetime
import tftpy
class TftpLibrary(object):
"""
This library provides functionality of TFTP client.
[https://en.wikipedia.org/wiki/Trivial_File_Transfer_Protocol|Trivial File Transfer Protocol]
isn't a complex protocol so the library contains only small amount of keywords.
Very often TFTP communication is used by telecom equipment for purpose of uploading
configuration or getting log files (e.g. Cisco routers).
Version 1.1 released on 25th of December 2017
What's new in release 1.1:
- Python 3 support
- Setup bugfix by [https://github.com/zwei22|Jinhyuk.Im]
TFTP communication provided by [http://tftpy.sourceforge.net/|tftpy]
Author: [https://github.com/kowalpy|Marcin Kowalczyk]
Website: https://github.com/kowalpy/Robot-Framework-TFTP-Library
Installation:
- run command: pip install robotframework-tftplibrary
OR
- download, unzip and run command: python setup.py install
The simplest example (connect, download file, upload file):
| Tftp Connect | ${tftp_server_address} |
| Tftp Download | ${file_name_01} |
| Tftp Upload | ${file_name_02} |
"""
ROBOT_LIBRARY_SCOPE = 'TEST SUITE'
def __init__(self, timeout=5):
"""
Library import:
| Library | TftpLibrary.py |
Timeout can be configured during import:
| Library | TftpLibrary.py | 10 |
"""
self.tftp_client = None
self.timeout = timeout
def __check_tftp_client(self):
if isinstance(self.tftp_client, tftpy.TftpClient):
return True
else:
err_msg_not_init = "Tftp client not initiated. Use Tftp Connect keyword first."
raise TftpLibraryError(err_msg_not_init)
def tftp_connect(self, tftp_server_address, port_number=69):
"""
Initiates tftpy.TftpClient object providing server address and port number.
Unlike FTP, TFTP does not keep established connection to a server.
However calling [#Tftp Connect|Tftp Connect ] keyword before other operations
is a must. To initiate connection to another TFTP server during test just
call this keyword once again providing valid IP address and port number.
Parameters:
- tftp_server_address - TFTP server IP address
- port_number(optional) - TFTP server port number, by default 69
Example:
| Tftp Connect | ${tftp_server_address} |
"""
try:
self.tftp_client = tftpy.TftpClient(tftp_server_address, port_number)
except Exception as e:
raise TftpLibraryError(e)
def tftp_upload(self, local_file_path, remote_file_name=None):
"""
Sends file from local drive to TFTP server. Before calling this keyword,
[#Tftp Connect|Tftp Connect ] must be called.
Parameters:
- local_file_path - file name or path to a file on a local drive
- remote_file_name (optional) - a name under which file should be saved
If remote_file_name agument is not given, local name would be used
Examples:
| tftp upload | test_file_01.txt | |
| tftp upload | test_file_01.txt | new_file_name_08.txt |
| tftp upload | c:/Temp/new_file.txt | |
| tftp upload | c:/Temp/new_file.txt | new_file_name_108.txt |
"""
if self.__check_tftp_client():
remote_file = ""
local_file_path = os.path.normpath(local_file_path)
if not os.path.isfile(local_file_path):
raise TftpLibraryError("%s is no a valid file path" % local_file_path)
else:
if remote_file_name == None:
file_tuple = os.path.split(local_file_path)
if len(file_tuple)==2:
remote_file = file_tuple[1]
else:
timestamp = datetime.now().strftime("%H%M%S%f")
remote_file = timestamp
else:
remote_file = remote_file_name
try:
local_file_path = str(local_file_path)
remote_file = str(remote_file)
self.tftp_client.upload(remote_file, local_file_path)
except Exception as e:
raise TftpLibraryError(e)
def tftp_download(self, remote_file_name, local_file_path=None):
"""
Downloads file from TFTP server. Before calling this keyword,
[#Tftp Connect|Tftp Connect ] must be called.
Parameters:
- remote_file_name - file name on TFTP server
- local_file_path (optional) - local file name or path where remote file
should be saved
If local_file_path is not given, file is saved in current local directory
(by default folder containing robot framework project file) with the same
name as source file
Examples:
| tftp download | test_file_01.txt | |
| tftp download | test_file_01.txt | test_file_01.txt__ |
| tftp download | test_file_01.txt | c:/Temp |
| tftp download | test_file_01.txt | c:/Temp/new_file.txt |
"""
if self.__check_tftp_client():
local_path = ""
if local_file_path == None:
local_path = remote_file_name
else:
local_path = os.path.normpath(local_file_path)
if os.path.isdir(local_path):
local_path = os.path.join(local_path, remote_file_name)
try:
remote_file_name = str(remote_file_name)
local_path = str(local_path)
self.tftp_client.download(remote_file_name, local_path)
except Exception as e:
raise TftpLibraryError(e)
class TftpLibraryError(Exception):
def __init__(self, msg):
self.msg = msg
def __str__(self):
return self.msg
def main():
print("Robot Framework TFTP library. Not intended to run as standalone process. ")
print("Webpage: https://github.com/kowalpy/Robot-Framework-TFTP-Library")
if __name__ == '__main__':
main() | /robotframework-tftplibrary-1.1.zip/robotframework-tftplibrary-1.1/TftpLibrary.py | 0.595375 | 0.338733 | TftpLibrary.py | pypi |
import functools
import re
from robot.api.parsing import Comment, ModelVisitor, Token
def skip_if_disabled(func):
"""
Do not transform node if it's not within passed ``start_line`` and ``end_line`` or
it does match any ``# robotidy: off`` disabler
"""
@functools.wraps(func)
def wrapper(self, node, *args, **kwargs):
if self.disablers.is_node_disabled(node):
return node
return func(self, node, *args, **kwargs)
return wrapper
def get_section_name_from_header_type(node):
header_type = node.header.type if node.header else "COMMENT HEADER"
return {
"SETTING HEADER": "settings",
"VARIABLE HEADER": "variables",
"TESTCASE HEADER": "testcases",
"TASK HEADER": "tasks",
"KEYWORD HEADER": "keywords",
"COMMENT HEADER": "comments",
}.get(header_type, "invalid")
def skip_section_if_disabled(func):
"""
Does the same checks as ``skip_if_disabled`` and additionally checks
if the section header does not contain disabler
"""
@functools.wraps(func)
def wrapper(self, node, *args, **kwargs):
if self.disablers.is_node_disabled(node):
return node
if self.disablers.is_header_disabled(node.lineno):
return node
if self.skip:
section_name = get_section_name_from_header_type(node)
if self.skip.section(section_name):
return node
return func(self, node, *args, **kwargs)
return wrapper
def is_line_start(node):
for token in node.tokens:
if token.type == Token.SEPARATOR:
continue
return token.col_offset == 0
return False
class DisabledLines:
def __init__(self, start_line, end_line, file_end):
self.start_line = start_line
self.end_line = end_line
self.file_end = file_end
self.lines = []
self.disabled_headers = set()
@property
def file_disabled(self):
"""Check if file is disabled. Whole file is only disabled if the first line contains one line disabler."""
if not self.lines:
return False
return self.lines[0] == (1, 1)
def add_disabler(self, start_line, end_line):
self.lines.append((start_line, end_line))
def add_disabled_header(self, lineno):
self.disabled_headers.add(lineno)
def parse_global_disablers(self):
if not self.start_line:
return
end_line = self.end_line if self.end_line else self.start_line
if self.start_line > 1:
self.add_disabler(1, self.start_line - 1)
if end_line < self.file_end:
self.add_disabler(end_line + 1, self.file_end)
def sort_disablers(self):
self.lines = sorted(self.lines, key=lambda x: x[0])
def is_header_disabled(self, line):
return line in self.disabled_headers
def is_node_disabled(self, node, full_match=True):
if full_match:
for start_line, end_line in self.lines:
# lines are sorted on start_line, so we can return on first match
if end_line >= node.end_lineno:
return start_line <= node.lineno
else:
for start_line, end_line in self.lines:
if node.lineno <= end_line and node.end_lineno >= start_line:
return True
return False
class RegisterDisablers(ModelVisitor):
def __init__(self, start_line, end_line):
self.start_line = start_line
self.end_line = end_line
self.disablers = DisabledLines(self.start_line, self.end_line, None)
self.disabler_pattern = re.compile(r"\s*#\s?robotidy:\s?(?P<disabler>on|off)")
self.stack = []
self.file_disabled = False
def any_disabler_open(self):
return any(disabler for disabler in self.stack)
def get_disabler(self, comment):
if not comment.value:
return None
return self.disabler_pattern.match(comment.value)
def close_disabler(self, end_line):
disabler = self.stack.pop()
if disabler:
self.disablers.add_disabler(disabler, end_line)
def visit_File(self, node): # noqa
self.disablers = DisabledLines(self.start_line, self.end_line, node.end_lineno)
self.disablers.parse_global_disablers()
self.stack = []
self.generic_visit(node)
self.disablers.sort_disablers()
self.file_disabled = self.disablers.file_disabled
def visit_SectionHeader(self, node): # noqa
for comment in node.get_tokens(Token.COMMENT):
disabler = self.get_disabler(comment)
if disabler and disabler.group("disabler") == "off":
self.disablers.add_disabled_header(node.lineno)
break
return self.generic_visit(node)
def visit_TestCase(self, node): # noqa
self.stack.append(0)
self.generic_visit(node)
self.close_disabler(node.end_lineno)
def visit_Try(self, node): # noqa
self.generic_visit(node.header)
self.stack.append(0)
for statement in node.body:
self.visit(statement)
self.close_disabler(node.end_lineno)
tail = node
while tail.next:
self.generic_visit(tail.header)
self.stack.append(0)
for statement in tail.body:
self.visit(statement)
end_line = tail.next.lineno - 1 if tail.next else tail.end_lineno
self.close_disabler(end_line=end_line)
tail = tail.next
visit_Keyword = visit_Section = visit_For = visit_ForLoop = visit_If = visit_While = visit_TestCase
def visit_Statement(self, node): # noqa
if isinstance(node, Comment):
comment = node.get_token(Token.COMMENT)
disabler = self.get_disabler(comment)
if not disabler:
return
index = 0 if is_line_start(node) else -1
if disabler.group("disabler") == "on":
if not self.stack[index]: # no disabler open
return
self.disablers.add_disabler(self.stack[index], node.lineno)
self.stack[index] = 0
elif not self.stack[index]:
self.stack[index] = node.lineno
else:
# inline disabler
if self.any_disabler_open():
return
for comment in node.get_tokens(Token.COMMENT):
disabler = self.get_disabler(comment)
if disabler and disabler.group("disabler") == "off":
self.disablers.add_disabler(node.lineno, node.end_lineno) | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/disablers.py | 0.669313 | 0.201047 | disablers.py | pypi |
import copy
import dataclasses
import os
import re
import sys
from collections import namedtuple
from dataclasses import dataclass, field
from pathlib import Path
from typing import Dict, List, Optional, Pattern, Set, Tuple
try:
from robot.api import Languages # RF 6.0
except ImportError:
Languages = None
import click
from click.core import ParameterSource
from robotidy import exceptions, files, skip, utils
from robotidy.transformers import TransformConfig, TransformConfigMap, convert_transform_config, load_transformers
class FormattingConfig:
def __init__(
self,
space_count: int,
indent: Optional[int],
continuation_indent: Optional[int],
line_sep: str,
start_line: Optional[int],
end_line: Optional[int],
separator: str,
line_length: int,
):
self.start_line = start_line
self.end_line = end_line
self.space_count = space_count
self.line_length = line_length
if indent is None:
indent = space_count
if continuation_indent is None:
continuation_indent = space_count
if separator == "space":
self.separator = " " * space_count
self.indent = " " * indent
self.continuation_indent = " " * continuation_indent
elif separator == "tab":
self.separator = "\t"
self.indent = "\t"
self.continuation_indent = "\t"
self.line_sep = self.get_line_sep(line_sep)
@staticmethod
def get_line_sep(line_sep):
if line_sep == "windows":
return "\r\n"
elif line_sep == "unix":
return "\n"
elif line_sep == "auto":
return "auto"
else:
return os.linesep
def validate_target_version(value: Optional[str]) -> Optional[int]:
if value is None:
return utils.ROBOT_VERSION.major
target_version = utils.TargetVersion[value.upper()].value
if target_version > utils.ROBOT_VERSION.major:
raise click.BadParameter(
f"Target Robot Framework version ({target_version}) should not be higher than "
f"installed version ({utils.ROBOT_VERSION})."
)
return target_version
def csv_list_type(value: Optional[str]) -> List[str]:
if not value:
return []
return value.split(",")
def convert_transformers_config(
param_name: str,
config: Dict,
force_included: bool = False,
custom_transformer: bool = False,
is_config: bool = False,
) -> List[TransformConfig]:
return [
TransformConfig(tr, force_include=force_included, custom_transformer=custom_transformer, is_config=is_config)
for tr in config.get(param_name, ())
]
def str_to_bool(v):
if isinstance(v, bool):
return v
return v.lower() in ("yes", "true", "1")
def map_class_fields_with_their_types(cls):
"""Returns map of dataclass attributes with their types."""
fields = dataclasses.fields(cls)
return {field.name: field.type for field in fields}
SourceAndConfig = namedtuple("SourceAndConfig", "source config")
@dataclass
class RawConfig:
"""Configuration read directly from cli or configuration file."""
transform: List[TransformConfig] = field(default_factory=list)
custom_transformers: List[TransformConfig] = field(default_factory=list)
configure: List[TransformConfig] = field(default_factory=list)
src: Tuple[str, ...] = None
exclude: Pattern = re.compile(files.DEFAULT_EXCLUDES)
extend_exclude: Pattern = None
skip_gitignore: bool = False
overwrite: bool = False
diff: bool = False
color: bool = True
check: bool = False
spacecount: int = 4
indent: int = None
continuation_indent: int = None
lineseparator: str = "native"
verbose: bool = False
config: str = None
config_path: Path = None
separator: str = "space"
startline: int = None
endline: int = None
line_length: int = 120
list_transformers: str = ""
desc: str = None
output: Path = None
force_order: bool = False
target_version: int = utils.ROBOT_VERSION.major
language: List[str] = field(default_factory=list)
reruns: int = 0
ignore_git_dir: bool = False
skip_comments: bool = False
skip_documentation: bool = False
skip_return_values: bool = False
skip_keyword_call: List[str] = None
skip_keyword_call_pattern: List[str] = None
skip_settings: bool = False
skip_arguments: bool = False
skip_setup: bool = False
skip_teardown: bool = False
skip_timeout: bool = False
skip_template: bool = False
skip_return: bool = False
skip_tags: bool = False
skip_block_comments: bool = False
skip_sections: str = ""
defined_in_cli: Set = field(default_factory=set)
defined_in_config: Set = field(default_factory=set)
@classmethod
def from_cli(cls, ctx: click.Context, **kwargs):
"""Creates RawConfig instance while saving which options were supplied from CLI."""
defined_in_cli = set()
for option in kwargs:
if ctx.get_parameter_source(option) == ParameterSource.COMMANDLINE:
defined_in_cli.add(option)
return cls(**kwargs, defined_in_cli=defined_in_cli)
def from_config_file(self, config: Dict, config_path: Path) -> "RawConfig":
"""Creates new RawConfig instance from dictionary.
Dictionary key:values needs to be normalized and parsed to correct types.
"""
options_map = map_class_fields_with_their_types(self)
parsed_config = {"defined_in_config": {"defined_in_config", "config_path"}, "config_path": config_path}
for key, value in config.items():
if key not in options_map:
raise exceptions.NoSuchOptionError(key, list(options_map.keys())) from None
value_type = options_map[key]
if value_type == bool:
parsed_config[key] = str_to_bool(value)
elif key == "target_version":
parsed_config[key] = validate_target_version(value)
elif key == "language":
parsed_config[key] = csv_list_type(value)
elif value_type == int:
parsed_config[key] = int(value)
elif value_type == List[TransformConfig]:
parsed_config[key] = [convert_transform_config(val, key) for val in value]
elif key == "src":
parsed_config[key] = tuple(value)
elif value_type == Pattern:
parsed_config[key] = utils.validate_regex(value)
else:
parsed_config[key] = value
parsed_config["defined_in_config"].add(key)
from_config = RawConfig(**parsed_config)
return self.merge_with_config_file(from_config)
def merge_with_config_file(self, config: "RawConfig") -> "RawConfig":
"""Merge cli config with the configuration file config.
Use configuration file parameter value only if it was not defined in the cli already.
"""
merged = copy.deepcopy(self)
if not config:
return merged
overwrite_params = config.defined_in_config - self.defined_in_cli
for param in overwrite_params:
merged.__dict__[param] = config.__dict__[param]
return merged
class MainConfig:
"""Main configuration file which contains default configuration and map of sources and their configurations."""
def __init__(self, cli_config: RawConfig):
self.loaded_configs = {}
self.default = self.load_config_from_option(cli_config)
self.default_loaded = Config.from_raw_config(self.default)
self.sources = self.get_sources(self.default.src)
def validate_src_is_required(self):
if self.sources or self.default.list_transformers or self.default.desc:
return
print("No source path provided. Run robotidy --help to see how to use robotidy")
sys.exit(1)
@staticmethod
def load_config_from_option(cli_config: RawConfig) -> RawConfig:
"""If there is config path passed from cli, load it and overwrite default config."""
if cli_config.config:
config_path = Path(cli_config.config)
config_file = files.read_pyproject_config(config_path)
cli_config = cli_config.from_config_file(config_file, config_path)
return cli_config
def get_sources(self, sources: Tuple[str, ...]) -> Optional[Tuple[str, ...]]:
"""Get list of sources to be transformed by Robotidy.
If the sources tuple is empty, look for most common configuration file and load sources from there.
"""
if sources:
return sources
src = Path(".").resolve()
config_path = files.find_source_config_file(src, self.default.ignore_git_dir)
if not config_path:
return None
config = files.read_pyproject_config(config_path)
if not config or "src" not in config:
return None
raw_config = self.default.from_config_file(config, config_path)
loaded_config = Config.from_raw_config(raw_config)
self.loaded_configs[str(loaded_config.config_directory)] = loaded_config
return tuple(config["src"])
def get_sources_with_configs(self):
sources = files.get_paths(
self.sources, self.default.exclude, self.default.extend_exclude, self.default.skip_gitignore
)
for source in sources:
if self.default.config:
loaded_config = self.default_loaded
else:
src = Path(".").resolve() if source == "-" else source
loaded_config = self.get_config_for_source(src)
yield SourceAndConfig(source, loaded_config)
def get_config_for_source(self, source: Path):
config_path = files.find_source_config_file(source, self.default.ignore_git_dir)
if config_path is None:
return self.default_loaded
if str(config_path.parent) in self.loaded_configs:
return self.loaded_configs[str(config_path.parent)]
config_file = files.read_pyproject_config(config_path)
raw_config = self.default.from_config_file(config_file, config_path)
loaded_config = Config.from_raw_config(raw_config)
self.loaded_configs[str(loaded_config.config_directory)] = loaded_config
return loaded_config
class Config:
"""Configuration after loading dynamic attributes like transformer list."""
def __init__(
self,
formatting: FormattingConfig,
skip,
transformers_config: TransformConfigMap,
overwrite: bool,
show_diff: bool,
verbose: bool,
check: bool,
output: Optional[Path],
force_order: bool,
target_version: int,
color: bool,
language: Optional[List[str]],
reruns: int,
config_path: Optional[Path],
):
self.formatting = formatting
self.overwrite = self.set_overwrite_mode(overwrite, check)
self.show_diff = show_diff
self.verbose = verbose
self.check = check
self.output = output
self.color = self.set_color_mode(color)
self.reruns = reruns
self.config_directory = config_path.parent if config_path else None
self.language = self.get_languages(language)
self.transformers = []
self.transformers_lookup = dict()
self.transformers_config = transformers_config
self.load_transformers(transformers_config, force_order, target_version, skip)
@staticmethod
def get_languages(lang):
if Languages is None:
return None
return Languages(lang)
@staticmethod
def set_overwrite_mode(overwrite: bool, check: bool) -> bool:
if overwrite is None:
return not check
return overwrite
@staticmethod
def set_color_mode(color: bool) -> bool:
if not color:
return color
return "NO_COLOR" not in os.environ
@classmethod
def from_raw_config(cls, raw_config: "RawConfig"):
skip_config = skip.SkipConfig(
documentation=raw_config.skip_documentation,
return_values=raw_config.skip_return_values,
keyword_call=raw_config.skip_keyword_call,
keyword_call_pattern=raw_config.skip_keyword_call_pattern,
settings=raw_config.skip_settings,
arguments=raw_config.skip_arguments,
setup=raw_config.skip_setup,
teardown=raw_config.skip_teardown,
template=raw_config.skip_template,
timeout=raw_config.skip_timeout,
return_statement=raw_config.skip_return,
tags=raw_config.skip_tags,
comments=raw_config.skip_comments,
block_comments=raw_config.skip_block_comments,
sections=raw_config.skip_sections,
)
formatting = FormattingConfig(
space_count=raw_config.spacecount,
indent=raw_config.indent,
continuation_indent=raw_config.continuation_indent,
line_sep=raw_config.lineseparator,
start_line=raw_config.startline,
separator=raw_config.separator,
end_line=raw_config.endline,
line_length=raw_config.line_length,
)
transformers_config = TransformConfigMap(
raw_config.transform, raw_config.custom_transformers, raw_config.configure
)
if raw_config.verbose and raw_config.config_path:
click.echo(f"Loaded configuration from {raw_config.config_path}")
return cls(
formatting=formatting,
skip=skip_config,
transformers_config=transformers_config,
overwrite=raw_config.overwrite,
show_diff=raw_config.diff,
verbose=raw_config.verbose,
check=raw_config.check,
output=raw_config.output,
force_order=raw_config.force_order,
target_version=raw_config.target_version,
color=raw_config.color,
language=raw_config.language,
reruns=raw_config.reruns,
config_path=raw_config.config_path,
)
def load_transformers(self, transformers_config: TransformConfigMap, force_order, target_version, skip):
# Workaround to pass configuration to transformer before the instance is created
if "GenerateDocumentation" in transformers_config.transformers:
transformers_config.transformers["GenerateDocumentation"].args["template_directory"] = self.config_directory
transformers = load_transformers(
transformers_config,
force_order=force_order,
target_version=target_version,
skip=skip,
)
for transformer in transformers:
# inject global settings TODO: handle it better
setattr(transformer.instance, "formatting_config", self.formatting)
setattr(transformer.instance, "transformers", self.transformers_lookup)
setattr(transformer.instance, "languages", self.language)
self.transformers.append(transformer.instance)
self.transformers_lookup[transformer.name] = transformer.instance | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/config.py | 0.727782 | 0.217171 | config.py | pypi |
from functools import lru_cache
from pathlib import Path
from typing import Any, Dict, Iterable, Iterator, List, Optional, Pattern, Tuple
try:
import rich_click as click
except ImportError:
import click
import tomli
from pathspec import PathSpec
DEFAULT_EXCLUDES = r"/(\.direnv|\.eggs|\.git|\.hg|\.nox|\.tox|\.venv|venv|\.svn)/"
INCLUDE_EXT = (".robot", ".resource")
DOTFILE_CONFIG = ".robotidy"
CONFIG_NAMES = ("robotidy.toml", "pyproject.toml", DOTFILE_CONFIG)
@lru_cache()
def find_source_config_file(src: Path, ignore_git_dir: bool = False) -> Optional[Path]:
"""Find and return configuration file for the source path.
This method looks iteratively in source parents for directory that contains configuration file and
returns its path. The lru_cache speeds up searching if there are multiple files in the same directory (they will
have the same configuration file).
If ``.git`` directory is found and ``ignore_git_dir`` is set to ``False``, or top directory is reached, this method
returns ``None``.
"""
if src.is_dir():
if not ignore_git_dir and src.name == ".git":
return None
for config_filename in CONFIG_NAMES:
if (src / config_filename).is_file():
return src / config_filename
if not src.parents:
return None
return find_source_config_file(src.parent, ignore_git_dir)
@lru_cache()
def find_project_root(srcs: Iterable[str], ignore_git_dir: bool = False) -> Path:
"""Return a directory containing .git, or robotidy.toml.
That directory will be a common parent of all files and directories
passed in `srcs`.
If no directory in the tree contains a marker that would specify it's the
project root, the root of the file system is returned.
"""
if not srcs:
return Path("/").resolve()
path_srcs = [Path(Path.cwd(), src).resolve() for src in srcs]
# A list of lists of parents for each 'src'. 'src' is included as a
# "parent" of itself if it is a directory
src_parents = [list(path.parents) + ([path] if path.is_dir() else []) for path in path_srcs]
common_base = max(
set.intersection(*(set(parents) for parents in src_parents)),
key=lambda path: path.parts,
)
for directory in (common_base, *common_base.parents):
if not ignore_git_dir and (directory / ".git").exists():
return directory
if any((directory / config_name).is_file() for config_name in CONFIG_NAMES):
return directory
return directory
def load_toml_file(config_path: Path) -> Dict[str, Any]:
try:
with config_path.open("rb") as tf:
config = tomli.load(tf)
return config
except (tomli.TOMLDecodeError, OSError) as e:
raise click.FileError(filename=str(config_path), hint=f"Error reading configuration file: {e}")
def read_pyproject_config(config_path: Path) -> Dict[str, Any]:
config = load_toml_file(config_path)
if config_path.name != DOTFILE_CONFIG or "tool" in config:
config = config.get("tool", {}).get("robotidy", {})
return {k.replace("--", "").replace("-", "_"): v for k, v in config.items()}
@lru_cache()
def get_gitignore(root: Path) -> PathSpec:
"""Return a PathSpec matching gitignore content if present."""
gitignore = root / ".gitignore"
lines: List[str] = []
if gitignore.is_file():
with gitignore.open(encoding="utf-8") as gf:
lines = gf.readlines()
return PathSpec.from_lines("gitwildmatch", lines)
def should_parse_path(
path: Path, exclude: Optional[Pattern[str]], extend_exclude: Optional[Pattern[str]], gitignore: Optional[PathSpec]
) -> bool:
normalized_path = str(path)
for pattern in (exclude, extend_exclude):
match = pattern.search(normalized_path) if pattern else None
if bool(match and match.group(0)):
return False
if gitignore is not None and gitignore.match_file(path):
return False
if path.is_file():
return path.suffix in INCLUDE_EXT
if exclude and exclude.match(path.name):
return False
return True
def get_paths(
src: Tuple[str, ...], exclude: Optional[Pattern], extend_exclude: Optional[Pattern], skip_gitignore: bool
):
root = find_project_root(src)
if skip_gitignore:
gitignore = None
else:
gitignore = get_gitignore(root)
sources = set()
for s in src:
if s == "-":
sources.add("-")
continue
path = Path(s).resolve()
if not should_parse_path(path, exclude, extend_exclude, gitignore):
continue
if path.is_file():
sources.add(path)
elif path.is_dir():
sources.update(iterate_dir((path,), exclude, extend_exclude, gitignore))
elif s == "-":
sources.add(path)
return sources
def iterate_dir(
paths: Iterable[Path],
exclude: Optional[Pattern],
extend_exclude: Optional[Pattern],
gitignore: Optional[PathSpec],
) -> Iterator[Path]:
for path in paths:
if not should_parse_path(path, exclude, extend_exclude, gitignore):
continue
if path.is_dir():
yield from iterate_dir(
path.iterdir(),
exclude,
extend_exclude,
gitignore + get_gitignore(path) if gitignore is not None else None,
)
elif path.is_file():
yield path | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/files.py | 0.785555 | 0.286593 | files.py | pypi |
import re
from typing import List, Optional, Pattern
import click
from robot.api import Token
from robotidy.utils import normalize_name
def parse_csv(value):
if not value:
return []
return [val for val in value.split(",")]
def str_to_bool(value):
return value.lower() == "true"
def validate_regex(value: str) -> Optional[Pattern]:
try:
return re.compile(value)
except re.error:
raise ValueError(f"'{value}' is not a valid regular expression.") from None
class SkipConfig:
"""Skip configuration (global and for each transformer)."""
# Following names will be taken from transformer config and provided to Skip class instead
HANDLES = frozenset(
{
"skip_documentation",
"skip_return_values",
"skip_keyword_call",
"skip_keyword_call_pattern",
"skip_settings",
"skip_arguments",
"skip_setup",
"skip_teardown",
"skip_timeout",
"skip_template",
"skip_return_statement",
"skip_tags",
"skip_comments",
"skip_block_comments",
"skip_sections",
}
)
def __init__(
self,
documentation: bool = False,
return_values: bool = False,
keyword_call: Optional[List] = None,
keyword_call_pattern: Optional[List] = None,
settings: bool = False,
arguments: bool = False,
setup: bool = False,
teardown: bool = False,
timeout: bool = False,
template: bool = False,
return_statement: bool = False,
tags: bool = False,
comments: bool = False,
block_comments: bool = False,
sections: str = "",
):
self.documentation = documentation
self.return_values = return_values
self.keyword_call: List = keyword_call if keyword_call else []
self.keyword_call_pattern: List = keyword_call_pattern if keyword_call_pattern else []
self.settings = settings
self.arguments = arguments
self.setup = setup
self.teardown = teardown
self.timeout = timeout
self.template = template
self.return_statement = return_statement
self.tags = tags
self.comments = comments
self.block_comments = block_comments
self.sections = parse_csv(sections)
def update_with_str_config(self, **kwargs):
for name, value in kwargs.items():
# find the value we're overriding and get its type from it
original_value = self.__dict__[name]
if isinstance(original_value, bool):
self.__dict__[name] = str_to_bool(value)
elif isinstance(original_value, list):
parsed_list = parse_csv(value)
self.__dict__[name].extend(parsed_list)
def __eq__(self, other):
return self.__dict__ == other.__dict__
class Skip:
"""Defines global skip conditions for each transformer."""
def __init__(self, skip_config: SkipConfig):
self.return_values = skip_config.return_values
self.documentation = skip_config.documentation
self.comments = skip_config.comments
self.block_comments = skip_config.block_comments
self.keyword_call_names = {normalize_name(name) for name in skip_config.keyword_call}
self.keyword_call_pattern = {validate_regex(pattern) for pattern in skip_config.keyword_call_pattern}
self.any_keword_call = self.check_any_keyword_call()
self.skip_settings = self.parse_skip_settings(skip_config)
self.skip_sections = set(skip_config.sections)
@staticmethod
def parse_skip_settings(skip_config):
settings = {"settings", "arguments", "setup", "teardown", "timeout", "template", "return_statement", "tags"}
skip_settings = set()
for setting in settings:
if getattr(skip_config, setting):
skip_settings.add(setting)
return skip_settings
def check_any_keyword_call(self):
return self.keyword_call_names or self.keyword_call_pattern
def keyword_call(self, node):
if not getattr(node, "keyword", None) or not self.any_keword_call:
return False
normalized = normalize_name(node.keyword)
if normalized in self.keyword_call_names:
return True
for pattern in self.keyword_call_pattern:
if pattern.search(node.keyword):
return True
return False
def setting(self, name):
if not self.skip_settings:
return False
if "settings" in self.skip_settings:
return True
return name.lower() in self.skip_settings
def comment(self, comment):
if self.comments:
return True
if not self.block_comments:
return False
return comment.tokens and comment.tokens[0].type == Token.COMMENT
def section(self, name):
return name in self.skip_sections
documentation_option = click.option("--skip-documentation", is_flag=True, help="Skip formatting of documentation")
return_values_option = click.option("--skip-return-values", is_flag=True, help="Skip formatting of return values")
keyword_call_option = click.option(
"--skip-keyword-call", type=str, multiple=True, help="Keyword call name that should not be formatted"
)
keyword_call_pattern_option = click.option(
"--skip-keyword-call-pattern",
type=str,
multiple=True,
help="Keyword call name pattern that should not be formatted",
)
settings_option = click.option("--skip-settings", is_flag=True, help="Skip formatting of settings")
arguments_option = click.option("--skip-arguments", is_flag=True, help="Skip formatting of arguments")
setup_option = click.option("--skip-setup", is_flag=True, help="Skip formatting of setup")
teardown_option = click.option("--skip-teardown", is_flag=True, help="Skip formatting of teardown")
timeout_option = click.option("--skip-timeout", is_flag=True, help="Skip formatting of timeout")
template_option = click.option("--skip-template", is_flag=True, help="Skip formatting of template")
return_option = click.option("--skip-return", is_flag=True, help="Skip formatting of return statement")
tags_option = click.option("--skip-tags", is_flag=True, help="Skip formatting of tags")
sections_option = click.option(
"--skip-sections",
type=str,
help="Skip formatting of sections. Provide multiple sections with comma separated value",
)
comments_option = click.option("--skip-comments", is_flag=True, help="Skip formatting of comments")
block_comments_option = click.option("--skip-block-comments", is_flag=True, help="Skip formatting of block comments")
option_group = {
"name": "Skip formatting",
"options": [
"--skip-documentation",
"--skip-return-values",
"--skip-keyword-call",
"--skip-keyword-call-pattern",
"--skip-settings",
"--skip-arguments",
"--skip-setup",
"--skip-teardown",
"--skip-timeout",
"--skip-template",
"--skip-return",
"--skip-tags",
"--skip-comments",
"--skip-block-comments",
"--skip-sections",
],
} | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/skip.py | 0.777004 | 0.253838 | skip.py | pypi |
import sys
from pathlib import Path
from typing import List, Optional, Pattern, Tuple, Union
try:
import rich_click as click
RICH_PRESENT = True
except ImportError: # Fails on vendored-in LSP plugin
import click
RICH_PRESENT = False
from robotidy import app
from robotidy import config as config_module
from robotidy import decorators, files, skip, utils, version
from robotidy.config import RawConfig, csv_list_type, validate_target_version
from robotidy.rich_console import console
from robotidy.transformers import TransformConfigMap, TransformConfigParameter, load_transformers
CLI_OPTIONS_LIST = [
{
"name": "Run only selected transformers",
"options": ["--transform"],
},
{
"name": "Load custom transformers",
"options": ["--load-transformers"],
},
{
"name": "Work modes",
"options": ["--overwrite", "--diff", "--check", "--force-order"],
},
{
"name": "Documentation",
"options": ["--list", "--desc"],
},
{
"name": "Configuration",
"options": ["--configure", "--config", "--ignore-git-dir"],
},
{
"name": "Global formatting settings",
"options": [
"--spacecount",
"--indent",
"--continuation-indent",
"--line-length",
"--lineseparator",
"--separator",
"--startline",
"--endline",
],
},
{"name": "File exclusion", "options": ["--exclude", "--extend-exclude", "--skip-gitignore"]},
skip.option_group,
{
"name": "Other",
"options": [
"--target-version",
"--language",
"--reruns",
"--verbose",
"--color",
"--output",
"--version",
"--help",
],
},
]
if RICH_PRESENT:
click.rich_click.USE_RICH_MARKUP = True
click.rich_click.USE_MARKDOWN = True
click.rich_click.FORCE_TERMINAL = None # workaround rich_click trying to force color in GitHub Actions
click.rich_click.STYLE_OPTION = "bold sky_blue3"
click.rich_click.STYLE_SWITCH = "bold sky_blue3"
click.rich_click.STYLE_METAVAR = "bold white"
click.rich_click.STYLE_OPTION_DEFAULT = "grey37"
click.rich_click.STYLE_OPTIONS_PANEL_BORDER = "grey66"
click.rich_click.STYLE_USAGE = "magenta"
click.rich_click.OPTION_GROUPS = {
"robotidy": CLI_OPTIONS_LIST,
"python -m robotidy": CLI_OPTIONS_LIST,
}
CONTEXT_SETTINGS = dict(help_option_names=["-h", "--help"])
def validate_regex_callback(ctx: click.Context, param: click.Parameter, value: Optional[str]) -> Optional[Pattern]:
return utils.validate_regex(value)
def validate_target_version_callback(
ctx: click.Context, param: Union[click.Option, click.Parameter], value: Optional[str]
) -> Optional[int]:
return validate_target_version(value)
def validate_list_optional_value(ctx: click.Context, param: Union[click.Option, click.Parameter], value: Optional[str]):
if not value:
return value
allowed = ["all", "enabled", "disabled"]
if value not in allowed:
raise click.BadParameter(f"Not allowed value. Allowed values are: {', '.join(allowed)}")
return value
def csv_list_type_callback(
ctx: click.Context, param: Union[click.Option, click.Parameter], value: Optional[str]
) -> List[str]:
return csv_list_type(value)
def print_transformer_docs(transformer):
from rich.markdown import Markdown
md = Markdown(str(transformer), code_theme="native", inline_code_lexer="robotframework")
console.print(md)
@decorators.optional_rich
def print_description(name: str, target_version: int):
# TODO: --desc works only for default transformers, it should also print custom transformer desc
transformers = load_transformers(TransformConfigMap([], [], []), allow_disabled=True, target_version=target_version)
transformer_by_names = {transformer.name: transformer for transformer in transformers}
if name == "all":
for transformer in transformers:
print_transformer_docs(transformer)
elif name in transformer_by_names:
print_transformer_docs(transformer_by_names[name])
else:
rec_finder = utils.RecommendationFinder()
similar = rec_finder.find_similar(name, transformer_by_names.keys())
click.echo(f"Transformer with the name '{name}' does not exist.{similar}")
return 1
return 0
def _load_external_transformers(transformers: List, transformers_config: TransformConfigMap, target_version: int):
external = []
transformers_names = {transformer.name for transformer in transformers}
transformers_from_conf = load_transformers(transformers_config, target_version=target_version)
for transformer in transformers_from_conf:
if transformer.name not in transformers_names:
external.append(transformer)
return external
@decorators.optional_rich
def print_transformers_list(global_config: config_module.MainConfig):
from rich.table import Table
target_version = global_config.default.target_version
list_transformers = global_config.default.list_transformers
table = Table(title="Transformers", header_style="bold red")
table.add_column("Name", justify="left", no_wrap=True)
table.add_column("Enabled")
transformers = load_transformers(TransformConfigMap([], [], []), allow_disabled=True, target_version=target_version)
transformers.extend(
_load_external_transformers(transformers, global_config.default_loaded.transformers_config, target_version)
)
for transformer in transformers:
enabled = transformer.name in global_config.default_loaded.transformers_lookup
if list_transformers != "all":
filter_by = list_transformers == "enabled"
if enabled != filter_by:
continue
decorated_enable = "Yes" if enabled else "No"
if enabled != transformer.enabled_by_default:
decorated_enable = f"[bold magenta]{decorated_enable}*"
table.add_row(transformer.name, decorated_enable)
console.print(table)
console.print(
"Transformers are listed in the order they are run by default. If the transformer was enabled/disabled by the "
"configuration the status will be displayed with extra asterisk (*) and in the [magenta]different[/] color."
)
console.print(
"To see detailed docs run:\n"
" [bold]robotidy --desc [blue]transformer_name[/][/]\n"
"or\n"
" [bold]robotidy --desc [blue]all[/][/]\n\n"
"Non-default transformers needs to be selected explicitly with [bold cyan]--transform[/] or "
"configured with param `enabled=True`.\n"
)
@click.command(context_settings=CONTEXT_SETTINGS)
@click.option(
"--transform",
"-t",
type=TransformConfigParameter(),
multiple=True,
metavar="TRANSFORMER_NAME",
help="Transform files from [PATH(S)] with given transformer",
)
@click.option(
"--load-transformers",
"custom_transformers",
type=TransformConfigParameter(),
multiple=True,
metavar="TRANSFORMER_NAME",
help="Load custom transformer from the path and run them after default ones.",
)
@click.option(
"--configure",
"-c",
type=TransformConfigParameter(),
multiple=True,
metavar="TRANSFORMER_NAME:PARAM=VALUE",
help="Configure transformers",
)
@click.argument(
"src",
nargs=-1,
type=click.Path(exists=True, file_okay=True, dir_okay=True, readable=True, allow_dash=True),
metavar="[PATH(S)]",
)
@click.option(
"--exclude",
type=str,
callback=validate_regex_callback,
help=(
"A regular expression that matches files and directories that should be"
" excluded on recursive searches. An empty value means no paths are excluded."
" Use forward slashes for directories on all platforms."
),
show_default=f"{files.DEFAULT_EXCLUDES}",
)
@click.option(
"--extend-exclude",
type=str,
callback=validate_regex_callback,
help=(
"Like **--exclude**, but adds additional files and directories on top of the"
" excluded ones. (Useful if you simply want to add to the default)"
),
)
@click.option(
"--skip-gitignore",
is_flag=True,
show_default=True,
help="Skip **.gitignore** files and do not ignore files listed inside.",
)
@click.option(
"--ignore-git-dir",
is_flag=True,
help="Ignore **.git** directories when searching for the default configuration file. "
"By default first parent directory with **.git** directory is returned and this flag disables this behaviour.",
show_default=True,
)
@click.option(
"--config",
type=click.Path(
exists=True,
file_okay=True,
dir_okay=False,
readable=True,
allow_dash=False,
path_type=str,
),
help="Read configuration from FILE path.",
)
@click.option(
"--overwrite/--no-overwrite",
default=None,
help="Write changes back to file",
show_default=True,
)
@click.option(
"--diff",
is_flag=True,
help="Output diff of each processed file.",
show_default=True,
)
@click.option(
"--color/--no-color",
default=True,
help="Enable ANSI coloring the output",
show_default=True,
)
@click.option(
"--check",
is_flag=True,
help="Don't overwrite files and just return status. Return code 0 means nothing would change. "
"Return code 1 means that at least 1 file would change. Any internal error will overwrite this status.",
show_default=True,
)
@click.option(
"-s",
"--spacecount",
type=click.types.INT,
default=4,
help="The number of spaces between cells",
show_default=True,
)
@click.option(
"--indent",
type=click.types.INT,
default=None,
help="The number of spaces to be used as indentation",
show_default="same as --spacecount value",
)
@click.option(
"--continuation-indent",
type=click.types.INT,
default=None,
help="The number of spaces to be used as separator after ... (line continuation) token",
show_default="same as --spacecount value]",
)
@click.option(
"-ls",
"--lineseparator",
type=click.types.Choice(["native", "windows", "unix", "auto"]),
default="native",
help="""
Line separator to use in the outputs:
- **native**: use operating system's native line endings
- windows: use Windows line endings (CRLF)
- unix: use Unix line endings (LF)
- auto: maintain existing line endings (uses what's used in the first line)
""",
show_default=True,
)
@click.option(
"--separator",
type=click.types.Choice(["space", "tab"]),
default="space",
help="""
Token separator to use in the outputs:
- **space**: use --spacecount spaces to separate tokens
- tab: use a single tabulation to separate tokens
""",
show_default=True,
)
@click.option(
"-sl",
"--startline",
default=None,
type=int,
help="Limit robotidy only to selected area. If **--endline** is not provided, format text only at **--startline**. "
"Line numbers start from 1.",
)
@click.option(
"-el",
"--endline",
default=None,
type=int,
help="Limit robotidy only to selected area. Line numbers start from 1.",
)
@click.option(
"--line-length",
default=120,
type=int,
help="Max allowed characters per line",
show_default=True,
)
@click.option(
"--list",
"-l",
"list_transformers",
callback=validate_list_optional_value,
is_flag=False,
default="",
flag_value="all",
help="List available transformers and exit. "
"Pass optional value **enabled** or **disabled** to filter out list by transformer status.",
)
@click.option(
"--desc",
"-d",
default=None,
metavar="TRANSFORMER_NAME",
help="Show documentation for selected transformer.",
)
@click.option(
"--output",
"-o",
type=click.Path(file_okay=True, dir_okay=False, writable=True, allow_dash=False),
default=None,
metavar="PATH",
help="Use this option to override file destination path.",
)
@click.option("-v", "--verbose", is_flag=True, help="More verbose output", show_default=True)
@click.option(
"--force-order",
is_flag=True,
help="Transform files using transformers in order provided in cli",
)
@click.option(
"--target-version",
"-tv",
type=click.Choice([v.name.lower() for v in utils.TargetVersion], case_sensitive=False),
callback=validate_target_version_callback,
help="Only enable transformers supported in set target version",
show_default="installed Robot Framework version",
)
@click.option(
"--language",
"--lang",
callback=csv_list_type_callback,
help="Parse Robot Framework files using additional languages.",
show_default="en",
)
@click.option(
"--reruns",
"-r",
type=int,
help="Robotidy will rerun the transformations up to reruns times until the code stop changing.",
show_default="0",
)
@skip.comments_option
@skip.documentation_option
@skip.return_values_option
@skip.keyword_call_option
@skip.keyword_call_pattern_option
@skip.settings_option
@skip.arguments_option
@skip.setup_option
@skip.teardown_option
@skip.timeout_option
@skip.template_option
@skip.return_option
@skip.tags_option
@skip.sections_option
@skip.block_comments_option
@click.version_option(version=version.__version__, prog_name="robotidy")
@click.pass_context
@decorators.catch_exceptions
def cli(ctx: click.Context, **kwargs):
"""
Robotidy is a tool for formatting Robot Framework source code.
Full documentation available at <https://robotidy.readthedocs.io> .
"""
cli_config = RawConfig.from_cli(ctx=ctx, **kwargs)
global_config = config_module.MainConfig(cli_config)
global_config.validate_src_is_required()
if global_config.default.list_transformers:
print_transformers_list(global_config)
sys.exit(0)
if global_config.default.desc is not None:
return_code = print_description(global_config.default.desc, global_config.default.target_version)
sys.exit(return_code)
tidy = app.Robotidy(global_config)
status = tidy.transform_files()
sys.exit(status) | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/cli.py | 0.52756 | 0.216167 | cli.py | pypi |
from typing import Optional, Set
from robot.api import Token
from robot.api.parsing import CommentSection, EmptyLine
try:
from robot.api import Language
from robot.api.parsing import Config
except ImportError: # RF 6.0
Config, Language = None, None
try:
from robot.parsing.model.blocks import ImplicitCommentSection
except ImportError: # RF < 6.1
ImplicitCommentSection = None
from robotidy.disablers import skip_if_disabled, skip_section_if_disabled
from robotidy.exceptions import InvalidParameterValueError
from robotidy.transformers import Transformer
class Translate(Transformer):
"""
Translate Robot Framework source files from one or many languages to different one.
Following code:
```robotframework
*** Test Cases ***
Test case
[Setup] Keyword
Step
```
will be transformed to (with the German language configured):
```robotframework
*** Testfälle ***
Test case
[Vorbereitung] Keyword
Step
```
You can configure destination language with ``language`` parameter (default ``en``). If your file is not written
in english you also need to configure source language - either using cli option or language header in the
source files:
```
robotidy --configure Translate:enabled=True:language=uk --language pl,de source_in_pl_and_de.robot
```
BDD keywords are not translated by default. Set ``translate_bdd`` parameter to ``True`` to enable it.
If there is more than one alternative to BDD keyword the first one (sorted alphabetically) will be chosen.
It can be overwritten using ``<bdd_keyword>_alternative`` parameters.
"""
ENABLED = False
MIN_VERSION = 6
def __init__(
self,
language: str = "en",
translate_bdd: bool = False,
add_language_header: bool = False,
but_alternative: Optional[str] = None,
given_alternative: Optional[str] = None,
and_alternative: Optional[str] = None,
then_alternative: Optional[str] = None,
when_alternative: Optional[str] = None,
):
super().__init__()
self.in_settings = False
self.translate_bdd = translate_bdd
self.add_language_header = add_language_header
if Language is not None:
self.language = Language.from_name(language)
# reverse mapping, in core it's other_lang: en and we need en: other_lang name
self.settings = {value: key.title() for key, value in self.language.settings.items() if key}
else:
self.language, self.settings = None, None
self._bdd_mapping = None
self.bdd = self.get_translated_bdd(
but_alternative, given_alternative, and_alternative, then_alternative, when_alternative
)
@property
def bdd_mapping(self):
if self._bdd_mapping is None:
self._bdd_mapping = {}
for language in self.languages:
self._bdd_mapping.update({name.title(): "But" for name in language.but_prefixes})
self._bdd_mapping.update({name.title(): "Given" for name in language.given_prefixes})
self._bdd_mapping.update({name.title(): "And" for name in language.and_prefixes})
self._bdd_mapping.update({name.title(): "Then" for name in language.then_prefixes})
self._bdd_mapping.update({name.title(): "When" for name in language.when_prefixes})
return self._bdd_mapping
def get_bdd_keyword(self, container: Set, alternative: Optional[str], param_name: str) -> str:
if alternative is not None:
names = ",".join(sorted(container))
if alternative not in container:
raise InvalidParameterValueError(
self.__class__.__name__,
param_name,
alternative,
f"Provided BDD keyword alternative does not exist in the destination language. Select one of: {names}",
)
return alternative.title()
return sorted(kw.title() for kw in container)[0]
def get_translated_bdd(
self,
but_alternative: Optional[str],
given_alternative: Optional[str],
and_alternative: Optional[str],
then_alternative: Optional[str],
when_alternative: Optional[str],
):
if not self.translate_bdd:
return {}
return {
"But": self.get_bdd_keyword(self.language.but_prefixes, but_alternative, "but_alternative"),
"Given": self.get_bdd_keyword(self.language.given_prefixes, given_alternative, "given_alternative"),
"And": self.get_bdd_keyword(self.language.and_prefixes, and_alternative, "and_alternative"),
"Then": self.get_bdd_keyword(self.language.then_prefixes, then_alternative, "then_alternative"),
"When": self.get_bdd_keyword(self.language.when_prefixes, when_alternative, "when_alternative"),
}
def add_replace_language_header(self, node):
"""
Adds or replaces language headers in transformed files.
If the file already contains language header it will be replaced.
If the destination language is English, it will be removed.
"""
if not self.add_language_header or not node.sections:
return node
if isinstance(node.sections[0], CommentSection) and node.sections[0].header is None:
if node.sections[0].body and isinstance(node.sections[0].body[0], Config):
if self.language.code == "en":
node.sections[0].body.pop(0)
else:
node.sections[0].body[0] = Config.from_params(f"language: {self.language.code}")
else:
node.sections[0].body.insert(0, Config.from_params(f"language: {self.language.code}"))
elif self.language.code != "en":
language_header = Config.from_params(f"language: {self.language.code}")
empty_line = EmptyLine.from_params()
if ImplicitCommentSection:
section = ImplicitCommentSection(body=[language_header, empty_line])
else:
section = CommentSection(body=[language_header, empty_line])
node.sections.insert(0, section)
return node
def visit_File(self, node): # noqa
self.in_settings = False
self.add_replace_language_header(node)
return self.generic_visit(node)
@skip_if_disabled
def visit_KeywordCall(self, node): # noqa
"""
Translate BDD keyword in Keyword Call. BDD is translated only if keyword call name starts with BDD,
it is recognized as BDD and there is one space of separation before rest of the keyword name.
Example of keyword name with BDD keyword:
Given I Open Main Page
Source keyword call can be written in any language - that's why we need to translate first word of the keyword
to English then to destination language.
"""
if not self.translate_bdd or not node.keyword:
return node
prefix, *name = node.keyword.split(maxsplit=1)
if not name or not prefix.title() in self.languages.bdd_prefixes:
return node
english_bdd = self.bdd_mapping.get(prefix.title(), None)
if not english_bdd:
return node
translated_bdd = self.bdd[english_bdd]
name_token = node.get_token(Token.KEYWORD)
name_token.value = f"{translated_bdd} {name[0]}"
return node
@skip_section_if_disabled
def translate_section_header(self, node, eng_name):
translated_value = getattr(self.language, eng_name)
translated_value = translated_value.title()
name_token = node.header.data_tokens[0]
name_token.value = f"*** {translated_value} ***"
return self.generic_visit(node)
def visit_SettingSection(self, node): # noqa
self.in_settings = True
node = self.translate_section_header(node, "settings_header")
self.in_settings = False
return node
def visit_TestCaseSection(self, node): # noqa
return self.translate_section_header(node, "test_cases_header")
def visit_KeywordSection(self, node): # noqa
return self.translate_section_header(node, "keywords_header")
def visit_VariableSection(self, node): # noqa
return self.translate_section_header(node, "variables_header")
def visit_CommentSection(self, node): # noqa
if node.header is None:
return node
return self.translate_section_header(node, "comments_header")
@skip_if_disabled
def visit_ForceTags(self, node): # noqa
node_type = node.data_tokens[0].value.title()
# special handling because it's renamed in 6.0
if node_type == "Force Tags":
node_type = "Test Tags" # TODO: Handle Task/Test types
english_value = self.languages.settings.get(node_type, None)
if english_value is None:
return node
translated_value = self.settings.get(english_value, None)
if translated_value is None:
return node
node.data_tokens[0].value = translated_value.title()
return node
visit_TestTags = visit_TaskTags = visit_ForceTags
@skip_if_disabled
def visit_Setup(self, node): # noqa
node_type = node.type.title()
translated_value = self.settings.get(node_type, None)
if translated_value is None:
return node
if not self.in_settings:
translated_value = f"[{translated_value}]"
node.data_tokens[0].value = translated_value
return self.generic_visit(node)
visit_Teardown = (
visit_Template
) = (
visit_Timeout
) = (
visit_Arguments
) = (
visit_Tags
) = (
visit_Documentation
) = (
visit_Metadata
) = (
visit_SuiteSetup
) = (
visit_SuiteTeardown
) = (
visit_TestSetup
) = (
visit_TestTeardown
) = (
visit_TestTemplate
) = (
visit_TestTimeout
) = visit_KeywordTags = visit_LibraryImport = visit_VariablesImport = visit_ResourceImport = visit_Setup | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/transformers/Translate.py | 0.931236 | 0.662796 | Translate.py | pypi |
from robot.api.parsing import EmptyLine
from robot.parsing.model.blocks import Keyword
from robotidy.disablers import skip_section_if_disabled
from robotidy.transformers import Transformer
class SmartSortKeywords(Transformer):
"""
Sort keywords in ``*** Keywords ***`` section.
By default sorting is case insensitive, but keywords with leading underscore go to the bottom. Other underscores are
treated as spaces.
Empty lines (or lack of them) between keywords are preserved.
Following code:
```robotframework
*** Keywords ***
_my secrete keyword
Kw2
My Keyword
Kw1
my_another_cool_keyword
my another keyword
Kw3
```
Will be transformed to:
```robotframework
*** Keywords ***
my_another_cool_keyword
my another keyword
Kw3
My Keyword
Kw1
_my secrete keyword
Kw2
```
Default behaviour could be changed using following parameters: ``case_insensitive = True``,
``ignore_leading_underscore = False`` and ``ignore_other_underscore = True``.
"""
ENABLED = False
def __init__(self, case_insensitive=True, ignore_leading_underscore=False, ignore_other_underscore=True):
super().__init__()
self.ci = case_insensitive
self.ilu = ignore_leading_underscore
self.iou = ignore_other_underscore
@skip_section_if_disabled
def visit_KeywordSection(self, node): # noqa
before, after = self.leave_only_keywords(node)
empty_lines = self.pop_empty_lines(node)
node.body.sort(key=self.sort_function)
self.append_empty_lines(node, empty_lines)
node.body = before + node.body + after
return node
@staticmethod
def pop_empty_lines(node):
all_empty = []
for kw in node.body:
kw_empty = []
while kw.body and isinstance(kw.body[-1], EmptyLine):
kw_empty.insert(0, kw.body.pop())
all_empty.append(kw_empty)
return all_empty
@staticmethod
def leave_only_keywords(node):
before = []
after = []
while node.body and not isinstance(node.body[0], Keyword):
before.append(node.body.pop(0))
while node.body and not isinstance(node.body[-1], Keyword):
after.append(node.body.pop(-1))
return before, after
def sort_function(self, kw):
name = kw.name
if self.ci:
name = name.casefold().upper() # to make sure that letters go before underscore
if self.ilu:
name = name.lstrip("_")
if self.iou:
index = len(name) - len(name.lstrip("_"))
name = name[:index] + name[index:].replace("_", " ")
return name
@staticmethod
def append_empty_lines(node, empty_lines):
for kw, lines in zip(node.body, empty_lines):
kw.body.extend(lines) | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/transformers/SmartSortKeywords.py | 0.823151 | 0.632701 | SmartSortKeywords.py | pypi |
from typing import Iterable
from robot.api.parsing import Token
try:
from robot.api.parsing import Break, Continue
except ImportError:
Continue, Break = None, None
from robotidy.disablers import skip_if_disabled, skip_section_if_disabled
from robotidy.transformers import Transformer
from robotidy.utils import after_last_dot, normalize_name, wrap_in_if_and_replace_statement
class ReplaceBreakContinue(Transformer):
"""
Replace Continue For Loop and Exit For Loop keyword variants with CONTINUE and BREAK statements.
Following code:
```robotframework
*** Keywords ***
Keyword
FOR ${var} IN 1 2
Continue For Loop
Continue For Loop If $condition
Exit For Loop
Exit For Loop If $condition
END
```
will be transformed to:
```robotframework
*** Keywords ***
Keyword
FOR ${var} IN 1 2
CONTINUE
IF $condition
CONTINUE
END
BREAK
IF $condition
BREAK
END
END
```
"""
MIN_VERSION = 5
def __init__(self):
super().__init__()
self.in_loop = False
def visit_File(self, node): # noqa
self.in_loop = False
return self.generic_visit(node)
@skip_section_if_disabled
def visit_Section(self, node): # noqa
return self.generic_visit(node)
@staticmethod
def create_statement_from_tokens(statement, tokens: Iterable, indent: Token):
return statement([indent, Token(statement.type), *tokens])
@skip_if_disabled
def visit_KeywordCall(self, node): # noqa
if not self.in_loop or not node.keyword or node.errors:
return node
normalized_name = after_last_dot(normalize_name(node.keyword))
if "forloop" not in normalized_name:
return node
if normalized_name == "continueforloop":
return self.create_statement_from_tokens(statement=Continue, tokens=node.tokens[2:], indent=node.tokens[0])
elif normalized_name == "exitforloop":
return self.create_statement_from_tokens(statement=Break, tokens=node.tokens[2:], indent=node.tokens[0])
elif normalized_name == "continueforloopif":
return wrap_in_if_and_replace_statement(node, Continue, self.formatting_config.separator)
elif normalized_name == "exitforloopif":
return wrap_in_if_and_replace_statement(node, Break, self.formatting_config.separator)
return node
def visit_For(self, node): # noqa
self.in_loop = True
node = self.generic_visit(node)
self.in_loop = False
return node
visit_While = visit_For | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/transformers/ReplaceBreakContinue.py | 0.830422 | 0.562717 | ReplaceBreakContinue.py | pypi |
from typing import List
from robot.api.parsing import Token
from robotidy.disablers import skip_if_disabled
from robotidy.exceptions import InvalidParameterValueError
from robotidy.skip import Skip
from robotidy.transformers import Transformer
from robotidy.transformers.run_keywords import get_run_keywords
from robotidy.utils import (
collect_comments_from_tokens,
get_line_length_with_sep,
get_new_line,
is_token_value_in_tokens,
join_tokens_with_token,
merge_comments_into_one,
normalize_name,
split_on_token_type,
split_on_token_value,
)
class IndentNestedKeywords(Transformer):
"""
Format indentation inside run keywords variants such as ``Run Keywords`` or
``Run Keyword And Continue On Failure``.
Keywords inside run keywords variants are detected and
whitespace is formatted to outline them. This code:
```robotframework
Run Keyword Run Keyword If ${True} Run keywords Log foo AND Log bar ELSE Log baz
```
will be transformed to:
```robotframework
Run Keyword
... Run Keyword If ${True}
... Run keywords
... Log foo
... AND
... Log bar
... ELSE
... Log baz
```
``AND`` argument inside ``Run Keywords`` can be handled in different ways. It is controlled via ``indent_and``
parameter. For more details see the full documentation.
To skip formatting run keywords inside settings (such as ``Suite Setup``, ``[Setup]``, ``[Teardown]`` etc.) set
``skip_settings`` to ``True``.
"""
ENABLED = False
HANDLES_SKIP = frozenset({"skip_settings"})
def __init__(self, indent_and: str = "split", skip: Skip = None):
super().__init__(skip=skip)
self.indent_and = indent_and
self.validate_indent_and()
self.run_keywords = get_run_keywords()
def validate_indent_and(self):
modes = {"keep_in_line", "split", "split_and_indent"}
if self.indent_and not in modes:
raise InvalidParameterValueError(
self.__class__.__name__,
"indent_and",
self.indent_and,
f"Select one of: {','.join(modes)}",
)
def get_run_keyword(self, kw_name):
kw_norm = normalize_name(kw_name)
return self.run_keywords.get(kw_norm, None)
def get_setting_lines(self, node, indent): # noqa
if self.skip.setting("any") or node.errors or not len(node.data_tokens) > 1:
return None
run_keyword = self.get_run_keyword(node.data_tokens[1].value)
if not run_keyword:
return None
lines = self.parse_sub_kw(node.data_tokens[1:])
if not lines:
return None
return self.split_too_long_lines(lines, indent)
def get_separator(self, column=1, continuation=False):
if continuation:
separator = self.formatting_config.continuation_indent * column
else:
separator = self.formatting_config.separator * column
return Token(Token.SEPARATOR, separator)
def parse_keyword_lines(self, lines, tokens, new_line, eol):
separator = self.get_separator()
for column, line in lines[1:]:
tokens.extend(new_line)
tokens.append(self.get_separator(column, True))
tokens.extend(join_tokens_with_token(line, separator))
tokens.append(eol)
return tokens
@skip_if_disabled
def visit_SuiteSetup(self, node): # noqa
lines = self.get_setting_lines(node, 0)
if not lines:
return node
comments = collect_comments_from_tokens(node.tokens, indent=None)
separator = self.get_separator()
new_line = get_new_line()
tokens = [node.data_tokens[0], separator, *join_tokens_with_token(lines[0][1], separator)]
node.tokens = self.parse_keyword_lines(lines, tokens, new_line, eol=node.tokens[-1])
return (*comments, node)
visit_SuiteTeardown = visit_TestSetup = visit_TestTeardown = visit_SuiteSetup
@skip_if_disabled
def visit_Setup(self, node): # noqa
indent = len(node.tokens[0].value)
lines = self.get_setting_lines(node, indent)
if not lines:
return node
indent = node.tokens[0]
separator = self.get_separator()
new_line = get_new_line(indent)
tokens = [indent, node.data_tokens[0], separator, *join_tokens_with_token(lines[0][1], separator)]
comment = merge_comments_into_one(node.tokens)
if comment:
# need to add comments on first line for [Setup] / [Teardown] settings
comment_sep = Token(Token.SEPARATOR, " ")
tokens.extend([comment_sep, comment])
node.tokens = self.parse_keyword_lines(lines, tokens, new_line, eol=node.tokens[-1])
return node
visit_Teardown = visit_Setup
@skip_if_disabled
def visit_KeywordCall(self, node): # noqa
if node.errors or not node.keyword:
return node
run_keyword = self.get_run_keyword(node.keyword)
if not run_keyword:
return node
indent = node.tokens[0]
comments = collect_comments_from_tokens(node.tokens, indent)
assign, kw_tokens = split_on_token_type(node.data_tokens, Token.KEYWORD)
lines = self.parse_sub_kw(kw_tokens)
if not lines:
return node
lines = self.split_too_long_lines(lines, len(self.formatting_config.separator))
separator = self.get_separator()
tokens = [indent]
if assign:
tokens.extend([*join_tokens_with_token(assign, separator), separator])
tokens.extend(join_tokens_with_token(lines[0][1], separator))
new_line = get_new_line(indent)
node.tokens = self.parse_keyword_lines(lines, tokens, new_line, eol=node.tokens[-1])
return (*comments, node)
def split_too_long_lines(self, lines, indent):
"""
Parse indented lines to split too long lines
"""
# TODO: Keep things like ELSE IF <condition>, Run Keyword If <> together no matter what
if "SplitTooLongLine" not in self.transformers:
return lines
allowed_length = self.transformers["SplitTooLongLine"].line_length
sep_len = len(self.formatting_config.separator)
new_lines = []
for column, line in lines:
pre_indent = self.calculate_line_indent(column, indent)
if (
column == 0
or len(line) == 1
or (pre_indent + get_line_length_with_sep(line, sep_len)) <= allowed_length
):
new_lines.append((column, line))
continue
if (pre_indent + get_line_length_with_sep(line[:2], sep_len)) <= allowed_length:
first_line_end = 2
else:
first_line_end = 1
new_lines.append((column, line[:first_line_end]))
new_lines.extend([(column + 1, [arg]) for arg in line[first_line_end:]])
return new_lines
def calculate_line_indent(self, column, starting_indent):
"""Calculate with of the continuation indent.
For example following line will have 4 + 3 + 2x column x 4 indent with:
... argument
"""
return starting_indent + len(self.formatting_config.continuation_indent) * column + 3
def parse_sub_kw(self, tokens, column=0):
if not tokens:
return []
run_keyword = self.get_run_keyword(tokens[0].value)
if not run_keyword:
return [(column, list(tokens))]
lines = [(column, tokens[: run_keyword.resolve])]
tokens = tokens[run_keyword.resolve :]
if run_keyword.branches:
if "ELSE IF" in run_keyword.branches:
while is_token_value_in_tokens("ELSE IF", tokens):
column = max(column, 1)
prefix, branch, tokens = split_on_token_value(tokens, "ELSE IF", 2)
lines.extend(self.parse_sub_kw(prefix, column + 1))
lines.append((column, branch))
if "ELSE" in run_keyword.branches and is_token_value_in_tokens("ELSE", tokens):
return self.split_on_else(tokens, lines, column)
elif run_keyword.split_on_and:
return self.split_on_and(tokens, lines, column)
return lines + self.parse_sub_kw(tokens, column + 1)
def split_on_else(self, tokens, lines, column):
column = max(column, 1)
prefix, branch, tokens = split_on_token_value(tokens, "ELSE", 1)
lines.extend(self.parse_sub_kw(prefix, column + 1))
lines.append((column, branch))
lines.extend(self.parse_sub_kw(tokens, column + 1))
return lines
def split_on_and(self, tokens, lines, column):
if is_token_value_in_tokens("AND", tokens):
while is_token_value_in_tokens("AND", tokens):
prefix, branch, tokens = split_on_token_value(tokens, "AND", 1)
if self.indent_and == "keep_in_line":
lines.extend(self.parse_sub_kw(prefix + branch, column + 1))
else:
indent = int(self.indent_and == "split_and_indent") # indent = 1 for split_and_indent, else 0
lines.extend(self.parse_sub_kw(prefix, column + 1 + indent))
lines.append((column + 1, branch))
indent = int(self.indent_and == "split_and_indent") # indent = 1 for split_and_indent, else 0
lines.extend(self.parse_sub_kw(tokens, column + 1 + indent))
else:
lines.extend([(column + 1, [kw_token]) for kw_token in tokens])
return lines | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/transformers/IndentNestedKeywords.py | 0.755907 | 0.703148 | IndentNestedKeywords.py | pypi |
import re
from typing import List
from robot.api.parsing import Comment, Token
try:
from robot.api.parsing import InlineIfHeader
except ImportError:
InlineIfHeader = None
from robotidy.disablers import skip_if_disabled, skip_section_if_disabled
from robotidy.skip import Skip
from robotidy.transformers import Transformer
from robotidy.transformers.run_keywords import get_run_keywords
from robotidy.utils import ROBOT_VERSION, normalize_name
EOL = Token(Token.EOL)
CONTINUATION = Token(Token.CONTINUATION)
class SplitTooLongLine(Transformer):
"""
Split too long lines.
If line exceeds given length limit (120 by default) it will be split:
```robotframework
*** Keywords ***
Keyword
Keyword With Longer Name ${arg1} ${arg2} ${arg3} # let's assume that arg2 is at 120 char
```
To:
```robotframework
*** Keywords ***
Keyword
# let's assume that arg2 is at 120 char
Keyword With Longer Name
... ${arg1}
... ${arg2}
... ${arg3}
```
Allowed line length is configurable using global parameter ``--line-length``:
```
robotidy --line-length 140 src.robot
```
Or using dedicated for this transformer parameter ``line_length``:
```
robotidy --configure SplitTooLongLine:line_length:140 src.robot
```
``split_on_every_arg``, `split_on_every_value`` and ``split_on_every_setting_arg`` flags (``True`` by default)
controls whether arguments and values are split or fills the line until character limit:
```robotframework
*** Test Cases ***
Test with split_on_every_arg = True (default)
# arguments are split
Keyword With Longer Name
... ${arg1}
... ${arg2}
... ${arg3}
Test with split_on_every_arg = False
# ${arg1} fits under limit, so it stays in the line
Keyword With Longer Name ${arg1}
... ${arg2} ${arg3}
```
Supports global formatting params: ``spacecount`` and ``separator``.
"""
IGNORED_WHITESPACE = {Token.EOL, Token.CONTINUATION}
HANDLES_SKIP = frozenset({"skip_comments", "skip_keyword_call", "skip_keyword_call_pattern", "skip_sections"})
def __init__(
self,
line_length: int = None,
split_on_every_arg: bool = True,
split_on_every_value: bool = True,
split_on_every_setting_arg: bool = True,
split_single_value: bool = False,
align_new_line: bool = False,
skip: Skip = None,
):
super().__init__(skip)
self._line_length = line_length
self.split_on_every_arg = split_on_every_arg
self.split_on_every_value = split_on_every_value
self.split_on_every_setting_arg = split_on_every_setting_arg
self.split_single_value = split_single_value
self.align_new_line = align_new_line
self.robocop_disabler_pattern = re.compile(
r"(# )+(noqa|robocop: ?(?P<disabler>disable|enable)=?(?P<rules>[\w\-,]*))"
)
self.run_keywords = get_run_keywords()
@property
def line_length(self):
return self.formatting_config.line_length if self._line_length is None else self._line_length
def is_run_keyword(self, kw_name):
"""
Skip formatting if the keyword is already handled by IndentNestedKeywords transformer.
Special indentation is preserved thanks for this.
"""
if "IndentNestedKeywords" not in self.transformers:
return False
kw_norm = normalize_name(kw_name)
return kw_norm in self.run_keywords
@skip_section_if_disabled
def visit_Section(self, node): # noqa
return self.generic_visit(node)
def visit_If(self, node): # noqa
if self.is_inline(node):
return node
if node.orelse:
self.generic_visit(node.orelse)
return self.generic_visit(node)
@staticmethod
def is_inline(node):
return ROBOT_VERSION.major > 4 and isinstance(node.header, InlineIfHeader)
def should_transform_node(self, node):
if not self.any_line_too_long(node):
return False
# find if any line contains more than one data tokens - so we have something to split
for line in node.lines:
count = 0
for token in line:
if token.type not in Token.NON_DATA_TOKENS:
count += 1
if count > 1:
return True
return False
def any_line_too_long(self, node):
for line in node.lines:
if self.skip.comments:
line = "".join(token.value for token in line if token.type != Token.COMMENT)
else:
line = "".join(token.value for token in line)
line = self.robocop_disabler_pattern.sub("", line)
line = line.rstrip().expandtabs(4)
if len(line) >= self.line_length:
return True
return False
def visit_KeywordCall(self, node): # noqa
if self.skip.keyword_call(node):
return node
if not self.should_transform_node(node):
return node
if self.disablers.is_node_disabled(node, full_match=False):
return node
if self.is_run_keyword(node.keyword):
return node
return self.split_keyword_call(node)
@skip_if_disabled
def visit_Variable(self, node): # noqa
if not self.should_transform_node(node):
return node
return self.split_variable_def(node)
@skip_if_disabled
def visit_Tags(self, node): # noqa
if self.skip.setting("tags"): # TODO test
return node
return self.split_setting_with_args(node, settings_section=False)
@skip_if_disabled
def visit_Arguments(self, node): # noqa
if self.skip.setting("arguments"):
return node
return self.split_setting_with_args(node, settings_section=False)
@skip_if_disabled
def visit_ForceTags(self, node): # noqa
if self.skip.setting("tags"):
return node
return self.split_setting_with_args(node, settings_section=True)
visit_DefaultTags = visit_TestTags = visit_ForceTags
def split_setting_with_args(self, node, settings_section):
if not self.should_transform_node(node):
return node
if self.disablers.is_node_disabled(node, full_match=False):
return node
if settings_section:
indent = 0
token_index = 1
else:
indent = node.tokens[0]
token_index = 2
line = list(node.tokens[:token_index])
tokens, comments = self.split_tokens(node.tokens, line, self.split_on_every_setting_arg, indent)
if indent:
comments = [Comment([indent, comment, EOL]) for comment in comments]
else:
comments = [Comment([comment, EOL]) for comment in comments]
node.tokens = tokens
return (node, *comments)
@staticmethod
def join_on_separator(tokens, separator):
for token in tokens:
yield token
yield separator
@staticmethod
def split_to_multiple_lines(tokens, indent, separator):
first = True
for token in tokens:
yield indent
if not first:
yield CONTINUATION
yield separator
yield token
yield EOL
first = False
def split_tokens(self, tokens, line, split_on, indent=None):
separator = Token(Token.SEPARATOR, self.formatting_config.separator)
align_new_line = self.align_new_line and not split_on
if align_new_line:
cont_indent = None
else:
cont_indent = Token(Token.SEPARATOR, self.formatting_config.continuation_indent)
split_tokens, comments = [], []
# Comments with separators inside them are split into
# [COMMENT, SEPARATOR, COMMENT] tokens in the AST, so in order to preserve the
# original comment, we need a lookback on the separator tokens.
last_separator = None
for token in tokens:
if token.type in self.IGNORED_WHITESPACE:
continue
if token.type == Token.SEPARATOR:
last_separator = token
elif token.type == Token.COMMENT:
self.join_split_comments(comments, token, last_separator)
elif token.type == Token.ARGUMENT:
if token.value == "":
token.value = "${EMPTY}"
if split_on or not self.col_fit_in_line(line + [separator, token]):
if align_new_line and cont_indent is None: # we are yet to calculate aligned indent
cont_indent = Token(Token.SEPARATOR, self.calculate_align_separator(line))
line.append(EOL)
split_tokens.extend(line)
if indent:
line = [indent, CONTINUATION, cont_indent, token]
else:
line = [CONTINUATION, cont_indent, token]
else:
line.extend([separator, token])
split_tokens.extend(line)
split_tokens.append(EOL)
return split_tokens, comments
@staticmethod
def join_split_comments(comments: List, token: Token, last_separator: Token):
"""Join split comments when splitting line.
AST splits comments with separators, e.g.
"# Comment rest" -> ["# Comment", " ", "rest"].
Notice the third value not starting with a hash - we need to join such comment with previous comment.
"""
if comments and not token.value.startswith("#"):
comments[-1].value += last_separator.value + token.value
else:
comments.append(token)
def calculate_align_separator(self, line: List) -> str:
"""Calculate width of the separator required to align new line to previous line."""
if len(line) <= 2:
# line only fits one column, so we don't have anything to align it for
return self.formatting_config.continuation_indent
first_data_token = next((token.value for token in line if token.type != Token.SEPARATOR), "")
# Decrease by 3 for ... token
align_width = len(first_data_token) + len(self.formatting_config.separator) - 3
return align_width * " "
def split_variable_def(self, node):
if len(node.value) < 2 and not self.split_single_value:
return node
line = [node.data_tokens[0]]
tokens, comments = self.split_tokens(node.tokens, line, self.split_on_every_value)
comments = [Comment([comment, EOL]) for comment in comments]
node.tokens = tokens
return (*comments, node)
def split_keyword_call(self, node):
separator = Token(Token.SEPARATOR, self.formatting_config.separator)
cont_indent = Token(Token.SEPARATOR, self.formatting_config.continuation_indent)
indent = node.tokens[0]
keyword = node.get_token(Token.KEYWORD)
# check if assign tokens needs to be split too
assign = node.get_tokens(Token.ASSIGN)
line = [indent, *self.join_on_separator(assign, separator), keyword]
if assign and not self.col_fit_in_line(line):
head = [
*self.split_to_multiple_lines(assign, indent=indent, separator=cont_indent),
indent,
CONTINUATION,
cont_indent,
keyword,
]
line = []
else:
head = []
tokens, comments = self.split_tokens(
node.tokens[node.tokens.index(keyword) + 1 :], line, self.split_on_every_arg, indent
)
head.extend(tokens)
comment_tokens = []
for comment in comments:
comment_tokens.extend([indent, comment, EOL])
node.tokens = comment_tokens + head
return node
def col_fit_in_line(self, tokens):
return self.len_token_text(tokens) < self.line_length
@staticmethod
def len_token_text(tokens):
return sum(len(token.value) for token in tokens) | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/transformers/SplitTooLongLine.py | 0.7696 | 0.62621 | SplitTooLongLine.py | pypi |
from robot.api.parsing import DefaultTags, ForceTags, Tags, Token
from robotidy.disablers import skip_section_if_disabled
from robotidy.transformers import Transformer
class OrderTags(Transformer):
"""
Order tags.
Tags are ordered in lexicographic order like this:
```robotframework
*** Test Cases ***
Tags Upper Lower
[Tags] ba Ab Bb Ca Cb aa
My Keyword
*** Keywords ***
My Keyword
[Tags] ba Ab Bb Ca Cb aa
No Operation
```
To:
```robotframework
*** Test Cases ***
Tags Upper Lower
[Tags] aa Ab ba Bb Ca Cb
My Keyword
*** Keywords ***
My Keyword
[Tags] aa Ab ba Bb Ca Cb
No Operation
```
Default order can be changed using following parameters:
- ``case_sensitive = False``
- ``reverse = False``
"""
ENABLED = False
def __init__(
self,
case_sensitive: bool = False,
reverse: bool = False,
default_tags: bool = True,
force_tags: bool = True,
):
super().__init__()
self.key = self.get_key(case_sensitive)
self.reverse = reverse
self.default_tags = default_tags
self.force_tags = force_tags
@skip_section_if_disabled
def visit_Section(self, node): # noqa
return self.generic_visit(node)
def visit_Tags(self, node): # noqa
return self.order_tags(node, indent=True)
def visit_DefaultTags(self, node): # noqa
return self.order_tags(node) if self.default_tags else node
def visit_ForceTags(self, node): # noqa
return self.order_tags(node) if self.force_tags else node
def order_tags(self, node, indent=False):
if self.disablers.is_node_disabled(node):
return node
ordered_tags = sorted(
(tag.value for tag in node.data_tokens[1:]),
key=self.key,
reverse=self.reverse,
)
if len(ordered_tags) <= 1:
return node
comments = node.get_tokens(Token.COMMENT)
tokens = []
if indent:
tokens.append(Token(Token.SEPARATOR, self.formatting_config.indent))
tokens.append(node.data_tokens[0])
tag_tokens = (Token(Token.ARGUMENT, tag) for tag in ordered_tags)
tokens.extend(self.join_tokens(tag_tokens))
tokens.extend(self.join_tokens(comments))
tokens.append(Token(Token.EOL))
node.tokens = tokens
return node
def join_tokens(self, tokens):
joined_tokens = []
separator = Token(Token.SEPARATOR, self.formatting_config.separator)
for token in tokens:
joined_tokens.append(separator)
joined_tokens.append(token)
return joined_tokens
@staticmethod
def get_key(case_sensitive):
return str if case_sensitive else str.casefold | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/transformers/OrderTags.py | 0.81457 | 0.804866 | OrderTags.py | pypi |
from itertools import chain
from robot.api.parsing import Comment, ElseHeader, ElseIfHeader, End, If, IfHeader, KeywordCall, Token
try:
from robot.api.parsing import Break, Continue, InlineIfHeader, ReturnStatement
except ImportError:
ReturnStatement, Break, Continue, InlineIfHeader = None, None, None, None
from robotidy.disablers import skip_section_if_disabled
from robotidy.transformers import Transformer
from robotidy.utils import flatten_multiline, get_comments, normalize_name
class InlineIf(Transformer):
"""
Replaces IF blocks with inline IF.
It will only replace IF block if it can fit in one line shorter than `line_length` (default 80) parameter and return
variables matches for all ELSE and ELSE IF branches.
Following code:
```robotframework
*** Test Cases ***
Test
IF $condition1
Keyword argument
END
IF $condition2
${var} Keyword
ELSE
${var} Keyword 2
END
IF $condition1
Keyword argument
Keyword 2
END
```
will be transformed to:
```robotframework
*** Test Cases ***
Test
IF $condition1 Keyword argument
${var} IF $condition2 Keyword ELSE Keyword 2
IF $condition1
Keyword argument
Keyword 2
END
```
Too long inline IFs (over `line_length` character limit) will be replaced with normal IF block.
You can decide to not replace IF blocks containing ELSE or ELSE IF branches by setting `skip_else` to True.
Supports global formatting params: `--startline` and `--endline`.
"""
MIN_VERSION = 5
def __init__(self, line_length: int = 80, skip_else: bool = False):
super().__init__()
self.line_length = line_length
self.skip_else = skip_else
@skip_section_if_disabled
def visit_Section(self, node): # noqa
return self.generic_visit(node)
def visit_If(self, node: If): # noqa
if node.errors or getattr(node.end, "errors", None):
return node
if self.disablers.is_node_disabled(node, full_match=False):
return node
if self.is_inline(node):
return self.handle_inline(node)
self.generic_visit(node)
if node.orelse:
self.generic_visit(node.orelse)
if self.no_end(node):
return node
indent = node.header.tokens[0]
if not (self.should_transform(node) and self.assignment_identical(node)):
return node
return self.to_inline(node, indent.value)
def should_transform(self, node):
if node.header.errors:
return False
if (
len(node.body) > 1
or not node.body
or not isinstance(node.body[0], (KeywordCall, ReturnStatement, Break, Continue))
):
return False
if node.orelse:
return self.should_transform(node.orelse)
return True
@staticmethod
def if_to_branches(if_block):
while if_block:
yield if_block
if_block = if_block.orelse
def assignment_identical(self, node):
else_found = False
assigned = []
for branch in self.if_to_branches(node):
if isinstance(branch.header, ElseHeader):
else_found = True
if not isinstance(branch.body[0], KeywordCall) or not branch.body[0].assign:
assigned.append([])
else:
assigned.append([normalize_name(assign).replace("=", "") for assign in branch.body[0].assign])
if len(assigned) > 1 and assigned[-1] != assigned[-2]:
return False
if any(x for x in assigned):
return else_found
return True
def is_shorter_than_limit(self, inline_if):
line_len = sum(self.if_len(branch) for branch in self.if_to_branches(inline_if))
return line_len <= self.line_length
@staticmethod
def no_end(node):
if not node.end:
return True
if not len(node.end.tokens) == 1:
return False
return not node.end.tokens[0].value
@staticmethod
def is_inline(node):
return isinstance(node.header, InlineIfHeader)
@staticmethod
def if_len(if_st):
return sum(
len(tok.value)
for tok in chain(if_st.body[0].tokens if if_st.body else [], if_st.header.tokens)
if tok.value != "\n"
)
def to_inline(self, node, indent):
tail = node
comments = self.collect_comments_from_if(indent, node)
if_block = head = self.inline_if_from_branch(node, indent)
while tail.orelse:
if self.skip_else:
return node
tail = tail.orelse
comments += self.collect_comments_from_if(indent, tail)
head.orelse = self.inline_if_from_branch(tail, self.formatting_config.separator)
head = head.orelse
if self.is_shorter_than_limit(if_block):
return (*comments, if_block)
return node
def inline_if_from_branch(self, node, indent):
if not node:
return None
separator = self.formatting_config.separator
last_token = Token(Token.EOL) if node.orelse is None else Token(Token.SEPARATOR, separator)
assigned = None
if isinstance(node.body[0], KeywordCall):
assigned = node.body[0].assign
keyword = self.to_inline_keyword(node.body[0], separator, last_token)
elif isinstance(node.body[0], ReturnStatement):
keyword = self.to_inline_return(node.body[0], separator, last_token)
elif isinstance(node.body[0], Break):
keyword = Break(self.to_inline_break_continue_tokens(Token.BREAK, separator, last_token))
elif isinstance(node.body[0], Continue):
keyword = Continue(self.to_inline_break_continue_tokens(Token.CONTINUE, separator, last_token))
else:
return node
# check for ElseIfHeader first since it's child of IfHeader class
if isinstance(node.header, ElseIfHeader):
header = ElseIfHeader(
[Token(Token.ELSE_IF), Token(Token.SEPARATOR, separator), Token(Token.ARGUMENT, node.header.condition)]
)
elif isinstance(node.header, IfHeader):
tokens = [Token(Token.SEPARATOR, indent)]
if assigned:
for assign in assigned:
tokens.extend([Token(Token.ASSIGN, assign), Token(Token.SEPARATOR, separator)])
tokens.extend(
[
Token(Token.INLINE_IF),
Token(Token.SEPARATOR, separator),
Token(Token.ARGUMENT, node.header.condition),
]
)
header = InlineIfHeader(tokens)
elif isinstance(node.header, ElseHeader):
header = ElseHeader([Token(Token.ELSE)])
else:
return node
return If(header=header, body=[keyword])
@staticmethod
def to_inline_keyword(keyword, separator, last_token):
tokens = [Token(Token.SEPARATOR, separator), Token(Token.KEYWORD, keyword.keyword)]
for arg in keyword.get_tokens(Token.ARGUMENT):
tokens.extend([Token(Token.SEPARATOR, separator), arg])
tokens.append(last_token)
return KeywordCall(tokens)
@staticmethod
def to_inline_return(node, separator, last_token):
tokens = [Token(Token.SEPARATOR, separator), Token(Token.RETURN_STATEMENT)]
for value in node.values:
tokens.extend([Token(Token.SEPARATOR, separator), Token(Token.ARGUMENT, value)])
tokens.append(last_token)
return ReturnStatement(tokens)
@staticmethod
def to_inline_break_continue_tokens(token, separator, last_token):
return [Token(Token.SEPARATOR, separator), Token(token), last_token]
@staticmethod
def join_on_separator(tokens, separator):
for token in tokens:
yield token
yield separator
@staticmethod
def collect_comments_from_if(indent, node):
comments = get_comments(node.header.tokens)
for statement in node.body:
comments += get_comments(statement.tokens)
if node.end:
comments += get_comments(node.end)
return [Comment.from_params(comment=comment.value, indent=indent) for comment in comments]
def create_keyword_for_inline(self, kw_tokens, indent, assign):
keyword_tokens = []
for token in kw_tokens:
keyword_tokens.append(Token(Token.SEPARATOR, self.formatting_config.separator))
keyword_tokens.append(token)
return KeywordCall.from_tokens(
[
Token(Token.SEPARATOR, indent + self.formatting_config.separator),
*assign,
*keyword_tokens[1:],
Token(Token.EOL),
]
)
def flatten_if_block(self, node):
node.header.tokens = flatten_multiline(
node.header.tokens, self.formatting_config.separator, remove_comments=True
)
for index, statement in enumerate(node.body):
node.body[index].tokens = flatten_multiline(
statement.tokens, self.formatting_config.separator, remove_comments=True
)
return node
def is_if_multiline(self, node):
for branch in self.if_to_branches(node):
if branch.header.get_token(Token.CONTINUATION):
return True
if any(statement.get_token(Token.CONTINUATION) for statement in branch.body):
return True
return False
def flatten_inline_if(self, node):
indent = node.header.tokens[0].value
comments = self.collect_comments_from_if(indent, node)
node = self.flatten_if_block(node)
head = node
while head.orelse:
head = head.orelse
comments += self.collect_comments_from_if(indent, head)
head = self.flatten_if_block(head)
return comments, node
def handle_inline(self, node):
if self.is_if_multiline(node):
comments, node = self.flatten_inline_if(node)
else:
comments = []
if self.is_shorter_than_limit(node): # TODO ignore comments?
return (*comments, node)
indent = node.header.tokens[0]
separator = self.formatting_config.separator
assign_tokens = node.header.get_tokens(Token.ASSIGN)
assign = [*self.join_on_separator(assign_tokens, Token(Token.SEPARATOR, separator))]
else_present = False
branches = []
while node:
new_comments, if_block, else_found = self.handle_inline_if_create(node, indent.value, assign)
else_present = else_present or else_found
comments += new_comments
branches.append(if_block)
node = node.orelse
if not else_present and assign_tokens:
header = ElseHeader.from_params(indent=indent.value)
keyword = self.create_keyword_for_inline(
[
Token(Token.KEYWORD, "Set Variable"),
*[Token(Token.ARGUMENT, "${None}") for _ in range(len(assign_tokens))],
],
indent.value,
assign,
)
branches.append(If(header=header, body=[keyword]))
if_block = head = branches[0]
for branch in branches[1:]:
head.orelse = branch
head = head.orelse
if_block.end = End([indent, Token(Token.END), Token(Token.EOL)])
return (*comments, if_block)
def handle_inline_if_create(self, node, indent, assign):
comments = self.collect_comments_from_if(indent, node)
body = [self.create_keyword_for_inline(node.body[0].data_tokens, indent, assign)]
else_found = False
if isinstance(node.header, InlineIfHeader):
header = IfHeader.from_params(
condition=node.condition, indent=indent, separator=self.formatting_config.separator
)
elif isinstance(node.header, ElseIfHeader):
header = ElseIfHeader.from_params(
condition=node.condition, indent=indent, separator=self.formatting_config.separator
)
else:
header = ElseHeader.from_params(indent=indent)
else_found = True
return comments, If(header=header, body=body), else_found | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/transformers/InlineIf.py | 0.695648 | 0.47384 | InlineIf.py | pypi |
import string
from robotidy.disablers import skip_section_if_disabled
from robotidy.skip import Skip
from robotidy.transformers import Transformer
class NormalizeSectionHeaderName(Transformer):
"""
Normalize section headers names.
Robot Framework is quite flexible with the section header naming. Following lines are equal:
```robotframework
*setting
*** SETTINGS
*** SettingS ***
```
This transformer normalizes naming to follow ``*** SectionName ***`` format (with plural variant):
```robotframework
*** Settings ***
*** Keywords ***
*** Test Cases ***
*** Variables ***
*** Comments ***
```
Optional data after section header (for example data driven column names) is preserved.
It is possible to upper case section header names by passing ``uppercase=True`` parameter:
```robotframework
*** SETTINGS ***
```
"""
HANDLES_SKIP = frozenset({"skip_sections"})
EN_SINGULAR_HEADERS = {"comment", "setting", "variable", "task", "test case", "keyword"}
def __init__(self, uppercase: bool = False, skip: Skip = None):
super().__init__(skip)
self.uppercase = uppercase
@skip_section_if_disabled
def visit_Section(self, node): # noqa
return self.generic_visit(node)
def visit_SectionHeader(self, node): # noqa
if not node.name:
return node
# only normalize, and if found in english ones then add plural
header_name = node.data_tokens[0].value
header_name = header_name.replace("*", "").strip()
if header_name.lower() in self.EN_SINGULAR_HEADERS:
header_name += "s"
if self.uppercase:
header_name = header_name.upper()
else:
header_name = string.capwords(header_name)
# we only modify header token value in order to preserver optional data driven testing column names
node.data_tokens[0].value = f"*** {header_name} ***"
return node | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/transformers/NormalizeSectionHeaderName.py | 0.804981 | 0.779238 | NormalizeSectionHeaderName.py | pypi |
from robot.api.parsing import (
Comment,
ElseHeader,
ElseIfHeader,
EmptyLine,
End,
ForHeader,
IfHeader,
ModelVisitor,
Template,
Token,
)
from robotidy.disablers import skip_if_disabled, skip_section_if_disabled
from robotidy.transformers import Transformer
from robotidy.utils import is_suite_templated, round_to_four
class AlignTemplatedTestCases(Transformer):
"""
Align templated Test Cases to columns.
Following code:
```robotframework
*** Test Cases *** baz qux
# some comment
test1 hi hello
test2 long test name asdfasdf asdsdfgsdfg
```
will be transformed to:
```robotframework
*** Test Cases *** baz qux
# some comment
test1 hi hello
test2 long test name asdfasdf asdsdfgsdfg
bar1 bar2
```
If you don't want to align test case section that does not contain header names (in above example baz and quz are
header names) then configure `only_with_headers` parameter:
```
robotidy -c AlignSettingsSection:only_with_headers:True <src>
```
For non-templated test cases use ``AlignTestCasesSection`` transformer.
"""
ENABLED = False
def __init__(self, only_with_headers: bool = False, min_width: int = None):
super().__init__()
self.only_with_headers = only_with_headers
self.min_width = min_width
self.widths = None
self.test_name_len = 0
self.test_without_eol = False
self.indent = 0
def visit_File(self, node): # noqa
if not is_suite_templated(node):
return node
self.test_without_eol = False
return self.generic_visit(node)
def visit_If(self, node): # noqa
self.indent += 1
self.generic_visit(node)
self.indent -= 1
return node
visit_Else = visit_ElseIf = visit_For = visit_If
@skip_section_if_disabled
def visit_TestCaseSection(self, node): # noqa
if len(node.header.data_tokens) == 1 and self.only_with_headers:
return node
counter = ColumnWidthCounter(self.disablers)
counter.visit(node)
self.widths = counter.widths
return self.generic_visit(node)
def visit_TestCase(self, node): # noqa
for statement in node.body:
if isinstance(statement, Template) and statement.value is None:
return node
return self.generic_visit(node)
@skip_if_disabled
def visit_Statement(self, statement): # noqa
if statement.type == Token.TESTCASE_NAME:
self.test_name_len = len(statement.data_tokens[0].value) if statement.data_tokens else 0
self.test_without_eol = statement.tokens[-1].type != Token.EOL
elif statement.type == Token.TESTCASE_HEADER:
self.align_header(statement)
elif not isinstance(
statement,
(Comment, EmptyLine, ForHeader, IfHeader, ElseHeader, ElseIfHeader, End),
):
self.align_statement(statement)
return statement
def align_header(self, statement):
tokens = []
# *** Test Cases *** baz qux
# *** Test Cases *** baz qux
for index, token in enumerate(statement.data_tokens[:-1]):
tokens.append(token)
if self.min_width:
separator = max(self.formatting_config.space_count, self.min_width - len(token.value)) * " "
else:
separator = (self.widths[index] - len(token.value) + self.formatting_config.space_count) * " "
tokens.append(Token(Token.SEPARATOR, separator))
tokens.append(statement.data_tokens[-1])
tokens.append(statement.tokens[-1]) # eol
statement.tokens = tokens
return statement
def align_statement(self, statement):
tokens = []
for line in statement.lines:
strip_line = [t for t in line if t.type not in (Token.SEPARATOR, Token.EOL)]
line_pos = 0
exp_pos = 0
widths = self.get_widths(statement)
for token, width in zip(strip_line, widths):
if self.min_width:
exp_pos += max(width + self.formatting_config.space_count, self.min_width)
else:
exp_pos += width + self.formatting_config.space_count
if self.test_without_eol:
self.test_without_eol = False
exp_pos -= self.test_name_len
tokens.append(Token(Token.SEPARATOR, (exp_pos - line_pos) * " "))
tokens.append(token)
line_pos += len(token.value) + exp_pos - line_pos
tokens.append(line[-1])
statement.tokens = tokens
def get_widths(self, statement):
indent = self.indent
if isinstance(statement, (ForHeader, End, IfHeader, ElseHeader, ElseIfHeader)):
indent -= 1
if not indent:
return self.widths
return [max(width, indent * self.formatting_config.space_count) for width in self.widths]
def visit_SettingSection(self, node): # noqa
return node
visit_VariableSection = visit_KeywordSection = visit_CommentSection = visit_SettingSection
class ColumnWidthCounter(ModelVisitor):
def __init__(self, disablers):
self.widths = []
self.disablers = disablers
self.test_name_lineno = -1
self.any_one_line_test = False
self.header_with_cols = False
def visit_TestCaseSection(self, node): # noqa
self.generic_visit(node)
if not self.header_with_cols and not self.any_one_line_test and self.widths:
self.widths[0] = 0
self.widths = [round_to_four(length) for length in self.widths]
def visit_TestCase(self, node): # noqa
for statement in node.body:
if isinstance(statement, Template) and statement.value is None:
return
self.generic_visit(node)
@skip_if_disabled
def visit_Statement(self, statement): # noqa
if statement.type == Token.COMMENT:
return
if statement.type == Token.TESTCASE_HEADER:
if len(statement.data_tokens) > 1:
self.header_with_cols = True
self._count_widths_from_statement(statement)
elif statement.type == Token.TESTCASE_NAME:
if self.widths:
self.widths[0] = max(self.widths[0], len(statement.name))
else:
self.widths.append(len(statement.name))
self.test_name_lineno = statement.lineno
else:
if self.test_name_lineno == statement.lineno:
self.any_one_line_test = True
if not isinstance(statement, (ForHeader, IfHeader, ElseHeader, ElseIfHeader, End)):
self._count_widths_from_statement(statement, indent=1)
def _count_widths_from_statement(self, statement, indent=0):
for line in statement.lines:
line = [t for t in line if t.type not in (Token.SEPARATOR, Token.EOL)]
for index, token in enumerate(line, start=indent):
if index < len(self.widths):
self.widths[index] = max(self.widths[index], len(token.value))
else:
self.widths.append(len(token.value)) | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/transformers/AlignTemplatedTestCases.py | 0.746046 | 0.723213 | AlignTemplatedTestCases.py | pypi |
from robot.api.parsing import DefaultTags, ForceTags, Tags, Token
from robotidy.disablers import skip_section_if_disabled
from robotidy.exceptions import InvalidParameterValueError
from robotidy.transformers import Transformer
class NormalizeTags(Transformer):
"""
Normalize tag names by normalizing case and removing duplicates.
Example usage:
```
robotidy --transform NormalizeTags:case=lowercase test.robot
```
Other supported cases: uppercase, title case. The default is lowercase.
You can also run it to remove duplicates but preserve current case by setting ``normalize_case`` parameter to False:
```
robotidy --transform NormalizeTags:normalize_case=False test.robot
```
NormalizeTags will change the formatting of the tags by removing the duplicates, new lines and moving comments.
If you want to preserved formatting set ``preserve_format``:
```
robotidy --configure NormalizeTags:preserve_format=True test.robot
```
The duplicates will not be removed with ``preserve_format`` set to ``True``.
"""
CASE_FUNCTIONS = {
"lowercase": str.lower,
"uppercase": str.upper,
"titlecase": str.title,
}
def __init__(self, case: str = "lowercase", normalize_case: bool = True, preserve_format: bool = False):
super().__init__()
self.case = case.lower()
self.normalize_case = normalize_case
self.preserve_format = preserve_format
try:
self.case_function = self.CASE_FUNCTIONS[self.case]
except KeyError:
raise InvalidParameterValueError(
self.__class__.__name__, "case", case, "Supported cases: lowercase, uppercase, titlecase."
)
@skip_section_if_disabled
def visit_Section(self, node): # noqa
return self.generic_visit(node)
def visit_Tags(self, node): # noqa
return self.normalize_tags(node, indent=True)
def visit_DefaultTags(self, node): # noqa
return self.normalize_tags(node)
def visit_ForceTags(self, node): # noqa
return self.normalize_tags(node)
def normalize_tags(self, node, indent=False):
if self.disablers.is_node_disabled(node, full_match=False):
return node
if self.preserve_format:
return self.normalize_tags_tokens_preserve_formatting(node)
return self.normalize_tags_tokens_ignore_formatting(node, indent)
def normalize_tags_tokens_preserve_formatting(self, node):
if not self.normalize_case:
return node
for token in node.tokens:
if token.type != Token.ARGUMENT:
continue
token.value = self.case_function(token.value)
return node
def normalize_tags_tokens_ignore_formatting(self, node, indent):
separator = Token(Token.SEPARATOR, self.formatting_config.separator)
setting_name = node.data_tokens[0]
tags = [tag.value for tag in node.data_tokens[1:]]
if self.normalize_case:
tags = self.convert_case(tags)
tags = self.remove_duplicates(tags)
comments = node.get_tokens(Token.COMMENT)
if indent:
tokens = [Token(Token.SEPARATOR, self.formatting_config.indent), setting_name]
else:
tokens = [setting_name]
for tag in tags:
tokens.extend([separator, Token(Token.ARGUMENT, tag)])
if comments:
tokens.extend(self.join_tokens(comments))
tokens.append(Token(Token.EOL))
node.tokens = tuple(tokens)
return node
def convert_case(self, tags):
return [self.case_function(item) for item in tags]
@staticmethod
def remove_duplicates(tags):
return list(dict.fromkeys(tags))
def join_tokens(self, tokens):
joined_tokens = []
separator = Token(Token.SEPARATOR, self.formatting_config.separator)
for token in tokens:
joined_tokens.extend([separator, token])
return joined_tokens | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/transformers/NormalizeTags.py | 0.819605 | 0.900223 | NormalizeTags.py | pypi |
from robot.api.parsing import Comment, EmptyLine, Token
from robotidy.disablers import skip_if_disabled, skip_section_if_disabled
from robotidy.exceptions import InvalidParameterValueError, RobotidyConfigError
from robotidy.transformers import Transformer
class InvalidSettingsOrderError(InvalidParameterValueError):
def __init__(self, transformer, param_name, param_value, valid_values):
valid_names = ",".join(sorted(valid_values.keys()))
msg = f"Custom order should be provided in comma separated list with valid setting names: {valid_names}"
super().__init__(transformer, param_name, param_value, msg)
class DuplicateInSettingsOrderError(InvalidParameterValueError):
def __init__(self, transformer, param_name, param_value):
provided_order = ",".join(param.lower() for param in param_value)
msg = "Custom order cannot contain duplicated setting names."
super().__init__(transformer, param_name, provided_order, msg)
class SettingInBothOrdersError(RobotidyConfigError):
def __init__(self, transformer, first_order, second_order, duplicates):
names = ",".join(setting.lower() for setting in duplicates)
msg = (
f"{transformer}: Invalid '{first_order}' and '{second_order}' order values. "
f"Following setting names exists in both orders: {names}"
)
super().__init__(msg)
class OrderSettings(Transformer):
"""
Order settings like ``[Arguments]``, ``[Setup]``, ``[Return]`` inside Keywords and Test Cases.
Keyword settings ``[Documentation]``, ``[Tags]``, ``[Timeout]``, ``[Arguments]`` are put before keyword body and
settings like ``[Teardown]``, ``[Return]`` are moved to the end of the keyword:
```robotframework
*** Keywords ***
Keyword
[Teardown] Keyword
[Return] ${value}
[Arguments] ${arg}
[Documentation] this is
... doc
[Tags] sanity
Pass
```
To:
```robotframework
*** Keywords ***
Keyword
[Documentation] this is
... doc
[Tags] sanity
[Arguments] ${arg}
Pass
[Teardown] Keyword
[Return] ${value}
```
Test case settings ``[Documentation]``, ``[Tags]``, ``[Template]``, ``[Timeout]``, ``[Setup]`` are put before
test case body and ``[Teardown]`` is moved to the end of test case.
Default order can be changed using following parameters:
- ``keyword_before = documentation,tags,timeout,arguments``
- ``keyword_after = teardown,return``
- ``test_before = documentation,tags,template,timeout,setup``
- ``test_after = teardown``
Not all settings names need to be passed to given parameter. Missing setting names are not ordered. Example::
robotidy --configure OrderSettings:keyword_before=:keyword_after=
It will order only test cases because all setting names for keywords are missing.
"""
KEYWORD_SETTINGS = {
"documentation": Token.DOCUMENTATION,
"tags": Token.TAGS,
"timeout": Token.TIMEOUT,
"arguments": Token.ARGUMENTS,
"return": Token.RETURN,
"teardown": Token.TEARDOWN,
}
TEST_SETTINGS = {
"documentation": Token.DOCUMENTATION,
"tags": Token.TAGS,
"timeout": Token.TIMEOUT,
"template": Token.TEMPLATE,
"setup": Token.SETUP,
"teardown": Token.TEARDOWN,
}
def __init__(
self,
keyword_before: str = "documentation,tags,timeout,arguments",
keyword_after: str = "teardown,return",
test_before: str = "documentation,tags,template,timeout,setup",
test_after: str = "teardown",
):
super().__init__()
self.keyword_before = self.get_order(keyword_before, "keyword_before", self.KEYWORD_SETTINGS)
self.keyword_after = self.get_order(keyword_after, "keyword_after", self.KEYWORD_SETTINGS)
self.test_before = self.get_order(test_before, "test_before", self.TEST_SETTINGS)
self.test_after = self.get_order(test_after, "test_after", self.TEST_SETTINGS)
self.all_keyword_settings = {*self.keyword_before, *self.keyword_after}
self.all_test_settings = {*self.test_before, *self.test_after}
self.assert_no_duplicates_in_orders()
def get_order(self, order, param_name, name_map):
if not order:
return []
parts = order.lower().split(",")
try:
return [name_map[part] for part in parts]
except KeyError:
raise InvalidSettingsOrderError(self.__class__.__name__, param_name, order, name_map)
def assert_no_duplicates_in_orders(self):
"""Checks if settings are not duplicated in after/before section and in the same section itself."""
orders = {
"keyword_before": set(self.keyword_before),
"keyword_after": set(self.keyword_after),
"test_before": set(self.test_before),
"test_after": set(self.test_after),
}
# check if there is no duplicate in single order, ie test_after=setup,setup
for name, order_set in orders.items():
if len(self.__dict__[name]) != len(order_set):
raise DuplicateInSettingsOrderError(self.__class__.__name__, name, self.__dict__[name])
# check if there is no duplicate in opposite orders, ie test_before=tags test_after=tags
shared_keyword = orders["keyword_before"].intersection(orders["keyword_after"])
shared_test = orders["test_before"].intersection(orders["test_after"])
if shared_keyword:
raise SettingInBothOrdersError(self.__class__.__name__, "keyword_before", "keyword_after", shared_keyword)
if shared_test:
raise SettingInBothOrdersError(self.__class__.__name__, "test_before", "test_after", shared_test)
@skip_section_if_disabled
def visit_Section(self, node): # noqa
return self.generic_visit(node)
@skip_if_disabled
def visit_Keyword(self, node): # noqa
return self.order_settings(node, self.all_keyword_settings, self.keyword_before, self.keyword_after)
@skip_if_disabled
def visit_TestCase(self, node): # noqa
return self.order_settings(node, self.all_test_settings, self.test_before, self.test_after)
def order_settings(self, node, setting_types, before, after):
if not node.body:
return node
settings = dict()
not_settings, trailing_after = [], []
after_seen = False
# when after_seen is set to True then all statements go to trailing_after and last non data
# will be appended after tokens defined in `after` set (like [Return])
comments, header_line = [], []
for child in node.body:
if isinstance(child, Comment):
if child.lineno == node.lineno: # comment in the same line as test/kw name
header_line.append(child)
else:
comments.append(child)
elif getattr(child, "type", "invalid") in setting_types:
after_seen = after_seen or child.type in after
settings[child.type] = (comments, child)
comments = []
elif after_seen:
trailing_after.extend(comments)
comments = []
trailing_after.append(child)
else:
not_settings.extend(comments)
comments = []
not_settings.append(child)
trailing_after.extend(comments)
# comments after last data statement are considered as comment outside body
trailing_non_data = []
while trailing_after and isinstance(trailing_after[-1], (EmptyLine, Comment)):
trailing_non_data.insert(0, trailing_after.pop())
not_settings += trailing_after
node.body = (
header_line
+ self.add_in_order(before, settings)
+ not_settings
+ self.add_in_order(after, settings)
+ trailing_non_data
)
return node
@staticmethod
def add_in_order(order, settings_in_node):
nodes = []
for token_type in order:
if token_type not in settings_in_node:
continue
comments, node = settings_in_node[token_type]
nodes.extend(comments)
nodes.append(node)
return nodes | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/transformers/OrderSettings.py | 0.813609 | 0.575528 | OrderSettings.py | pypi |
from robot.api.parsing import Token
try:
from robot.api.parsing import InlineIfHeader, ReturnStatement
except ImportError:
InlineIfHeader = None
ReturnStatement = None
from robotidy.disablers import skip_if_disabled, skip_section_if_disabled
from robotidy.skip import Skip
from robotidy.transformers import Transformer
from robotidy.utils import join_comments
class NormalizeSeparators(Transformer):
"""
Normalize separators and indents.
All separators (pipes included) are converted to fixed length of 4 spaces (configurable via global argument
``--spacecount``).
To not format documentation configure ``skip_documentation`` to ``True``.
"""
HANDLES_SKIP = frozenset(
{
"skip_documentation",
"skip_keyword_call",
"skip_keyword_call_pattern",
"skip_comments",
"skip_block_comments",
"skip_sections",
}
)
def __init__(self, flatten_lines: bool = False, align_new_line: bool = False, skip: Skip = None):
super().__init__(skip=skip)
self.indent = 0
self.flatten_lines = flatten_lines
self.is_inline = False
self.align_new_line = align_new_line
self._allowed_line_length = None # we can only retrieve it after all transformers are initialized
@property
def allowed_line_length(self) -> int:
"""Get line length from SplitTooLongLine transformer or global config."""
if self._allowed_line_length is None:
if "SplitTooLongLine" in self.transformers:
self._allowed_line_length = self.transformers["SplitTooLongLine"].line_length
else:
self._allowed_line_length = self.formatting_config.line_length
return self._allowed_line_length
def visit_File(self, node): # noqa
self.indent = 0
return self.generic_visit(node)
@skip_section_if_disabled
def visit_Section(self, node): # noqa
return self.generic_visit(node)
def indented_block(self, node):
self.visit_Statement(node.header)
self.indent += 1
node.body = [self.visit(item) for item in node.body]
self.indent -= 1
return node
def visit_TestCase(self, node): # noqa
return self.indented_block(node)
visit_Keyword = visit_While = visit_TestCase # noqa
def visit_For(self, node):
node = self.indented_block(node)
self.visit_Statement(node.end)
return node
def visit_Try(self, node):
node = self.indented_block(node)
if node.next:
self.visit(node.next)
if node.end:
self.visit_Statement(node.end)
return node
def visit_If(self, node):
if self.is_inline and InlineIfHeader and isinstance(node.header, InlineIfHeader): # nested inline if is ignored
return node
self.is_inline = self.is_inline or (InlineIfHeader and isinstance(node.header, InlineIfHeader))
self.visit_Statement(node.header)
self.indent += 1
node.body = [self.visit(item) for item in node.body]
self.indent -= 1
if node.orelse:
self.visit_If(node.orelse)
if node.end:
self.visit_Statement(node.end)
self.is_inline = False
return node
@skip_if_disabled
def visit_Documentation(self, doc): # noqa
if self.skip.documentation or self.flatten_lines:
has_pipes = doc.tokens[0].value.startswith("|")
return self.handle_spaces(doc, has_pipes, only_indent=True)
return self.visit_Statement(doc)
def visit_KeywordCall(self, keyword): # noqa
if self.skip.keyword_call(keyword):
return keyword
return self.visit_Statement(keyword)
@skip_if_disabled
def visit_Comment(self, node): # noqa
if self.skip.comment(node):
return node
has_pipes = node.tokens[0].value.startswith("|")
return self.handle_spaces(node, has_pipes)
def is_keyword_inside_inline_if(self, node):
return self.is_inline and not isinstance(node, InlineIfHeader)
@skip_if_disabled
def visit_Statement(self, statement): # noqa
if statement is None:
return None
has_pipes = statement.tokens[0].value.startswith("|")
if has_pipes or not self.flatten_lines:
return self.handle_spaces(statement, has_pipes)
else:
return self.handle_spaces_and_flatten_lines(statement)
@staticmethod
def has_trailing_sep(tokens):
return tokens and tokens[-1].type == Token.SEPARATOR
def handle_spaces_and_flatten_lines(self, statement):
"""Normalize separators and flatten multiline statements to one line."""
add_eol, prev_sep = False, False
add_indent = not self.is_keyword_inside_inline_if(statement)
new_tokens, comments = [], []
for token in statement.tokens:
if token.type == Token.SEPARATOR:
if prev_sep:
continue
prev_sep = True
if add_indent:
token.value = self.formatting_config.indent * self.indent
else:
token.value = self.formatting_config.separator
elif token.type == Token.EOL:
add_eol = True
continue
elif token.type == Token.CONTINUATION:
continue
elif token.type == Token.COMMENT:
comments.append(token)
continue
else:
prev_sep = False
new_tokens.append(token)
add_indent = False
if not self.is_inline and self.has_trailing_sep(new_tokens):
new_tokens.pop()
if comments:
joined_comments = join_comments(comments)
if self.has_trailing_sep(new_tokens):
joined_comments = joined_comments[1:]
new_tokens.extend(joined_comments)
if add_eol:
new_tokens.append(Token(Token.EOL))
statement.tokens = new_tokens
self.generic_visit(statement)
return statement
def handle_spaces(self, statement, has_pipes, only_indent=False):
new_tokens = []
prev_token = None
first_col_width = 0
first_data_token = True
is_sep_after_first_data_token = False
align_continuation = self.align_new_line
for line in statement.lines:
prev_sep = False
line_length = 0
for index, token in enumerate(line):
if token.type == Token.SEPARATOR:
if prev_sep:
continue
prev_sep = True
if index == 0 and not self.is_keyword_inside_inline_if(statement):
token.value = self.formatting_config.indent * self.indent
elif not only_indent:
if prev_token and prev_token.type == Token.CONTINUATION:
if align_continuation:
token.value = first_col_width * " "
else:
token.value = self.formatting_config.continuation_indent
else:
token.value = self.formatting_config.separator
else:
prev_sep = False
if align_continuation:
if first_data_token:
first_col_width += max(len(token.value), 3) - 3 # remove ... token length
# Check if first line is not longer than allowed line length - we cant align over limit
align_continuation = align_continuation and first_col_width < self.allowed_line_length
first_data_token = False
elif not is_sep_after_first_data_token and token.type != Token.EOL:
is_sep_after_first_data_token = True
first_col_width += len(self.formatting_config.separator)
prev_token = token
if has_pipes and index == len(line) - 2:
token.value = token.value.rstrip()
line_length += len(token.value)
new_tokens.append(token)
statement.tokens = new_tokens
self.generic_visit(statement)
return statement | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/transformers/NormalizeSeparators.py | 0.789923 | 0.293962 | NormalizeSeparators.py | pypi |
import re
import string
from typing import Optional
from robot.api.parsing import Token
from robot.variables.search import VariableIterator
from robotidy.disablers import skip_if_disabled, skip_section_if_disabled
from robotidy.exceptions import InvalidParameterValueError
from robotidy.transformers import Transformer
from robotidy.transformers.run_keywords import get_run_keywords
from robotidy.utils import is_token_value_in_tokens, normalize_name, split_on_token_type, split_on_token_value
class RenameKeywords(Transformer):
"""
Enforce keyword naming.
Title Case is applied to keyword name and underscores are replaced by spaces.
You can keep underscores if you set remove_underscores to False:
```
robotidy --transform RenameKeywords -c RenameKeywords:remove_underscores=False .
```
It is also possible to configure `replace_pattern` parameter to find and replace regex pattern. Use `replace_to`
to set replacement value. This configuration (underscores are used instead of spaces):
```
robotidy --transform RenameKeywords -c RenameKeywords:replace_pattern=^(?i)rename\s?me$:replace_to=New_Shining_Name .
```
will transform following code:
```robotframework
*** Keywords ***
rename Me
Keyword Call
```
To:
```robotframework
*** Keywords ***
New Shining Name
Keyword Call
```
Use `ignore_library = True` parameter to control if the library name part (Library.Keyword) of keyword call
should be renamed.
"""
ENABLED = False
def __init__(
self,
replace_pattern: Optional[str] = None,
replace_to: Optional[str] = None,
remove_underscores: bool = True,
ignore_library: bool = True,
):
super().__init__()
self.ignore_library = ignore_library
self.remove_underscores = remove_underscores
self.replace_pattern = self.parse_pattern(replace_pattern)
self.replace_to = "" if replace_to is None else replace_to
self.run_keywords = get_run_keywords()
def parse_pattern(self, replace_pattern):
if replace_pattern is None:
return None
try:
return re.compile(replace_pattern)
except re.error as err:
raise InvalidParameterValueError(
self.__class__.__name__,
"replace_pattern",
replace_pattern,
f"It should be a valid regex expression. Regex error: '{err.msg}'",
)
def get_run_keyword(self, kw_name):
kw_norm = normalize_name(kw_name)
return self.run_keywords.get(kw_norm, None)
@skip_section_if_disabled
def visit_Section(self, node): # noqa
return self.generic_visit(node)
def rename_node(self, token, is_keyword_call):
if self.replace_pattern is not None:
new_value = self.rename_with_pattern(token.value, is_keyword_call=is_keyword_call)
else:
new_value = self.normalize_name(token.value, is_keyword_call=is_keyword_call)
new_value = new_value.strip()
if not new_value: # do not allow renaming that removes keywords altogether
return
token.value = new_value
def normalize_name(self, value, is_keyword_call):
var_found = False
parts = []
remaining = ""
for prefix, match, remaining in VariableIterator(value, ignore_errors=True):
var_found = True
# rename strips whitespace, so we need to preserve it if needed
if not prefix.strip() and parts:
parts.extend([" ", match])
else:
parts.extend([self.rename_part(prefix, is_keyword_call), match])
if var_found:
parts.append(self.rename_part(remaining, is_keyword_call))
return "".join(parts).strip()
return self.rename_part(value, is_keyword_call)
def rename_part(self, part: str, is_keyword_call: bool):
if is_keyword_call and self.ignore_library:
lib_name, *kw_name = part.rsplit(".", maxsplit=1)
if not kw_name:
return self.remove_underscores_and_capitalize(part)
return f"{lib_name}.{self.remove_underscores_and_capitalize(kw_name[0])}"
return ".".join([self.remove_underscores_and_capitalize(name_part) for name_part in part.split(".")])
def remove_underscores_and_capitalize(self, value: str):
if self.remove_underscores:
value = value.replace("_", " ")
value = re.sub(r" +", " ", value) # replace one or more spaces by one
words = []
split_words = value.split(" ")
# capitalize first letter of every word, leave rest untouched
for index, word in enumerate(split_words):
if not word:
if index in (0, len(split_words) - 1): # leading and trailing whitespace
words.append("")
else:
words.append(word[0].upper() + word[1:])
return " ".join(words)
def rename_with_pattern(self, value: str, is_keyword_call: bool):
lib_name = ""
if is_keyword_call and "." in value:
# rename only non lib part
found_lib = -1
for prefix, _, _ in VariableIterator(value):
found_lib = prefix.find(".")
break
if found_lib != -1:
lib_name = value[: found_lib + 1]
value = value[found_lib + 1 :]
else:
lib_name, value = value.split(".", maxsplit=1)
lib_name += "."
if lib_name and not self.ignore_library:
lib_name = self.remove_underscores_and_capitalize(lib_name)
return lib_name + self.remove_underscores_and_capitalize(
self.replace_pattern.sub(repl=self.replace_to, string=value)
)
@skip_if_disabled
def visit_KeywordName(self, node): # noqa
name_token = node.get_token(Token.KEYWORD_NAME)
if not name_token or not name_token.value:
return node
self.rename_node(name_token, is_keyword_call=False)
return node
@skip_if_disabled
def visit_KeywordCall(self, node): # noqa
name_token = node.get_token(Token.KEYWORD)
if not name_token or not name_token.value:
return node
# ignore assign, separators and comments
_, tokens = split_on_token_type(node.data_tokens, Token.KEYWORD)
self.parse_run_keyword(tokens)
return node
def parse_run_keyword(self, tokens):
if not tokens:
return
self.rename_node(tokens[0], is_keyword_call=True)
run_keyword = self.get_run_keyword(tokens[0].value)
if not run_keyword:
return
tokens = tokens[run_keyword.resolve :]
if run_keyword.branches:
if "ELSE IF" in run_keyword.branches:
while is_token_value_in_tokens("ELSE IF", tokens):
prefix, branch, tokens = split_on_token_value(tokens, "ELSE IF", 2)
self.parse_run_keyword(prefix)
if "ELSE" in run_keyword.branches and is_token_value_in_tokens("ELSE", tokens):
prefix, branch, tokens = split_on_token_value(tokens, "ELSE", 1)
self.parse_run_keyword(prefix)
self.parse_run_keyword(tokens)
return
elif run_keyword.split_on_and:
return self.split_on_and(tokens)
self.parse_run_keyword(tokens)
def split_on_and(self, tokens):
if not is_token_value_in_tokens("AND", tokens):
for token in tokens:
self.rename_node(token, is_keyword_call=True)
return
while is_token_value_in_tokens("AND", tokens):
prefix, branch, tokens = split_on_token_value(tokens, "AND", 1)
self.parse_run_keyword(prefix)
self.parse_run_keyword(tokens)
@skip_if_disabled
def visit_SuiteSetup(self, node): # noqa
if node.errors:
return node
self.parse_run_keyword(node.data_tokens[1:])
return node
visit_SuiteTeardown = visit_TestSetup = visit_TestTeardown = visit_SuiteSetup
@skip_if_disabled
def visit_Setup(self, node): # noqa
if node.errors:
return node
self.parse_run_keyword(node.data_tokens[1:])
return node
visit_Teardown = visit_Setup | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/transformers/RenameKeywords.py | 0.820469 | 0.65839 | RenameKeywords.py | pypi |
from robot.api.parsing import Comment, EmptyLine
try:
from robot.api.parsing import ReturnStatement
except ImportError:
ReturnStatement = None
from robotidy.disablers import skip_if_disabled, skip_section_if_disabled
from robotidy.transformers import Transformer
from robotidy.utils import (
after_last_dot,
create_statement_from_tokens,
normalize_name,
wrap_in_if_and_replace_statement,
)
class ReplaceReturns(Transformer):
"""
Replace return statements (such as [Return] setting or Return From Keyword keyword) with RETURN statement.
Following code:
```robotframework
*** Keywords ***
Keyword
Return From Keyword If $condition 2
Sub Keyword
[Return] 1
Keyword 2
Return From Keyword ${arg}
```
will be transformed to:
```robotframework
*** Keywords ***
Keyword
IF $condition
RETURN 2
END
Sub Keyword
RETURN 1
Keyword 2
RETURN ${arg}
```
"""
MIN_VERSION = 5
def __init__(self):
super().__init__()
self.return_statement = None
@skip_section_if_disabled
def visit_Section(self, node): # noqa
return self.generic_visit(node)
def visit_Keyword(self, node): # noqa
self.return_statement = None
node = self.generic_visit(node)
if self.return_statement:
skip_lines = []
indent = self.return_statement.tokens[0]
while node.body and isinstance(node.body[-1], (EmptyLine, Comment)):
skip_lines.append(node.body.pop())
return_stmt = create_statement_from_tokens(
statement=ReturnStatement, tokens=self.return_statement.tokens[2:], indent=indent
)
node.body.append(return_stmt)
node.body.extend(skip_lines)
return node
@skip_if_disabled
def visit_KeywordCall(self, node): # noqa
if not node.keyword or node.errors:
return node
normalized_name = after_last_dot(normalize_name(node.keyword))
if normalized_name == "returnfromkeyword":
return create_statement_from_tokens(
statement=ReturnStatement, tokens=node.tokens[2:], indent=node.tokens[0]
)
elif normalized_name == "returnfromkeywordif":
return wrap_in_if_and_replace_statement(node, ReturnStatement, self.formatting_config.separator)
return node
@skip_if_disabled
def visit_Return(self, node): # noqa
self.return_statement = node
@skip_if_disabled
def visit_Error(self, node): # noqa
"""Remove duplicate [Return]"""
for error in node.errors:
if "Setting 'Return' is allowed only once" in error:
return None
return node | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/transformers/ReplaceReturns.py | 0.805632 | 0.683439 | ReplaceReturns.py | pypi |
from robot.api.parsing import Token
from robotidy.disablers import skip_if_disabled, skip_section_if_disabled
from robotidy.skip import Skip
from robotidy.transformers import Transformer
class ReplaceEmptyValues(Transformer):
"""
Replace empty values with ``${EMPTY}`` variable.
Empty variables, lists or elements in the list can be defined in the following way:
```robotframework
*** Variables ***
${EMPTY_VALUE}
@{EMPTY_LIST}
&{EMPTY_DICT}
@{LIST_WITH_EMPTY}
... value
...
... value3
```
To be more explicit, this transformer replace such values with ``${EMPTY}`` variables:
```robotframework
*** Variables ***
${EMPTY_VALUE} ${EMPTY}
@{EMPTY_LIST} @{EMPTY}
&{EMPTY_DICT} &{EMPTY}
@{LIST_WITH_EMPTY}
... value
... ${EMPTY}
... value3
```
"""
HANDLES_SKIP = frozenset({"skip_sections"})
def __init__(self, skip: Skip = None):
super().__init__(skip)
@skip_section_if_disabled
def visit_VariableSection(self, node): # noqa
return self.generic_visit(node)
@skip_if_disabled
def visit_Variable(self, node): # noqa
if node.errors or not node.name:
return node
args = node.get_tokens(Token.ARGUMENT)
sep = Token(Token.SEPARATOR, self.formatting_config.separator)
new_line_sep = Token(Token.SEPARATOR, self.formatting_config.continuation_indent)
if args:
tokens = []
prev_token = None
for token in node.tokens:
if token.type == Token.ARGUMENT and not token.value:
if not prev_token or prev_token.type != Token.SEPARATOR:
tokens.append(new_line_sep)
tokens.append(Token(Token.ARGUMENT, "${EMPTY}"))
else:
if token.type == Token.EOL:
token.value = token.value.lstrip(" ")
tokens.append(token)
prev_token = token
else:
tokens = [node.tokens[0], sep, Token(Token.ARGUMENT, node.name[0] + "{EMPTY}"), *node.tokens[1:]]
node.tokens = tokens
return node | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/transformers/ReplaceEmptyValues.py | 0.757884 | 0.7666 | ReplaceEmptyValues.py | pypi |
from robot.api.parsing import Comment, EmptyLine, End, Token
try:
from robot.api.parsing import InlineIfHeader
except ImportError:
InlineIfHeader = None
from robotidy.disablers import skip_if_disabled, skip_section_if_disabled
from robotidy.skip import Skip
from robotidy.transformers import Transformer
class AddMissingEnd(Transformer):
"""
Add missing END token to FOR loops and IF statements.
Following code:
```robotframework
*** Keywords ***
Keyword
FOR ${x} IN foo bar
Log ${x}
```
will be transformed to:
```robotframework
*** Keywords ***
Keyword
FOR ${x} IN foo bar
Log ${x}
END
```
"""
HANDLES_SKIP = frozenset({"skip_sections"})
def __init__(self, skip: Skip = None):
super().__init__(skip)
def fix_block(self, node, expected_type):
self.generic_visit(node)
self.fix_header_name(node, expected_type)
outside = []
if not node.end: # fix statement position only if END was missing
node.body, outside = self.collect_inside_statements(node)
self.fix_end(node)
return (node, *outside)
@skip_section_if_disabled
def visit_Section(self, node): # noqa
return self.generic_visit(node)
@skip_if_disabled
def visit_For(self, node): # noqa
return self.fix_block(node, Token.FOR)
@skip_if_disabled
def visit_While(self, node): # noqa
return self.fix_block(node, Token.WHILE)
@skip_if_disabled
def visit_Try(self, node): # noqa
self.generic_visit(node)
if node.type != Token.TRY:
return node
self.fix_header_name(node, node.type)
outside = []
if not node.end: # fix statement position only if END was missing
node.body, outside = self.collect_inside_statements(node)
try_branch = self.get_last_except(node)
if try_branch:
try_branch.body, outside_try = self.collect_inside_statements(try_branch)
outside += outside_try
self.fix_end(node)
return (node, *outside)
@skip_if_disabled
def visit_If(self, node): # noqa
self.generic_visit(node)
if node.type != Token.IF:
return node
if InlineIfHeader and isinstance(node.header, InlineIfHeader):
self.fix_header_name(node, "IF")
return node
self.fix_header_name(node, node.type)
outside = []
if not node.end:
node.body, outside = self.collect_inside_statements(node)
or_else = self.get_last_or_else(node)
if or_else:
or_else.body, outside_or_else = self.collect_inside_statements(or_else)
outside += outside_or_else
self.fix_end(node)
return (node, *outside)
def fix_end(self, node):
"""Fix END (missing END, End -> END, END position should be the same as FOR etc)."""
if node.header.tokens[0].type == Token.SEPARATOR:
indent = node.header.tokens[0]
else:
indent = Token(Token.SEPARATOR, self.formatting_config.separator)
node.end = End([indent, Token(Token.END, Token.END), Token(Token.EOL)])
@staticmethod
def fix_header_name(node, header_name):
node.header.data_tokens[0].value = header_name
def collect_inside_statements(self, node):
"""Split statements from node for those that belong to it and outside nodes.
In this example with missing END:
FOR ${i} IN RANGE 10
Keyword
Other Keyword
RF will store 'Other Keyword' inside FOR block even if it should be outside.
"""
new_body = [[], []]
is_outside = False
starting_col = self.get_column(node)
for child in node.body:
if not isinstance(child, EmptyLine) and self.get_column(child) <= starting_col:
is_outside = True
new_body[is_outside].append(child)
while new_body[0] and isinstance(new_body[0][-1], EmptyLine):
new_body[1].insert(0, new_body[0].pop())
return new_body
@staticmethod
def get_column(node):
if hasattr(node, "header"):
return node.header.data_tokens[0].col_offset
if isinstance(node, Comment):
token = node.get_token(Token.COMMENT)
return token.col_offset
if not node.data_tokens:
return node.col_offset
return node.data_tokens[0].col_offset
@staticmethod
def get_last_or_else(node):
if not node.orelse:
return None
or_else = node.orelse
while or_else.orelse:
or_else = or_else.orelse
return or_else
@staticmethod
def get_last_except(node):
if not node.next:
return None
try_branch = node.next
while try_branch.next:
try_branch = try_branch.next
return try_branch | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/transformers/AddMissingEnd.py | 0.729134 | 0.715039 | AddMissingEnd.py | pypi |
import re
from pathlib import Path
from typing import Optional
from jinja2 import Template
from jinja2.exceptions import TemplateError
from robot.api.parsing import Documentation, ModelVisitor, Token
from robotidy.exceptions import InvalidParameterValueError
from robotidy.transformers import Transformer
GOOGLE_TEMPLATE = """ Short description.
{% if keyword.arguments|length > 0 %}
{{ formatting.cont_indent }}Args:
{%- for arg in keyword.arguments %}
{{ formatting.cont_indent }}{{ formatting.cont_indent }}{{ arg.name }}: <description>{% endfor %}
{% endif -%}
{% if keyword.returns|length > 0 %}
{{ formatting.cont_indent }}Returns:
{%- for value in keyword.returns %}
{{ formatting.cont_indent }}{{ formatting.cont_indent }}{{ value }}: <description>{% endfor %}
{% endif -%}
"""
class Argument:
def __init__(self, arg):
if "=" in arg:
self.name, self.default = arg.split("=", 1)
else:
self.name = arg
self.default = None
self.full_name = arg
def __str__(self):
return self.full_name
class KeywordData:
def __init__(self, name, arguments, returns):
self.name = name
self.arguments = arguments
self.returns = returns
class FormattingData:
def __init__(self, cont_indent, separator):
self.cont_indent = cont_indent
self.separator = separator
class ArgumentsAndReturnsVisitor(ModelVisitor):
def __init__(self):
self.arguments = []
self.returns = []
self.doc_exists = False
def visit_Keyword(self, node): # noqa
self.arguments = []
self.returns = []
# embedded variables
for variable in node.header.data_tokens[0].tokenize_variables():
if variable.type == Token.VARIABLE:
self.arguments.append(Argument(variable.value))
self.doc_exists = False
self.generic_visit(node)
def visit_Documentation(self, node): # noqa
self.doc_exists = True
def visit_Arguments(self, node): # noqa
if node.errors:
return
self.arguments = [Argument(arg) for arg in node.values]
def visit_ReturnStatement(self, node): # noqa
if node.errors:
return
self.returns = list(node.values)
visit_Return = visit_ReturnStatement
class GenerateDocumentation(Transformer):
"""
Generate keyword documentation with the documentation template.
By default, GenerateDocumentation uses Google documentation template.
Following keyword:
```robotframework
*** Keywords ***
Keyword
[Arguments] ${arg}
${var} ${var2} Step
RETURN ${var} ${var2}
```
will produce following documentation:
```robotframework
*** Keywords ***
Keyword
[Documentation]
...
... Arguments:
... ${arg}:
...
... Returns:
... ${var}
... ${var2}
[Arguments] ${arg}
${var} ${var2} Step
RETURN ${var} ${var2}
```
It is possible to create own template and insert dynamic text like keyword name, argument default values
or static text (like ``[Documentation] Documentation stub``). See our docs for more details.
Generated documentation will be affected by ``NormalizeSeparators`` transformer that's why it is best to
skip formatting documentation by this transformer:
```
> robotidy --configure GenerateDocumentation:enabled=True --configure NormalizeSeparators:skip_documentation=True src
```
"""
ENABLED = False
WHITESPACE_PATTERN = re.compile(r"(\s{2,}|\t)", re.UNICODE)
def __init__(self, overwrite: bool = False, doc_template: str = "google", template_directory: Optional[str] = None):
self.overwrite = overwrite
self.doc_template = self.load_template(doc_template, template_directory)
self.args_returns_finder = ArgumentsAndReturnsVisitor()
super().__init__()
def load_template(self, template: str, template_directory: Optional[str] = None) -> str:
try:
return Template(self.get_template(template, template_directory))
except TemplateError as err:
raise InvalidParameterValueError(
self.__class__.__name__,
"doc_template",
"template content",
f"Failed to load the template: {err}",
)
def get_template(self, template: str, template_directory: Optional[str] = None) -> str:
if template == "google":
return GOOGLE_TEMPLATE
template_path = Path(template)
if not template_path.is_file():
if not template_path.is_absolute() and template_directory is not None:
template_path = Path(template_directory) / template_path
if not template_path.is_file():
raise InvalidParameterValueError(
self.__class__.__name__,
"doc_template",
template,
"The template path does not exist or cannot be found.",
)
with open(template_path) as fp:
return fp.read()
def visit_Keyword(self, node): # noqa
self.args_returns_finder.visit(node)
if not self.overwrite and self.args_returns_finder.doc_exists:
return node
formatting = FormattingData(self.formatting_config.continuation_indent, self.formatting_config.separator)
kw_data = KeywordData(node.name, self.args_returns_finder.arguments, self.args_returns_finder.returns)
generated = self.doc_template.render(keyword=kw_data, formatting=formatting)
doc_node = self.create_documentation_from_string(generated)
if self.overwrite:
self.generic_visit(node) # remove existing [Documentation]
node.body.insert(0, doc_node)
return node
def visit_Documentation(self, node): # noqa
return None
def create_documentation_from_string(self, doc_string):
new_line = [Token(Token.EOL), Token(Token.SEPARATOR, self.formatting_config.indent), Token(Token.CONTINUATION)]
tokens = [
Token(Token.SEPARATOR, self.formatting_config.indent),
Token(Token.DOCUMENTATION, "[Documentation]"),
]
for index, line in enumerate(doc_string.splitlines()):
if index != 0:
tokens.extend(new_line)
for value in self.WHITESPACE_PATTERN.split(line):
if not value:
continue
if value.strip():
tokens.append(Token(Token.ARGUMENT, value))
else:
tokens.append(Token(Token.SEPARATOR, value))
tokens.append(Token(Token.EOL))
return Documentation(tokens) | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/transformers/GenerateDocumentation.py | 0.77768 | 0.37339 | GenerateDocumentation.py | pypi |
try:
from robot.api.parsing import InlineIfHeader, TryHeader
except ImportError:
InlineIfHeader, TryHeader = None, None
from robotidy.disablers import skip_if_disabled
from robotidy.skip import Skip
from robotidy.transformers.aligners_core import AlignKeywordsTestsSection
from robotidy.utils import is_suite_templated
class AlignTestCasesSection(AlignKeywordsTestsSection):
"""
Align ``*** Test Cases ***`` section to columns.
Align non-templated tests and settings into columns with predefined width. There are two possible alignment types
(configurable via ``alignment_type``):
- ``fixed`` (default): pad the tokens to the fixed width of the column
- ``auto``: pad the tokens to the width of the longest token in the column
Example output:
```robotframework
*** Test Cases ***
Test
${var} Create Resource ${argument} value
Assert value
Multi
... line
... args
```
Column widths can be configured via ``widths`` (default ``24``). It accepts comma separated list of column widths.
Tokens that are longer than width of the column go into "overflow" state. It's possible to decide in this
situation (by configuring ``handle_too_long``):
- ``overflow`` (default): align token to the next column
- ``compact_overflow``: try to fit next token between current (overflowed) token and next column
- ``ignore_rest``: ignore remaining tokens in the line
- ``ignore_line``: ignore whole line
It is possible to skip formatting on various types of the syntax (documentation, keyword calls with specific names
or settings).
"""
def __init__(
self,
widths: str = "",
alignment_type: str = "fixed",
handle_too_long: str = "overflow",
compact_overflow_limit: int = 2,
skip_documentation: str = "True", # noqa - override skip_documentation from Skip
skip: Skip = None,
):
super().__init__(widths, alignment_type, handle_too_long, compact_overflow_limit, skip)
def visit_File(self, node): # noqa
if is_suite_templated(node):
return node
return self.generic_visit(node)
@skip_if_disabled
def visit_TestCase(self, node): # noqa
self.create_auto_widths_for_context(node)
self.generic_visit(node)
self.remove_auto_widths_for_context()
return node
def visit_Keyword(self, node): # noqa
return node | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/transformers/AlignTestCasesSection.py | 0.856437 | 0.784236 | AlignTestCasesSection.py | pypi |
import re
import string
from typing import Optional
from robot.api.parsing import Token
from robotidy.disablers import skip_if_disabled, skip_section_if_disabled
from robotidy.exceptions import InvalidParameterValueError
from robotidy.transformers import Transformer
IGNORE_CHARS = {"(", "[", "{", "!", "?"}
def cap_string_until_succeed(word: str):
"""
Yield characters from the word and capitalize character until we are able to make char uppercase.
"""
capitalize = True
for char in word:
if capitalize:
# chars like numbers, -, dots, commas etc. will not change case, and we should not capitalize further
if char == char.upper() and char not in IGNORE_CHARS:
capitalize = False
else:
char = char.upper()
capitalize = not char.isupper()
yield char
def cap_word(word: str):
"""
Capitalize the word. The word can start with ( or contain ':
word -> Word
(word -> (Word
word's -> Word's
"""
if not word or any(c.isupper() for c in word): # ignore JIRA or sOme
return word
new_word = word.capitalize()
if new_word != word:
return new_word
return "".join(cap_string_until_succeed(word))
class RenameTestCases(Transformer):
r"""
Enforce test case naming.
Capitalize first letter of test case name, remove trailing dot and strip leading/trailing whitespace. If
capitalize_each_word is true, will capitalize each word in test case name.
It is also possible to configure `replace_pattern` parameter to find and replace regex pattern. Use `replace_to`
to set replacement value. This configuration:
```
robotidy --transform RenameTestCases -c RenameTestCases:replace_pattern=[A-Z]{3,}-\d{2,}:replace_to=foo
```
will transform following code:
```robotframework
*** Test Cases ***
test ABC-123
No Operation
```
To:
```robotframework
*** Test Cases ***
Test foo
No Operation
```
```
robotidy --transform RenameTestCases -c RenameTestCases:capitalize_each_word=True
```
will transform following code:
```robotframework
*** Test Cases ***
compare XML with json
No Operation
```
To:
```robotframework
*** Test Cases ***
Compare XML With Json
No Operation
```
"""
ENABLED = False
def __init__(
self,
replace_pattern: Optional[str] = None,
replace_to: Optional[str] = None,
capitalize_each_word: bool = False,
):
super().__init__()
try:
self.replace_pattern = re.compile(replace_pattern) if replace_pattern is not None else None
except re.error as err:
raise InvalidParameterValueError(
self.__class__.__name__,
"replace_pattern",
replace_pattern,
f"It should be a valid regex expression. Regex error: '{err.msg}'",
)
self.replace_to = "" if replace_to is None else replace_to
self.capitalize_each_word = capitalize_each_word
@skip_section_if_disabled
def visit_TestCaseSection(self, node): # noqa
return self.generic_visit(node)
@skip_if_disabled
def visit_TestCaseName(self, node): # noqa
token = node.get_token(Token.TESTCASE_NAME)
if token.value:
if self.capitalize_each_word:
value = token.value.strip()
token.value = " ".join(cap_word(word) for word in value.split(" "))
else:
token.value = token.value[0].upper() + token.value[1:]
if self.replace_pattern is not None:
token.value = self.replace_pattern.sub(repl=self.replace_to, string=token.value)
if token.value.endswith("."):
token.value = token.value[:-1]
token.value = token.value.strip()
return node | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/transformers/RenameTestCases.py | 0.844985 | 0.746647 | RenameTestCases.py | pypi |
import ast
from robot.api.parsing import Token
from robotidy.disablers import skip_section_if_disabled
from robotidy.exceptions import InvalidParameterValueError
from robotidy.transformers import Transformer
# TODO: preserve comments?
class RemoveEmptySettings(Transformer):
"""
Remove empty settings.
You can configure which settings are affected by parameter ``work_mode``. Possible values:
- overwrite_ok (default): does not remove settings that are overwriting suite settings (Test Setup,
Test Teardown, Test Template, Test Timeout or Default Tags)
- always : works on every settings
Empty settings that are overwriting suite settings will be converted to be more explicit
(given that there is related suite settings present):
```robotframework
*** Keywords ***
Keyword
No timeout
[Documentation] Empty timeout means no timeout even when Test Timeout has been used.
[Timeout]
```
To:
```robotframework
*** Keywords ***
No timeout
[Documentation] Disabling timeout with NONE works too and is more explicit.
[Timeout] NONE
```
You can disable that behavior by changing ``more_explicit`` parameter value to ``False``.
"""
def __init__(self, work_mode: str = "overwrite_ok", more_explicit: bool = True):
super().__init__()
if work_mode not in ("overwrite_ok", "always"):
raise InvalidParameterValueError(
self.__class__.__name__, "work_mode", work_mode, "Possible values:\n overwrite_ok\n always"
)
self.work_mode = work_mode
self.more_explicit = more_explicit
self.overwritten_settings = set()
self.child_types = {
Token.SETUP,
Token.TEARDOWN,
Token.TIMEOUT,
Token.TEMPLATE,
Token.TAGS,
}
@skip_section_if_disabled
def visit_Section(self, node): # noqa
return self.generic_visit(node)
def visit_Statement(self, node): # noqa
# when not setting type or setting type but not empty
if node.type not in Token.SETTING_TOKENS or len(node.data_tokens) != 1:
return node
if self.disablers.is_node_disabled(node):
return node
# when empty and not overwriting anything - remove
if (
node.type not in self.child_types
or self.work_mode == "always"
or node.type not in self.overwritten_settings
):
return None
if self.more_explicit:
indent = node.tokens[0].value if node.tokens[0].type == Token.SEPARATOR else ""
setting_token = node.data_tokens[0]
node.tokens = [
Token(Token.SEPARATOR, indent),
setting_token,
Token(Token.SEPARATOR, self.formatting_config.separator),
Token(Token.ARGUMENT, "NONE"),
Token(Token.EOL, "\n"),
]
return node
def visit_File(self, node): # noqa
if self.work_mode == "overwrite_ok":
self.overwritten_settings = self.find_overwritten_settings(node)
self.generic_visit(node)
self.overwritten_settings = set()
@staticmethod
def find_overwritten_settings(node):
auto_detector = FindSuiteSettings()
auto_detector.visit(node)
return auto_detector.suite_settings
class FindSuiteSettings(ast.NodeVisitor):
def __init__(self):
self.suite_settings = set()
def check_setting(self, node, overwritten_type):
if len(node.data_tokens) != 1:
self.suite_settings.add(overwritten_type)
def visit_TestSetup(self, node): # noqa
self.check_setting(node, Token.SETUP)
def visit_TestTeardown(self, node): # noqa
self.check_setting(node, Token.TEARDOWN)
def visit_TestTemplate(self, node): # noqa
self.check_setting(node, Token.TEMPLATE)
def visit_TestTimeout(self, node): # noqa
self.check_setting(node, Token.TIMEOUT)
def visit_DefaultTags(self, node): # noqa
self.check_setting(node, Token.TAGS) | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/transformers/RemoveEmptySettings.py | 0.416322 | 0.653887 | RemoveEmptySettings.py | pypi |
try:
from robot.api.parsing import InlineIfHeader, TryHeader
except ImportError:
InlineIfHeader, TryHeader = None, None
from robotidy.disablers import skip_if_disabled
from robotidy.skip import Skip
from robotidy.transformers.aligners_core import AlignKeywordsTestsSection
class AlignKeywordsSection(AlignKeywordsTestsSection):
"""
Align ``*** Keywords ***`` section to columns.
Align keyword calls and settings into columns with predefined width. There are two possible alignment types
(configurable via ``alignment_type``):
- ``fixed`` (default): pad the tokens to the fixed width of the column
- ``auto``: pad the tokens to the width of the longest token in the column
Example output:
```robotframework
*** Keywords ***
Keyword
${var} Create Resource ${argument} value
Assert value
Multi
... line
... args
```
Column widths can be configured via ``widths`` (default ``24``). It accepts comma separated list of column widths.
Tokens that are longer than width of the column go into "overflow" state. It's possible to decide in this
situation (by configuring ``handle_too_long``):
- ``overflow`` (default): align token to the next column
- ``compact_overflow``: try to fit next token between current (overflowed) token and the next column
- ``ignore_rest``: ignore remaining tokens in the line
- ``ignore_line``: ignore whole line
It is possible to skip formatting on various types of the syntax (documentation, keyword calls with specific names
or settings).
"""
def __init__(
self,
widths: str = "",
alignment_type: str = "fixed",
handle_too_long: str = "overflow",
compact_overflow_limit: int = 2,
skip_documentation: str = "True", # noqa - override skip_documentation from Skip
skip: Skip = None,
):
super().__init__(widths, alignment_type, handle_too_long, compact_overflow_limit, skip)
@skip_if_disabled
def visit_Keyword(self, node): # noqa
self.create_auto_widths_for_context(node)
self.generic_visit(node)
self.remove_auto_widths_for_context()
return node
def visit_TestCase(self, node): # noqa
return node | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/transformers/AlignKeywordsSection.py | 0.863248 | 0.779951 | AlignKeywordsSection.py | pypi |
from robot.api.parsing import ElseHeader, ElseIfHeader, End, If, IfHeader, KeywordCall, Token
from robotidy.disablers import skip_if_disabled, skip_section_if_disabled
from robotidy.transformers import Transformer
from robotidy.utils import after_last_dot, is_var, normalize_name
def insert_separators(indent, tokens, separator):
yield Token(Token.SEPARATOR, indent)
for token in tokens[:-1]:
yield token
yield Token(Token.SEPARATOR, separator)
yield tokens[-1]
yield Token(Token.EOL)
class ReplaceRunKeywordIf(Transformer):
"""
Replace ``Run Keyword If`` keyword calls with IF expressions.
Following code:
```robotframework
*** Keywords ***
Keyword
Run Keyword If ${condition}
... Keyword ${arg}
... ELSE IF ${condition2} Keyword2
... ELSE Keyword3
```
Will be transformed to:
```robotframework
*** Keywords ***
Keyword
IF ${condition}
Keyword ${arg}
ELSE IF ${condition2}
Keyword2
ELSE
Keyword3
END
```
Any return value will be applied to every ``ELSE``/``ELSE IF`` branch:
```robotframework
*** Keywords ***
Keyword
${var} Run Keyword If ${condition} Keyword ELSE Keyword2
```
Output:
```robotframework
*** Keywords ***
Keyword
IF ${condition}
${var} Keyword
ELSE
${var} Keyword2
END
```
Run Keywords inside Run Keyword If will be split into separate keywords:
```robotframework
*** Keywords ***
Keyword
Run Keyword If ${condition} Run Keywords Keyword ${arg} AND Keyword2
```
Output:
```robotframework
*** Keywords ***
Keyword
IF ${condition}
Keyword ${arg}
Keyword2
END
```
"""
@skip_section_if_disabled
def visit_Section(self, node): # noqa
return self.generic_visit(node)
@skip_if_disabled
def visit_KeywordCall(self, node): # noqa
if not node.keyword:
return node
if after_last_dot(normalize_name(node.keyword)) == "runkeywordif":
return self.create_branched(node)
return node
def create_branched(self, node):
separator = node.tokens[0]
assign = node.get_tokens(Token.ASSIGN)
raw_args = node.get_tokens(Token.ARGUMENT)
if len(raw_args) < 2:
return node
end = End([separator, Token(Token.END), Token(Token.EOL)])
prev_if = None
for branch in reversed(list(self.split_args_on_delimiters(raw_args, ("ELSE", "ELSE IF"), assign=assign))):
if branch[0].value == "ELSE":
if len(branch) < 2:
return node
args = branch[1:]
if self.check_for_useless_set_variable(args, assign):
continue
header = ElseHeader([separator, Token(Token.ELSE), Token(Token.EOL)])
elif branch[0].value == "ELSE IF":
if len(branch) < 3:
return node
header = ElseIfHeader(
[
separator,
Token(Token.ELSE_IF),
Token(Token.SEPARATOR, self.formatting_config.separator),
branch[1],
Token(Token.EOL),
]
)
args = branch[2:]
else:
if len(branch) < 2:
return node
header = IfHeader(
[
separator,
Token(Token.IF),
Token(Token.SEPARATOR, self.formatting_config.separator),
branch[0],
Token(Token.EOL),
]
)
args = branch[1:]
keywords = self.create_keywords(args, assign, separator.value + self.formatting_config.indent)
if_block = If(header=header, body=keywords, orelse=prev_if)
prev_if = if_block
prev_if.end = end
return prev_if
def create_keywords(self, arg_tokens, assign, indent):
keyword_name = normalize_name(arg_tokens[0].value)
if keyword_name == "runkeywords":
return [
self.args_to_keyword(keyword[1:], assign, indent)
for keyword in self.split_args_on_delimiters(arg_tokens, ("AND",))
]
elif is_var(keyword_name):
keyword_token = Token(Token.KEYWORD_NAME, "Run Keyword")
arg_tokens = [keyword_token] + arg_tokens
return [self.args_to_keyword(arg_tokens, assign, indent)]
def args_to_keyword(self, arg_tokens, assign, indent):
separated_tokens = list(
insert_separators(
indent,
[*assign, Token(Token.KEYWORD, arg_tokens[0].value), *arg_tokens[1:]],
self.formatting_config.separator,
)
)
return KeywordCall.from_tokens(separated_tokens)
@staticmethod
def split_args_on_delimiters(args, delimiters, assign=None):
split_points = [index for index, arg in enumerate(args) if arg.value in delimiters]
prev_index = 0
for split_point in split_points:
yield args[prev_index:split_point]
prev_index = split_point
yield args[prev_index : len(args)]
if assign and "ELSE" in delimiters and not any(arg.value == "ELSE" for arg in args):
values = [Token(Token.ARGUMENT, "${None}")] * len(assign)
yield [Token(Token.ELSE), Token(Token.ARGUMENT, "Set Variable"), *values]
@staticmethod
def check_for_useless_set_variable(tokens, assign):
if not assign or normalize_name(tokens[0].value) != "setvariable" or len(tokens[1:]) != len(assign):
return False
for var, var_assign in zip(tokens[1:], assign):
if normalize_name(var.value) != normalize_name(var_assign.value):
return False
return True | /robotframework-tidy-4.5.0.tar.gz/robotframework-tidy-4.5.0/robotidy/transformers/ReplaceRunKeywordIf.py | 0.779196 | 0.623835 | ReplaceRunKeywordIf.py | pypi |
from robotlibcore import DynamicCore, keyword
from robot.errors import DataError
from robot.utils import timestr_to_secs, secs_to_timestr, is_truthy
from robot.api import logger
from timeit import default_timer as timer
from .version import VERSION
__version__ = VERSION
def html_row(status, benchmark_name, lower_than, difference, higher_than):
difference = secs_to_timestr(ms_to_s(difference))
lower_than = secs_to_timestr(ms_to_s(lower_than))
higher_than = secs_to_timestr(ms_to_s(higher_than))
return '<tr class="{}"><td>{}</td><td>{}</td><td>{}</td><td>{}</td></tr>'.format(status, benchmark_name, lower_than, difference, higher_than)
def timestr_to_millisecs(timestr):
return int(timestr_to_secs(timestr) * 1000)
def ms_to_s(ms):
return ms / 1000.0
def _is_within_range(difference, lower_than, higher_than):
return difference <= lower_than and difference >= higher_than
def timer_done(timer):
return None not in [timer['start'], timer['stop'], timer['lower_than']]
def assert_string(benchmark_name, difference, lower_than, higher_than):
difference = secs_to_timestr(ms_to_s(difference))
lower_than = secs_to_timestr(ms_to_s(lower_than))
higher_than = secs_to_timestr(ms_to_s(higher_than))
return 'Difference ({}) in ”{}" is not in between {} and {}'.format(difference, benchmark_name, lower_than, higher_than)
class Timer(DynamicCore):
""" Timer is small utility library that allows measuring the x amount of events within a single suite without the need to implement timing information into a separate scripts via robot.result api's.
Library allows multiple timers to be ongoing at any time by providing a benchmark a name or just a single benchmark if no name is given.
Each single timer can then be verified if its duration was within a given range or just lower than what was expected or all timers can be verified in one go if they where configured properly.
"""
ROBOT_LIBRARY_SCOPE = 'GLOBAL'
ROBOT_LIBRARY_VERSION = __version__
def __init__(self):
self.benchmarks = {}
DynamicCore.__init__(self, [])
@keyword
def start_timer(self, benchmark_name='default'):
"""
Starts a single timer
Parameters:
- ``benchmark_name`` Name of the benchmark, optional. Defaults to 'default'
Example:
| Start Timer | mytimer |
"""
logger.trace("Timer::start_timer({})".format(benchmark_name))
# TODO: Maybe issue a warning when overwriting existing timers ?
if benchmark_name in self.benchmarks:
self.benchmarks[benchmark_name]['start'] = timer()
self.benchmarks[benchmark_name]['stop'] = None
else:
self.benchmarks[benchmark_name] = {'start': timer(), 'stop': None, 'lower_than': None, 'higher_than': 0}
@keyword
def stop_timer(self, benchmark_name='default'):
logger.trace("Timer::stop_timer({})".format(benchmark_name))
"""
Stops a single timer
Parameters:
- ``benchmark_name`` Name of the benchmark, optional. Defaults to 'default'
Example:
| Stop Timer | mytimer |
"""
if benchmark_name not in self.benchmarks:
raise DataError('Benchmark "%s" not started.' % benchmark_name)
self.benchmarks[benchmark_name]['stop'] = timer()
@keyword
def configure_timer(self, lower_than, higher_than=0, benchmark_name='default'):
"""
Configures/creates a single timer so that it can be verified later on.
Parameters:
- ``lower_than`` Timestr value to check if the timer's total execution time is lower than.
- ``higher_than`` Timestr value to check if the timer's minimum value is higher than this, optional. Defaults to '0'
- ``benchmark_name`` Name of the benchmark, optional. Defaults to 'default'
Example:
This will create a timer by name "anothertimer" that can then be checked that it lasted at least 5 seconds but not more than 10.
| Configure Timer | 10 seconds | 5 seconds | anothertimer |
"""
logger.trace("Timer::configure_timer({},{}, {})".format(lower_than, higher_than, benchmark_name))
if benchmark_name not in self.benchmarks:
self.benchmarks[benchmark_name] = {'start': None, 'stop': None, 'lower_than': None, 'higher_than': 0}
self.benchmarks[benchmark_name]['lower_than'] = timestr_to_millisecs(lower_than)
self.benchmarks[benchmark_name]['higher_than'] = timestr_to_millisecs(higher_than)
@keyword
def verify_all_timers(self, fail_on_errors=True):
"""
Verifies all timers within a testsuite. Timer's must be done, eg `Start Timer` and `Stop Timer` keywords must have been called for it and it has to have been configured with `Configure Timer` keyword and lower_than parameter.
Keyword will also write a html table into the logs that shows all finished timers and their status.
Parameters:
- ``fail_on_errors`` Should we throw an error if any timers are not within given ranges. Defaults to True
Example:
| Verify All Timers | fail_on_errors=False |
"""
logger.trace("Timer::verify_all_timers({})".format(fail_on_errors))
failures = []
fail_on_errors = is_truthy(fail_on_errors)
html = ['<table class="statistics"><tr><th>Timer</th><th>Lower than</th><th>Execution Time</th><th>Higher Than</th></tr>']
for item in filter(lambda timer: timer_done(timer[1]), self.benchmarks.items()):
benchmark_name = item[0]
benchmark_data = item[1]
difference = int((benchmark_data['stop'] - benchmark_data['start']) * 1000)
lower_than = benchmark_data['lower_than']
higher_than = benchmark_data['higher_than']
if not _is_within_range(difference, lower_than, higher_than):
html.append(html_row("fail", benchmark_name, lower_than, difference, higher_than))
failures.append(assert_string(benchmark_name, difference, lower_than, higher_than))
else:
html.append(html_row("pass", benchmark_name, lower_than, difference, higher_than))
html.append("</table")
logger.info("".join(html), html=True)
if failures:
if fail_on_errors:
raise AssertionError("\n".join(failures))
else:
logger.warn("\n".join(failures))
if failures:
return False
return True
@keyword
def verify_single_timer(self, lower_than, higher_than=0, benchmark_name='default'):
"""
Verifies a single timer. Will call `Configure Timer` with same parameters if timer has been succesfully stopped.
Parameters:
- ``lower_than`` Timestr value to check if the timer's total execution time is lower than.
- ``higher_than`` Timestr value to check if the timer's minimum value is higher than this, optional. Defaults to '0'
- ``benchmark_name`` Name of the benchmark, optional. Defaults to 'default'
Example:
| `Start Timer` | yetananother | |
| Sleep | 5 Seconds | |
| `Stop Timer` | yetananother | |
| `Verify Single Timer` | 4 Seconds | benchmarkname=yetananother |
"""
logger.trace("Timer::verify_single_timer({},{},{})".format(lower_than, higher_than, benchmark_name))
if benchmark_name not in self.benchmarks:
raise DataError('Benchmark "%s" not started.' % benchmark_name)
self.configure_timer(lower_than, higher_than, benchmark_name)
benchmark_data = self.benchmarks[benchmark_name]
if not benchmark_data['stop']:
raise DataError('Benchmark "%s" not finished.' % benchmark_name)
difference = int((benchmark_data['stop'] - benchmark_data['start']) * 1000)
lower_than = benchmark_data['lower_than']
higher_than = benchmark_data['higher_than']
if not _is_within_range(difference, lower_than, higher_than):
raise AssertionError(assert_string(benchmark_name, difference, lower_than, higher_than))
return True
@keyword
def remove_all_timers(self):
"""
Removes all timers that have been configured, started or stopped. This is useful in case you want to produce per suite or per test case reports as then you can remove all timers in corresponding teardowns.
"""
logger.trace("Timer::remove_all_timers()")
self.benchmarks = {}
@keyword
def remove_single_timer(self, benchmark_name='default'):
"""
Removes a single timer data
Parameters:
- ``benchmark_name`` Name of the benchmark, optional. Defaults to 'default'
Example:
| Remove Single Timer | yetananothertimer |
"""
logger.trace("Timer::remove_single_timer({})".format(benchmark_name))
if benchmark_name in self.benchmarks:
del self.benchmarks[benchmark_name] | /robotframework-timer-0.0.6.tar.gz/robotframework-timer-0.0.6/src/Timer/__init__.py | 0.813572 | 0.357287 | __init__.py | pypi |
Robot Framework Tools
=====================
[Python](http://python.org) Tools for [Robot Framework](
http://robotframework.org) and Test Libraries.
* A [`testlibrary`][1] framework for creating Dynamic Test Libraries.
* A [`ContextHandler`][1.1] framework for `testlibrary`
to create switchable sets of different Keyword implementations.
* A [`SessionHandler`][1.2] framework for `testlibrary`
to auto-generate Keywords for session management.
* A [`TestLibraryInspector`][2].
* An interactive [`TestRobot`][3].
* A [`RemoteRobot`][4], combining `TestRobot`
with external [`RobotRemoteServer`](
https://pypi.python.org/pypi/robotremoteserver)
* A [`ToolsLibrary`][5],
accompanying Robot Framework's standard Test Libraries.
* A [`robotshell`][6] extension for [IPython](http://ipython.org).
# 0. Setup
----------
Supported __Python__ versions: __2.7.x__, __3.3.x__ and later
Package name: __robotframework-tools__
Package extra features:
* __[remote]__: `RemoteRobot`
* __[robotshell]__
### Requirements
* [`six>=1.9`](https://pypi.python.org/pypi/six)
* [`path.py>=7.0`](https://pypi.python.org/pypi/path.py)
* [`moretools>=0.1.5`](https://pypi.python.org/pypi/moretools)
* [`robotframework>=2.8`](https://pypi.python.org/pypi/robotframework)
* __Python 3.x__: [`robotframework-python3>=2.8.3`](
https://pypi.python.org/pypi/robotframework-python3)
Extra requirements for __[remote]__:
* [`robotremoteserver`](https://pypi.python.org/pypi/robotremoteserver)
Extra requirements for __[robotshell]__:
* [`ipython>=3.0`](https://pypi.python.org/pypi/ipython)
### Installation
python setup.py install
Or with [pip](http://www.pip-installer.org):
pip install .
Or from [PyPI](https://pypi.python.org/pypi/robotframework-tools):
pip install robotframework-tools
* With all extra features:
pip install robotframework-tools[remote,robotshell]
# 1. Creating Dynamic Test Libraries
------------------------------------
[1]: #markdown-header-1-creating-dynamic-test-libraries
from robottools import testlibrary
TestLibrary = testlibrary()
Defined in a module also called `TestLibrary`,
this generated Dynamic `TestLibrary` type
could now directly be imported in Robot Framework.
It features all the required methods:
* `get_keyword_names`
* `get_keyword_arguments`
* `get_keyword_documentation`
* `run_keyword`
### Keywords
The `TestLibrary` has no Keywords so far...
To add some just use the `TestLibrary.keyword` decorator:
@TestLibrary.keyword
def some_keyword(self, arg, *rest):
...
A keyword function can be defined anywhere in any scope.
The `TestLibrary.keyword` decorator
always links it to the `TestLibrary`
(but always returns the original function object).
And when called as a Keyword from Robot Framework
the `self` parameter will always get the `TestLibrary` instance.
You may want to define your keyword methods
at your Test Library class scope.
Just derive your actual Dynamic Test Library class from `TestLibrary`:
# SomeLibrary.py
class SomeLibrary(TestLibrary):
def no_keyword(self, ...):
...
@TestLibrary.keyword
def some_other_keyword(self, arg, *rest):
...
To get a simple interactive `SomeLibrary` overview just instantiate it:
In : lib = SomeLibrary()
You can inspect all Keywords in Robot CamelCase style
(and call them for testing):
In : lib.SomeKeyword
Out: SomeLibrary.Some Keyword [ arg | *rest ]
By default the Keyword names and argument lists are auto-generated
from the function definition.
You can override that:
@TestLibrary.keyword(name='KEYword N@me', args=['f|r$t', 'se[ond', ...])
def function(self, *args):
...
### Keyword Options
When you apply custom decorators to your Keyword functions
which don't return the original function objects,
you would have to take care of preserving the original argspec for Robot.
`testlibrary` can handle this for you:
def some_decorator(func):
def wrapper(...):
return func(...)
# You still have to take care of the function(-->Keyword) name:
wrapper.__name__ = func.__name__
return wrapper
TestLibrary = testlibrary(
register_keyword_options=[
# Either just:
some_decorator,
# Or with some other name:
('some_option', some_decorator),
],
)
@TestLibrary.keyword.some_option
def some_keyword_with_options(self, arg, *rest):
...
There are predefined options. Currently:
* `unicode_to_str` - Convert all `unicode` values (pybot's default) to `str`.
You can specify `default_keyword_options` that will always be applied:
TestLibrary = testlibrary(
register_keyword_options=[
('some_option', some_decorator),
],
default_keyword_options=[
'unicode_to_str',
'some_option',
)
To bypass the `default_keyword_options` for single Keywords:
@TestLibrary.keyword.no_options
def some_keyword_without_options(self, arg, *rest):
...
@TestLibrary.keyword.reset_options.some_option
def some_keyword_without_default_options(self, arg, *rest):
...
## 1.1 Adding switchable Keyword contexts
-----------------------------------------
[1.1]: #markdown-header-11-adding-switchable-keyword-contexts
from robottools import ContextHandler
TODO...
## 1.2 Adding session management
--------------------------------
[1.2]: #markdown-header-12-adding-session-management
from robottools import SessionHandler
Whenever your Test Library needs to deal with sessions,
like network connections,
which you want to open, switch, close,
and when you don't always want to specify
the actual session to use as a Keyword argument,
just do:
class SomeConnection(SessionHandler):
# All methods starting with `open`
# will be turned into session opener Keywords.
# `self` will get the Test Library instance.
def open(self, host, *args):
return internal_connection_handler(host)
def open_in_a_different_way(self, host):
return ...
TestLibrary = testlibrary(
session_handlers=[SomeConnection],
)
The following Keywords will be generated:
* `TestLibrary.Open Some Connection [ host | *args ]`
* `TestLibrary.Open Named Some Connection [ alias | host | *args ]`
* `TestLibrary.Open Some Connection In A Different Way [ host ]`
* `TestLibrary.Open Named Some Connection In A Different Way [ alias | host ]`
* `TestLibrary.Swith Some Connection [ alias ]`
* `TestLibrary.Close Some Connection [ ]`
You can access the currently active session instance,
as returned from an opener Keyword,
with an auto-generated property:
@TestLibrary.keyword
def some_keyword(self):
self.some_connection.do_something()
If there is no active session,
a `TestLibrary.SomeConnectionError` will be raised.
`Close Some Connection` will only release all references
to the stored session object.
To add custom logic just add a `close` method to your `SessionHandler`:
class SomeConnection(SessionHandler):
...
def close(self, connection):
# `self` will get the Test Library instance.
...
# 2. Inspecting Test Libraries
------------------------------
[2]: #markdown-header-2-inspecting-test-libraries
from robottools import TestLibraryInspector
Now you can load any Test Library in two ways:
builtin = TestLibraryInspector('BuiltIn')
oslib = TestLibraryInspector.OperatingSystem
TODO...
# 3. Using Robot Framework interactively
----------------------------------------
[3]: #markdown-header-3-using-robot-framework-interactively
from robottools import TestRobot
test = TestRobot('Test')
The `TestRobot` basically uses the same Robot Framework internals
for loading Test Libraries and running Keywords
as `pybot` and its alternatives,
so you can expect the same behavior from your Keywords.
All functionalitiy is exposed in CamelCase:
test.Import('SomeLibrary')
TODO...
# 4. Using Robot Framework remotely
-----------------------------------
[4]: #markdown-header-4-using-robot-framework-remotely
from robottools.remote import RemoteRobot
`RemoteRobot` is derived from `robottools.TestRobot`
and external `robotremoteserver.RobotRemoteServer`,
which is derived from Python's `SimpleXMLRPCServer`.
The `__init__()` method shares most of its basic arguments
with `RobotRemoteServer`:
def __init__(
self, libraries, host='127.0.0.1', port=8270, port_file=None,
allow_stop=True, allow_import=None,
register_keywords=True, introspection=True,
):
...
The differences:
* Instead of a single pre-initialized Test Library instance,
you can provide a sequence of multiple Test Library names,
which will be imported and initialized using `TestRobot.Import()`.
* The additional argument `allow_import`
takes a sequence of Test Library names,
which can later be imported remotely
via the `Import Remote Library` Keyword described below.
* `RemoteRobot` also directly registers Keywords as remote methods
(`RobotRemoteServer` only registers a __Dynamic Library API__).
You can change this by setting `register_keywords=False`.
* `RemoteRobot` calls `SimpleXMLRPCServer.register_introspection_functions()`.
You can change this by setting `introspection=False`.
Once initialized the `RemoteRobot` will immediately start its service.
You can connect with any XML-RPC client
like Python's `xmlrpc.client.ServerProxy`
(__Python 2.7__: `xmlrpclib.ServerProxy`).
To access the `RemoteRobot` from your Test Scripts,
you can use Robot Framework's standard `Remote` Library.
Once connected it will provide all the Keywords from the Test Libraries
imported by the `RemoteRobot`.
Besides `RobotRemoteServer`'s additional `Stop Remote Server` Keyword
`RemoteRobot` further provides these extra Keywords:
* `Import Remote Library [ name ]`
>Remotely import the Test Library with given `name`.
>
>Does the same remotely as `BuiltIn.Import Library` does locally.
>The Test Library must be allowed on server side.
>
>The `Remote` client library must be reloaded
>to make the new Keywords accessible.
>This can be done with `ToolsLibrary.Reload Library`.
# 5. Using the ToolsLibrary
---------------------------
[5]: #markdown-header-5-using-the-toolslibrary
The `ToolsLibrary` is a Dynamic Test Library,
which provides these additional general purpose Keywords:
* `Reload Library [ name | *args]`
>Reload an already imported Test Library
>with given `name` and optional `args`.
>
>This also leads to a reload of the Test Library Keywords,
>which allows Test Libraries to dynamically extend or change them.
The `ToolsLibrary` is based on `robottools.testlibrary`.
To use it directly in __Python__:
from ToolsLibrary import ToolsLibrary
tools = ToolsLibrary()
Then you can call the Keywords in `tools.CamelCase(...)` style.
# 6. Using IPython as a Robot Framework shell
---------------------------------------------
[6]: #markdown-header-6-using-ipython-as-a-robot-framework-shell
In : %load_ext robotshell
Now all the `robottools.TestRobot` functionality
is exposed as IPython `%magic` functions...
[Robot.Default]
In : %Import SomeLibrary
Out: [Library] SomeLibrary
As with a `robottools.TestRobot` you can call Keywords
with or without the Test Library prefix.
You can simply assign the return values to normal Python variables.
And there are two ways of separating the arguments:
[Robot.Default]
In : ret = %SomeKeyword value ...
[TRACE] Arguments: [ 'value', '...' ]
[TRACE] Return: ...
[Robot.Default]
In : ret = %SomeLibrary.SomeOtherKeyword | with some value | ...
[TRACE] Arguments: [ 'with some value', '...' ]
[TRACE] Return: ...
You can create new `Robot`s and switch between them:
[Robot.Default]
In : %Robot Test
Out: [Robot] Test
[Robot.Test]
In : %Robot.Default
Out: [Robot] Default
[Robot.Default]
In :
If a Keyword fails the traceback is just printed like in a Robot Log.
If it fails unexpectedly you may want to debug it.
Just turn on `%robot_debug` mode
and the Keyword's exception will be re-raised.
Combine it with IPython's automatic `%pdb` mode
and you'll get a nice Test Library debugging environment.
### Variables
Robot Framework uses `${...}` and `@{...}` syntax for accessing variables.
In `%magic` function call parameters
IPython already substitutes Python variables inside `{...}`
with their `str()` conversion.
This conflicts with Robot variable syntax.
To access a Robot variable you need to use double braces:
%Keyword ${{var}}
Or to expand a list variable:
%Keyword @{{listvar}}
This way you can also pass Python variables directly to a Robot Keyword.
If the `Robot` can't find the variable in its own dictionary,
lookup is first extended to IPython's `user_ns` (shell level)
and finally to Python's `builtins`.
| /robotframework-tools-0.1a134.zip/robotframework-tools-0.1a134/README.md | 0.880887 | 0.700714 | README.md | pypi |
# robotframework-tools
```
import robottools
print(robottools.__version__)
print(robottools.__description__)
```
* `testlibrary()` [creates Dynamic Test Libraries][1]
* A [`ContextHandler`][1.1] framework for `testlibrary`
to create switchable sets of different Keyword implementations.
* A [`SessionHandler`][1.2] framework for `testlibrary`
to auto-generate Keywords for session management.
* A [`TestLibraryInspector`][2].
* A [`RemoteRobot`][4], combining `TestRobot`
with external [`RobotRemoteServer`](
https://pypi.python.org/pypi/robotremoteserver)
* A [`ToolsLibrary`][5],
accompanying Robot Framework's standard Test Libraries.
* A [`robotshell`][6] extension for [IPython](http://ipython.org).
[1]: #1.-Creating-Dynamic-Test-Libraries
<https://bitbucket.org/userzimmermann/robotframework-tools>
<https://github.com/userzimmermann/robotframework-tools>
# 0. Setup
__Supported Python versions__:
[2.7](http://docs.python.org/2.7),
[3.3](http://docs.python.org/3.3),
[3.4](http://docs.python.org/3.4)
Just install the latest release
from [PyPI](https://pypi.python.org/pypi/robotframework-tools)
with [pip](http://www.pip-installer.org):
```
# !pip install robotframework-tools
```
or from [Binstar](https://binstar.org/userzimmermann/robotframework-tools)
with [conda](http://conda.pydata.org):
```
# !conda install -c userzimmermann robotframework-tools
```
Both automatically install requirements:
```
robottools.__requires__
```
* __Python 2.7__: `robotframework>=2.8`
* __Python 3.x__: `robotframework-python3>=2.8.4`
`RemoteRobot` and `robotshell` have extra requirements:
```
robottools.__extras__
```
Pip doesn't install them by default.
Just append any comma separated extra tags in `[]` brackets to the package name.
To install with all extra requirements:
```
# !pip install robotframework-tools[all]
```
This `README.ipynb` will also be installed. Just copy it:
```
# robottools.__notebook__.copy('path/name.ipynb')
```
# 1. Creating Dynamic Test Libraries
```
from robottools import testlibrary
TestLibrary = testlibrary()
```
This generated Dynamic `TestLibrary` class
could now directly be imported in Robot Framework.
It features all the Dynamic API methods:
* `get_keyword_names`
* `get_keyword_arguments`
* `get_keyword_documentation`
* `run_keyword`
### Keywords
The `TestLibrary` has no Keywords so far...
To add some just use the `TestLibrary.keyword` decorator:
```
@TestLibrary.keyword
def some_keyword(self, arg, *rest):
pass
```
A keyword function can be defined anywhere in any scope.
The `TestLibrary.keyword` decorator
always links it to the `TestLibrary`
(but always returns the original function object).
And when called as a Keyword from Robot Framework
the `self` parameter will always get the `TestLibrary` instance.
You may want to define your keyword methods
at your Test Library class scope.
Just derive your actual Dynamic Test Library class from `TestLibrary`:
```
class SomeLibrary(TestLibrary):
def no_keyword(self, *args):
pass
@TestLibrary.keyword
def some_other_keyword(self, *args):
pass
```
To get a simple interactive `SomeLibrary` overview just instantiate it:
```
lib = SomeLibrary()
```
You can inspect all Keywords in Robot CamelCase style
(and call them for testing):
```
lib.SomeKeyword
```
By default the Keyword names and argument lists are auto-generated
from the function definition.
You can override that:
```
@TestLibrary.keyword(name='KEYword N@me', args=['f|r$t', 'se[ond'])
def function(self, *args):
pass
```
### Keyword Options
When you apply custom decorators to your Keyword functions
which don't return the original function objects,
you would have to take care of preserving the original argspec for Robot.
`testlibrary` can handle this for you:
```
def some_decorator(func):
def wrapper(self, *args):
return func(self, *args)
# You still have to take care of the function(-->Keyword) name:
wrapper.__name__ = func.__name__
return wrapper
TestLibrary = testlibrary(
register_keyword_options=[
# Either just:
some_decorator,
# Or with some other name:
('some_option', some_decorator),
],
)
@TestLibrary.keyword.some_option
def some_keyword_with_options(self, arg, *rest):
pass
```
There are predefined options. Currently:
* `unicode_to_str` - Convert all `unicode` values (pybot's default) to `str`.
You can specify `default_keyword_options` that will always be applied:
```
TestLibrary = testlibrary(
register_keyword_options=[
('some_option', some_decorator),
],
default_keyword_options=[
'unicode_to_str',
'some_option',
],
)
```
To bypass the `default_keyword_options` for single Keywords:
```
@TestLibrary.keyword.no_options
def some_keyword_without_options(self, arg, *rest):
pass
@TestLibrary.keyword.reset_options.some_option
def some_keyword_without_default_options(self, arg, *rest):
pass
```
| /robotframework-tools-0.1a134.zip/robotframework-tools-0.1a134/README.ipynb | 0.500732 | 0.920218 | README.ipynb | pypi |
__all__ = ['TestLibraryType']
from textwrap import dedent
from decorator import decorator
from .keywords import Keyword, KeywordsDict
def check_keywords(func):
"""Decorator for Test Library methods,
which checks if an instance-bound .keywords mapping exists.
"""
def caller(func, self, *args, **kwargs):
if self.keywords is type(self).keywords:
raise RuntimeError(dedent("""
'%s' instance has no instance-bound .keywords mapping.
Was Test Library's base __init__ called?
""" % type(self).__name__))
return func(self, *args, **kwargs)
return decorator(caller, func)
class TestLibraryType(object):
"""A base class for Robot Test Libraries.
- Should not be initialized directly.
- :func:`testlibrary` dynamically creates derived classes
to use as (a base for) a custom Test Library.
"""
@check_keywords
def get_keyword_names(self):
"""Get all Capitalized Keyword names.
- Part of Robot Framework's Dynamic Test Library API.
"""
return [str(name) for name, kw in self.keywords]
@check_keywords
def run_keyword(self, name, args, kwargs={}):
"""Run the Keyword given by its `name`
with the given `args` and optional `kwargs`.
- Part of Robot Framework's Dynamic Test Library API.
"""
keyword = self.keywords[name]
return keyword(*args, **kwargs)
@check_keywords
def get_keyword_documentation(self, name):
"""Get the doc string of the Keyword given by its `name`.
- Part of Robot Framework's Dynamic Test Library API.
"""
if name == '__intro__':
#TODO
return ""
if name == '__init__':
#TODO
return ""
keyword = self.keywords[name]
return keyword.__doc__
@check_keywords
def get_keyword_arguments(self, name):
"""Get the arguments definition of the Keyword given by its `name`.
- Part of Robot Framework's Dynamic Test Library API.
"""
keyword = self.keywords[name]
return list(keyword.args())
def __init__(self):
"""Initializes the Test Library base.
- Creates a new :class:`KeywordsDict` mapping
for storing bound :class:`Keyword` instances
corresponding to the method function objects
in the Test Library class' :class:`KeywordsDict` mapping,
which was populated by the <Test Library class>.keyword decorator.
- Sets the initially active contexts.
"""
self.contexts = []
for name, handler in self.context_handlers:
self.contexts.append(handler.default)
self.keywords = KeywordsDict()
for name, func in type(self).keywords:
self.keywords[name] = Keyword(name, func, libinstance=self)
@check_keywords
def __getattr__(self, name):
"""CamelCase access to the bound :class:`Keyword` instances.
"""
try:
return getattr(self.keywords, name)
except AttributeError:
raise AttributeError(
"'%s' instance has no attribute or Keyword '%s'"
% (type(self).__name__, name))
@check_keywords
def __dir__(self):
"""Return the CamelCase Keyword names.
"""
return dir(self.keywords) | /robotframework-tools-0.1a134.zip/robotframework-tools-0.1a134/robottools/library/base.py | 0.635336 | 0.246953 | base.py | pypi |
__all__ = ['Meta']
from moretools import camelize, decamelize
class Meta(object):
"""The meta options manager for :class:`.SessionHandler`.
- Based on the handler's class name
and a user-defined `Handler.Meta` class.
"""
def __init__(self, handlerclsname=None, options=None):
"""Generate several variants of a session handler name
for use in identifiers and message strings.
- Based on the `handlerclsname`
and/or the attributes of the optional
`Handler.Meta` class in `options`,
which can define name (variant) prefixes/suffixes
and/or explicit name variants.
"""
# Check all prefix definitions and generate actual prefix strings
prefixes = {}
def gen_prefix(key, default, append=''):
"""Check the prefix definition
for name variant identified by `key`.
- Set to `default` if not defined.
- Always `append` the given extra string.
"""
try:
prefix = getattr(
options, (key and key + '_') + 'name_prefix')
except AttributeError:
prefix = default
else:
prefix = prefix and str(prefix) or ''
if prefix and not prefix.endswith(append):
prefix += append
# Finally add to the prefixes dictionary
prefixes[key] = prefix
def gen_plural_prefix(key, append=''):
"""Check the prefix definition
for plural name variant identified by plural_`key`.
- Set to singular `key` prefix if not defined.
- Always `append` the given extra string.
"""
plural_key = 'plural' + (key and '_' + key)
default = prefixes[key]
gen_prefix(plural_key, default, append)
# Base name prefixes
gen_prefix('', '', '_')
gen_plural_prefix('', '_')
gen_prefix('upper', camelize(prefixes['']))
gen_plural_prefix('upper')
# Identifier name prefixes
gen_prefix('identifier', '', '_')
gen_plural_prefix('identifier', '_')
gen_prefix('upper_identifier', camelize(prefixes['identifier']))
# Verbose name prefixes
gen_prefix('verbose', '', ' ')
gen_plural_prefix('verbose', ' ')
# Check all suffix definitions and generate actual suffix strings
suffixes = {}
def gen_suffix(key, default, prepend=''):
"""Check the suffix definition
for name variant identified by `key`.
- Set to `default` if not defined.
- Always `prepend` the given extra string.
"""
try:
suffix = getattr(options, key + '_name_suffix')
except AttributeError:
suffix = default
else:
suffix = suffix and str(suffix) or ''
if suffix and not suffix.startswith(prepend):
suffix = prepend + suffix
# Finally add to the suffixes dictionary
suffixes[key] = suffix
def gen_plural_suffix(key, prepend=''):
"""Check the suffix definition
for plural name variant identified by plural_`key`.
- Set to singular `key` suffix + 's' if not defined.
- Always `prepend` the given extra string.
"""
plural_key = 'plural' + (key and '_' + key)
default = suffixes[key] and suffixes[key] + 's'
gen_suffix(plural_key, default, prepend)
# Identifier name suffixes
gen_suffix('', '', '_')
gen_plural_suffix('', '_')
gen_suffix('upper', camelize(suffixes['']))
gen_plural_suffix('upper')
# Identifier name suffixes
## gen_suffix('identifier', 'session', '_')
gen_suffix('identifier', '', '_')
gen_plural_suffix('identifier', '_')
gen_suffix('upper_identifier', camelize(suffixes['identifier']))
# Verbose name suffixes
## gen_suffix('verbose', 'Session', ' ')
gen_suffix('verbose', '', ' ')
gen_plural_suffix('verbose', ' ')
# Check explicit name variant definitions
variants = {}
for variantkey in (
'', 'plural', 'upper', 'plural_upper',
'identifier', 'plural_identifier', 'upper_identifier',
'verbose', 'plural_verbose'
):
defname = (variantkey and variantkey + '_') + 'name'
variant = getattr(options, defname, None)
# Non-empty string or None
variant = variant and (str(variant) or None) or None
variants[variantkey] = variant
# Create final base name (helper) variants
# (NOT stored in final meta object (self))
key = ''
name = (
variants[key]
or prefixes[key] + decamelize(handlerclsname) + suffixes[key])
key = 'plural'
plural_name = (
variants[key] and prefixes[key] + variants[key] + suffixes[key]
or None)
key = 'upper'
upper_name = (
variants[key]
or variants[''] and camelize(variants[''])
or prefixes[key] + handlerclsname + suffixes[key])
key = 'plural_upper'
plural_upper_name = (
variants[key]
or variants['plural']
and prefixes[key] + camelize(variants['plural']) + (
suffixes[key] or (not variants['plural'] and 's' or ''))
or None)
# Create final identifier/verbose name variants
# (stored in final meta object (self))
key = 'identifier'
self.identifier_name = (
variants[key]
or prefixes[key] + name + suffixes[key])
key = 'plural_identifier'
self.plural_identifier_name = (
variants[key]
or prefixes[key] + (plural_name or name) + (
suffixes[key] or (not plural_name and 's' or '')))
key = 'upper_identifier'
self.upper_identifier_name = (
variants[key]
or prefixes[key] + upper_name + suffixes[key])
key = 'verbose'
self.verbose_name = (
variants[key]
or prefixes[key] + upper_name + suffixes[key])
key = 'plural_verbose'
self.plural_verbose_name = (
variants[key]
or prefixes[key] + (plural_upper_name or upper_name) + (
suffixes[key] or (not plural_upper_name and 's' or ''))) | /robotframework-tools-0.1a134.zip/robotframework-tools-0.1a134/robottools/library/session/metaoptions.py | 0.784938 | 0.255791 | metaoptions.py | pypi |
from six import with_metaclass, string_types
__all__ = ['normboolclass', 'normbooltype']
from six.moves import map
from moretools import boolclass
from robot.utils import normalize
class Type(type(boolclass.base)):
"""Base metaclass for :func:`normboolclass` created classes.
"""
def __contains__(cls, value):
"""Look for a normalized `value` in `.true` and `.false` lists.
"""
return super(Type, cls).__contains__(cls.normalize(value))
class NormalizedBool(with_metaclass(Type, boolclass.base)):
"""Base class for :func:`normboolclass` created classes.
"""
def __init__(self, value):
"""Create a NormalizedBool instance with a normalized value.
"""
try:
super(NormalizedBool, self).__init__(
type(self).normalize(value))
except ValueError as e:
raise type(e)(repr(value))
def normboolclass(typename='NormalizedBool', true=None, false=None,
ignore='', caseless=True, spaceless=True,
base=NormalizedBool):
if not issubclass(base, NormalizedBool):
raise TypeError("'base' is no subclass of normboolclass.base: %s"
% base)
# to be stored as .normalize method of created class
def normalizer(value):
"""Normalize `value` based on normalizing options
given to :func:`normboolclass`.
- Any non-string values are just passed through.
"""
if not isinstance(value, string_types):
return value
return normalize(value, ignore=normalizer.ignore,
caseless=normalizer.caseless, spaceless=normalizer.spaceless)
# store the normalizing options
normalizer.ignore = ignore
normalizer.caseless = caseless
normalizer.spaceless = spaceless
if true:
true = list(map(normalizer, true))
if false:
false = list(map(normalizer, false))
Bool = boolclass(typename, true=true, false=false, base=base)
type(Bool).normalize = staticmethod(normalizer)
return Bool
normbooltype = normboolclass
normboolclass.base = NormalizedBool | /robotframework-tools-0.1a134.zip/robotframework-tools-0.1a134/robottools/utils/normbool.py | 0.785555 | 0.410343 | normbool.py | pypi |
from six import text_type as unicode
__all__ = ['normstringclass', 'normstringtype']
from six.moves import UserString
from robot.utils import normalize
class NormalizedString(unicode):
"""Base class for :func:`normstringclass` created classes.
"""
def __init__(self, value):
"""Initialize a NormalizedString instance with a normalized value.
"""
self.__dict__['normalized'] = type(self).normalize(self)
@property
def normalized(self):
return self.__dict__['normalized']
def __cmp__(self, other):
return cmp(self.normalized, type(self).normalize(other))
def __eq__(self, other):
return self.normalized == type(self).normalize(other)
def __ne__(self, other):
return self.normalized != type(self).normalize(other)
def __lt__(self, other):
return self.normalized < type(self).normalize(other)
def __le__(self, other):
return self.normalized <= type(self).normalize(other)
def __gt__(self, other):
return self.normalized > type(self).normalize(other)
def __ge__(self, other):
return self.normalized >= type(self).normalize(other)
def __contains__(self, string):
return type(self).normalize(string) in self.normalized
def normstringclass(typename='NormalizedString',
ignore='', caseless=True, spaceless=True, base=NormalizedString):
if not issubclass(base, NormalizedString):
raise TypeError("'base' is no subclass of normstringclass.base: %s"
% base)
# to be stored as .normalize method of created class
def normalizer(value):
"""Normalize `value` based on normalizing options
given to :func:`normstringclass`.
"""
if isinstance(value, UserString):
value = value.data
return normalize(value, ignore=normalizer.ignore,
caseless=normalizer.caseless, spaceless=normalizer.spaceless)
# store the normalizing options
normalizer.ignore = ignore
normalizer.caseless = caseless
normalizer.spaceless = spaceless
class Type(type(base)):
normalize = staticmethod(normalizer)
return Type(typename, (base,), {})
normstringtype = normstringclass
normstringclass.base = NormalizedString | /robotframework-tools-0.1a134.zip/robotframework-tools-0.1a134/robottools/utils/normstr.py | 0.90774 | 0.386011 | normstr.py | pypi |
from Mobile import Mobile
class uiautomatorlibrary(Mobile):
"""
robotframework-uiautomatorlibrary is an Android device testing library for Robot Framework.
It uses uiautomator - Python wrapper for Android uiautomator tool (https://pypi.python.org/pypi/uiautomator/0.1.30) internally.
*Before running tests*
You can use `Set Serial` to specify which device to perform the test.
*Identify UI object*
If the UI object can be identified just by one selector, you can use keyword to manipulate the object directly.
For example:
| Swipe Left | description=Settings | | # swipe the UI object left by description |
| Swipe Left | description=Settings | clickable=True | # swipe the UI object left by description and text |
If the UI object is in other or UI object (other layout or something else), you can always get the object layer by layer.
For example:
| ${some_parent_object} | Get Object | className=android.widget.FrameLayout |
| ${some_child_object} | Get Child | ${some_parent_object} | text=ShownTextOnChildObject |
*Selectors*
If the keyword argument expects _**selectors_, the following parameters are supported. (more details https://github.com/xiaocong/uiautomator#selector):
- text, textContains, textMatches, textStartsWith
- className, classNameMatches
- description, descriptionContains, descriptionMatches, descriptionStartsWith
- checkable, checked, clickable, longClickable
- scrollable, enabled,focusable, focused, selected
- packageName, packageNameMatches
- resourceId, resourceIdMatches
- index, instance
p.s. These parameters are case sensitive.
*Input*
The keyword Type allows you to type in languages other than English.
You have to :
1. Install MyIME.apk (in support folder) to device.
2. Set MyIME as your input method editor in the setting.
*Operations without UI*
If you want to use keywords with *[Test Agent]* tag.
You have to install TestAgent.apk (in support folder) to device.
"""
ROBOT_LIBRARY_VERSION = '0.4'
ROBOT_LIBRARY_DOC_FORMAT = 'ROBOT'
ROBOT_LIBRARY_SCOPE = 'GLOBAL'
ROBOT_EXIT_ON_FAILURE = True
def __init__(self):
"""
"""
Mobile.__init__(self) | /robotframework-uiautomatorlibrary-0.4.zip/robotframework-uiautomatorlibrary-0.4/uiautomatorlibrary/__init__.py | 0.840521 | 0.384681 | __init__.py | pypi |
import os
from keywords import *
from version import VERSION
from utils import LibraryListener
from robot.libraries.BuiltIn import BuiltIn
__version__ = VERSION
class Selenium2Library(
_LoggingKeywords,
_RunOnFailureKeywords,
_BrowserManagementKeywords,
_ElementKeywords,
_TableElementKeywords,
_FormElementKeywords,
_SelectElementKeywords,
_JavaScriptKeywords,
_CookieKeywords,
_ScreenshotKeywords,
_WaitingKeywords,
_WebKeywords
):
"""Selenium2Library is a web testing library for Robot Framework.
It uses the Selenium 2 (WebDriver) libraries internally to control a web browser.
See http://seleniumhq.org/docs/03_webdriver.html for more information on Selenium 2
and WebDriver.
Selenium2Library runs tests in a real browser instance. It should work in
most modern browsers and can be used with both Python and Jython interpreters.
= Before running tests =
Prior to running test cases using Selenium2Library, Selenium2Library must be
imported into your Robot test suite (see `importing` section), and the
`Open Browser` keyword must be used to open a browser to the desired location.
*--- Note important change starting with Version 1.7.0 release ---*
= Locating or specifying elements =
All keywords in Selenium2Library that need to find an element on the page
take an argument, either a `locator` or now a `webelement`. `locator`
is a string that describes how to locate an element using a syntax
specifying different location strategies. `webelement` is a variable that
holds a WebElement instance, which is a representation of the element.
*Using locators*
---------------
By default, when a locator value is provided, it is matched against the
key attributes of the particular element type. For example, `id` and
`name` are key attributes to all elements, and locating elements is easy
using just the `id` as a `locator`. For example:
| Click Element my_element
It is also possible to specify the approach Selenium2Library should take
to find an element by specifying a lookup strategy with a locator
prefix. Supported strategies are:
| *Strategy* | *Example* | *Description* |
| identifier | Click Element `|` identifier=my_element | Matches by @id or @name attribute |
| id | Click Element `|` id=my_element | Matches by @id attribute |
| name | Click Element `|` name=my_element | Matches by @name attribute |
| xpath | Click Element `|` xpath=//div[@id='my_element'] | Matches with arbitrary XPath expression |
| dom | Click Element `|` dom=document.images[56] | Matches with arbitrary DOM express |
| link | Click Element `|` link=My Link | Matches anchor elements by their link text |
| partial link | Click Element `|` partial link=y Lin | Matches anchor elements by their partial link text |
| css | Click Element `|` css=div.my_class | Matches by CSS selector |
| jquery | Click Element `|` jquery=div.my_class | Matches by jQuery/sizzle selector |
| sizzle | Click Element `|` sizzle=div.my_class | Matches by jQuery/sizzle selector |
| tag | Click Element `|` tag=div | Matches by HTML tag name |
| default* | Click Link `|` default=page?a=b | Matches key attributes with value after first '=' |
* Explicitly specifying the default strategy is only necessary if locating
elements by matching key attributes is desired and an attribute value
contains a '='. The following would fail because it appears as if _page?a_
is the specified lookup strategy:
| Click Link page?a=b
This can be fixed by changing the locator to:
| Click Link default=page?a=b
*Using webelements*
------------------
Starting with version 1.7 of the Selenium2Library, one can pass an argument
that contains a WebElement instead of a string locator. To get a WebElement,
use the new `Get WebElements` keyword. For example:
| ${elem} = | Get WebElement | id=my_element |
| Click Element | ${elem} | |
Locating Tables, Table Rows, Columns, etc.
------------------------------------------
Table related keywords, such as `Table Should Contain`, work differently.
By default, when a table locator value is provided, it will search for
a table with the specified `id` attribute. For example:
| Table Should Contain my_table text
More complex table lookup strategies are also supported:
| *Strategy* | *Example* | *Description* |
| css | Table Should Contain `|` css=table.my_class `|` text | Matches by @id or @name attribute |
| xpath | Table Should Contain `|` xpath=//table/[@name="my_table"] `|` text | Matches by @id or @name attribute |
= Custom Locators =
If more complex lookups are required than what is provided through the default locators, custom lookup strategies can
be created. Using custom locators is a two part process. First, create a keyword that returns the WebElement
that should be acted on.
| Custom Locator Strategy | [Arguments] | ${browser} | ${criteria} | ${tag} | ${constraints} |
| | ${retVal}= | Execute Javascript | return window.document.getElementById('${criteria}'); |
| | [Return] | ${retVal} |
This keyword is a reimplementation of the basic functionality of the `id` locator where `${browser}` is a reference
to the WebDriver instance and `${criteria}` is the text of the locator (i.e. everything that comes after the = sign).
To use this locator it must first be registered with `Add Location Strategy`.
| Add Location Strategy custom Custom Locator Strategy
The first argument of `Add Location Strategy` specifies the name of the lookup strategy (which must be unique). After
registration of the lookup strategy, the usage is the same as other locators. See `Add Location Strategy` for more details.
= Timeouts =
There are several `Wait ...` keywords that take timeout as an
argument. All of these timeout arguments are optional. The timeout
used by all of them can be set globally using the
`Set Selenium Timeout` keyword. The same timeout also applies to
`Execute Async Javascript`.
All timeouts can be given as numbers considered seconds (e.g. 0.5 or 42)
or in Robot Framework's time syntax (e.g. '1.5 seconds' or '1 min 30 s').
For more information about the time syntax see:
http://robotframework.googlecode.com/svn/trunk/doc/userguide/RobotFrameworkUserGuide.html#time-format.
"""
ROBOT_LIBRARY_SCOPE = 'GLOBAL'
ROBOT_LIBRARY_VERSION = VERSION
def __init__(self,
timeout=5.0,
implicit_wait=0.0,
run_on_failure='Capture Page Screenshot',
screenshot_root_directory=None,
web_gif='False'
):
"""Selenium2Library can be imported with optional arguments.
`timeout` is the default timeout used to wait for all waiting actions.
It can be later set with `Set Selenium Timeout`.
'implicit_wait' is the implicit timeout that Selenium waits when
looking for elements.
It can be later set with `Set Selenium Implicit Wait`.
See `WebDriver: Advanced Usage`__ section of the SeleniumHQ documentation
for more information about WebDriver's implicit wait functionality.
__ http://seleniumhq.org/docs/04_webdriver_advanced.html#explicit-and-implicit-waits
`run_on_failure` specifies the name of a keyword (from any available
libraries) to execute when a Selenium2Library keyword fails. By default
`Capture Page Screenshot` will be used to take a screenshot of the current page.
Using the value "Nothing" will disable this feature altogether. See
`Register Keyword To Run On Failure` keyword for more information about this
functionality.
`screenshot_root_directory` specifies the default root directory that screenshots should be
stored in. If not provided the default directory will be where robotframework places its logfile.
`web_gif` Enable/Disable gif generation for each Test Case, Defalut setting is False/FALSE
Examples:
| Library `|` Selenium2Library `|` 15 | # Sets default timeout to 15 seconds |
| Library `|` Selenium2Library `|` 0 `|` 5 | # Sets default timeout to 0 seconds and default implicit_wait to 5 seconds |
| Library `|` Selenium2Library `|` 5 `|` run_on_failure=Log Source | # Sets default timeout to 5 seconds and runs `Log Source` on failure |
| Library `|` Selenium2Library `|` implicit_wait=5 `|` run_on_failure=Log Source | # Sets default implicit_wait to 5 seconds and runs `Log Source` on failure |
| Library `|` Selenium2Library `|` timeout=10 `|` run_on_failure=Nothing | # Sets default timeout to 10 seconds and does nothing on failure |
| Library `|` Selenium2Library `|` timeout=10 `|` web_gif=TRUE | # Sets default timeout to 10 seconds and enable gif file generation for each case |
"""
for base in Selenium2Library.__bases__:
base.__init__(self)
self.screenshot_root_directory = screenshot_root_directory
self.set_selenium_timeout(timeout)
self.set_selenium_implicit_wait(implicit_wait)
self.register_keyword_to_run_on_failure(run_on_failure)
self.web_Set_gif_flag(web_gif)
if self._web_gen_gif == True:
self.ROBOT_LIBRARY_LISTENER = LibraryListener() | /robotframework-weblibrary-2.0.1.zip/robotframework-weblibrary-2.0.1/src/Selenium2Library/__init__.py | 0.735547 | 0.48987 | __init__.py | pypi |
import os
from selenium.common.exceptions import WebDriverException
from keywordgroup import KeywordGroup
class _JavaScriptKeywords(KeywordGroup):
def __init__(self):
self._cancel_on_next_confirmation = False
# Public
def alert_should_be_present(self, text=''):
"""Verifies an alert is present and dismisses it.
If `text` is a non-empty string, then it is also verified that the
message of the alert equals to `text`.
Will fail if no alert is present. Note that following keywords
will fail unless the alert is dismissed by this
keyword or another like `Get Alert Message`.
"""
alert_text = self.get_alert_message()
if text and alert_text != text:
raise AssertionError("Alert text should have been '%s' but was '%s'"
% (text, alert_text))
def choose_cancel_on_next_confirmation(self):
"""Cancel will be selected the next time `Confirm Action` is used."""
self._cancel_on_next_confirmation = True
def choose_ok_on_next_confirmation(self):
"""Undo the effect of using keywords `Choose Cancel On Next Confirmation`. Note
that Selenium's overridden window.confirm() function will normally automatically
return true, as if the user had manually clicked OK, so you shouldn't
need to use this command unless for some reason you need to change
your mind prior to the next confirmation. After any confirmation, Selenium will resume using the
default behavior for future confirmations, automatically returning
true (OK) unless/until you explicitly use `Choose Cancel On Next Confirmation` for each
confirmation.
Note that every time a confirmation comes up, you must
consume it by using a keywords such as `Get Alert Message`, or else
the following selenium operations will fail.
"""
self._cancel_on_next_confirmation = False
def confirm_action(self):
"""Dismisses currently shown confirmation dialog and returns it's message.
By default, this keyword chooses 'OK' option from the dialog. If
'Cancel' needs to be chosen, keyword `Choose Cancel On Next
Confirmation` must be called before the action that causes the
confirmation dialog to be shown.
Examples:
| Click Button | Send | # Shows a confirmation dialog |
| ${message}= | Confirm Action | # Chooses Ok |
| Should Be Equal | ${message} | Are your sure? |
| | | |
| Choose Cancel On Next Confirmation | | |
| Click Button | Send | # Shows a confirmation dialog |
| Confirm Action | | # Chooses Cancel |
"""
text = self._close_alert(not self._cancel_on_next_confirmation)
self._cancel_on_next_confirmation = False
return text
def execute_javascript(self, *code):
"""Executes the given JavaScript code.
`code` may contain multiple lines of code and may be divided into
multiple cells in the test data. In that case, the parts are
catenated together without adding spaces.
If `code` is an absolute path to an existing file, the JavaScript
to execute will be read from that file. Forward slashes work as
a path separator on all operating systems.
The JavaScript executes in the context of the currently selected
frame or window as the body of an anonymous function. Use _window_ to
refer to the window of your application and _document_ to refer to the
document object of the current frame or window, e.g.
_document.getElementById('foo')_.
This keyword returns None unless there is a return statement in the
JavaScript. Return values are converted to the appropriate type in
Python, including WebElements.
Examples:
| Execute JavaScript | window.my_js_function('arg1', 'arg2') | |
| Execute JavaScript | ${CURDIR}/js_to_execute.js | |
| ${sum}= | Execute JavaScript | return 1 + 1; |
| Should Be Equal | ${sum} | ${2} |
"""
js = self._get_javascript_to_execute(''.join(code))
self._info("Executing JavaScript:\n%s" % js)
return self._current_browser().execute_script(js)
def execute_async_javascript(self, *code):
"""Executes asynchronous JavaScript code.
Similar to `Execute Javascript` except that scripts executed with
this keyword must explicitly signal they are finished by invoking the
provided callback. This callback is always injected into the executed
function as the last argument.
Scripts must complete within the script timeout or this keyword will
fail. See the `Timeouts` section for more information.
Examples:
| Execute Async JavaScript | var callback = arguments[arguments.length - 1]; | window.setTimeout(callback, 2000); |
| Execute Async JavaScript | ${CURDIR}/async_js_to_execute.js | |
| ${retval}= | Execute Async JavaScript | |
| ... | var callback = arguments[arguments.length - 1]; | |
| ... | function answer(){callback("text");}; | |
| ... | window.setTimeout(answer, 2000); | |
| Should Be Equal | ${retval} | text |
"""
js = self._get_javascript_to_execute(''.join(code))
self._info("Executing Asynchronous JavaScript:\n%s" % js)
return self._current_browser().execute_async_script(js)
def get_alert_message(self, dismiss=True):
"""Returns the text of current JavaScript alert.
By default the current JavaScript alert will be dismissed.
This keyword will fail if no alert is present. Note that
following keywords will fail unless the alert is
dismissed by this keyword or another like `Get Alert Message`.
"""
if dismiss:
return self._close_alert()
else:
return self._read_alert()
def dismiss_alert(self, accept=True):
""" Returns true if alert was confirmed, false if it was dismissed
This keyword will fail if no alert is present. Note that
following keywords will fail unless the alert is
dismissed by this keyword or another like `Get Alert Message`.
"""
return self._handle_alert(accept)
# Private
def _close_alert(self, confirm=True):
try:
text = self._read_alert()
alert = self._handle_alert(confirm)
return text
except WebDriverException:
raise RuntimeError('There were no alerts')
def _read_alert(self):
alert = None
try:
alert = self._current_browser().switch_to_alert()
text = ' '.join(alert.text.splitlines()) # collapse new lines chars
return text
except WebDriverException:
raise RuntimeError('There were no alerts')
def _handle_alert(self, confirm=True):
try:
alert = self._current_browser().switch_to_alert()
if not confirm:
alert.dismiss()
return False
else:
alert.accept()
return True
except WebDriverException:
raise RuntimeError('There were no alerts')
def _get_javascript_to_execute(self, code):
codepath = code.replace('/', os.sep)
if not (os.path.isabs(codepath) and os.path.isfile(codepath)):
return code
self._html('Reading JavaScript from file <a href="file://%s">%s</a>.'
% (codepath.replace(os.sep, '/'), codepath))
codefile = open(codepath)
try:
return codefile.read().strip()
finally:
codefile.close() | /robotframework-weblibrary-2.0.1.zip/robotframework-weblibrary-2.0.1/src/Selenium2Library/keywords/_javascript.py | 0.641647 | 0.250964 | _javascript.py | pypi |
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.remote.webelement import WebElement
from Selenium2Library import utils
from Selenium2Library.locators import ElementFinder
from Selenium2Library.locators import CustomLocator
from keywordgroup import KeywordGroup
from time import sleep
try:
basestring # attempt to evaluate basestring
def isstr(s):
return isinstance(s, basestring)
except NameError:
def isstr(s):
return isinstance(s, str)
class _ElementKeywords(KeywordGroup):
def __init__(self):
self._element_finder = ElementFinder()
# Public, get element(s)
def get_webelement(self, locator):
"""Returns the first WebElement matching the given locator.
See `introduction` for details about locating elements.
"""
return self._element_find(locator, True, True)
def get_webelements(self, locator):
"""Returns list of WebElement objects matching locator.
See `introduction` for details about locating elements.
"""
return self._element_find(locator, False, True)
# Public, element lookups
def current_frame_contains(self, text, loglevel='INFO'):
"""Verifies that current frame contains `text`.
See `Page Should Contain ` for explanation about `loglevel` argument.
"""
if not self._is_text_present(text):
self.log_source(loglevel)
raise AssertionError("Page should have contained text '%s' "
"but did not" % text)
self._info("Current page contains text '%s'." % text)
def current_frame_should_not_contain(self, text, loglevel='INFO'):
"""Verifies that current frame contains `text`.
See `Page Should Contain ` for explanation about `loglevel` argument.
"""
if self._is_text_present(text):
self.log_source(loglevel)
raise AssertionError("Page should not have contained text '%s' "
"but it did" % text)
self._info("Current page should not contain text '%s'." % text)
def element_should_contain(self, locator, expected, message=''):
"""Verifies element identified by `locator` contains text `expected`.
If you wish to assert an exact (not a substring) match on the text
of the element, use `Element Text Should Be`.
`message` can be used to override the default error message.
Key attributes for arbitrary elements are `id` and `name`. See
`introduction` for details about locating elements.
"""
self._info("Verifying element '%s' contains text '%s'."
% (locator, expected))
actual = self._get_text(locator)
if not expected in actual:
if not message:
message = "Element '%s' should have contained text '%s' but "\
"its text was '%s'." % (locator, expected, actual)
raise AssertionError(message)
def element_should_not_contain(self, locator, expected, message=''):
"""Verifies element identified by `locator` does not contain text `expected`.
`message` can be used to override the default error message.
Key attributes for arbitrary elements are `id` and `name`. See
`Element Should Contain` for more details.
"""
self._info("Verifying element '%s' does not contain text '%s'."
% (locator, expected))
actual = self._get_text(locator)
if expected in actual:
if not message:
message = "Element '%s' should not contain text '%s' but " \
"it did." % (locator, expected)
raise AssertionError(message)
def frame_should_contain(self, locator, text, loglevel='INFO'):
"""Verifies frame identified by `locator` contains `text`.
See `Page Should Contain ` for explanation about `loglevel` argument.
Key attributes for frames are `id` and `name.` See `introduction` for
details about locating elements.
"""
if not self._frame_contains(locator, text):
self.log_source(loglevel)
raise AssertionError("Page should have contained text '%s' "
"but did not" % text)
self._info("Current page contains text '%s'." % text)
def page_should_contain(self, text, loglevel='INFO'):
"""Verifies that current page contains `text`.
If this keyword fails, it automatically logs the page source
using the log level specified with the optional `loglevel` argument.
Valid log levels are DEBUG, INFO (default), WARN, and NONE. If the
log level is NONE or below the current active log level the source
will not be logged.
"""
if not self._page_contains(text):
self.log_source(loglevel)
raise AssertionError("Page should have contained text '%s' "
"but did not" % text)
self._info("Current page contains text '%s'." % text)
def page_should_contain_element(self, locator, message='', loglevel='INFO'):
"""Verifies element identified by `locator` is found on the current page.
`message` can be used to override default error message.
See `Page Should Contain` for explanation about `loglevel` argument.
Key attributes for arbitrary elements are `id` and `name`. See
`introduction` for details about locating elements.
"""
self._page_should_contain_element(locator, None, message, loglevel)
def locator_should_match_x_times(self, locator, expected_locator_count, message='', loglevel='INFO'):
"""Verifies that the page contains the given number of elements located by the given `locator`.
See `introduction` for details about locating elements.
See `Page Should Contain Element` for explanation about `message` and
`loglevel` arguments.
"""
actual_locator_count = len(self._element_find(locator, False, False))
if int(actual_locator_count) != int(expected_locator_count):
if not message:
message = "Locator %s should have matched %s times but matched %s times"\
%(locator, expected_locator_count, actual_locator_count)
self.log_source(loglevel)
raise AssertionError(message)
self._info("Current page contains %s elements matching '%s'."
% (actual_locator_count, locator))
def page_should_not_contain(self, text, loglevel='INFO'):
"""Verifies the current page does not contain `text`.
See `Page Should Contain ` for explanation about `loglevel` argument.
"""
if self._page_contains(text):
self.log_source(loglevel)
raise AssertionError("Page should not have contained text '%s'" % text)
self._info("Current page does not contain text '%s'." % text)
def page_should_not_contain_element(self, locator, message='', loglevel='INFO'):
"""Verifies element identified by `locator` is not found on the current page.
`message` can be used to override the default error message.
See `Page Should Contain ` for explanation about `loglevel` argument.
Key attributes for arbitrary elements are `id` and `name`. See
`introduction` for details about locating elements.
"""
self._page_should_not_contain_element(locator, None, message, loglevel)
# Public, attributes
def assign_id_to_element(self, locator, id):
"""Assigns a temporary identifier to element specified by `locator`.
This is mainly useful if the locator is complicated/slow XPath expression.
Identifier expires when the page is reloaded.
Example:
| Assign ID to Element | xpath=//div[@id="first_div"] | my id |
| Page Should Contain Element | my id |
"""
self._info("Assigning temporary id '%s' to element '%s'" % (id, locator))
element = self._element_find(locator, True, True)
self._current_browser().execute_script("arguments[0].id = '%s';" % id, element)
def element_should_be_disabled(self, locator):
"""Verifies that element identified with `locator` is disabled.
Key attributes for arbitrary elements are `id` and `name`. See
`introduction` for details about locating elements.
"""
if self._is_enabled(locator):
raise AssertionError("Element '%s' is enabled." % (locator))
def element_should_be_enabled(self, locator):
"""Verifies that element identified with `locator` is enabled.
Key attributes for arbitrary elements are `id` and `name`. See
`introduction` for details about locating elements.
"""
if not self._is_enabled(locator):
raise AssertionError("Element '%s' is disabled." % (locator))
def element_should_be_visible(self, locator, message=''):
"""Verifies that the element identified by `locator` is visible.
Herein, visible means that the element is logically visible, not optically
visible in the current browser viewport. For example, an element that carries
display:none is not logically visible, so using this keyword on that element
would fail.
`message` can be used to override the default error message.
Key attributes for arbitrary elements are `id` and `name`. See
`introduction` for details about locating elements.
"""
self._info("Verifying element '%s' is visible." % locator)
visible = self._is_visible(locator)
if not visible:
if not message:
message = "The element '%s' should be visible, but it "\
"is not." % locator
raise AssertionError(message)
def element_should_not_be_visible(self, locator, message=''):
"""Verifies that the element identified by `locator` is NOT visible.
This is the opposite of `Element Should Be Visible`.
`message` can be used to override the default error message.
Key attributes for arbitrary elements are `id` and `name`. See
`introduction` for details about locating elements.
"""
self._info("Verifying element '%s' is not visible." % locator)
visible = self._is_visible(locator)
if visible:
if not message:
message = "The element '%s' should not be visible, "\
"but it is." % locator
raise AssertionError(message)
def element_text_should_be(self, locator, expected, message=''):
"""Verifies element identified by `locator` exactly contains text `expected`.
In contrast to `Element Should Contain`, this keyword does not try
a substring match but an exact match on the element identified by `locator`.
`message` can be used to override the default error message.
Key attributes for arbitrary elements are `id` and `name`. See
`introduction` for details about locating elements.
"""
self._info("Verifying element '%s' contains exactly text '%s'."
% (locator, expected))
element = self._element_find(locator, True, True)
actual = element.text
if expected != actual:
if not message:
message = "The text of element '%s' should have been '%s' but "\
"in fact it was '%s'." % (locator, expected, actual)
raise AssertionError(message)
def get_element_attribute(self, attribute_locator):
"""Return value of element attribute.
`attribute_locator` consists of element locator followed by an @ sign
and attribute name, for example "element_id@class".
"""
locator, attribute_name = self._parse_attribute_locator(attribute_locator)
element = self._element_find(locator, True, False)
if element is None:
raise ValueError("Element '%s' not found." % (locator))
return element.get_attribute(attribute_name)
def get_horizontal_position(self, locator):
"""Returns horizontal position of element identified by `locator`.
The position is returned in pixels off the left side of the page,
as an integer. Fails if a matching element is not found.
See also `Get Vertical Position`.
"""
element = self._element_find(locator, True, False)
if element is None:
raise AssertionError("Could not determine position for '%s'" % (locator))
return element.location['x']
def get_value(self, locator):
"""Returns the value attribute of element identified by `locator`.
See `introduction` for details about locating elements.
"""
return self._get_value(locator)
def get_text(self, locator):
"""Returns the text value of element identified by `locator`.
See `introduction` for details about locating elements.
"""
return self._get_text(locator)
def clear_element_text(self, locator):
"""Clears the text value of text entry element identified by `locator`.
See `introduction` for details about locating elements.
"""
element = self._element_find(locator, True, True)
element.clear()
def get_vertical_position(self, locator):
"""Returns vertical position of element identified by `locator`.
The position is returned in pixels off the top of the page,
as an integer. Fails if a matching element is not found.
See also `Get Horizontal Position`.
"""
element = self._element_find(locator, True, False)
if element is None:
raise AssertionError("Could not determine position for '%s'" % (locator))
return element.location['y']
# Public, mouse input/events
def click_element(self, locator):
"""Click element identified by `locator`.
Key attributes for arbitrary elements are `id` and `name`. See
`introduction` for details about locating elements.
"""
self._info("Clicking element '%s'." % locator)
self._element_find(locator, True, True).click()
def click_element_at_coordinates(self, locator, xoffset, yoffset):
"""Click element identified by `locator` at x/y coordinates of the element.
Cursor is moved and the center of the element and x/y coordinates are
calculted from that point.
Key attributes for arbitrary elements are `id` and `name`. See
`introduction` for details about locating elements.
"""
self._info("Click clicking element '%s' in coordinates '%s', '%s'." % (locator, xoffset, yoffset))
element = self._element_find(locator, True, True)
#self._element_find(locator, True, True).click()
#ActionChains(self._current_browser()).move_to_element_with_offset(element, xoffset, yoffset).click().perform()
ActionChains(self._current_browser()).move_to_element(element).move_by_offset(xoffset, yoffset).click().perform()
def double_click_element(self, locator):
"""Double click element identified by `locator`.
Key attributes for arbitrary elements are `id` and `name`. See
`introduction` for details about locating elements.
"""
self._info("Double clicking element '%s'." % locator)
element = self._element_find(locator, True, True)
ActionChains(self._current_browser()).double_click(element).perform()
def focus(self, locator):
"""Sets focus to element identified by `locator`."""
element = self._element_find(locator, True, True)
self._current_browser().execute_script("arguments[0].focus();", element)
def drag_and_drop(self, source, target):
"""Drags element identified with `source` which is a locator.
Element can be moved on top of another element with `target`
argument.
`target` is a locator of the element where the dragged object is
dropped.
Examples:
| Drag And Drop | elem1 | elem2 | # Move elem1 over elem2. |
"""
src_elem = self._element_find(source,True,True)
trg_elem = self._element_find(target,True,True)
ActionChains(self._current_browser()).drag_and_drop(src_elem, trg_elem).perform()
def drag_and_drop_by_offset(self, source, xoffset, yoffset):
"""Drags element identified with `source` which is a locator.
Element will be moved by xoffset and yoffset, each of which is a
negative or positive number specify the offset.
Examples:
| Drag And Drop By Offset | myElem | 50 | -35 | # Move myElem 50px right and 35px down. |
"""
src_elem = self._element_find(source, True, True)
ActionChains(self._current_browser()).drag_and_drop_by_offset(src_elem, xoffset, yoffset).perform()
def mouse_down(self, locator):
"""Simulates pressing the left mouse button on the element specified by `locator`.
The element is pressed without releasing the mouse button.
Key attributes for arbitrary elements are `id` and `name`. See
`introduction` for details about locating elements.
See also the more specific keywords `Mouse Down On Image` and
`Mouse Down On Link`.
"""
self._info("Simulating Mouse Down on element '%s'" % locator)
element = self._element_find(locator, True, False)
if element is None:
raise AssertionError("ERROR: Element %s not found." % (locator))
ActionChains(self._current_browser()).click_and_hold(element).perform()
def mouse_out(self, locator):
"""Simulates moving mouse away from the element specified by `locator`.
Key attributes for arbitrary elements are `id` and `name`. See
`introduction` for details about locating elements.
"""
self._info("Simulating Mouse Out on element '%s'" % locator)
element = self._element_find(locator, True, False)
if element is None:
raise AssertionError("ERROR: Element %s not found." % (locator))
size = element.size
offsetx = (size['width'] / 2) + 1
offsety = (size['height'] / 2) + 1
ActionChains(self._current_browser()).move_to_element(element).move_by_offset(offsetx, offsety).perform()
def mouse_over(self, locator):
"""Simulates hovering mouse over the element specified by `locator`.
Key attributes for arbitrary elements are `id` and `name`. See
`introduction` for details about locating elements.
"""
self._info("Simulating Mouse Over on element '%s'" % locator)
element = self._element_find(locator, True, False)
if element is None:
raise AssertionError("ERROR: Element %s not found." % (locator))
ActionChains(self._current_browser()).move_to_element(element).perform()
def mouse_up(self, locator):
"""Simulates releasing the left mouse button on the element specified by `locator`.
Key attributes for arbitrary elements are `id` and `name`. See
`introduction` for details about locating elements.
"""
self._info("Simulating Mouse Up on element '%s'" % locator)
element = self._element_find(locator, True, False)
if element is None:
raise AssertionError("ERROR: Element %s not found." % (locator))
ActionChains(self._current_browser()).release(element).perform()
def open_context_menu(self, locator):
"""Opens context menu on element identified by `locator`."""
element = self._element_find(locator, True, True)
ActionChains(self._current_browser()).context_click(element).perform()
def simulate(self, locator, event):
"""Simulates `event` on element identified by `locator`.
This keyword is useful if element has OnEvent handler that needs to be
explicitly invoked.
See `introduction` for details about locating elements.
"""
element = self._element_find(locator, True, True)
script = """
element = arguments[0];
eventName = arguments[1];
if (document.createEventObject) { // IE
return element.fireEvent('on' + eventName, document.createEventObject());
}
var evt = document.createEvent("HTMLEvents");
evt.initEvent(eventName, true, true);
return !element.dispatchEvent(evt);
"""
self._current_browser().execute_script(script, element, event)
def press_key(self, locator, key):
"""Simulates user pressing key on element identified by `locator`.
`key` is either a single character, a string, or a numerical ASCII code of the key
lead by '\\\\'.
Examples:
| Press Key | text_field | q |
| Press Key | text_field | abcde |
| Press Key | login_button | \\\\13 | # ASCII code for enter key |
"""
if key.startswith('\\') and len(key) > 1:
key = self._map_ascii_key_code_to_key(int(key[1:]))
element = self._element_find(locator, True, True)
#select it
element.send_keys(key)
# Public, links
def click_link(self, locator):
"""Clicks a link identified by locator.
Key attributes for links are `id`, `name`, `href` and link text. See
`introduction` for details about locating elements.
"""
self._info("Clicking link '%s'." % locator)
link = self._element_find(locator, True, True, tag='a')
link.click()
def get_all_links(self):
"""Returns a list containing ids of all links found in current page.
If a link has no id, an empty string will be in the list instead.
"""
links = []
for anchor in self._element_find("tag=a", False, False, 'a'):
links.append(anchor.get_attribute('id'))
return links
def mouse_down_on_link(self, locator):
"""Simulates a mouse down event on a link.
Key attributes for links are `id`, `name`, `href` and link text. See
`introduction` for details about locating elements.
"""
element = self._element_find(locator, True, True, 'link')
ActionChains(self._current_browser()).click_and_hold(element).perform()
def page_should_contain_link(self, locator, message='', loglevel='INFO'):
"""Verifies link identified by `locator` is found from current page.
See `Page Should Contain Element` for explanation about `message` and
`loglevel` arguments.
Key attributes for links are `id`, `name`, `href` and link text. See
`introduction` for details about locating elements.
"""
self._page_should_contain_element(locator, 'link', message, loglevel)
def page_should_not_contain_link(self, locator, message='', loglevel='INFO'):
"""Verifies image identified by `locator` is not found from current page.
See `Page Should Contain Element` for explanation about `message` and
`loglevel` arguments.
Key attributes for images are `id`, `src` and `alt`. See
`introduction` for details about locating elements.
"""
self._page_should_not_contain_element(locator, 'link', message, loglevel)
# Public, images
def click_image(self, locator):
"""Clicks an image found by `locator`.
Key attributes for images are `id`, `src` and `alt`. See
`introduction` for details about locating elements.
"""
self._info("Clicking image '%s'." % locator)
element = self._element_find(locator, True, False, 'image')
if element is None:
# A form may have an image as it's submit trigger.
element = self._element_find(locator, True, True, 'input')
element.click()
def mouse_down_on_image(self, locator):
"""Simulates a mouse down event on an image.
Key attributes for images are `id`, `src` and `alt`. See
`introduction` for details about locating elements.
"""
element = self._element_find(locator, True, True, 'image')
ActionChains(self._current_browser()).click_and_hold(element).perform()
def page_should_contain_image(self, locator, message='', loglevel='INFO'):
"""Verifies image identified by `locator` is found from current page.
See `Page Should Contain Element` for explanation about `message` and
`loglevel` arguments.
Key attributes for images are `id`, `src` and `alt`. See
`introduction` for details about locating elements.
"""
self._page_should_contain_element(locator, 'image', message, loglevel)
def page_should_not_contain_image(self, locator, message='', loglevel='INFO'):
"""Verifies image identified by `locator` is found from current page.
See `Page Should Contain Element` for explanation about `message` and
`loglevel` arguments.
Key attributes for images are `id`, `src` and `alt`. See
`introduction` for details about locating elements.
"""
self._page_should_not_contain_element(locator, 'image', message, loglevel)
# Public, xpath
def get_matching_xpath_count(self, xpath):
"""Returns number of elements matching `xpath`
One should not use the xpath= prefix for 'xpath'. XPath is assumed.
Correct:
| count = | Get Matching Xpath Count | //div[@id='sales-pop']
Incorrect:
| count = | Get Matching Xpath Count | xpath=//div[@id='sales-pop']
If you wish to assert the number of matching elements, use
`Xpath Should Match X Times`.
"""
count = len(self._element_find("xpath=" + xpath, False, False))
return str(count)
def xpath_should_match_x_times(self, xpath, expected_xpath_count, message='', loglevel='INFO'):
"""Verifies that the page contains the given number of elements located by the given `xpath`.
One should not use the xpath= prefix for 'xpath'. XPath is assumed.
Correct:
| Xpath Should Match X Times | //div[@id='sales-pop'] | 1
Incorrect:
| Xpath Should Match X Times | xpath=//div[@id='sales-pop'] | 1
See `Page Should Contain Element` for explanation about `message` and
`loglevel` arguments.
"""
actual_xpath_count = len(self._element_find("xpath=" + xpath, False, False))
if int(actual_xpath_count) != int(expected_xpath_count):
if not message:
message = "Xpath %s should have matched %s times but matched %s times"\
%(xpath, expected_xpath_count, actual_xpath_count)
self.log_source(loglevel)
raise AssertionError(message)
self._info("Current page contains %s elements matching '%s'."
% (actual_xpath_count, xpath))
# Public, custom
def add_location_strategy(self, strategy_name, strategy_keyword, persist=False):
"""Adds a custom location strategy based on a user keyword. Location strategies are
automatically removed after leaving the current scope by default. Setting `persist`
to any non-empty string will cause the location strategy to stay registered throughout
the life of the test.
Trying to add a custom location strategy with the same name as one that already exists will
cause the keyword to fail.
Custom locator keyword example:
| Custom Locator Strategy | [Arguments] | ${browser} | ${criteria} | ${tag} | ${constraints} |
| | ${retVal}= | Execute Javascript | return window.document.getElementById('${criteria}'); |
| | [Return] | ${retVal} |
Usage example:
| Add Location Strategy | custom | Custom Locator Strategy |
| Page Should Contain Element | custom=my_id |
See `Remove Location Strategy` for details about removing a custom location strategy.
"""
strategy = CustomLocator(strategy_name, strategy_keyword)
self._element_finder.register(strategy, persist)
def remove_location_strategy(self, strategy_name):
"""Removes a previously added custom location strategy.
Will fail if a default strategy is specified.
See `Add Location Strategy` for details about adding a custom location strategy.
"""
self._element_finder.unregister(strategy_name)
# Private
def _element_find(self, locator, first_only, required, tag=None):
browser = self._current_browser()
if isstr(locator):
elements = self._element_finder.find(browser, locator, tag)
if required and len(elements) == 0:
raise ValueError("Element locator '" + locator + "' did not match any elements.")
if first_only:
if len(elements) == 0: return None
return elements[0]
elif isinstance(locator, WebElement):
elements = locator
# do some other stuff here like deal with list of webelements
# ... or raise locator/element specific error if required
return elements
def _frame_contains(self, locator, text):
browser = self._current_browser()
element = self._element_find(locator, True, True)
browser.switch_to_frame(element)
self._info("Searching for text from frame '%s'." % locator)
found = self._is_text_present(text)
browser.switch_to_default_content()
return found
def _get_text(self, locator):
element = self._element_find(locator, True, True)
if element is not None:
return element.text
return None
def _get_value(self, locator, tag=None):
element = self._element_find(locator, True, False, tag=tag)
return element.get_attribute('value') if element is not None else None
def _is_enabled(self, locator):
element = self._element_find(locator, True, True)
if not self._is_form_element(element):
raise AssertionError("ERROR: Element %s is not an input." % (locator))
if not element.is_enabled():
return False
read_only = element.get_attribute('readonly')
if read_only == 'readonly' or read_only == 'true':
return False
return True
def _is_text_present(self, text):
locator = "xpath=//*[contains(., %s)]" % utils.escape_xpath_value(text);
return self._is_element_present(locator)
def _is_visible(self, locator):
element = self._element_find(locator, True, False)
if element is not None:
return element.is_displayed()
return None
def _map_ascii_key_code_to_key(self, key_code):
map = {
0: Keys.NULL,
8: Keys.BACK_SPACE,
9: Keys.TAB,
10: Keys.RETURN,
13: Keys.ENTER,
24: Keys.CANCEL,
27: Keys.ESCAPE,
32: Keys.SPACE,
42: Keys.MULTIPLY,
43: Keys.ADD,
44: Keys.SEPARATOR,
45: Keys.SUBTRACT,
56: Keys.DECIMAL,
57: Keys.DIVIDE,
59: Keys.SEMICOLON,
61: Keys.EQUALS,
127: Keys.DELETE
}
key = map.get(key_code)
if key is None:
key = chr(key_code)
return key
def _map_named_key_code_to_special_key(self, key_name):
try:
return getattr(Keys, key_name)
except AttributeError:
message = "Unknown key named '%s'." % (key_name)
self._debug(message)
raise ValueError(message)
def _parse_attribute_locator(self, attribute_locator):
parts = attribute_locator.rpartition('@')
if len(parts[0]) == 0:
raise ValueError("Attribute locator '%s' does not contain an element locator." % (attribute_locator))
if len(parts[2]) == 0:
raise ValueError("Attribute locator '%s' does not contain an attribute name." % (attribute_locator))
return (parts[0], parts[2])
def _is_element_present(self, locator, tag=None):
return (self._element_find(locator, True, False, tag=tag) is not None)
def _page_contains(self, text):
browser = self._current_browser()
browser.switch_to_default_content()
if self._is_text_present(text):
return True
subframes = self._element_find("xpath=//frame|//iframe", False, False)
self._debug('Current frame has %d subframes' % len(subframes))
for frame in subframes:
browser.switch_to_frame(frame)
found_text = self._is_text_present(text)
browser.switch_to_default_content()
if found_text:
return True
return False
def _page_should_contain_element(self, locator, tag, message, loglevel):
element_name = tag if tag is not None else 'element'
if not self._is_element_present(locator, tag):
if not message:
message = "Page should have contained %s '%s' but did not"\
% (element_name, locator)
self.log_source(loglevel)
raise AssertionError(message)
self._info("Current page contains %s '%s'." % (element_name, locator))
def _page_should_not_contain_element(self, locator, tag, message, loglevel):
element_name = tag if tag is not None else 'element'
if self._is_element_present(locator, tag):
if not message:
message = "Page should not have contained %s '%s'"\
% (element_name, locator)
self.log_source(loglevel)
raise AssertionError(message)
self._info("Current page does not contain %s '%s'."
% (element_name, locator)) | /robotframework-weblibrary-2.0.1.zip/robotframework-weblibrary-2.0.1/src/Selenium2Library/keywords/_element.py | 0.749454 | 0.243339 | _element.py | pypi |
import robot
import os, errno
from Selenium2Library import utils
from keywordgroup import KeywordGroup
class _ScreenshotKeywords(KeywordGroup):
def __init__(self):
self._screenshot_index = 0
self._gif_index=0
self._screenshot_path_stack = []
self.screenshot_root_directory = None
# Public
def set_screenshot_directory(self, path, persist=False):
"""Sets the root output directory for captured screenshots.
``path`` argument specifies the absolute path where the screenshots should
be written to. If the specified ``path`` does not exist, it will be created.
Setting ``persist`` specifies that the given ``path`` should
be used for the rest of the test execution, otherwise the path will be restored
at the end of the currently executing scope.
"""
path = os.path.abspath(path)
self._create_directory(path)
if persist is False:
self._screenshot_path_stack.append(self.screenshot_root_directory)
# Restore after current scope ends
utils.events.on('scope_end', 'current', self._restore_screenshot_directory)
self.screenshot_root_directory = path
def capture_page_screenshot(self, filename=None):
"""Takes a screenshot of the current page and embeds it into the log.
`filename` argument specifies the name of the file to write the
screenshot into. If no `filename` is given, the screenshot is saved into file
`selenium-screenshot-<counter>.png` under the directory where
the Robot Framework log file is written into. The `filename` is
also considered relative to the same directory, if it is not
given in absolute format. If an absolute or relative path is given
but the path does not exist it will be created.
`css` can be used to modify how the screenshot is taken. By default
the bakground color is changed to avoid possible problems with
background leaking when the page layout is somehow broken.
"""
path, link = self._get_screenshot_paths(filename)
self._create_directory(path)
if hasattr(self._current_browser(), 'get_screenshot_as_file'):
if not self._current_browser().get_screenshot_as_file(path):
raise RuntimeError('Failed to save screenshot ' + filename)
else:
if not self._current_browser().save_screenshot(path):
raise RuntimeError('Failed to save screenshot ' + filename)
# Image is shown on its own row and thus prev row is closed on purpose
self._html('</td></tr><tr><td colspan="3"><a href="%s">'
'<img src="%s" width="800px"></a>' % (link, link))
def capture_page_screenshot_without_html_log(self, filename=None):
"""Takes a screenshot of the current page and >do not< embeds it into the log.
`filename` argument specifies the name of the file to write the
screenshot into. If no `filename` is given, the screenshot is saved into file
`appium-screenshot-<counter>.png` under the directory where
the Robot Framework log file is written into. The `filename` is
also considered relative to the same directory, if it is not
given in absolute format.
`css` can be used to modify how the screenshot is taken. By default
the bakground color is changed to avoid possible problems with
background leaking when the page layout is somehow broken.
"""
path, link = self._get_gif_screenshot_paths(filename)
if hasattr(self._current_browser(), 'get_screenshot_as_file'):
self._current_browser().get_screenshot_as_file(path)
else:
self._current_browser().save_screenshot(path)
# Private
def _create_directory(self, path):
target_dir = os.path.dirname(path)
if not os.path.exists(target_dir):
try:
os.makedirs(target_dir)
except OSError as exc:
if exc.errno == errno.EEXIST and os.path.isdir(target_dir):
pass
else:
raise
def _get_screenshot_directory(self):
# Use screenshot root directory if set
if self.screenshot_root_directory is not None:
return self.screenshot_root_directory
# Otherwise use RF's log directory
return self._get_log_dir()
# should only be called by set_screenshot_directory
def _restore_screenshot_directory(self):
self.screenshot_root_directory = self._screenshot_path_stack.pop()
def _get_screenshot_paths(self, filename):
if not filename:
self._screenshot_index += 1
filename = 'selenium-screenshot-%d.png' % self._screenshot_index
else:
filename = filename.replace('/', os.sep)
screenshotDir = self._get_screenshot_directory()
logDir = self._get_log_dir()
path = os.path.join(screenshotDir, filename)
link = robot.utils.get_link_path(path, logDir)
return path, link
def _get_gif_screenshot_paths(self, filename):
if not filename:
self._gif_index += 1
filename = 'web-gif-%d.png' % self._gif_index
else:
filename = filename.replace('/', os.sep)
logdir = self._get_log_dir()
path = os.path.join(logdir, filename)
link = robot.utils.get_link_path(path, logdir)
return path, link | /robotframework-weblibrary-2.0.1.zip/robotframework-weblibrary-2.0.1/src/Selenium2Library/keywords/_screenshot.py | 0.507568 | 0.208874 | _screenshot.py | pypi |
import os
from keywordgroup import KeywordGroup
from selenium.common.exceptions import WebDriverException
class _FormElementKeywords(KeywordGroup):
# Public, form
def submit_form(self, locator=None):
"""Submits a form identified by `locator`.
If `locator` is empty, first form in the page will be submitted.
Key attributes for forms are `id` and `name`. See `introduction` for
details about locating elements.
"""
self._info("Submitting form '%s'." % locator)
if not locator:
locator = 'xpath=//form'
element = self._element_find(locator, True, True, 'form')
element.submit()
# Public, checkboxes
def checkbox_should_be_selected(self, locator):
"""Verifies checkbox identified by `locator` is selected/checked.
Key attributes for checkboxes are `id` and `name`. See `introduction`
for details about locating elements.
"""
self._info("Verifying checkbox '%s' is selected." % locator)
element = self._get_checkbox(locator)
if not element.is_selected():
raise AssertionError("Checkbox '%s' should have been selected "
"but was not" % locator)
def checkbox_should_not_be_selected(self, locator):
"""Verifies checkbox identified by `locator` is not selected/checked.
Key attributes for checkboxes are `id` and `name`. See `introduction`
for details about locating elements.
"""
self._info("Verifying checkbox '%s' is not selected." % locator)
element = self._get_checkbox(locator)
if element.is_selected():
raise AssertionError("Checkbox '%s' should not have been selected"
% locator)
def page_should_contain_checkbox(self, locator, message='', loglevel='INFO'):
"""Verifies checkbox identified by `locator` is found from current page.
See `Page Should Contain Element` for explanation about `message` and
`loglevel` arguments.
Key attributes for checkboxes are `id` and `name`. See `introduction`
for details about locating elements.
"""
self._page_should_contain_element(locator, 'checkbox', message, loglevel)
def page_should_not_contain_checkbox(self, locator, message='', loglevel='INFO'):
"""Verifies checkbox identified by `locator` is not found from current page.
See `Page Should Contain Element` for explanation about `message` and
`loglevel` arguments.
Key attributes for checkboxes are `id` and `name`. See `introduction`
for details about locating elements.
"""
self._page_should_not_contain_element(locator, 'checkbox', message, loglevel)
def select_checkbox(self, locator):
"""Selects checkbox identified by `locator`.
Does nothing if checkbox is already selected. Key attributes for
checkboxes are `id` and `name`. See `introduction` for details about
locating elements.
"""
self._info("Selecting checkbox '%s'." % locator)
element = self._get_checkbox(locator)
if not element.is_selected():
element.click()
def unselect_checkbox(self, locator):
"""Removes selection of checkbox identified by `locator`.
Does nothing if the checkbox is not checked. Key attributes for
checkboxes are `id` and `name`. See `introduction` for details about
locating elements.
"""
self._info("Unselecting checkbox '%s'." % locator)
element = self._get_checkbox(locator)
if element.is_selected():
element.click()
# Public, radio buttons
def page_should_contain_radio_button(self, locator, message='', loglevel='INFO'):
"""Verifies radio button identified by `locator` is found from current page.
See `Page Should Contain Element` for explanation about `message` and
`loglevel` arguments.
Key attributes for radio buttons are `id`, `name` and `value`. See
`introduction` for details about locating elements.
"""
self._page_should_contain_element(locator, 'radio button', message, loglevel)
def page_should_not_contain_radio_button(self, locator, message='', loglevel='INFO'):
"""Verifies radio button identified by `locator` is not found from current page.
See `Page Should Contain Element` for explanation about `message` and
`loglevel` arguments.
Key attributes for radio buttons are `id`, `name` and `value`. See
`introduction` for details about locating elements.
"""
self._page_should_not_contain_element(locator, 'radio button', message, loglevel)
def radio_button_should_be_set_to(self, group_name, value):
"""Verifies radio button group identified by `group_name` has its selection set to `value`.
See `Select Radio Button` for information about how radio buttons are
located.
"""
self._info("Verifying radio button '%s' has selection '%s'." \
% (group_name, value))
elements = self._get_radio_buttons(group_name)
actual_value = self._get_value_from_radio_buttons(elements)
if actual_value is None or actual_value != value:
raise AssertionError("Selection of radio button '%s' should have "
"been '%s' but was '%s'"
% (group_name, value, actual_value))
def radio_button_should_not_be_selected(self, group_name):
"""Verifies radio button group identified by `group_name` has no selection.
See `Select Radio Button` for information about how radio buttons are
located.
"""
self._info("Verifying radio button '%s' has no selection." % group_name)
elements = self._get_radio_buttons(group_name)
actual_value = self._get_value_from_radio_buttons(elements)
if actual_value is not None:
raise AssertionError("Radio button group '%s' should not have had "
"selection, but '%s' was selected"
% (group_name, actual_value))
def select_radio_button(self, group_name, value):
"""Sets selection of radio button group identified by `group_name` to `value`.
The radio button to be selected is located by two arguments:
- `group_name` is used as the name of the radio input
- `value` is used for the value attribute or for the id attribute
The XPath used to locate the correct radio button then looks like this:
//input[@type='radio' and @name='group_name' and (@value='value' or @id='value')]
Examples:
| Select Radio Button | size | XL | # Matches HTML like <input type="radio" name="size" value="XL">XL</input> |
| Select Radio Button | size | sizeXL | # Matches HTML like <input type="radio" name="size" value="XL" id="sizeXL">XL</input> |
"""
self._info("Selecting '%s' from radio button '%s'." % (value, group_name))
element = self._get_radio_button_with_value(group_name, value)
if not element.is_selected():
element.click()
# Public, text fields
def choose_file(self, locator, file_path):
"""Inputs the `file_path` into file input field found by `locator`.
This keyword is most often used to input files into upload forms.
The file specified with `file_path` must be available on the same host
where the Selenium Server is running.
Example:
| Choose File | my_upload_field | /home/user/files/trades.csv |
"""
if not os.path.isfile(file_path):
raise AssertionError("File '%s' does not exist on the local file system"
% file_path)
self._element_find(locator, True, True).send_keys(file_path)
def input_password(self, locator, text):
"""Types the given password into text field identified by `locator`.
Difference between this keyword and `Input Text` is that this keyword
does not log the given password. See `introduction` for details about
locating elements.
"""
self._info("Typing password into text field '%s'" % locator)
self._input_text_into_text_field(locator, text)
def input_text(self, locator, text):
"""Types the given `text` into text field identified by `locator`.
See `introduction` for details about locating elements.
"""
self._info("Typing text '%s' into text field '%s'" % (text, locator))
self._input_text_into_text_field(locator, text)
def input_text_into_prompt(self, text):
"""Types the given `text` into alert box. """
alert = None
try:
alert = self._current_browser().switch_to_alert()
alert.send_keys(text)
except WebDriverException:
raise RuntimeError('There were no alerts')
def page_should_contain_textfield(self, locator, message='', loglevel='INFO'):
"""Verifies text field identified by `locator` is found from current page.
See `Page Should Contain Element` for explanation about `message` and
`loglevel` arguments.
Key attributes for text fields are `id` and `name`. See `introduction`
for details about locating elements.
"""
self._page_should_contain_element(locator, 'text field', message, loglevel)
def page_should_not_contain_textfield(self, locator, message='', loglevel='INFO'):
"""Verifies text field identified by `locator` is not found from current page.
See `Page Should Contain Element` for explanation about `message` and
`loglevel` arguments.
Key attributes for text fields are `id` and `name`. See `introduction`
for details about locating elements.
"""
self._page_should_not_contain_element(locator, 'text field', message, loglevel)
def textfield_should_contain(self, locator, expected, message=''):
"""Verifies text field identified by `locator` contains text `expected`.
`message` can be used to override default error message.
Key attributes for text fields are `id` and `name`. See `introduction`
for details about locating elements.
"""
actual = self._get_value(locator, 'text field')
if not expected in actual:
if not message:
message = "Text field '%s' should have contained text '%s' "\
"but it contained '%s'" % (locator, expected, actual)
raise AssertionError(message)
self._info("Text field '%s' contains text '%s'." % (locator, expected))
def textfield_value_should_be(self, locator, expected, message=''):
"""Verifies the value in text field identified by `locator` is exactly `expected`.
`message` can be used to override default error message.
Key attributes for text fields are `id` and `name`. See `introduction`
for details about locating elements.
"""
element = self._element_find(locator, True, False, 'text field')
if element is None: element = self._element_find(locator, True, False, 'file upload')
actual = element.get_attribute('value') if element is not None else None
if actual != expected:
if not message:
message = "Value of text field '%s' should have been '%s' "\
"but was '%s'" % (locator, expected, actual)
raise AssertionError(message)
self._info("Content of text field '%s' is '%s'." % (locator, expected))
def textarea_should_contain(self, locator, expected, message=''):
"""Verifies text area identified by `locator` contains text `expected`.
`message` can be used to override default error message.
Key attributes for text areas are `id` and `name`. See `introduction`
for details about locating elements.
"""
actual = self._get_value(locator, 'text area')
if actual is not None:
if not expected in actual:
if not message:
message = "Text field '%s' should have contained text '%s' "\
"but it contained '%s'" % (locator, expected, actual)
raise AssertionError(message)
else:
raise ValueError("Element locator '" + locator + "' did not match any elements.")
self._info("Text area '%s' contains text '%s'." % (locator, expected))
def textarea_value_should_be(self, locator, expected, message=''):
"""Verifies the value in text area identified by `locator` is exactly `expected`.
`message` can be used to override default error message.
Key attributes for text areas are `id` and `name`. See `introduction`
for details about locating elements.
"""
actual = self._get_value(locator, 'text area')
if actual is not None:
if expected!=actual:
if not message:
message = "Text field '%s' should have contained text '%s' "\
"but it contained '%s'" % (locator, expected, actual)
raise AssertionError(message)
else:
raise ValueError("Element locator '" + locator + "' did not match any elements.")
self._info("Content of text area '%s' is '%s'." % (locator, expected))
# Public, buttons
def click_button(self, locator):
"""Clicks a button identified by `locator`.
Key attributes for buttons are `id`, `name` and `value`. See
`introduction` for details about locating elements.
"""
self._info("Clicking button '%s'." % locator)
element = self._element_find(locator, True, False, 'input')
if element is None:
element = self._element_find(locator, True, True, 'button')
element.click()
def page_should_contain_button(self, locator, message='', loglevel='INFO'):
"""Verifies button identified by `locator` is found from current page.
This keyword searches for buttons created with either `input` or `button` tag.
See `Page Should Contain Element` for explanation about `message` and
`loglevel` arguments.
Key attributes for buttons are `id`, `name` and `value`. See
`introduction` for details about locating elements.
"""
try:
self._page_should_contain_element(locator, 'input', message, loglevel)
except AssertionError:
self._page_should_contain_element(locator, 'button', message, loglevel)
def page_should_not_contain_button(self, locator, message='', loglevel='INFO'):
"""Verifies button identified by `locator` is not found from current page.
This keyword searches for buttons created with either `input` or `button` tag.
See `Page Should Contain Element` for explanation about `message` and
`loglevel` arguments.
Key attributes for buttons are `id`, `name` and `value`. See
`introduction` for details about locating elements.
"""
self._page_should_not_contain_element(locator, 'button', message, loglevel)
self._page_should_not_contain_element(locator, 'input', message, loglevel)
# Private
def _get_checkbox(self, locator):
return self._element_find(locator, True, True, tag='input')
def _get_radio_buttons(self, group_name):
xpath = "xpath=//input[@type='radio' and @name='%s']" % group_name
self._debug('Radio group locator: ' + xpath)
return self._element_find(xpath, False, True)
def _get_radio_button_with_value(self, group_name, value):
xpath = "xpath=//input[@type='radio' and @name='%s' and (@value='%s' or @id='%s')]" \
% (group_name, value, value)
self._debug('Radio group locator: ' + xpath)
return self._element_find(xpath, True, True)
def _get_value_from_radio_buttons(self, elements):
for element in elements:
if element.is_selected():
return element.get_attribute('value')
return None
def _input_text_into_text_field(self, locator, text):
element = self._element_find(locator, True, True)
element.clear()
element.send_keys(text)
def _is_form_element(self, element):
if element is None:
return False
tag = element.tag_name.lower()
return tag == 'input' or tag == 'select' or tag == 'textarea' or tag == 'button' or tag == 'option' | /robotframework-weblibrary-2.0.1.zip/robotframework-weblibrary-2.0.1/src/Selenium2Library/keywords/_formelement.py | 0.657758 | 0.188866 | _formelement.py | pypi |
import os
import sys
from Selenium2Library.locators import TableElementFinder
from keywordgroup import KeywordGroup
class _TableElementKeywords(KeywordGroup):
def __init__(self):
self._table_element_finder = TableElementFinder()
# Public
def get_table_cell(self, table_locator, row, column, loglevel='INFO'):
"""Returns the content from a table cell.
Row and column number start from 1. Header and footer rows are
included in the count. A negative row or column number can be used
to get rows counting from the end (end: -1). Cell content from header
or footer rows can be obtained with this keyword. To understand how
tables are identified, please take a look at the `introduction`.
See `Page Should Contain` for explanation about `loglevel` argument.
"""
row = int(row)
row_index = row
if row > 0: row_index = row - 1
column = int(column)
column_index = column
if column > 0: column_index = column - 1
table = self._table_element_finder.find(self._current_browser(), table_locator)
if table is not None:
rows = table.find_elements_by_xpath("./thead/tr")
if row_index >= len(rows) or row_index < 0:
rows.extend(table.find_elements_by_xpath("./tbody/tr"))
if row_index >= len(rows) or row_index < 0:
rows.extend(table.find_elements_by_xpath("./tfoot/tr"))
if row_index < len(rows):
columns = rows[row_index].find_elements_by_tag_name('th')
if column_index >= len(columns) or column_index < 0:
columns.extend(rows[row_index].find_elements_by_tag_name('td'))
if column_index < len(columns):
return columns[column_index].text
self.log_source(loglevel)
raise AssertionError("Cell in table %s in row #%s and column #%s could not be found."
% (table_locator, str(row), str(column)))
def table_cell_should_contain(self, table_locator, row, column, expected, loglevel='INFO'):
"""Verifies that a certain cell in a table contains `expected`.
Row and column number start from 1. This keyword passes if the
specified cell contains the given content. If you want to test
that the cell content matches exactly, or that it e.g. starts
with some text, use `Get Table Cell` keyword in combination
with built-in keywords such as `Should Be Equal` or `Should
Start With`.
To understand how tables are identified, please take a look at
the `introduction`.
See `Page Should Contain` for explanation about `loglevel` argument.
"""
message = ("Cell in table '%s' in row #%s and column #%s "
"should have contained text '%s'."
% (table_locator, row, column, expected))
try:
content = self.get_table_cell(table_locator, row, column, loglevel='NONE')
except AssertionError, err:
self._info(err)
self.log_source(loglevel)
raise AssertionError(message)
self._info("Cell contains %s." % (content))
if expected not in content:
self.log_source(loglevel)
raise AssertionError(message)
def table_column_should_contain(self, table_locator, col, expected, loglevel='INFO'):
"""Verifies that a specific column contains `expected`.
The first leftmost column is column number 1. A negative column
number can be used to get column counting from the end of the row (end: -1).
If the table contains cells that span multiple columns, those merged cells
count as a single column. For example both tests below work,
if in one row columns A and B are merged with colspan="2", and
the logical third column contains "C".
Example:
| Table Column Should Contain | tableId | 3 | C |
| Table Column Should Contain | tableId | 2 | C |
To understand how tables are identified, please take a look at
the `introduction`.
See `Page Should Contain Element` for explanation about
`loglevel` argument.
"""
element = self._table_element_finder.find_by_col(self._current_browser(), table_locator, col, expected)
if element is None:
self.log_source(loglevel)
raise AssertionError("Column #%s in table identified by '%s' "
"should have contained text '%s'."
% (col, table_locator, expected))
def table_footer_should_contain(self, table_locator, expected, loglevel='INFO'):
"""Verifies that the table footer contains `expected`.
With table footer can be described as any <td>-element that is
child of a <tfoot>-element. To understand how tables are
identified, please take a look at the `introduction`.
See `Page Should Contain Element` for explanation about
`loglevel` argument.
"""
element = self._table_element_finder.find_by_footer(self._current_browser(), table_locator, expected)
if element is None:
self.log_source(loglevel)
raise AssertionError("Footer in table identified by '%s' should have contained "
"text '%s'." % (table_locator, expected))
def table_header_should_contain(self, table_locator, expected, loglevel='INFO'):
"""Verifies that the table header, i.e. any <th>...</th> element, contains `expected`.
To understand how tables are identified, please take a look at
the `introduction`.
See `Page Should Contain Element` for explanation about
`loglevel` argument.
"""
element = self._table_element_finder.find_by_header(self._current_browser(), table_locator, expected)
if element is None:
self.log_source(loglevel)
raise AssertionError("Header in table identified by '%s' should have contained "
"text '%s'." % (table_locator, expected))
def table_row_should_contain(self, table_locator, row, expected, loglevel='INFO'):
"""Verifies that a specific table row contains `expected`.
The uppermost row is row number 1. A negative column
number can be used to get column counting from the end of the row
(end: -1). For tables that are structured with thead, tbody and tfoot,
only the tbody section is searched. Please use `Table Header Should Contain`
or `Table Footer Should Contain` for tests against the header or
footer content.
If the table contains cells that span multiple rows, a match
only occurs for the uppermost row of those merged cells. To
understand how tables are identified, please take a look at
the `introduction`.
See `Page Should Contain Element` for explanation about `loglevel` argument.
"""
element = self._table_element_finder.find_by_row(self._current_browser(), table_locator, row, expected)
if element is None:
self.log_source(loglevel)
raise AssertionError("Row #%s in table identified by '%s' should have contained "
"text '%s'." % (row, table_locator, expected))
def table_should_contain(self, table_locator, expected, loglevel='INFO'):
"""Verifies that `expected` can be found somewhere in the table.
To understand how tables are identified, please take a look at
the `introduction`.
See `Page Should Contain Element` for explanation about
`loglevel` argument.
"""
element = self._table_element_finder.find_by_content(self._current_browser(), table_locator, expected)
if element is None:
self.log_source(loglevel)
raise AssertionError("Table identified by '%s' should have contained text '%s'." \
% (table_locator, expected)) | /robotframework-weblibrary-2.0.1.zip/robotframework-weblibrary-2.0.1/src/Selenium2Library/keywords/_tableelement.py | 0.605216 | 0.661554 | _tableelement.py | pypi |
from selenium.webdriver.support.ui import Select
from keywordgroup import KeywordGroup
class _SelectElementKeywords(KeywordGroup):
# Public
def get_list_items(self, locator):
"""Returns the values in the select list identified by `locator`.
Select list keywords work on both lists and combo boxes. Key attributes for
select lists are `id` and `name`. See `introduction` for details about
locating elements.
"""
select, options = self._get_select_list_options(locator)
return self._get_labels_for_options(options)
def get_selected_list_label(self, locator):
"""Returns the visible label of the selected element from the select list identified by `locator`.
Select list keywords work on both lists and combo boxes. Key attributes for
select lists are `id` and `name`. See `introduction` for details about
locating elements.
"""
select = self._get_select_list(locator)
return select.first_selected_option.text
def get_selected_list_labels(self, locator):
"""Returns the visible labels of selected elements (as a list) from the select list identified by `locator`.
Fails if there is no selection.
Select list keywords work on both lists and combo boxes. Key attributes for
select lists are `id` and `name`. See `introduction` for details about
locating elements.
"""
select, options = self._get_select_list_options_selected(locator)
if len(options) == 0:
raise ValueError("Select list with locator '%s' does not have any selected values")
return self._get_labels_for_options(options)
def get_selected_list_value(self, locator):
"""Returns the value of the selected element from the select list identified by `locator`.
Return value is read from `value` attribute of the selected element.
Select list keywords work on both lists and combo boxes. Key attributes for
select lists are `id` and `name`. See `introduction` for details about
locating elements.
"""
select = self._get_select_list(locator)
return select.first_selected_option.get_attribute('value')
def get_selected_list_values(self, locator):
"""Returns the values of selected elements (as a list) from the select list identified by `locator`.
Fails if there is no selection.
Select list keywords work on both lists and combo boxes. Key attributes for
select lists are `id` and `name`. See `introduction` for details about
locating elements.
"""
select, options = self._get_select_list_options_selected(locator)
if len(options) == 0:
raise ValueError("Select list with locator '%s' does not have any selected values")
return self._get_values_for_options(options)
def list_selection_should_be(self, locator, *items):
"""Verifies the selection of select list identified by `locator` is exactly `*items`.
If you want to test that no option is selected, simply give no `items`.
Select list keywords work on both lists and combo boxes. Key attributes for
select lists are `id` and `name`. See `introduction` for details about
locating elements.
"""
items_str = items and "option(s) [ %s ]" % " | ".join(items) or "no options"
self._info("Verifying list '%s' has %s selected." % (locator, items_str))
items = list(items)
self.page_should_contain_list(locator)
select, options = self._get_select_list_options_selected(locator)
if not items and len(options) == 0:
return
selected_values = self._get_values_for_options(options)
selected_labels = self._get_labels_for_options(options)
err = "List '%s' should have had selection [ %s ] but it was [ %s ]" \
% (locator, ' | '.join(items), ' | '.join(selected_labels))
for item in items:
if item not in selected_values + selected_labels:
raise AssertionError(err)
for selected_value, selected_label in zip(selected_values, selected_labels):
if selected_value not in items and selected_label not in items:
raise AssertionError(err)
def list_should_have_no_selections(self, locator):
"""Verifies select list identified by `locator` has no selections.
Select list keywords work on both lists and combo boxes. Key attributes for
select lists are `id` and `name`. See `introduction` for details about
locating elements.
"""
self._info("Verifying list '%s' has no selection." % locator)
select, options = self._get_select_list_options_selected(locator)
if options:
selected_labels = self._get_labels_for_options(options)
items_str = " | ".join(selected_labels)
raise AssertionError("List '%s' should have had no selection "
"(selection was [ %s ])" % (locator, items_str))
def page_should_contain_list(self, locator, message='', loglevel='INFO'):
"""Verifies select list identified by `locator` is found from current page.
See `Page Should Contain Element` for explanation about `message` and
`loglevel` arguments.
Key attributes for lists are `id` and `name`. See `introduction` for
details about locating elements.
"""
self._page_should_contain_element(locator, 'list', message, loglevel)
def page_should_not_contain_list(self, locator, message='', loglevel='INFO'):
"""Verifies select list identified by `locator` is not found from current page.
See `Page Should Contain Element` for explanation about `message` and
`loglevel` arguments.
Key attributes for lists are `id` and `name`. See `introduction` for
details about locating elements.
"""
self._page_should_not_contain_element(locator, 'list', message, loglevel)
def select_all_from_list(self, locator):
"""Selects all values from multi-select list identified by `id`.
Key attributes for lists are `id` and `name`. See `introduction` for
details about locating elements.
"""
self._info("Selecting all options from list '%s'." % locator)
select = self._get_select_list(locator)
if not select.is_multiple:
raise RuntimeError("Keyword 'Select all from list' works only for multiselect lists.")
for i in range(len(select.options)):
select.select_by_index(i)
def select_from_list(self, locator, *items):
"""Selects `*items` from list identified by `locator`
If more than one value is given for a single-selection list, the last
value will be selected. If the target list is a multi-selection list,
and `*items` is an empty list, all values of the list will be selected.
*items try to select by value then by label.
It's faster to use 'by index/value/label' functions.
An exception is raised for a single-selection list if the last
value does not exist in the list and a warning for all other non-
existing items. For a multi-selection list, an exception is raised
for any and all non-existing values.
Select list keywords work on both lists and combo boxes. Key attributes for
select lists are `id` and `name`. See `introduction` for details about
locating elements.
"""
non_existing_items = []
items_str = items and "option(s) '%s'" % ", ".join(items) or "all options"
self._info("Selecting %s from list '%s'." % (items_str, locator))
select = self._get_select_list(locator)
if not items:
for i in range(len(select.options)):
select.select_by_index(i)
return
for item in items:
try:
select.select_by_value(item)
except:
try:
select.select_by_visible_text(item)
except:
non_existing_items = non_existing_items + [item]
continue
if any(non_existing_items):
if select.is_multiple:
raise ValueError("Options '%s' not in list '%s'." % (", ".join(non_existing_items), locator))
else:
if any (non_existing_items[:-1]):
items_str = non_existing_items[:-1] and "Option(s) '%s'" % ", ".join(non_existing_items[:-1])
self._warn("%s not found within list '%s'." % (items_str, locator))
if items and items[-1] in non_existing_items:
raise ValueError("Option '%s' not in list '%s'." % (items[-1], locator))
def select_from_list_by_index(self, locator, *indexes):
"""Selects `*indexes` from list identified by `locator`
Select list keywords work on both lists and combo boxes. Key attributes for
select lists are `id` and `name`. See `introduction` for details about
locating elements.
"""
if not indexes:
raise ValueError("No index given.")
items_str = "index(es) '%s'" % ", ".join(indexes)
self._info("Selecting %s from list '%s'." % (items_str, locator))
select = self._get_select_list(locator)
for index in indexes:
select.select_by_index(int(index))
def select_from_list_by_value(self, locator, *values):
"""Selects `*values` from list identified by `locator`
Select list keywords work on both lists and combo boxes. Key attributes for
select lists are `id` and `name`. See `introduction` for details about
locating elements.
"""
if not values:
raise ValueError("No value given.")
items_str = "value(s) '%s'" % ", ".join(values)
self._info("Selecting %s from list '%s'." % (items_str, locator))
select = self._get_select_list(locator)
for value in values:
select.select_by_value(value)
def select_from_list_by_label(self, locator, *labels):
"""Selects `*labels` from list identified by `locator`
Select list keywords work on both lists and combo boxes. Key attributes for
select lists are `id` and `name`. See `introduction` for details about
locating elements.
"""
if not labels:
raise ValueError("No value given.")
items_str = "label(s) '%s'" % ", ".join(labels)
self._info("Selecting %s from list '%s'." % (items_str, locator))
select = self._get_select_list(locator)
for label in labels:
select.select_by_visible_text(label)
def unselect_from_list(self, locator, *items):
"""Unselects given values from select list identified by locator.
As a special case, giving empty list as `*items` will remove all
selections.
*items try to unselect by value AND by label.
It's faster to use 'by index/value/label' functions.
Select list keywords work on both lists and combo boxes. Key attributes for
select lists are `id` and `name`. See `introduction` for details about
locating elements.
"""
items_str = items and "option(s) '%s'" % ", ".join(items) or "all options"
self._info("Unselecting %s from list '%s'." % (items_str, locator))
select = self._get_select_list(locator)
if not select.is_multiple:
raise RuntimeError("Keyword 'Unselect from list' works only for multiselect lists.")
if not items:
select.deselect_all()
return
select, options = self._get_select_list_options(select)
for item in items:
select.deselect_by_value(item)
select.deselect_by_visible_text(item)
def unselect_from_list_by_index(self, locator, *indexes):
"""Unselects `*indexes` from list identified by `locator`
Select list keywords work on both lists and combo boxes. Key attributes for
select lists are `id` and `name`. See `introduction` for details about
locating elements.
"""
if not indexes:
raise ValueError("No index given.")
items_str = "index(es) '%s'" % ", ".join(indexes)
self._info("Unselecting %s from list '%s'." % (items_str, locator))
select = self._get_select_list(locator)
if not select.is_multiple:
raise RuntimeError("Keyword 'Unselect from list' works only for multiselect lists.")
for index in indexes:
select.deselect_by_index(int(index))
def unselect_from_list_by_value(self, locator, *values):
"""Unselects `*values` from list identified by `locator`
Select list keywords work on both lists and combo boxes. Key attributes for
select lists are `id` and `name`. See `introduction` for details about
locating elements.
"""
if not values:
raise ValueError("No value given.")
items_str = "value(s) '%s'" % ", ".join(values)
self._info("Unselecting %s from list '%s'." % (items_str, locator))
select = self._get_select_list(locator)
if not select.is_multiple:
raise RuntimeError("Keyword 'Unselect from list' works only for multiselect lists.")
for value in values:
select.deselect_by_value(value)
def unselect_from_list_by_label(self, locator, *labels):
"""Unselects `*labels` from list identified by `locator`
Select list keywords work on both lists and combo boxes. Key attributes for
select lists are `id` and `name`. See `introduction` for details about
locating elements.
"""
if not labels:
raise ValueError("No value given.")
items_str = "label(s) '%s'" % ", ".join(labels)
self._info("Unselecting %s from list '%s'." % (items_str, locator))
select = self._get_select_list(locator)
if not select.is_multiple:
raise RuntimeError("Keyword 'Unselect from list' works only for multiselect lists.")
for label in labels:
select.deselect_by_visible_text(label)
# Private
def _get_labels_for_options(self, options):
labels = []
for option in options:
labels.append(option.text)
return labels
def _get_select_list(self, locator):
el = self._element_find(locator, True, True, 'select')
return Select(el)
def _get_select_list_options(self, select_list_or_locator):
if isinstance(select_list_or_locator, Select):
select = select_list_or_locator
else:
select = self._get_select_list(select_list_or_locator)
return select, select.options
def _get_select_list_options_selected(self, locator):
select = self._get_select_list(locator)
# TODO: Handle possible exception thrown by all_selected_options
return select, select.all_selected_options
def _get_values_for_options(self, options):
values = []
for option in options:
values.append(option.get_attribute('value'))
return values
def _is_multiselect_list(self, select):
multiple_value = select.get_attribute('multiple')
if multiple_value is not None and (multiple_value == 'true' or multiple_value == 'multiple'):
return True
return False
def _unselect_all_options_from_multi_select_list(self, select):
self._current_browser().execute_script("arguments[0].selectedIndex = -1;", select)
def _unselect_option_from_multi_select_list(self, select, options, index):
if options[index].is_selected():
options[index].click() | /robotframework-weblibrary-2.0.1.zip/robotframework-weblibrary-2.0.1/src/Selenium2Library/keywords/_selectelement.py | 0.732879 | 0.425784 | _selectelement.py | pypi |
from robot.libraries import BuiltIn
from keywordgroup import KeywordGroup
BUILTIN = BuiltIn.BuiltIn()
class _RunOnFailureKeywords(KeywordGroup):
def __init__(self):
self._run_on_failure_keyword = None
self._running_on_failure_routine = False
# Public
def register_keyword_to_run_on_failure(self, keyword):
"""Sets the keyword to execute when a Selenium2Library keyword fails.
`keyword_name` is the name of a keyword (from any available
libraries) that will be executed if a Selenium2Library keyword fails.
It is not possible to use a keyword that requires arguments.
Using the value "Nothing" will disable this feature altogether.
The initial keyword to use is set in `importing`, and the
keyword that is used by default is `Capture Page Screenshot`.
Taking a screenshot when something failed is a very useful
feature, but notice that it can slow down the execution.
This keyword returns the name of the previously registered
failure keyword. It can be used to restore the original
value later.
Example:
| Register Keyword To Run On Failure | Log Source | # Run `Log Source` on failure. |
| ${previous kw}= | Register Keyword To Run On Failure | Nothing | # Disables run-on-failure functionality and stores the previous kw name in a variable. |
| Register Keyword To Run On Failure | ${previous kw} | # Restore to the previous keyword. |
This run-on-failure functionality only works when running tests on Python/Jython 2.4
or newer and it does not work on IronPython at all.
"""
old_keyword = self._run_on_failure_keyword
old_keyword_text = old_keyword if old_keyword is not None else "No keyword"
new_keyword = keyword if keyword.strip().lower() != "nothing" else None
new_keyword_text = new_keyword if new_keyword is not None else "No keyword"
self._run_on_failure_keyword = new_keyword
self._info('%s will be run on failure.' % new_keyword_text)
return old_keyword_text
# Private
def _run_on_failure(self):
if self._run_on_failure_keyword is None:
return
if self._running_on_failure_routine:
return
self._running_on_failure_routine = True
try:
BUILTIN.run_keyword(self._run_on_failure_keyword)
except Exception, err:
self._run_on_failure_error(err)
finally:
self._running_on_failure_routine = False
def _run_on_failure_error(self, err):
err = "Keyword '%s' could not be run on failure: %s" % (self._run_on_failure_keyword, err)
if hasattr(self, '_warn'):
self._warn(err)
return
raise Exception(err) | /robotframework-weblibrary-2.0.1.zip/robotframework-weblibrary-2.0.1/src/Selenium2Library/keywords/_runonfailure.py | 0.788217 | 0.209652 | _runonfailure.py | pypi |
import time
import robot
from keywordgroup import KeywordGroup
class _WaitingKeywords(KeywordGroup):
# Public
def wait_for_condition(self, condition, timeout=None, error=None):
"""Waits until the given `condition` is true or `timeout` expires.
The `condition` can be arbitrary JavaScript expression but must contain a
return statement (with the value to be returned) at the end.
See `Execute JavaScript` for information about accessing the
actual contents of the window through JavaScript.
`error` can be used to override the default error message.
See `introduction` for more information about `timeout` and its
default value.
See also `Wait Until Page Contains`, `Wait Until Page Contains
Element`, `Wait Until Element Is Visible` and BuiltIn keyword
`Wait Until Keyword Succeeds`.
"""
if not error:
error = "Condition '%s' did not become true in <TIMEOUT>" % condition
self._wait_until(timeout, error,
lambda: self._current_browser().execute_script(condition) == True)
def wait_until_page_contains(self, text, timeout=None, error=None):
"""Waits until `text` appears on current page.
Fails if `timeout` expires before the text appears. See
`introduction` for more information about `timeout` and its
default value.
`error` can be used to override the default error message.
See also `Wait Until Page Contains Element`, `Wait For Condition`,
`Wait Until Element Is Visible` and BuiltIn keyword `Wait Until
Keyword Succeeds`.
"""
if not error:
error = "Text '%s' did not appear in <TIMEOUT>" % text
self._wait_until(timeout, error, self._is_text_present, text)
def wait_until_page_does_not_contain(self, text, timeout=None, error=None):
"""Waits until `text` disappears from current page.
Fails if `timeout` expires before the `text` disappears. See
`introduction` for more information about `timeout` and its
default value.
`error` can be used to override the default error message.
See also `Wait Until Page Contains`, `Wait For Condition`,
`Wait Until Element Is Visible` and BuiltIn keyword `Wait Until
Keyword Succeeds`.
"""
def check_present():
present = self._is_text_present(text)
if not present:
return
else:
return error or "Text '%s' did not disappear in %s" % (text, self._format_timeout(timeout))
self._wait_until_no_error(timeout, check_present)
def wait_until_page_contains_element(self, locator, timeout=None, error=None):
"""Waits until element specified with `locator` appears on current page.
Fails if `timeout` expires before the element appears. See
`introduction` for more information about `timeout` and its
default value.
`error` can be used to override the default error message.
See also `Wait Until Page Contains`, `Wait For Condition`,
`Wait Until Element Is Visible` and BuiltIn keyword `Wait Until
Keyword Succeeds`.
"""
if not error:
error = "Element '%s' did not appear in <TIMEOUT>" % locator
self._wait_until(timeout, error, self._is_element_present, locator)
def wait_until_page_does_not_contain_element(self, locator, timeout=None, error=None):
"""Waits until element specified with `locator` disappears from current page.
Fails if `timeout` expires before the element disappears. See
`introduction` for more information about `timeout` and its
default value.
`error` can be used to override the default error message.
See also `Wait Until Page Contains`, `Wait For Condition`,
`Wait Until Element Is Visible` and BuiltIn keyword `Wait Until
Keyword Succeeds`.
"""
def check_present():
present = self._is_element_present(locator)
if not present:
return
else:
return error or "Element '%s' did not disappear in %s" % (locator, self._format_timeout(timeout))
self._wait_until_no_error(timeout, check_present)
def wait_until_element_is_visible(self, locator, timeout=None, error=None):
"""Waits until element specified with `locator` is visible.
Fails if `timeout` expires before the element is visible. See
`introduction` for more information about `timeout` and its
default value.
`error` can be used to override the default error message.
See also `Wait Until Page Contains`, `Wait Until Page Contains
Element`, `Wait For Condition` and BuiltIn keyword `Wait Until Keyword
Succeeds`.
"""
def check_visibility():
visible = self._is_visible(locator)
if visible:
return
elif visible is None:
return error or "Element locator '%s' did not match any elements after %s" % (locator, self._format_timeout(timeout))
else:
return error or "Element '%s' was not visible in %s" % (locator, self._format_timeout(timeout))
self._wait_until_no_error(timeout, check_visibility)
def wait_until_element_is_not_visible(self, locator, timeout=None, error=None):
"""Waits until element specified with `locator` is not visible.
Fails if `timeout` expires before the element is not visible. See
`introduction` for more information about `timeout` and its
default value.
`error` can be used to override the default error message.
See also `Wait Until Page Contains`, `Wait Until Page Contains
Element`, `Wait For Condition` and BuiltIn keyword `Wait Until Keyword
Succeeds`.
"""
def check_hidden():
visible = self._is_visible(locator)
if not visible:
return
elif visible is None:
return error or "Element locator '%s' did not match any elements after %s" % (locator, self._format_timeout(timeout))
else:
return error or "Element '%s' was still visible in %s" % (locator, self._format_timeout(timeout))
self._wait_until_no_error(timeout, check_hidden)
def wait_until_element_is_enabled(self, locator, timeout=None, error=None):
"""Waits until element specified with `locator` is enabled.
Fails if `timeout` expires before the element is enabled. See
`introduction` for more information about `timeout` and its
default value.
`error` can be used to override the default error message.
See also `Wait Until Page Contains`, `Wait Until Page Contains
Element`, `Wait For Condition` and BuiltIn keyword `Wait Until Keyword
Succeeds`.
"""
def check_enabled():
element = self._element_find(locator, True, False)
if not element:
return error or "Element locator '%s' did not match any elements after %s" % (locator, self._format_timeout(timeout))
enabled = not element.get_attribute("disabled")
if enabled:
return
else:
return error or "Element '%s' was not enabled in %s" % (locator, self._format_timeout(timeout))
self._wait_until_no_error(timeout, check_enabled)
def wait_until_element_contains(self, locator, text, timeout=None, error=None):
"""Waits until given element contains `text`.
Fails if `timeout` expires before the text appears on given element. See
`introduction` for more information about `timeout` and its
default value.
`error` can be used to override the default error message.
See also `Wait Until Page Contains`, `Wait Until Page Contains Element`, `Wait For Condition`,
`Wait Until Element Is Visible` and BuiltIn keyword `Wait Until
Keyword Succeeds`.
"""
element = self._element_find(locator, True, True)
def check_text():
actual = element.text
if text in actual:
return
else:
return error or "Text '%s' did not appear in %s to element '%s'. " \
"Its text was '%s'." % (text, self._format_timeout(timeout), locator, actual)
self._wait_until_no_error(timeout, check_text)
def wait_until_element_does_not_contain(self, locator, text, timeout=None, error=None):
"""Waits until given element does not contain `text`.
Fails if `timeout` expires before the text disappears from given element. See
`introduction` for more information about `timeout` and its
default value.
`error` can be used to override the default error message.
See also `Wait Until Page Contains`, `Wait Until Page Contains Element`, `Wait For Condition`,
`Wait Until Element Is Visible` and BuiltIn keyword `Wait Until
Keyword Succeeds`.
"""
element = self._element_find(locator, True, True)
def check_text():
actual = element.text
if not text in actual:
return
else:
return error or "Text '%s' did not disappear in %s from element '%s'." % (text, self._format_timeout(timeout), locator)
self._wait_until_no_error(timeout, check_text)
# Private
def _wait_until(self, timeout, error, function, *args):
error = error.replace('<TIMEOUT>', self._format_timeout(timeout))
def wait_func():
return None if function(*args) else error
self._wait_until_no_error(timeout, wait_func)
def _wait_until_no_error(self, timeout, wait_func, *args):
timeout = robot.utils.timestr_to_secs(timeout) if timeout is not None else self._timeout_in_secs
maxtime = time.time() + timeout
while True:
timeout_error = wait_func(*args)
if not timeout_error: return
if time.time() > maxtime:
raise AssertionError(timeout_error)
time.sleep(0.2)
def _format_timeout(self, timeout):
timeout = robot.utils.timestr_to_secs(timeout) if timeout is not None else self._timeout_in_secs
return robot.utils.secs_to_timestr(timeout) | /robotframework-weblibrary-2.0.1.zip/robotframework-weblibrary-2.0.1/src/Selenium2Library/keywords/_waiting.py | 0.759671 | 0.296433 | _waiting.py | pypi |
from Selenium2Library import utils
from robot.api import logger
from robot.utils import NormalizedDict
class ElementFinder(object):
def __init__(self):
strategies = {
'identifier': self._find_by_identifier,
'id': self._find_by_id,
'name': self._find_by_name,
'xpath': self._find_by_xpath,
'dom': self._find_by_dom,
'link': self._find_by_link_text,
'partial link': self._find_by_partial_link_text,
'css': self._find_by_css_selector,
'jquery': self._find_by_sizzle_selector,
'sizzle': self._find_by_sizzle_selector,
'tag': self._find_by_tag_name,
'scLocator': self._find_by_sc_locator,
'default': self._find_by_default
}
self._strategies = NormalizedDict(initial=strategies, caseless=True, spaceless=True)
self._default_strategies = strategies.keys()
def find(self, browser, locator, tag=None):
assert browser is not None
assert locator is not None and len(locator) > 0
(prefix, criteria) = self._parse_locator(locator)
prefix = 'default' if prefix is None else prefix
strategy = self._strategies.get(prefix)
if strategy is None:
raise ValueError("Element locator with prefix '" + prefix + "' is not supported")
(tag, constraints) = self._get_tag_and_constraints(tag)
return strategy(browser, criteria, tag, constraints)
def register(self, strategy, persist):
if strategy.name in self._strategies:
raise AttributeError("The custom locator '" + strategy.name +
"' cannot be registered. A locator of that name already exists.")
self._strategies[strategy.name] = strategy.find
if not persist:
# Unregister after current scope ends
utils.events.on('scope_end', 'current', self.unregister, strategy.name)
def unregister(self, strategy_name):
if strategy_name in self._default_strategies:
raise AttributeError("Cannot unregister the default strategy '" + strategy_name + "'")
elif strategy_name not in self._strategies:
logger.info("Cannot unregister the non-registered strategy '" + strategy_name + "'")
else:
del self._strategies[strategy_name]
def has_strategy(self, strategy_name):
return strategy_name in self.strategies
# Strategy routines, private
def _find_by_identifier(self, browser, criteria, tag, constraints):
elements = self._normalize_result(browser.find_elements_by_id(criteria))
elements.extend(self._normalize_result(browser.find_elements_by_name(criteria)))
return self._filter_elements(elements, tag, constraints)
def _find_by_id(self, browser, criteria, tag, constraints):
return self._filter_elements(
browser.find_elements_by_id(criteria),
tag, constraints)
def _find_by_name(self, browser, criteria, tag, constraints):
return self._filter_elements(
browser.find_elements_by_name(criteria),
tag, constraints)
def _find_by_xpath(self, browser, criteria, tag, constraints):
return self._filter_elements(
browser.find_elements_by_xpath(criteria),
tag, constraints)
def _find_by_dom(self, browser, criteria, tag, constraints):
result = browser.execute_script("return %s;" % criteria)
if result is None:
return []
if not isinstance(result, list):
result = [result]
return self._filter_elements(result, tag, constraints)
def _find_by_sizzle_selector(self, browser, criteria, tag, constraints):
js = "return jQuery('%s').get();" % criteria.replace("'", "\\'")
return self._filter_elements(
browser.execute_script(js),
tag, constraints)
def _find_by_link_text(self, browser, criteria, tag, constraints):
return self._filter_elements(
browser.find_elements_by_link_text(criteria),
tag, constraints)
def _find_by_partial_link_text(self, browser, criteria, tag, constraints):
return self._filter_elements(
browser.find_elements_by_partial_link_text(criteria),
tag, constraints)
def _find_by_css_selector(self, browser, criteria, tag, constraints):
return self._filter_elements(
browser.find_elements_by_css_selector(criteria),
tag, constraints)
def _find_by_tag_name(self, browser, criteria, tag, constraints):
return self._filter_elements(
browser.find_elements_by_tag_name(criteria),
tag, constraints)
def _find_by_sc_locator(self, browser, criteria, tag, constraints):
js = "return isc.AutoTest.getElement('%s')" % criteria.replace("'", "\\'")
return self._filter_elements([browser.execute_script(js)], tag, constraints)
def _find_by_default(self, browser, criteria, tag, constraints):
if criteria.startswith('//'):
return self._find_by_xpath(browser, criteria, tag, constraints)
return self._find_by_key_attrs(browser, criteria, tag, constraints)
def _find_by_key_attrs(self, browser, criteria, tag, constraints):
key_attrs = self._key_attrs.get(None)
if tag is not None:
key_attrs = self._key_attrs.get(tag, key_attrs)
xpath_criteria = utils.escape_xpath_value(criteria)
xpath_tag = tag if tag is not None else '*'
xpath_constraints = ["@%s='%s'" % (name, constraints[name]) for name in constraints]
xpath_searchers = ["%s=%s" % (attr, xpath_criteria) for attr in key_attrs]
xpath_searchers.extend(
self._get_attrs_with_url(key_attrs, criteria, browser))
xpath = "//%s[%s(%s)]" % (
xpath_tag,
' and '.join(xpath_constraints) + ' and ' if len(xpath_constraints) > 0 else '',
' or '.join(xpath_searchers))
return self._normalize_result(browser.find_elements_by_xpath(xpath))
# Private
_key_attrs = {
None: ['@id', '@name'],
'a': ['@id', '@name', '@href', 'normalize-space(descendant-or-self::text())'],
'img': ['@id', '@name', '@src', '@alt'],
'input': ['@id', '@name', '@value', '@src'],
'button': ['@id', '@name', '@value', 'normalize-space(descendant-or-self::text())']
}
def _get_tag_and_constraints(self, tag):
if tag is None: return None, {}
tag = tag.lower()
constraints = {}
if tag == 'link':
tag = 'a'
if tag == 'partial link':
tag = 'a'
elif tag == 'image':
tag = 'img'
elif tag == 'list':
tag = 'select'
elif tag == 'radio button':
tag = 'input'
constraints['type'] = 'radio'
elif tag == 'checkbox':
tag = 'input'
constraints['type'] = 'checkbox'
elif tag == 'text field':
tag = 'input'
constraints['type'] = 'text'
elif tag == 'file upload':
tag = 'input'
constraints['type'] = 'file'
elif tag == 'text area':
tag = 'textarea'
return tag, constraints
def _element_matches(self, element, tag, constraints):
if not element.tag_name.lower() == tag:
return False
for name in constraints:
if not element.get_attribute(name) == constraints[name]:
return False
return True
def _filter_elements(self, elements, tag, constraints):
elements = self._normalize_result(elements)
if tag is None: return elements
return filter(
lambda element: self._element_matches(element, tag, constraints),
elements)
def _get_attrs_with_url(self, key_attrs, criteria, browser):
attrs = []
url = None
xpath_url = None
for attr in ['@src', '@href']:
if attr in key_attrs:
if url is None or xpath_url is None:
url = self._get_base_url(browser) + "/" + criteria
xpath_url = utils.escape_xpath_value(url)
attrs.append("%s=%s" % (attr, xpath_url))
return attrs
def _get_base_url(self, browser):
url = browser.get_current_url()
if '/' in url:
url = '/'.join(url.split('/')[:-1])
return url
def _parse_locator(self, locator):
prefix = None
criteria = locator
if not locator.startswith('//'):
locator_parts = locator.partition('=')
if len(locator_parts[1]) > 0:
prefix = locator_parts[0]
criteria = locator_parts[2].strip()
return (prefix, criteria)
def _normalize_result(self, elements):
if not isinstance(elements, list):
logger.debug("WebDriver find returned %s" % elements)
return []
return elements | /robotframework-weblibrary-2.0.1.zip/robotframework-weblibrary-2.0.1/src/Selenium2Library/locators/elementfinder.py | 0.713232 | 0.152821 | elementfinder.py | pypi |
from .version import VERSION
from selenium import webdriver
from robot.libraries.BuiltIn import BuiltIn
from robot.api.deco import keyword
from robot.api import logger
class WebScreens():
"""
WebScreens library helps in simulating different web screen resolutions by using selenium internally
Available Resolutions:
| = Resolution Tye = | = Values = |
| Desktop | 2560*1440, 1920*1200, 1680*1050, 1600*1200, 1400*900, 1366*768, 1280*800, 1280*768, 1152*864, 1024*768, 800*600 |
| Tablet | 768*1024, 1024*1366, 800*1280, 600*960 |
| Mobile | 360*598, 412*684, 414*736, 375*667, 320*568, 320*480 |
"""
ROBOT_LIBRARY_SCOPE = 'GLOBAL'
ROBOT_LIBRARY_VERSION = VERSION
def __init__(self):
self.webdriver = None
self.DESKTOP_RESOLUTIONS = ["2560*1440", "1920*1200", "1680*1050", "1600*1200", "1400*900",
"1366*768", "1280*800", "1280*768", "1152*864", "1024*768", "800*600"]
self.TABLET_RESOLUTIONS = ["768*1024", "1024*1366", "800*1280", "600*960"]
self.SMARTPHONE_RESOLUTIONS = ["360*598", "412*684", "414*736", "375*667", "320*568", "320*480"]
@keyword("Simulate Screen Resolutions")
def simulate_screen_resolutions(self, app_url=None, resolution_type="Desktop", screenshot=True, revert=True):
"""
Adjust webbrowser to set of resolutions, navigate to url and capture page screenshot.
| = Attributes = | = Description = |
| app_url | Application url under test. Default is current page and user can pass respective URL |
| resolution_type | Pre defined resolutions assigned to variable. They are ``Mobile``, ``Desktop`` and ``Tablet`` |
| screenshot | Capture screenshot after navigating to page. Default value is ``True`` |
| revert | Revert screen resolution to original resolution. Default value is ``True`` |
Usage Example:
| = Keyword = | = Paramter = |
| Simulate Screen Resolutions | resolution_type=Mobile |
| Simulate Screen Resolutions | app_url=https://github.com/ | resolution_type=Desktop |
"""
# get selenium instance
seleniumlib = BuiltIn().get_library_instance('SeleniumLibrary')
# remember window size
prev_width, prev_height = seleniumlib.get_window_size()
if resolution_type.lower() == "desktop":
resolution_list = self.DESKTOP_RESOLUTIONS
elif resolution_type.lower() == "tablet":
resolution_list = self.TABLET_RESOLUTIONS
elif resolution_type.lower() == "mobile":
resolution_list = self.SMARTPHONE_RESOLUTIONS
else:
BuiltIn().fail("Resolution: %s not found"%(resolution_type))
# loop through resolutions list
for items in resolution_list:
BuiltIn().log("Simulating Resolution: %s" %(items) )
try:
width, height = items.split("*")
# re-size for required
seleniumlib.set_window_size(width, height)
# reload page
if app_url is None:
seleniumlib.reload_page()
else:
url = app_url
seleniumlib.go_to(url)
# capture full page screenshot - supports firefox only
if screenshot:
seleniumlib.capture_element_screenshot("tag:body")
except Exception as e:
BuiltIn().log(e)
finally:
if revert:
seleniumlib.set_window_size(prev_width, prev_height)
seleniumlib.reload_page()
@keyword("Simulate Screen Resolution")
def simulate_screen_resolution(self, width, height, app_url=None, screenshot=True, revert=True):
"""
Adjust webbrowser to given resolution (width * height), navigate to url and capture page screenshot.
| = Attributes = | = Description = |
| width | Browser width |
| height | Browser height |
| app_url | Application url under test. Default is current page and user can pass respective URL |
| screenshot | Capture screenshot after navigating to page. Default value is ``True`` |
| revert | Revert screen resolution to original resolution. Default value is ``True`` |
Usage Example:
| = Keyword = | = Paramter = |
| Simulate Screen Resolution | 800 | 760 |
| Simulate Screen Resolution | 800 | 760 | app_url=https://github.com/ | screenshot=False |
"""
# get selenium instance
seleniumlib = BuiltIn().get_library_instance('SeleniumLibrary')
# remember window size
prev_width, prev_height = seleniumlib.get_window_size()
try:
# re-size for required
seleniumlib.set_window_size(width, height)
# reload page
if app_url is None:
seleniumlib.reload_page()
else:
url = app_url
seleniumlib.go_to(url)
# capture full page screenshot
if screenshot:
seleniumlib.capture_element_screenshot("tag:body")
except Exception as e:
BuiltIn().log(e)
finally:
if revert:
seleniumlib.set_window_size(prev_width, prev_height)
seleniumlib.reload_page() | /robotframework-webscreens-0.1.1.tar.gz/robotframework-webscreens-0.1.1/src/WebScreens/web.py | 0.566139 | 0.181481 | web.py | pypi |
import os
from robot.utils import is_truthy
import clr
DLL_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'bin', 'TestStack.White.dll')
clr.AddReference('System')
clr.AddReference(DLL_PATH)
from System.Windows.Automation import AutomationElement, ControlType # noqa: E402
from TestStack.White.UIItems.Finders import SearchCriteria # noqa: E402
from TestStack.White.UIItems import UIItem # noqa: E402
from WhiteLibrary.keywords import (ApplicationKeywords, KeyboardKeywords, MouseKeywords,
WindowKeywords, ScreenshotKeywords, WhiteConfigurationKeywords) # noqa: E402
from WhiteLibrary.keywords.items import (ButtonKeywords,
LabelKeywords,
ListKeywords,
ListViewKeywords,
MenuKeywords,
ProgressbarKeywords,
SliderKeywords,
TabKeywords,
ToolStripKeywords,
TreeKeywords,
TextBoxKeywords,
UiItemKeywords) # noqa: E402
from WhiteLibrary.keywords.robotlibcore import DynamicCore # noqa: E402
from WhiteLibrary.errors import ItemNotFoundError # noqa: E402
from WhiteLibrary import version # noqa: E402
STRATEGIES = dict(id={"method": "ByAutomationId"}, # noqa: C408
text={"method": "ByText"},
index={"method": "Indexed"},
help_text={"method": "ByNativeProperty", "property": "HelpTextProperty"},
class_name={"method": "ByClassName"},
control_type={"method": "ByControlType"})
class WhiteLibrary(DynamicCore):
"""WhiteLibrary is a Robot Framework library for automating Windows GUI.
It is a wrapper for [https://github.com/TestStack/White | TestStack.White] automation framework, which is based on
[https://docs.microsoft.com/en-us/windows/desktop/WinAuto/entry-uiauto-win32 | Microsoft UI Automation API] (UIA).
= Applications and windows =
To interact with UI items, the correct application and window must be attached to WhiteLibrary.
When application is started with `Launch Application`, the keyword also attaches the application to WhiteLibrary.
Attaching a running application is done with `Attach Application By Name` or `Attach Application By Id`.
Once the application is attached, the window to interact with is attached with `Attach Window`.
Examples:
| # Launch application, no separate step for attaching application needed | |
| `Launch Application` | C:/myApplication.exe |
| `Attach Window` | Main window |
| | |
| # Switch to an application that is already running | |
| `Attach Application By Name` | calc1 |
| `Attach Window` | Calculator |
= UI items =
WhiteLibrary uses the same names for UI items (=controls) as White.
See [https://teststackwhite.readthedocs.io/en/latest/UIItems | White's documentation] for details about mapping
UIA control types to White's UI item classes.
For example, the UIA control type ``Text`` maps to the ``Label`` class in White (e.g. in WhiteLibrary's keyword `Verify Label`).
== Item locators ==
Keywords that access UI items (e.g. `Click Button`) use a ``locator`` argument.
The locator consists of a locator prefix that specifies the search criteria, and the locator value.
Locator syntax is ``prefix:value``.
The following locator prefixes are available:
| = Prefix = | = Description = |
| id (or no prefix) | Search by AutomationID. If no prefix is given, the item is searched by AutomationID by default. |
| text | Search by exact item text or name. |
| index | Search by item index. |
| help_text | Search by HelpTextProperty. |
| class_name | Search by class name. |
| control_type | Search by control type. |
| partial_text | Search by text that the item text/name contains. |
Examples:
| `Click Button` | myButton | # clicks button by its AutomationID |
| `Click Button` | id:myButton | # clicks button by its AutomationID |
| `Click Button` | text:Click here! | # clicks button by the button text |
| `Click Button` | index:2 | # clicks button whose index is 2 |
*Note:* Old locator syntax ``prefix=value`` is also valid but it is recommended to use the ``prefix:value`` syntax
since the old syntax *will be deprecated* in the future.
== Item object as a locator ==
It is also possible to use an item object reference as the ``locator`` value.
An item object can be obtained with e.g. `Get Item` or `Get Items` keywords.
The need to use an item object reference can arise for instance when multiple items match the same locator
and one of the items is selected for further action.
When using an item object, the action on the item can be executed regardless of the window it is in,
i.e. the window where the item is located does not necessarily need to be attached.
However, this does not change the attached window and the operation continues in the attached window after action on
the referred item is complete.
Example using item object:
| @{my_buttons}= | `Get Items` | class_name:MyButtonClass |
| `Click Button` | ${my_buttons[2]} | # clicks button object at index 2 of the list |
= Workflow example =
| ***** Variables ***** | | | |
| ${TEST APPLICATION} | C:/path/to/my_application.exe | | |
| | | | |
| ***** Settings ***** | | | |
| Library | WhiteLibrary | | |
| | | | |
| ***** Test Cases ***** | | | |
| Small Example | | | |
| | Launch Application | ${TEST APPLICATION} | |
| | Attach Window | Window Title | |
| | Button Text Should Be | my_button | press this button |
| | Click Button | my_button | |
| | Close Application | | |
= Waiting and timeouts =
White handles a lot of the required waiting automatically, including waiting while the window is busy and
waiting for a window to appear.
White's internal waits use timeouts that can be read and configured with keywords:
- BusyTimeout defines how long to wait while the window is busy,
see `Get White Busy Timeout`, `Set White Busy Timeout`
- FindWindowTimeout defines how long to wait until the specified window is found,
see `Get White Find Window Timeout`, `Set White Find Window Timeout`.
In situations that require additional waiting for UI items, see keywords `Wait Until Item Exists`
and `Wait Until Item Does Not Exist`.
"""
ROBOT_LIBRARY_VERSION = version.VERSION
ROBOT_LIBRARY_SCOPE = "Global"
ROBOT_LISTENER_API_VERSION = 2
def __init__(self, screenshot_dir=None):
"""WhiteLibrary can be imported with an optional argument ``screenshot_dir``.
``screenshot_dir`` is the directory where screenshots taken by WhiteLibrary are saved.
If the given directory does not already exist, it will be created when the first screenshot is taken.
The directory can also be set at runtime with `Set Screenshot Directory`.
If the argument is not given, the default location for screenshots is the output directory of the Robot run,
i.e. the directory where output and log files are generated.
"""
self.app = None
self.window = None
self.screenshooter = None
self.ROBOT_LIBRARY_LISTENER = self # pylint: disable=invalid-name
self.screenshots_enabled = True
self.libraries = [ApplicationKeywords(self),
ButtonKeywords(self),
KeyboardKeywords(self),
LabelKeywords(self),
ListKeywords(self),
ListViewKeywords(self),
MenuKeywords(self),
MouseKeywords(self),
ProgressbarKeywords(self),
SliderKeywords(self),
TabKeywords(self),
WhiteConfigurationKeywords(self),
TextBoxKeywords(self),
ToolStripKeywords(self),
TreeKeywords(self),
UiItemKeywords(self),
WindowKeywords(self),
ScreenshotKeywords(self, screenshot_dir)]
self._running_keyword = None
self._running_on_failure_keyword = False
DynamicCore.__init__(self, self.libraries)
def run_keyword(self, name, args, kwargs): # pylint: disable=signature-differs
"""Reimplemtation of run_keyword.
calls robot framework's own implementation but handles screenshots if/when exceptions are triggered.
"""
self._running_keyword = name
try:
return DynamicCore.run_keyword(self, name, args, kwargs)
except Exception:
self._failure_occurred()
raise
finally:
self._running_keyword = None
def _failure_occurred(self):
# this if-guard here is to prevent recursion if there's
# error in taking of a screenshot.
# Might be safe to remove
if self._running_on_failure_keyword:
return
try:
self._running_on_failure_keyword = True
if self.screenshots_enabled:
self.screenshooter.take_desktop_screenshot()
finally:
self._running_on_failure_keyword = False
def _get_typed_item_by_locator(self, item_type, locator):
if isinstance(locator, UIItem):
if not isinstance(locator, item_type):
raise TypeError("Item object was not of the expected type")
return locator
search_strategy, locator_value = self._parse_locator(locator)
if search_strategy == "partial_text":
return self._get_item_by_partial_text(locator_value, item_type)
search_criteria = self._get_search_criteria(search_strategy, locator_value)
return self.window.Get[item_type](search_criteria)
def _get_item_by_locator(self, locator):
if isinstance(locator, UIItem):
return locator
search_strategy, locator_value = self._parse_locator(locator)
if search_strategy == "partial_text":
return self._get_item_by_partial_text(locator_value)
search_criteria = self._get_search_criteria(search_strategy, locator_value)
return self.window.Get(search_criteria)
def _get_multiple_items_by_locator(self, locator):
search_strategy, locator_value = self._parse_locator(locator)
if search_strategy == "partial_text":
return list(self._get_multiple_items_by_partial_text(locator_value))
search_criteria = self._get_search_criteria(search_strategy, locator_value)
return self.window.GetMultiple(search_criteria)
def _get_item_by_partial_text(self, partial_text, item_type=None):
items = self._get_multiple_items_by_partial_text(partial_text)
try:
if item_type is None:
return next(items)
return next((item for item in items if item.GetType() == clr.GetClrType(item_type)))
except StopIteration:
raise ItemNotFoundError(u"Item with partial text '{}' was not found".format(partial_text))
def _get_multiple_items_by_partial_text(self, partial_text):
items = self.window.GetMultiple(SearchCriteria.All)
return (item for item in items if partial_text in item.Name)
@staticmethod
def _get_search_criteria(search_strategy, locator_value):
if search_strategy == "index":
locator_value = int(locator_value)
try:
search_method = STRATEGIES[search_strategy]["method"]
except KeyError:
raise ValueError("'{}' is not a valid locator prefix".format(search_strategy))
if search_method == "ByNativeProperty":
property_name = STRATEGIES[search_strategy]["property"]
property_name = getattr(AutomationElement, property_name)
search_params = (property_name, locator_value)
else:
if search_method == "ByControlType":
locator_value = getattr(ControlType, locator_value)
search_params = (locator_value,)
method = getattr(SearchCriteria, search_method)
return method(*search_params)
def _parse_locator(self, locator):
if "=" not in locator and ":" not in locator:
locator = "id:" + locator
idx = self._get_locator_delimiter_index(locator)
return locator[:idx], locator[idx + 1:]
@staticmethod
def _get_locator_delimiter_index(locator):
if "=" not in locator:
return locator.index(":")
if ":" not in locator:
return locator.index("=")
return min(locator.index(":"), locator.index("="))
def _end_keyword(self, name, attrs): # pylint: disable=unused-argument
pass
@staticmethod
def _contains_string_value(expected, actual, case_sensitive=True):
case_sensitive = is_truthy(case_sensitive)
expected_value = expected if case_sensitive else expected.upper()
actual_value = actual if case_sensitive else actual.upper()
if expected_value not in actual_value:
raise AssertionError(u"Expected value {} not found in {}".format(expected, actual))
@staticmethod
def _verify_string_value(expected, actual, case_sensitive=True):
case_sensitive = is_truthy(case_sensitive)
expected_value = expected if case_sensitive else expected.upper()
actual_value = actual if case_sensitive else actual.upper()
if expected_value != actual_value:
raise AssertionError(u"Expected value {}, but found {}".format(expected, actual))
@staticmethod
def _verify_value(expected, actual):
if expected != actual:
raise AssertionError(u"Expected value {}, but found {}".format(expected, actual)) | /robotframework-whitelibrary-1.6.0.20191007.3rc0.zip/robotframework-whitelibrary-1.6.0.20191007.3rc0/src/WhiteLibrary/__init__.py | 0.70069 | 0.218471 | __init__.py | pypi |
from robot.utils import secs_to_timestr, timestr_to_secs
from System.Diagnostics import Process, ProcessStartInfo
from WhiteLibrary.keywords.librarycomponent import LibraryComponent
from WhiteLibrary.keywords.robotlibcore import keyword
from WhiteLibrary.utils.wait import Wait
from TestStack.White import Application, WhiteException
class ApplicationKeywords(LibraryComponent):
@keyword
def launch_application(self, sut_path, args=None):
"""Launches an application.
``sut_path`` is the absolute path to the application to launch.
``args`` is a string of arguments to use when starting the application (optional).
Examples:
| Launch Application | C:/path/to/MyApp.exe | | # Launch without arguments |
| Launch Application | C:/path/to/MyApp.exe | /o log.txt | # Launch with arguments |
"""
if args is not None:
process_start_info = ProcessStartInfo(sut_path)
process_start_info.Arguments = args
self.state.app = Application.Launch(process_start_info)
else:
self.state.app = Application.Launch(sut_path)
@staticmethod
def _attach_application(sut_identifier, timeout=0):
exception_message = "Unable to locate application with identifier: {}".format(sut_identifier)
if timeout == 0:
try:
return Application.Attach(sut_identifier)
except WhiteException:
raise AssertionError(exception_message)
# using hack due to 2.7 doesnt support nonlocal keyword
# and inner function cant modify primitive datatypes.
# aand im trying to avoid calling Application.Attach()
# multiple times as it allocates memory on python .net
# side.
hack = {"sut": None}
def search_application():
try:
hack["sut"] = Application.Attach(sut_identifier) # noqa: F841
return True
except WhiteException:
return False
Wait.until_true(search_application, timeout, exception_message)
return hack["sut"]
@keyword
def attach_application_by_name(self, sut_name, timeout=0):
"""Attaches a running application by name.
``sut_name`` is the name of the process.
``timeout`` is the maximum time to wait as a Robot time string. (Optional)
Example:
| Attach Application By Name | UIAutomationTest |
"""
self.state.app = self._attach_application(sut_name, timeout)
@keyword
def attach_application_by_id(self, sut_id, timeout=0):
"""Attaches a running application by process id.
``sut_id`` is the application process id.
``timeout`` is the maximum time to wait as a Robot time string. (Optional)
Example:
| Attach Application By Id | 12188 |
"""
self.state.app = self._attach_application(int(sut_id), timeout)
@keyword
def close_application(self):
"""Closes the attached application."""
self.state.app.Close()
self.state.app = None
self.state.window = None
@keyword
def wait_until_application_has_stopped(self, name, timeout): # pylint:disable=no-self-use
"""Waits until no process with the given name exists.
`name` is the name of the process
`timeout` is the maximum time to wait as a Robot time string.
Example:
| Wait Until Application Has Stopped | calc | # waits until calc.exe process does not exist |
"""
timeout = timestr_to_secs(timeout)
Wait.until_true(lambda: not Process.GetProcessesByName(name), timeout,
"Application '{}' did not exit within {}".format(name, secs_to_timestr(timeout))) | /robotframework-whitelibrary-1.6.0.20191007.3rc0.zip/robotframework-whitelibrary-1.6.0.20191007.3rc0/src/WhiteLibrary/keywords/application.py | 0.696887 | 0.242362 | application.py | pypi |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.