code stringlengths 114 1.05M | path stringlengths 3 312 | quality_prob float64 0.5 0.99 | learning_prob float64 0.2 1 | filename stringlengths 3 168 | kind stringclasses 1
value |
|---|---|---|---|---|---|
# Creating and deploying a bot with Robo AI's Bot Manager Tool
This tutorial intends to give you an overview of how to create and deploy a simple Rasa chatbot using Robo AI's Bot Manager Command Line Tool.
For any doubts you might have, please refer to our [README](../README.md) and if you cannot find the answer to your query, you may contact us at info@robo-ai.com.
Prerequisites of this tutorial: <br>
- An account on the Robo AI platform<br>
- A preexisting bot created on the Robo AI platform <br>
- A valid API key<br>
If you have trouble setting up the above please follow this [guide](manage_roboai_account.md).
## Robo AI Bot Manager Command Line Tool
Robo AI Bot Manager is a CLI tool which allows you to create, manage and deploy Rasa chatbots on the Robo AI platform.
## Installation
To install the tool please make sure you are working on a virtual environment with Python 3.6 or Python 3.7.
You can create a virtual environment using conda:
```sh
conda create -n robo-bot python=3.7
conda activate robo-bot
```
The package can then be installed via pip as follows:
```sh
pip install robo-bot
```
After installing the command line tool, it should be available through the following terminal command:
```sh
robo-bot
```
When you execute it in a terminal you should see an output with a list of commands supported by the tool.
I.e:
```
user@host:~$ robo-bot
____ ___ ____ ___ _ ___
| _ \ / _ \| __ ) / _ \ / \ |_ _|
| |_) | | | | _ \| | | | / _ \ | |
| _ <| |_| | |_) | |_| | _ / ___ \ | |
|_| \_\\___/|____/ \___/ (_) /_/ \_\___|
Bot Management Tool robo-ai.com
Usage: robo-bot [OPTIONS] COMMAND [ARGS]...
robo-bot 0.1.0
Options:
--version Show the version and exit.
--help Show this message and exit.
Commands:
clean Clean the last package
connect Connect a local bot to a ROBO.AI server bot instance.
deploy Deploy the current bot into the ROBO.AI platform.
diff Check for structural differences between languages for the...
environment Define the ROBO.AI platform API endpoint to use.
interactive Run in interactive learning mode where you can provide...
login Initialize a new session using a ROBO.AI API key.
logout Close the current session in the ROBO.AI platform.
logs Display selected bot runtime logs.
package Package the required bot and make it ready for deployment.
remove Remove a deployed bot from the ROBO.AI platform.
run Start the action server.
seed Create a new ROBO.AI project seedling, including folder...
shell Start a shell to interact with the required bot.
start Start a bot deployed on the ROBO.AI platform.
status Display the bot status.
stories Generate stories for a Rasa bot.
stop Stop a bot running in the ROBO.AI platform.
test Tests Rasa models for the required bots.
train Trains Rasa models for the required bots.
```
## Usage
As of now, the tool provides you with two major features: <br>
(1) creating and managing Rasa chatbots <br>
(2) deploying Rasa chatbots on the Robo AI platform. <br>
For simplicity purposes we will use the tool to generate a new bot and stories for it. For the remaining of the tutorial we will use our <a href="https://github.com/robo-ai/roboai-demo-faq-chatbot">demo chatbot</a> which you can clone to follow along.
### Creating a bot
Let's say we want to create a multi-language bot (with English and German being the languages) and a language detection feature where the bot is able to recognize other languages and redirect your users accordingly. This feature is essentially a custom policy which at every user input is classifying the language. Apart from this feature we also included scripts for a custom component and a custom action which you can use as fallback (more on that here). The first step would be to create a folder for our bot's contents.
```sh
mkdir bot
cd bot
```
We are now ready to generate the initial structure of the bot which is provided to you with this tool. To do so, we only need to run the following command:
```sh
robo-bot seed en de --language-detection
```
The *seed* command will generate the following structure inside the bot directory:
```
.
βββ actions
β βββ action_parlai_fallback.py
β βββ __init__.py
βββ custom
β βββ components
β β βββ spacy_nlp
β β βββ spacy_nlp_neuralcoref.py
β β βββ spacy_tokenizer_neuralcoref.py
β βββ policies
β βββ language_detection
β βββ lang_change_policy.py
β βββ lid.176.ftz
βββ languages
| βββ en
| β βββ config.yml
| β βββ data
| β β βββ nlu.md
| β βββ domain.yml
| βββ de
| β βββ config.yml
| β βββ data
| β β βββ nlu.md
| β βββ domain.yml
| βββ stories.md
βββ credentials.yml
βββ endpoints.yml
βββ __init__.py
```
As you can see we have specific files for the NLU, domain and config files depending on the language but the stories file is common among the two chatbots, meaning you do not have to repeat work and if you do wish to have a small subset of specific dialogues for one of the bots, you can create a stories file and include it in the data folder.
This is the initial step for creating any bot with this tool, the structure is always the one shown above even when you are working with single-language chatbots (i.e. if you pass only one language code to the command).
Imagine now we have developed the NLU and the domain files, with intents and responses. Naturally, the next thing to do will be to generate the stories. With this tool you can generate ping pong dialogues automatically with the *stories* command, assuming your intents and responses for these scenarios have the same name (example: intent "greeting", response "utter_greeting"). If you run
```sh
robo-bot stories en
```
it will pick up the intents you have defined in the English bot domain file and build the ping pong dialogues in the common stories.md file.
If you are deep in the development of your bot you can also run
```sh
robo-bot stories en --check-covered-intents
```
and it will check which intents registered in your English domain are not yet covered in the stories file.
For the remaining of the tutorial we will use our demo chatbot which already contains some content that we can leverage to show how the tool works. This is also a multi-language chatbot written in English and German.
Previously we mentioned that for a lot of cases, the stories file will be common between different languages and this means that we need to check the integrity of our multi-language bot. To do so we can use the *diff* command. The *diff* command will check for differences in intents, actions, entities and responses registered in the different language domains. You can use it in the following way:
```sh
robo-bot diff en de
```
Alternatively you can also not pass any language code and it will check for differences between all available bots.
Training a bot, launching a shell and running the action server is intended to work exactly as rasa commands. These commands are just wrappers and we only included arguments we thought are the most used among ourselves so if you're missing some argument that you believe should be added please let us know. You can run
```sh
robo-bot train/shell/run --help
```
to check the available options.
If we want to train our English bot, we can simply run:
```sh
robo-bot train en
```
In the background it will call rasa train with the adjusted paths for the necessary files. Note that if you do not pass a language to the command, all bots will be trained. You should also be warned that differently from Rasa we only keep one model at a time. The next time you train your bot, the old model will be overwritten.
If we want to test our bot locally we can use the shell command as follows:
```sh
robo-bot shell en
```
It will launch the shell for the English bot. In this case it is mandatory that you pass only one language code.
Just like in Rasa we also have to launch the action server so that our bot's actions can be executed. In a separate terminal window you can run the following command to run the action server:
```sh
robo-bot run actions
```
Another natural step in the flow is to generate test stories for our bot to see how it is behaving, at least for simple dialogues. The *test* command generates test stories automatically for you based on the bot stories.md file. Using this command does not necessarily mean that you won't have to manually build some test cases but most of the work should be done. If you run
```sh
robo-bot test en
```
it will check if no test stories already exist and if so it'll automatically create them. After that it will run the rasa test command to run the tests. If there is already a file containing test dialogues then it'll list the intents which are not covered in these dialogues and prompt you to continue with the tests or not.
Once you're happy with your bot you can deploy it on the Robo AI platform. This part of the tutorial assumes you already have set up an account, you already have enabled an API key and you have already created a bot on the platform. You can check [this tutorial](manage_roboai_account.md) in case you haven't followed those steps.
This tool already provides you with the environment configuration you need to have for deploying a bot, you just need to activate it. To do so you only need to run
```sh
robo-bot environment activate production
```
This command will create a hidden file in your filesystem with the configuration needed for the following steps. There are other options under the environment command but they are out of scope in this tutorial.
After activating the environment we can log in on the platform. For that we can use the *login* command in the following way (note that this is a dummy API key):
```sh
robo-bot login --api-key=9e4r5877-56e5-4211-8d92-c33757bc3f54
```
You'll take the API key from the platform, in the top right corner > Account > API keys > copy the most suitable one. This command is logging in on the platform, fetching a token and storing it in the configuration file mentioned above.
Now we need to establish the connection between the bot we have been working on, say the English version, and one that already exists in the platform. To do so we use the connect command as follows:
```sh
robo-bot connect en --target-dir .
```
It will prompt you with the options available in the platform from which you'll have to select one bot. Based on that selection, the command will generate a file called robo-manifest.json where some metadata will be stored in the path passed in the target-dir argument. This file is needed for the deployment step and thus it should be stored in the root directory of the bot (folder called "bot") because that is what subsequent commands are expecting (hence the target-dir argument pointing to the current directory).
After being done with this step we can finally deploy our bot in the platform by executing:
```sh
robo-bot deploy en
```
This will generate an image of the bot which will be running on a docker container which you can also check and manage through some available commands. The *logs* command will show you the logs of the bot which should be useful to track some eventual errors.
```sh
robo-bot logs en
```
The *start* command will start a bot that has been deployed on the Robo AI platform.
```sh
robo-bot start en
```
The *stop* command will stop a bot that is running in the Robo AI platform.
```sh
robo-bot stop en
```
The *remove* command removes a deployed bot from the Robo AI platform.
```sh
robo-bot remove en
```
These are the fundamental steps to handle this tool. If you have any further questions please don't hesitate to reach out to us.
| /robo-bot-0.1.3.1.tar.gz/robo-bot-0.1.3/docs/create_deploy_bot.md | 0.477067 | 0.977586 | create_deploy_bot.md | pypi |
import os
import logging
from typing import Any, Dict, Optional, Text
from rasa.nlu import utils
from rasa.nlu.classifiers.classifier import IntentClassifier
from rasa.nlu.constants import INTENT
from rasa.utils.common import raise_warning
from rasa.nlu.config import RasaNLUModelConfig
from rasa.nlu.training_data import TrainingData
from rasa.nlu.model import Metadata
from rasa.nlu.training_data import Message
logger = logging.getLogger(__name__)
class ExactMatchClassifier(IntentClassifier):
"""Intent classifier using simple exact matching.
The classifier takes a list of keywords and associated intents as an input.
A input sentence is checked for the keywords and the intent is returned.
"""
defaults = {"case_sensitive": True}
def __init__(
self,
component_config: Optional[Dict[Text, Any]] = None,
intent_keyword_map: Optional[Dict] = None,
):
super(ExactMatchClassifier, self).__init__(component_config)
self.case_sensitive = self.component_config.get("case_sensitive")
self.intent_keyword_map = intent_keyword_map or {}
def train(
self,
training_data: TrainingData,
config: Optional[RasaNLUModelConfig] = None,
**kwargs: Any,
) -> None:
for ex in training_data.training_examples:
self.intent_keyword_map[ex.text] = ex.get(INTENT)
def process(self, message: Message, **kwargs: Any) -> None:
intent_name = self._map_keyword_to_intent(message.text)
confidence = 0.0 if intent_name is None else 1.0
intent = {"name": intent_name, "confidence": confidence}
if message.get(INTENT) is None or intent is not None:
message.set(INTENT, intent, add_to_output=True)
def _map_keyword_to_intent(self, text: Text) -> Optional[Text]:
for keyword, intent in self.intent_keyword_map.items():
if keyword.strip() == text.strip(): # re.search(r"\b" + keyword + r"\b", text, flags=re_flag):
logger.debug(
f"ExactMatchClassifier matched keyword '{keyword}' to"
f" intent '{intent}'."
)
return intent
logger.debug("ExactMatchClassifier did not find any keywords in the message.")
return None
def persist(self, file_name: Text, model_dir: Text) -> Dict[Text, Any]:
"""Persist this model into the passed directory.
Return the metadata necessary to load the model again.
"""
file_name = file_name + ".json"
keyword_file = os.path.join(model_dir, file_name)
utils.write_json_to_file(keyword_file, self.intent_keyword_map)
return {"file": file_name}
@classmethod
def load(
cls,
meta: Dict[Text, Any],
model_dir: Optional[Text] = None,
model_metadata: Metadata = None,
cached_component: Optional["ExactMatchClassifier"] = None,
**kwargs: Any,
) -> "ExactMatchClassifier":
if model_dir and meta.get("file"):
file_name = meta.get("file")
keyword_file = os.path.join(model_dir, file_name)
if os.path.exists(keyword_file):
intent_keyword_map = utils.read_json_file(keyword_file)
else:
raise_warning(
f"Failed to load key word file for `IntentKeywordClassifier`, "
f"maybe {keyword_file} does not exist?"
)
intent_keyword_map = None
return cls(meta, intent_keyword_map)
else:
raise Exception(
f"Failed to load keyword intent classifier model. "
f"Path {os.path.abspath(meta.get('file'))} doesn't exist."
) | /robo-bot-0.1.3.1.tar.gz/robo-bot-0.1.3/robo_bot_cli/initial_structure/initial_project/custom/components/exact_match_classifier/exact_match_classifier.py | 0.787605 | 0.227351 | exact_match_classifier.py | pypi |
import math
import os
from os.path import dirname, isfile, join
from shutil import copyfile
import click
import pkg_resources
from questionary import Choice, prompt
from robo_ai.model.assistant.assistant import Assistant
from robo_ai.model.assistant.assistant_list_response import (
AssistantListResponse,
)
from robo_bot_cli.config.bot_manifest import BotManifest
from robo_bot_cli.config.tool_settings import ToolSettings
from robo_bot_cli.util.cli import (
loading_indicator,
print_message,
print_success,
)
from robo_bot_cli.util.robo import (
get_robo_client,
get_supported_base_versions,
validate_bot,
validate_robo_session,
)
@click.command(
name="connect",
help="Connect a local bot to a ROBO.AI server bot instance.",
)
@click.argument(
"language",
nargs=-1,
)
@click.option(
"--bot-uuid",
default=None,
type=str,
help="The bot UUID to assign to the bot implementation.",
)
@click.option(
"--target-dir",
default=None,
type=str,
help="The target directory where the bot will be setup.",
)
def command(
language: tuple,
bot_uuid: str,
target_dir: str,
base_version: str = "rasa-1.10.0",
):
"""
Connect a local bot to a ROBO.AI server bot instance.
This instance must be already created in the ROBO.AI platform.
This command will generate a JSON file (robo-manifest) with metadata about the bot to be deployed.
Args:
language: language code of the bot to be connected.
bot_uuid (str): optional argument stating the ID of the bot.
target_dir (str): optional argument stating where the robo-manifest file should be stored.
"""
validate_robo_session()
if not bot_uuid:
bot_uuid = get_bot_uuid()
validate_bot(bot_uuid)
if len(language) == 0:
bot_dir = bot_ignore_dir = (
os.path.abspath(target_dir)
if target_dir
else create_implementation_directory(bot_uuid)
)
elif len(language) == 1:
bot_dir = (
os.path.abspath(join(target_dir, "languages", language[0]))
if target_dir
else create_implementation_directory(bot_uuid)
) # TODO check what this does
bot_ignore_dir = join(dirname(dirname(bot_dir)))
else:
print_message("Please select only one bot to connect to.")
exit(0)
print_message("Bot target directory: {0}".format(bot_dir))
create_bot_manifest(bot_dir, bot_uuid, base_version)
create_bot_ignore(bot_ignore_dir)
print_success("The bot was successfully initialized.")
def get_bots(page: int) -> AssistantListResponse:
"""
Retrieves the list of bots from the ROBO.AI platform.
Args:
page (int): current page.
Returns:
AssistantListResponse: list of bots available in the current page.
"""
settings = ToolSettings()
current_environment = settings.get_current_environment()
robo = get_robo_client(current_environment)
return robo.assistants.get_list(page)
def get_bot_choice(bot: Assistant) -> dict:
"""
Get bot name and ID in a dictionary.
Args:
bot (Assistant): Assistant object to get details from.
Returns:
dict: dictionary with assistant's name and ID.
"""
return {
"name": bot.name,
"value": bot.uuid,
}
def get_bot_uuid() -> str:
"""
Show bot options to the user, returns the selected bot ID.
Returns:
bot_uuid (str): ID of the selected bot.
"""
NEXT_PAGE = "__NEXT_PAGE__"
PREV_PAGE = "__PREV_PAGE__"
NONE = "__NONE__"
META_RESPONSES = [NEXT_PAGE, PREV_PAGE, NONE]
bot_uuid = NONE
page = 0
while bot_uuid in META_RESPONSES:
if bot_uuid == NEXT_PAGE:
page += 1
if bot_uuid == PREV_PAGE:
page -= 1
with loading_indicator("Loading bot list..."):
bots = get_bots(page)
bot_choices = list(map(get_bot_choice, bots.content))
page_count = math.ceil(bots.totalElements / bots.size)
if page < page_count - 1:
bot_choices.append({"name": "> Next page...", "value": NEXT_PAGE})
if page > 0:
bot_choices.append(
{"name": "> Previous page...", "value": PREV_PAGE}
)
questions = [
{
"type": "list",
"name": "bot_id",
"message": "Please select the bot you would like to implement:",
"choices": [
Choice(title=bot["name"], value=bot["value"])
if (bot["value"] == NEXT_PAGE or bot["value"] == PREV_PAGE)
else Choice(
title=str(bot["name"] + " (" + bot["value"] + ")"),
value=bot["value"],
)
for bot in bot_choices
],
},
]
answers = prompt(questions)
bot_uuid = answers["bot_id"]
return bot_uuid
def get_base_version() -> str:
"""
Show runtime options to the user.
Returns:
base_version (str): selected base version
"""
with loading_indicator("Loading base version list..."):
versions = get_supported_base_versions()
version_choices = [
{"value": base_version["version"], "name": base_version["label"]}
for base_version in versions
]
questions = [
{
"type": "list",
"name": "base_version",
"message": "Please select the bot runtime version:",
"choices": version_choices,
},
]
answers = prompt(questions)
base_version = answers["base_version"]
return base_version
def create_implementation_directory(bot_uuid: str) -> str:
cwd = os.getcwd()
bot_dir = cwd + "/" + bot_uuid
os.mkdir(bot_dir)
return bot_dir
def create_bot_manifest(bot_dir: str, bot_uuid: str, base_version: str):
with loading_indicator("Creating bot manifest file..."):
manifesto = BotManifest(bot_dir)
manifesto.set_bot_id(bot_uuid)
manifesto.set_base_version(base_version)
def create_bot_ignore(bot_ignore_dir: str):
if not isfile(join(bot_ignore_dir, ".botignore")):
copyfile(get_bot_ignore_path(), join(bot_ignore_dir, ".botignore"))
def get_bot_ignore_path() -> str:
return pkg_resources.resource_filename(
__name__, "../initial_structure/initial_project/.botignore"
)
if __name__ == "__main__":
command() | /robo-bot-0.1.3.1.tar.gz/robo-bot-0.1.3/robo_bot_cli/commands/connect.py | 0.489259 | 0.153042 | connect.py | pypi |
import gym
from gym import error, spaces, utils
from gym.utils import seeding
import os
import pybullet as p
import pybullet_data
import math
import numpy as np
import random
class PandaEnv(gym.Env):
metadata = {'render.modes': ['human']}
def __init__(self):
p.connect(p.GUI)
p.resetDebugVisualizerCamera(cameraDistance=1.5, cameraYaw=0, cameraPitch=-40, cameraTargetPosition=[0.55,-0.35,0.2])
self.action_space = spaces.Box(np.array([-1]*4), np.array([1]*4))
self.observation_space = spaces.Box(np.array([-1]*5), np.array([1]*5))
def step(self, action):
p.configureDebugVisualizer(p.COV_ENABLE_SINGLE_STEP_RENDERING)
orientation = p.getQuaternionFromEuler([0.,-math.pi,math.pi/2.])
dv = 0.005
dx = action[0] * dv
dy = action[1] * dv
dz = action[2] * dv
fingers = action[3]
currentPose = p.getLinkState(self.pandaUid, 11)
currentPosition = currentPose[0]
newPosition = [currentPosition[0] + dx,
currentPosition[1] + dy,
currentPosition[2] + dz]
jointPoses = p.calculateInverseKinematics(self.pandaUid,11,newPosition, orientation)
p.setJointMotorControlArray(self.pandaUid, list(range(7))+[9,10], p.POSITION_CONTROL, list(jointPoses)+2*[fingers])
p.stepSimulation()
state_object, _ = p.getBasePositionAndOrientation(self.objectUid)
state_robot = p.getLinkState(self.pandaUid, 11)[0]
state_fingers = (p.getJointState(self.pandaUid,9)[0], p.getJointState(self.pandaUid, 10)[0])
if state_object[2]>0.45:
reward = 1
done = True
else:
reward = 0
done = False
info = state_object
observation = state_robot + state_fingers
return observation, reward, done, info
def reset(self):
p.resetSimulation()
p.configureDebugVisualizer(p.COV_ENABLE_RENDERING,0) # we will enable rendering after we loaded everything
p.setGravity(0,0,-10)
urdfRootPath=pybullet_data.getDataPath()
planeUid = p.loadURDF(os.path.join(urdfRootPath,"plane.urdf"), basePosition=[0,0,-0.65])
rest_poses = [0,-0.215,0,-2.57,0,2.356,2.356,0.08,0.08]
self.pandaUid = p.loadURDF(os.path.join(urdfRootPath, "franka_panda/panda.urdf"),useFixedBase=True)
for i in range(7):
p.resetJointState(self.pandaUid,i, rest_poses[i])
tableUid = p.loadURDF(os.path.join(urdfRootPath, "table/table.urdf"),basePosition=[0.5,0,-0.65])
trayUid = p.loadURDF(os.path.join(urdfRootPath, "tray/traybox.urdf"),basePosition=[0.65,0,0])
state_object= [random.uniform(0.5,0.8),random.uniform(-0.2,0.2),0.05]
self.objectUid = p.loadURDF(os.path.join(urdfRootPath, "random_urdfs/000/000.urdf"), basePosition=state_object)
state_robot = p.getLinkState(self.pandaUid, 11)[0]
state_fingers = (p.getJointState(self.pandaUid,9)[0], p.getJointState(self.pandaUid, 10)[0])
observation = state_robot + state_fingers
p.configureDebugVisualizer(p.COV_ENABLE_RENDERING,1) # rendering's back on again
return observation
def render(self, mode='human'):
view_matrix = p.computeViewMatrixFromYawPitchRoll(cameraTargetPosition=[0.7,0,0.05],
distance=.7,
yaw=90,
pitch=-70,
roll=0,
upAxisIndex=2)
proj_matrix = p.computeProjectionMatrixFOV(fov=60,
aspect=float(960) /720,
nearVal=0.1,
farVal=100.0)
(_, _, px, _, _) = p.getCameraImage(width=960,
height=720,
viewMatrix=view_matrix,
projectionMatrix=proj_matrix,
renderer=p.ER_BULLET_HARDWARE_OPENGL)
rgb_array = np.array(px, dtype=np.uint8)
rgb_array = np.reshape(rgb_array, (720,960, 4))
rgb_array = rgb_array[:, :, :3]
return rgb_array
def close(self):
p.disconnect() | /robo_env-0.0.1.tar.gz/robo_env-0.0.1/robo_env/envs/robo_env.py | 0.537284 | 0.327534 | robo_env.py | pypi |
import numpy as np
import yaml
import os
import copy
class UR():
"""Universal Robots utilities class.
Attributes:
max_joint_positions (np.array): Maximum joint position values (rad)`.
min_joint_positions (np.array): Minimum joint position values (rad)`.
max_joint_velocities (np.array): Maximum joint velocity values (rad/s)`.
min_joint_velocities (np.array): Minimum joint velocity values (rad/s)`.
joint_names (list): Joint names (Standard Indexing)`.
Joint Names (ROS Indexing):
[elbow_joint, shoulder_lift_joint, shoulder_pan_joint, wrist_1_joint, wrist_2_joint,
wrist_3_joint]
NOTE: Where not specified, Standard Indexing is used.
"""
def __init__(self, model):
assert model in ["ur3", "ur3e", "ur5", "ur5e", "ur10", "ur10e", "ur16e"]
self.model = model
file_name = model + ".yaml"
file_path = os.path.join(os.path.dirname(__file__), 'ur_parameters', file_name)
# Load robot paramters
with open(file_path, 'r') as stream:
try:
p = yaml.safe_load(stream)
except yaml.YAMLError as exc:
print(exc)
# Joint Names (Standard Indexing):
self.joint_names = ["shoulder_pan", "shoulder_lift", "elbow_joint", \
"wrist_1", "wrist_2", "wrist_3"]
# Initialize joint limits attributes
self.max_joint_positions = np.zeros(6)
self.min_joint_positions = np.zeros(6)
self.max_joint_velocities = np.zeros(6)
self.min_joint_velocities = np.zeros(6)
for idx,joint in enumerate(self.joint_names):
self.max_joint_positions[idx] = p["joint_limits"][joint]["max_position"]
self.min_joint_positions[idx] = p["joint_limits"][joint]["min_position"]
self.max_joint_velocities[idx] = p["joint_limits"][joint]["max_velocity"]
self.min_joint_velocities[idx] = -p["joint_limits"][joint]["max_velocity"]
# Workspace parameters
self.ws_r = p["workspace_area"]["r"]
self.ws_min_r = p["workspace_area"]["min_r"]
def get_max_joint_positions(self):
return self.max_joint_positions
def get_min_joint_positions(self):
return self.min_joint_positions
def get_max_joint_velocities(self):
return self.max_joint_velocities
def get_min_joint_velocities(self):
return self.min_joint_velocities
def normalize_joint_values(self, joints):
"""Normalize joint position values
Args:
joints (np.array): Joint position values
Returns:
norm_joints (np.array): Joint position values normalized between [-1 , 1]
"""
joints = copy.deepcopy(joints)
for i in range(len(joints)):
if joints[i] <= 0:
joints[i] = joints[i]/abs(self.min_joint_positions[i])
else:
joints[i] = joints[i]/abs(self.max_joint_positions[i])
return joints
def get_random_workspace_pose(self):
"""Get pose of a random point in the robot workspace.
Returns:
np.array: [x,y,z,alpha,theta,gamma] pose.
"""
pose = np.zeros(6)
singularity_area = True
# check if generated x,y,z are in singularityarea
while singularity_area:
# Generate random uniform sample in semisphere taking advantage of the
# sampling rule
phi = np.random.default_rng().uniform(low= 0.0, high= 2*np.pi)
costheta = np.random.default_rng().uniform(low= 0.0, high= 1.0) # [-1.0,1.0] for a sphere
u = np.random.default_rng().uniform(low= 0.0, high= 1.0)
theta = np.arccos(costheta)
r = self.ws_r * np.cbrt(u)
x = r * np.sin(theta) * np.cos(phi)
y = r * np.sin(theta) * np.sin(phi)
z = r * np.cos(theta)
if (x**2 + y**2) > self.ws_min_r**2:
singularity_area = False
pose[0:3] = [x,y,z]
return pose
def _ros_joint_list_to_ur_joint_list(self,ros_thetas):
"""Transform joint angles list from ROS indexing to standard indexing.
Rearrange a list containing the joints values from the joint indexes used
in the ROS join_states messages to the standard joint indexing going from
base to end effector.
Args:
ros_thetas (list): Joint angles with ROS indexing.
Returns:
np.array: Joint angles with standard indexing.
"""
return np.array([ros_thetas[2],ros_thetas[1],ros_thetas[0],ros_thetas[3],ros_thetas[4],ros_thetas[5]])
def _ur_joint_list_to_ros_joint_list(self,thetas):
"""Transform joint angles list from standard indexing to ROS indexing.
Rearrange a list containing the joints values from the standard joint indexing
going from base to end effector to the indexing used in the ROS
join_states messages.
Args:
thetas (list): Joint angles with standard indexing.
Returns:
np.array: Joint angles with ROS indexing.
"""
return np.array([thetas[2],thetas[1],thetas[0],thetas[3],thetas[4],thetas[5]]) | /robo-gym-1.0.0.tar.gz/robo-gym-1.0.0/robo_gym/utils/ur_utils.py | 0.874158 | 0.611672 | ur_utils.py | pypi |
import math
import numpy as np
from scipy.spatial.transform import Rotation as R
def normalize_angle_rad(a):
"""Normalize angle (in radians) to +-pi
Args:
a (float): Angle (rad).
Returns:
float: Normalized angle (rad).
"""
return (a + math.pi) % (2 * math.pi) - math.pi
def point_inside_circle(x,y,center_x,center_y,radius):
"""Check if a point is inside a circle.
Args:
x (float): x coordinate of the point.
y (float): y coordinate of the point.
center_x (float): x coordinate of the center of the circle.
center_y (float): y coordinate of the center of the circle.
radius (float): radius of the circle (m).
Returns:
bool: True if the point is inside the circle.
"""
dx = abs(x - center_x)
dy = abs(y - center_y)
if dx>radius:
return False
if dy>radius:
return False
if dx + dy <= radius:
return True
if dx**2 + dy**2 <= radius**2:
return True
else:
return False
def rotate_point(x,y,theta):
"""Rotate a point around the origin by an angle theta.
Args:
x (float): x coordinate of point.
y (float): y coordinate of point.
theta (float): rotation angle (rad).
Returns:
list: [x,y] coordinates of rotated point.
"""
"""
Rotation of a point by an angle theta(rads)
"""
x_r = x * math.cos(theta) - y * math.sin(theta)
y_r = x * math.sin(theta) + y * math.cos(theta)
return [x_r, y_r]
def cartesian_to_polar_2d(x_target, y_target, x_origin = 0, y_origin = 0):
"""Transform 2D cartesian coordinates to 2D polar coordinates.
Args:
x_target (type): x coordinate of target point.
y_target (type): y coordinate of target point.
x_origin (type): x coordinate of origin of polar system. Defaults to 0.
y_origin (type): y coordinate of origin of polar system. Defaults to 0.
Returns:
float, float: r,theta polard coordinates.
"""
delta_x = x_target - x_origin
delta_y = y_target - y_origin
polar_r = np.sqrt(delta_x**2+delta_y**2)
polar_theta = np.arctan2(delta_y,delta_x)
return polar_r, polar_theta
def cartesian_to_polar_3d(cartesian_coordinates):
"""Transform 3D cartesian coordinates to 3D polar coordinates.
Args:
cartesian_coordinates (list): [x,y,z] coordinates of target point.
Returns:
list: [r,phi,theta] polar coordinates of point.
"""
x = cartesian_coordinates[0]
y = cartesian_coordinates[1]
z = cartesian_coordinates[2]
r = np.sqrt(x**2+y**2+z**2)
#? phi is defined in [-pi, +pi]
phi = np.arctan2(y,x)
#? theta is defined in [0, +pi]
theta = np.arccos(z/r)
return [r,theta,phi]
def downsample_list_to_len(data, output_len):
"""Downsample a list of values to a specific length.
Args:
data (list): Data to downsample.
output_len (int): Length of the downsampled list.
Returns:
list: Downsampled list.
"""
assert output_len > 0
assert output_len <= len(data)
temp = np.linspace(0, len(data)-1, num=output_len)
temp = [int(round(x)) for x in temp]
assert len(temp) == len(set(temp))
ds_data = []
for index in temp:
ds_data.append(data[index])
return ds_data
def change_reference_frame(point, translation, quaternion):
"""Transform a point from one reference frame to another, given
the translation vector between the two frames and the quaternion
between the two frames.
Args:
point (array_like,shape(3,) or shape(N,3)): x,y,z coordinates of the point in the original frame
translation (array_like,shape(3,)): translation vector from the original frame to the new frame
quaternion (array_like,shape(4,)): quaternion from the original frame to the new frame
Returns:
ndarray,shape(3,): x,y,z coordinates of the point in the new frame.
"""
#point = [1,2,3]
#point = np.array([1,2,3])
#point = np.array([[11,12,13],[21,22,23]]) # point.shape = (2,3) # point (11,12,13) and point (21,22,23)
# Apply rotation
r = R.from_quat(quaternion)
rotated_point = r.apply(np.array(point))
# Apply translation
translated_point = np.add(rotated_point, np.array(translation))
return translated_point | /robo-gym-1.0.0.tar.gz/robo-gym-1.0.0/robo_gym/utils/utils.py | 0.928757 | 0.887156 | utils.py | pypi |
import sys, time, math, copy
import numpy as np
import gym
from gym import spaces, logger
from gym.utils import seeding
from robo_gym.utils import utils, mir100_utils
from robo_gym.utils.exceptions import InvalidStateError, RobotServerError
import robo_gym_server_modules.robot_server.client as rs_client
from robo_gym.envs.simulation_wrapper import Simulation
from robo_gym_server_modules.robot_server.grpc_msgs.python import robot_server_pb2
class Mir100Env(gym.Env):
"""Mobile Industrial Robots MiR100 base environment.
Args:
rs_address (str): Robot Server address. Formatted as 'ip:port'. Defaults to None.
Attributes:
mir100 (:obj:): Robot utilities object.
observation_space (:obj:): Environment observation space.
action_space (:obj:): Environment action space.
distance_threshold (float): Minimum distance (m) from target to consider it reached.
min_target_dist (float): Minimum initial distance (m) between robot and target.
max_vel (numpy.array): # Maximum allowed linear (m/s) and angular (rad/s) velocity.
client (:obj:str): Robot Server client.
real_robot (bool): True if the environment is controlling a real robot.
laser_len (int): Length of laser data array included in the environment state.
"""
real_robot = False
laser_len = 1022
max_episode_steps = 500
def __init__(self, rs_address=None, **kwargs):
self.mir100 = mir100_utils.Mir100()
self.elapsed_steps = 0
self.observation_space = self._get_observation_space()
self.action_space = spaces.Box(low=np.full((2), -1.0), high=np.full((2), 1.0), dtype=np.float32)
self.seed()
self.distance_threshold = 0.2
self.min_target_dist = 1.0
# Maximum linear velocity (m/s) of MiR
max_lin_vel = 0.5
# Maximum angular velocity (rad/s) of MiR
max_ang_vel = 0.7
self.max_vel = np.array([max_lin_vel, max_ang_vel])
# Connect to Robot Server
if rs_address:
self.client = rs_client.Client(rs_address)
else:
print("WARNING: No IP and Port passed. Simulation will not be started")
print("WARNING: Use this only to get environment shape")
def seed(self, seed=None):
self.np_random, seed = seeding.np_random(seed)
return [seed]
def reset(self, start_pose = None, target_pose = None):
"""Environment reset.
Args:
start_pose (list[3] or np.array[3]): [x,y,yaw] initial robot position.
target_pose (list[3] or np.array[3]): [x,y,yaw] target robot position.
Returns:
np.array: Environment state.
"""
self.elapsed_steps = 0
self.prev_base_reward = None
# Initialize environment state
self.state = np.zeros(self._get_env_state_len())
rs_state = np.zeros(self._get_robot_server_state_len())
# Set Robot starting position
if start_pose:
assert len(start_pose)==3
else:
start_pose = self._get_start_pose()
rs_state[3:6] = start_pose
# Set target position
if target_pose:
assert len(target_pose)==3
else:
target_pose = self._get_target(start_pose)
rs_state[0:3] = target_pose
# Set initial state of the Robot Server
state_msg = robot_server_pb2.State(state = rs_state.tolist())
if not self.client.set_state_msg(state_msg):
raise RobotServerError("set_state")
# Get Robot Server state
rs_state = copy.deepcopy(np.nan_to_num(np.array(self.client.get_state_msg().state)))
# Check if the length of the Robot Server state received is correct
if not len(rs_state)== self._get_robot_server_state_len():
raise InvalidStateError("Robot Server state received has wrong length")
# Convert the initial state from Robot Server format to environment format
self.state = self._robot_server_state_to_env_state(rs_state)
# Check if the environment state is contained in the observation space
if not self.observation_space.contains(self.state):
raise InvalidStateError()
return self.state
def _reward(self, rs_state, action):
return 0, False, {}
def step(self, action):
self.elapsed_steps += 1
# Check if the action is within the action space
assert self.action_space.contains(action), "%r (%s) invalid" % (action, type(action))
# Convert environment action to Robot Server action
rs_action = copy.deepcopy(action)
# Scale action
rs_action = np.multiply(action, self.max_vel)
# Send action to Robot Server
if not self.client.send_action(rs_action.tolist()):
raise RobotServerError("send_action")
# Get state from Robot Server
rs_state = self.client.get_state_msg().state
# Convert the state from Robot Server format to environment format
self.state = self._robot_server_state_to_env_state(rs_state)
# Check if the environment state is contained in the observation space
if not self.observation_space.contains(self.state):
raise InvalidStateError()
# Assign reward
reward, done, info = self._reward(rs_state=rs_state, action=action)
return self.state, reward, done, info
def render(self):
pass
def _get_robot_server_state_len(self):
"""Get length of the Robot Server state.
Describes the composition of the Robot Server state and returns
its length.
Returns:
int: Length of the Robot Server state.
"""
target = [0.0] * 3
mir_pose = [0.0] * 3
mir_twist = [0.0] * 2
f_scan = [0.0] * 501
b_scan = [0.0] * 511
collision = False
obstacles = [0.0] * 9
rs_state = target + mir_pose + mir_twist + f_scan + b_scan + [collision] + obstacles
return len(rs_state)
def _get_env_state_len(self):
"""Get length of the environment state.
Describes the composition of the environment state and returns
its length.
Returns:
int: Length of the environment state
"""
target_polar_coordinates = [0.0]*2
mir_twist = [0.0]*2
laser = [0.0]*self.laser_len
env_state = target_polar_coordinates + mir_twist + laser
return len(env_state)
def _get_start_pose(self):
"""Get initial robot coordinates.
For the real robot the initial coordinates are its current coordinates
whereas for the simulated robot the initial coordinates are
randomly generated.
Returns:
numpy.array: [x,y,yaw] robot initial coordinates.
"""
if self.real_robot:
# Take current robot position as start position
start_pose = self.client.get_state_msg().state[3:6]
else:
# Create random starting position
x = self.np_random.uniform(low= -2.0, high= 2.0)
y = self.np_random.uniform(low= -2.0, high= 2.0)
yaw = self.np_random.uniform(low= -np.pi, high= np.pi)
start_pose = [x,y,yaw]
return start_pose
def _get_target(self, robot_coordinates):
"""Generate coordinates of the target at a minimum distance from the robot.
Args:
robot_coordinates (list): [x,y,yaw] coordinates of the robot.
Returns:
numpy.array: [x,y,yaw] coordinates of the target.
"""
target_far_enough = False
while not target_far_enough:
x_t = self.np_random.uniform(low= -1.0, high= 1.0)
y_t = self.np_random.uniform(low= -1.0, high= 1.0)
yaw_t = 0.0
target_dist = np.linalg.norm(np.array([x_t,y_t]) - np.array(robot_coordinates[0:2]), axis=-1)
if target_dist >= self.min_target_dist:
target_far_enough = True
return [x_t,y_t,yaw_t]
def _robot_server_state_to_env_state(self, rs_state):
"""Transform state from Robot Server to environment format.
Args:
rs_state (list): State in Robot Server format.
Returns:
numpy.array: State in environment format.
"""
# Convert to numpy array and remove NaN values
rs_state = np.nan_to_num(np.array(rs_state))
# Transform cartesian coordinates of target to polar coordinates
polar_r, polar_theta = utils.cartesian_to_polar_2d(x_target=rs_state[0],\
y_target=rs_state[1],\
x_origin=rs_state[3],\
y_origin=rs_state[4])
# Rotate origin of polar coordinates frame to be matching with robot frame and normalize to +/- pi
polar_theta = utils.normalize_angle_rad(polar_theta - rs_state[5])
# Get Laser scanners data
raw_laser_scan = rs_state[8:1020]
# Downsampling of laser values by picking every n-th value
if self.laser_len > 0:
laser = utils.downsample_list_to_len(raw_laser_scan,self.laser_len)
# Compose environment state
state = np.concatenate((np.array([polar_r, polar_theta]),rs_state[6:8],laser))
else:
# Compose environment state
state = np.concatenate((np.array([polar_r, polar_theta]),rs_state[6:8]))
return state
def _get_observation_space(self):
"""Get environment observation space.
Returns:
gym.spaces: Gym observation space object.
"""
# Target coordinates range
max_target_coords = np.array([np.inf,np.pi])
min_target_coords = np.array([-np.inf,-np.pi])
# Robot velocity range tolerance
vel_tolerance = 0.1
# Robot velocity range used to determine if there is an error in the sensor readings
max_lin_vel = self.mir100.get_max_lin_vel() + vel_tolerance
min_lin_vel = self.mir100.get_min_lin_vel() - vel_tolerance
max_ang_vel = self.mir100.get_max_ang_vel() + vel_tolerance
min_ang_vel = self.mir100.get_min_ang_vel() - vel_tolerance
max_vel = np.array([max_lin_vel,max_ang_vel])
min_vel = np.array([min_lin_vel,min_ang_vel])
# Laser readings range
max_laser = np.full(self.laser_len, 29.0)
min_laser = np.full(self.laser_len, 0.0)
# Definition of environment observation_space
max_obs = np.concatenate((max_target_coords,max_vel,max_laser))
min_obs = np.concatenate((min_target_coords,min_vel,min_laser))
return spaces.Box(low=min_obs, high=max_obs, dtype=np.float32)
def _robot_outside_of_boundary_box(self, robot_coordinates):
"""Check if robot is outside of boundary box.
Check if the robot is outside of the boundaries defined as a box with
its center in the origin of the map and sizes width and height.
Args:
robot_coordinates (list): [x,y] Cartesian coordinates of the robot.
Returns:
bool: True if outside of boundaries.
"""
# Dimensions of boundary box in m, the box center corresponds to the map origin
width = 20
height = 20
if np.absolute(robot_coordinates[0]) > (width/2) or \
np.absolute(robot_coordinates[1] > (height/2)):
return True
else:
return False
def _sim_robot_collision(self, rs_state):
"""Get status of simulated collision sensor.
Used only for simulated Robot Server.
Args:
rs_state (list): State in Robot Server format.
Returns:
bool: True if the robot is in collision.
"""
if rs_state[1020] == 1:
return True
else:
return False
def _min_laser_reading_below_threshold(self, rs_state):
"""Check if any of the laser readings is below a threshold.
Args:
rs_state (list): State in Robot Server format.
Returns:
bool: True if any of the laser readings is below the threshold.
"""
threshold = 0.15
if min(rs_state[8:1020]) < threshold:
return True
else:
return False
class NoObstacleNavigationMir100(Mir100Env):
laser_len = 0
def _reward(self, rs_state, action):
reward = 0
done = False
info = {}
linear_power = 0
angular_power = 0
# Calculate distance to the target
target_coords = np.array([rs_state[0], rs_state[1]])
mir_coords = np.array([rs_state[3],rs_state[4]])
euclidean_dist_2d = np.linalg.norm(target_coords - mir_coords, axis=-1)
# Reward base
base_reward = -50*euclidean_dist_2d
if self.prev_base_reward is not None:
reward = base_reward - self.prev_base_reward
self.prev_base_reward = base_reward
# Power used by the motors
linear_power = abs(action[0] *0.30)
angular_power = abs(action[1] *0.03)
reward -= linear_power
reward -= angular_power
# End episode if robot is outside of boundary box
if self._robot_outside_of_boundary_box(rs_state[3:5]):
reward = -200.0
done = True
info['final_status'] = 'out of boundary'
# The episode terminates with success if the distance between the robot
# and the target is less than the distance threshold.
if (euclidean_dist_2d < self.distance_threshold):
reward = 200.0
done = True
info['final_status'] = 'success'
if self.elapsed_steps >= self.max_episode_steps:
done = True
info['final_status'] = 'max_steps_exceeded'
return reward, done, info
class NoObstacleNavigationMir100Sim(NoObstacleNavigationMir100, Simulation):
cmd = "roslaunch mir100_robot_server sim_robot_server.launch"
def __init__(self, ip=None, lower_bound_port=None, upper_bound_port=None, gui=False, **kwargs):
Simulation.__init__(self, self.cmd, ip, lower_bound_port, upper_bound_port, gui, **kwargs)
NoObstacleNavigationMir100.__init__(self, rs_address=self.robot_server_ip, **kwargs)
class NoObstacleNavigationMir100Rob(NoObstacleNavigationMir100):
real_robot = True
class ObstacleAvoidanceMir100(Mir100Env):
laser_len = 16
def reset(self, start_pose = None, target_pose = None):
"""Environment reset.
Args:
start_pose (list[3] or np.array[3]): [x,y,yaw] initial robot position.
target_pose (list[3] or np.array[3]): [x,y,yaw] target robot position.
Returns:
np.array: Environment state.
"""
self.elapsed_steps = 0
self.prev_base_reward = None
# Initialize environment state
self.state = np.zeros(self._get_env_state_len())
rs_state = np.zeros(self._get_robot_server_state_len())
# Set Robot starting position
if start_pose:
assert len(start_pose)==3
else:
start_pose = self._get_start_pose()
rs_state[3:6] = start_pose
# Set target position
if target_pose:
assert len(target_pose)==3
else:
target_pose = self._get_target(start_pose)
rs_state[0:3] = target_pose
# Generate obstacles positions
self._generate_obstacles_positions()
rs_state[1021:1024] = self.sim_obstacles[0]
rs_state[1024:1027] = self.sim_obstacles[1]
rs_state[1027:1030] = self.sim_obstacles[2]
# Set initial state of the Robot Server
state_msg = robot_server_pb2.State(state = rs_state.tolist())
if not self.client.set_state_msg(state_msg):
raise RobotServerError("set_state")
# Get Robot Server state
rs_state = copy.deepcopy(np.nan_to_num(np.array(self.client.get_state_msg().state)))
# Check if the length of the Robot Server state received is correct
if not len(rs_state)== self._get_robot_server_state_len():
raise InvalidStateError("Robot Server state received has wrong length")
# Convert the initial state from Robot Server format to environment format
self.state = self._robot_server_state_to_env_state(rs_state)
# Check if the environment state is contained in the observation space
if not self.observation_space.contains(self.state):
raise InvalidStateError()
return self.state
def _reward(self, rs_state, action):
reward = 0
done = False
info = {}
linear_power = 0
angular_power = 0
# Calculate distance to the target
target_coords = np.array([rs_state[0], rs_state[1]])
mir_coords = np.array([rs_state[3],rs_state[4]])
euclidean_dist_2d = np.linalg.norm(target_coords - mir_coords, axis=-1)
# Reward base
base_reward = -50*euclidean_dist_2d
if self.prev_base_reward is not None:
reward = base_reward - self.prev_base_reward
self.prev_base_reward = base_reward
# Power used by the motors
linear_power = abs(action[0] *0.30)
angular_power = abs(action[1] *0.03)
reward-= linear_power
reward-= angular_power
# End episode if robot is collides with an object, if it is too close
# to an object.
if not self.real_robot:
if self._sim_robot_collision(rs_state) or \
self._min_laser_reading_below_threshold(rs_state) or \
self._robot_close_to_sim_obstacle(rs_state):
reward = -200.0
done = True
info['final_status'] = 'collision'
if (euclidean_dist_2d < self.distance_threshold):
reward = 100
done = True
info['final_status'] = 'success'
if self.elapsed_steps >= self.max_episode_steps:
done = True
info['final_status'] = 'max_steps_exceeded'
return reward, done, info
def _get_start_pose(self):
"""Get initial robot coordinates.
For the real robot the initial coordinates are its current coordinates
whereas for the simulated robot the initial coordinates are
randomly generated.
Returns:
numpy.array: [x,y,yaw] robot initial coordinates.
"""
if self.real_robot:
# Take current robot position as start position
start_pose = self.client.get_state_msg().state[3:6]
else:
# Create random starting position
x = self.np_random.uniform(low= -2.0, high= 2.0)
if np.random.choice(a=[True,False]):
y = self.np_random.uniform(low= -3.1, high= -2.1)
else:
y = self.np_random.uniform(low= 2.1, high= 3.1)
yaw = self.np_random.uniform(low= -np.pi, high=np.pi)
start_pose = [x,y,yaw]
return start_pose
def _get_target(self, robot_coordinates):
"""Generate coordinates of the target at a minimum distance from the robot.
Args:
robot_coordinates (list): [x,y,yaw] coordinates of the robot.
Returns:
numpy.array: [x,y,yaw] coordinates of the target.
"""
target_far_enough = False
while not target_far_enough:
x_t = self.np_random.uniform(low= -2.0, high= 2.0)
if robot_coordinates[1]>0:
y_t = self.np_random.uniform(low= -3.1, high= -2.1)
else:
y_t = self.np_random.uniform(low= 2.1, high= 3.1)
yaw_t = 0.0
target_dist = np.linalg.norm(np.array([x_t,y_t]) - np.array(robot_coordinates[0:2]), axis=-1)
if target_dist >= self.min_target_dist:
target_far_enough = True
return [x_t,y_t,yaw_t]
def _robot_close_to_sim_obstacle(self, rs_state):
"""Check if the robot is too close to one of the obstacles in simulation.
Check if one of the corner of the robot's base has a distance shorter
than the safety radius from one of the simulated obstacles. Used only for
simulated Robot Server.
Args:
rs_state (list): State in Robot Server format.
Returns:
bool: True if the robot is too close to an obstacle.
"""
# Minimum distance from obstacle center
safety_radius = 0.40
robot_close_to_obstacle = False
robot_corners = self.mir100.get_corners_positions(rs_state[3], rs_state[4], rs_state[5])
for corner in robot_corners:
for obstacle_coord in self.sim_obstacles:
if utils.point_inside_circle(corner[0],corner[1],obstacle_coord[0],obstacle_coord[1],safety_radius):
robot_close_to_obstacle = True
return robot_close_to_obstacle
def _generate_obstacles_positions(self,):
"""Generate random positions for 3 obstacles.
Used only for simulated Robot Server.
"""
x_0 = self.np_random.uniform(low= -2.4, high= -1.5)
y_0 = self.np_random.uniform(low= -1.0, high= 1.0)
yaw_0 = self.np_random.uniform(low= -np.pi, high=np.pi)
x_1 = self.np_random.uniform(low= -0.5, high= 0.5)
y_1 = self.np_random.uniform(low= -1.0, high= 1.0)
yaw_1 = self.np_random.uniform(low= -np.pi, high=np.pi)
x_2 = self.np_random.uniform(low= 1.5, high= 2.4)
y_2 = self.np_random.uniform(low= -1.0, high= 1.0)
yaw_2 = self.np_random.uniform(low= -np.pi, high=np.pi)
self.sim_obstacles = [[x_0, y_0, yaw_0],[x_1, y_1, yaw_1],[x_2, y_2, yaw_2]]
class ObstacleAvoidanceMir100Sim(ObstacleAvoidanceMir100, Simulation):
cmd = "roslaunch mir100_robot_server sim_robot_server.launch world_name:=lab_6x8.world"
def __init__(self, ip=None, lower_bound_port=None, upper_bound_port=None, gui=False, **kwargs):
Simulation.__init__(self, self.cmd, ip, lower_bound_port, upper_bound_port, gui, **kwargs)
ObstacleAvoidanceMir100.__init__(self, rs_address=self.robot_server_ip, **kwargs)
class ObstacleAvoidanceMir100Rob(ObstacleAvoidanceMir100):
real_robot = True | /robo-gym-1.0.0.tar.gz/robo-gym-1.0.0/robo_gym/envs/mir100/mir100.py | 0.749454 | 0.431165 | mir100.py | pypi |
import os, copy, json
import numpy as np
import gym
from typing import Tuple
from robo_gym_server_modules.robot_server.grpc_msgs.python import robot_server_pb2
from robo_gym.envs.simulation_wrapper import Simulation
from robo_gym.envs.ur.ur_base_avoidance_env import URBaseAvoidanceEnv
DEBUG = True
MINIMUM_DISTANCE = 0.45 # the distance [cm] the robot should keep to the obstacle
class AvoidanceIros2021UR(URBaseAvoidanceEnv):
"""Universal Robots UR IROS environment. Obstacle avoidance while keeping a fixed trajectory.
Args:
rs_address (str): Robot Server address. Formatted as 'ip:port'. Defaults to None.
fix_base (bool): Wether or not the base joint stays fixed or is moveable. Defaults to False.
fix_shoulder (bool): Wether or not the shoulder joint stays fixed or is moveable. Defaults to False.
fix_elbow (bool): Wether or not the elbow joint stays fixed or is moveable. Defaults to False.
fix_wrist_1 (bool): Wether or not the wrist 1 joint stays fixed or is moveable. Defaults to False.
fix_wrist_2 (bool): Wether or not the wrist 2 joint stays fixed or is moveable. Defaults to False.
fix_wrist_3 (bool): Wether or not the wrist 3 joint stays fixed or is moveable. Defaults to True.
ur_model (str): determines which ur model will be used in the environment. Defaults to 'ur5'.
include_polar_to_elbow (bool): determines wether or not the polar coordinates to the elbow joint are included in the state. Defaults to False.
Attributes:
ur (:obj:): Robot utilities object.
client (:obj:str): Robot Server client.
real_robot (bool): True if the environment is controlling a real robot.
"""
max_episode_steps = 1000
def __init__(self, rs_address=None, fix_base=False, fix_shoulder=False, fix_elbow=False, fix_wrist_1=False, fix_wrist_2=False, fix_wrist_3=True, ur_model='ur5', include_polar_to_elbow=True, rs_state_to_info=True, **kwargs) -> None:
super().__init__(rs_address, fix_base, fix_shoulder, fix_elbow, fix_wrist_1, fix_wrist_2, fix_wrist_3, ur_model, include_polar_to_elbow)
file_name = 'trajectory_iros_2021.json'
file_path = os.path.join(os.path.dirname(__file__), 'robot_trajectories', file_name)
with open(file_path) as json_file:
self.trajectory = json.load(json_file)['trajectory']
def _set_initial_robot_server_state(self, rs_state, fixed_object_position = None) -> robot_server_pb2.State:
if fixed_object_position:
state_msg = super()._set_initial_robot_server_state(rs_state=rs_state, fixed_object_position=fixed_object_position)
return state_msg
n_sampling_points = int(np.random.default_rng().uniform(low=8000, high=12000))
string_params = {"object_0_function": "3d_spline_ur5_workspace"}
float_params = {"object_0_x_min": -1.0, "object_0_x_max": 1.0, "object_0_y_min": -1.0, "object_0_y_max": 1.0, \
"object_0_z_min": 0.1, "object_0_z_max": 1.0, "object_0_n_points": 10, \
"n_sampling_points": n_sampling_points}
state = {}
state_msg = robot_server_pb2.State(state = state, float_params = float_params,
string_params = string_params, state_dict = rs_state)
return state_msg
def reset(self, fixed_object_position = None) -> np.array:
"""Environment reset.
Args:
fixed_object_position (list[3]): x,y,z fixed position of object
"""
# Initialize state machine variables
self.state_n = 0
self.elapsed_steps_in_current_state = 0
self.target_reached = 0
self.target_reached_counter = 0
self.prev_action = np.zeros(6)
joint_positions = self._get_joint_positions()
state = super().reset(joint_positions = joint_positions, fixed_object_position = fixed_object_position)
return state
def step(self, action) -> Tuple[np.array, float, bool, dict]:
if type(action) == list: action = np.array(action)
self.elapsed_steps_in_current_state += 1
state, reward, done, info = super().step(action)
# Check if waypoint was reached
joint_positions = []
joint_positions_keys = ['base_joint_position', 'shoulder_joint_position', 'elbow_joint_position',
'wrist_1_joint_position', 'wrist_2_joint_position', 'wrist_3_joint_position']
for position in joint_positions_keys:
joint_positions.append(self.rs_state[position])
joint_positions = np.array(joint_positions)
if self.target_point_flag:
if np.isclose(self._get_joint_positions(), joint_positions, atol = 0.1).all():
self.target_reached = 1
if self.target_reached:
self.state_n +=1
# Restart from state 0 if the full trajectory has been completed
self.state_n = self.state_n % len(self.trajectory)
self.elapsed_steps_in_current_state = 0
self.target_reached_counter += 1
self.target_reached = 0
self.prev_action = self.add_fixed_joints(action)
return state, reward, done, info
def reward(self, rs_state, action) -> Tuple[float, bool, dict]:
env_state = self._robot_server_state_to_env_state(rs_state)
reward = 0
done = False
info = {}
# Reward weights
close_distance_weight = -2.0
delta_joint_weight = 1.0
action_usage_weight = 1.5
rapid_action_weight = -0.2
collision_weight = -0.1
target_reached_weight = 0.05
# Calculate distance to the target
target_coord = np.array([rs_state['object_0_to_ref_translation_x'], rs_state['object_0_to_ref_translation_y'], rs_state['object_0_to_ref_translation_z']])
ee_coord = np.array([rs_state['ee_to_ref_translation_x'], rs_state['ee_to_ref_translation_y'], rs_state['ee_to_ref_translation_z']])
forearm_coord = np.array([rs_state['forearm_to_ref_translation_x'], rs_state['forearm_to_ref_translation_y'], rs_state['forearm_to_ref_translation_z']])
distance_to_ee = np.linalg.norm(np.array(target_coord)-np.array(ee_coord))
distance_to_elbow = np.linalg.norm(np.array(target_coord)-np.array(forearm_coord))
# Reward staying close to the predefined joint position
delta_joint_pos = env_state[9:15]
for i in range(action.size):
if abs(delta_joint_pos[i]) < 0.1:
temp = delta_joint_weight * (1 - (abs(delta_joint_pos[i]))/0.1) * (1/1000)
temp = temp/action.size
reward += temp
# Reward for not acting
if abs(action).sum() <= action.size:
reward += action_usage_weight * (1 - (np.square(action).sum()/action.size)) * (1/1000)
# Negative reward if actions change to rapidly between steps
for i in range(len(action)):
if abs(action[i] - self.prev_action[i]) > 0.4:
reward += rapid_action_weight * (1/1000)
# Negative reward if the obstacle is closer than the predefined minimum distance
if (distance_to_ee < MINIMUM_DISTANCE) or (distance_to_elbow < MINIMUM_DISTANCE):
reward += close_distance_weight * (1/1000)
# Reward for reaching a predefined waypoint on the trajectory
if self.target_reached:
reward += target_reached_weight
# Check if robot is in collision
collision = True if rs_state['in_collision'] == 1 else False
if collision:
# Negative reward for a collision with the robot itself, the obstacle or the scene
reward = collision_weight
done = True
info['final_status'] = 'collision'
if self.elapsed_steps >= self.max_episode_steps:
done = True
info['final_status'] = 'success'
if done:
info['targets_reached'] = self.target_reached_counter
return reward, done, info
def _robot_server_state_to_env_state(self, rs_state) -> np.array:
"""Transform state from Robot Server to environment format.
Args:
rs_state (list): State in Robot Server format.
"""
state = super()._robot_server_state_to_env_state(rs_state)
trajectory_joint_position = self.ur.normalize_joint_values(self._get_joint_positions())
target_point_flag = copy.deepcopy(self.target_point_flag)
state = np.concatenate((state, trajectory_joint_position, [target_point_flag]))
return state
def _get_observation_space(self) -> gym.spaces.Box:
"""Get environment observation space."""
# Joint position range tolerance
pos_tolerance = np.full(6,0.1)
# Joint positions range used to determine if there is an error in the sensor readings
max_joint_positions = np.add(np.full(6, 1.0), pos_tolerance)
min_joint_positions = np.subtract(np.full(6, -1.0), pos_tolerance)
# Target coordinates range
target_range = np.full(3, np.inf)
max_delta_start_positions = np.add(np.full(6, 1.0), pos_tolerance)
min_delta_start_positions = np.subtract(np.full(6, -1.0), pos_tolerance)
# Target coordinates (with respect to forearm frame) range
target_forearm_range = np.full(3, np.inf)
# Definition of environment observation_space
if self.include_polar_to_elbow:
max_obs = np.concatenate((target_range, max_joint_positions, max_delta_start_positions, target_forearm_range, max_joint_positions, [1]))
min_obs = np.concatenate((-target_range, min_joint_positions, min_delta_start_positions, -target_forearm_range, min_joint_positions, [0]))
else:
max_obs = np.concatenate((target_range, max_joint_positions, max_delta_start_positions, np.zeros(3), max_joint_positions, [1]))
min_obs = np.concatenate((-target_range, min_joint_positions, min_delta_start_positions, np.zeros(3), min_joint_positions, [0]))
return gym.spaces.Box(low=min_obs, high=max_obs, dtype=np.float32)
def _get_joint_positions(self) -> np.array:
"""Get robot joint positions with standard indexing."""
if self.elapsed_steps_in_current_state < len(self.trajectory[self.state_n]):
joint_positions = copy.deepcopy(self.trajectory[self.state_n][self.elapsed_steps_in_current_state])
self.target_point_flag = 0
else:
# Get last point of the trajectory segment
joint_positions = copy.deepcopy(self.trajectory[self.state_n][-1])
self.target_point_flag = 1
return joint_positions
def _get_joint_positions_as_array(self) -> np.array:
"""Get robot joint positions with standard indexing."""
joint_positions = self._get_joint_positions()
temp = []
temp.append(joint_positions[0])
temp.append(joint_positions[1])
temp.append(joint_positions[2])
temp.append(joint_positions[3])
temp.append(joint_positions[4])
temp.append(joint_positions[5])
return np.array(temp)
class AvoidanceIros2021URSim(AvoidanceIros2021UR, Simulation):
cmd = "roslaunch ur_robot_server ur_robot_server.launch \
world_name:=tabletop_sphere50.world \
reference_frame:=base_link \
max_velocity_scale_factor:=0.2 \
action_cycle_rate:=20 \
rviz_gui:=false \
gazebo_gui:=true \
objects_controller:=true \
rs_mode:=1moving2points \
n_objects:=1.0 \
object_0_model_name:=sphere50 \
object_0_frame:=target"
def __init__(self, ip=None, lower_bound_port=None, upper_bound_port=None, gui=False, ur_model='ur5', **kwargs):
self.cmd = self.cmd + ' ' + 'ur_model:=' + ur_model
Simulation.__init__(self, self.cmd, ip, lower_bound_port, upper_bound_port, gui, **kwargs)
AvoidanceIros2021UR.__init__(self, rs_address=self.robot_server_ip, ur_model=ur_model, **kwargs)
class AvoidanceIros2021URRob(AvoidanceIros2021UR):
real_robot = True
# roslaunch ur_robot_server ur_robot_server.launch ur_model:=ur5 real_robot:=true rviz_gui:=true gui:=true reference_frame:=base max_velocity_scale_factor:=0.2 action_cycle_rate:=20 rs_mode:=1moving2points n_objects:=1.0 object_0_frame:=target
"""
Testing Environment for more complex obstacle avoidance controlling a robotic arm from UR.
In contrast to the training environment the obstacle trajectories are fixed instead of random generation.
"""
class AvoidanceIros2021TestUR(AvoidanceIros2021UR):
ep_n = 0
def _set_initial_robot_server_state(self, rs_state, fixed_object_position = None) -> robot_server_pb2.State:
if fixed_object_position:
state_msg = super()._set_initial_robot_server_state(rs_state=rs_state, fixed_object_position=fixed_object_position)
return state_msg
string_params = {"object_0_function": "fixed_trajectory"}
float_params = {"object_0_trajectory_id": self.ep_n%50}
state = {}
state_msg = robot_server_pb2.State(state = state, float_params = float_params,
string_params = string_params, state_dict = rs_state)
return state_msg
def reset(self):
state = super().reset()
self.ep_n +=1
return state
class AvoidanceIros2021TestURSim(AvoidanceIros2021TestUR, Simulation):
cmd = "roslaunch ur_robot_server ur_robot_server.launch \
world_name:=tabletop_sphere50.world \
reference_frame:=base_link \
max_velocity_scale_factor:=0.2 \
action_cycle_rate:=20 \
rviz_gui:=false \
gazebo_gui:=true \
objects_controller:=true \
rs_mode:=1moving2points \
n_objects:=1.0 \
object_trajectory_file_name:=splines_ur5 \
object_0_model_name:=sphere50 \
object_0_frame:=target"
def __init__(self, ip=None, lower_bound_port=None, upper_bound_port=None, gui=False, ur_model='ur5', **kwargs):
self.cmd = self.cmd + ' ' + 'ur_model:=' + ur_model
Simulation.__init__(self, self.cmd, ip, lower_bound_port, upper_bound_port, gui, **kwargs)
AvoidanceIros2021TestUR.__init__(self, rs_address=self.robot_server_ip, ur_model=ur_model, **kwargs)
class AvoidanceIros2021TestURRob(AvoidanceIros2021TestUR):
real_robot = True
# roslaunch ur_robot_server ur_robot_server.launch ur_model:=ur5 real_robot:=true rviz_gui:=true gui:=true reference_frame:=base max_velocity_scale_factor:=0.2 action_cycle_rate:=20 rs_mode:=1moving2points n_objects:=1.0 object_0_frame:=target | /robo-gym-1.0.0.tar.gz/robo-gym-1.0.0/robo_gym/envs/ur/ur_avoidance_iros.py | 0.880206 | 0.35502 | ur_avoidance_iros.py | pypi |
import copy
import numpy as np
import gym
from typing import Tuple
from robo_gym.utils import ur_utils
from robo_gym.utils.exceptions import InvalidStateError, RobotServerError, InvalidActionError
import robo_gym_server_modules.robot_server.client as rs_client
from robo_gym_server_modules.robot_server.grpc_msgs.python import robot_server_pb2
from robo_gym.envs.simulation_wrapper import Simulation
# base, shoulder, elbow, wrist_1, wrist_2, wrist_3
JOINT_POSITIONS = [0.0, -2.5, 1.5, 0.0, -1.4, 0.0]
class URBaseEnv(gym.Env):
"""Universal Robots UR base environment.
Args:
rs_address (str): Robot Server address. Formatted as 'ip:port'. Defaults to None.
fix_base (bool): Wether or not the base joint stays fixed or is moveable. Defaults to False.
fix_shoulder (bool): Wether or not the shoulder joint stays fixed or is moveable. Defaults to False.
fix_elbow (bool): Wether or not the elbow joint stays fixed or is moveable. Defaults to False.
fix_wrist_1 (bool): Wether or not the wrist 1 joint stays fixed or is moveable. Defaults to False.
fix_wrist_2 (bool): Wether or not the wrist 2 joint stays fixed or is moveable. Defaults to False.
fix_wrist_3 (bool): Wether or not the wrist 3 joint stays fixed or is moveable. Defaults to True.
ur_model (str): determines which ur model will be used in the environment. Default to 'ur5'.
Attributes:
ur (:obj:): Robot utilities object.
client (:obj:str): Robot Server client.
real_robot (bool): True if the environment is controlling a real robot.
"""
real_robot = False
max_episode_steps = 300
def __init__(self, rs_address=None, fix_base=False, fix_shoulder=False, fix_elbow=False, fix_wrist_1=False, fix_wrist_2=False, fix_wrist_3=True, ur_model='ur5', rs_state_to_info=True, **kwargs):
self.ur = ur_utils.UR(model=ur_model)
self.elapsed_steps = 0
self.rs_state_to_info = rs_state_to_info
self.fix_base = fix_base
self.fix_shoulder = fix_shoulder
self.fix_elbow = fix_elbow
self.fix_wrist_1 = fix_wrist_1
self.fix_wrist_2 = fix_wrist_2
self.fix_wrist_3 = fix_wrist_3
self.observation_space = self._get_observation_space()
self.action_space = self._get_action_space()
self.abs_joint_pos_range = self.ur.get_max_joint_positions()
self.rs_state = None
# Connect to Robot Server
if rs_address:
self.client = rs_client.Client(rs_address)
else:
print("WARNING: No IP and Port passed. Simulation will not be started")
print("WARNING: Use this only to get environment shape")
def _set_initial_robot_server_state(self, rs_state) -> robot_server_pb2.State:
string_params = {}
float_params = {}
state = {}
state_msg = robot_server_pb2.State(state = state, float_params = float_params,
string_params = string_params, state_dict = rs_state)
return state_msg
def reset(self, joint_positions = None) -> np.array:
"""Environment reset.
Args:
joint_positions (list[6] or np.array[6]): robot joint positions in radians. Order is defined by
Returns:
np.array: Environment state.
"""
if joint_positions:
assert len(joint_positions) == 6
else:
joint_positions = JOINT_POSITIONS
self.elapsed_steps = 0
# Initialize environment state
state_len = self.observation_space.shape[0]
state = np.zeros(state_len)
rs_state = dict.fromkeys(self.get_robot_server_composition(), 0.0)
# Set initial robot joint positions
self._set_joint_positions(joint_positions)
# Update joint positions in rs_state
rs_state.update(self.joint_positions)
# Set initial state of the Robot Server
state_msg = self._set_initial_robot_server_state(rs_state)
if not self.client.set_state_msg(state_msg):
raise RobotServerError("set_state")
# Get Robot Server state
rs_state = self.client.get_state_msg().state_dict
# Check if the length and keys of the Robot Server state received is correct
self._check_rs_state_keys(rs_state)
# Convert the initial state from Robot Server format to environment format
state = self._robot_server_state_to_env_state(rs_state)
# Check if the environment state is contained in the observation space
if not self.observation_space.contains(state):
raise InvalidStateError()
# Check if current position is in the range of the initial joint positions
for joint in self.joint_positions.keys():
if not np.isclose(self.joint_positions[joint], rs_state[joint], atol=0.05):
raise InvalidStateError('Reset joint positions are not within defined range')
self.rs_state = rs_state
return state
def reward(self, rs_state, action) -> Tuple[float, bool, dict]:
done = False
info = {}
# Check if robot is in collision
collision = True if rs_state['in_collision'] == 1 else False
if collision:
done = True
info['final_status'] = 'collision'
if self.elapsed_steps >= self.max_episode_steps:
done = True
info['final_status'] = 'success'
return 0, done, info
def add_fixed_joints(self, action) -> np.array:
action = action.tolist()
fixed_joints = np.array([self.fix_base, self.fix_shoulder, self.fix_elbow, self.fix_wrist_1, self.fix_wrist_2, self.fix_wrist_3])
fixed_joint_indices = np.where(fixed_joints)[0]
joint_pos_names = ['base_joint_position', 'shoulder_joint_position', 'elbow_joint_position',
'wrist_1_joint_position', 'wrist_2_joint_position', 'wrist_3_joint_position']
joint_positions_dict = self._get_joint_positions()
joint_positions = np.array([joint_positions_dict.get(joint_pos) for joint_pos in joint_pos_names])
joints_position_norm = self.ur.normalize_joint_values(joints=joint_positions)
temp = []
for joint in range(len(fixed_joints)):
if joint in fixed_joint_indices:
temp.append(joints_position_norm[joint])
else:
temp.append(action.pop(0))
return np.array(temp)
def env_action_to_rs_action(self, action) -> np.array:
"""Convert environment action to Robot Server action"""
rs_action = copy.deepcopy(action)
# Scale action
rs_action = np.multiply(rs_action, self.abs_joint_pos_range)
# Convert action indexing from ur to ros
rs_action = self.ur._ur_joint_list_to_ros_joint_list(rs_action)
return rs_action
def step(self, action) -> Tuple[np.array, float, bool, dict]:
if type(action) == list: action = np.array(action)
self.elapsed_steps += 1
# Check if the action is contained in the action space
if not self.action_space.contains(action):
raise InvalidActionError()
# Add missing joints which were fixed at initialization
action = self.add_fixed_joints(action)
# Convert environment action to robot server action
rs_action = self.env_action_to_rs_action(action)
# Send action to Robot Server and get state
rs_state = self.client.send_action_get_state(rs_action.tolist()).state_dict
self._check_rs_state_keys(rs_state)
# Convert the state from Robot Server format to environment format
state = self._robot_server_state_to_env_state(rs_state)
# Check if the environment state is contained in the observation space
if not self.observation_space.contains(state):
raise InvalidStateError()
self.rs_state = rs_state
# Assign reward
reward = 0
done = False
reward, done, info = self.reward(rs_state=rs_state, action=action)
if self.rs_state_to_info: info['rs_state'] = self.rs_state
return state, reward, done, info
def get_rs_state(self):
return self.rs_state
def render():
pass
def get_robot_server_composition(self) -> list:
rs_state_keys = [
'base_joint_position',
'shoulder_joint_position',
'elbow_joint_position',
'wrist_1_joint_position',
'wrist_2_joint_position',
'wrist_3_joint_position',
'base_joint_velocity',
'shoulder_joint_velocity',
'elbow_joint_velocity',
'wrist_1_joint_velocity',
'wrist_2_joint_velocity',
'wrist_3_joint_velocity',
'ee_to_ref_translation_x',
'ee_to_ref_translation_y',
'ee_to_ref_translation_z',
'ee_to_ref_rotation_x',
'ee_to_ref_rotation_y',
'ee_to_ref_rotation_z',
'ee_to_ref_rotation_w',
'in_collision'
]
return rs_state_keys
def _get_robot_server_state_len(self) -> int:
"""Get length of the Robot Server state.
Describes the composition of the Robot Server state and returns
its length.
"""
return len(self.get_robot_server_composition())
def _check_rs_state_keys(self, rs_state) -> None:
keys = self.get_robot_server_composition()
if not len(keys) == len(rs_state.keys()):
raise InvalidStateError("Robot Server state keys to not match. Different lengths.")
for key in keys:
if key not in rs_state.keys():
raise InvalidStateError("Robot Server state keys to not match")
def _set_joint_positions(self, joint_positions) -> None:
"""Set desired robot joint positions with standard indexing."""
# Set initial robot joint positions
self.joint_positions = {}
self.joint_positions['base_joint_position'] = joint_positions[0]
self.joint_positions['shoulder_joint_position'] = joint_positions[1]
self.joint_positions['elbow_joint_position'] = joint_positions[2]
self.joint_positions['wrist_1_joint_position'] = joint_positions[3]
self.joint_positions['wrist_2_joint_position'] = joint_positions[4]
self.joint_positions['wrist_3_joint_position'] = joint_positions[5]
def _get_joint_positions(self) -> dict:
"""Get robot joint positions with standard indexing."""
return self.joint_positions
def _get_joint_positions_as_array(self) -> np.array:
"""Get robot joint positions with standard indexing."""
joint_positions = []
joint_positions.append(self.joint_positions['base_joint_position'])
joint_positions.append(self.joint_positions['shoulder_joint_position'])
joint_positions.append(self.joint_positions['elbow_joint_position'])
joint_positions.append(self.joint_positions['wrist_1_joint_position'])
joint_positions.append(self.joint_positions['wrist_2_joint_position'])
joint_positions.append(self.joint_positions['wrist_3_joint_position'])
return np.array(joint_positions)
def get_joint_name_order(self) -> list:
return ['base', 'shoulder', 'elbow', 'wrist_1', 'wrist_2', 'wrist_3']
def _robot_server_state_to_env_state(self, rs_state) -> np.array:
"""Transform state from Robot Server to environment format.
Args:
rs_state (list): State in Robot Server format.
Returns:
numpy.array: State in environment format.
"""
# Joint positions
joint_positions = []
joint_positions_keys = ['base_joint_position', 'shoulder_joint_position', 'elbow_joint_position',
'wrist_1_joint_position', 'wrist_2_joint_position', 'wrist_3_joint_position']
for position in joint_positions_keys:
joint_positions.append(rs_state[position])
joint_positions = np.array(joint_positions)
# Normalize joint position values
joint_positions = self.ur.normalize_joint_values(joints=joint_positions)
# Joint Velocities
joint_velocities = []
joint_velocities_keys = ['base_joint_velocity', 'shoulder_joint_velocity', 'elbow_joint_velocity',
'wrist_1_joint_velocity', 'wrist_2_joint_velocity', 'wrist_3_joint_velocity']
for velocity in joint_velocities_keys:
joint_velocities.append(rs_state[velocity])
joint_velocities = np.array(joint_velocities)
# Compose environment state
state = np.concatenate((joint_positions, joint_velocities))
return state
def _get_observation_space(self) -> gym.spaces.Box:
"""Get environment observation space.
Returns:
gym.spaces: Gym observation space object.
"""
# Joint position range tolerance
pos_tolerance = np.full(6,0.1)
# Joint positions range used to determine if there is an error in the sensor readings
max_joint_positions = np.add(np.full(6, 1.0), pos_tolerance)
min_joint_positions = np.subtract(np.full(6, -1.0), pos_tolerance)
# Joint velocities range
max_joint_velocities = np.array([np.inf] * 6)
min_joint_velocities = -np.array([np.inf] * 6)
# Definition of environment observation_space
max_obs = np.concatenate((max_joint_positions, max_joint_velocities))
min_obs = np.concatenate((min_joint_positions, min_joint_velocities))
return gym.spaces.Box(low=min_obs, high=max_obs, dtype=np.float32)
def _get_action_space(self)-> gym.spaces.Box:
"""Get environment action space.
Returns:
gym.spaces: Gym action space object.
"""
fixed_joints = [self.fix_base, self.fix_shoulder, self.fix_elbow, self.fix_wrist_1, self.fix_wrist_2, self.fix_wrist_3]
num_control_joints = len(fixed_joints) - sum(fixed_joints)
return gym.spaces.Box(low=np.full((num_control_joints), -1.0), high=np.full((num_control_joints), 1.0), dtype=np.float32)
class EmptyEnvironmentURSim(URBaseEnv, Simulation):
cmd = "roslaunch ur_robot_server ur_robot_server.launch \
world_name:=empty.world \
reference_frame:=base_link \
max_velocity_scale_factor:=0.2 \
action_cycle_rate:=20 \
rviz_gui:=false \
gazebo_gui:=true \
rs_mode:=only_robot"
def __init__(self, ip=None, lower_bound_port=None, upper_bound_port=None, gui=False, ur_model='ur5', **kwargs):
self.cmd = self.cmd + ' ' + 'ur_model:=' + ur_model
Simulation.__init__(self, self.cmd, ip, lower_bound_port, upper_bound_port, gui, **kwargs)
URBaseEnv.__init__(self, rs_address=self.robot_server_ip, ur_model=ur_model, **kwargs)
class EmptyEnvironmentURRob(URBaseEnv):
real_robot = True
# roslaunch ur_robot_server ur5_real_robot_server.launch gui:=true reference_frame:=base max_velocity_scale_factor:=0.2 action_cycle_rate:=20 rs_mode:=moving | /robo-gym-1.0.0.tar.gz/robo-gym-1.0.0/robo_gym/envs/ur/ur_base_env.py | 0.818302 | 0.323768 | ur_base_env.py | pypi |
import numpy as np
from typing import Tuple
from robo_gym_server_modules.robot_server.grpc_msgs.python import robot_server_pb2
from robo_gym.envs.simulation_wrapper import Simulation
from robo_gym.envs.ur.ur_base_avoidance_env import URBaseAvoidanceEnv
# base, shoulder, elbow, wrist_1, wrist_2, wrist_3
JOINT_POSITIONS = [-1.57, -1.31, -1.31, -2.18, 1.57, 0.0]
DEBUG = True
MINIMUM_DISTANCE = 0.3 # the distance [cm] the robot should keep to the obstacle
class BasicAvoidanceUR(URBaseAvoidanceEnv):
"""Universal Robots UR basic obstacle avoidance environment.
Args:
rs_address (str): Robot Server address. Formatted as 'ip:port'. Defaults to None.
fix_base (bool): Wether or not the base joint stays fixed or is moveable. Defaults to False.
fix_shoulder (bool): Wether or not the shoulder joint stays fixed or is moveable. Defaults to False.
fix_elbow (bool): Wether or not the elbow joint stays fixed or is moveable. Defaults to False.
fix_wrist_1 (bool): Wether or not the wrist 1 joint stays fixed or is moveable. Defaults to False.
fix_wrist_2 (bool): Wether or not the wrist 2 joint stays fixed or is moveable. Defaults to False.
fix_wrist_3 (bool): Wether or not the wrist 3 joint stays fixed or is moveable. Defaults to True.
ur_model (str): determines which ur model will be used in the environment. Defaults to 'ur5'.
include_polar_to_elbow (bool): determines wether or not the polar coordinates to the elbow joint are included in the state. Defaults to False.
Attributes:
ur (:obj:): Robot utilities object.
client (:obj:str): Robot Server client.
real_robot (bool): True if the environment is controlling a real robot.
"""
max_episode_steps = 1000
def _set_initial_robot_server_state(self, rs_state, fixed_object_position = None) -> robot_server_pb2.State:
if fixed_object_position:
state_msg = super()._set_initial_robot_server_state(rs_state=rs_state, fixed_object_position=fixed_object_position)
return state_msg
z_amplitude = np.random.default_rng().uniform(low=0.09, high=0.35)
z_frequency = 0.125
z_offset = np.random.default_rng().uniform(low=0.2, high=0.6)
string_params = {"object_0_function": "triangle_wave"}
float_params = {"object_0_x": 0.12,
"object_0_y": 0.34,
"object_0_z_amplitude": z_amplitude,
"object_0_z_frequency": z_frequency,
"object_0_z_offset": z_offset}
state = {}
state_msg = robot_server_pb2.State(state = state, float_params = float_params,
string_params = string_params, state_dict = rs_state)
return state_msg
def reset(self, joint_positions = JOINT_POSITIONS, fixed_object_position = None) -> np.array:
"""Environment reset.
Args:
joint_positions (list[6] or np.array[6]): robot joint positions in radians.
fixed_object_position (list[3]): x,y,z fixed position of object
"""
self.prev_action = np.zeros(6)
state = super().reset(joint_positions = joint_positions, fixed_object_position = fixed_object_position)
return state
def reward(self, rs_state, action) -> Tuple[float, bool, dict]:
env_state = self._robot_server_state_to_env_state(rs_state)
reward = 0
done = False
info = {}
# Reward weights
close_distance_weight = -2
delta_joint_weight = 1
action_usage_weight = 1
rapid_action_weight = -0.2
# Difference in joint position current vs. starting position
delta_joint_pos = env_state[9:15]
# Calculate distance to the obstacle
obstacle_coord = np.array([rs_state['object_0_to_ref_translation_x'], rs_state['object_0_to_ref_translation_y'], rs_state['object_0_to_ref_translation_z']])
ee_coord = np.array([rs_state['ee_to_ref_translation_x'], rs_state['ee_to_ref_translation_y'], rs_state['ee_to_ref_translation_z']])
forearm_coord = np.array([rs_state['forearm_to_ref_translation_x'], rs_state['forearm_to_ref_translation_y'], rs_state['forearm_to_ref_translation_z']])
distance_to_ee = np.linalg.norm(obstacle_coord - ee_coord)
distance_to_forearm = np.linalg.norm(obstacle_coord - forearm_coord)
distance_to_target = np.min([distance_to_ee, distance_to_forearm])
# Reward staying close to the predefined joint position
if abs(env_state[-6:]).sum() < 0.1 * action.size:
reward += delta_joint_weight * (1 - (abs(delta_joint_pos).sum()/(0.1 * action.size))) * (1/1000)
# Reward for not acting
if abs(action).sum() <= action.size:
reward += action_usage_weight * (1 - (np.square(action).sum()/action.size)) * (1/1000)
# Negative reward if actions change to rapidly between steps
for i in range(len(action)):
if abs(action[i] - self.prev_action[i]) > 0.5:
reward += rapid_action_weight * (1/1000)
# Negative reward if the obstacle is close than the predefined minimum distance
if distance_to_target < MINIMUM_DISTANCE:
reward += close_distance_weight * (1/self.max_episode_steps)
# Check if there is a collision
collision = True if rs_state['in_collision'] == 1 else False
if collision:
done = True
info['final_status'] = 'collision'
info['target_coord'] = obstacle_coord
self.last_position_on_success = []
if self.elapsed_steps >= self.max_episode_steps:
done = True
info['final_status'] = 'success'
info['target_coord'] = obstacle_coord
self.last_position_on_success = []
return reward, done, info
def step(self, action) -> Tuple[np.array, float, bool, dict]:
if type(action) == list: action = np.array(action)
state, reward, done, info = super().step(action)
self.prev_action = self.add_fixed_joints(action)
return state, reward, done, info
class BasicAvoidanceURSim(BasicAvoidanceUR, Simulation):
cmd = "roslaunch ur_robot_server ur_robot_server.launch \
world_name:=tabletop_sphere50.world \
reference_frame:=base_link \
max_velocity_scale_factor:=0.2 \
action_cycle_rate:=20 \
rviz_gui:=false \
gazebo_gui:=true \
objects_controller:=true \
rs_mode:=1moving2points \
n_objects:=1.0 \
object_0_model_name:=sphere50 \
object_0_frame:=target"
def __init__(self, ip=None, lower_bound_port=None, upper_bound_port=None, gui=False, ur_model='ur5', **kwargs):
self.cmd = self.cmd + ' ' + 'ur_model:=' + ur_model
Simulation.__init__(self, self.cmd, ip, lower_bound_port, upper_bound_port, gui, **kwargs)
BasicAvoidanceUR.__init__(self, rs_address=self.robot_server_ip, ur_model=ur_model, **kwargs)
class BasicAvoidanceURRob(BasicAvoidanceUR):
real_robot = True
# roslaunch ur_robot_server ur_robot_server.launch ur_model:=ur5 real_robot:=true rviz_gui:=true gui:=true reference_frame:=base max_velocity_scale_factor:=0.2 action_cycle_rate:=20 rs_mode:=moving | /robo-gym-1.0.0.tar.gz/robo-gym-1.0.0/robo_gym/envs/ur/ur_avoidance_basic.py | 0.857291 | 0.420302 | ur_avoidance_basic.py | pypi |
[](https://opensource.org/licenses/MIT)
# ROBO.AI Bot Runtime manager CLI tool #
<img align="right" width="200" height="200" alt="robo-bot" src="robo-bot.png"></img>
This tool allows anyone to create, train, deploy, monitor and manage a Rasa based bot on the ROBO.AI platform.
Check our [CHANGELOG](CHANGELOG.md) for the latest changes.
Tutorials:
* [The ROBO.AI platform - creating bots and API keys for deployment](docs/manage_roboai_account.md)
* [Creating and deploying a Rasa chatbot on the ROBO.AI platform](docs/create_deploy_bot.md)
### How to install ###
#### Requirements ####
* Python 3.7
* Pip and/or anaconda
You can create a virtual environment using conda:
```sh
conda create -n roboai-cli python=3.7
conda activate roboai-cli
```
#### Installing the ROBO.AI tool ####
Assuming you are already in your virtual environment with Python 3.7, you can install the tool with the following command:
```
pip install roboai-cli
```
After installing the library you should be able to execute the robo-bot command in your terminal.
#### Usage ####
The command line tool is available through the following terminal command:
```
roboai
```
When you execute it in a terminal you should see an output with a list of commands supported
by the tool.
I.e:
```
user@host:~$ roboai
____ ___ ____ ___ _ ___
| _ \ / _ \| __ ) / _ \ / \ |_ _|
| |_) | | | | _ \| | | | / _ \ | |
| _ <| |_| | |_) | |_| | _ / ___ \ | |
|_| \_\\___/|____/ \___/ (_) /_/ \_\___|
Bot Management Tool robo-ai.com
Usage: roboai [OPTIONS] COMMAND [ARGS]...
roboai 1.1.1
Options:
--version Show the version and exit.
--help Show this message and exit.
Commands:
clean Clean the last package
connect Connect a local bot to a ROBO.AI server bot instance.
data Utility command to split, export and import data.
deploy Deploy the current bot into the ROBO.AI platform.
diff Check for structural differences between languages for the...
environment Define the ROBO.AI platform API endpoint to use.
interactive Run in interactive learning mode where you can provide...
login Initialize a new session using a ROBO.AI API key.
logout Close the current session in the ROBO.AI platform.
logs Display selected bot runtime logs.
package Package the required bot and make it ready for deployment.
remove Remove a deployed bot from the ROBO.AI platform.
run Start the action server.
seed Create a new ROBO.AI project seedling, including folder...
shell Start a shell to interact with the required bot.
start Start a bot deployed on the ROBO.AI platform.
status Display the bot status.
stories Generate stories for a Rasa bot.
stop Stop a bot running in the ROBO.AI platform.
test Test Rasa models for the required bots.
train Train Rasa models for the required bots.
```
Each of the listed commands provides you a functionality to deal with your bots,
each one has a description, and a help option, so you can see what options and
arguments are available.
You can invoke each of the tool commands by following the pattern:
```
roboai <command> [command arguments or options]
```
i.e.:
```
roboai login --api-key=my-apy-key
```
You can check the supported options and arguments for every command by following
the pattern:
```
roboai <command> --help
```
i.e.:
```
user@host:~$ roboai login --help
____ ___ ____ ___ _ ___
| _ \ / _ \| __ ) / _ \ / \ |_ _|
| |_) | | | | _ \| | | | / _ \ | |
| _ <| |_| | |_) | |_| | _ / ___ \ | |
|_| \_\\___/|____/ \___/ (_) /_/ \_\___|
Bot Management Tool robo-ai.com
Usage: roboai login [OPTIONS]
Initialize a new session using a ROBO.AI API key.
Options:
--api-key TEXT The ROBO.AI platform API key.
--help Show this message and exit.
```
### Using roboai-cli to create and maintain a bot ###
##### Generating an initial structure #####
The ROBO.AI tool provides you with a set of commands useful to create, train, interact and test a bot
before its deployment.
To create a bot you can use the **seed** command:
```
roboai seed [language-codes] [--path <path> --language-detection --chit-chat --coref-resolution]
```
i.e.:
```
roboai seed en de --path bot/ --language-detection --chit-chat --coref-resolution
```
The first argument of the seed command is the language-codes which indicate the languages the bot will be built upon.
If no language-codes are passed, only an english sub-directory (en) will be created.
The optional parameters are referring to features you may want to add to the bot.
This command behaves like rasa init but it'll generate a dedicated structure where you can have
multi-language bots related with the same domain. Below there's an example of a bot generated with this command.
```
.
βββ actions
β βββ action_parlai_fallback.py
βββ custom
β βββ components
β β βββ spacy_nlp
β β βββ spacy_nlp_neuralcoref.py
β β βββ spacy_tokenizer_neuralcoref.py
β βββ policies
β βββ language_detection
β βββ lang_change_policy.py
β βββ lid.176.ftz
βββ languages
| βββ de
| β βββ config.yml
| β βββ data
| β β βββ lookup_tables
| β β βββ nlu.md
| β βββ domain.yml
| βββ en
| β βββ config.yml
| β βββ data
| β β βββ lookup_tables
| β β βββ nlu.md
| β βββ domain.yml
| βββ stories.md
βββ credentials.yml
βββ endpoints.yml
βββ __init__.py
```
##### Generating stories for a bot #####
After defining intents and actions for a bot you need to combine these in stories. This command allows you to generate the most basic interactions in your Rasa bot.
Note: Manual checks will be needed to implement more complex stories but basic ping-pong dialogues should be covered with this feature.
Usage:
```
roboai stories [language-codes] [--check-covered-intents]
```
If no language-code is passed, roboai-cli will assume you're working in a single-language bot (and thus the default Rasa structure).
The option --check-covered-intents will go through your stories file and check if the intents you have defined in the domain file are being covered in the dialogues. This command is more useful when you're deep in the development of your bot.
##### Checking for differences in a bot #####
After making all the necessary changes to your bots, you want to make sure that all bots (languages) are coherent between each other (i.e. the same stories.md file will work for the nlu.md and domain.yml files configured for the different languages.) To know whether your bot is achieving this, you can use the **diff** command.
```
roboai diff [language-codes] [--path <path>]
```
It will check for structural differences between the domain.yml and stories.md files for the same multi-language bot.
If no language codes are passed, then it'll pair all the languages found and check for differences between them.
##### Splitting the nlu #####
In case you want to split your nlu data, you can use the data command for that.
Simply run ```roboai data split nlu [language-code]``` and a new folder called train_test_split will be generated within the bot directory.
When training and testing the bot you can then pass these files as arguments.
##### Training a bot #####
You're now in a position to train the bot. To do so you only need to run the **train** command just as you would do in Rasa.
```
roboai train [language-codes] [--nlu --core --augmentation <value> --dev-config <path to config file> --force --debug --training-data-path <path-to-training-data-file>]
```
In case you want to pass a specific training data file you can use the train command in the following way:
```
roboai train en --training-data-path train_test_split/training_data.md
```
It will train the bot and store the model in the language sub-directory. If no language codes are passed, all bots will be trained.
Note: The **augmentation** and **force** options do not work in the case of NLU training.
##### Interacting with a bot #####
To interact with the bot, you can use the **shell** command. Before running it, you need to execute the **run actions** command.
```
roboai run actions [--debug]
```
After doing so, you can execute the shell command.
```
roboai shell [language-code] [--nlu] [--debug]
```
You need to specify what language (bot) you want to interact with - you can only interact with one bot at the time.
##### Testing a bot #####
Testing a bot is also probably in your pipeline. And this is possible with the **test** command.
```
roboai test [language-code] [--cross-validation --folds <nr-of-folds> --tet-data-path <path-to-testing-data-file>]
```
In case you want to pass a specific testing data file you can use the test command in the following way:
```
roboai test --training-data-path train_test_split/test_data.md
```
It'll test the bot with the conversation_tests.md file you have stored in your tests folder.
The results will be stored in the language sub-directory. Besides Rasa's default results, roboai-cli also produces an excel file with a confusion list of mistmatched intents.
##### Interactive learning #####
If you want to use Rasa's interactive learning mode you can do this by using the interactive command.
```
roboai interactive [language-code]
```
It'll launch an interactive session where you can provide feedback to the bot. At the end don't forget to
adjust the paths to where the new files should be saved.
By now you're probably ready to deploy your bot...
### Using roboai-cli to deploy a bot ###
##### Setting the target endpoint #####
Before doing any operation you must indicate to the tool in what environment you're working in,
for that you have the **environment** command:
The tool provides you with a default production environment in the ROBO.AI platform.
You can activate it by running:
```
roboai environment activate production
```
You can also create new environments by executing:
```
roboai environment create <environment name> --base-url <base-url> [--username <username> --password <password>]
```
The base-url refers to the environment URL and you can optionally pass a username
and password if your environment requires them.
i.e.:
```
roboai environment create development --base-url https://robo-core.my-robo-server.com --username m2m --password GgvJrZSCXger
```
After creating an environment, do not forget to activate it if you want to use it.
To know which environment is activated you can simply run:
```
roboai environment which
```
It's possible to check what environments are available in your configuration file by running:
```
roboai environment list
```
You can also remove environments by executing:
```
roboai environment remove <environment name>
```
##### Logging in #####
Once you have the desired environment activated, you need to login into the account you'd like to use by using
an API key.
1. Log-in into your ROBO.AI administration and generate an API key (do not forget to enable it).
2. Execute the login command and enter the API key.
i.e.:
```
roboai login --api-key=my-api-key
```
Or if you don't want to enter the api key in your command, you can enter it interactively by only executing:
```
roboai login
```
##### Initializing a bot #####
In order to manage a bot runtime, it needs to be initialized so the tool will know what bot this runtime
refers to. If you already have the Rasa bot initialized, just execute the following command:
```
roboai connect [language-code] --target-dir <path to rasa bot files>
```
i.e.:
```
roboai connect [language-code] --target-dir /path/to/rasa/bot
```
First it'll ask you to pick an existing bot (if it does not exist, you must create it before executing this step).
After doing it, it'll generate a new file called robo-manifest.json which contains meta-information about the deployment
and the target bot.
**Note:** if no language-code is provided, it's assumed that you're working with the default Rasa structure.
##### Deploying a bot #####
When your bot is ready for deployment, you must train it first and ensure you're in the bot root directory. You can then execute:
```
roboai deploy [language-code] --model <path to model file>
```
It'll package your bot files and upload them to the ROBO.AI platform, starting a new deployment. This step may take some time.
If you want you can pass the path to the model you want to deploy. If no model path is passed then the most recent one will be picked up.
**Note:** if no language-code is provided, it's assumed that you're working with the default Rasa structure.
##### Checking a bot status #####
If you want to check your bot status, just run the following command from the same directory as of your robo-manifest.json
```
roboai status [language-code]
```
**Note:** if no language-code is provided, it's assumed that you're working with the default Rasa structure.
##### Removing a bot #####
If you need to remove a bot, execute the following command from the bot root directory:
```
roboai remove [language-code]
```
**Note:** if no language-code is provided, it's assumed that you're working with the default Rasa structure.
##### Checking a deployed bot logs #####
Sometimes it's useful to have a look into the logs, for that you need to execute:
```
roboai logs [language-code]
```
It'll show you the latest 1000 lines from that rasa bot logs.
**Note:** if no language-code is provided, it's assumed that you're working with the default Rasa structure.
### Using roboai-cli to export and import data ###
##### Export data #####
If you require to export your bot's data you may use the data command for that end.
You can run
```
roboai data export [nlu/responses/all] --input-path <bot-root-dir> --output-path <path-to-where-you-want-to-save-the-file>
```
If you opt to export only the nlu or the responses, an excel file will be generated with this content. If you wish to export both, use the 'all' option and an excel file with both the nlu and responses will be generated.
##### Import data #####
To import the data back from excel to markdown/yaml, the data command is what you're looking for.
You can run
```
roboai data import [nlu/responses/all] --input-path <path-where-your-file-is-saved> --output-path <path-to-where-you-want-to-save-the-file>
```
This will generate markdown and yaml files containing the nlu and responses content, respectively.
## Code Style
We use [Google Style Python Docstrings](https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings).
## Releases
We follow [Semantic Versioning 2.0.0](https://semver.org/) standards.
Given a version number MAJOR.MINOR.PATCH, increment the:
- MAJOR version when you make incompatible API changes,
- MINOR version when you add functionality in a backwards compatible manner, and
- PATCH version when you make backwards compatible bug fixes.
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format. | /roboai_cli-1.1.2-py3-none-any.whl/README.md | 0.486088 | 0.931213 | README.md | pypi |
import os
import logging
from typing import Any, Dict, Optional, Text
from rasa.nlu import utils
from rasa.nlu.classifiers.classifier import IntentClassifier
from rasa.nlu.constants import INTENT
from rasa.utils.common import raise_warning
from rasa.nlu.config import RasaNLUModelConfig
from rasa.nlu.training_data import TrainingData
from rasa.nlu.model import Metadata
from rasa.nlu.training_data import Message
logger = logging.getLogger(__name__)
class ExactMatchClassifier(IntentClassifier):
"""Intent classifier using simple exact matching.
The classifier takes a list of keywords and associated intents as an input.
A input sentence is checked for the keywords and the intent is returned.
"""
defaults = {"case_sensitive": True}
def __init__(
self,
component_config: Optional[Dict[Text, Any]] = None,
intent_keyword_map: Optional[Dict] = None,
):
super(ExactMatchClassifier, self).__init__(component_config)
self.case_sensitive = self.component_config.get("case_sensitive")
self.intent_keyword_map = intent_keyword_map or {}
def train(
self,
training_data: TrainingData,
config: Optional[RasaNLUModelConfig] = None,
**kwargs: Any,
) -> None:
for ex in training_data.training_examples:
self.intent_keyword_map[ex.text] = ex.get(INTENT)
def process(self, message: Message, **kwargs: Any) -> None:
intent_name = self._map_keyword_to_intent(message.text)
confidence = 0.0 if intent_name is None else 1.0
intent = {"name": intent_name, "confidence": confidence}
if message.get(INTENT) is None or intent is not None:
message.set(INTENT, intent, add_to_output=True)
def _map_keyword_to_intent(self, text: Text) -> Optional[Text]:
for keyword, intent in self.intent_keyword_map.items():
if keyword.strip() == text.strip(): # re.search(r"\b" + keyword + r"\b", text, flags=re_flag):
logger.debug(
f"ExactMatchClassifier matched keyword '{keyword}' to"
f" intent '{intent}'."
)
return intent
logger.debug("ExactMatchClassifier did not find any keywords in the message.")
return None
def persist(self, file_name: Text, model_dir: Text) -> Dict[Text, Any]:
"""Persist this model into the passed directory.
Return the metadata necessary to load the model again.
"""
file_name = file_name + ".json"
keyword_file = os.path.join(model_dir, file_name)
utils.write_json_to_file(keyword_file, self.intent_keyword_map)
return {"file": file_name}
@classmethod
def load(
cls,
meta: Dict[Text, Any],
model_dir: Optional[Text] = None,
model_metadata: Metadata = None,
cached_component: Optional["ExactMatchClassifier"] = None,
**kwargs: Any,
) -> "ExactMatchClassifier":
if model_dir and meta.get("file"):
file_name = meta.get("file")
keyword_file = os.path.join(model_dir, file_name)
if os.path.exists(keyword_file):
intent_keyword_map = utils.read_json_file(keyword_file)
else:
raise_warning(
f"Failed to load key word file for `IntentKeywordClassifier`, "
f"maybe {keyword_file} does not exist?"
)
intent_keyword_map = None
return cls(meta, intent_keyword_map)
else:
raise Exception(
f"Failed to load keyword intent classifier model. "
f"Path {os.path.abspath(meta.get('file'))} doesn't exist."
) | /roboai_cli-1.1.2-py3-none-any.whl/roboai_cli/initial_structure/initial_project/custom/components/exact_match_classifier/exact_match_classifier.py | 0.787605 | 0.227351 | exact_match_classifier.py | pypi |
from requests import get, post
from typing import Any, Text, Dict, List
from rasa_sdk import Action, Tracker
from rasa_sdk.events import SlotSet
from rasa_sdk.executor import CollectingDispatcher
class ParlaiAPI:
def create_world(self):
url = "http://158.177.139.34:8081/create_world"
world_id = get(url=url).json()["world_id"]
return world_id
def world_exists(self, world_id):
url = f"http://158.177.139.34:8081/world_exists/{world_id}"
return get(url=url).json()["world_exists"]
def generate_chitchat(self, world_id, user_message):
url = "http://158.177.139.34:8081/generate_chitchat"
user_message_json = {"world_id": world_id, "user_message": user_message}
return_message = post(url=url, json=user_message_json).json()["chitchat_message"]
print(return_message)
return return_message
class ActionParlaiFallback(Action):
def name(self):
return "action_parlai_fallback"
def run(
self,
dispatcher: CollectingDispatcher,
tracker: Tracker,
domain: Dict[Text, Any],
) -> List[Dict[Text, Any]]:
parlai_api = ParlaiAPI()
if tracker.get_slot("parlai_world_created"):
world_id = tracker.get_slot("parlai_world")
if not parlai_api.world_exists(world_id):
print("PARLAI world doesn't exist yet. Creating world...")
world_id = parlai_api.create_world()
print("World created. The ID is {}".format(world_id))
else:
print("PARLAI world is already created. The ID is {}".format(world_id))
else:
print("PARLAI world doesn't exist yet. Creating world...")
world_id = parlai_api.create_world()
print("World created. The ID is {}".format(world_id))
print("user message: {}, type: {}".format(tracker.latest_message["text"], type(tracker.latest_message["text"])))
return_message = parlai_api.generate_chitchat(world_id, tracker.latest_message["text"])
json_response = {"answers": [{"ssml": "<speak> " + return_message + " </speak>", "type": "ssml"},
{"text": return_message, "type": "text"}]}
dispatcher.utter_message(json_message=json_response)
return [
SlotSet("parlai_world_created", True),
SlotSet("parlai_world", world_id)
] | /roboai_cli-1.1.2-py3-none-any.whl/roboai_cli/initial_structure/initial_project/actions/action_parlai_fallback.py | 0.50293 | 0.161883 | action_parlai_fallback.py | pypi |
from os.path import join
import subprocess
import sys
import click
from roboai_cli.util.cli import print_info
def get_index(lista: list, value: str) -> int:
"""
Get the list index of a given value if it exists.
Args:
lista (list): list to get the index from
value (str): value to search in the list
Returns:
i: index of the value in the list
"""
i = 0
length = len(lista)
while i < length and lista[i]["name"] != value:
i = i + 1
if i == length:
raise("value not found")
else:
return i
def clean_intents(intents: list) -> list:
"""
Some intents may trigger actions which means they will be dict instead of strings.
This method parses those dicts and returns the list of intents.
Args:
intents (list): list of intents taken from the domain
Example: [{"start_dialogue": {"triggers": "action_check_Bot_Introduced"}}, "greeting", ]
Returns:
list: list of intents without dicts
Example: ["start_dialogue", "greeting", ]
"""
for i, intent in enumerate(intents):
if isinstance(intent, dict):
intents[i] = list(intent.keys())[0]
return intents
def user_proceed(message: str):
return click.confirm(message)
def check_installed_packages(path: str) -> bool:
"""
Check for installed packages.
The caveat of this method is that for instance, packages installed from a git repo may not match the ones in the requirements file.
The user is asked if they want to continue with the process or double check their packages.
Args:
path (str): bot root dir
Returns:
bool: flag indicating whether process should continue or not
"""
try:
with open(join(path, "requirements.txt"), "r") as f:
bot_requirements = f.readlines()
print_info("Checking if packages in requirements.txt are installed.")
except Exception:
return True
bot_requirements = [r.split("==")[0] for r in bot_requirements]
reqs = subprocess.check_output([sys.executable, "-m", "pip", "freeze"])
installed_packages = [r.decode().split("==")[0] for r in reqs.split()]
if all(item in installed_packages for item in bot_requirements):
return True
else:
missing_packages = [item for item in bot_requirements if item not in installed_packages]
print("Couldn't find the following packages: \n")
print(*missing_packages, sep="\n")
print("This might be because it's a package installed from a git repo and it doesn't match the requirements.")
return user_proceed("Do you want to proceed?\n") | /roboai_cli-1.1.2-py3-none-any.whl/roboai_cli/util/helpers.py | 0.42656 | 0.321274 | helpers.py | pypi |
import click
import os
from os.path import join, abspath
from datetime import datetime
from roboai_cli.util.cli import print_info
from roboai_cli.util.helpers import check_installed_packages
@click.command(name="train", help="Train Rasa models for the required bots.")
@click.argument("languages", nargs=-1,)
@click.option("--dev-config", default="config.yml", type=str,
help="Name of the config file to be used. If this is not passed, default 'config.yml' is used.")
@click.option("--nlu", "-n", "nlu", is_flag=True,
help="Train exclusively the RASA nlu model for the given languages")
@click.option("--core", "-c", "core", is_flag=True,
help="Train exclusively the RASA core model for the given languages")
@click.option("--augmentation", "augmentation", default=50, help="How much data augmentation to use during training. \
(default: 50)")
@click.option("--force", "-f", "force", is_flag=True, default=False,
help="Force a model training even if the data has not changed. (default: False)")
@click.option("--debug", "-vv", "debug", is_flag=True, default=False,
help="Print lots of debugging statements. Sets logging level")
@click.option("--training-data-path", default=None, type=click.Path(), help="Path to where training data is stored.")
def command(languages: tuple,
dev_config: str,
nlu: bool, core: bool,
augmentation: int,
force: bool,
debug: bool,
training_data_path: str):
"""
Wrapper of rasa train for multi-language bots.
Args:
languages: language code of the bots to be trained
dev_config (str): Name of the config file to be used. If this is not passed, default config.yml is used.
nlu (bool): flag indicating whether only NLU should be trained
core (bool): flag indicating whether only core should be trained
augmentation (int): augmentation option
force (bool): flag indicating whether training should be forced
debug (bool): flag indicating whether debug mode is enabled
training_data_path (str): Path to training data in case you have split it before.
"""
path = abspath(".")
if check_installed_packages(path):
languages_paths = get_all_languages(path=path, languages=languages)
if nlu:
train_nlu(path, languages_paths, dev_config, force, debug, training_data_path)
elif core:
train_core(path, languages_paths, augmentation, dev_config, force, debug)
else:
train(path, languages_paths, augmentation, dev_config, force, debug, training_data_path)
def train(path: str, languages_paths: list, augmentation: int, dev_config: str, force: bool, debug: bool, training_data_path: str):
timestamp = datetime.now().strftime("%Y%m%d-%H%M%S")
stories_path = join(path, "languages", "stories.md")
for language_path in languages_paths:
lang = os.path.basename(language_path)
os.system(f"rasa train --config {join(language_path, dev_config)} --domain {join(language_path, 'domain.yml')} \
--data {join(language_path, 'data') if not training_data_path else join(language_path, training_data_path)} {stories_path} \
--augmentation {augmentation} {'--force' if force else ''} \
{'--debug' if debug else ''} --out {join(language_path, 'models')} \
--fixed-model-name model-{lang}-{timestamp}")
def train_nlu(path: str, languages_paths: list, dev_config: str, force: bool, debug: bool, training_data_path: str):
timestamp = datetime.now().strftime("%Y%m%d-%H%M%S")
for language_path in languages_paths:
lang = os.path.basename(language_path)
os.system(f"rasa train nlu --nlu {join(language_path, 'data') if not training_data_path else join(language_path, training_data_path)} \
--config {join(language_path, dev_config)} \
{'--debug' if debug else ''} --out {join(language_path, 'models')} \
--fixed-model-name nlu-model-{lang}-{timestamp}")
def train_core(path: str, languages_paths: list, augmentation: int, dev_config: str, force: bool, debug: bool):
timestamp = datetime.now().strftime("%Y%m%d-%H%M%S")
stories_path = join(path, 'languages', 'stories.md')
for language_path in languages_paths:
lang = os.path.basename(language_path)
os.system(f"rasa train core --domain {join(language_path, 'domain.yml')} --stories {stories_path} \
--augmentation {augmentation} --config {join(language_path, dev_config)} {'--force' if force else ''} \
{'--debug' if debug else ''} --out {join(language_path, 'models')} --fixed-model-name core-model-{lang}-{timestamp}")
def get_all_languages(path: str, languages: tuple):
if len(languages) == 0:
print_info("No language was provided. Will train all available languages inside provided bot folder.")
languages_paths = [join(path, "languages", folder) for folder in os.listdir(join(path, "languages"))
if os.path.isdir(os.path.join(path, "languages", folder))]
else:
languages_paths = [join(path, "languages", folder) for folder in os.listdir(join(path, "languages"))
if os.path.isdir(os.path.join(path, "languages", folder)) and folder in languages]
return languages_paths
if __name__ == "__main__":
command() | /roboai_cli-1.1.2-py3-none-any.whl/roboai_cli/commands/train.py | 0.464173 | 0.187411 | train.py | pypi |
import json
from os import listdir, system
from os.path import abspath, join, exists, isdir, basename
import tempfile
import shutil
import yaml
import ast
import click
import pandas as pd
from roboai_cli.util.cli import print_info
@click.command(name="data", help="Utility command to split, export and import data.")
@click.argument("utility", nargs=1)
@click.argument("option", nargs=1)
@click.argument("languages", nargs=-1)
@click.option("--input-path", default=None, type=click.Path(), help="Optional input path")
@click.option("--output-path", default=None, type=click.Path(), help="Optional output path")
def command(utility: tuple, option: tuple, languages: tuple, input_path: str, output_path: str):
"""
Utility command to split, export and import data.
Args:
utility (tuple): can be 'split' to split data, 'export' to export data and 'import' to import data
option (tuple): can be 'nlu', 'domain' or 'all'. It depends on the utility argument.
languages (tuple): bot language to split, export or import data from.
input_path (str): optional input path where files are stored.
output_path (str): optional output path where files are to be stored.
"""
if (utility == "export" or utility == "import") and (input_path is None or output_path is None):
print("To use the import and export utilities you need to provide an input and output path.\n"
"For the export utility, make sure the input path refers to the bot root directory.")
exit(0)
elif utility == "split" and (input_path is None or output_path is None):
input_path = "."
if len(languages) == 0:
if exists(join(abspath(input_path), "languages")):
bot_dir = get_all_languages(path=abspath(input_path), languages=languages)
multi_language_bot = True
else:
bot_dir = [abspath(".")]
multi_language_bot = False
else:
bot_dir = get_all_languages(path=abspath(input_path), languages=languages)
multi_language_bot = True
if utility == "split":
if option == "nlu":
split_nlu(bot_dir, multi_language_bot)
else:
print("Please select a valid option.")
exit(0)
elif utility == "export":
if option == "all":
export_all(bot_dir, output_path)
elif option == "nlu":
export_nlu(bot_dir, output_path)
elif option == "responses":
export_responses(bot_dir, output_path)
else:
print("Please select a valid element to export. It can be either 'nlu', 'responses' or 'all' to export both.")
exit(0)
elif utility == "import":
if option == "all":
import_all(input_path, output_path)
elif option == "nlu":
import_nlu(input_path, output_path)
elif option == "responses":
import_responses(input_path, output_path)
else:
print("Please select a valid element to import. It can be either 'nlu', 'responses' or 'all' to import both.")
exit(0)
elif utility == "analysis":
pass
else:
print("Please select a valid utility option.")
exit(0)
def split_nlu(bot_dir: list, multi_language_bot: bool):
"""
Run rasa command to split the nlu file
Args:
bot_dir (list): list of paths respective to each bot
multi_language_bot (bool): flag indicating whether the bot is multi language or not
"""
for language in bot_dir:
if multi_language_bot:
system(f"rasa data split nlu -u {join(language, 'data')} --out {join(language, 'train_test_split')}")
else:
system(f"rasa data split nlu -u {join(language, 'data')}")
def convert_nlu_to_df(input_path: str) -> pd.DataFrame:
"""
Convert nlu file from markdown to a pandas DataFrame.
Intents and respective examples are kept. Regex features, lookup tables and entity synonyms are discarded.
Args:
input_path (str): input path to read the nlu file from
Returns:
pd.DataFrame: pandas DataFrame containing the nlu content
"""
tmp_dir = tempfile.mkdtemp()
tmp_file = "nlu.json"
try:
system(f"rasa data convert nlu --data {join(input_path, 'data', 'nlu.md')} \
-f json --out {join(tmp_dir, tmp_file)}")
except Exception:
print("It was not possible to find an NLU file. Make sure the path you provided refers to a bot root directory.")
exit(0)
with open(join(tmp_dir, tmp_file), "r", encoding="utf-8") as f:
nlu_json = json.load(f)
shutil.rmtree(tmp_dir)
nlu_df = pd.DataFrame(nlu_json["rasa_nlu_data"]["common_examples"])[["intent", "text"]]
return nlu_df
def convert_responses_to_df(input_path: str) -> pd.DataFrame:
"""
Convert domain file to a pandas DataFrame.
Responses are kept, all other information like slots, list of intents, etc are discarded.
Args:
input_path (str): input path to read the domain file from
Returns:
pd.DataFrame: pandas DataFrame containing the domain responses
"""
try:
with open(join(input_path, "domain.yml")) as f:
domain = yaml.load(f, Loader=yaml.FullLoader)
except Exception:
print("It was not possible to find a domain file. Make sure the path you provided refers to the bot root directory.")
exit(0)
responses = domain.get("responses", None)
if not responses:
print("No responses were found in the domain file.")
exit(0)
response_df = pd.DataFrame()
i = 0
for response_key, response_value in responses.items():
for option in response_value:
for answer in option["custom"]["answers"]:
response_df.loc[i, "response_name"] = response_key
if answer["type"] == "html":
response_df.loc[i, "html"] = answer["text"]
elif answer["type"] == "text":
response_df.loc[i, "text"] = answer["text"]
elif answer["type"] == "ssml":
response_df.loc[i, "ssml"] = answer["ssml"]
elif answer["type"] == "hints":
response_df.loc[i, "hints"] = str(answer["options"])
elif answer["type"] == "multichoice":
response_df.loc[i, "multichoice"] = str(answer["options"])
else:
response_df.loc[i, answer["type"]] = str(answer)
i += 1
return response_df
def export_nlu(bot_dir: list, output_path: str):
"""
Export nlu DataFrame to a given output path.
Args:
bot_dir (list): list of paths respective to each bot
output_path (str): output path where the nlu should be stored
"""
for language in bot_dir:
nlu_df = convert_nlu_to_df(language)
lang = basename(language) if basename(language) != "bot" else "bot"
export_component_to_excel(nlu_df, output_path, "nlu-" + lang + ".xlsx", "NLU")
def export_responses(bot_dir: list, output_path: str):
"""
Export domain DataFrame to a given output path.
Args:
bot_dir (list): list of paths respective to each bot
output_path (str): output path where the domain should be stored
"""
for language in bot_dir:
domain_df = convert_responses_to_df(language)
lang = basename(language) if basename(language) != "bot" else "bot"
export_component_to_excel(domain_df, output_path, "responses-" + lang + ".xlsx", "Responses")
def export_component_to_excel(component_df: pd.DataFrame, output_path: str, filename: str, component: str):
"""
Export component (NLU/responses) to excel.
Args:
component_df (pd.DataFrame): dataframe containing the component's content
output_path (str): path where the excel should be saved to
filename (str): file name
component (str): component that is being exported
"""
with pd.ExcelWriter(
join(output_path, filename), engine="xlsxwriter"
) as xlsx_writer:
component_df.to_excel(excel_writer=xlsx_writer, sheet_name=component, index=False)
worksheet = xlsx_writer.sheets[component]
for i, col in enumerate(component_df.columns):
column_len = max(component_df[col].astype(str).str.len().max(), len(col) + 2)
worksheet.set_column(i, i, column_len)
def export_all(bot_dir: list, output_path: str):
"""
Export both nlu and domain to one single output file.
Args:
bot_dir (list): list of paths respective to each bot
output_path (str): output path where both the nlu and domain should be stored
"""
for language in bot_dir:
lang = basename(language) if basename(language) != "bot" else "bot"
with pd.ExcelWriter(join(output_path, lang + "-content.xlsx"), engine="xlsxwriter") as xlsx_writer:
domain_df = convert_responses_to_df(language)
domain_df.to_excel(excel_writer=xlsx_writer, index=False, sheet_name="Responses")
worksheet = xlsx_writer.sheets["Responses"]
for i, col in enumerate(domain_df.columns):
column_len = max(domain_df[col].astype(str).str.len().max(), len(col) + 2)
worksheet.set_column(i, i, column_len)
nlu_df = convert_nlu_to_df(language)
nlu_df.to_excel(excel_writer=xlsx_writer, index=False, sheet_name="NLU")
worksheet = xlsx_writer.sheets["NLU"]
for i, col in enumerate(nlu_df.columns):
column_len = max(nlu_df[col].astype(str).str.len().max(), len(col) + 2)
worksheet.set_column(i, i, column_len)
def import_nlu(input_path: str, output_path: str):
"""
Import nlu file back to markdown
Args:
input_path (str): path where to read nlu file from
output_path (str): path to store converted nlu
"""
nlu_df = _read_nlu_df(input_path)
nlu_dict = convert_nlu_df_to_dict(nlu_df)
for intent, examples in nlu_dict.items():
with open(join(output_path, "nlu_converted.md"), "+a") as f:
f.write(f"## intent:{intent}\n")
for example in examples:
f.write(f"- {example}\n")
f.write("\n")
def import_responses(input_path: str, output_path: str):
"""
Import responses back to yaml
Args:
input_path (str): path where to read the responses file from
output_path (str): path to store converted responses
"""
response_df = _read_response_df(input_path)
response_dict = convert_response_df_to_dict(response_df)
with open(join(output_path, "responses_converted.yml"), "w", encoding="utf-8") as outfile:
yaml.dump(response_dict, stream=outfile, allow_unicode=True)
def import_all(input_path: str, output_path: str):
"""
Import both nlu and responses
Args:
input_path (str): path where to read nlu and responses from
output_path (str): path to where converted nlu and responses should be stored
"""
response_df = _read_response_df(input_path)
response_dict = convert_response_df_to_dict(response_df)
with open(join(output_path, "responses_converted.yml"), "w", encoding="utf-8") as outfile:
yaml.dump(response_dict, stream=outfile, allow_unicode=True)
nlu_df = _read_nlu_df(input_path)
nlu_dict = convert_nlu_df_to_dict(nlu_df)
for intent, examples in nlu_dict.items():
with open(join(output_path, "nlu_converted.md"), "+a") as f:
f.write(f"## intent:{intent}\n")
for example in examples:
f.write(f"- {example}\n")
f.write("\n")
def _read_response_df(input_path: str) -> pd.DataFrame:
"""
Read excel containing the responses from the file system
Args:
input_path (str): path where the excel containing the responses is stored
Returns:
pd.DataFrame: DataFrame containing the responses
"""
try:
response_df = pd.read_excel(input_path, sheet_name="Responses", engine="openpyxl")
except Exception:
print("It was not possible to read the file you provided. Make sure the file exists and it contains a Responses tab.")
exit(0)
return response_df
def _read_nlu_df(input_path: str) -> pd.DataFrame:
"""
Read excel containing the nlu from the file system
Args:
input_path (str): path where the excel containing the nlu is stored
Returns:
pd.DataFrame: DataFrame containing the nlu
"""
try:
nlu_df = pd.read_excel(input_path, sheet_name="NLU", engine="openpyxl")
except Exception:
print("It was not possible to read the file you provided. Make sure the file exists and it contains an NLU tab.")
exit(0)
return nlu_df
def convert_response_df_to_dict(response_df: pd.DataFrame) -> dict:
"""
Convert responses DataFrame to dict
Args:
response_df (pd.DataFrame): DataFrame containing the responses
Returns:
dict: dictionary containing the reponses
"""
column_names = response_df.columns.tolist()[1:]
domain_dict = {k: [] for k in response_df["response_name"].unique().tolist()}
for i, row in response_df.iterrows():
custom_dict = {"custom": {"answers": []}}
for col in column_names:
if pd.notna(row[col]):
if col == "html":
custom_dict["custom"]["answers"].append({"type": "html", "text": row["html"]})
elif col == "text":
custom_dict["custom"]["answers"].append({"type": "text", "text": row["text"]})
elif col == "ssml":
custom_dict["custom"]["answers"].append({"type": "ssml", "ssml": row["ssml"]})
elif col == "hints":
custom_dict["custom"]["answers"].append({"type": "hints",
"options": ast.literal_eval(row["hints"])})
elif col == "multichoice":
custom_dict["custom"]["answers"].append({"type": "multichoice",
"options": ast.literal_eval(row["multichoice"])})
else:
custom_dict["custom"]["answers"].append(ast.literal_eval(row[col]))
domain_dict[row["response_name"]].append(custom_dict)
return domain_dict
def convert_nlu_df_to_dict(nlu_df: pd.DataFrame) -> dict:
"""
Convert nlu DataFrame to dict
Args:
nlu_df (pd.DataFrame): DataFrame containing the nlu
Returns:
dict: dictionary containing the nlu
"""
intents_dict = {}
unique_intents = nlu_df["intent"].unique().tolist()
for intent in unique_intents:
examples = nlu_df[nlu_df["intent"] == intent]["text"].tolist()
intents_dict[intent] = examples
return intents_dict
def get_all_languages(path: str, languages: tuple) -> list:
if len(languages) == 0:
_inform_language()
languages_paths = [
join(path, "languages", folder)
for folder in listdir(join(path, "languages"))
if isdir(join(path, "languages", folder))
]
else:
languages_paths = [
join(path, "languages", folder)
for folder in listdir(join(path, "languages"))
if isdir(join(path, "languages", folder)) and folder in languages
]
return languages_paths
def _inform_language() -> None:
"""
Auxiliary method to inform the user no languages were passed when executing the data command.
"""
print_info(
"No language was provided but a multi-language bot was detected. "
"Will gather all available languages inside provided bot folder.\n"
)
if __name__ == "__main__":
command() | /roboai_cli-1.1.2-py3-none-any.whl/roboai_cli/commands/data.py | 0.449634 | 0.205037 | data.py | pypi |
import math
import os
from os.path import dirname, isfile, join
from shutil import copyfile
import click
import pkg_resources
from questionary import Choice, prompt
from robo_ai.model.assistant.assistant import Assistant
from robo_ai.model.assistant.assistant_list_response import (
AssistantListResponse,
)
from roboai_cli.config.bot_manifest import BotManifest
from roboai_cli.config.tool_settings import ToolSettings
from roboai_cli.util.cli import (
loading_indicator,
print_message,
print_success,
)
from roboai_cli.util.robo import (
get_robo_client,
get_supported_base_versions,
validate_robo_session,
)
@click.command(name="connect", help="Connect a local bot to a ROBO.AI server bot instance.",)
@click.argument("language", nargs=-1,)
@click.option("--bot-uuid", default=None, type=str, help="The bot UUID to assign to the bot implementation.",)
@click.option("--target-dir", default=None, type=str, help="The target directory where the bot will be setup.",)
def command(
language: tuple,
bot_uuid: str,
target_dir: str,
base_version: str = "rasa-1.10.0",
):
"""
Connect a local bot to a ROBO.AI server bot instance.
This instance must be already created in the ROBO.AI platform.
This command will generate a JSON file (robo-manifest) with metadata about the bot to be deployed.
Args:
language: language code of the bot to be connected.
bot_uuid (str): optional argument stating the ID of the bot.
target_dir (str): optional argument stating where the robo-manifest file should be stored.
base_version (str): optional argument stating the engine base version. Defaults to 'rasa-1.10.0'
"""
validate_robo_session()
if not bot_uuid:
bot_uuid = get_bot_uuid()
# validate_bot(bot_uuid)
if len(language) == 0:
bot_dir = bot_ignore_dir = (
os.path.abspath(target_dir) if target_dir else create_implementation_directory(bot_uuid)
)
elif len(language) == 1:
bot_dir = (
os.path.abspath(join(target_dir, "languages", language[0]))
if target_dir
else create_implementation_directory(bot_uuid)
) # TODO check what this does
bot_ignore_dir = join(dirname(dirname(bot_dir)))
else:
print_message("Please select only one bot to connect to.")
exit(0)
print_message("Bot target directory: {0}".format(bot_dir))
create_bot_manifest(bot_dir, bot_uuid, base_version)
create_bot_ignore(bot_ignore_dir)
print_success("The bot was successfully initialized.")
def get_bots(page: int) -> AssistantListResponse:
"""
Retrieves the list of bots from the ROBO.AI platform.
Args:
page (int): current page.
Returns:
AssistantListResponse: list of bots available in the current page.
"""
settings = ToolSettings()
current_environment = settings.get_current_environment()
robo = get_robo_client(current_environment)
return robo.assistants.get_list(page)
def get_bot_choice(bot: Assistant) -> dict:
"""
Get bot name and ID in a dictionary.
Args:
bot (Assistant): Assistant object to get details from.
Returns:
dict: dictionary with assistant's name and ID.
"""
return {
"name": bot.name,
"value": bot.uuid,
}
def get_bot_uuid() -> str:
"""
Show bot options to the user, returns the selected bot ID.
Returns:
bot_uuid (str): ID of the selected bot.
"""
NEXT_PAGE = "__NEXT_PAGE__"
PREV_PAGE = "__PREV_PAGE__"
NONE = "__NONE__"
META_RESPONSES = [NEXT_PAGE, PREV_PAGE, NONE]
bot_uuid = NONE
page = 0
while bot_uuid in META_RESPONSES:
if bot_uuid == NEXT_PAGE:
page += 1
if bot_uuid == PREV_PAGE:
page -= 1
with loading_indicator("Loading bot list..."):
bots = get_bots(page)
bot_choices = list(map(get_bot_choice, bots.content))
page_count = math.ceil(bots.totalElements / bots.size)
if page < page_count - 1:
bot_choices.append({"name": "> Next page...", "value": NEXT_PAGE})
if page > 0:
bot_choices.append({"name": "> Previous page...", "value": PREV_PAGE})
questions = [
{
"type": "list",
"name": "bot_id",
"message": "Please select the bot you would like to implement:",
"choices": [
Choice(title=bot["name"], value=bot["value"])
if (bot["value"] == NEXT_PAGE or bot["value"] == PREV_PAGE)
else Choice(
title=str(bot["name"] + " (" + bot["value"] + ")"),
value=bot["value"],
)
for bot in bot_choices
],
},
]
answers = prompt(questions)
bot_uuid = answers["bot_id"]
return bot_uuid
def get_base_version() -> str:
"""
Show runtime options to the user.
Returns:
base_version (str): selected base version
"""
with loading_indicator("Loading base version list..."):
versions = get_supported_base_versions()
version_choices = [{"value": base_version["version"], "name": base_version["label"]} for base_version in versions]
questions = [
{
"type": "list",
"name": "base_version",
"message": "Please select the bot runtime version:",
"choices": version_choices,
},
]
answers = prompt(questions)
base_version = answers["base_version"]
return base_version
def create_implementation_directory(bot_uuid: str) -> str:
cwd = os.getcwd()
bot_dir = cwd + "/" + bot_uuid
os.mkdir(bot_dir)
return bot_dir
def create_bot_manifest(bot_dir: str, bot_uuid: str, base_version: str):
with loading_indicator("Creating bot manifest file..."):
manifesto = BotManifest(bot_dir)
manifesto.set_bot_id(bot_uuid)
manifesto.set_base_version(base_version)
def create_bot_ignore(bot_ignore_dir: str):
if not isfile(join(bot_ignore_dir, ".botignore")):
copyfile(get_bot_ignore_path(), join(bot_ignore_dir, ".botignore"))
def get_bot_ignore_path() -> str:
return pkg_resources.resource_filename(__name__, "../initial_structure/initial_project/.botignore")
if __name__ == "__main__":
command() | /roboai_cli-1.1.2-py3-none-any.whl/roboai_cli/commands/connect.py | 0.483648 | 0.150684 | connect.py | pypi |
import numpy
import math
class Quaternion:
def __init__(self, scalar: float = 1.0, vector=None):
if vector is None:
vector = [1, 1, 1]
self._scalar = round(scalar, 3)
self._vector = numpy.array(vector)
@classmethod
def fromAngleAndVector(cls, angle: float = 0.0, v=None):
if v is None:
v = [1, 1, 1]
_scalar = math.cos(angle/2)
_vector = numpy.array(v) * math.sin(angle/2)
return cls(_scalar, _vector)
def __mul__(self, other):
scalar = self._scalar * other.scalar - numpy.dot(self.vector.transpose(), other.vector)
vector = self._scalar * other.vector + other.scalar * self.vector + numpy.cross(self.vector, other.vector)
return Quaternion(scalar, vector)
def __add__(self, other):
scalar = self._scalar + other.scalar
vector = self._vector + other.vector
return Quaternion(scalar, vector)
def __sub__(self, other):
scalar = self._scalar - other.scalar
vector = self._vector - other.vector
return Quaternion(scalar, vector)
def scalarMulti(self, other) -> float:
return (other.scalar * self.scalar + other.x * self.x + other.y * self.y + other.z * self.z) \
/ (self.norma() * other.norma())
def multiToScalar(self, s):
scalar = self._scalar * s
vector = self._vector * s
return Quaternion(scalar, vector)
def toRotateMatrix(self):
a11 = 1 - 2 * self.y ** 2 - 2 * self.z ** 2
a12 = 2 * self.x * self.y - 2 * self.z * self.scalar
a13 = 2 * self.x * self.z + 2 * self.y * self.scalar
a21 = 2 * self.x * self.y + 2 * self.z * self.scalar
a22 = 1 - 2 * self.x ** 2 - 2 * self.z ** 2
a23 = 2 * self.y * self.z - 2 * self.x * self.scalar
a31 = 2 * self.x * self.z - 2 * self.y * self.scalar
a32 = 2 * self.y * self.z + 2 * self.x * self.scalar
a33 = 1 - 2 * self.x ** 2 - 2 * self.y ** 2
R = numpy.array([[a11, a12, a13],
[a21, a22, a23],
[a31, a32, a33]])
return R
def norma(self):
return math.sqrt(self._scalar ** 2 + numpy.dot(self._vector.transpose(), self._vector))
def isUnit(self):
return round(self.norma()) == 1
@property
def scalar(self):
return self._scalar
@scalar.setter
def scalar(self, value):
self._scalar = value
@property
def vector(self):
return self._vector
@vector.setter
def vector(self, value):
self._vector = value
@property
def x(self):
return self._vector[0]
@x.setter
def x(self, value):
self._vector[0] = value
@property
def y(self):
return self._vector[1]
@y.setter
def y(self, value):
self._vector[1] = value
@property
def z(self):
return self._vector[2]
@z.setter
def z(self, value):
self._vector[2] = value | /robobase_package-1.0.0-py3-none-any.whl/robobase/quaternion.py | 0.908594 | 0.557303 | quaternion.py | pypi |
class Point:
def __init__(self, coords=None):
if coords is None:
coords = [0, 0]
self.x = coords[0]
self.y = coords[1]
def coord(self):
return [self.x, self.y]
class Triangle:
def __init__(self, points=None):
if points is None:
points = [Point(), Point(), Point()]
self.point_1 = points[0]
self.point_2 = points[1]
self.point_3 = points[2]
def isIntersection(self, other):
if self.checkSeparate(self.linePointPairs(), other.points()):
return False
if self.checkSeparate(other.linePointPairs(), self.points()):
return False
return True
@classmethod
def checkSeparate(cls, pairs, points):
for s_p in pairs:
duration = cls.isClockWise(s_p[0], s_p[1], s_p[2])
separate = True
for p in points:
if duration == cls.isClockWise(s_p[0], s_p[1], p) or cls.isLine(s_p[0], s_p[1], p):
separate = False
break
if separate:
return True
return False
def points(self):
return [self.point_1, self.point_2, self.point_3]
def edges(self):
return [[self.point_1, self.point_2], [self.point_2, self.point_3], [self.point_3, self.point_1]]
def linePointPairs(self):
return [[self.point_1, self.point_2, self.point_3],
[self.point_2, self.point_3, self.point_1],
[self.point_3, self.point_1, self.point_2]]
@staticmethod
def durationRotate(first: Point, second: Point, third: Point):
return (second.x - first.x) * (third.y - first.y) - (second.y - first.y) * (third.x - first.x)
@classmethod
def isClockWise(cls, first: Point, second: Point, third: Point):
return cls.durationRotate(first, second, third) < 0
@classmethod
def isLine(cls, first: Point, second: Point, third: Point):
return cls.durationRotate(first, second, third) == 0 | /robobase_package-1.0.0-py3-none-any.whl/robobase/trianglealg.py | 0.878497 | 0.389779 | trianglealg.py | pypi |
import re
import requests
from bs4 import BeautifulSoup
from werkzeug.utils import cached_property
from requests.packages.urllib3.util.retry import Retry
from robobrowser import helpers
from robobrowser import exceptions
from robobrowser.compat import urlparse
from robobrowser.forms.form import Form
from robobrowser.cache import RoboHTTPAdapter
_link_ptn = re.compile(r'^(a|button)$', re.I)
_form_ptn = re.compile(r'^form$', re.I)
class RoboState(object):
"""Representation of a browser state. Wraps the browser and response, and
lazily parses the response content.
"""
def __init__(self, browser, response):
self.browser = browser
self.response = response
self.url = response.url
@cached_property
def parsed(self):
"""Lazily parse response content, using HTML parser specified by the
browser.
"""
return BeautifulSoup(
self.response.content,
features=self.browser.parser,
)
class RoboBrowser(object):
"""Robotic web browser. Represents HTTP requests and responses using the
requests library and parsed HTML using BeautifulSoup.
:param str parser: HTML parser; used by BeautifulSoup
:param str user_agent: Default user-agent
:param history: History length; infinite if True, 1 if falsy, else
takes integer value
:param int timeout: Default timeout, in seconds
:param bool allow_redirects: Allow redirects on POST/PUT/DELETE
:param bool cache: Cache responses
:param list cache_patterns: List of URL patterns for cache
:param timedelta max_age: Max age for cache
:param int max_count: Max count for cache
:param int tries: Number of retries
:param Exception errors: Exception or tuple of exceptions to catch
:param int delay: Delay between retries
:param int multiplier: Delay multiplier between retries
"""
def __init__(self, session=None, parser=None, user_agent=None,
history=True, timeout=None, allow_redirects=True, cache=False,
cache_patterns=None, max_age=None, max_count=None, tries=None,
multiplier=None):
self.session = session or requests.Session()
# Add default user agent string
if user_agent is not None:
self.session.headers['User-Agent'] = user_agent
self.parser = parser
self.timeout = timeout
self.allow_redirects = allow_redirects
# Set up caching
if cache:
adapter = RoboHTTPAdapter(max_age=max_age, max_count=max_count)
cache_patterns = cache_patterns or ['http://', 'https://']
for pattern in cache_patterns:
self.session.mount(pattern, adapter)
elif max_age:
raise ValueError('Parameter `max_age` is provided, '
'but caching is turned off')
elif max_count:
raise ValueError('Parameter `max_count` is provided, '
'but caching is turned off')
# Configure history
self.history = history
if history is True:
self._maxlen = None
elif not history:
self._maxlen = 1
else:
self._maxlen = history
self._states = []
self._cursor = -1
# Set up retries
if tries:
retry = Retry(tries, backoff_factor=multiplier)
for protocol in ['http://', 'https://']:
self.session.adapters[protocol].max_retries = retry
def __repr__(self):
try:
return '<RoboBrowser url={0}>'.format(self.url)
except exceptions.RoboError:
return '<RoboBrowser>'
@property
def state(self):
if self._cursor == -1:
raise exceptions.RoboError('No state')
try:
return self._states[self._cursor]
except IndexError:
raise exceptions.RoboError('Index out of range')
@property
def response(self):
return self.state.response
@property
def url(self):
return self.state.url
@property
def parsed(self):
return self.state.parsed
@property
def find(self):
"""See ``BeautifulSoup::find``."""
try:
return self.parsed.find
except AttributeError:
raise exceptions.RoboError
@property
def find_all(self):
"""See ``BeautifulSoup::find_all``."""
try:
return self.parsed.find_all
except AttributeError:
raise exceptions.RoboError
@property
def select(self):
"""See ``BeautifulSoup::select``."""
try:
return self.parsed.select
except AttributeError:
raise exceptions.RoboError
def _build_url(self, url):
"""Build absolute URL.
:param url: Full or partial URL
:return: Full URL
"""
return urlparse.urljoin(
self.url,
url
)
@property
def _default_send_args(self):
"""
"""
return {
'timeout': self.timeout,
'allow_redirects': self.allow_redirects,
}
def _build_send_args(self, **kwargs):
"""Merge optional arguments with defaults.
:param kwargs: Keyword arguments to `Session::send`
"""
out = {}
out.update(self._default_send_args)
out.update(kwargs)
return out
def open(self, url, method='get', **kwargs):
"""Open a URL.
:param str url: URL to open
:param str method: Optional method; defaults to `'get'`
:param kwargs: Keyword arguments to `Session::request`
"""
response = self.session.request(method, url, **self._build_send_args(**kwargs))
self._update_state(response)
def _update_state(self, response):
"""Update the state of the browser. Create a new state object, and
append to or overwrite the browser's state history.
:param requests.MockResponse: New response object
"""
# Clear trailing states
self._states = self._states[:self._cursor + 1]
# Append new state
state = RoboState(self, response)
self._states.append(state)
self._cursor += 1
# Clear leading states
if self._maxlen:
decrement = len(self._states) - self._maxlen
if decrement > 0:
self._states = self._states[decrement:]
self._cursor -= decrement
def _traverse(self, n=1):
"""Traverse state history. Used by `back` and `forward` methods.
:param int n: Cursor increment. Positive values move forward in the
browser history; negative values move backward.
"""
if not self.history:
raise exceptions.RoboError('Not tracking history')
cursor = self._cursor + n
if cursor >= len(self._states) or cursor < 0:
raise exceptions.RoboError('Index out of range')
self._cursor = cursor
def back(self, n=1):
"""Go back in browser history.
:param int n: Number of pages to go back
"""
self._traverse(-1 * n)
def forward(self, n=1):
"""Go forward in browser history.
:param int n: Number of pages to go forward
"""
self._traverse(n)
def get_link(self, text=None, *args, **kwargs):
"""Find an anchor or button by containing text, as well as standard
BeautifulSoup arguments.
:param text: String or regex to be matched in link text
:return: BeautifulSoup tag if found, else None
"""
return helpers.find(
self.parsed, _link_ptn, text=text, *args, **kwargs
)
def get_links(self, text=None, *args, **kwargs):
"""Find anchors or buttons by containing text, as well as standard
BeautifulSoup arguments.
:param text: String or regex to be matched in link text
:return: List of BeautifulSoup tags
"""
return helpers.find_all(
self.parsed, _link_ptn, text=text, *args, **kwargs
)
def get_form(self, id=None, *args, **kwargs):
"""Find form by ID, as well as standard BeautifulSoup arguments.
:param str id: Form ID
:return: BeautifulSoup tag if found, else None
"""
if id:
kwargs['id'] = id
form = self.find(_form_ptn, *args, **kwargs)
if form is not None:
return Form(form)
def get_forms(self, *args, **kwargs):
"""Find forms by standard BeautifulSoup arguments.
:args: Positional arguments to `BeautifulSoup::find_all`
:args: Keyword arguments to `BeautifulSoup::find_all`
:return: List of BeautifulSoup tags
"""
forms = self.find_all(_form_ptn, *args, **kwargs)
return [
Form(form)
for form in forms
]
def follow_link(self, link, **kwargs):
"""Click a link.
:param Tag link: Link to click
:param kwargs: Keyword arguments to `Session::send`
"""
try:
href = link['href']
except KeyError:
raise exceptions.RoboError('Link element must have "href" '
'attribute')
self.open(self._build_url(href), **kwargs)
def submit_form(self, form, submit=None, **kwargs):
"""Submit a form.
:param Form form: Filled-out form object
:param Submit submit: Optional `Submit` to click, if form includes
multiple submits
:param kwargs: Keyword arguments to `Session::send`
"""
# Get HTTP verb
method = form.method.upper()
# Send request
url = self._build_url(form.action) or self.url
payload = form.serialize(submit=submit)
serialized = payload.to_requests(method)
send_args = self._build_send_args(**kwargs)
send_args.update(serialized)
response = self.session.request(method, url, **send_args)
# Update history
self._update_state(response) | /robobro-0.5.3-py3-none-any.whl/robobrowser/browser.py | 0.793986 | 0.187486 | browser.py | pypi |
import re
from bs4 import BeautifulSoup
from bs4.element import Tag
from robobrowser.compat import string_types, iteritems
def match_text(text, tag):
if isinstance(text, string_types):
return text in tag.text
if isinstance(text, re._pattern_type):
return text.search(tag.text)
def find_all(soup, name=None, attrs=None, recursive=True, text=None,
limit=None, **kwargs):
"""The `find` and `find_all` methods of `BeautifulSoup` don't handle the
`text` parameter combined with other parameters. This is necessary for
e.g. finding links containing a string or pattern. This method first
searches by text content, and then by the standard BeautifulSoup arguments.
"""
if text is None:
return soup.find_all(
name, attrs or {}, recursive, text, limit, **kwargs
)
if isinstance(text, string_types):
text = re.compile(re.escape(text), re.I)
tags = soup.find_all(
name, attrs or {}, recursive, **kwargs
)
rv = []
for tag in tags:
if match_text(text, tag):
rv.append(tag)
if limit is not None and len(rv) >= limit:
break
return rv
def find(soup, name=None, attrs=None, recursive=True, text=None, **kwargs):
"""Modified find method; see `find_all`, above.
"""
tags = find_all(
soup, name, attrs or {}, recursive, text, 1, **kwargs
)
if tags:
return tags[0]
def ensure_soup(value, parser=None):
"""Coerce a value (or list of values) to Tag (or list of Tag).
:param value: String, BeautifulSoup, Tag, or list of the above
:param str parser: Parser to use; defaults to BeautifulSoup default
:return: Tag or list of Tags
"""
if isinstance(value, BeautifulSoup):
return value.find()
if isinstance(value, Tag):
return value
if isinstance(value, list):
return [
ensure_soup(item, parser=parser)
for item in value
]
parsed = BeautifulSoup(value, features=parser)
return parsed.find()
def lowercase_attr_names(tag):
"""Lower-case all attribute names of the provided BeautifulSoup tag.
Note: this mutates the tag's attribute names and does not return a new
tag.
:param Tag: BeautifulSoup tag
"""
# Use list comprehension instead of dict comprehension for 2.6 support
tag.attrs = dict([
(key.lower(), value)
for key, value in iteritems(tag.attrs)
]) | /robobro-0.5.3-py3-none-any.whl/robobrowser/helpers.py | 0.631367 | 0.159087 | helpers.py | pypi |
import re
import collections
from werkzeug.datastructures import OrderedMultiDict
from robobrowser.compat import iteritems, encode_if_py2
from . import fields
from .. import helpers
from .. import exceptions
_tags = ['input', 'textarea', 'select']
_tag_ptn = re.compile(
'|'.join(_tags),
re.I
)
def _group_flat_tags(tag, tags):
"""Extract tags sharing the same name as the provided tag. Used to collect
options for radio and checkbox inputs.
:param Tag tag: BeautifulSoup tag
:param list tags: List of tags
:return: List of matching tags
"""
grouped = [tag]
name = tag.get('name', '').lower()
while tags and tags[0].get('name', '').lower() == name:
grouped.append(tags.pop(0))
return grouped
def _parse_field(tag, tags):
tag_type = tag.name.lower()
if tag_type == 'input':
tag_type = tag.get('type', '').lower()
if tag_type == 'submit':
return fields.Submit(tag)
if tag_type == 'file':
return fields.FileInput(tag)
if tag_type == 'radio':
radios = _group_flat_tags(tag, tags)
return fields.Radio(radios)
if tag_type == 'checkbox':
checkboxes = _group_flat_tags(tag, tags)
return fields.Checkbox(checkboxes)
return fields.Input(tag)
if tag_type == 'textarea':
return fields.Textarea(tag)
if tag_type == 'select':
if tag.get('multiple') is not None:
return fields.MultiSelect(tag)
return fields.Select(tag)
def _parse_fields(parsed):
"""Parse form fields from HTML.
:param BeautifulSoup parsed: Parsed HTML
:return OrderedDict: Collection of field objects
"""
# Note: Call this `out` to avoid name conflict with `fields` module
out = []
# Prepare field tags
tags = parsed.find_all(_tag_ptn)
for tag in tags:
helpers.lowercase_attr_names(tag)
while tags:
tag = tags.pop(0)
try:
field = _parse_field(tag, tags)
except exceptions.InvalidNameError:
continue
if field is not None:
out.append(field)
return out
def _filter_fields(fields, predicate):
return OrderedMultiDict([
(key, value)
for key, value in fields.items(multi=True)
if predicate(value)
])
class Payload(object):
"""Container for serialized form outputs that knows how to export to
the format expected by Requests. By default, form values are stored in
`data`.
"""
def __init__(self):
self.data = OrderedMultiDict()
self.options = collections.defaultdict(OrderedMultiDict)
@classmethod
def from_fields(cls, fields):
"""
:param OrderedMultiDict fields:
"""
payload = cls()
for _, field in fields.items(multi=True):
if not field.disabled:
payload.add(field.serialize(), field.payload_key)
return payload
def add(self, data, key=None):
"""Add field values to container.
:param dict data: Serialized values for field
:param str key: Optional key; if not provided, values will be added
to `self.payload`.
"""
sink = self.options[key] if key is not None else self.data
for key, value in iteritems(data):
sink.add(key, value)
def to_requests(self, method='get'):
"""Export to Requests format.
:param str method: Request method
:return: Dict of keyword arguments formatted for `requests.request`
"""
out = {}
data_key = 'params' if method.lower() == 'get' else 'data'
out[data_key] = self.data
out.update(self.options)
return dict([
(key, list(value.items(multi=True)))
for key, value in iteritems(out)
])
def prepare_fields(all_fields, submit_fields, submit):
if len(list(submit_fields.items(multi=True))) > 1:
if not submit:
raise exceptions.InvalidSubmitError()
if submit not in submit_fields.getlist(submit.name):
raise exceptions.InvalidSubmitError()
return _filter_fields(
all_fields,
lambda f: not isinstance(f, fields.Submit) or f == submit
)
return all_fields
class Form(object):
"""Representation of an HTML form."""
def __init__(self, parsed):
parsed = helpers.ensure_soup(parsed)
if parsed.name != 'form':
parsed = parsed.find('form')
self.parsed = parsed
self.action = self.parsed.get('action')
self.method = self.parsed.get('method', 'get')
self.fields = OrderedMultiDict()
for field in _parse_fields(self.parsed):
self.add_field(field)
def add_field(self, field):
"""Add a field.
:param field: Field to add
:raise: ValueError if `field` is not an instance of `BaseField`.
"""
if not isinstance(field, fields.BaseField):
raise ValueError('Argument "field" must be an instance of '
'BaseField')
self.fields.add(field.name, field)
@property
def submit_fields(self):
return _filter_fields(
self.fields,
lambda field: isinstance(field, fields.Submit)
)
@encode_if_py2
def __repr__(self):
state = u', '.join(
[
u'{0}={1}'.format(name, field.value)
for name, field in self.fields.items(multi=True)
]
)
if state:
return u'<RoboForm {0}>'.format(state)
return u'<RoboForm>'
def keys(self):
return self.fields.keys()
def __getitem__(self, item):
return self.fields[item]
def __setitem__(self, key, value):
self.fields[key].value = value
def serialize(self, submit=None):
"""Serialize each form field to a Payload container.
:param Submit submit: Optional `Submit` to click, if form includes
multiple submits
:return: Payload instance
"""
include_fields = prepare_fields(self.fields, self.submit_fields, submit)
return Payload.from_fields(include_fields) | /robobro-0.5.3-py3-none-any.whl/robobrowser/forms/form.py | 0.711932 | 0.316369 | form.py | pypi |
import re
import requests
import urllib.parse
from bs4 import BeautifulSoup
from werkzeug.utils import cached_property
from requests.packages.urllib3.util.retry import Retry
from robobrowser import helpers
from robobrowser import exceptions
from robobrowser.forms.form import Form
from robobrowser.cache import RoboHTTPAdapter
_link_ptn = re.compile(r'^(a|button)$', re.I)
_form_ptn = re.compile(r'^form$', re.I)
class RoboState(object):
"""Representation of a browser state. Wraps the browser and response, and
lazily parses the response content.
"""
def __init__(self, browser, response):
self.browser = browser
self.response = response
self.url = response.url
@cached_property
def parsed(self):
"""Lazily parse response content, using HTML parser specified by the
browser.
"""
return BeautifulSoup(
self.response.content,
features=self.browser.parser,
)
class RoboBrowser(object):
"""Robotic web browser. Represents HTTP requests and responses using the
requests library and parsed HTML using BeautifulSoup.
:param str parser: HTML parser; used by BeautifulSoup
:param str user_agent: Default user-agent
:param history: History length; infinite if True, 1 if falsy, else
takes integer value
:param int timeout: Default timeout, in seconds
:param bool allow_redirects: Allow redirects on POST/PUT/DELETE
:param bool cache: Cache responses
:param list cache_patterns: List of URL patterns for cache
:param timedelta max_age: Max age for cache
:param int max_count: Max count for cache
:param int tries: Number of retries
:param Exception errors: Exception or tuple of exceptions to catch
:param int delay: Delay between retries
:param int multiplier: Delay multiplier between retries
"""
def __init__(self, session=None, parser=None, user_agent=None,
history=True, timeout=None, allow_redirects=True, cache=False,
cache_patterns=None, max_age=None, max_count=None, tries=None,
multiplier=None):
self.session = session or requests.Session()
# Add default user agent string
if user_agent is not None:
self.session.headers['User-Agent'] = user_agent
self.parser = parser
self.timeout = timeout
self.allow_redirects = allow_redirects
# Set up caching
if cache:
adapter = RoboHTTPAdapter(max_age=max_age, max_count=max_count)
cache_patterns = cache_patterns or ['http://', 'https://']
for pattern in cache_patterns:
self.session.mount(pattern, adapter)
elif max_age:
raise ValueError('Parameter `max_age` is provided, '
'but caching is turned off')
elif max_count:
raise ValueError('Parameter `max_count` is provided, '
'but caching is turned off')
# Configure history
self.history = history
if history is True:
self._maxlen = None
elif not history:
self._maxlen = 1
else:
self._maxlen = history
self._states = []
self._cursor = -1
# Set up retries
if tries:
retry = Retry(tries, backoff_factor=multiplier)
for protocol in ['http://', 'https://']:
self.session.adapters[protocol].max_retries = retry
def __repr__(self):
try:
return '<RoboBrowser url={0}>'.format(self.url)
except exceptions.RoboError:
return '<RoboBrowser>'
@property
def state(self):
if self._cursor == -1:
raise exceptions.RoboError('No state')
try:
return self._states[self._cursor]
except IndexError:
raise exceptions.RoboError('Index out of range')
@property
def response(self):
return self.state.response
@property
def url(self):
return self.state.url
@property
def parsed(self):
return self.state.parsed
@property
def find(self):
"""See ``BeautifulSoup::find``."""
try:
return self.parsed.find
except AttributeError:
raise exceptions.RoboError
@property
def find_all(self):
"""See ``BeautifulSoup::find_all``."""
try:
return self.parsed.find_all
except AttributeError:
raise exceptions.RoboError
@property
def select(self):
"""See ``BeautifulSoup::select``."""
try:
return self.parsed.select
except AttributeError:
raise exceptions.RoboError
def _build_url(self, url):
"""Build absolute URL.
:param url: Full or partial URL
:return: Full URL
"""
return urllib.parse.urljoin(self.url, url)
@property
def _default_send_args(self):
"""
"""
return {
'timeout': self.timeout,
'allow_redirects': self.allow_redirects,
}
def _build_send_args(self, **kwargs):
"""Merge optional arguments with defaults.
:param kwargs: Keyword arguments to `Session::send`
"""
out = {}
out.update(self._default_send_args)
out.update(kwargs)
return out
def open(self, url, method='get', **kwargs):
"""Open a URL.
:param str url: URL to open
:param str method: Optional method; defaults to `'get'`
:param kwargs: Keyword arguments to `Session::request`
"""
response = self.session.request(method, url, **self._build_send_args(**kwargs))
self._update_state(response)
def _update_state(self, response):
"""Update the state of the browser. Create a new state object, and
append to or overwrite the browser's state history.
:param requests.MockResponse: New response object
"""
# Clear trailing states
self._states = self._states[:self._cursor + 1]
# Append new state
state = RoboState(self, response)
self._states.append(state)
self._cursor += 1
# Clear leading states
if self._maxlen:
decrement = len(self._states) - self._maxlen
if decrement > 0:
self._states = self._states[decrement:]
self._cursor -= decrement
def _traverse(self, n=1):
"""Traverse state history. Used by `back` and `forward` methods.
:param int n: Cursor increment. Positive values move forward in the
browser history; negative values move backward.
"""
if not self.history:
raise exceptions.RoboError('Not tracking history')
cursor = self._cursor + n
if cursor >= len(self._states) or cursor < 0:
raise exceptions.RoboError('Index out of range')
self._cursor = cursor
def back(self, n=1):
"""Go back in browser history.
:param int n: Number of pages to go back
"""
self._traverse(-1 * n)
def forward(self, n=1):
"""Go forward in browser history.
:param int n: Number of pages to go forward
"""
self._traverse(n)
def get_link(self, text=None, *args, **kwargs):
"""Find an anchor or button by containing text, as well as standard
BeautifulSoup arguments.
:param text: String or regex to be matched in link text
:return: BeautifulSoup tag if found, else None
"""
return helpers.find(
self.parsed, _link_ptn, text=text, *args, **kwargs
)
def get_links(self, text=None, *args, **kwargs):
"""Find anchors or buttons by containing text, as well as standard
BeautifulSoup arguments.
:param text: String or regex to be matched in link text
:return: List of BeautifulSoup tags
"""
return helpers.find_all(
self.parsed, _link_ptn, text=text, *args, **kwargs
)
def get_form(self, id=None, *args, **kwargs):
"""Find form by ID, as well as standard BeautifulSoup arguments.
:param str id: Form ID
:return: BeautifulSoup tag if found, else None
"""
if id:
kwargs['id'] = id
form = self.find(_form_ptn, *args, **kwargs)
if form is not None:
return Form(form)
def get_forms(self, *args, **kwargs):
"""Find forms by standard BeautifulSoup arguments.
:args: Positional arguments to `BeautifulSoup::find_all`
:args: Keyword arguments to `BeautifulSoup::find_all`
:return: List of BeautifulSoup tags
"""
forms = self.find_all(_form_ptn, *args, **kwargs)
return [
Form(form)
for form in forms
]
def follow_link(self, link, **kwargs):
"""Click a link.
:param Tag link: Link to click
:param kwargs: Keyword arguments to `Session::send`
"""
try:
href = link['href']
except KeyError:
raise exceptions.RoboError('Link element must have "href" '
'attribute')
self.open(self._build_url(href), **kwargs)
def submit_form(self, form, submit=None, **kwargs):
"""Submit a form.
:param Form form: Filled-out form object
:param Submit submit: Optional `Submit` to click, if form includes
multiple submits
:param kwargs: Keyword arguments to `Session::send`
"""
# Get HTTP verb
method = form.method.upper()
# Send request
url = self._build_url(form.action) or self.url
payload = form.serialize(submit=submit)
serialized = payload.to_requests(method)
send_args = self._build_send_args(**kwargs)
send_args.update(serialized)
response = self.session.request(method, url, **send_args)
# Update history
self._update_state(response) | /robobrowser-jmr-0.5.4a1.tar.gz/robobrowser-jmr-0.5.4a1/robobrowser/browser.py | 0.779909 | 0.165222 | browser.py | pypi |
import re
from bs4 import BeautifulSoup
from bs4.element import Tag
def find_all(soup, name=None, attrs=None, recursive=True, text=None,
limit=None, **kwargs):
"""The `find` and `find_all` methods of `BeautifulSoup` don't handle the
`text` parameter combined with other parameters. This is necessary for
e.g. finding links containing a string or pattern. This method first
searches by text content, and then by the standard BeautifulSoup arguments.
"""
if text is None:
return soup.find_all(
name, attrs or {}, recursive, text, limit, **kwargs
)
tags = soup.find_all(
name, attrs or {}, recursive, **kwargs
)
rv = []
def _match_text(tag):
if isinstance(text, str):
return text in tag
return text.search(tag)
for tag in tags:
if _match_text(str(tag)):
rv.append(tag)
if limit is not None and len(rv) >= limit:
break
return rv
def find(soup, name=None, attrs=None, recursive=True, text=None, **kwargs):
"""Modified find method; see `find_all`, above.
"""
tags = find_all(
soup, name, attrs or {}, recursive, text, 1, **kwargs
)
if tags:
return tags[0]
def ensure_soup(value, parser=None):
"""Coerce a value (or list of values) to Tag (or list of Tag).
:param value: String, BeautifulSoup, Tag, or list of the above
:param str parser: Parser to use; defaults to BeautifulSoup default
:return: Tag or list of Tags
"""
if isinstance(value, BeautifulSoup):
return value.find()
if isinstance(value, Tag):
return value
if isinstance(value, list):
return [
ensure_soup(item, parser=parser)
for item in value
]
parsed = BeautifulSoup(value, features=parser)
return parsed.find()
def lowercase_attr_names(tag):
"""Lower-case all attribute names of the provided BeautifulSoup tag.
Note: this mutates the tag's attribute names and does not return a new
tag.
:param Tag: BeautifulSoup tag
"""
tag.attrs = {key.lower(): value for key, value in tag.attrs.items()} | /robobrowser-jmr-0.5.4a1.tar.gz/robobrowser-jmr-0.5.4a1/robobrowser/helpers.py | 0.64512 | 0.150372 | helpers.py | pypi |
import re
import collections
from werkzeug.datastructures import OrderedMultiDict
from . import fields
from .. import helpers
from .. import exceptions
_tags = ['input', 'textarea', 'select']
_tag_ptn = re.compile(
'|'.join(_tags),
re.I
)
def _group_flat_tags(tag, tags):
"""Extract tags sharing the same name as the provided tag. Used to collect
options for radio and checkbox inputs.
:param Tag tag: BeautifulSoup tag
:param list tags: List of tags
:return: List of matching tags
"""
grouped = [tag]
name = tag.get('name', '').lower()
while tags and tags[0].get('name', '').lower() == name:
grouped.append(tags.pop(0))
return grouped
def _parse_field(tag, tags):
tag_type = tag.name.lower()
if tag_type == 'input':
tag_type = tag.get('type', '').lower()
if tag_type == 'submit':
return fields.Submit(tag)
if tag_type == 'file':
return fields.FileInput(tag)
if tag_type == 'radio':
radios = _group_flat_tags(tag, tags)
return fields.Radio(radios)
if tag_type == 'checkbox':
checkboxes = _group_flat_tags(tag, tags)
return fields.Checkbox(checkboxes)
return fields.Input(tag)
if tag_type == 'textarea':
return fields.Textarea(tag)
if tag_type == 'select':
if tag.get('multiple') is not None:
return fields.MultiSelect(tag)
return fields.Select(tag)
def _parse_fields(parsed):
"""Parse form fields from HTML.
:param BeautifulSoup parsed: Parsed HTML
:return OrderedDict: Collection of field objects
"""
# Note: Call this `out` to avoid name conflict with `fields` module
out = []
# Prepare field tags
tags = parsed.find_all(_tag_ptn)
for tag in tags:
helpers.lowercase_attr_names(tag)
while tags:
tag = tags.pop(0)
try:
field = _parse_field(tag, tags)
except exceptions.InvalidNameError:
continue
if field is not None:
out.append(field)
return out
def _filter_fields(fields, predicate):
return OrderedMultiDict([
(key, value)
for key, value in fields.items(multi=True)
if predicate(value)
])
class Payload(object):
"""Container for serialized form outputs that knows how to export to
the format expected by Requests. By default, form values are stored in
`data`.
"""
def __init__(self):
self.data = OrderedMultiDict()
self.options = collections.defaultdict(OrderedMultiDict)
@classmethod
def from_fields(cls, fields):
"""
:param OrderedMultiDict fields:
"""
payload = cls()
for _, field in fields.items(multi=True):
if not field.disabled:
payload.add(field.serialize(), field.payload_key)
return payload
def add(self, data, key=None):
"""Add field values to container.
:param dict data: Serialized values for field
:param str key: Optional key; if not provided, values will be added
to `self.payload`.
"""
sink = self.options[key] if key is not None else self.data
for key, value in data.items():
sink.add(key, value)
def to_requests(self, method='get'):
"""Export to Requests format.
:param str method: Request method
:return: Dict of keyword arguments formatted for `requests.request`
"""
out = {}
data_key = 'params' if method.lower() == 'get' else 'data'
out[data_key] = self.data
out.update(self.options)
return {
key: list(value.items(multi=True))
for key, value in out.items()
}
def prepare_fields(all_fields, submit_fields, submit):
if len(list(submit_fields.items(multi=True))) > 1:
if not submit:
raise exceptions.InvalidSubmitError()
if submit not in submit_fields.getlist(submit.name):
raise exceptions.InvalidSubmitError()
return _filter_fields(
all_fields,
lambda f: not isinstance(f, fields.Submit) or f == submit
)
return all_fields
class Form(object):
"""Representation of an HTML form."""
def __init__(self, parsed):
parsed = helpers.ensure_soup(parsed)
if parsed.name != 'form':
parsed = parsed.find('form')
self.parsed = parsed
self.action = self.parsed.get('action')
self.method = self.parsed.get('method', 'get')
self.fields = OrderedMultiDict()
for field in _parse_fields(self.parsed):
self.add_field(field)
def add_field(self, field):
"""Add a field.
:param field: Field to add
:raise: ValueError if `field` is not an instance of `BaseField`.
"""
if not isinstance(field, fields.BaseField):
raise ValueError('Argument "field" must be an instance of '
'BaseField')
self.fields.add(field.name, field)
@property
def submit_fields(self):
return _filter_fields(
self.fields,
lambda field: isinstance(field, fields.Submit)
)
def __repr__(self):
state = u', '.join(
[
u'{0}={1}'.format(name, field.value)
for name, field in self.fields.items(multi=True)
]
)
if state:
return u'<RoboForm {0}>'.format(state)
return u'<RoboForm>'
def keys(self):
return self.fields.keys()
def __getitem__(self, item):
return self.fields[item]
def __setitem__(self, key, value):
self.fields[key].value = value
def serialize(self, submit=None):
"""Serialize each form field to a Payload container.
:param Submit submit: Optional `Submit` to click, if form includes
multiple submits
:return: Payload instance
"""
include_fields = prepare_fields(self.fields, self.submit_fields, submit)
return Payload.from_fields(include_fields) | /robobrowser-jmr-0.5.4a1.tar.gz/robobrowser-jmr-0.5.4a1/robobrowser/forms/form.py | 0.734215 | 0.322446 | form.py | pypi |
import re
from bs4 import BeautifulSoup
from bs4.element import Tag
from robobrowser.compat import string_types, iteritems
def match_text(text, tag):
if isinstance(text, string_types):
return text in tag.text
if isinstance(text, re.Pattern):
return text.search(tag.text)
def find_all(soup, name=None, attrs=None, recursive=True, text=None,
limit=None, **kwargs):
"""The `find` and `find_all` methods of `BeautifulSoup` don't handle the
`text` parameter combined with other parameters. This is necessary for
e.g. finding links containing a string or pattern. This method first
searches by text content, and then by the standard BeautifulSoup arguments.
"""
if text is None:
return soup.find_all(
name, attrs or {}, recursive, text, limit, **kwargs
)
if isinstance(text, string_types):
text = re.compile(re.escape(text), re.I)
tags = soup.find_all(
name, attrs or {}, recursive, **kwargs
)
rv = []
for tag in tags:
if match_text(text, tag):
rv.append(tag)
if limit is not None and len(rv) >= limit:
break
return rv
def find(soup, name=None, attrs=None, recursive=True, text=None, **kwargs):
"""Modified find method; see `find_all`, above.
"""
tags = find_all(
soup, name, attrs or {}, recursive, text, 1, **kwargs
)
if tags:
return tags[0]
def ensure_soup(value, parser=None):
"""Coerce a value (or list of values) to Tag (or list of Tag).
:param value: String, BeautifulSoup, Tag, or list of the above
:param str parser: Parser to use; defaults to BeautifulSoup default
:return: Tag or list of Tags
"""
if isinstance(value, BeautifulSoup):
return value.find()
if isinstance(value, Tag):
return value
if isinstance(value, list):
return [
ensure_soup(item, parser=parser)
for item in value
]
parsed = BeautifulSoup(value, features=parser)
return parsed.find()
def lowercase_attr_names(tag):
"""Lower-case all attribute names of the provided BeautifulSoup tag.
Note: this mutates the tag's attribute names and does not return a new
tag.
:param Tag: BeautifulSoup tag
"""
# Use list comprehension instead of dict comprehension for 2.6 support
tag.attrs = dict([
(key.lower(), value)
for key, value in iteritems(tag.attrs)
]) | /robobrowserdash-0.6.1.tar.gz/robobrowserdash-0.6.1/robobrowser/helpers.py | 0.633183 | 0.15961 | helpers.py | pypi |
import re
import collections
from werkzeug.datastructures import OrderedMultiDict
from robobrowser.compat import iteritems, encode_if_py2
from . import fields
from .. import helpers
from .. import exceptions
_tags = ['input', 'textarea', 'select']
_tag_ptn = re.compile(
'|'.join(_tags),
re.I
)
def _group_flat_tags(tag, tags):
"""Extract tags sharing the same name as the provided tag. Used to collect
options for radio and checkbox inputs.
:param Tag tag: BeautifulSoup tag
:param list tags: List of tags
:return: List of matching tags
"""
grouped = [tag]
name = tag.get('name', '').lower()
while tags and tags[0].get('name', '').lower() == name:
grouped.append(tags.pop(0))
return grouped
def _parse_field(tag, tags):
tag_type = tag.name.lower()
if tag_type == 'input':
tag_type = tag.get('type', '').lower()
if tag_type == 'submit':
return fields.Submit(tag)
if tag_type == 'file':
return fields.FileInput(tag)
if tag_type == 'radio':
radios = _group_flat_tags(tag, tags)
return fields.Radio(radios)
if tag_type == 'checkbox':
checkboxes = _group_flat_tags(tag, tags)
return fields.Checkbox(checkboxes)
return fields.Input(tag)
if tag_type == 'textarea':
return fields.Textarea(tag)
if tag_type == 'select':
if tag.get('multiple') is not None:
return fields.MultiSelect(tag)
return fields.Select(tag)
def _parse_fields(parsed):
"""Parse form fields from HTML.
:param BeautifulSoup parsed: Parsed HTML
:return OrderedDict: Collection of field objects
"""
# Note: Call this `out` to avoid name conflict with `fields` module
out = []
# Prepare field tags
tags = parsed.find_all(_tag_ptn)
for tag in tags:
helpers.lowercase_attr_names(tag)
while tags:
tag = tags.pop(0)
try:
field = _parse_field(tag, tags)
except exceptions.InvalidNameError:
continue
if field is not None:
out.append(field)
return out
def _filter_fields(fields, predicate):
return OrderedMultiDict([
(key, value)
for key, value in fields.items(multi=True)
if predicate(value)
])
class Payload(object):
"""Container for serialized form outputs that knows how to export to
the format expected by Requests. By default, form values are stored in
`data`.
"""
def __init__(self):
self.data = OrderedMultiDict()
self.options = collections.defaultdict(OrderedMultiDict)
@classmethod
def from_fields(cls, fields):
"""
:param OrderedMultiDict fields:
"""
payload = cls()
for _, field in fields.items(multi=True):
if not field.disabled:
payload.add(field.serialize(), field.payload_key)
return payload
def add(self, data, key=None):
"""Add field values to container.
:param dict data: Serialized values for field
:param str key: Optional key; if not provided, values will be added
to `self.payload`.
"""
sink = self.options[key] if key is not None else self.data
for key, value in iteritems(data):
sink.add(key, value)
def to_requests(self, method='get'):
"""Export to Requests format.
:param str method: Request method
:return: Dict of keyword arguments formatted for `requests.request`
"""
out = {}
data_key = 'params' if method.lower() == 'get' else 'data'
out[data_key] = self.data
out.update(self.options)
return dict([
(key, list(value.items(multi=True)))
for key, value in iteritems(out)
])
def prepare_fields(all_fields, submit_fields, submit):
if len(list(submit_fields.items(multi=True))) > 1:
if not submit:
raise exceptions.InvalidSubmitError()
if submit not in submit_fields.getlist(submit.name):
raise exceptions.InvalidSubmitError()
return _filter_fields(
all_fields,
lambda f: not isinstance(f, fields.Submit) or f == submit
)
return all_fields
class Form(object):
"""Representation of an HTML form."""
def __init__(self, parsed):
parsed = helpers.ensure_soup(parsed)
if parsed.name != 'form':
parsed = parsed.find('form')
self.parsed = parsed
self.action = self.parsed.get('action')
self.method = self.parsed.get('method', 'get')
self.fields = OrderedMultiDict()
for field in _parse_fields(self.parsed):
self.add_field(field)
def add_field(self, field):
"""Add a field.
:param field: Field to add
:raise: ValueError if `field` is not an instance of `BaseField`.
"""
if not isinstance(field, fields.BaseField):
raise ValueError('Argument "field" must be an instance of '
'BaseField')
self.fields.add(field.name, field)
@property
def submit_fields(self):
return _filter_fields(
self.fields,
lambda field: isinstance(field, fields.Submit)
)
@encode_if_py2
def __repr__(self):
state = u', '.join(
[
u'{0}={1}'.format(name, field.value)
for name, field in self.fields.items(multi=True)
]
)
if state:
return u'<RoboForm {0}>'.format(state)
return u'<RoboForm>'
def keys(self):
return self.fields.keys()
def __getitem__(self, item):
return self.fields[item]
def __setitem__(self, key, value):
self.fields[key].value = value
def serialize(self, submit=None):
"""Serialize each form field to a Payload container.
:param Submit submit: Optional `Submit` to click, if form includes
multiple submits
:return: Payload instance
"""
include_fields = prepare_fields(self.fields, self.submit_fields, submit)
return Payload.from_fields(include_fields) | /robobrowserdash-0.6.1.tar.gz/robobrowserdash-0.6.1/robobrowser/forms/form.py | 0.711932 | 0.316369 | form.py | pypi |
from typing import List
import cv2
import numpy as np
class ShuffleVariable(object):
FLOAT_TYPE: str = "float"
STRING_TYPE: str = "string"
BIG_STRING_TYPE: str = "bigstring"
BOOL_TYPE: str = "bool"
CHART_TYPE: str = "chart"
SLIDER_TYPE: str = "slider"
IN_VAR: str = "in"
OUT_VAR: str = "out"
def __init__(self, name: str, type_: str, direction: str = IN_VAR) -> None:
self.name = name
self.type_ = type_
self.value = ''
self.direction = direction
def set_bool(self, value: bool) -> None:
self.value = "1" if value else "0"
def set_float(self, value: float) -> None:
self.value = str(value)
def set_string(self, value: str) -> None:
self.value = value
def get_bool(self) -> bool:
return self.value == "1"
def get_float(self) -> float:
try:
return float(self.value.replace(',', '.') if len(self.value) > 0 else "0")
except (Exception, FloatingPointError):
return 0
def get_string(self) -> str:
return self.value
class CameraVariable(object):
def __init__(self, name: str) -> None:
self.name = name
self.value: np.ndarray = np.zeros((1, 1, 3), dtype=np.uint8)
self.shape: tuple = (0, 0)
def get_value(self) -> bytes:
_, jpg = cv2.imencode('.jpg', self.value)
return jpg
def set_mat(self, mat) -> None:
if mat is not None:
self.shape = (mat.shape[1], mat.shape[0])
self.value = mat
class InfoHolder:
# logger object
logger = None
# control the type of the shufflecad work
on_real_robot: bool = True
power: str = "0"
# some things
spi_time_dev: str = "0"
rx_spi_time_dev: str = "0"
tx_spi_time_dev: str = "0"
spi_count_dev: str = "0"
com_time_dev: str = "0"
rx_com_time_dev: str = "0"
tx_com_time_dev: str = "0"
com_count_dev: str = "0"
temperature: str = "0"
memory_load: str = "0"
cpu_load: str = "0"
variables_array: List[ShuffleVariable] = list()
camera_variables_array: List[CameraVariable] = list()
joystick_values: dict = dict()
print_array: List[str] = list()
# outcad methods
@classmethod
def print_to_log(cls, var: str, color: str = "#e0d4ab") -> None:
cls.print_array.append(var + color)
@classmethod
def get_print_array(cls) -> List[str]:
return cls.print_array
@classmethod
def clear_print_array(cls) -> None:
cls.print_array = list() | /robocad-py-0.0.2.1.tar.gz/robocad-py-0.0.2.1/robocad/shufflecad/shared.py | 0.813831 | 0.299272 | shared.py | pypi |
import multiprocessing
import argparse
from itertools import chain
from datasets import load_dataset
from transformers import AutoTokenizer
class CFG:
SEED: int = 42
SEQ_LEN: int = 8192 # context length make it larger or smaller depending on your task
NUM_CPU: int = multiprocessing.cpu_count()
HF_ACCOUNT_REPO: str = "YOUR HF ACCOUNT"
#ADD YOUR OWN TOKENIZER
TOKENIZER: str = "EleutherAI/gpt-neox-20b"
#ADD YOUR OWN DATASET
DATASET_NAME: str = "EleutherAI/the_pile_deduplicated"
def main(args):
tokenizer = AutoTokenizer.from_pretrained(CFG.TOKENIZER)
train_dataset = load_dataset(CFG.DATASET_NAME, split="train")
#ADD YOUR OWN TOKENIZE LOGIC
def tokenize_function(example):
return tokenizer([t + tokenizer.eos_token for t in example["text"]])
tokenized_dataset = train_dataset.map(
tokenize_function,
batched=True,
num_proc=CFG.NUM_CPU,
remove_columns=["text"],
)
block_size = CFG.SEQ_LEN
# Main data processing function that will concatenate all texts from our dataset and generate chunks of block_size.
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
# customize this part to your needs.
if total_length >= block_size:
total_length = (total_length // block_size) * block_size
# Split by chunks of max_len.
result = {
k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
for k, t in concatenated_examples.items()
}
return result
train_tokenized_dataset = tokenized_dataset.map(
group_texts,
batched=True,
num_proc=CFG.NUM_CPU,
)
train_tokenized_dataset.push_to_hub(CFG.HF_ACCOUNT_REPO)
if __name__ == '__main__':
parser = argparse.ArgumentParser(description="Process and push dataset to Hugging Face Hub")
parser.add_argument("--seed", type=int, default=CFG.SEED, help="Random seed")
parser.add_argument("--seq_len", type=int, default=CFG.SEQ_LEN, help="Sequence length for processing")
parser.add_argument("--hf_account", type=str, default=CFG.HF_ACCOUNT_REPO, help="Hugging Face account name and repo")
parser.add_argument("--tokenizer", type=str, default=CFG.TOKENIZER, help="Tokenizer model to use")
parser.add_argument("--dataset_name", type=str, default=CFG.DATASET_NAME, help="Name of the dataset to process")
args = parser.parse_args()
main(args) | /robocat-0.0.2-py3-none-any.whl/roboCAT/tokenize.py | 0.440229 | 0.203708 | tokenize.py | pypi |
import os
import sys
from typing import Callable, Dict, Iterator, List, Optional, Union
from playwright.sync_api import (
Browser,
BrowserContext,
BrowserType,
Error,
Page,
Playwright,
sync_playwright,
)
from robocorp.tasks import session_cache, task_cache
from ._browser_engines import (
ENGINE_TO_ARGS,
BrowserEngine,
browsers_path,
install_browser,
)
class _BrowserConfig:
__slots__ = [
"_browser_engine",
"_install",
"_headless",
"_slowmo",
"_screenshot",
"__weakref__",
]
def __init__(
self,
browser_engine: BrowserEngine = BrowserEngine.CHROMIUM,
install: Optional[bool] = None,
headless: Optional[bool] = None,
slowmo: int = 0,
screenshot: str = "only-on-failure",
):
"""
Args:
browser_engine:
Browser engine which should be used
default="chromium"
choices=["chromium", "chrome", "chrome-beta", "msedge",
"msedge-beta", "msedge-dev", "firefox", "webkit"]
install:
Install browser or not. If not defined, download is only
attempted if the browser fails to launch.
headless:
Run headless or not.
slowmo:
Run interactions in slow motion (number in millis).
screenshot:
Whether to automatically capture a screenshot after each task.
default="only-on-failure"
choices=["on", "off", "only-on-failure"]
""" # noqa
self.browser_engine = browser_engine
self.install = install
self.headless = headless
self.slowmo = slowmo
self.screenshot = screenshot
@property
def browser_engine(self) -> BrowserEngine:
return self._browser_engine
@browser_engine.setter
def browser_engine(self, value: Union[BrowserEngine, str]):
self._browser_engine = BrowserEngine(value)
@property
def install(self) -> Optional[bool]:
return self._install
@install.setter
def install(self, value: Optional[bool]):
assert value is None or isinstance(value, bool)
self._install = value
@property
def headless(self) -> Optional[bool]:
return self._headless
@headless.setter
def headless(self, value: Optional[bool]):
assert value is None or isinstance(value, bool)
self._headless = value
@property
def slowmo(self) -> int:
return self._slowmo
@slowmo.setter
def slowmo(self, value: int):
assert isinstance(value, int)
assert value >= 0
self._slowmo = value
@property
def screenshot(self) -> str:
return self._screenshot
@screenshot.setter
def screenshot(self, value: str):
assert value in ["on", "off", "only-on-failure"]
self._screenshot = value
@session_cache
def _browser_config() -> _BrowserConfig:
"""
The configuration used. Can be mutated as needed until
`browser_context_args` is called (at which point mutating it will raise an
error since it was already used).
"""
return _BrowserConfig()
def _get_auto_headless_state() -> bool:
# If in Linux and with no valid display, we can assume we are in a
# container which doesn't support UI.
return sys.platform.startswith("linux") and not (
os.environ.get("DISPLAY") or os.environ.get("WAYLAND_DISPLAY")
)
@session_cache
def browser_type_launch_args() -> Dict:
launch_options = {}
# RPA_HEADLESS_MODE can be used to force whether running headless.
rpa_headless_mode = os.environ.get("RPA_HEADLESS_MODE")
if rpa_headless_mode is not None:
launch_options["headless"] = bool(int(os.environ["RPA_HEADLESS_MODE"]))
else:
headless = _browser_config().headless
if headless is None:
# Heuristic is now: run showing UI (headless=False) by default
# and run headless when in a VM without a display.
launch_options["headless"] = _get_auto_headless_state()
else:
launch_options["headless"] = headless
slowmo_option = _browser_config().slowmo
if slowmo_option:
launch_options["slow_mo"] = slowmo_option
return launch_options
@session_cache
def playwright() -> Iterator[Playwright]:
# Make sure playwright searches from robocorp-specific path
os.environ["PLAYWRIGHT_BROWSERS_PATH"] = str(browsers_path())
pw = sync_playwright().start()
yield pw
# Need to stop when tasks finish running.
pw.stop()
@session_cache
def _browser_type() -> BrowserType:
pw = playwright()
engine = _browser_config().browser_engine
klass, _ = ENGINE_TO_ARGS[engine]
return getattr(pw, klass)
@session_cache
def _browser_launcher() -> Callable[..., Browser]:
browser_type: BrowserType = _browser_type()
def launch(**kwargs: Dict) -> Browser:
launch_options = {**browser_type_launch_args(), **kwargs}
if "channel" not in launch_options:
engine = _browser_config().browser_engine
_, channel = ENGINE_TO_ARGS[engine]
if channel is not None:
launch_options["channel"] = channel
browser = browser_type.launch(**launch_options)
return browser
return launch
@session_cache
def browser(**kwargs) -> Iterator[Browser]:
"""
The kwargs are passed as additional launch options to the
BrowserType.launch(**kwargs).
"""
# Note: one per session (must be tear-down).
config = _browser_config()
if config.install:
install_browser(config.browser_engine)
launcher = _browser_launcher()
try:
browser = launcher(**kwargs)
except Error:
if config.install is None:
install_browser(config.browser_engine)
browser = launcher(**kwargs)
else:
raise
yield browser
browser.close()
@session_cache
def browser_context_kwargs() -> dict:
"""
The returned dict may be edited to change the arguments passed to
`playwright.Browser.new_context`.
Note:
This is a (robocorp.tasks) session cache, so, the same dict will be
returned over and over again.
"""
return {}
@task_cache
def context(**kwargs) -> Iterator[BrowserContext]:
from robocorp.tasks import get_current_task
pages: List[Page] = []
all_kwargs: dict = {}
all_kwargs.update(**browser_context_kwargs())
all_kwargs.update(**kwargs)
ctx = browser().new_context(**all_kwargs)
ctx.on("page", lambda page: pages.append(page))
yield ctx
task = get_current_task()
failed = False
if task is not None:
failed = task.failed
screenshot_option = _browser_config().screenshot
capture_screenshot = screenshot_option == "on" or (
failed and screenshot_option == "only-on-failure"
)
if capture_screenshot:
from robocorp.browser import screenshot
for p in pages:
if not p.is_closed():
try:
screenshot(p, log_level="ERROR" if failed else "INFO")
except Error:
pass
ctx.close()
@task_cache
def page() -> Page:
ctx: BrowserContext = context()
p = ctx.new_page()
# When the page is closed we automatically clear the cache.
p.on("close", lambda *args: page.clear_cache())
return p | /robocorp_browser-2.1.0.tar.gz/robocorp_browser-2.1.0/src/robocorp/browser/_browser_context.py | 0.671147 | 0.164181 | _browser_context.py | pypi |
from typing import Literal, Optional, Union
from playwright.sync_api import (
Browser,
BrowserContext,
ElementHandle,
Locator,
Page,
Playwright,
)
from ._browser_engines import BrowserEngine
__version__ = "2.1.0"
version_info = [int(x) for x in __version__.split(".")]
def configure(**kwargs) -> None:
"""
May be called before any other method to configure the browser settings.
Calling this method is optional (if not called a default configuration will
be used -- note that calling this method after the browser is already
initialized will have no effect).
Args:
browser_engine:
Browser engine which should be used
default="chromium"
choices=["chromium", "chrome", "chrome-beta", "msedge",
"msedge-beta", "msedge-dev", "firefox", "webkit"]
headless: If set to False the browser UI will be shown. If set to True
the browser UI will be kept hidden. If unset or set to None it'll
show the browser UI only if a debugger is detected.
slowmo:
Run interactions in slow motion.
screenshot:
Whether to automatically capture a screenshot after each task.
default="only-on-failure"
choices=["on", "off", "only-on-failure"]
viewport_size:
Size to be set for the viewport. Specified as tuple(width, height).
Note:
See also: `robocorp.browser.configure_context` to change other
arguments related to the browser context creation.
""" # noqa
from ._browser_context import _browser_config
config = _browser_config()
for key, value in kwargs.items():
if key == "viewport_size":
width, height = value
configure_context(viewport={"width": width, "height": height})
continue
if not hasattr(config, key):
raise ValueError(f"Invalid configuration: {key}.")
setattr(config, key, value)
def configure_context(**kwargs) -> None:
"""
While the most common configurations may be configured through `configure`,
not all arguments passed to `playwright.Browser.new_context` are covered.
For cases where different context keyword arguments are needed it's possible
to use this method to customize the keyword arguments passed to
`playwright.Browser.new_context`.
Example:
```python
from robocorp import browser
browser.configure_context(ignore_https_errors = True)
```
Note:
The changes done persist through the full session, so, new tasks which
create a browser context will also get the configuration changes.
If the change should not be used across tasks it's possible
to call `robocorp.browser.context(...)` with the required arguments
directly.
"""
from . import _browser_context
browser_context_kwargs = _browser_context.browser_context_kwargs()
browser_context_kwargs.update(kwargs)
def page() -> Page:
"""
Provides a managed instance of the browser page to interact with.
Returns:
The browser page to interact with.
Note that after a page is created, the same page is returned until the
current task finishes or the page is closed.
If a new page is required without closing the current page use:
```python
from robocorp import browser
page = browser.context.new_page()
```
"""
from . import _browser_context
return _browser_context.page()
def browser() -> Browser:
"""
Provides a managed instance of the browser to interact with.
Returns:
The browser which should be interacted with.
If no browser is created yet one is created and the same one
is returned on new invocations.
To customize the browser use the `configure` method (prior
to calling this method).
Note that the returned browser must not be closed. It will be
automatically closed when the task run session finishes.
"""
from . import _browser_context
return _browser_context.browser()
def playwright() -> Playwright:
"""
Provides a managed instance of playwright to interact with.
Returns:
The playwright instance to interact with.
If no playwright instance is created yet one is created and the same one
is returned on new invocations.
To customize it use the `configure` method (prior
to calling this method).
Note that the returned instance must not be closed. It will be
automatically closed when the task run session finishes.
"""
from . import _browser_context
return _browser_context.playwright()
def context(**kwargs) -> BrowserContext:
"""
Provides a managed instance of the browser context to interact with.
Returns:
The browser context instance to interact with.
If no browser context instance is created yet one is created and the
same one is returned on new invocations.
Note that the returned instance must not be closed. It will be
automatically closed when the task run session finishes.
Note:
If the context is not created it's possible to customize the context
arguments through the kwargs provided, by using the `configure(...)`
method or by editing the `configure_context(...)` returned dict.
If the context was already previously created the **kwargs passed will
be ignored.
"""
from . import _browser_context
return _browser_context.context(**kwargs)
def goto(url: str) -> Page:
"""
Changes the url of the current page (creating a page if needed).
Args:
url: Navigates to the provided URL.
Returns:
The page instance managed by the robocorp.tasks framework
(it will be automatically closed when the task finishes).
"""
p = page()
p.goto(url)
return p
def screenshot(
element: Optional[Union[Page, ElementHandle, Locator]] = None,
timeout: int = 5000,
image_type: Literal["png", "jpeg"] = "png",
log_level: Literal["INFO", "WARN", "ERROR"] = "INFO",
) -> bytes:
"""
Takes a screenshot of the given page/element/locator and saves it to the
log. If no element is provided the screenshot will target the current page.
Note: the element.screenshot can be used if the screenshot is not expected
to be added to the log.
Args:
element: The page/element/locator which should have its screenshot taken. If not
given the managed page instance will be used.
Returns:
The bytes from the screenshot.
"""
import base64
from robocorp import log
if element is None:
from . import _browser_context
element = _browser_context.page()
with log.suppress():
# Suppress log because we don't want the bytes to appear at
# the screenshot and then as the final html.
in_bytes = element.screenshot(timeout=timeout, type=image_type)
in_base64 = base64.b64encode(in_bytes).decode("ascii")
log.html(
f'<img src="data:image/{image_type};base64,{in_base64}"/>', level=log_level
)
with log.suppress():
return in_bytes
def install(browser_engine: BrowserEngine):
"""
Downloads and installs the given browser engine.
Note: Google Chrome or Microsoft Edge installations will be installed
at the default global location of your operating system overriding your
current browser installation.
Args:
browser_engine:
help="Browser engine which should be installed",
choices=[chromium", "chrome", "chrome-beta", "msedge", "msedge-beta", "msedge-dev", "firefox", "webkit"]
""" # noqa
from . import _browser_engines
_browser_engines.install_browser(browser_engine, force=False)
__all__ = [
"install",
"configure",
"page",
"browser",
"playwright",
"context",
] | /robocorp_browser-2.1.0.tar.gz/robocorp_browser-2.1.0/src/robocorp/browser/__init__.py | 0.940237 | 0.569254 | __init__.py | pypi |
from typing import List
from logging import getLogger
log = getLogger(__name__)
class _ArgDispatcher:
def __init__(self):
self._name_to_func = {}
self.argparser = self._create_argparser()
def _dispatch(self, parsed) -> int:
if not parsed.command:
self.argparser.print_help()
return 1
method = self._name_to_func[parsed.command]
dct = parsed.__dict__.copy()
dct.pop("command")
return method(**dct)
def register(self, name=None):
def do_register(func):
nonlocal name
if not name:
name = func.__code__.co_name
self._name_to_func[name] = func
return func
return do_register
def _create_argparser(self):
import argparse
parser = argparse.ArgumentParser(
prog="robo",
description="Robocorp framework for RPA development using Python.",
# TODO: Add a proper epilog once we have an actual url to point to.
# epilog="View https://github.com/robocorp/draft-python-framework/ for more information",
)
subparsers = parser.add_subparsers(dest="command")
# Run
run_parser = subparsers.add_parser(
"run",
help="run will collect tasks with the @task decorator and run the first that matches based on the task name filter.",
)
run_parser.add_argument(
dest="path",
help="The directory or file with the tasks to run.",
nargs="?",
default=".",
)
run_parser.add_argument(
"-t",
"--task",
dest="task_name",
help="The name of the task that should be run.",
default="",
)
run_parser.add_argument(
"-o",
"--output-dir",
dest="output_dir",
help="The directory where the logging output files will be stored.",
default="./output",
)
run_parser.add_argument(
"--max-log-files",
dest="max_log_files",
type=int,
help="The maximum number of output files to store the logs.",
default=5,
)
run_parser.add_argument(
"--max-log-file-size",
dest="max_log_file_size",
help="The maximum size for the log files (i.e.: 1MB, 500kb).",
default="1MB",
)
# List tasks
list_parser = subparsers.add_parser(
"list",
help="Provides output to stdout with json contents of the tasks available.",
)
list_parser.add_argument(
dest="path",
help="The directory or file from where the tasks should be listed.",
nargs="?",
default=".",
)
return parser
def process_args(self, args: List[str]) -> int:
"""
Processes the arguments and return the returncode.
"""
log.debug("Processing args: %s", " ".join(args))
parser = self._create_argparser()
try:
parsed = parser.parse_args(args)
except SystemExit as e:
code = e.code
if isinstance(code, int):
return code
if code is None:
return 0
return int(code)
return self._dispatch(parsed)
arg_dispatch = _ArgDispatcher() | /robocorp_core-0.1.3.tar.gz/robocorp_core-0.1.3/src/robo/_argdispatch.py | 0.588653 | 0.195095 | _argdispatch.py | pypi |
from pathlib import Path
from typing import Tuple
import json
import os
import sys
import traceback
from ._argdispatch import arg_dispatch
# Note: the args must match the 'dest' on the configured argparser.
@arg_dispatch.register(name="list")
def list_tasks(
path: str,
) -> int:
"""
Prints the tasks available at a given path to the stdout in json format.
[
{
"name": "task_name",
"line": 10,
"file": "/usr/code/projects/tasks.py",
"docs": "Task docstring",
},
...
]
Args:
path: The path (file or directory) from where tasks should be collected.
"""
from robo._collect_tasks import collect_tasks
from robo._task import Context
from robo._protocols import ITask
p = Path(path)
context = Context()
if not p.exists():
context.show_error(f"Path: {path} does not exist")
return 1
task: ITask
tasks_found = []
for task in collect_tasks(p):
tasks_found.append(
{
"name": task.name,
"line": task.lineno,
"file": task.filename,
"docs": getattr(task.method, "__doc__") or "",
}
)
sys.stdout.write(json.dumps(tasks_found))
return 0
# Note: the args must match the 'dest' on the configured argparser.
@arg_dispatch.register()
def run(
output_dir: str,
path: str,
task_name: str,
max_log_files: int = 5,
max_log_file_size: str = "1MB",
) -> int:
"""
Runs a task.
Args:
output_dir: The directory where output should be put.
path: The path (file or directory where the tasks should be collected from.
task_name: The name of the task to run.
max_log_files: The maximum number of log files to be created (if more would
be needed the oldest one is deleted).
max_log_file_size: The maximum size for the created log files.
Returns:
0 if everything went well.
1 if there was some error running the task.
"""
from ._collect_tasks import collect_tasks
from ._hooks import before_task_run, after_task_run
from ._protocols import ITask
from ._task import Context
from ._protocols import Status
from ._exceptions import RoboCollectError
from ._log_auto_setup import setup_auto_logging, read_filters_from_pyproject_toml
from ._log_output_setup import setup_log_output, setup_stdout_logging
# Don't show internal machinery on tracebacks:
# setting __tracebackhide__ will make it so that robocorp-logging
# won't show this frame onwards in the logging.
__tracebackhide__ = 1
p = Path(path).absolute()
context = Context()
if not p.exists():
context.show_error(f"Path: {path} does not exist")
return 1
import robo_log
filters = read_filters_from_pyproject_toml(context, p)
with setup_auto_logging(
# Note: we can't customize what's a "project" file or a "library" file, right now
# the customizations are all based on module names.
filters=filters
), setup_stdout_logging(), setup_log_output(
output_dir=Path(output_dir),
max_files=max_log_files,
max_file_size=max_log_file_size,
):
run_status = "PASS"
setup_message = ""
run_name = f"{os.path.basename(path)} - {task_name}"
robo_log.start_run(run_name)
try:
robo_log.start_task("Collect tasks", "setup", "", 0, [])
try:
if not task_name:
context.show(f"\nCollecting tasks from: {path}")
else:
context.show(f"\nCollecting task {task_name} from: {path}")
tasks: Tuple[ITask, ...] = tuple(collect_tasks(p, task_name))
if not tasks:
raise RoboCollectError(f"Did not find any tasks in: {path}")
if len(tasks) > 1:
raise RoboCollectError(
f"Expected only 1 task to be run. Found: {', '.join(t.name for t in tasks)}"
)
except Exception as e:
run_status = "ERROR"
setup_message = str(e)
robo_log.exception()
if not isinstance(e, RoboCollectError):
traceback.print_exc()
else:
context.show_error(setup_message)
return 1
finally:
robo_log.end_task("Collect tasks", "setup", run_status, setup_message)
for task in tasks:
before_task_run(task)
try:
task.run()
run_status = task.status = Status.PASS
except Exception as e:
run_status = task.status = Status.ERROR
task.message = str(e)
finally:
after_task_run(task)
returncode = 0 if task.status == Status.PASS else 1
return returncode
raise AssertionError("Should never get here.")
finally:
robo_log.end_run(run_name, run_status)
def main(args=None, exit: bool = True) -> int:
if args is None:
args = sys.argv[1:]
returncode = arg_dispatch.process_args(args)
if exit:
sys.exit(returncode)
return returncode
if __name__ == "__main__":
main() | /robocorp_core-0.1.3.tar.gz/robocorp_core-0.1.3/src/robo/cli.py | 0.62601 | 0.228049 | cli.py | pypi |
from contextlib import contextmanager
from pathlib import Path
from typing import Optional, Sequence, Union, List, Any
from ._protocols import ITask
import typing
if typing.TYPE_CHECKING:
from robo_log import Filter # @UnusedImport
def read_filters_from_pyproject_toml(context, path: Path) -> List["Filter"]:
from robo_log import Filter # @Reimport
from robo_log._config import FilterKind
filters: List[Filter] = []
while True:
pyproject = path / "pyproject.toml"
try:
if pyproject.exists():
break
except OSError:
continue
parent = path.parent
if parent == path or not parent:
# Couldn't find pyproject.toml
return filters
path = parent
try:
toml_contents = pyproject.read_text(encoding="utf-8")
except Exception:
raise OSError(f"Could not read the contents of: {pyproject}.")
obj: Any
try:
try:
import tomllib # type: ignore
except ImportError:
import tomli as tomllib # type: ignore
obj = tomllib.loads(toml_contents)
except Exception:
raise RuntimeError(f"Could not interpret the contents of {pyproject} as toml.")
# Filter(name="RPA", kind=FilterKind.log_on_project_call),
# Filter("selenium", FilterKind.log_on_project_call),
# Filter("SeleniumLibrary", FilterKind.log_on_project_call),
read_parts: List[str] = []
for part in "tool.robo.log.log_filter_rules".split("."):
read_parts.append(part)
obj = obj.get(part)
if not obj:
return filters
if part == "log_filter_rules":
if not isinstance(obj, (list, tuple)):
context.show_error(
f"Expected {'.'.join(read_parts)} to be a list in {pyproject}."
)
return filters
log_filter_rule = obj
break
elif not isinstance(obj, dict):
context.show_error(
f"Expected {'.'.join(read_parts)} to be a dict in {pyproject}."
)
return filters
# If we got here we have the 'log_filter_rules', which should be a list of
# dicts in a structure such as: {name = "difflib", kind = "log_on_project_call"}
# expected kinds are the values of the FilterKind.
if not isinstance(log_filter_rule, (list, tuple)):
return filters
for rule in log_filter_rule:
if isinstance(rule, dict):
name = rule.get("name")
kind = rule.get("kind")
if not name:
context.show_error(
f"Expected a rule from 'tool.robo.log.log_filter_rules' to have a 'name' in {pyproject}."
)
continue
if not kind:
context.show_error(
f"Expected a rule from 'tool.robo.log.log_filter_rules' to have a 'kind' in {pyproject}."
)
continue
if not isinstance(name, str):
context.show_error(
f"Expected a rule from 'tool.robo.log.log_filter_rules' to have 'name' as a str in {pyproject}."
)
continue
if not isinstance(kind, str):
context.show_error(
f"Expected a rule from 'tool.robo.log.log_filter_rules' to have 'kind' as a str in {pyproject}."
)
continue
f: Optional[FilterKind] = getattr(FilterKind, kind, None)
if f is None:
context.show_error(
f"Rule from 'tool.robo.log.log_filter_rules' has invalid 'kind': >>{kind}<< in {pyproject}."
)
continue
filters.append(Filter(name, f))
return filters
def _log_before_task_run(task: ITask):
import robo_log
robo_log.start_task(
task.name,
task.module_name,
task.filename,
task.method.__code__.co_firstlineno,
[],
)
def _log_after_task_run(task: ITask):
import robo_log
status = task.status
robo_log.end_task(task.name, task.module_name, status, task.message)
@contextmanager
def setup_auto_logging(
tracked_folders: Optional[Sequence[Union[Path, str]]] = None,
untracked_folders: Optional[Sequence[Union[Path, str]]] = None,
filters: Sequence["Filter"] = (),
):
# This needs to be called before importing code which needs to show in the log
# (user or library).
import robo_log
from robo._hooks import before_task_run
from robo._hooks import after_task_run
with robo_log.setup_auto_logging(
tracked_folders=tracked_folders,
untracked_folders=untracked_folders,
filters=filters,
):
with before_task_run.register(_log_before_task_run), after_task_run.register(
_log_after_task_run
):
try:
yield
finally:
robo_log.close_log_outputs() | /robocorp_core-0.1.3.tar.gz/robocorp_core-0.1.3/src/robo/_log_auto_setup.py | 0.75401 | 0.175609 | _log_auto_setup.py | pypi |
import json
import logging
import os
import sys
from pathlib import Path
from typing import Any, List, Optional
import webview # type: ignore
from robocorp_dialog.bridge import Bridge # type: ignore
LOGGER = logging.getLogger(__name__)
def static() -> str:
# NB: pywebview uses sys.argv[0] as base
base = Path(sys.argv[0]).resolve().parent
path = Path(__file__).resolve().parent / "static"
return os.path.relpath(str(path), str(base))
def output(obj: Any) -> None:
print(json.dumps(obj), flush=True)
def run(
elements: List,
title: str,
width: int,
height: int,
auto_height: bool,
on_top: bool,
debug: bool,
gui: Optional[str],
) -> None:
"""Spawn the dialog and wait for user input. Print the result into
stdout as JSON.
:param elements: Elements as list
:param title: Title text for window
:param width: Window width in pixels
:param height: Window height in pixels
:param auto_height: Automatic height resize
:param on_top: Always on top
:param debug: Allow developer tools
"""
try:
url = os.path.join(static(), "index.html")
LOGGER.info("Serving from '%s'", url)
LOGGER.info("Displaying %d elements", len(elements))
bridge = Bridge(elements=elements, auto_height=auto_height, on_top=on_top)
window = webview.create_window(
url=url,
js_api=bridge,
resizable=True,
text_select=True,
background_color="#0b1025",
title=title,
width=width,
height=height,
on_top=True,
)
bridge.window = window
LOGGER.info("Starting dialog")
webview.start(debug=debug, gui=gui)
if bridge.error is not None:
output({"error": bridge.error})
if bridge.result is not None:
output({"value": bridge.result})
else:
output({"error": "Aborted by user"})
except Exception as err: # pylint: disable=broad-except
output({"error": str(err)})
finally:
LOGGER.info("Dialog stopped") | /robocorp-dialog-0.5.3.tar.gz/robocorp-dialog-0.5.3/robocorp_dialog/dialog.py | 0.546738 | 0.169922 | dialog.py | pypi |
from typing import TYPE_CHECKING, Any, List, Optional, Union
from PIL import Image
from typing_extensions import deprecated
from robocorp.excel._types import PathType
from robocorp.excel.tables import Table, Tables
if TYPE_CHECKING:
from robocorp.excel.workbook import Workbook
class Worksheet:
"""Common class for worksheets to manage the worksheet's content."""
def __init__(self, workbook: "Workbook", name: str):
self._workbook = workbook
# TODO: name should be a property, with setter that also changes it in the excel
self.name: str = name
assert self.name
def append_rows_to_worksheet(
self,
content: Any,
header: bool = False,
start: Optional[int] = None,
formatting_as_empty: Optional[bool] = False,
) -> "Worksheet":
# files.append_rows_to_worksheet()
if self.name not in self._workbook.list_worksheets():
self._workbook.create_worksheet(self.name)
self._workbook.excel.append_worksheet(
self.name, content, header, start, formatting_as_empty
)
return self
def insert_image(
self, row: int, column: Union[int, str], path: PathType, scale: float = 1.0
):
# files.insert_image_to_worksheet()
image = Image.open(path)
if scale != 1.0:
fmt = image.format
width = int(image.width * float(scale))
height = int(image.height * float(scale))
image = image.resize((width, height), Image.LANCZOS)
image.format = fmt
self._workbook.excel.insert_image(row, column, image, self.name)
return self
def as_table(
self,
header: bool = False,
trim: bool = True,
start: Optional[int] = None,
) -> Table:
# files.read_worksheet_as_table()
tables = Tables()
sheet = self._workbook.excel.read_worksheet(self.name, header, start)
return tables.create_table(sheet, trim)
def as_list(self, header=False, start=None) -> List[dict]:
# files.read_worksheet()
# FIXME: actually implement
return self._workbook.excel.read_worksheet(self.name, header, start)
def rename(self, name):
# files.rename_worksheet()
self._workbook.excel.rename_worksheet(name, self.name)
self.name = name
return self
# Column operations
def delete_columns(self, start, end):
# files.delete_columns()
return None
def auto_size_columns(self, start_column, end_column, width):
# files.auto_size_columns()
pass
def hide_columns(self, start_column, end_column):
# files.hide_columns()
pass
def insert_columns_after(self, column, amount):
# files.insert_columns_after()
pass
def insert_columns_before(self, column, amount) -> None:
# files.insert_columns_before()
pass
def unhide_columns(self, start_column, end_column) -> None:
# files.unhide_columns()
pass
# Row operations
def delete_rows(self, start, end) -> None:
# files.delete_rows()
pass
def find_empty_row(self) -> int:
# files.find_empty_row()
return self._workbook.excel.find_empty_row(self.name)
def insert_rows_after(self, row, amount) -> None:
# files.insert_rows_after()
pass
def insert_rows_before(self, row, amount) -> None:
# files.insert_rows_before()
pass
# manipulate ranges of cells
def move_range(self, range_string, rows, columns, translate) -> None:
# files.move_range()
pass
def set_styles(self, args) -> None:
# files.set_styles()
pass
def get_cell_value(self, row, column):
# files.get_cell_value()
return self._workbook.excel.get_cell_value(row, column, self.name)
@deprecated("Use get_cell_value instead")
def get_value(self, row, column) -> Any:
# files.get_worksheet_value()
# This was actually an alias for get cell value
return self.get_cell_value(row, column)
def set_cell_value(
self,
row: int,
column: Union[str, int],
value: Any,
fmt: Optional[Union[str, float]] = None,
):
# files.set_cell_value()
self._workbook.excel.set_cell_value(row, column, value, self.name)
if fmt:
self.set_cell_format(row, column, fmt)
return self
def set_cell_values(self):
# files.set_cell_values()
pass
def set_cell_format(
self, row: int, column: Union[str, int], fmt: Optional[Union[str, float]]
):
# files.set_cell_format()
self._workbook.excel.set_cell_format(row, column, fmt, self.name)
return self
def set_cell_formula(self):
# files.set_cell_formula()
pass
def copy_cell_values(self):
# files.copy_cell_values()
pass
def clear_cell_range(self):
# files.clear_cell_range()
pass | /robocorp_excel-0.4.0.tar.gz/robocorp_excel-0.4.0/src/robocorp/excel/worksheet.py | 0.711431 | 0.328637 | worksheet.py | pypi |
import copy
import csv
import logging
import re
from collections import OrderedDict, namedtuple
from enum import Enum
from itertools import groupby, zip_longest
from keyword import iskeyword
from numbers import Number
from operator import itemgetter
from typing import (Any, Callable, Dict, Generator, Iterable, List, NamedTuple,
Optional, Tuple, Union)
from robocorp.excel._types import is_dict_like, is_list_like, is_namedtuple
Index = Union[int, str]
Column = Union[int, str]
Row = Union[Dict, List, Tuple, NamedTuple, set]
Data = Optional[Union[Dict[Column, Row], List[Row], "Table"]]
CellCondition = Callable[[Any], bool]
RowCondition = Callable[[Union[Index, Row]], bool]
def return_table_as_raw_list(table, heading=False):
raw_list = []
if heading:
raw_list.append(table.columns)
for index in table.index:
row = []
for column in table.columns:
row.append(table.get_cell(index, column))
raw_list.append(row)
return raw_list
def to_list(obj: Any, size: int = 1):
"""Convert (possibly scalar) value to list of `size`."""
if not is_list_like(obj):
return [obj] * int(size)
else:
return obj
def to_identifier(val: Any):
"""Convert string to valid identifier."""
val = str(val).strip()
# Replaces spaces, dashes, and slashes to underscores
val = re.sub(r"[\s\-/\\]", "_", val)
# Remove remaining invalid characters
val = re.sub(r"[^0-9a-zA-Z_]", "", val)
# Identifier can't start with digits
val = re.sub(r"^[^a-zA-Z_]+", "", val)
if not val or iskeyword(val):
raise ValueError(f"Unable to convert to identifier: {val}")
return val
def to_condition(operator: str, value: Any) -> CellCondition:
"""Convert string operator into callable condition function."""
operator = str(operator).lower().strip()
condition = {
">": lambda x: x is not None and x > value,
"<": lambda x: x is not None and x < value,
">=": lambda x: x is not None and x >= value,
"<=": lambda x: x is not None and x <= value,
"==": lambda x: x == value,
"!=": lambda x: x != value,
"is": lambda x: x is value,
"not is": lambda x: x is not value,
"contains": lambda x: x is not None and value in x,
"not contains": lambda x: x is not None and value not in x,
"in": lambda x: x in value,
"not in": lambda x: x not in value,
}.get(operator)
if not condition:
raise ValueError(f"Unknown operator: {operator}")
return condition
def if_none(value: Any, default: Any):
"""Return default if value is None."""
return value if value is not None else default
def uniq(seq: Iterable):
"""Return list of unique values while preserving order.
Values must be hashable.
"""
seen, result = {}, []
for item in seq:
if item in seen:
continue
seen[item] = None
result.append(item)
return result
class Dialect(Enum):
"""CSV dialect."""
Excel = "excel"
ExcelTab = "excel-tab"
Unix = "unix"
class Table:
"""Container class for tabular data.
Note:
Supported data formats:
- empty: None values populated according to columns/index
- list: list of data Rows
- dict: Dictionary of columns as keys and Rows as values
- table: An existing Table
Row: a namedtuple, dictionary, list or a tuple
"""
def __init__(self, data: Data = None, columns: Optional[List[str]] = None):
"""Creates a Table object.
Args:
data: Values for table, see ``Supported data formats``
columns: Names for columns, should match data dimensions
"""
self._data = []
self._columns = []
# Use public setter to validate data
if columns is not None:
self.columns = list(columns)
if not data:
self._init_empty()
elif isinstance(data, Table):
self._init_table(data)
elif is_dict_like(data):
self._init_dict(data)
elif is_list_like(data):
self._init_list(data)
else:
raise TypeError("Not a valid input format")
if columns:
self._sort_columns(columns)
self._validate_self()
def _init_empty(self):
"""Initialize table with empty data."""
self._data = []
def _init_table(self, table: "Table"):
"""Initialize table with another table."""
if not self.columns:
self.columns = table.columns
self._data = table.data
def _init_list(self, data: List[Any]):
"""Initialize table from list-like container."""
# Assume data is homogenous in regard to row type
obj = data[0]
column_names = self._column_name_getter(obj)
column_values = self._column_value_getter(obj)
# Map of source to destination column
column_map = {}
# Do not update columns if predefined
add_columns = not bool(self._columns)
for obj in data:
row = [None] * len(self._columns)
for column_src in column_names(obj):
# Check if column has been added with different name
column_dst = column_map.get(column_src, column_src)
# Dictionaries and namedtuples can
# contain unknown columns
if column_dst not in self._columns:
if not add_columns:
continue
col = self._add_column(column_dst)
# Store map of source column name to created name
column_dst = self._columns[col]
column_map[column_src] = column_dst
while len(row) < len(self._columns):
row.append(None)
col = self.column_location(column_dst)
row[col] = column_values(obj, column_src)
self._data.append(row)
def _init_dict(self, data: Dict[Column, Row]):
"""Initialize table from dict-like container."""
if not self._columns:
self._columns = list(data.keys())
# Filter values by defined columns
columns = (
to_list(values)
for column, values in data.items()
if column in self._columns
)
# Convert columns to rows
self._data = [list(row) for row in zip_longest(*columns)]
def __repr__(self):
return "Table(columns={}, rows={})".format(self.columns, self.size)
def __len__(self):
return self.size
def __iter__(self):
return self.iter_dicts(with_index=False)
def __eq__(self, other: Any):
if not isinstance(other, Table):
return False
return self._columns == other._columns and self._data == other._data
@property
def data(self):
return self._data.copy()
@property
def size(self) -> int:
return len(self._data)
@property
def dimensions(self):
return self.size, len(self._columns)
@property
def index(self):
return list(range(self.size))
@property
def columns(self):
return self._columns.copy()
@columns.setter
def columns(self, names):
"""Rename columns with given values."""
self._validate_columns(names)
self._columns = list(names)
def _validate_columns(self, names):
"""Validate that given column names can be used."""
if not is_list_like(names):
raise ValueError("Columns should be list-like")
if len(set(names)) != len(names):
raise ValueError("Duplicate column names")
if self._data and len(names) != len(self._data[0]):
raise ValueError("Invalid columns length")
def _column_name_getter(self, obj):
"""Create callable that returns column names for given obj types."""
if is_namedtuple(obj):
# Use namedtuple fields as columns
def get(obj):
return list(obj._fields)
elif is_dict_like(obj):
# Use dictionary keys as columns
def get(obj):
return list(obj.keys())
elif is_list_like(obj):
# Use either predefined columns, or
# generate range-based column values
predefined = list(self._columns)
def get(obj):
count = len(obj)
if predefined:
if count > len(predefined):
raise ValueError(
f"Data had more than defined {len(predefined)} columns"
)
return predefined[:count]
else:
return list(range(count))
else:
# Fallback to single column
def get(_):
return self._columns[:1] if self._columns else [0]
return get
def _column_value_getter(self, obj):
"""Create callable that returns column values for given object types."""
if is_namedtuple(obj):
# Get values using properties
def get(obj, column):
return getattr(obj, column, None)
elif is_dict_like(obj):
# Get values using dictionary keys
def get(obj, column):
return obj.get(column)
elif is_list_like(obj):
# Get values using list indexes
def get(obj, column):
col = self.column_location(column)
try:
return obj[col]
except IndexError:
return None
else:
# Fallback to single column
def get(obj, _):
return obj
return get
def _sort_columns(self, order):
"""Sort columns to match given order."""
unknown = set(self._columns) - set(order)
if unknown:
names = ", ".join(str(name) for name in unknown)
raise ValueError(f"Unknown columns: {names}")
cols = [self.column_location(column) for column in order]
self._columns = [self._columns[col] for col in cols]
self._data = [[row[col] for col in cols] for row in self._data]
def _validate_self(self):
"""Validate that internal data is valid and coherent."""
self._validate_columns(self._columns)
if self._data:
head = self._data[0]
if len(head) != len(self._columns):
raise ValueError("Columns length does not match data")
def index_location(self, value: Index) -> int:
try:
value = int(value)
except ValueError as err:
raise ValueError(f"Index is not a number: {value}") from err
if value < 0:
value += self.size
if self.size == 0:
raise IndexError("No rows in table")
if (value < 0) or (value >= self.size):
raise IndexError(f"Index ({value}) out of range (0..{self.size - 1})")
return value
def column_location(self, value):
"""Find location for column value."""
# Try to use as-is
try:
return self._columns.index(value)
except ValueError:
pass
# Try as integer index
try:
value = int(value)
if value in self._columns:
location = self._columns.index(value)
elif value < 0:
location = value + len(self._columns)
else:
location = value
size = len(self._columns)
if size == 0:
raise IndexError("No columns in table")
if location >= size:
raise IndexError(f"Column ({location}) out of range (0..{size - 1})")
return location
except ValueError:
pass
# No matches
options = ", ".join(str(col) for col in self._columns)
raise ValueError(f"Unknown column name: {value}, current columns: {options}")
def __getitem__(self, key):
"""Helper method for accessing items in the Table.
Examples:
table[:10] First 10 rows
table[0,1] Value in first row and second column
table[2:10,"email"] Values in "email" column for rows 3 to 11
"""
# Both row index and columns given
if isinstance(key, tuple):
index, column = key
index = self._slice_index(index) if isinstance(index, slice) else index
return self.get(indexes=index, columns=column, as_list=True)
# Row indexed with slice, all columns
elif isinstance(key, slice):
return self.get(indexes=self._slice_index(key), as_list=True)
# Single row
else:
return self.get(indexes=key, as_list=True)
def __setitem__(self, key, value):
"""Helper method for setting items in the Table.
Examples:
table[5] = ["Mikko", "Mallikas"]
table[:2] = [["Marko", "Markonen"], ["Pentti", "Penttinen"]]
"""
# Both row index and columns given
if isinstance(key, tuple):
index, column = key
index = self._slice_index(index) if isinstance(index, slice) else index
return self.set(indexes=index, columns=column, values=value)
# Row indexed with slice, all columns
elif isinstance(key, slice):
return self.set(indexes=self._slice_index(key), values=value)
# Single row
else:
return self.set(indexes=key, values=value)
def _slice_index(self, slicer):
"""Create list of index values from slice object."""
start = self.index_location(slicer.start) if slicer.start is not None else 0
end = self.index_location(slicer.stop) if slicer.stop is not None else self.size
return list(range(start, end))
def copy(self):
"""Create a copy of this table."""
return copy.deepcopy(self)
def clear(self):
"""Remove all rows from this table."""
self._data = []
def head(self, rows, as_list=False):
"""Return first n rows of table."""
indexes = self.index[: int(rows)]
return self.get_table(indexes, as_list=as_list)
def tail(self, rows, as_list=False):
"""Return last n rows of table."""
indexes = self.index[-int(rows) :]
return self.get_table(indexes, as_list=as_list)
def get(self, indexes=None, columns=None, as_list=False):
"""Get values from table. Return type depends on input dimensions.
If `indexes` and `columns` are scalar, i.e. not lists:
Returns single cell value
If either `indexes` or `columns` is a list:
Returns matching row or column
If both `indexes` and `columns` are lists:
Returns a new Table instance with matching cell values
Args:
indexes: List of indexes, or all if not given.
columns: List of columns, or all if not given.
as_list: Return as list, instead of dictionary.
"""
indexes = if_none(indexes, self.index)
columns = if_none(columns, self._columns)
if is_list_like(indexes) and is_list_like(columns):
return self.get_table(indexes, columns, as_list)
elif not is_list_like(indexes) and is_list_like(columns):
return self.get_row(indexes, columns, as_list)
elif is_list_like(indexes) and not is_list_like(columns):
return self.get_column(columns, indexes, as_list)
else:
return self.get_cell(indexes, columns)
def get_cell(self, index, column):
"""Get single cell value."""
idx = self.index_location(index)
col = self.column_location(column)
return self._data[idx][col]
def get_row(self, index: Index, columns=None, as_list=False):
"""Get column values from row.
Args:
index: Index for row.
columns: Column names to include, or all if not given.
as_list: Return row as list, instead of dictionary.
"""
columns = if_none(columns, self._columns)
idx = self.index_location(index)
if as_list:
row = []
for column in columns:
col = self.column_location(column)
row.append(self._data[idx][col])
return row
else:
row = {}
for column in columns:
col = self.column_location(column)
row[self._columns[col]] = self._data[idx][col]
return row
def get_column(self, column, indexes=None, as_list=False):
"""Get row values from column.
Args:
column: Name for column
indexes: Row indexes to include, or all if not given
as_list: Return column as dictionary, instead of list
"""
indexes = if_none(indexes, self.index)
col = self.column_location(column)
if as_list:
column = []
for index in indexes:
idx = self.index_location(index)
column.append(self._data[idx][col])
return column
else:
column = {}
for index in indexes:
idx = self.index_location(index)
column[idx] = self._data[idx][col]
return column
def get_table(
self, indexes=None, columns=None, as_list=False
) -> Union[List, "Table"]:
"""Get a new table from all cells matching indexes and columns."""
indexes = if_none(indexes, self.index)
columns = if_none(columns, self._columns)
if indexes == self.index and columns == self._columns:
return self.copy()
idxs = [self.index_location(index) for index in indexes]
cols = [self.column_location(column) for column in columns]
data = [[self._data[idx][col] for col in cols] for idx in idxs]
if as_list:
return data
else:
return Table(data=data, columns=columns)
def get_slice(self, start: Optional[Index] = None, end: Optional[Index] = None):
"""Get a new table from rows between start and end index."""
index = self._slice_index(slice(start, end))
return self.get_table(index, self._columns)
def _add_row(self, index):
"""Add a new empty row into the table."""
if index is None:
index = self.size
if index < self.size:
raise ValueError(f"Duplicate row index: {index}")
for empty in range(self.size, index):
self._add_row(empty)
self._data.append([None] * len(self._columns))
return self.size - 1
def _add_column(self, column):
"""Add a new empty column into the table."""
if column is None:
column = len(self._columns)
if column in self._columns:
raise ValueError(f"Duplicate column name: {column}")
if isinstance(column, int):
assert column >= len(self._columns)
for empty in range(len(self._columns), column):
self._add_column(empty)
self._columns.append(column)
for idx in self.index:
row = self._data[idx]
row.append(None)
return len(self._columns) - 1
def set(self, indexes=None, columns=None, values=None):
"""Sets multiple cell values at a time.
Both ``indexes`` and ``columns`` can be scalar or list-like,
which enables setting individual cells, rows/columns, or regions.
If ``values`` is scalar, all matching cells will be set to that value.
Otherwise, the length should match the cell count defined by the
other parameters.
"""
indexes = to_list(if_none(indexes, self.index))
columns = to_list(if_none(columns, self._columns))
size = len(indexes) + len(columns)
values = to_list(values, size=size)
if not len(values) == size:
raise ValueError("Values size does not match indexes and columns")
for index in indexes:
idx = self.index_location(index)
for column in columns:
col = self.column_location(column)
self.set_cell(index, column, values[idx + col])
def set_cell(self, index, column, value):
"""Set individual cell value.
If either index or column is missing, they are created.
"""
try:
idx = self.index_location(index)
except (IndexError, ValueError):
idx = self._add_row(index)
try:
col = self.column_location(column)
except (IndexError, ValueError):
col = self._add_column(column)
self._data[idx][col] = value
def set_row(self, index, values):
"""Set values in row. If index is missing, it is created."""
try:
idx = self.index_location(index)
except (IndexError, ValueError):
idx = self._add_row(index)
column_values = self._column_value_getter(values)
row = [column_values(values, column) for column in self._columns]
self._data[idx] = row
def set_column(self, column, values):
"""Set values in column. If column is missing, it is created."""
values = to_list(values, size=self.size)
if len(values) != self.size:
raise ValueError(
f"Values length ({len(values)}) should match data length ({self.size})"
)
if column not in self._columns:
self._add_column(column)
for index in self.index:
self.set_cell(index, column, values[index])
def append_row(self, row=None):
"""Append new row to table."""
self.set_row(self.size, row)
def append_rows(self, rows):
"""Append multiple rows to table."""
for row in rows:
self.append_row(row)
def append_column(self, column=None, values=None):
if column is not None and column in self._columns:
raise ValueError(f"Column already exists: {column}")
self.set_column(column, values)
def delete_rows(self, indexes: Union[Index, List[Index]]):
"""Remove rows with matching indexes."""
indexes = [self.index_location(idx) for idx in to_list(indexes)]
unknown = set(indexes) - set(self.index)
if unknown:
names = ", ".join(str(name) for name in unknown)
raise ValueError(f"Unable to remove unknown rows: {names}")
for index in sorted(indexes, reverse=True):
del self._data[index]
def delete_columns(self, columns):
"""Remove columns with matching names."""
columns = to_list(columns)
unknown = set(columns) - set(self._columns)
if unknown:
names = ", ".join(str(name) for name in unknown)
raise ValueError(f"Unable to remove unknown columns: {names}")
for column in columns:
col = self.column_location(column)
for idx in self.index:
del self._data[idx][col]
del self._columns[col]
def append_table(self, table):
"""Append data from table to current data."""
if not table:
return
indexes = []
for idx in table.index:
index = self.size + idx
indexes.append(index)
self.set(indexes=indexes, columns=table.columns, values=table.data)
def sort_by_column(self, columns, ascending=False):
"""Sort table by columns."""
columns = to_list(columns)
# Create sort criteria list, with each row as tuple of column values
values = (self.get_column(column, as_list=True) for column in columns)
values = list(zip(*values))
assert len(values) == self.size
def sorter(row):
"""Sort table by given values, while allowing for disparate types.
Note:
Order priority:
- Values by typename
- Numeric types
- None values
"""
criteria = []
for value in row[1]: # Ignore enumeration
criteria.append(
(
value is not None,
"" if isinstance(value, Number) else type(value).__name__,
value,
)
)
return criteria
# Store original index order using enumerate() before sort,
# and use it to sort data later
values = sorted(enumerate(values), key=sorter, reverse=not ascending)
idxs = [value[0] for value in values]
# Re-order data
self._data = [self._data[idx] for idx in idxs]
def group_by_column(self, column):
"""Group rows by column value and return as list of tables."""
ref = self.copy()
ref.sort_by_column(column)
col = self.column_location(column)
groups = groupby(ref.data, itemgetter(col))
result = []
ref.clear()
for _, group in groups:
table = ref.copy()
table.append_rows(group)
result.append(table)
return result
def _filter(self, condition: RowCondition):
filtered = list(filter(lambda idx: not condition(idx), self.index))
self.delete_rows(filtered)
def filter_all(self, condition: RowCondition):
"""Remove rows by evaluating `condition` for every row.
The filtering will be done in-place and all the rows evaluating as falsy
through the provided condition will be removed.
"""
def _check_row(index: int) -> bool:
row = self.get_row(index)
return condition(row)
self._filter(_check_row)
def filter_by_column(self, column: Column, condition: CellCondition):
"""Remove rows by evaluating `condition` for cells in `column`.
The filtering will be done in-place and all the rows where it evaluates to
falsy are removed.
"""
def _check_cell(index: int) -> bool:
cell = self.get_cell(index, column)
return condition(cell)
self._filter(_check_cell)
def iter_lists(self, with_index=True):
"""Iterate rows with values as lists."""
for idx, row in zip(self.index, self._data):
if with_index:
yield idx, list(row)
else:
yield list(row)
def iter_dicts(self, with_index=True) -> Generator[Dict[Column, Any], None, None]:
"""Iterate rows with values as dicts."""
for index in self.index:
row = {"index": index} if with_index else {}
for column in self._columns:
row[column] = self.get_cell(index, column)
yield row
def iter_tuples(self, with_index=True, name="Row"):
"""Iterate rows with values as namedtuples.
Converts column names to valid Python identifiers,
e.g. "First Name" -> "First_Name"
"""
columns = {column: to_identifier(column) for column in self._columns}
fields = ["index"] if with_index else []
fields.extend(columns.values())
container = namedtuple(name, fields)
for row in self.iter_dicts(with_index):
row = {columns[k]: v for k, v in row.items()}
yield container(**row)
def to_list(self, with_index=True):
"""Convert table to list representation."""
export = []
for index in self.index:
row = OrderedDict()
if with_index:
row["index"] = index
for column in self._columns:
row[column] = self.get_cell(index, column)
export.append(row)
return export
def to_dict(self, with_index=True):
"""Convert table to dict representation."""
export = OrderedDict()
if with_index:
export["index"] = self.index
for column in self._columns:
export[column] = []
for index in self.index:
for column in self._columns:
value = self.get_cell(index, column)
export[column].append(value)
return export
class Tables:
"""``Tables`` is a library for manipulating tabular data.
It can import data from various sources and apply different operations to it.
Common use-cases are reading and writing CSV files, inspecting files in
directories, or running tasks using existing Excel data.
**Import types**
The data a table can be created from can be of two main types:
1. An iterable of individual rows, like a list of lists, or list of dictionaries
2. A dictionary of columns, where each dictionary value is a list of values
For instance, these two input values:
.. code-block:: python
data1 = [
{"name": "Mark", "age": 58},
{"name": "John", "age": 22},
{"name": "Adam", "age": 67},
]
data2 = {
"name": ["Mark", "John", "Adam"],
"age": [ 58, 22, 67],
}
Would both result in the following table:
+-------+------+-----+
| Index | Name | Age |
+=======+======+=====+
| 0 | Mark | 58 |
+-------+------+-----+
| 1 | John | 22 |
+-------+------+-----+
| 2 | Adam | 67 |
+-------+------+-----+
**Indexing columns and rows**
Columns can be referred to in two ways: either with a unique string
name or their position as an integer. Columns can be named either when
the table is created, or they can be (re)named dynamically with keywords.
The integer position can always be used, and it starts from zero.
For instance, a table with columns "Name", "Age", and "Address" would
allow referring to the "Age" column with either the name "Age" or the
number 1.
Rows do not have a name, but instead only have an integer index. This
index also starts from zero. Keywords where rows are indexed also support
negative values, which start counting backwards from the end.
For instance, in a table with five rows, the first row could be referred
to with the number 0. The last row could be accessed with either 4 or
-1.
Examples:
The ``Tables`` library can load tabular data from various other libraries
and manipulate it.
.. code-block:: python
from robocorp.excel.tables import Tables
tables = Tables()
orders = tables.read_table_from_csv(
"orders.csv", columns=["name", "mail", "product"]
)
customers = tables.group_table_by_column(rows, "mail")
for customer in customers:
for order in customer:
add_cart(order)
make_order()
"""
def __init__(self):
self.logger = logging.getLogger(__name__)
@staticmethod
def _requires_table(obj: Any):
if not isinstance(obj, Table):
raise TypeError("Method requires Table object")
def create_table(
self, data: Data = None, trim: bool = False, columns: List[str] = None
) -> Table:
"""Create Table object from data.
Data can be a combination of various iterable containers, e.g.
list of lists, list of dicts, dict of lists.
Args:
data: Source data for table
trim: Remove all empty rows from the end of the worksheet,
default `False`
columns: Names of columns (optional)
Returns:
Table: Table object
See the main documentation for more information about
supported data types.
Example:
.. code-block:: python
# Create a new table using a Dictionary of Lists
# Because of the dictionary keys the column names will be automatically set
from robocorp.excel.tables import Tables
tables = Tables()
table_data_name = ["Mark", "John", "Amy"]
table_data_age = [58, 22, 67]
table_data = { name: table_data_name, age: table_data_age }
table = tables.create_table(table_data)
"""
table = Table(data, columns)
if trim:
self.trim_empty_rows(table)
self.trim_column_names(table)
self.logger.info("Created table: %s", table)
return table
def export_table(
self, table: Table, with_index: bool = False, as_list: bool = True
) -> Union[List, Dict]:
"""Convert a table object into standard Python containers.
Args:
table: Table to convert to dict
with_index: Include index in values
as_list: Export data as list instead of dict
Returns
(Union[list, dict]): A List or Dictionary that represents the table
Example:
.. code-block:: python
from robocorp.excel.tables import Tables
tables = Tables()
table_data_name = ["Mark", "John", "Amy"]
table_data_age = [58, 22, 67]
table_data = { name: table_data_name, age: table_data_age }
table = tables.create_table(table_data)
# manipulate the table..
export = tables.export_table(table)
"""
self._requires_table(table)
if as_list:
return table.to_list(with_index)
else:
return table.to_dict(with_index)
def copy_table(self, table: Table) -> Table:
"""Make a copy of a table object.
Args:
table: Table to copy
Returns:
Table: Table object
"""
self._requires_table(table)
return table.copy()
def clear_table(self, table: Table):
"""Clear table in-place, but keep columns.
Args:
table: Table to clear
Example:
.. code-block:: python
from robocorp.excel.tables import Tables
tables = Tables()
tables.clear_table(table)
"""
self._requires_table(table)
table.clear()
def merge_tables(self, *tables: Table, index: Optional[str] = None) -> Table:
"""Create a union of two tables and their contents.
Args:
tables: Tables to merge
index: Column name to use as index for merge
Returns:
Table: Table object
By default, rows from all tables are appended one after the other.
Optionally a column name can be given with ``index``, which is
used to merge rows together.
Example:
For instance, a ``name`` column could be used to identify
unique rows and the merge operation should overwrite values
instead of appending multiple copies of the same name.
====== =====
Name Price
====== =====
Egg 10.0
Cheese 15.0
Ham 20.0
====== =====
====== =====
Name Stock
====== =====
Egg 12.0
Cheese 99.0
Ham 0.0
====== =====
.. code-block:: python
from robocorp.excel.tables import Tables
tables = Tables()
products = tables.merge_tables(prices, stock, index="Name")
for product in products:
print(f'Product: {product["Name"]}, Product: {product["Price"]}'
"""
if index is None:
return self._merge_by_append(tables)
else:
return self._merge_by_index(tables, index)
def _merge_by_append(self, tables: Tuple[Table, ...]):
"""Merge tables by appending columns and rows."""
columns = uniq(column for table in tables for column in table.columns)
merged = Table(columns=columns)
for table in tables:
merged.append_rows(table)
return merged
def _merge_by_index(self, tables: Tuple[Table, ...], index: str):
"""Merge tables by using a column as shared key."""
columns = uniq(column for table in tables for column in table.columns)
merged = Table(columns=columns)
seen = {}
def find_index(row):
"""Find index for row, if key already exists."""
value = row[index]
if value in seen:
return seen[value]
for row_ in merged.iter_dicts(True):
if row_[index] == value:
seen[value] = row_["index"]
return row_["index"]
return None
for table in tables:
for row in table.iter_dicts(False):
row_index = find_index(row)
if row_index is None:
merged.append_row(row)
else:
for column, value in row.items():
merged.set_cell(row_index, column, value)
return merged
def get_table_dimensions(self, table: Table) -> Tuple[int, int]:
"""Return table dimensions, as (rows, columns).
Args:
table: Table to inspect
Returns:
(Tuple[int, int]): Two integer values that represent the number of rows and columns
Example:
.. code-block:: robotframework
${rows} ${columns}= Get table dimensions ${table}
Log Table has ${rows} rows and ${columns} columns.
"""
self._requires_table(table)
return table.dimensions
def rename_table_columns(
self, table: Table, names: List[Union[str, None]], strict: bool = False
):
"""Renames columns in the Table with given values.
Columns with name as ``None`` will use the previous value.
Args:
table: Table to modify
names: List of new column names
strict: If True, raises ValueError if column lengths
do not match
The renaming will be done in-place.
Examples:
.. code-block:: robotframework
# Initially set the column names
${columns}= Create list First Second Third
Rename table columns ${table} ${columns}
# First, Second, Third
# Update the first and second column names to Uno and Dos
${columns}= Create list Uno Dos
Rename table columns ${table} ${columns}
# Uno, Dos, Third
"""
self._requires_table(table)
before = table.columns
if strict and len(before) != len(names):
raise ValueError("Column lengths do not match")
after = []
for old, new in zip_longest(before, names):
if old is None:
break
elif new is None:
after.append(old)
else:
after.append(new)
table.columns = after
def add_table_column(
self, table: Table, name: Optional[str] = None, values: Any = None
):
"""Append a column to a table.
Args:
table: Table to modify
name: Name of new column
values: Value(s) for new column
The ``values`` can either be a list of values, one for each row, or
one single value that is set for all rows.
Examples:
.. code-block:: robotframework
# Add empty column
Add table column ${table}
# Add empty column with name
Add table column ${table} name=Home Address
# Add new column where every every row has the same value
Add table column ${table} name=TOS values=${FALSE}
# Add new column where every row has a unique value
${is_first}= Create list ${TRUE} ${FALSE} ${FALSE}
Add table column ${table} name=IsFirst values=${is_first}
"""
self._requires_table(table)
table.append_column(name, values)
def add_table_row(self, table: Table, values: Any = None):
"""Append rows to a table.
Args:
table: Table to modify
values: Value(s) for new row
The ``values`` can either be a list of values, or a dictionary
where the keys match current column names. Values for unknown
keys are discarded.
It can also be a single value that is set for all columns,
which is ``None`` by default.
Examples:
.. code-block:: robotframework
# Add empty row
Add table row ${table}
# Add row where every column has the same value
Add table row ${table} Unknown
# Add values per column
${values}= Create dictionary Username=Mark Mail=mark@robocorp.com
Add table row ${table} ${values}
"""
self._requires_table(table)
table.append_row(values)
def get_table_row(
self, table: Table, row: Index, as_list: bool = False
) -> Union[Dict, List]:
"""Get a single row from a table.
Args:
table: Table to read
:param row: Row to read
:param as_list: Return list instead of dictionary
Returns:
(Union[dict, list]): Dictionary or List of table row
Example:
.. code-block:: robotframework
# returns the first row in the table
${first}= Get table row ${orders}
# returns the last row in the table
${last}= Get table row ${orders} -1 as_list=${TRUE}
"""
self._requires_table(table)
values = table.get_row(row, as_list=as_list)
return values
def get_table_column(self, table: Table, column: Column) -> List:
"""Get all values for a single column in a table.
Args:
table: Table to read
column: Column to read
Returns:
list: List of the rows in the selected column
Example:
.. code-block:: robotframework
${emails}= Get table column ${users} E-Mail Address
"""
self._requires_table(table)
col = table.get_column(column, as_list=True)
return col
def set_table_row(self, table: Table, row: Index, values: Any):
"""Assign values to a row in the table.
Args:
table: Table to modify
row: Row to modify
values: Value(s) to set
The ``values`` can either be a list of values, or a dictionary
where the keys match current column names. Values for unknown
keys are discarded.
It can also be a single value that is set for all columns.
Examples:
.. code-block:: robotframework
${columns}= Create list One Two Three
${table}= Create table columns=${columns}
${values}= Create list 1 2 3
Set table row ${table} 0 ${values}
${values}= Create dictionary One=1 Two=2 Three=3
Set table row ${table} 1 ${values}
Set table row ${table} 2 ${NONE}
"""
self._requires_table(table)
table.set_row(row, values)
def set_table_column(self, table: Table, column: Column, values: Any):
"""Assign values to a column in the table.
Args:
table: Table to modify
column: Column to modify
values: Value(s) to set
The ``values`` can either be a list of values, one for each row, or
one single value that is set for all rows.
Examples:
.. code-block:: robotframework
# Set different value for each row (sizes must match)
${ids}= Create list 1 2 3 4 5
Set table column ${users} userId ${ids}
# Set the same value for all rows
Set table column ${users} email ${NONE}
"""
self._requires_table(table)
table.set_column(column, values)
def pop_table_row(
self, table: Table, row: Optional[Index] = None, as_list: bool = False
) -> Union[Dict, List]:
"""Remove row from table and return it.
Args:
table: Table to modify
row: Row index, pops first row if none given
as_list: Return list instead of dictionary
Returns:
(Union[dict, list]): Dictionary or List of the removed, popped, row
Examples:
.. code-block:: robotframework
# Pop the firt row in the table and discard it
Pop table row ${orders}
# Pop the last row in the table and store it
${row}= Pop table row ${data} -1 as_list=${TRUE}
"""
self._requires_table(table)
row = if_none(row, table.index[0])
values = table.get_row(row, as_list=as_list)
table.delete_rows(row)
return values
def pop_table_column(
self, table: Table, column: Optional[Column] = None
) -> Union[Dict, List]:
"""Remove column from table and return it.
Args:
table: Table to modify
column: Column to remove
Returns:
(Union[dict, list]): Dictionary or List of the removed, popped, column
Examples:
.. code-block:: robotframework
# Remove column from table and discard it
Pop table column ${users} userId
# Remove column from table and iterate over it
${ids}= Pop table column ${users} userId
FOR ${id} IN @{ids}
Log User id: ${id}
END
"""
self._requires_table(table)
column: Column = if_none(column, table.columns[0])
values = self.get_table_column(table, column)
table.delete_columns(column)
return values
def get_table_slice(
self, table: Table, start: Optional[Index] = None, end: Optional[Index] = None
) -> Union[Table, List[List]]:
"""Return a new Table from a range of given Table rows.
Args:
table: Table to read from
start: Start index (inclusive)
start: End index (exclusive)
Returns:
(Union[Table, list[list]]): Table object of the selected rows
If ``start`` is not defined, starts from the first row.
If ``end`` is not defined, stops at the last row.
Examples:
.. code-block:: robotframework
# Get all rows except first five
${slice}= Get table slice ${table} start=5
# Get rows at indexes 5, 6, 7, 8, and 9
${slice}= Get table slice ${table} start=5 end=10
# Get all rows except last five
${slice}= Get table slice ${table} end=-5
"""
self._requires_table(table)
return table.get_slice(start, end)
def set_row_as_column_names(self, table: Table, row: Index):
"""Set existing row as names for columns.
Args:
table: Table to modify
row: Row to use as column names
Example:
.. code-block:: robotframework
# Set the column names based on the first row
Set row as column names ${table} 0
"""
values = self.pop_table_row(table, row, as_list=True)
table.columns = values
def table_head(
self, table: Table, count: int = 5, as_list: bool = False
) -> Union[Table, List[List]]:
"""Return first ``count`` rows from a table.
Args:
table: Table to read from
count: Number of lines to read
as_list: Return list instead of Table
Returns:
(Union[Table, List[List]]): Return Table object or List of the selected rows
Example:
.. code-block:: robotframework
# Get the first 10 employees
${first}= Table head ${employees} 10
"""
self._requires_table(table)
return table.head(count, as_list)
def table_tail(
self, table: Table, count: int = 5, as_list: bool = False
) -> Union[Table, List[List]]:
"""Return last ``count`` rows from a table.
Args:
table: Table to read from
count: Number of lines to read
as_list: Return list instead of Table
Returns:
(Union[Table, List[List]]): Return Table object or List of the selected rows
Example:
.. code-block:: robotframework
# Get the last 10 orders
${latest}= Table tail ${orders} 10
"""
self._requires_table(table)
return table.tail(count, as_list)
def get_table_cell(self, table: Table, row: Index, column: Column) -> Any:
"""Get a cell value from a table.
Args:
table: Table to read from
row: Row of cell
column: Column of cell
Returns:
(Any): Cell value
Examples:
.. code-block:: robotframework
# Get the value in the first row and first column
Get table cell ${table} 0 0
# Get the value in the last row and first column
Get table cell ${table} -1 0
# Get the value in the last row and last column
Get table cell ${table} -1 -1
# Get the value in the third row and column "Name"
Get table cell ${table} 2 Name
"""
self._requires_table(table)
return table.get_cell(row, column)
def set_table_cell(self, table: Table, row: Index, column: Column, value: Any):
"""Set a cell value in a table.
Args:
table: Table to modify to
row: Row of cell
column: Column of cell
value: Value to set
Examples:
.. code-block:: robotframework
# Set the value in the first row and first column to "First"
Set table cell ${table} 0 0 First
# Set the value in the last row and first column to "Last"
Set table cell ${table} -1 0 Last
# Set the value in the last row and last column to "Corner"
Set table cell ${table} -1 -1 Corner
# Set the value in the third row and column "Name" to "Unknown"
Set table cell ${table} 2 Name Unknown
"""
self._requires_table(table)
table.set_cell(row, column, value)
def find_table_rows(
self, table: Table, column: Column, operator: str, value: Any
) -> Table:
"""Find all the rows in a table which match a condition for a given column.
Args:
table: Table to search into.
column: Name or position of the column to compare with.
operator: Comparison operator used with every cell value on the
specified column.
value: Value to compare against.
Returns:
Table: New `Table` object containing all the rows matching the condition.
Supported operators:
============ ========================================
Operator Description
============ ========================================
> Cell value is larger than
< Cell value is smaller than
>= Cell value is larger or equal than
<= Cell value is smaller or equal than
== Cell value is equal to
!= Cell value is not equal to
is Cell value is the same object
not is Cell value is not the same object
contains Cell value contains given value
not contains Cell value does not contain given value
in Cell value is in given value
not in Cell value is not in given value
============ ========================================
Returns the matches as a new `Table` instance.
Examples:
.. code-block:: robotframework
# Find all rows where price is over 200
@{rows} = Find table rows ${table} Price > ${200}
# Find all rows where the status does not contain "removed"
@{rows} = Find table rows ${table} Status not contains removed
"""
self._requires_table(table)
condition = to_condition(operator, value)
matches = []
for index in table.index:
cell = table.get_cell(index, column)
if condition(cell):
matches.append(index)
return table.get_table(matches)
def sort_table_by_column(
self, table: Table, column: Column, ascending: bool = True
):
"""Sort a table in-place according to ``column``.
Args:
table: Table to sort
column: Column to sort with
ascending: Table sort order
Examples:
.. code-block:: robotframework
# Sorts the `order_date` column ascending
Sort table by column ${orders} order_date
# Sorts the `order_date` column descending
Sort table by column ${orders} order_date ascending=${FALSE}
"""
self._requires_table(table)
table.sort_by_column(column, ascending=ascending)
def group_table_by_column(self, table: Table, column: Column) -> List[Table]:
"""Group a table by ``column`` and return a list of grouped Tables.
Args:
table: Table to use for grouping
column: Column which is used as grouping criteria
Returns:
(List[Table]): List of Table objects
Example:
.. code-block:: robotframework
# Groups rows of matching customers from the `customer` column
# and returns the groups or rows as Tables
@{groups}= Group table by column ${orders} customer
# An example of how to use the List of Tables once returned
FOR ${group} IN @{groups}
# Process all orders for the customer at once
Process order ${group}
END
"""
self._requires_table(table)
groups = table.group_by_column(column)
self.logger.info("Found %s groups", len(groups))
return groups
def filter_table_by_column(
self, table: Table, column: Column, operator: str, value: Any
):
"""Remove all rows where column values don't match the given condition.
Args:
table: Table to filter
column: Column to filter with
operator: Filtering operator, e.g. >, <, ==, contains
value: Value to compare column to (using operator)
See the keyword ``Find table rows`` for all supported operators
and their descriptions.
The filtering will be done in-place.
Examples:
.. code-block:: robotframework
# Only accept prices that are non-zero
Filter table by column ${table} price != ${0}
# Remove uwnanted product types
@{types}= Create list Unknown Removed
Filter table by column ${table} product_type not in ${types}
"""
self._requires_table(table)
condition = to_condition(operator, value)
before = len(table)
table.filter_by_column(column, condition)
after = len(table)
self.logger.info("Filtered %d rows", after - before)
def filter_table_with_function(self, table: Table, func: Callable, *args):
"""Filters the table rows with the given func.
Run a function for each row of a table, then remove all rows where the called
keyword returns a falsy value.
Can be used to create custom RF keyword based filters.
Args:
table: Table to modify.
func: Function used as filter.
args: Additional keyword arguments to be passed. (optional)
The row object will be given as the first argument to the filtering keyword.
"""
self._requires_table(table)
def condition(row: Row) -> bool:
return func(row, *args)
before = len(table)
table.filter_all(condition)
after = len(table)
self.logger.info("Removed %d row(s)", before - after)
def map_column_values(self, table: Table, column: Column, func: Callable, *args):
"""Maps given function to column values.
Run a function for each cell in a given column, and replace its content with
the return value.
Can be used to easily convert column types or values in-place.
Args:
table: Table to modify.
column: Column to modify.
func: Mapping function.
args: Additional keyword arguments. (optional)
The cell value will be given as the first argument to the mapping keyword.
Examples:
.. code-block:: robotframework
# Convert all columns values to a different type
Map column values ${table} Price Convert to integer
# Look up values with a custom keyword
Map column values ${table} User Map user ID to name
"""
self._requires_table(table)
values = []
for index in table.index:
cell = table.get_cell(index, column)
output = func(cell, *args)
values.append(output)
table.set_column(column, values)
def filter_empty_rows(self, table: Table):
"""Remove all rows from a table which have only ``None`` values.
Args:
table: Table to filter
The filtering will be done in-place.
Example:
.. code-block:: robotframework
Filter empty rows ${table}
"""
self._requires_table(table)
empty = []
for idx, row in table.iter_lists():
if all(value is None for value in row):
empty.append(idx)
table.delete_rows(empty)
def trim_empty_rows(self, table: Table):
"""Remove all rows from the *end* of a table, which have only ``None`` as values.
Args:
table: Table to filter
The filtering will be done in-place.
Example:
.. code-block:: robotframework
Trim empty rows ${table}
"""
self._requires_table(table)
empty = []
for idx in reversed(table.index):
row = table[idx]
if any(value is not None for value in row):
break
empty.append(idx)
table.delete_rows(empty)
def trim_column_names(self, table: Table):
"""Remove all extraneous whitespace from column names.
Args:
table: Table to filter
The filtering will be done in-place.
Example:
.. code-block:: robotframework
# This example will take colums such as:
# "One", "Two ", " Three "
# and trim them to become the below:
# "One", "Two", "Three"
Trim column names ${table}
"""
self._requires_table(table)
table.columns = [
column.strip() if isinstance(column, str) else column
for column in table.columns
]
def read_table_from_csv(
self,
path: str,
header: Optional[bool] = None,
columns: Optional[List[str]] = None,
dialect: Optional[Union[str, Dialect]] = None,
delimiters: Optional[str] = None,
column_unknown: str = "Unknown",
encoding: Optional[str] = None,
) -> Table:
"""Read a CSV file as a table.
Args:
path: Path to CSV file
header: CSV file includes header
columns: Names of columns in resulting table
dialect: Format of CSV file
delimiters: String of possible delimiters
column_unknown: Column name for unknown fields
encoding: Text encoding for input file,
uses system encoding by default
Returns:
Table: Table object
By default, attempts to deduce the CSV format and headers
from a sample of the input file. If it's unable to determine
the format automatically, the dialect and header will
have to be defined manually.
Builtin ``dialect`` values are ``excel``, ``excel-tab``, and ``unix``,
and ``header`` is boolean argument (``True``/``False``). Optionally a
set of valid ``delimiters`` can be given as a string.
The ``columns`` argument can be used to override the names of columns
in the resulting table. The amount of columns must match the input
data.
If the source data has a header and rows have more fields than
the header defines, the remaining values are put into the column
given by ``column_unknown``. By default, it has the value "Unknown".
Examples:
.. code-block:: robotframework
# Source dialect is deduced automatically
${table}= Read table from CSV export.csv
Log Found columns: ${table.columns}
# Source dialect is known and given explicitly
${table}= Read table from CSV export-excel.csv dialect=excel
Log Found columns: ${table.columns}
"""
sniffer = csv.Sniffer()
with open(path, newline="", encoding=encoding) as fd:
sample = fd.readline()
if dialect is None:
dialect_name = sniffer.sniff(sample, delimiters)
elif isinstance(dialect, Dialect):
dialect_name = dialect.value
else:
dialect_name = dialect
if header is None:
header = sniffer.has_header(sample)
with open(path, newline="", encoding=encoding) as fd:
if header:
reader = csv.DictReader(
fd, dialect=dialect_name, restkey=str(column_unknown)
)
else:
reader = csv.reader(fd, dialect=dialect_name)
rows = list(reader)
table = Table(rows, columns)
if header and column_unknown in table.columns:
self.logger.warning(
"CSV file (%s) had fields not defined in header, "
"which can be the result of a wrong dialect",
path,
)
return table
def write_table_to_csv(
self,
table: Table,
path: str,
header: bool = True,
dialect: Union[str, Dialect] = Dialect.Excel,
encoding: Optional[str] = None,
delimiter: Optional[str] = ",",
):
"""Write a table as a CSV file.
Args:
table: Table to write
path: Path to write to
header: Write columns as header to CSV file
dialect: The format of output CSV
encoding: Text encoding for output file,
uses system encoding by default
delimiter: Delimiter character between columns
Builtin ``dialect`` values are ``excel``, ``excel-tab``, and ``unix``.
Example:
.. code-block:: robotframework
${sheet}= Read worksheet as table orders.xlsx header=${TRUE}
Write table to CSV ${sheet} output.csv
"""
self._requires_table(table)
if isinstance(dialect, Dialect):
dialect_name = dialect.value
else:
dialect_name = dialect
with open(path, mode="w", newline="", encoding=encoding) as fd:
writer = csv.DictWriter(
fd, fieldnames=table.columns, dialect=dialect_name, delimiter=delimiter
)
if header:
writer.writeheader()
for row in table.iter_dicts(with_index=False):
writer.writerow(row) | /robocorp_excel-0.4.0.tar.gz/robocorp_excel-0.4.0/src/robocorp/excel/tables.py | 0.856632 | 0.364042 | tables.py | pypi |
from typing import Literal, Optional
from robocorp.excel._types import PathType
from robocorp.excel._workbooks import XlsWorkbook, XlsxWorkbook, _load_workbook
from robocorp.excel.workbook import Workbook
def create_workbook(
fmt: Literal["xlsx", "xls"] = "xlsx",
sheet_name: Optional[str] = None,
) -> Workbook:
"""Create and open a new Excel workbook in memory.
Automatically also creates a new worksheet with the name ``sheet_name``.
**Note:** Use the ``save`` method to store the workbook into file.
Args:
fmt: The file format for the workbook. Supported file formats: ``xlsx``, ``xls``.
sheet_name: The name for the initial sheet. If None, then set to ``Sheet``.
Returns:
Workbook: The created Excel workbook object.
Example:
.. code-block:: python
workbook = create_workbook("xlsx", sheet_name="Sheet1")
"""
# FIXME: add missing types from this docs page https://support.microsoft.com/en-us/office/file-formats-that-are-supported-in-excel-0943ff2c-6014-4e8d-aaea-b83d51d46247
# check which of these our python lib behind scenes supports: xlsx, xls, xlsm, xltm, xltx, xlt, xlam, xlsb, xla, xlr, xlw, xll
# removed path, as it is only used when saved
# files.create_workbook()
if fmt == "xlsx":
workbook = XlsxWorkbook()
elif fmt == "xls":
workbook = XlsWorkbook()
else:
raise ValueError(f"Unknown format: {fmt}")
workbook.create()
if sheet_name is not None:
workbook.rename_worksheet(sheet_name, workbook.active)
return Workbook(workbook)
def open_workbook(
path: PathType,
data_only: bool = False,
read_only: bool = False,
) -> Workbook:
"""Open an existing Excel workbook.
Opens the workbook in memory.
The file can be in either ``.xlsx`` or ``.xls`` format.
Args:
path: path to Excel file
data_only: controls whether cells with formulas have either
the formula (default, False) or the value stored the last time Excel
read the sheet (True). Affects only ``.xlsx`` files.
Returns:
Workbook: Workbook object
Example:
::
# Open workbook with only path provided
workbook = open_workbook("path/to/file.xlsx")
# Open workbook with path provided and reading formulas in cells
# as the value stored
# Note: Can only be used with XLSX workbooks
workbook = open_workbook("path/to/file.xlsx", data_only=True)
"""
# files.open_workbook()
return Workbook(_load_workbook(path, data_only, read_only)) | /robocorp_excel-0.4.0.tar.gz/robocorp_excel-0.4.0/src/robocorp/excel/excel.py | 0.82251 | 0.405625 | excel.py | pypi |
import re
from pathlib import Path
from typing import Optional, Union
from urllib.parse import unquote, urlparse
import requests
PathLike = Union[str, Path]
CHUNK_SIZE = 32768
def download(
url: str,
path: Optional[PathLike] = None,
overwrite: bool = False,
) -> Path:
"""Download a file from the given URL.
If the `path` argument is not given, the file is downloaded to the
current working directory. The filename is automatically selected
based on either the response headers or the URL.
Params:
url: URL to download
path: Path to destination file
overwrite: Overwrite file if it already exists
Returns:
Path to created file
"""
if path is not None:
path = Path(path)
if path.is_dir():
dirname = path
filename = None
else:
dirname = path.parent
filename = path.name
else:
dirname = Path.cwd()
filename = None
with requests.get(url, stream=True) as response:
response.raise_for_status()
# Try to resolve filename from response headers
if filename is None:
if content_disposition := response.headers.get("Content-Disposition"):
if match := re.search(r"filename=\"(.+)\"", content_disposition):
filename = match.group(1)
# Try to parse filename from download URL
if filename is None:
parts = urlparse(url).path.split("/")
for part in reversed(parts):
part = unquote(part)
if part.strip():
filename = part
break
# Fallback value (unlikely)
if filename is None:
filename = "unknown"
output = dirname / filename
if output.exists() and not overwrite:
raise FileExistsError(f"File already exists: {output}")
dirname.mkdir(parents=True, exist_ok=True)
with open(output, "wb") as fd:
for chunk in response.iter_content(CHUNK_SIZE):
if chunk: # Filter out keep-alive new chunks
fd.write(chunk)
return output | /robocorp_http-0.4.0-py3-none-any.whl/robocorp/http/_http.py | 0.765593 | 0.222573 | _http.py | pypi |
import sys
from typing import List, Optional, Union
from selenium.common.exceptions import ( # type: ignore
JavascriptException,
TimeoutException,
)
from selenium.webdriver.support.ui import WebDriverWait # type: ignore
from typing_extensions import Literal, TypedDict
from inspector_commons.bridge.bridge_browser import BrowserBridge # type: ignore
from inspector_commons.bridge.mixin import traceback # type: ignore
class SelectorType(TypedDict):
strategy: str
value: str
class MatchType(TypedDict):
name: str
value: str
class RecordedOperation(TypedDict):
type: str
value: Union[None, str, bool]
selectors: List[SelectorType]
path: Optional[str]
time: Optional[int]
trigger: Literal["click", "change", "unknown"]
source: Optional[str]
screenshot: Optional[str]
matches: Optional[List[MatchType]]
class RecordedEvent(TypedDict):
actions: Optional[List[RecordedOperation]]
actionType: Literal["exception", "stop", "append"]
url: Optional[str]
class RecorderBridge(BrowserBridge):
"""Javascript API bridge for the web recorder functionality."""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._current_url: Optional[str] = None
self._recorded_operations: Union[List[RecordedOperation], None] = None
@traceback
def start_recording(self) -> Union[List[RecordedOperation], None]:
self.logger.debug("Starting recording event...")
self._record()
self.logger.debug("Recorded events: %s", self._recorded_operations)
return self._recorded_operations
@traceback
def stop_recording(self) -> Union[List[RecordedOperation], None]:
self.web_driver.stop_recording()
self.logger.debug("Recording should stop...")
return self._recorded_operations
@traceback
def show_guide(self):
self.web_driver.show_guide("recording-guide")
def _record(self):
for attempt_number in range(3):
self.logger.debug("Recording attempt: %s", attempt_number)
try:
self._wait_for_page_to_load()
event: RecordedEvent = self.web_driver.record_event()
self.logger.debug("Raw event: %s", event)
except JavascriptException as exc:
self.logger.debug("Ignoring Javascript exception: %s", exc)
event: RecordedEvent = {
"actionType": "exception",
"actions": None,
"url": self._current_url,
}
continue
except TimeoutException:
self.logger.debug("Retrying after script timeout")
event: RecordedEvent = {
"actionType": "exception",
"actions": None,
"url": self._current_url,
}
continue
if not event:
self.logger.error("Received empty event: %s", event)
continue
if self._handle_event(event):
# valid event so just return from function
return
self._handle_stop_event("force stop")
def _handle_event(self, event: RecordedEvent) -> bool:
self._recorded_operations = None
event_type = event["actionType"]
event_url = event["url"]
if event_url != self._current_url:
message: RecordedOperation = {
"path": None,
"time": None,
"selectors": [],
"type": "comment",
"value": f"Recorder detected that URL has changed from {event_url}",
"trigger": "unknown",
"source": event_url,
"screenshot": None,
"matches": None,
}
self._recorded_operations = [message]
self._current_url = event_url
if event_type == "exception":
self.logger.debug("Event(s) is an exception: %s", event)
elif event_type == "event":
self.logger.debug("Received event from page: %s", event["actions"])
if self._recorded_operations is None:
self._recorded_operations = []
valid_ops = self._get_valid_ops(event=event)
if valid_ops is not None:
self._recorded_operations.extend(valid_ops)
elif event_type == "stop":
self._recorded_operations = [self._handle_stop_event(event_url=event_url)]
else:
raise ValueError(f"Unknown event type: {event_type}")
return True
def _handle_stop_event(self, event_url):
self.logger.debug("Received stop from page")
message: RecordedOperation = {
"path": None,
"time": None,
"selectors": [],
"type": "command",
"value": f"Received stop from: {event_url}",
"trigger": "stop",
"matches": None,
"screenshot": None,
"source": None,
}
self.web_driver.stop_recording()
return message
def _get_valid_ops(self, event: RecordedEvent):
self.logger.debug("Testing operations: %s", event["actions"])
valid_ops = []
if event.get("actions", None) is not None and event["actions"] is not None:
for operation in event["actions"]:
if "selectors" not in operation or len(operation["selectors"]) == 0:
continue
valid_selectors = []
for selector in operation["selectors"]:
self.logger.debug("Raw event selector: %s", selector)
if selector is not None:
valid_selectors.append(selector)
operation["selectors"] = valid_selectors
if len(valid_selectors) > 0:
operation["source"] = event["url"]
valid_ops.append(operation)
if len(valid_ops) == 0:
self.logger.debug("No valid actions...")
return None
return valid_ops
def set_window_height(self, height):
self.logger.debug(
"Content sizes: %s (height) x %s (width)",
height,
self.window.DEFAULTS["width"],
)
local_width = self.window.DEFAULTS["width"]
local_width = local_width + 5 if sys.platform == "win32" else local_width
local_height = height + 5 if sys.platform == "win32" else height
self.logger.debug(
"Setting the window to: %s (height) x %s (width)",
local_height,
local_width,
)
self.window.resize(local_width, local_height)
def _wait_for_page_to_load(self):
try:
waiter = WebDriverWait(self.web_driver, 10)
waiter.until(
lambda x: x.selenium.execute_script("return document.readyState")
== "complete"
)
except Exception as ex: # pylint: disable=W0703
self.logger.debug(
"There was an exception while waiting for page to load: %s", ex
) | /robocorp_inspector_commons-0.10.2-py3-none-any.whl/inspector_commons/bridge/bridge_recorder.py | 0.683525 | 0.196344 | bridge_recorder.py | pypi |
import os
import sys
from pathlib import Path
from typing import Optional
import pytest
from robocorp import log
from robocorp.log.protocols import Status
from robocorp.log.pyproject_config import (
PyProjectInfo,
read_pyproject_toml,
read_robocorp_auto_log_config,
)
from robocorp.log.redirect import setup_stdout_logging
__version__ = "0.0.1"
version_info = [int(x) for x in __version__.split(".")]
def _setup_log_output(
output_dir: Path,
max_file_size: str = "50MB",
max_files: int = 5,
log_name: str = "log.html",
):
# This can be called after user code is imported (but still prior to its
# execution).
return log.add_log_output(
output_dir=output_dir,
max_file_size=max_file_size,
max_files=max_files,
log_html=output_dir / log_name,
)
class _State:
"""
This class helps in keeping the state of the log consistent because
the pytest integration spans multiple hooks.
"""
# Information on the pyproject.toml
pyproject_info: Optional[PyProjectInfo] = None
# 0 if all is ok at the end of the run and 1 otherwise
exitstatus = 1
_import_hook_setup = False
_run_start = 0
# information on the current test case being run.
_func_stack: list = []
# Context managers being tracked for the duration of the run.
_context_manager_stack: list = []
@classmethod
def handle_context_manager(cls, ctx):
"""
Args:
ctx: This is a context manager which should be entered
and exited when the run finishes.
"""
ctx.__enter__()
cls._context_manager_stack.append(ctx)
@classmethod
def on_run_start(cls):
cls._run_start += 1
log.start_run("pytest")
log.start_task("config and collect", "", "", 0, "")
@classmethod
def on_run_end(cls):
log.end_task("run finished (print results)", "", Status.PASS, "")
if cls.exitstatus:
log.end_run("pytest", Status.FAIL)
else:
log.end_run("pytest", Status.PASS)
for ex in reversed(cls._context_manager_stack):
ex.__exit__(None, None, None)
del cls._context_manager_stack[:]
log.close_log_outputs()
@classmethod
def _end_config_and_collect_if_needed(cls):
if cls._run_start:
log.end_task("config and collect", "", Status.PASS, "")
cls._run_start = 0
@classmethod
def on_start_test(cls, item):
cls._end_config_and_collect_if_needed()
cls._func_stack.append(
[
item.name,
item.parent.module.__name__,
Status.PASS,
"",
]
)
log.start_task(
item.name,
item.parent.module.__name__,
item.parent.module.__file__,
item.function.__code__.co_firstlineno,
"",
)
@classmethod
def on_end_test(cls, item):
name, libname, status, message = cls._func_stack.pop(-1)
log.end_task(name, libname, status, message)
@classmethod
def on_run_tests_finished(cls):
cls._end_config_and_collect_if_needed()
log.start_task("run finished (print results)", "", "", 0, "")
def pytest_load_initial_conftests(early_config, parser, args):
# This hook will not be called for conftest.py files, only for setuptools plugins
# (so, we have pytest_addoption as a 2nd option for the import hook setup).
_setup_import_hook()
def pytest_addoption(parser: pytest.Parser, pluginmanager):
parser.addoption(
"--robocorp-log-output",
default="./output",
help=(
"Defines in which directory the robocorp log output should be saved "
"(default: ./output)."
),
)
parser.addoption(
"--robocorp-log-html-name",
default="log.html",
help=("Defines the name of the final log file (default: log.html)."),
)
parser.addoption(
"--robocorp-log-max-file-size",
default="50MB",
help=("Defines the max file size (default: 50MB)."),
)
parser.addoption(
"--robocorp-log-max-files",
default="5",
help=(
"Defines the maximum number of files to keep for the logging (default: 5)."
),
)
_setup_import_hook()
class _ContextErrorReport:
def show_error(self, message: str):
pass
def _setup_import_hook() -> None:
"""
We use one of the early bootstrap modules of pytest because we want to
setup the auto-logging very early so that we can have our import hook
code in place when the user loads his code.
"""
if _State._import_hook_setup:
return
_State._import_hook_setup = True
for mod_name, mod in tuple(sys.modules.items()):
try:
f = mod.__file__
except AttributeError:
continue
if f and log._in_project_roots(f):
log.debug(
f'The module: "{mod_name}" will not be auto-logged because '
"it is already loaded."
)
# Setup the auto-logging import hook ASAP.
config: log.AutoLogConfigBase
pyproject_info = read_pyproject_toml(Path(os.path.abspath(".")))
if pyproject_info is None:
config = log.DefaultAutoLogConfig()
else:
context = _ContextErrorReport()
config = read_robocorp_auto_log_config(context, pyproject_info)
_State.pyproject_info = pyproject_info
_State.handle_context_manager(log.setup_auto_logging(config))
def pytest_configure(config) -> None:
output_dir = Path(config.getoption("--robocorp-log-output", "./output"))
max_file_size: str = config.getoption("--robocorp-log-max-file-size", "50MB")
max_files: int = int(config.getoption("--robocorp-log-max-files", "5"))
log_name: str = config.getoption("--robocorp-log-html-name", "log.html")
_State.handle_context_manager(
_setup_log_output(
output_dir,
max_file_size=max_file_size,
max_files=max_files,
log_name=log_name,
)
)
_State.handle_context_manager(setup_stdout_logging(""))
_State.on_run_start()
def pytest_unconfigure(config):
_State.on_run_end()
def pytest_sessionfinish(session, exitstatus):
_State.exitstatus = exitstatus
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_protocol(item):
_State.on_start_test(item)
yield
_State.on_end_test(item)
@pytest.hookimpl(hookwrapper=True)
def pytest_runtestloop(*args, **kwargs):
yield
_State.on_run_tests_finished()
def pytest_runtest_logreport(report):
if report.failed:
# Update the status if the test failed.
_curr_stack = _State._func_stack[-1]
if _curr_stack[2] == Status.PASS:
_curr_stack[2] = Status.FAIL
_curr_stack[3] = report.longreprtext | /robocorp_log_pytest-0.0.1.tar.gz/robocorp_log_pytest-0.0.1/src/robocorp/log_pytest/__init__.py | 0.576184 | 0.198666 | __init__.py | pypi |
import logging
import os.path
import platform
import sys
import threading
from typing import Dict, List, Optional, Sequence
log = logging.getLogger(__name__)
def normcase(s, NORMCASE_CACHE={}):
try:
return NORMCASE_CACHE[s]
except Exception:
normalized = NORMCASE_CACHE[s] = s.lower()
return normalized
IS_PYPY = platform.python_implementation() == "PyPy"
IS_WINDOWS = sys.platform == "win32"
IS_LINUX = sys.platform in ("linux", "linux2")
IS_MAC = sys.platform == "darwin"
LIBRARY_CODE_BASENAMES_STARTING_WITH = ("<",)
def _convert_to_str_and_clear_empty(roots):
new_roots = []
for root in roots:
assert isinstance(root, str), "%s not str (found: %s)" % (root, type(root))
if root:
new_roots.append(root)
return new_roots
class FilesFiltering(object):
"""
Usage is something as:
files_filtering = FilesFiltering(...)
# Detects if a given file is user-code.
files_filtering.in_project_roots(mod.__file__)
"""
def __init__(
self,
project_roots: Optional[Sequence[str]] = None,
library_roots: Optional[Sequence[str]] = None,
):
self._project_roots: List[str] = []
self._library_roots: List[str] = []
self._cache_in_project_roots: Dict[str, bool] = {}
if project_roots is None:
# If the ROBOT_ROOT is available, use it to signal it's
# user code beneath that folder.
robot_root = os.environ.get("ROBOT_ROOT")
if robot_root:
if os.path.isdir(robot_root):
project_roots = [robot_root]
if project_roots is not None:
self._set_project_roots(project_roots)
if library_roots is None:
library_roots = self._get_default_library_roots()
self._set_library_roots(library_roots)
@classmethod
def _get_default_library_roots(cls):
log.debug("Collecting default library roots.")
# Provide sensible defaults if not in env vars.
import site
roots = []
try:
import sysconfig # Python 2.7 onwards only.
except ImportError:
pass
else:
for path_name in set(("stdlib", "platstdlib", "purelib", "platlib")) & set(
sysconfig.get_path_names()
):
roots.append(sysconfig.get_path(path_name))
# Make sure we always get at least the standard library location (based on the `os` and
# `threading` modules -- it's a bit weird that it may be different on the ci, but it happens).
roots.append(os.path.dirname(os.__file__))
roots.append(os.path.dirname(threading.__file__))
if IS_PYPY:
# On PyPy 3.6 (7.3.1) it wrongly says that sysconfig.get_path('stdlib') is
# <install>/lib-pypy when the installed version is <install>/lib_pypy.
try:
import _pypy_wait # type: ignore
except ImportError:
log.debug(
"Unable to import _pypy_wait on PyPy when collecting default library roots."
)
else:
pypy_lib_dir = os.path.dirname(_pypy_wait.__file__)
log.debug("Adding %s to default library roots.", pypy_lib_dir)
roots.append(pypy_lib_dir)
if hasattr(site, "getusersitepackages"):
site_paths = site.getusersitepackages()
if isinstance(site_paths, (list, tuple)):
for site_path in site_paths:
roots.append(site_path)
else:
roots.append(site_paths)
if hasattr(site, "getsitepackages"):
site_paths = site.getsitepackages()
if isinstance(site_paths, (list, tuple)):
for site_path in site_paths:
roots.append(site_path)
else:
roots.append(site_paths)
for path in sys.path:
if os.path.exists(path) and os.path.basename(path) in (
"site-packages",
"pip-global",
):
roots.append(path)
roots.extend([os.path.realpath(path) for path in roots])
return sorted(set(roots))
def _fix_roots(self, roots):
roots = _convert_to_str_and_clear_empty(roots)
new_roots = []
for root in roots:
path = self._absolute_normalized_path(root)
if IS_WINDOWS:
new_roots.append(path + "\\")
else:
new_roots.append(path + "/")
return new_roots
def _absolute_normalized_path(self, filename, cache={}):
"""
Provides a version of the filename that's absolute and normalized.
"""
try:
return cache[filename]
except KeyError:
pass
if filename.startswith("<"):
cache[filename] = normcase(filename)
return cache[filename]
cache[filename] = normcase(os.path.abspath(filename))
return cache[filename]
def _set_project_roots(self, project_roots):
self._project_roots = self._fix_roots(project_roots)
log.debug("ROBOT_ROOTS %s\n" % project_roots)
def _get_project_roots(self):
return self._project_roots
def _set_library_roots(self, roots):
self._library_roots = self._fix_roots(roots)
log.debug("LIBRARY_ROOTS %s\n" % roots)
def _get_library_roots(self):
return self._library_roots
def _in_project_roots(self, received_filename: str) -> bool:
"""
Note: uncached. Use `in_project_roots`.
"""
DEBUG = False
if received_filename.startswith(LIBRARY_CODE_BASENAMES_STARTING_WITH):
if DEBUG:
log.debug(
"Not in in_project_roots - library basenames - starts with %s (%s)",
received_filename,
LIBRARY_CODE_BASENAMES_STARTING_WITH,
)
return False
project_roots = self._get_project_roots() # roots are absolute/normalized.
absolute_normalized_filename = self._absolute_normalized_path(received_filename)
absolute_normalized_filename_as_dir = absolute_normalized_filename + (
"\\" if IS_WINDOWS else "/"
)
found_in_project = []
for root in project_roots:
if root and (
absolute_normalized_filename.startswith(root)
or root == absolute_normalized_filename_as_dir
):
if DEBUG:
log.debug("In project: %s (%s)", absolute_normalized_filename, root)
found_in_project.append(root)
found_in_library = []
library_roots = self._get_library_roots()
for root in library_roots:
if root and (
absolute_normalized_filename.startswith(root)
or root == absolute_normalized_filename_as_dir
):
found_in_library.append(root)
if DEBUG:
log.debug("In library: %s (%s)", absolute_normalized_filename, root)
else:
if DEBUG:
log.debug(
"Not in library: %s (%s)", absolute_normalized_filename, root
)
if not project_roots:
# If we have no project roots configured, consider it being in the project
# roots if it's not found in site-packages (because we have defaults for those
# and not the other way around).
in_project = not found_in_library
if DEBUG:
log.debug(
"Final in project (no project roots): %s (%s)",
absolute_normalized_filename,
in_project,
)
else:
in_project = False
if found_in_project:
if not found_in_library:
if DEBUG:
log.debug(
"Final in project (in_project and not found_in_library): %s (True)",
absolute_normalized_filename,
)
in_project = True
else:
# Found in both, let's see which one has the bigger path matched.
if max(len(x) for x in found_in_project) > max(
len(x) for x in found_in_library
):
in_project = True
if DEBUG:
log.debug(
"Final in project (found in both): %s (%s)",
absolute_normalized_filename,
in_project,
)
return in_project
def in_project_roots(self, filename: str) -> bool:
try:
return self._cache_in_project_roots[filename]
except KeyError:
pass
in_project_roots = self._in_project_roots(filename)
self._cache_in_project_roots[filename] = in_project_roots
return in_project_roots | /robocorp_log-2.7.0-py3-none-any.whl/robocorp/log/_rewrite_filtering.py | 0.547222 | 0.192805 | _rewrite_filtering.py | pypi |
from dataclasses import dataclass
from pathlib import Path
from typing import Any, List, Optional
from robocorp import log
from robocorp.log.protocols import IContextErrorReport
@dataclass
class PyProjectInfo:
pyproject: Path
toml_contents: dict
def read_pyproject_toml(path: Path) -> Optional[PyProjectInfo]:
"""
Args:
path:
This is the path where the `pyproject.toml` file should be found.
If it's not found directly in the given path, parent folders will
be searched for the `pyproject.toml`.
Returns:
The information on the pyproject file (the toml contents and the actual
path where the pyproject.toml was found).
"""
while True:
pyproject = path / "pyproject.toml"
try:
if pyproject.exists():
break
except OSError:
continue
parent = path.parent
if parent == path or not parent:
# Couldn't find pyproject.toml
return None
path = parent
try:
toml_contents = pyproject.read_text(encoding="utf-8")
except Exception:
raise OSError(f"Could not read the contents of: {pyproject}.")
pyproject_toml: Any = None
try:
try:
import tomllib # type: ignore
except ImportError:
import tomli as tomllib # type: ignore
pyproject_toml = tomllib.loads(toml_contents)
except Exception:
raise RuntimeError(f"Could not interpret the contents of {pyproject} as toml.")
return PyProjectInfo(pyproject=pyproject, toml_contents=pyproject_toml)
def read_section_from_toml(
pyproject_info: PyProjectInfo,
section_name: str,
context: Optional[IContextErrorReport] = None,
) -> Any:
"""
Args:
pyproject_info: Information on the pyroject toml.
section_name: The name of the section to be read
i.e.: tool.robocorp.log
context: The context used to report errors.
Returns:
The section which was read.
"""
read_parts: List[str] = []
obj: Any = pyproject_info.toml_contents
parts = section_name.split(".")
last_part = parts[-1]
parts = parts[:-1]
for part in parts:
read_parts.append(part)
obj = obj.get(part)
if obj is None:
return None
elif not isinstance(obj, dict):
if context is not None:
context.show_error(
f"Expected '{'.'.join(read_parts)}' to be a dict in {pyproject_info.pyproject}."
)
return None
if not isinstance(obj, dict):
if obj is not None:
if context is not None:
context.show_error(
f"Expected '{'.'.join(read_parts)}' to be a dict in {pyproject_info.pyproject}."
)
return None
return obj.get(last_part)
def read_robocorp_auto_log_config(
context: IContextErrorReport, pyproject: PyProjectInfo
) -> log.AutoLogConfigBase:
"""
Args:
context: The context used to report errors.
pyproject: The pyproject information from where the configuration should
be loaded.
Returns:
The autolog configuration read from the given pyproject information.
"""
from robocorp.log import FilterKind
if not pyproject.toml_contents:
log.DefaultAutoLogConfig()
obj: Any = pyproject.toml_contents
filters: List[log.Filter] = []
default_library_filter_kind = FilterKind.log_on_project_call
if isinstance(obj, dict):
# Filter(name="RPA", kind=FilterKind.log_on_project_call),
# Filter("selenium", FilterKind.log_on_project_call),
# Filter("SeleniumLibrary", FilterKind.log_on_project_call),
obj = read_section_from_toml(pyproject, "tool.robocorp.log", context)
if isinstance(obj, dict):
filters = _load_filters(obj, context, pyproject.pyproject)
kind = obj.get("default_library_filter_kind")
if kind is not None:
if not isinstance(kind, str):
context.show_error(
f"Expected 'tool.robocorp.log.log_filter_rules.default_library_filter_kind' to have "
f"'kind' as a str (and not {type(kind)} in {pyproject}."
)
else:
f: Optional[log.FilterKind] = getattr(log.FilterKind, kind, None)
if f is None:
context.show_error(
f"Rule from 'tool.robocorp.log.log_filter_rules.default_library_filter_kind' "
f"has invalid 'kind': >>{kind}<< in {pyproject}."
)
else:
default_library_filter_kind = f
return log.DefaultAutoLogConfig(
filters=filters, default_library_filter_kind=default_library_filter_kind
)
def _load_filters(
obj: dict, context: IContextErrorReport, pyproject: Path
) -> List[log.Filter]:
filters: List[log.Filter] = []
log_filter_rules: list = []
list_obj = obj.get("log_filter_rules")
if not list_obj:
return filters
if isinstance(list_obj, list):
log_filter_rules = list_obj
else:
context.show_error(
f"Expected 'tool.robocorp.log.log_filter_rules' to be a list in {pyproject}."
)
return filters
# If we got here we have the 'log_filter_rules', which should be a list of
# dicts in a structure such as: {name = "difflib", kind = "log_on_project_call"}
# expected kinds are the values of the FilterKind.
for rule in log_filter_rules:
if isinstance(rule, dict):
name = rule.get("name")
kind = rule.get("kind")
if not name:
context.show_error(
f"Expected rule: {rule} from 'tool.robocorp.log.log_filter_rules' to have a 'name' in {pyproject}."
)
continue
if not kind:
context.show_error(
f"Expected rule: {rule} from 'tool.robocorp.log.log_filter_rules' to have a 'kind' in {pyproject}."
)
continue
if not isinstance(name, str):
context.show_error(
f"Expected rule: {rule} from 'tool.robocorp.log.log_filter_rules' to have 'name' as a str in {pyproject}."
)
continue
if not isinstance(kind, str):
context.show_error(
f"Expected rule: {rule} from 'tool.robocorp.log.log_filter_rules' to have 'kind' as a str in {pyproject}."
)
continue
f: Optional[log.FilterKind] = getattr(log.FilterKind, kind, None)
if f is None:
context.show_error(
f"Rule from 'tool.robocorp.log.log_filter_rules' ({rule}) has invalid 'kind': >>{kind}<< in {pyproject}."
)
continue
filters.append(log.Filter(name, f))
return filters | /robocorp_log-2.7.0-py3-none-any.whl/robocorp/log/pyproject_config.py | 0.861203 | 0.255727 | pyproject_config.py | pypi |
import datetime
import json
from logging import getLogger
from typing import Any, Callable, Dict, Iterator, List, Optional, Tuple
from .protocols import IReadLines
# Whenever the decoding changes we should bump up this version.
# 0.0.4: accepting continue/break elements
DOC_VERSION = "0.0.4"
# name, libname, source, docstring, lineno
Location = Tuple[str, str, str, str, int]
_LOGGER = getLogger(__name__)
class Decoder:
def __init__(self) -> None:
self.memo: Dict[str, str] = {}
self.location_memo: Dict[str, Location] = {}
def decode_message_type(self, message_type: str, message: str) -> Optional[dict]:
handler = _MESSAGE_TYPE_INFO[message_type]
ret = {"message_type": message_type}
try:
r = handler(self, message)
if not r:
if message_type in ("M", "P"):
return None
raise RuntimeError(
f"No return when decoding: {message_type} - {message}"
)
if not isinstance(r, dict):
ret[
"error"
] = f"Expected dict return when decoding: {message_type} - {message}. Found: {ret}"
ret.update(r)
except Exception as e:
ret["error"] = f"Error decoding: {message_type}: {e}"
return ret
def _decode_oid(decoder: Decoder, oid: str) -> str:
return decoder.memo[oid]
def _decode_float(decoder: Decoder, msg: str) -> float:
return float(msg)
def _decode_int(decoder: Decoder, msg: str) -> int:
return int(msg)
def _decode_str(decoder: Decoder, msg: str) -> str:
return msg
def _decode_json(decoder: Decoder, msg: str) -> str:
return json.loads(msg)
def _decode_dateisoformat(decoder: Decoder, msg: str) -> str:
d: datetime.datetime = datetime.datetime.fromisoformat(msg)
# The internal time is in utc, so, we need to decode it to the current timezone.
d = d.astimezone()
return d.isoformat(timespec="milliseconds")
def _decode(message_definition: str) -> Callable[[Decoder, str], Any]:
if message_definition == "memorize":
return decode_memo
elif message_definition == "memorize_path":
return decode_path_location
names: List[str] = []
name_to_decode: dict = {}
for s in message_definition.split(","):
s = s.strip()
i = s.find(":")
if i != -1:
s, decode = s.split(":", 1)
else:
raise AssertionError(f"Unexpected definition: {message_definition}")
names.append(s)
if decode == "oid":
name_to_decode[s] = _decode_oid
elif decode == "int":
name_to_decode[s] = _decode_int
elif decode == "float":
name_to_decode[s] = _decode_float
elif decode == "str":
name_to_decode[s] = _decode_str
elif decode == "json.loads":
name_to_decode[s] = _decode_json
elif decode == "dateisoformat":
name_to_decode[s] = _decode_dateisoformat
elif decode == "loc_id":
name_to_decode[s] = "loc_id"
elif decode == "loc_and_doc_id":
name_to_decode[s] = "loc_and_doc_id"
else:
raise RuntimeError(f"Unexpected: {decode}")
def dec_impl(decoder: Decoder, message: str):
splitted = message.split("|", len(names) - 1)
ret: Dict[str, Any] = {}
for i, s in enumerate(splitted):
name = names[i]
try:
dec_func = name_to_decode[name]
if dec_func == "loc_id":
try:
info = decoder.location_memo[s]
except Exception:
_LOGGER.critical(f"Could not find memo: {s}")
raise
ret["name"] = info[0]
ret["libname"] = info[1]
ret["source"] = info[2]
ret["lineno"] = info[4]
elif dec_func == "loc_and_doc_id":
try:
info = decoder.location_memo[s]
except Exception:
_LOGGER.critical(f"Could not find location_memo: {s}")
raise
ret["name"] = info[0]
ret["libname"] = info[1]
ret["source"] = info[2]
ret["doc"] = info[3]
ret["lineno"] = info[4]
else:
ret[name] = dec_func(decoder, s)
except Exception:
ret[name] = None
return ret
return dec_impl
def decode_memo(decoder: Decoder, message: str) -> None:
"""
Args:
message: something as 'a:"Start Suite"'
"""
memo_id: str
memo_value: str
memo_id, memo_value = message.split(":", 1)
# Note: while the json.loads could actually load anything, in the spec we only
# have oid for string messages (which is why it's ok to type it as that).
memo_value = json.loads(memo_value)
decoder.memo[memo_id] = memo_value
def decode_path_location(decoder: Decoder, message: str) -> None:
"""
Args:
message: something as 'a:b|c|d|e|33'
"""
memo_id, memo_references = message.split(":", 1)
name_id, libname_id, source_id, doc_id, lineno = memo_references.split("|", 4)
decoder.location_memo[memo_id] = (
decoder.memo[name_id],
decoder.memo[libname_id],
decoder.memo[source_id],
decoder.memo[doc_id],
int(lineno),
)
MESSAGE_TYPE_YIELD_RESUME = "YR"
MESSAGE_TYPE_YIELD_SUSPEND = "YS"
MESSAGE_TYPE_YIELD_FROM_RESUME = "YFR"
MESSAGE_TYPE_YIELD_FROM_SUSPEND = "YFS"
# Note: the spec appears in 3 places (and needs to be manually updated accordingly).
# _decoder.py (this file)
# decoder.ts
# format.md
_MESSAGE_TYPE_INFO: Dict[str, Callable[[Decoder, str], Any]] = {}
def _build_decoding():
from robocorp.log._decoder_spec import SPEC
for line in SPEC.splitlines(keepends=False):
line = line.strip()
if not line or line.startswith("#"):
continue
try:
key, val = line.split(":", 1)
key = key.strip()
val = val.strip()
except ValueError:
# No ':' there, must be = msg
key, val = line.split("=", 1)
key = key.strip()
val = val.strip()
_MESSAGE_TYPE_INFO[key] = _MESSAGE_TYPE_INFO[val]
else:
_MESSAGE_TYPE_INFO[key] = _decode(val)
_build_decoding()
def iter_decoded_log_format(stream: IReadLines) -> Iterator[dict]:
decoder: Decoder = Decoder()
line: str
message_type: str
message: str
decoded: Optional[dict]
for line in stream.readlines():
line = line.strip()
if line:
try:
message_type, message = line.split(" ", 1)
except Exception:
raise RuntimeError(f"Error decoding line: {line}")
decoded = decoder.decode_message_type(message_type, message)
if decoded:
yield decoded | /robocorp_log-2.7.0-py3-none-any.whl/robocorp/log/_decoder.py | 0.795618 | 0.184088 | _decoder.py | pypi |
import os.path
import sys
import platform
import logging
from typing import Sequence, Optional, List, Dict
import threading
log = logging.getLogger(__name__)
def normcase(s, NORMCASE_CACHE={}):
try:
return NORMCASE_CACHE[s]
except:
normalized = NORMCASE_CACHE[s] = s.lower()
return normalized
IS_PYPY = platform.python_implementation() == "PyPy"
IS_WINDOWS = sys.platform == "win32"
IS_LINUX = sys.platform in ("linux", "linux2")
IS_MAC = sys.platform == "darwin"
LIBRARY_CODE_BASENAMES_STARTING_WITH = ("<",)
def _convert_to_str_and_clear_empty(roots):
new_roots = []
for root in roots:
assert isinstance(root, str), "%s not str (found: %s)" % (root, type(root))
if root:
new_roots.append(root)
return new_roots
class FilesFiltering(object):
"""
Usage is something as:
files_filtering = FilesFiltering(...)
# Detects if a given file is user-code.
files_filtering.in_project_roots(mod.__file__)
"""
def __init__(
self,
project_roots: Optional[Sequence[str]] = None,
library_roots: Optional[Sequence[str]] = None,
):
self._project_roots: List[str] = []
self._library_roots: List[str] = []
self._cache_in_project_roots: Dict[str, bool] = {}
if project_roots is not None:
self._set_project_roots(project_roots)
if library_roots is None:
library_roots = self._get_default_library_roots()
self._set_library_roots(library_roots)
@classmethod
def _get_default_library_roots(cls):
log.debug("Collecting default library roots.")
# Provide sensible defaults if not in env vars.
import site
roots = []
try:
import sysconfig # Python 2.7 onwards only.
except ImportError:
pass
else:
for path_name in set(("stdlib", "platstdlib", "purelib", "platlib")) & set(
sysconfig.get_path_names()
):
roots.append(sysconfig.get_path(path_name))
# Make sure we always get at least the standard library location (based on the `os` and
# `threading` modules -- it's a bit weird that it may be different on the ci, but it happens).
roots.append(os.path.dirname(os.__file__))
roots.append(os.path.dirname(threading.__file__))
if IS_PYPY:
# On PyPy 3.6 (7.3.1) it wrongly says that sysconfig.get_path('stdlib') is
# <install>/lib-pypy when the installed version is <install>/lib_pypy.
try:
import _pypy_wait # type: ignore
except ImportError:
log.debug(
"Unable to import _pypy_wait on PyPy when collecting default library roots."
)
else:
pypy_lib_dir = os.path.dirname(_pypy_wait.__file__)
log.debug("Adding %s to default library roots.", pypy_lib_dir)
roots.append(pypy_lib_dir)
if hasattr(site, "getusersitepackages"):
site_paths = site.getusersitepackages()
if isinstance(site_paths, (list, tuple)):
for site_path in site_paths:
roots.append(site_path)
else:
roots.append(site_paths)
if hasattr(site, "getsitepackages"):
site_paths = site.getsitepackages()
if isinstance(site_paths, (list, tuple)):
for site_path in site_paths:
roots.append(site_path)
else:
roots.append(site_paths)
for path in sys.path:
if os.path.exists(path) and os.path.basename(path) in (
"site-packages",
"pip-global",
):
roots.append(path)
roots.extend([os.path.realpath(path) for path in roots])
return sorted(set(roots))
def _fix_roots(self, roots):
roots = _convert_to_str_and_clear_empty(roots)
new_roots = []
for root in roots:
path = self._absolute_normalized_path(root)
if IS_WINDOWS:
new_roots.append(path + "\\")
else:
new_roots.append(path + "/")
return new_roots
def _absolute_normalized_path(self, filename, cache={}):
"""
Provides a version of the filename that's absolute and normalized.
"""
try:
return cache[filename]
except KeyError:
pass
if filename.startswith("<"):
cache[filename] = normcase(filename)
return cache[filename]
cache[filename] = normcase(os.path.abspath(filename))
return cache[filename]
def _set_project_roots(self, project_roots):
self._project_roots = self._fix_roots(project_roots)
log.debug("IDE_PROJECT_ROOTS %s\n" % project_roots)
def _get_project_roots(self):
return self._project_roots
def _set_library_roots(self, roots):
self._library_roots = self._fix_roots(roots)
log.debug("LIBRARY_ROOTS %s\n" % roots)
def _get_library_roots(self):
return self._library_roots
def _in_project_roots(self, received_filename: str) -> bool:
"""
Note: uncached. Use `in_project_roots`.
"""
DEBUG = False
if received_filename.startswith(LIBRARY_CODE_BASENAMES_STARTING_WITH):
if DEBUG:
log.debug(
"Not in in_project_roots - library basenames - starts with %s (%s)",
received_filename,
LIBRARY_CODE_BASENAMES_STARTING_WITH,
)
return False
project_roots = self._get_project_roots() # roots are absolute/normalized.
absolute_normalized_filename = self._absolute_normalized_path(received_filename)
absolute_normalized_filename_as_dir = absolute_normalized_filename + (
"\\" if IS_WINDOWS else "/"
)
found_in_project = []
for root in project_roots:
if root and (
absolute_normalized_filename.startswith(root)
or root == absolute_normalized_filename_as_dir
):
if DEBUG:
log.debug("In project: %s (%s)", absolute_normalized_filename, root)
found_in_project.append(root)
found_in_library = []
library_roots = self._get_library_roots()
for root in library_roots:
if root and (
absolute_normalized_filename.startswith(root)
or root == absolute_normalized_filename_as_dir
):
found_in_library.append(root)
if DEBUG:
log.debug("In library: %s (%s)", absolute_normalized_filename, root)
else:
if DEBUG:
log.debug(
"Not in library: %s (%s)", absolute_normalized_filename, root
)
if not project_roots:
# If we have no project roots configured, consider it being in the project
# roots if it's not found in site-packages (because we have defaults for those
# and not the other way around).
in_project = not found_in_library
if DEBUG:
log.debug(
"Final in project (no project roots): %s (%s)",
absolute_normalized_filename,
in_project,
)
else:
in_project = False
if found_in_project:
if not found_in_library:
if DEBUG:
log.debug(
"Final in project (in_project and not found_in_library): %s (True)",
absolute_normalized_filename,
)
in_project = True
else:
# Found in both, let's see which one has the bigger path matched.
if max(len(x) for x in found_in_project) > max(
len(x) for x in found_in_library
):
in_project = True
if DEBUG:
log.debug(
"Final in project (found in both): %s (%s)",
absolute_normalized_filename,
in_project,
)
return in_project
def in_project_roots(self, filename: str) -> bool:
try:
return self._cache_in_project_roots[filename]
except KeyError:
pass
in_project_roots = self._in_project_roots(filename)
self._cache_in_project_roots[filename] = in_project_roots
return in_project_roots | /robocorp_logging-0.0.9.tar.gz/robocorp_logging-0.0.9/src/robo_log/_rewrite_filtering.py | 0.580709 | 0.189053 | _rewrite_filtering.py | pypi |
import ast
from functools import partial
import itertools
import sys
from typing import (
Iterator,
Optional,
List,
Tuple,
Generic,
TypeVar,
)
import typing
import ast as ast_module
class _NodesProviderVisitor(ast_module.NodeVisitor):
def __init__(self, on_node=lambda node: None):
ast_module.NodeVisitor.__init__(self)
self._stack = []
self.on_node = on_node
def generic_visit(self, node):
self._stack.append(node)
self.on_node(self._stack, node)
ast_module.NodeVisitor.generic_visit(self, node)
self._stack.pop()
class _PrinterVisitor(ast_module.NodeVisitor):
def __init__(self, stream):
ast_module.NodeVisitor.__init__(self)
self._level = 0
self._stream = stream
def _replace_spacing(self, txt):
curr_len = len(txt)
delta = 80 - curr_len
return txt.replace("*SPACING*", " " * delta)
def generic_visit(self, node):
# Note: prints line and col offsets 0-based (even if the ast is 1-based for
# lines and 0-based for columns).
self._level += 1
try:
indent = " " * self._level
node_lineno = getattr(node, "lineno", -1)
if node_lineno != -1:
# Make 0-based
node_lineno -= 1
node_end_lineno = getattr(node, "end_lineno", -1)
if node_end_lineno != -1:
# Make 0-based
node_end_lineno -= 1
self._stream.write(
self._replace_spacing(
"%s%s *SPACING* (%s, %s) -> (%s, %s)\n"
% (
indent,
node.__class__.__name__,
node_lineno,
getattr(node, "col_offset", -1),
node_end_lineno,
getattr(node, "end_col_offset", -1),
)
)
)
tokens = getattr(node, "tokens", [])
for token in tokens:
token_lineno = token.lineno
if token_lineno != -1:
# Make 0-based
token_lineno -= 1
self._stream.write(
self._replace_spacing(
"%s- %s, '%s' *SPACING* (%s, %s->%s)\n"
% (
indent,
token.type,
token.value.replace("\n", "\\n").replace("\r", "\\r"),
token_lineno,
token.col_offset,
token.end_col_offset,
)
)
)
ast_module.NodeVisitor.generic_visit(self, node)
finally:
self._level -= 1
if sys.version_info[:2] < (3, 8):
class Protocol(object):
pass
else:
from typing import Protocol
class INode(Protocol):
type: str
lineno: int
end_lineno: int
col_offset: int
end_col_offset: int
T = TypeVar("T")
Y = TypeVar("Y", covariant=True)
class NodeInfo(Generic[Y]):
stack: Tuple[INode, ...]
node: Y
__slots__ = ["stack", "node"]
def __init__(self, stack, node):
self.stack = stack
self.node = node
def __str__(self):
return f"NodeInfo({self.node.__class__.__name__})"
__repr__ = __str__
def print_ast(node, stream=None):
if stream is None:
stream = sys.stderr
errors_visitor = _PrinterVisitor(stream)
errors_visitor.visit(node)
if typing.TYPE_CHECKING:
from typing import runtime_checkable, Protocol
@runtime_checkable
class _AST_CLASS(INode, Protocol):
pass
else:
# We know that the AST we're dealing with is the INode.
# We can't use runtime_checkable on Python 3.7 though.
_AST_CLASS = ast_module.AST
def iter_and_replace_nodes(
node, internal_stack: Optional[List[INode]] = None, recursive=True
) -> Iterator[Tuple[List[INode], INode]]:
"""
:note: the yielded stack is actually always the same (mutable) list, so,
clients that want to return it somewhere else should create a copy.
"""
stack: List[INode]
if internal_stack is None:
stack = []
if node.__class__.__name__ != "File":
stack.append(node)
else:
stack = internal_stack
if recursive:
for field, value in ast_module.iter_fields(node):
if isinstance(value, list):
new_value = []
changed = False
for item in value:
if isinstance(item, _AST_CLASS):
new = yield stack, item
if new is not None:
changed = True
new_value.extend(new)
else:
new_value.append(item)
stack.append(item)
yield from iter_and_replace_nodes(item, stack, recursive=True)
stack.pop()
if changed:
setattr(node, field, new_value)
elif isinstance(value, _AST_CLASS):
yield stack, value
stack.append(value)
yield from iter_and_replace_nodes(value, stack, recursive=True)
stack.pop()
else:
# Not recursive
for _field, value in ast_module.iter_fields(node):
if isinstance(value, list):
for item in value:
if isinstance(item, _AST_CLASS):
yield stack, item
elif isinstance(value, _AST_CLASS):
yield stack, value
def copy_line_and_col(from_node, to_node):
to_node.lineno = from_node.lineno
to_node.col_offset = from_node.col_offset
class NodeFactory:
def __init__(self, lineno, col_offset):
self.lineno = lineno
self.col_offset = col_offset
self.next_var_id = partial(next, itertools.count())
def _set_line_col(self, node):
node.lineno = self.lineno
node.col_offset = self.col_offset
return node
def Call(self) -> ast.Call:
call = ast.Call(keywords=[], args=[])
return self._set_line_col(call)
def Assign(self) -> ast.Assign:
assign = ast.Assign()
return self._set_line_col(assign)
def NameLoad(self, name: str) -> ast.Name:
return self._set_line_col(ast.Name(name, ast.Load()))
def NameTempStore(self) -> ast.Name:
name = f"@tmp_{self.next_var_id()}"
return self.NameStore(name)
def NameStore(self, name) -> ast.Name:
return self._set_line_col(ast.Name(name, ast.Store()))
def Attribute(self, name: ast.AST, attr_name: str) -> ast.Attribute:
return self._set_line_col(ast.Attribute(name, attr_name, ast.Load()))
def NameLoadRewriteCallback(self, builtin_name: str) -> ast.Attribute:
ref = self.NameLoad("@robocorp_rewrite_callbacks")
return self._set_line_col(self.Attribute(ref, builtin_name))
def NameLoadRobo(self, builtin_name: str) -> ast.Attribute:
ref = self.NameLoad("@robo_log")
return self._set_line_col(self.Attribute(ref, builtin_name))
def Str(self, s) -> ast.Str:
return self._set_line_col(ast.Str(s))
def If(self, cond) -> ast.If:
return self._set_line_col(ast.If(cond))
def AndExpr(self, expr1, expr2) -> ast.Expr:
andop = self._set_line_col(ast.And())
bool_op = self._set_line_col(ast.BoolOp(op=andop, values=[expr1, expr2]))
return self.Expr(bool_op)
def Expr(self, expr) -> ast.Expr:
return self._set_line_col(ast.Expr(expr))
def Try(self) -> ast.Try:
try_node = ast.Try(handlers=[], orelse=[])
return self._set_line_col(try_node)
def Dict(self) -> ast.Dict:
return self._set_line_col(ast.Dict())
def LineConstant(self) -> ast.Constant:
return self._set_line_col(ast.Constant(self.lineno))
def NoneConstant(self) -> ast.Constant:
return self._set_line_col(ast.Constant("None"))
def ExceptHandler(self) -> ast.ExceptHandler:
return self._set_line_col(ast.ExceptHandler(body=[]))
def Raise(self) -> ast.Raise:
return self._set_line_col(ast.Raise()) | /robocorp_logging-0.0.9.tar.gz/robocorp_logging-0.0.9/src/robo_log/_ast_utils.py | 0.49292 | 0.187263 | _ast_utils.py | pypi |
from typing import Optional, Sequence, Dict
from collections import namedtuple
import enum
import robo_log
_ROBO_LOG_MODULE_NAME = robo_log.__name__
# Examples:
# Filter("mymodule.ignore", kind="exclude")
# Filter("mymodule.rpa", kind="full_log")
# Filter("RPA", kind="log_on_project_call")
Filter = namedtuple("Filter", "name, kind")
class FilterKind(enum.Enum):
# Note: the values are the name which appears in the .pyc cache.
full_log = "full"
log_on_project_call = "call"
exclude = "exc"
class BaseConfig:
def get_log_rewrite_assigns(self) -> bool:
"""
Returns:
Whether assigns should be tracked. When assigns are tracked,
any assign in a module which has mapped to `FilterKind.full_log`
will be logged.
"""
return True
def get_filter_kind_by_module_name(self, module_name: str) -> Optional[FilterKind]:
"""
Args:
module_name: the name of the module to check.
Returns:
The filter kind or None if the filter kind couldn't be discovered
just with the module name.
"""
raise NotImplementedError()
def get_filter_kind_by_module_name_and_path(
self, module_name: str, filename: str
) -> FilterKind:
"""
Args:
module_name: the name of the module to check.
Returns:
The filter kind to be applied.
"""
raise NotImplementedError()
def set_as_global(self):
"""
May be used to set this config as the global one to determine if a
given file is in the project or not.
"""
def get_rewrite_assigns(self) -> bool:
"""
Returns:
True if assign statements should be rewritten so that assigns
appear in the log and False otherwise.
"""
raise NotImplementedError()
class ConfigFilesFiltering(BaseConfig):
"""
A configuration in which modules are rewritten if they are considered "project" modules.
If no arguments are passed, python is queried for the paths that are "library" paths
and "project" paths are all that aren't inside the "library" paths.
If "project_roots" is passed, then any file inside one of those folders is considered
to be a file to be rewritten.
"""
def __init__(
self,
filters: Sequence[Filter] = (),
):
self._filters = filters
self._cache_modname_to_kind: Dict[str, Optional[FilterKind]] = {}
self._cache_filename_to_kind: Dict[str, FilterKind] = {}
def get_filter_kind_by_module_name(self, module_name: str) -> Optional[FilterKind]:
if module_name.startswith(_ROBO_LOG_MODULE_NAME):
# We can't rewrite our own modules (we could end up recursing).
if "check" in module_name:
# Exception just for testing.
return FilterKind.full_log
return FilterKind.exclude
return self._get_modname_filter_kind(module_name)
def get_filter_kind_by_module_name_and_path(
self, module_name: str, filename: str
) -> FilterKind:
return self._get_modname_or_file_filter_kind(filename, module_name)
# --- Internal APIs
def _compute_filter_kind(self, module_name: str) -> Optional[FilterKind]:
"""
:return: True if it should be excluded, False if it should be included and None
if no rule matched the given file.
"""
for exclude_filter in self._filters:
if exclude_filter.name == module_name or module_name.startswith(
exclude_filter.name + "."
):
return exclude_filter.kind
return None
def _get_modname_filter_kind(self, module_name: str) -> Optional[FilterKind]:
cache_key = module_name
try:
return self._cache_modname_to_kind[cache_key]
except KeyError:
pass
filter_kind = self._compute_filter_kind(module_name)
self._cache_modname_to_kind[cache_key] = filter_kind
return filter_kind
def _get_modname_or_file_filter_kind(
self, filename: str, module_name: str
) -> FilterKind:
filter_kind = self._get_modname_filter_kind(module_name)
if filter_kind is not None:
return filter_kind
absolute_filename = robo_log._files_filtering._absolute_normalized_path(
filename
)
cache_key = absolute_filename
try:
return self._cache_filename_to_kind[cache_key]
except KeyError:
pass
exclude = not robo_log._in_project_roots(absolute_filename)
if exclude:
filter_kind = FilterKind.exclude
else:
filter_kind = FilterKind.full_log
self._cache_filename_to_kind[cache_key] = filter_kind
return filter_kind | /robocorp_logging-0.0.9.tar.gz/robocorp_logging-0.0.9/src/robo_log/_config.py | 0.894156 | 0.273933 | _config.py | pypi |
import json
import logging
import mimetypes
import os
from functools import lru_cache
from pathlib import Path
from typing import TYPE_CHECKING, Optional, Union
from ._client import AssetNotFound, AssetUploadFailed
if TYPE_CHECKING:
from ._client import AssetsClient
from ._requests import Response
__version__ = "0.3.2"
version_info = [int(x) for x in __version__.split(".")]
JSON = Union[dict[str, "JSON"], list["JSON"], str, int, float, bool, None]
LOGGER = logging.getLogger(__name__)
# Known (additional) mimetypes from file extensions
KNOWN_MIMETYPES = [
("text/x-yaml", ".yml"),
("text/x-yaml", ".yaml"),
]
@lru_cache(maxsize=1)
def _get_client() -> "AssetsClient":
"""
Creates and returns an Asset Storage API client based on the injected
environment variables from Control Room (or RCC).
"""
from ._client import AssetsClient
from ._environment import get_endpoint, get_token, get_workspace
workspace = get_workspace()
endpoint = get_endpoint()
token = get_token()
return AssetsClient(workspace, endpoint, token)
def list_assets() -> list[str]:
"""List all the existing assets.
Returns:
A list of available assets' names
"""
return [asset["name"] for asset in _get_client().list_assets()]
def delete_asset(name: str):
"""Delete an asset by providing its `name`.
This operation cannot be undone.
Args:
name: Asset to delete
Raises:
AssetNotFound: Asset with the given name does not exist
"""
LOGGER.info("Deleting asset: %s", name)
client = _get_client()
client.delete_asset(asset_id=f"name:{name}")
def _get_asset(name: str) -> "Response":
"""Get an asset's payload URL."""
from ._requests import Requests
client = _get_client()
details = client.get_asset(asset_id=f"name:{name}")
payload = details["payload"]
if payload["type"] == "url":
url = payload["url"]
return Requests().get(url)
if payload["type"] == "empty":
raise ValueError(
f"Asset {details['name']!r} is empty."
+ " It could mean an upload is still pending"
+ " or has previously failed."
)
else:
# Note (2023-07-04):
# The 'data' payload type should only be used when uploading,
# and it should never be in the response when getting an asset.
raise RuntimeError(f"Unsupported payload type: {payload['type']}")
def get_text(name: str) -> str:
"""Return the given asset as text.
Arguments:
name: Name of asset
Returns:
Asset content as text
Raises:
AssetNotFound: No asset defined with given name
"""
response = _get_asset(name)
return response.text
def get_json(name: str, **kwargs) -> JSON:
"""Return the given asset as a deserialized JSON object.
Arguments:
name: Name of asset
**kwargs: Additional parameters for `json.loads`
Returns:
Asset content as a Python object (dict, list etc.)
Raises:
AssetNotFound: No asset defined with given name
JSONDecodeError: Asset was not valid JSON
"""
response = _get_asset(name)
return response.json(**kwargs)
def get_file(name: str, path: Union[os.PathLike, str], exist_ok=False) -> Path:
"""Fetch the given asset and store it in a file.
Arguments:
name: Name of asset
path: Destination path for downloaded file
exist_ok: Overwrite file if it already exists
Returns:
Path to created file
Raises:
AssetNotFound: No asset defined with given name
FileExistsError: Destination already exists
"""
response = _get_asset(name)
path = Path(path).absolute()
if path.exists() and not exist_ok:
raise FileExistsError(f"File already exists: {path}")
path.write_bytes(response.content)
return path
def get_bytes(name: str) -> bytes:
"""Return the given asset as bytes.
Arguments:
name: Name of asset
Returns:
Asset content as bytes
Raises:
AssetNotFound: No asset defined with given name
"""
response = _get_asset(name)
return response.content
def _set_asset(name: str, content: bytes, content_type: str, wait: bool):
"""Upload asset content, and create asset if it doesn't already exist."""
client = _get_client()
try:
details = client.get_asset(asset_id=f"name:{name}")
LOGGER.debug("Updating existing asset with id: %s", details["id"])
except AssetNotFound:
details = client.create_asset(name=name)
LOGGER.debug("Created new asset with id: %s", details["id"])
LOGGER.info("Uploading asset %r (content-type: %s)", name, content_type)
client.upload_asset(details["id"], content, content_type, wait)
def set_text(name: str, text: str, wait: bool = True):
"""Create or update an asset to contain the given string.
Arguments:
name: Name of asset
text: Text content for asset
wait: Wait for asset to update
"""
content = text.encode("utf-8")
content_type = "text/plain"
_set_asset(name, content, content_type, wait)
def set_json(name: str, value: JSON, wait: bool = True, **kwargs):
"""Create or update an asset to contain the given object, serialized as JSON.
Arguments:
name: Name of asset
value: Value for asset, e.g. dict or list
wait: Wait for asset to update
**kwargs: Additional arguments for `json.dumps`
"""
content = json.dumps(value, **kwargs).encode("utf-8")
content_type = "application/json"
_set_asset(name, content, content_type, wait)
def set_file(
name: str,
path: Union[os.PathLike, str],
content_type: Optional[str] = None,
wait: bool = True,
):
"""Create or update an asset to contain the contents of the given file.
Arguments:
name: Name of asset
path: Path to file
content_type: Content type (or mimetype) of file, detected automatically
from file extension if not defined
wait: Wait for asset to update
"""
if content_type is None:
for type_, ext in KNOWN_MIMETYPES:
mimetypes.add_type(type_, ext)
content_type, _ = mimetypes.guess_type(path)
if content_type is not None:
LOGGER.info("Detected content type %r", content_type)
else:
content_type = "application/octet-stream"
LOGGER.info("Unable to detect content type, using %r", content_type)
content = Path(path).read_bytes()
_set_asset(name, content, content_type, wait)
def set_bytes(
name: str,
data: bytes,
content_type="application/octet-stream",
wait: bool = True,
):
"""Create or update an asset to contain the given bytes.
Arguments:
name: Name of asset
data: Raw content
content_type: Content type (or mimetype) of asset
wait: Wait for asset to update
"""
_set_asset(name, data, content_type, wait)
__all__ = [
"AssetNotFound",
"AssetUploadFailed",
"list_assets",
"delete_asset",
"get_text",
"get_json",
"get_file",
"get_bytes",
"set_text",
"set_json",
"set_file",
"set_bytes",
] | /robocorp_storage-0.3.2-py3-none-any.whl/robocorp/storage/__init__.py | 0.881405 | 0.157979 | __init__.py | pypi |
import typing
from pathlib import Path
from types import TracebackType
from typing import Any, Callable, Optional, Sequence, Set, TypeVar, Union
ExcInfo = tuple[type[BaseException], BaseException, TracebackType]
OptExcInfo = Union[ExcInfo, tuple[None, None, None]]
T = TypeVar("T")
Y = TypeVar("Y", covariant=True)
def check_implements(x: T) -> T:
"""
Helper to check if a class implements some protocol.
:important: It must be the last method in a class due to
https://github.com/python/mypy/issues/9266
Example:
def __typecheckself__(self) -> None:
_: IExpectedProtocol = check_implements(self)
Mypy should complain if `self` is not implementing the IExpectedProtocol.
"""
return x
# Note: this is a bit messy as we're mixing task states with log levels.
# Note2: This is for the log.html and not really for user APIs.
class Status:
NOT_RUN = "NOT_RUN" # Initial status for a task which is not run.
PASS = "PASS" # Used for task pass
FAIL = "FAIL" # Used for task failure
ERROR = "ERROR" # log.critical
INFO = "INFO" # log.info
WARN = "WARN" # log.warn
DEBUG = "DEBUG" # log.debug
class ITask(typing.Protocol):
module_name: str
filename: str
method: typing.Callable
status: str
message: str
exc_info: Optional[OptExcInfo]
@property
def name(self) -> str:
pass
@property
def lineno(self) -> int:
pass
def run(self) -> None:
pass
@property
def failed(self) -> bool:
"""
Returns true if the task failed.
(in which case usually exc_info is not None).
"""
class IContextErrorReport(typing.Protocol):
def show_error(self, message):
pass
class ICallback(typing.Protocol):
"""
Note: the actual __call__ must be defined in a subclass protocol.
"""
def register(self, callback):
pass
def unregister(self, callback):
pass
class IAutoUnregisterContextManager(typing.Protocol):
def __enter__(self):
pass
def __exit__(self, exc_type, exc_val, exc_tb):
pass
class IOnTaskFuncFoundCallback(ICallback, typing.Protocol):
def __call__(self, func: Callable):
pass
def register(self, callback: Callable) -> IAutoUnregisterContextManager:
pass
def unregister(self, callback: Callable) -> None:
pass
class IBeforeCollectTasksCallback(ICallback, typing.Protocol):
def __call__(self, path: Path, task_names: Set[str]):
pass
def register(
self, callback: Callable[[Path, Set[str]], Any]
) -> IAutoUnregisterContextManager:
pass
def unregister(self, callback: Callable[[Path, Set[str]], Any]) -> None:
pass
class IBeforeTaskRunCallback(ICallback, typing.Protocol):
def __call__(self, task: ITask):
pass
def register(
self, callback: Callable[[ITask], Any]
) -> IAutoUnregisterContextManager:
pass
def unregister(self, callback: Callable[[ITask], Any]) -> None:
pass
class IBeforeAllTasksRunCallback(ICallback, typing.Protocol):
def __call__(self, tasks: Sequence[ITask]):
pass
def register(
self, callback: Callable[[Sequence[ITask]], Any]
) -> IAutoUnregisterContextManager:
pass
def unregister(self, callback: Callable[[Sequence[ITask]], Any]) -> None:
pass
class IAfterAllTasksRunCallback(ICallback, typing.Protocol):
def __call__(self, tasks: Sequence[ITask]):
pass
def register(
self, callback: Callable[[Sequence[ITask]], Any]
) -> IAutoUnregisterContextManager:
pass
def unregister(self, callback: Callable[[Sequence[ITask]], Any]) -> None:
pass
class IAfterTaskRunCallback(ICallback, typing.Protocol):
def __call__(self, task: ITask):
pass
def register(
self, callback: Callable[[ITask], Any]
) -> IAutoUnregisterContextManager:
pass
def unregister(self, callback: Callable[[ITask], Any]) -> None:
pass | /robocorp_tasks-2.1.3-py3-none-any.whl/robocorp/tasks/_protocols.py | 0.833866 | 0.224778 | _protocols.py | pypi |
import json
import os
import sys
import threading
import traceback
from pathlib import Path
from typing import List, Sequence, Union
from ._argdispatch import arg_dispatch as _arg_dispatch
# Note: the args must match the 'dest' on the configured argparser.
@_arg_dispatch.register(name="list")
def list_tasks(
path: str,
) -> int:
"""
Prints the tasks available at a given path to the stdout in json format.
[
{
"name": "task_name",
"line": 10,
"file": "/usr/code/projects/tasks.py",
"docs": "Task docstring",
},
...
]
Args:
path: The path (file or directory) from where tasks should be collected.
"""
from contextlib import redirect_stdout
from robocorp.tasks._collect_tasks import collect_tasks
from robocorp.tasks._protocols import ITask
from robocorp.tasks._task import Context
p = Path(path)
context = Context()
if not p.exists():
context.show_error(f"Path: {path} does not exist")
return 1
original_stdout = sys.stdout
with redirect_stdout(sys.stderr):
task: ITask
tasks_found = []
for task in collect_tasks(p):
tasks_found.append(
{
"name": task.name,
"line": task.lineno,
"file": task.filename,
"docs": getattr(task.method, "__doc__") or "",
}
)
original_stdout.write(json.dumps(tasks_found))
original_stdout.flush()
return 0
# Note: the args must match the 'dest' on the configured argparser.
@_arg_dispatch.register()
def run(
output_dir: str,
path: str,
task_name: Union[Sequence[str], str, None],
max_log_files: int = 5,
max_log_file_size: str = "1MB",
console_colors: str = "auto",
log_output_to_stdout: str = "",
no_status_rc: bool = False,
) -> int:
"""
Runs a task.
Args:
output_dir: The directory where output should be put.
path: The path (file or directory where the tasks should be collected from.
task_name: The name(s) of the task to run.
max_log_files: The maximum number of log files to be created (if more would
be needed the oldest one is deleted).
max_log_file_size: The maximum size for the created log files.
console_colors:
"auto": uses the default console
"plain": disables colors
"ansi": forces ansi color mode
log_output_to_stdout:
"": query the RC_LOG_OUTPUT_STDOUT value.
"no": don't provide log output to the stdout.
"json": provide json output to the stdout.
no_status_rc:
Set to True so that if running tasks has an error inside the task
the return code of the process is 0.
Returns:
0 if everything went well.
1 if there was some error running the task.
"""
from robocorp.log import console, redirect
from ._collect_tasks import collect_tasks
from ._config import RunConfig, set_config
from ._exceptions import RobocorpTasksCollectError
from ._hooks import (
after_all_tasks_run,
after_task_run,
before_all_tasks_run,
before_task_run,
)
from ._log_auto_setup import setup_cli_auto_logging
from ._log_output_setup import setup_log_output, setup_log_output_to_port
from ._protocols import ITask, Status
from ._task import Context, set_current_task
from robocorp.log.pyproject_config import read_pyproject_toml
from robocorp.log.pyproject_config import read_robocorp_auto_log_config
console.set_mode(console_colors)
# Don't show internal machinery on tracebacks:
# setting __tracebackhide__ will make it so that robocorp-log
# won't show this frame onwards in the logging.
__tracebackhide__ = 1
p = Path(path).absolute()
context = Context()
if not p.exists():
context.show_error(f"Path: {path} does not exist")
return 1
# Enable faulthandler (writing to sys.stderr) early on in the
# task execution process.
import faulthandler
faulthandler.enable()
from robocorp import log
task_names: Sequence[str]
if not task_name:
task_names = []
task_or_tasks = "tasks"
elif isinstance(task_name, str):
task_names = [task_name]
task_or_tasks = "task"
else:
task_names = task_name
task_name = ", ".join(str(x) for x in task_names)
task_or_tasks = "task" if len(task_names) == 1 else "tasks"
config: log.AutoLogConfigBase
pyproject_path_and_contents = read_pyproject_toml(p)
pyproject_toml_contents: dict
if pyproject_path_and_contents is None:
config = log.DefaultAutoLogConfig()
pyproject_toml_contents = {}
else:
config = read_robocorp_auto_log_config(context, pyproject_path_and_contents)
pyproject_toml_contents = pyproject_path_and_contents.toml_contents
run_config = RunConfig(
Path(output_dir),
p,
task_names,
max_log_files,
max_log_file_size,
console_colors,
log_output_to_stdout,
no_status_rc,
pyproject_toml_contents,
)
with set_config(run_config), setup_cli_auto_logging(
# Note: we can't customize what's a "project" file or a "library" file, right now
# the customizations are all based on module names.
config
), redirect.setup_stdout_logging(log_output_to_stdout), setup_log_output(
output_dir=Path(output_dir),
max_files=max_log_files,
max_file_size=max_log_file_size,
), setup_log_output_to_port(), context.register_lifecycle_prints():
run_status = "PASS"
setup_message = ""
run_name = os.path.basename(p)
if task_name:
run_name += f" - {task_name}"
log.start_run(run_name)
try:
log.start_task("Collect tasks", "setup", "", 0)
try:
if not task_name:
context.show(f"\nCollecting tasks from: {path}")
else:
context.show(
f"\nCollecting {task_or_tasks} {task_name} from: {path}"
)
tasks: List[ITask] = list(collect_tasks(p, task_names))
if not tasks:
raise RobocorpTasksCollectError(
f"Did not find any tasks in: {path}"
)
except Exception as e:
run_status = "ERROR"
setup_message = str(e)
log.exception()
if not isinstance(e, RobocorpTasksCollectError):
traceback.print_exc()
else:
context.show_error(setup_message)
return 1
finally:
log.end_task("Collect tasks", "setup", run_status, setup_message)
returncode = 0
before_all_tasks_run(tasks)
try:
for task in tasks:
set_current_task(task)
before_task_run(task)
try:
task.run()
run_status = task.status = Status.PASS
except Exception as e:
run_status = task.status = Status.ERROR
if not no_status_rc:
returncode = 1
task.message = str(e)
task.exc_info = sys.exc_info()
finally:
after_task_run(task)
set_current_task(None)
finally:
log.start_task("Teardown tasks", "teardown", "", 0)
try:
after_all_tasks_run(tasks)
# Always do a process snapshot as the process is about to finish.
log.process_snapshot()
finally:
log.end_task("Teardown tasks", "teardown", Status.PASS, "")
return returncode
finally:
log.end_run(run_name, run_status)
# After the run is finished, start a timer which will print the
# current threads if the process doesn't exit after a given timeout.
from threading import Timer
var_name = "RC_DUMP_THREADS_AFTER_RUN_TIMEOUT"
try:
timeout = float(os.environ.get(var_name, "40"))
except Exception:
sys.stderr.write(
f"Invalid value for: {var_name} environment value. Cannot convert to float."
)
timeout = 40
def on_timeout():
_dump_threads(
message=f"All tasks have run but the process still hasn't exited after {timeout} seconds. Showing threads found:"
)
t = Timer(timeout, on_timeout)
t.daemon = True
t.start()
def _dump_threads(stream=None, message="Threads found"):
if stream is None:
stream = sys.stderr
thread_id_to_name = {}
try:
for t in threading.enumerate():
thread_id_to_name[t.ident] = "%s (daemon: %s)" % (t.name, t.daemon)
except Exception:
pass
stack_trace = [
"===============================================================================",
message,
"================================= Thread Dump =================================",
]
for thread_id, stack in sys._current_frames().items():
stack_trace.append(
"\n-------------------------------------------------------------------------------"
)
stack_trace.append(" Thread %s" % thread_id_to_name.get(thread_id, thread_id))
stack_trace.append("")
if "self" in stack.f_locals:
sys.stderr.write(str(stack.f_locals["self"]) + "\n")
for filename, lineno, name, line in traceback.extract_stack(stack):
stack_trace.append(' File "%s", line %d, in %s' % (filename, lineno, name))
if line:
stack_trace.append(" %s" % (line.strip()))
stack_trace.append(
"\n=============================== END Thread Dump ==============================="
)
stream.write("\n".join(stack_trace)) | /robocorp_tasks-2.1.3-py3-none-any.whl/robocorp/tasks/_commands.py | 0.641535 | 0.209187 | _commands.py | pypi |
from logging import getLogger
from typing import List
log = getLogger(__name__)
class _ArgDispatcher:
def __init__(self):
self._name_to_func = {}
self.argparser = self._create_argparser()
def _dispatch(self, parsed) -> int:
if not parsed.command:
self.argparser.print_help()
return 1
method = self._name_to_func[parsed.command]
dct = parsed.__dict__.copy()
dct.pop("command")
return method(**dct)
def register(self, name=None):
def do_register(func):
nonlocal name
if not name:
name = func.__code__.co_name
self._name_to_func[name] = func
return func
return do_register
def _create_argparser(self):
import argparse
parser = argparse.ArgumentParser(
prog="robocorp.tasks",
description="Robocorp framework for RPA development using Python.",
epilog="View https://github.com/robocorp/robo for more information",
)
subparsers = parser.add_subparsers(dest="command")
# Run
run_parser = subparsers.add_parser(
"run",
help="run will collect tasks with the @task decorator and run the first that matches based on the task name filter.",
)
run_parser.add_argument(
dest="path",
help="The directory or file with the tasks to run.",
nargs="?",
default=".",
)
run_parser.add_argument(
"-t",
"--task",
dest="task_name",
help="The name of the task that should be run.",
action="append",
)
run_parser.add_argument(
"-o",
"--output-dir",
dest="output_dir",
help="The directory where the logging output files will be stored.",
default="./output",
)
run_parser.add_argument(
"--max-log-files",
dest="max_log_files",
type=int,
help="The maximum number of output files to store the logs.",
default=5,
)
run_parser.add_argument(
"--max-log-file-size",
dest="max_log_file_size",
help="The maximum size for the log files (i.e.: 1MB, 500kb).",
default="1MB",
)
run_parser.add_argument(
"--console-colors",
help="""Define how the console messages shown should be color encoded.
"auto" (default) will color either using the windows API or the ansi color codes.
"plain" will disable console coloring.
"ansi" will force the console coloring to use ansi color codes.
""",
dest="console_colors",
type=str,
choices=["auto", "plain", "ansi"],
default="auto",
)
run_parser.add_argument(
"--log-output-to-stdout",
help="Can be used so that log messages are also sent to the 'stdout' (if not specified the RC_LOG_OUTPUT_STDOUT is also queried).",
dest="log_output_to_stdout",
type=str,
choices=["no", "json"],
default="",
)
run_parser.add_argument(
"--no-status-rc",
help="When set, if running tasks has an error inside the task the return code of the process is 0.",
dest="no_status_rc",
action="store_true",
)
# List tasks
list_parser = subparsers.add_parser(
"list",
help="Provides output to stdout with json contents of the tasks available.",
)
list_parser.add_argument(
dest="path",
help="The directory or file from where the tasks should be listed (default is the current directory).",
nargs="?",
default=".",
)
return parser
def process_args(self, args: List[str]) -> int:
"""
Processes the arguments and return the returncode.
"""
log.debug("Processing args: %s", " ".join(args))
parser = self._create_argparser()
try:
parsed = parser.parse_args(args)
except SystemExit as e:
code = e.code
if isinstance(code, int):
return code
if code is None:
return 0
return int(code)
return self._dispatch(parsed)
arg_dispatch = _ArgDispatcher() | /robocorp_tasks-2.1.3-py3-none-any.whl/robocorp/tasks/_argdispatch.py | 0.837487 | 0.195748 | _argdispatch.py | pypi |
from pathlib import Path
from typing import Optional
from ._protocols import ITask
__version__ = "2.1.3"
version_info = [int(x) for x in __version__.split(".")]
def task(func):
"""
Decorator for tasks (entry points) which can be executed by `robocorp.tasks`.
i.e.:
If a file such as tasks.py has the contents below:
..
from robocorp.tasks import task
@task
def enter_user():
...
It'll be executable by robocorp tasks as:
python -m robocorp.tasks run tasks.py -t enter_user
Args:
func: A function which is a task to `robocorp.tasks`.
"""
from . import _hooks
# When a task is found, register it in the framework as a target for execution.
_hooks.on_task_func_found(func)
return func
def session_cache(func):
"""
Provides decorator which caches return and clears automatically when all tasks have been run.
A decorator which automatically cache the result of the given function and
will return it on any new invocation until robocorp-tasks finishes running
all tasks.
The function may be either a generator with a single yield (so, the first
yielded value will be returned and when the cache is released the generator
will be resumed) or a function returning some value.
Args:
func: wrapped function.
"""
from ._hooks import after_all_tasks_run
from ._lifecycle import _cache
return _cache(after_all_tasks_run, func)
def task_cache(func):
"""
Provides decorator which caches return and clears it automatically when the current task has been run.
A decorator which automatically cache the result of the given function and
will return it on any new invocation until robocorp-tasks finishes running
the current task.
The function may be either a generator with a single yield (so, the first
yielded value will be returned and when the cache is released the generator
will be resumed) or a function returning some value.
Args:
func: wrapped function.
"""
from ._hooks import after_task_run
from ._lifecycle import _cache
return _cache(after_task_run, func)
def get_output_dir() -> Optional[Path]:
"""
Provide the output directory being used for the run or None if there's no output dir configured.
"""
from ._config import get_config
config = get_config()
if config is None:
return None
return config.output_dir
def get_current_task() -> Optional[ITask]:
"""
Provides the task which is being currently run or None if not currently running a task.
"""
from . import _task
return _task.get_current_task()
__all__ = ["task", "session_cache", "task_cache", "get_output_dir", "get_current_task"] | /robocorp_tasks-2.1.3-py3-none-any.whl/robocorp/tasks/__init__.py | 0.891793 | 0.478773 | __init__.py | pypi |
from pathlib import Path
from typing import Optional, Sequence
class RunConfig:
def __init__(
self,
output_dir: Path,
path: Path,
task_names: Sequence[str],
max_log_files: int,
max_log_file_size: str,
console_colors: str,
log_output_to_stdout: str,
no_status_rc: bool,
pyproject_contents: dict,
):
"""
Args:
output_dir: The directory where output should be put.
path: The path (file or directory where the tasks should be collected from.
task_name: The name(s) of the task(s) to run.
max_log_files: The maximum number of log files to be created (if more would
be needed the oldest one is deleted).
max_log_file_size: The maximum size for the created log files.
console_colors:
"auto": uses the default console
"plain": disables colors
"ansi": forces ansi color mode
log_output_to_stdout:
"": query the RC_LOG_OUTPUT_STDOUT value.
"no": don't provide log output to the stdout.
"json": provide json output to the stdout.
no_status_rc:
Set to True so that if running tasks has an error inside the task
the return code of the process is 0.
pyproject_contents:
The contents loaded from pyproject.toml.
"""
self.output_dir = output_dir
self.path = path
self.task_names = task_names
self.max_log_files = max_log_files
self.max_log_file_size = max_log_file_size
self.console_colors = console_colors
self.log_output_to_stdout = log_output_to_stdout
self.no_status_rc = no_status_rc
self.pyproject_contents = pyproject_contents
class _GlobalConfig:
instance: Optional[RunConfig] = None
def set_config(config: Optional[RunConfig]):
from ._callback import OnExitContextManager
_GlobalConfig.instance = config
def on_exit():
_GlobalConfig.instance = None
return OnExitContextManager(on_exit)
def get_config() -> Optional[RunConfig]:
return _GlobalConfig.instance | /robocorp_tasks-2.1.3-py3-none-any.whl/robocorp/tasks/_config.py | 0.859413 | 0.236797 | _config.py | pypi |
import base64
import binascii
import copy
import json
import logging
import os
from abc import ABCMeta, abstractmethod
from typing import Callable, Tuple
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from ._errors import RobocorpVaultError
from ._requests import HTTPError, Requests
from ._secrets import SecretContainer
from ._utils import required_env, resolve_path, url_join
class BaseSecretManager(metaclass=ABCMeta):
"""Abstract class for secrets management.
Should be used as a base-class for any adapter implementation.
"""
@abstractmethod
def get_secret(self, secret_name) -> SecretContainer:
"""Return ``SecretContainer`` object with given name."""
@abstractmethod
def set_secret(self, secret: SecretContainer):
"""Set a secret with a new value."""
class FileSecrets(BaseSecretManager):
"""Adapter for secrets stored in a database file.
Supports only plaintext secrets, and should be used mainly for debugging.
The path to the secrets file can be set with the
environment variable ``RC_VAULT_SECRETS_FILE``, or as
an argument to the library.
The format of the secrets file should be one of the following:
.. code-block:: JSON
{
"name1": {
"key1": "value1",
"key2": "value2"
},
"name2": {
"key1": "value1"
}
}
OR
.. code-block:: YAML
name1:
key1: value1
key2: value2
name2:
key1: value1
"""
def __init__(self, secret_file="secrets.json"):
self.logger = logging.getLogger(__name__)
if "RC_VAULT_SECRETS_FILE" in os.environ:
path = required_env("RC_VAULT_SECRETS_FILE", secret_file)
elif "RPA_SECRET_FILE" in os.environ: # Backward-compatibility
path = required_env("RPA_SECRET_FILE", secret_file)
else:
path = secret_file
self.logger.info("Resolving path: %s", path)
self.path = resolve_path(path)
extension = self.path.suffix
self._loader, self._dumper = self._get_serializer(extension)
def _get_serializer(self, extension: str) -> Tuple[Callable, Callable]:
if extension == ".json":
return (json.load, json.dump)
if extension == ".yaml":
import yaml
return (yaml.safe_load, yaml.dump)
# NOTE(cmin764): This will raise instead of returning an empty secrets object
# because it is wrong starting from the "env.json" configuration level.
raise ValueError(
f"Local vault secrets file extension {extension!r} not supported."
)
def _load(self):
"""Load secrets file."""
with open(self.path, encoding="utf-8") as fd:
data = self._loader(fd)
if not isinstance(data, dict):
raise ValueError(f"Expected dictionary, was '{type(data)}'")
return data
def _save(self, data):
"""Save the secrets content to disk."""
if not isinstance(data, dict):
raise ValueError(f"Expected dictionary, was '{type(data)}'")
with open(self.path, "w", encoding="utf-8") as f:
self._dumper(data, f, indent=4)
def get_secret(self, secret_name: str) -> SecretContainer:
"""Get secret defined with given name from file.
Args:
secret_name: Name of secret to fetch
Returns:
SecretContainer: SecretContainer object
Raises:
KeyError: No secret with given name
"""
values = self._load().get(secret_name)
if values is None:
raise KeyError(f"Undefined secret: {secret_name}")
return SecretContainer(secret_name, "", values)
def set_secret(self, secret: SecretContainer) -> None:
"""Set the secret value in the local Vault with the given
``SecretContainer`` object.
Args:
secret: A ``SecretContainer`` object.
Raises:
IOError: Writing the local vault failed.
ValueError: Writing the local vault failed.
"""
data = self._load()
data[secret.name] = dict(secret)
self._save(data)
class RobocorpVault(BaseSecretManager):
"""Adapter for secrets stored in Robocorp Vault.
The following environment variables should exist:
- RC_API_SECRET_HOST: URL to Robocorp Secret API
- RC_API_SECRET_TOKEN: API token with access to Robocorp Secret API
- RC_WORKSPACE_ID: Robocorp Workspace ID
If the robot run is started from the Robocorp Control Room these environment
variables will be configured automatically.
"""
ENCRYPTION_SCHEME = "robocloud-vault-transit-v2"
def __init__(self, *args, **kwargs):
# pylint: disable=unused-argument
self.logger = logging.getLogger(__name__)
# Environment variables set by runner
host = required_env("RC_API_SECRET_HOST")
token = required_env("RC_API_SECRET_TOKEN")
workspace = required_env("RC_WORKSPACE_ID")
headers = {"Authorization": f"Bearer {token}"}
endpoint = url_join(
host,
"secrets-v1",
"workspaces",
workspace,
"secrets",
"", # Ensure endpoint has trailing slash
)
self._client = Requests(route_prefix=endpoint, default_headers=headers)
# Generated lazily on request
self.__private_key = None
self.__public_bytes = None
@property
def _private_key(self):
"""Cryptography private key object."""
if self.__private_key is None:
self.__private_key = rsa.generate_private_key(
public_exponent=65537, key_size=4096, backend=default_backend()
)
return self.__private_key
@property
def _public_bytes(self):
"""Serialized public key bytes."""
if self.__public_bytes is None:
self.__public_bytes = base64.b64encode(
self._private_key.public_key().public_bytes(
encoding=serialization.Encoding.DER,
format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
)
return self.__public_bytes
def get_secret(self, secret_name: str) -> SecretContainer:
"""Get secret defined with given name from Robocorp Vault.
Args:
secret_name: Name of secret to fetch
Returns:
SecretContainer object
Raises:
RobocorpVaultError: Error with API request or response payload
"""
params = {
"encryptionScheme": self.ENCRYPTION_SCHEME,
"publicKey": self._public_bytes,
}
try:
response = self._client.get(secret_name, params=params)
payload = response.json()
payload = self._decrypt_payload(payload)
return SecretContainer(
name=payload["name"],
description=payload["description"],
values=payload["values"],
)
except InvalidTag as err:
raise RobocorpVaultError("Failed to validate authentication tag") from err
except HTTPError as err:
if err.status_code == 403:
message = "Not authorized to read secret. Is your token valid?"
elif err.status_code == 404:
message = f"No secret with name '{secret_name}' available"
else:
message = str(err)
raise RobocorpVaultError(message) from err
except Exception as err:
raise RobocorpVaultError(str(err)) from err
def _decrypt_payload(self, payload):
payload = copy.deepcopy(payload)
fields = payload.pop("encryption", None)
if fields is None:
raise KeyError("Missing encryption fields from response")
scheme = fields["encryptionScheme"]
if scheme != self.ENCRYPTION_SCHEME:
raise ValueError(f"Unexpected encryption scheme: {scheme}")
aes_enc = base64.b64decode(fields["encryptedAES"])
aes_tag = base64.b64decode(fields["authTag"])
aes_iv = base64.b64decode(fields["iv"])
# Decrypt AES key using our private key
aes_key = self._private_key.decrypt(
aes_enc,
padding.OAEP(
mgf=padding.MGF1(algorithm=hashes.SHA256()),
algorithm=hashes.SHA256(),
label=None,
),
)
# Decrypt actual value using decrypted AES key
ciphertext = base64.b64decode(payload.pop("value")) + aes_tag
data = AESGCM(aes_key).decrypt(binascii.hexlify(aes_iv), ciphertext, b"")
payload["values"] = json.loads(data)
return payload
def set_secret(self, secret: SecretContainer) -> None:
"""Set the secret value in the Vault.
Note:
The secret possibly consists of multiple key-value pairs,
which will all be overwritten with the values given here.
So don't try to update only one item of the secret, update all of them.
Args:
secret: A ``SecretContainer`` object
"""
value, aes_iv, aes_key, aes_tag = self._encrypt_secret_value_with_aes(secret)
pub_key = self.get_publickey()
aes_enc = self._encrypt_aes_key_with_public_rsa(aes_key, pub_key)
payload = {
"encryption": {
"authTag": aes_tag.decode(),
"encryptedAES": aes_enc.decode(),
"encryptionScheme": self.ENCRYPTION_SCHEME,
"iv": aes_iv.decode(),
},
"name": secret.name,
"value": value.decode(),
"description": secret.description,
}
try:
self._client.put(secret.name, json=payload)
except HTTPError as err:
if err.status_code == 403:
message = "Not authorized to write secret. Is your token valid?"
elif err.status_code == 404:
message = f"No secret with name '{secret.name}' available"
else:
message = str(err)
raise RobocorpVaultError(message) from err
except Exception as err:
raise RobocorpVaultError(str(err)) from err
def get_publickey(self) -> bytes:
"""Get the public key for AES encryption with the existing token."""
try:
response = self._client.get("publicKey")
return response.content
except HTTPError as err:
if err.status_code == 403:
message = "Not authorized to read public key. Is your token valid?"
else:
message = str(err)
raise RobocorpVaultError(message) from err
except Exception as err:
raise RobocorpVaultError(str(err)) from err
@staticmethod
def _encrypt_secret_value_with_aes(
secret: SecretContainer,
) -> Tuple[bytes, bytes, bytes, bytes]:
def generate_aes_key() -> Tuple[bytes, bytes]:
aes_key = AESGCM.generate_key(bit_length=256)
aes_iv = os.urandom(16)
return aes_key, aes_iv
def split_auth_tag_from_encrypted_value(
encrypted_value: bytes,
) -> Tuple[bytes, bytes]:
"""AES auth tag is the last 16 bytes of the AES encrypted value.
Split the tag from the value, as that is required for the API.
"""
aes_tag = encrypted_value[-16:]
trimmed_encrypted_value = encrypted_value[:-16]
return trimmed_encrypted_value, aes_tag
value = json.dumps(dict(secret)).encode()
aes_key, aes_iv = generate_aes_key()
encrypted_value = AESGCM(aes_key).encrypt(aes_iv, value, b"")
encrypted_value, aes_tag = split_auth_tag_from_encrypted_value(encrypted_value)
return (
base64.b64encode(encrypted_value),
base64.b64encode(aes_iv),
aes_key,
base64.b64encode(aes_tag),
)
@staticmethod
def _encrypt_aes_key_with_public_rsa(aes_key: bytes, public_rsa: bytes) -> bytes:
pub_decoded = base64.b64decode(public_rsa)
public_key = serialization.load_der_public_key(pub_decoded)
aes_enc = public_key.encrypt( # type: ignore [union-attr]
aes_key,
padding.OAEP(
mgf=padding.MGF1(algorithm=hashes.SHA256()),
algorithm=hashes.SHA256(),
label=None,
),
)
return base64.b64encode(aes_enc) | /robocorp_vault-1.2.0-py3-none-any.whl/robocorp/vault/_adapters.py | 0.896458 | 0.167729 | _adapters.py | pypi |
from contextlib import nullcontext
from functools import lru_cache
from typing import Any
from ._errors import RobocorpVaultError
from ._secrets import SecretContainer
__version__ = "1.2.0"
version_info = [int(x) for x in __version__.split(".")]
@lru_cache
def _get_vault():
from ._vault import Vault
return Vault()
def get_secret(name: str, hide: bool = True) -> SecretContainer:
"""Get a secret with the given name.
Args:
name: Name of secret to fetch
hide: Hide secret values from log output
Note:
If `robocorp.log` is available in the environment, the `hide` argument
controls if the given values are automatically hidden in the log
output.
Returns:
Secret container of name, description, and key-value pairs
Raises:
RobocorpVaultError: Error with API request or response payload.
"""
with _suppress_variables():
vault = _get_vault()
secret = vault.get_secret(name)
if hide:
_hide_secret_values(secret)
return secret
def set_secret(secret: SecretContainer, hide: bool = True) -> None:
"""Set a secret value using an existing container.
**Note:** If the secret already exists, all contents are replaced.
Args:
secret: Secret container, created manually or returned by `get_secret`
hide: Hide secret values from log output
Note:
If `robocorp.log` is available in the environment, the `hide` argument
controls if the given values are automatically hidden in the log
output.
Raises:
RobocorpVaultError: Error with API request or response payload
"""
with _suppress_variables():
vault = _get_vault()
vault.set_secret(secret)
if hide:
_hide_secret_values(secret)
def create_secret(
name: str,
values: dict[str, Any],
description: str = "",
exist_ok: bool = False,
hide: bool = True,
) -> SecretContainer:
"""Create a new secret, or overwrite an existing one.
Args:
name: Name of secret
values: Mapping of secret keys and values
description: Optional description for secret
exist_ok: Overwrite existing secret
hide: Hide secret values from log output
Note:
If `robocorp.log` is available in the environment, the `hide` argument
controls if the given values are automatically hidden in the log
output.
Returns:
Secret container of name, description, and key-value pairs
Raises:
RobocorpVaultError: Error with API request or response payload
"""
with _suppress_variables():
vault = _get_vault()
if not exist_ok:
try:
vault.get_secret(name)
except Exception:
pass
else:
raise RobocorpVaultError(f"Secret with name '{name}' already exists")
secret = SecretContainer(
name=name,
description=description,
values=values,
)
set_secret(secret, hide=hide)
return secret
def _suppress_variables():
try:
from robocorp import log # type: ignore [attr-defined]
return log.suppress_variables()
except ImportError:
return nullcontext()
def _hide_secret_values(secret):
try:
from robocorp import log
except ImportError:
return
for value in secret.values():
s = str(value)
log.hide_from_output(s)
# Now, also take care of the case where the user does a repr(value)
# and not just str(value) as in some places it's the repr(value)
# that'll appear in the logs.
r = repr(value)
if r.startswith("'") and r.endswith("'"):
r = r[1:-1]
if r != s:
log.hide_from_output(r)
log.hide_from_output(r.replace("\\", "\\\\"))
__all__ = [
"get_secret",
"set_secret",
"create_secret",
"SecretContainer",
"RobocorpVaultError",
] | /robocorp_vault-1.2.0-py3-none-any.whl/robocorp/vault/__init__.py | 0.89278 | 0.370709 | __init__.py | pypi |
import logging
from typing import Callable, Optional, Union, cast
from robocorp.tasks import get_current_task, task_cache
from ._adapters import BaseAdapter, FileAdapter, RobocorpAdapter
from ._context import Context
from ._exceptions import (
ApplicationException,
BusinessException,
EmptyQueue,
to_exception_type,
)
from ._types import ExceptionType, JSONType, State
from ._workitem import Input, Output
__version__ = "1.3.2"
version_info = [int(x) for x in __version__.split(".")]
LOGGER = logging.getLogger(__name__)
@task_cache
def __ctx():
"""Create a shared context for the task execution.
Automatically loads the first input item, as one is always automatically
reserved from the queue by Control Room.
After the task finishes, logs a warning if any output work items are
unsaved, and releases the current input if the task failed with an
exception.
"""
ctx = Context()
ctx.reserve_input()
yield ctx
for item in ctx.outputs:
if not item.saved:
LOGGER.warning("%s has unsaved changes that will be discarded", item)
current = ctx.current_input
if current is None or current.released:
return
task = get_current_task()
if not task:
return
if task.exc_info is not None:
exc_type, exc_value, _ = task.exc_info
exception_type = to_exception_type(exc_type)
code = getattr(exc_value, "code", None)
message = getattr(exc_value, "message", str(exc_value))
current.fail(exception_type, code, message)
# Workaround for @task_cache handling the generator
_ctx = cast(Callable[[], Context], __ctx)
class Inputs:
"""Inputs represents the input queue of work items.
It can be used to reserve and release items from the queue,
and iterate over them.
"""
def __iter__(self):
# NOTE: This iterator can't catch exceptions, so the
# context manager is mostly there to set the `completed` state
if self.current and not self.current.released:
with self.current as item:
yield item
while True:
try:
with self.reserve() as item:
yield item
except EmptyQueue:
break
@property
def current(self) -> Optional[Input]:
"""The current reserved input item."""
return _ctx().current_input
@property
def released(self) -> list[Input]:
"""A list of inputs reserved and released during the lifetime
of the library.
"""
return [item for item in _ctx().inputs if item.released]
def reserve(self) -> Input:
"""Reserve a new input work item.
There can only be one item reserved at a time.
Returns:
Input work item
Raises:
RuntimeError: An input work item is already reserved
workitems.EmptyQueue: There are no further items in the queue
"""
return _ctx().reserve_input()
class Outputs:
"""Outputs represents the output queue of work items.
It can be used to create outputs and inspect the items created during the execution.
"""
def __len__(self):
return len(_ctx().outputs)
def __getitem__(self, key):
return _ctx().outputs[key]
def __iter__(self):
return iter(_ctx().outputs)
def __reversed__(self):
return reversed(_ctx().outputs)
@property
def last(self) -> Optional[Output]:
"""The most recently created output work item, or `None`."""
if not _ctx().outputs:
return None
return _ctx().outputs[-1]
def create(
self,
payload: Optional[JSONType] = None,
files: Optional[Union[str, list[str]]] = None,
save: bool = True,
) -> Output:
"""Create a new output work item, which can have both a JSON
payload and attached files.
Creating an output item requires an input to be currently reserved.
Args:
payload: JSON serializable data (dict, list, scalar, etc.)
files: List of paths to files or glob pattern
save: Immediately save item after creation
Raises:
RuntimeError: No input work item reserved
"""
item = _ctx().create_output()
if payload is not None:
item.payload = payload
if files is not None:
if isinstance(files, str):
item.add_files(pattern=files)
else:
for path in files:
item.add_file(path=path)
if save:
item.save()
return item
inputs = Inputs()
outputs = Outputs()
__all__ = [
"inputs",
"outputs",
"Inputs",
"Outputs",
"Input",
"Output",
"State",
"ExceptionType",
"EmptyQueue",
"BusinessException",
"ApplicationException",
"RobocorpAdapter",
"FileAdapter",
"BaseAdapter",
] | /robocorp_workitems-1.3.2.tar.gz/robocorp_workitems-1.3.2/src/robocorp/workitems/__init__.py | 0.89317 | 0.23824 | __init__.py | pypi |
import importlib
import json
import os
from pathlib import Path
from typing import Any, Optional
from ._types import JSONType
# Sentinel value for undefined argument
UNDEFINED = object()
def required_env(name: str, default: Any = UNDEFINED) -> str:
"""Load required environment variable.
Args:
name: Name of environment variable
default: Value to use if variable is undefined.
If not given and variable is undefined, raises KeyError.
"""
val = os.getenv(name, default)
if val is UNDEFINED:
raise KeyError(f"Missing required environment variable: {name}")
return val
def import_by_name(name: str, caller: Optional[str] = None) -> Any:
"""Import module (or attribute) by name.
Args:
name: Import path, e.g. RPA.Robocorp.WorkItems.RobocorpAdapter
caller: Caller file name
"""
name = str(name)
# Attempt import as path module
try:
return importlib.import_module(name)
except ImportError:
pass
# Attempt import from calling file
if caller is not None:
try:
module = importlib.import_module(caller)
return getattr(module, name)
except AttributeError:
pass
# Attempt import as path to attribute inside module
if "." in name:
try:
path, attr = name.rsplit(".", 1)
module = importlib.import_module(path)
return getattr(module, attr)
except (AttributeError, ImportError):
pass
raise ValueError(f"No module/attribute with name: {name}")
def resolve_path(path: str) -> Path:
"""Resolve a string-based path, and replace variables."""
# TODO: Support RF syntax for replacement, such as ${ROBOT_ROOT}?
return Path(path).expanduser().resolve()
def url_join(*parts):
"""Join parts of URL and handle missing/duplicate slashes."""
return "/".join(str(part).strip("/") for part in parts)
def json_dumps(payload: JSONType, **kwargs):
"""Create JSON string in UTF-8 encoding."""
kwargs.setdefault("ensure_ascii", False)
return json.dumps(payload, **kwargs)
def truncate(text: str, size: int):
"""Truncate a string from the middle."""
if len(text) <= size:
return text
ellipsis = " ... "
segment = (size - len(ellipsis)) // 2
return text[:segment] + ellipsis + text[-segment:] | /robocorp_workitems-1.3.2.tar.gz/robocorp_workitems-1.3.2/src/robocorp/workitems/_utils.py | 0.643665 | 0.186984 | _utils.py | pypi |
import json
import logging
import os
from pathlib import Path
from typing import Any, Literal, Optional, Union
from robocorp.workitems._exceptions import EmptyQueue
from robocorp.workitems._types import State
from robocorp.workitems._utils import JSONType, json_dumps, resolve_path
from ._base import BaseAdapter
SourceType = Union[Literal["input"], Literal["output"]]
WorkItem = dict[str, Any]
UNDEFINED = object() # Undefined default value
ENCODING = "utf-8"
LOGGER = logging.getLogger(__name__)
INPUT_HELP = """\
Work items input path not defined, to simulate different input values set the
environment variable `RC_WORKITEM_INPUT_PATH`.
Generated input path: {path}
"""
OUTPUT_HELP = """\
Work items output path not defined, to control the path of the
generated output file set the environment variable `RC_WORKITEM_OUTPUT_PATH`.
Generated output path: {path}
"""
def _show_input_help(path: Path):
LOGGER.warning(INPUT_HELP.format(path=path), stacklevel=2)
def _show_output_help(path: Path):
LOGGER.warning(OUTPUT_HELP.format(path=path), stacklevel=2)
class FileAdapter(BaseAdapter):
"""Adapter for simulating work item input queues.
Uses a local JSON file to test different queue values locally before
the project is deploying into Control Room.
If no input or output path is defined by the environment variables
described below, a path is automatically generated. Additionally,
the input queue will be populated with an empty work item, as
Control Room runs will always have at least one input work item.
Reads and writes all work item files from/to the same parent
folder as the given input or outputs database.
Optional environment variables:
* RC_WORKITEM_INPUT_PATH: Path to work items input database file
* RC_WORKITEM_OUTPUT_PATH: Path to work items output database file
lazydocs: ignore
"""
def __init__(self) -> None:
self._input_path: Optional[Path] = None
self._output_path: Optional[Path] = None
self._inputs: list[WorkItem] = self._load_inputs()
self._outputs: list[WorkItem] = []
self._index: int = 0
@property
def input_path(self):
if self._input_path is None:
self._input_path = self._resolve_input_path()
return self._input_path
@property
def output_path(self) -> Path:
if self._output_path is None:
self._output_path = self._resolve_output_path()
return self._output_path
def _load_inputs(self) -> list[WorkItem]:
try:
with open(self.input_path, "r", encoding=ENCODING) as infile:
data = json.load(infile)
if not isinstance(data, list):
raise ValueError("Expected list of items")
if any(not isinstance(d, dict) for d in data):
raise ValueError("Items should be dictionaries")
if len(data) == 0:
raise ValueError("Expected at least one item")
return data
except Exception as exc:
raise ValueError(f"Invalid work items file: {exc}") from exc
def _resolve_input_path(self) -> Path:
env = os.getenv("RC_WORKITEM_INPUT_PATH") or os.getenv(
"RPA_INPUT_WORKITEM_PATH"
)
if env:
path = resolve_path(env)
else:
parent = resolve_path(os.getenv("ROBOT_ARTIFACTS") or "output")
path = parent / "work-items-in" / "workitems.json"
_show_input_help(path)
path.parent.mkdir(parents=True, exist_ok=True)
with open(path, "w", encoding=ENCODING) as fd:
fd.write(json_dumps([{"payload": None, "files": {}}]))
return path
def _save_inputs(self):
self.input_path.parent.mkdir(parents=True, exist_ok=True)
with open(self.input_path, "w", encoding=ENCODING) as fd:
fd.write(json_dumps(self._inputs, indent=4))
LOGGER.info("Saved input work items: %s", self.input_path)
def _resolve_output_path(self) -> Path:
env = os.getenv("RC_WORKITEM_OUTPUT_PATH") or os.getenv(
"RPA_OUTPUT_WORKITEM_PATH"
)
if env:
path = resolve_path(env)
else:
parent = resolve_path(os.getenv("ROBOT_ARTIFACTS") or "output")
path = parent / "work-items-out" / "workitems.json"
path.parent.mkdir(parents=True, exist_ok=True)
_show_output_help(path)
return path
def _save_outputs(self):
self.output_path.parent.mkdir(parents=True, exist_ok=True)
with open(self.output_path, "w", encoding=ENCODING) as fd:
fd.write(json_dumps(self._outputs, indent=4))
LOGGER.info("Saved output work items: %s", self.output_path)
def _get_item(self, item_id: str) -> tuple[SourceType, WorkItem]:
# The work item ID is analogue to inputs/outputs list queues index
idx = int(item_id)
if idx < len(self._inputs):
return "input", self._inputs[idx]
if idx < (len(self._inputs) + len(self._outputs)):
return "output", self._outputs[idx - len(self._inputs)]
raise ValueError(f"Unknown work item ID: {item_id}")
# Base class methods:
def reserve_input(self) -> str:
if self._index >= len(self._inputs):
raise EmptyQueue("No work items in the input queue")
try:
return str(self._index)
finally:
self._index += 1
def release_input(
self,
item_id: str,
state: State,
exception: Optional[dict] = None,
):
# Note: No effect (as of now) when releasing during local development
LOGGER.info(
"Releasing item %r with %s state and exception: %s",
item_id,
state.value,
exception,
)
def create_output(self, parent_id: str, payload: Optional[JSONType] = None) -> str:
# Note: The `parent_id` is not used during local development
del parent_id
item: WorkItem = {"payload": payload, "files": {}}
self._outputs.append(item)
self._save_outputs()
item_id = str(len(self._inputs) + len(self._outputs) - 1)
return item_id
def load_payload(self, item_id: str) -> JSONType:
_, item = self._get_item(item_id)
return item.get("payload", None)
def save_payload(self, item_id: str, payload: JSONType):
source, item = self._get_item(item_id)
item["payload"] = payload
if source == "input":
self._save_inputs()
else:
self._save_outputs()
def list_files(self, item_id: str) -> list[str]:
_, item = self._get_item(item_id)
files = item.get("files", {})
return list(files.keys())
def get_file(self, item_id: str, name: str) -> bytes:
source, item = self._get_item(item_id)
files = item.get("files", {})
path = files[name]
if not Path(path).is_absolute():
if source == "input":
path = self.input_path.parent / path
else:
path = self.output_path.parent / path
with open(path, "rb") as fd:
return fd.read()
def add_file(self, item_id: str, name: str, content: bytes):
source, item = self._get_item(item_id)
files = item.setdefault("files", {})
files[name] = name
if source == "input":
path = self.input_path.parent / name
else:
path = self.output_path.parent / name
path.parent.mkdir(parents=True, exist_ok=True)
with open(path, "wb") as fd:
fd.write(content)
LOGGER.info("Created file: %s", path)
if source == "input":
self._save_inputs()
else:
self._save_outputs()
def remove_file(self, item_id: str, name: str):
source, item = self._get_item(item_id)
files = item.get("files", {})
path = files[name]
# Note: Doesn't actually remove the file from disk
LOGGER.info("Would remove file: %s", path)
del files[name]
if source == "input":
self._save_inputs()
else:
self._save_outputs() | /robocorp_workitems-1.3.2.tar.gz/robocorp_workitems-1.3.2/src/robocorp/workitems/_adapters/_file.py | 0.799247 | 0.175927 | _file.py | pypi |
from ev3dev2.button import Button
from ev3dev2.sound import Sound
from ev3dev2.led import Leds
from ev3dev2.motor import SpeedPercent
from math import sin, cos, tan
# ev3dev API Reference:
# https://ev3dev-lang.readthedocs.io/projects/python-ev3dev/en/stable/spec.html
#
# RoboCup Module Example
# ExampleProject()
class Driver:
IR_SENSOR = "ht-nxt-ir-seek-v2"
COMPASS_SENSOR = "ht-nxt-compass"
def Unpack(values) -> list:
for item in values:
try:
yield from Unpack(item)
except TypeError:
yield item
def Average(values) -> float:
return sum(values) / len(values)
class Clamper():
def __init__(self,min_value: int,max_value: int) -> None:
self.min_value = min_value; self.max_value = max_value
def Clamp(self,values: int or list[int]) -> list[int]:
values = list(Unpack(values))
clamped = [None] * len(values)
for i,value in enumerate(values):
if value < self.min_value: value = self.min_value
if value > self.max_value: value = self.max_value
clamped[i] = value
return clamped
class PID_Controller():
def __init__(self,p,i,d) -> None:
self.kp = p; self.ki = i; self.kd = d;
self.reset()
def reset(self) -> None:
self.last_error = 0
def update(self,value: float) -> float:
error = value - self.last_error
p = self.kp * value
i = value * error
d = error
return p + (self.ki * i) + (self.kd * d)
class FilteredSensor():
def __init__(self,difference: int = 200, outliers: int = 15) -> None:
self.stored = []
self.counter = 0
self.difference = difference
self.outliers = outliers
def Value(self,value) -> float:
if self.counter > self.outliers: self.stored = []
if len(self.stored) == 0:
self.stored.append(value)
change = abs(self.Average(self.stored) - value)
if change >= self.difference:
self.counter += 1
else:
self.stored.append(value)
return self.Average(self.stored)
class Robot():
def __init__(self) -> None:
self.Port: dict = {
"A": None,
"B": None,
"C": None,
"D": None,
"1": None,
"2": None,
"3": None,
"4": None,
}
self.Leds = Leds()
self.Sound = Sound()
self.Buttons = Button()
self.Speed = Clamper(-100,100)
def CoastMotors(self) -> None:
[self.Port[p].off(brake=False) for p in list("ABCD") if self.Port[p] != None]
def Color(self,color) -> None:
self.Leds.set_color('LEFT',color)
self.Leds.set_color('RIGHT',color)
def ScaleSpeeds(self,target_value:int,speeds:list[float]) -> list[float]:
greatest = max([abs(speed) for speed in speeds])
if greatest > target_value: greatest = target_value
fix = target_value / greatest
for i in range(len(speeds)):
speeds[i] = round(speeds[i] * fix,2)
return speeds
def PrintPorts(self) -> None:
print('\n'.join([self.Port[x] != None for x in self.Ports]))
def AngleToXY(self,angle,speed) -> tuple[float]:
x = speed * cos(angle)
y = speed * sin(angle)
return (x,y)
def StartMotors(self,speeds) -> None:
speeds = Unpack(speeds)
for p in list("ABCD"):
if self.Port[p] != None:
self.Port[p].on(SpeedPercent(self.Speed.Clamp(speeds[0])))
if len(speeds) > 1: speeds.pop(0)
def SmoothAngle(self,current: float,target: float,smoothing: float=1.25) -> float:
diff = abs(current - target)
if diff > 270: diff -= 270
diff /= smoothing
if current - smoothing / 2 < target:
return current + diff
elif current + smoothing / 2 > target:
return current - diff
return current
def ExampleProject() -> None:
with open("Demo_RoboCup.py","w+") as file:
file.write("""import RoboCup as rc
from ev3dev2.motor import LargeMotor, OUTPUT_B, OUTPUT_C
from ev3dev2.sensor import Sensor, INPUT_1, INPUT_2, INPUT_3
from ev3dev2.sensor.lego import ColorSensor, UltrasonicSensor
robot = rc.Robot()
robot.Port['B'] = LargeMotor(OUTPUT_B)
robot.Port['C'] = LargeMotor(OUTPUT_C)
robot.Port['1'] = ColorSensor(INPUT_1)
robot.Port['2'] = UltrasonicSensor(INPUT_2)
robot.Port['3'] = Sensor(INPUT_3,driver_name=rc.Driver.IR_SENSOR)
filtered_ultrasonic = rc.FilteredSensor()
robot.Color('RED')
robot.PrintPorts()
try:
while not robot.Buttons.any:
color = self.Port['1'].rgb()
distance = filtered_ultrasonic.Value(robot.Port['2'].distance_centimeters)
if rc.Average(color) < 100:
robot.StartMotors([50,-50])
else:
robot.StartMotors(50)
except KeyboardInterrupt:
print("Interrupted")
robot.CoastMotors()
robot.Color('GREEN')""")
if __name__ == '__main__':
ExampleProject() | /robocup_tools-0.0.1-py3-none-any.whl/RoboCup/__init__.py | 0.620277 | 0.356167 | __init__.py | pypi |
# Roboflow Python
---

<div align="center">
<a href="https://youtube.com/roboflow">
<img
src="https://media.roboflow.com/notebooks/template/icons/purple/youtube.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949634652"
width="3%"
/>
</a>
<img src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-transparent.png" width="3%"/>
<a href="https://roboflow.com">
<img
src="https://media.roboflow.com/notebooks/template/icons/purple/roboflow-app.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949746649"
width="3%"
/>
</a>
<img src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-transparent.png" width="3%"/>
<a href="https://www.linkedin.com/company/roboflow-ai/">
<img
src="https://media.roboflow.com/notebooks/template/icons/purple/linkedin.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949633691"
width="3%"
/>
</a>
<img src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-transparent.png" width="3%"/>
<a href="https://docs.roboflow.com">
<img
src="https://media.roboflow.com/notebooks/template/icons/purple/knowledge.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949634511"
width="3%"
/>
</a>
<img src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-transparent.png" width="3%"/>
<a href="https://disuss.roboflow.com">
<img
src="https://media.roboflow.com/notebooks/template/icons/purple/forum.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949633584"
width="3%"
/>
<img src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-transparent.png" width="3%"/>
<a href="https://blog.roboflow.com">
<img
src="https://media.roboflow.com/notebooks/template/icons/purple/blog.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949633605"
width="3%"
/>
</a>
</a>
</div>
<br>
**Roboflow** streamlines your computer vision pipeline - upload data, label it, download datasets, train models, deploy models, and repeat.
The **Roboflow Python Package** is a python wrapper around the core Roboflow web application and REST API.
We also maintain an open source set of CV utililities and notebook tutorials in Python:
* :fire: https://github.com/roboflow/supervision :fire:
* :fire: https://github.com/roboflow/notebooks :fire:
## Installation
To install this package, please use `Python 3.6` or higher.
Install from PyPi (Recommended):
```bash
pip install roboflow
```
Install from Source:
```bash
git clone https://github.com/roboflow-ai/roboflow-python.git
cd roboflow-python
python3 -m venv env
source env/bin/activate
pip3 install -r requirements.txt
```
## Authentication
```python
import roboflow
roboflow.login()
```
## Quickstart
### Datasets
Download any of over 200,000 public computer vision datasets from [Roboflow Universe](universe.roboflow.com). Label and download your own datasets on app.roboflow.com.
```python
import roboflow
dataset = roboflow.download_dataset(dataset_url="universe.roboflow.com/...", model_format="yolov8")
#ex. dataset = roboflow.download_dataset(dataset_url="https://universe.roboflow.com/joseph-nelson/bccd/dataset/1", model_format="yolov8")
print(dataset.location)
```
### Models
Predict with any of over 50,000 public computer vision models. Train your own computer vision models on app.roboflow.com or train upload your model from open source models - see https://github.com/roboflow/notebooks
```python
img_url = "https://media.roboflow.com/quickstart/aerial_drone.jpeg?updatedAt=1678743716455"
universe_model_url = "https://universe.roboflow.com/brad-dwyer/aerial-solar-panels/model/6"
model = roboflow.load_model(model_url=universe_model_url)
pred = model.predict(img_url, hosted=True)
pred.plot()
```
## Library Structure
The Roboflow python library is structured by the core Roboflow application objects.
Workspace (workspace.py) --> Project (project.py) --> Version (version.py)
```python
from roboflow import Roboflow
rf = Roboflow()
workspace = rf.workspace("WORKSPACE_URL")
project = workspace.project("PROJECT_URL")
version = project.version("VERSION_NUMBER")
```
The workspace, project, and version parameters are the same that you will find in the URL addresses at app.roboflow.com and universe.roboflow.com.
Within the workspace object you can perform actions like making a new project, listing your projects, or performing active learning where you are using predictions from one project's model to upload images to a new project.
Within the project object, you can retrieve metadata about the project, list versions, generate a new dataset version with preprocessing and augmentation settings, train a model in your project, and upload images and annotations to your project.
Within the version object, you can download the dataset version in any model format, train the version on Roboflow, and deploy your own external model to Roboflow.
## Contributing
If you want to extend our Python library or if you find a bug, please open a PR!
Also be sure to test your code the `unittest` command at the `/root` level directory.
Run tests:
```bash
python -m unittest
```
When creating new functions, please follow the [Google style Python docstrings](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html). See example below:
```python
def example_function(param1: int, param2: str) -> bool:
"""Example function that does something.
Args:
param1: The first parameter.
param2: The second parameter.
Returns:
The return value. True for success, False otherwise.
"""
```
We provide a `Makefile` to format and ensure code quality. **Be sure to run them before creating a PR**.
```bash
# format code with `black` and `isort`
make style
# check code with flake8
make check_code_quality
```
**Note** These tests will be run automatically when you commit thanks to git hooks. | /roboflow-1.1.4.tar.gz/roboflow-1.1.4/README.md | 0.552781 | 0.906777 | README.md | pypi |
import base64
import io
import os
import requests
import urllib
from PIL import Image
import json
from roboflow.config import OBJECT_DETECTION_MODEL
from roboflow.util.prediction import PredictionGroup
from roboflow.util.image_utils import check_image_url
class ObjectDetectionModel:
def __init__(self, api_key, id, name=None, version=None, local=False, classes=None, overlap=30, confidence=40,
stroke=1, labels=False, format="json"):
"""
From Roboflow Docs:
:param api_key: Your API key (obtained via your workspace API settings page)
:param name: The url-safe version of the dataset name. You can find it in the web UI by looking at
the URL on the main project view or by clicking the "Get curl command" button in the train results section of
your dataset version after training your model.
:param local: Boolean value dictating whether to use the local server or hosted API
:param version: The version number identifying the version of of your dataset
:param classes: Restrict the predictions to only those of certain classes. Provide as a comma-separated string.
:param overlap: The maximum percentage (on a scale of 0-100) that bounding box predictions of the same class are
allowed to overlap before being combined into a single box.
:param confidence: A threshold for the returned predictions on a scale of 0-100. A lower number will return
more predictions. A higher number will return fewer high-certainty predictions
:param stroke: The width (in pixels) of the bounding box displayed around predictions (only has an effect when
format is image)
:param labels: Whether or not to display text labels on the predictions (only has an effect when format is
image).
:param format: json - returns an array of JSON predictions. (See response format tab).
image - returns an image with annotated predictions as a binary blob with a Content-Type
of image/jpeg.
"""
# Instantiate different API URL parameters
# To be moved to predict
self.__api_key = api_key
self.id = id
self.name = name
self.version = version
self.classes = classes
self.overlap = overlap
self.confidence = confidence
self.stroke = stroke
self.labels = labels
self.format = format
# local needs to be passed from Project
if not local:
self.base_url = "https://detect.roboflow.com/"
else:
self.base_url = "http://localhost:9001/"
# If dataset slug not none, instantiate API URL
if name is not None and version is not None:
self.__generate_url()
def load_model(self, name, version, local=None, classes=None, overlap=None, confidence=None,
stroke=None, labels=None, format=None):
"""
Loads a Model based on a Model Endpoint
:param model_endpoint: This is the endpoint that is loaded into the api_url
"""
# To load a model manually, they must specify a dataset slug
self.name = name
self.version = version
# Generate URL based on parameters
self.__generate_url(local=local, classes=classes, overlap=overlap, confidence=confidence,
stroke=stroke, labels=labels, format=format)
def predict(self, image_path, hosted=False, format=None, classes=None, overlap=30, confidence=40,
stroke=1, labels=False):
"""
Infers detections based on image from specified model and image path
:param image_path: Path to image (can be local or hosted)
:param hosted: If image located on a hosted server, hosted should be True
:param format: output format from this method
:return: PredictionGroup --> a group of predictions based on Roboflow JSON response
"""
# Generate url before predicting
self.__generate_url(format=format, classes=classes, overlap=overlap, confidence=confidence, stroke=stroke,
labels=labels)
# Check if image exists at specified path or URL
self.__exception_check(image_path_check=image_path)
# If image is local image
if not hosted:
# Open Image in RGB Format
image = Image.open(image_path).convert("RGB")
# Create buffer
buffered = io.BytesIO()
image.save(buffered, quality=90, format="JPEG")
# Base64 encode image
img_str = base64.b64encode(buffered.getvalue())
img_str = img_str.decode("ascii")
# Post to API and return response
resp = requests.post(self.api_url, data=img_str, headers={
"Content-Type": "application/x-www-form-urlencoded"
})
else:
# Create API URL for hosted image (slightly different)
self.api_url += "&image=" + urllib.parse.quote_plus(image_path)
# POST to the API
resp = requests.get(self.api_url)
if resp.status_code != 200:
raise Exception(resp.text)
# Return a prediction group if JSON data
if self.format == "json":
return PredictionGroup.create_prediction_group(resp.json(),
image_path=image_path,
prediction_type=OBJECT_DETECTION_MODEL)
# Returns base64 encoded Data
elif self.format == "image":
return resp.content
def __exception_check(self, image_path_check=None):
# Check if Image path exists exception check (for both hosted URL and local image)
if image_path_check is not None:
if not os.path.exists(image_path_check) and not check_image_url(image_path_check):
raise Exception("Image does not exist at " + image_path_check + "!")
def __generate_url(self, local=None, classes=None, overlap=None, confidence=None,
stroke=None, labels=None, format=None):
# Reassign parameters if any parameters are changed
if local is not None:
if not local:
self.base_url = "https://detect.roboflow.com/"
else:
self.base_url = "http://localhost:9001/"
"""
There is probably a better way to do this lol
TODO: Refactor this!
"""
# Change any variables that the user wants to change
if classes is not None:
self.classes = classes
if overlap is not None:
self.overlap = overlap
if confidence is not None:
self.confidence = confidence
if stroke is not None:
self.stroke = stroke
if labels is not None:
self.labels = labels
if format is not None:
self.format = format
# Create the new API URL
splitted = self.id.rsplit("/")
without_workspace = splitted[1]
self.api_url = "".join([
self.base_url + without_workspace + '/' + str(self.version),
"?api_key=" + self.__api_key,
"&name=YOUR_IMAGE.jpg",
"&overlap=" + str(self.overlap),
"&confidence=" + str(self.confidence),
"&stroke=" + str(self.stroke),
"&labels=" + str(self.labels).lower(),
"&format=" + self.format
])
# add classes parameter to api
if self.classes is not None:
self.api_url += "&classes=" + self.classes
def __str__(self):
# Create the new API URL
splitted = self.id.rsplit("/")
without_workspace = splitted[1]
json_value = {
'id': without_workspace + '/' + str(self.version),
'name': self.name,
'version': self.version,
'classes': self.classes,
'overlap': self.overlap,
'confidence': self.confidence,
'stroke': self.stroke,
'labels': self.labels,
'format': self.format,
'base_url': self.base_url
}
return json.dumps(json_value, indent=2) | /roboflowtest-0.0.2-py3-none-any.whl/roboflow/models/object_detection.py | 0.673729 | 0.327561 | object_detection.py | pypi |
from typing import Literal
import torch
import torch.nn.functional as F
from torch import Tensor, nn
from roboformer.models.opt.model import OPT
SamplingMode = Literal["multinomial", "greedy", "nucleus"]
def sample_step(next_logits: Tensor, mode: str, nucleus_prob: float) -> Tensor:
"""Does a single sampling step on a given set of logits.
Args:
next_logits: The logits to sample from, with shape (B, C)
mode: The sampling mode to use
nucleus_prob: Nucleus sampling probability
Returns:
The sampled tokens, with shape (B, 1)
Raises:
NotImplementedError: If the mode is invalid
"""
if mode == "multinomial":
return torch.multinomial(next_logits.softmax(1), num_samples=1)
if mode == "greedy":
return next_logits.argmax(dim=1, keepdim=True)
if mode == "nucleus":
probs = next_logits.softmax(dim=1)
sorted_probs, indices = torch.sort(probs, dim=-1, descending=True)
cumsum_probs = torch.cumsum(sorted_probs, dim=-1)
nucleus = cumsum_probs < nucleus_prob
nucleus = torch.cat([nucleus.new_ones(nucleus.shape[:-1] + (1,)), nucleus[..., :-1]], dim=-1)
sorted_log_probs = torch.log(sorted_probs)
sorted_log_probs[~nucleus] = float("-inf")
# sampled_sorted_indices = Categorical(logits=sorted_log_probs).sample()
probs = F.softmax(sorted_log_probs, dim=-1)
probs_2d = probs.flatten(0, -2)
samples_2d = torch.multinomial(probs_2d, 1, True).T
sampled_sorted_indices = samples_2d.reshape(probs.shape[:-1]).unsqueeze(-1)
res = indices.gather(-1, sampled_sorted_indices)
return res
raise NotImplementedError(f"Invalid mode: {mode}")
class Sampler(nn.Module):
__constants__ = ["mode", "max_steps", "nucleus_prob"]
def __init__(
self,
model: OPT,
mode: SamplingMode,
max_steps: int,
*,
nucleus_prob: float = 0.8,
) -> None:
"""Defines a wrapper module to sample from an OPT model.
Args:
model: The model to sample from
mode: The sampling mode to use
max_steps: The maximum number of steps to sample
nucleus_prob: Nucleus sampling probability
"""
super().__init__()
self.model = model
self.mode = mode
self.max_steps = max_steps
self.nucleus_prob = nucleus_prob
def sample(self, prev_token: Tensor) -> Tensor:
"""Samples a sequence for a given prefix.
Args:
prev_token: The prefix to use, with shape (B, T)
Returns:
The sampled tokens, with shape (B, T)
"""
offset = 0
all_tokens = prev_token
# Runs the first step to get the caches.
next_logits, kv_caches = self.model(prev_token)
offset += next_logits.size(2)
prev_token = sample_step(next_logits[..., -1], self.mode, self.nucleus_prob)
all_tokens = torch.cat((all_tokens, prev_token), dim=1)
for _ in range(self.max_steps):
next_logits, kv_caches = self.model(
prev_token,
kv_caches=kv_caches,
offset=offset,
)
offset += next_logits.size(2)
next_logits = next_logits[..., -1]
prev_token = sample_step(next_logits, self.mode, self.nucleus_prob)
all_tokens = torch.cat((all_tokens, prev_token), dim=1)
return all_tokens
def forward(self, prev_token: Tensor) -> Tensor:
return self.sample(prev_token) | /models/opt/sampling.py | 0.950227 | 0.594728 | sampling.py | pypi |
import os
import re
import urllib
from pathlib import Path
from typing import Iterator
USER_AGENT = "minopt"
def _get_model_dir() -> Path:
model_dir_env = os.environ.get("MODEL_DIR_NAME", "MODEL_DIR")
if model_dir_env in os.environ:
return Path(os.environ["MODEL_DIR"])
raise KeyError(
"In order to download a pre-trained model, first set the `MODEL_DIR` "
"environment variable to point to the directory where you would like to "
"download the model weights. Alternatively, you can set `MODEL_DIR_NAME` "
"to the name of some environment variable that points to your model "
"directory, such as `HF_HOME`"
)
def _save_response_content(content: Iterator[bytes], destination: str) -> None:
with open(destination, "wb") as fh:
for chunk in content:
if not chunk:
continue
fh.write(chunk)
def _urlretrieve(url: str, filename: str, chunk_size: int = 1024 * 32) -> None:
request = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
with urllib.request.urlopen(request) as response:
_save_response_content(iter(lambda: response.read(chunk_size), b""), filename)
def _get_redirect_url(url: str, max_hops: int = 3) -> str:
initial_url = url
headers = {"Method": "HEAD", "User-Agent": USER_AGENT}
for _ in range(max_hops + 1):
with urllib.request.urlopen(urllib.request.Request(url, headers=headers)) as response:
if response.url == url or response.url is None:
return url
url = response.url
raise RecursionError(
f"Request to {initial_url} exceeded {max_hops} redirects. " f"The last redirect points to {url}."
)
def download_url(url: str, root: str, filename: str | None = None, max_redirect_hops: int = 3) -> None:
"""Download a file from a url and place it in root.
This function and it's subfunctions are mostly copied from the TorchVision
implementation: `torchvision.datasets.utils.download_url`
Args:
url: URL to download file from
root: Directory to place downloaded file in
filename: Name to save the file under. If None, use the basename of the URL
max_redirect_hops: Maximum number of redirect hops allowed
Raises:
OSError: If there is an error dowloading the requested URL
"""
root = os.path.expanduser(root)
if not filename:
filename = os.path.basename(url)
fpath = os.path.join(root, filename)
os.makedirs(root, exist_ok=True)
# expand redirect chain if needed
url = _get_redirect_url(url, max_hops=max_redirect_hops)
# download the file
try:
print("Downloading " + url + " to " + fpath)
_urlretrieve(url, fpath)
except (urllib.error.URLError, OSError) as e:
if url[:5] == "https":
url = url.replace("https:", "http:")
print("Failed download. Trying https -> http instead. Downloading " + url + " to " + fpath)
_urlretrieve(url, fpath)
else:
raise e
def download_sharded_weights(urls: list[str]) -> list[Path]:
"""Downloads the sharded OPT weights and returns their location.
Args:
urls: The URLs of the sharded weights files to download
Returns:
The path of the downloaded weights
"""
# Parses a unique identifier from the URLs.
match = re.search(r".com/(.+)$", urls[0])
assert match is not None
identifier = os.path.dirname(match.group(1))
assert all(identifier in url for url in urls)
identifier = "_".join(i.upper() for i in identifier.split("/"))
# All files are downloaded to the same directory.
root = (_get_model_dir() / identifier).resolve()
filepaths: list[Path] = []
for url in urls:
filename = os.path.basename(url)
filepath = root / filename
filepaths.append(filepath)
if not filepath.exists():
download_url(url, str(root), filename=filename)
return filepaths | /models/opt/utils.py | 0.755997 | 0.240618 | utils.py | pypi |
from enum import Flag
from polymetis import GripperInterface
from mj_envs.robot.hardware_base import hardwareBase
import numpy as np
import argparse
import time
class Robotiq(hardwareBase):
def __init__(self, name, ip_address, **kwargs):
self.name = name
self.ip_address = ip_address
self.robot = None
self.max_width = 0.0
self.min_width = 0.0
def connect(self, policy=None):
"""Establish hardware connection"""
connection = False
# Initialize self.robot interface
print("RBQ:> Connecting to {}: ".format(self.name), end="")
try:
self.robot = GripperInterface(
ip_address=self.ip_address,
)
print("Success")
except Exception as e:
self.robot = None # declare dead
print("Failed with exception: ", e)
return connection
print("RBQ:> Testing {} connection: ".format(self.name), end="")
if self.okay():
print("Okay")
self.max_width = self.robot.get_state().max_width
connection = True
else:
print("Not ready. Please retry connection")
return connection
def okay(self):
"""Return hardware health"""
okay = False
if self.robot:
try:
state = self.robot.get_state()
delay = time.time() - (state.timestamp.seconds + 1e-9 * state.timestamp.nanos)
assert delay < 5, "Acquired state is stale by {} seconds".format(delay)
okay = True
except:
self.robot = None # declare dead
okay = False
return okay
def close(self):
"""Close hardware connection"""
if self.robot:
print("RBQ:> Resetting robot before close: ", end="")
try:
self.reset()
print("RBQ:> Success: ", end="")
except:
print("RBQ:> Failed. Exiting : ", end="")
self.robot = None
print("Connection closed")
return True
def reconnect(self):
print("RBQ:> Attempting re-connection")
self.connect()
while not self.okay():
self.connect()
time.sleep(2)
print("RBQ:> Re-connection success")
def reset(self, width=None, **kwargs):
"""Reset hardware"""
if not width:
width = self.max_width
self.apply_commands(width=width, **kwargs)
def get_sensors(self):
"""Get hardware sensors"""
try:
curr_state = self.robot.get_state()
except:
print("RBQ:> Failed to get current sensors: ", end="")
self.reconnect()
return self.get_sensors()
return np.array([curr_state.width])
def apply_commands(self, width:float, speed:float=0.1, force:float=0.1):
assert width>=0.0 and width<=self.max_width, "Gripper desired width ({}) is out of bound (0,{})".format(width, self.max_width)
self.robot.goto(width=width, speed=speed, force=force)
return 0
# Get inputs from user
def get_args():
parser = argparse.ArgumentParser(description="Polymetis based gripper client")
parser.add_argument("-i", "--server_ip",
type=str,
help="IP address or hostname of the franka server",
default="localhost") # 172.16.0.1
return parser.parse_args()
if __name__ == "__main__":
args = get_args()
# user inputs
time_to_go = 2.0*np.pi
m = 0.5 # magnitude of sine wave (rad)
T = 2.0 # period of sine wave
hz = 50 # update frequency
# Initialize robot
rbq = Robotiq(name="Demo_robotiq", ip_address=args.server_ip)
# connect to robot
status = rbq.connect()
assert status, "Can't connect to Robotiq"
# reset using the user controller
rbq.reset()
# Close gripper
des_width = 0.0
rbq.apply_commands(width=des_width)
time.sleep(2)
curr_width = rbq.get_sensors()
print("RBQ:> Testing gripper close: Desired:{}, Achieved:{}".format(des_width, curr_width))
# Open gripper
des_width = rbq.max_width
rbq.apply_commands(width=des_width)
time.sleep(2)
curr_width = rbq.get_sensors()
print("RBQ:> Testing gripper Open: Desired:{}, Achieved:{}".format(des_width, curr_width))
# Contineous control
for i in range(int(time_to_go * hz)):
des_width = rbq.max_width * ( 1 + np.cos(np.pi * i / (T * hz)) )/2
rbq.apply_commands(width=des_width)
time.sleep(1 / hz)
# Drive gripper using keyboard
if False:
from vtils.keyboard import key_input as keyboard
ky = keyboard.Key()
sen = None
print("Press 'q' to stop listening")
while sen != 'q':
sen = ky.get_sensor()
if sen is not None:
print(sen, end=", ", flush=True)
if sen == 'up':
rbq.apply_commands(width=rbq.max_width)
elif sen=='down':
rbq.apply_commands(width=rbq.min_width)
time.sleep(.01)
# close connection
rbq.close()
print("RBQ:> Demo Finished") | /robohive-0.3.0-py3-none-any.whl/mj_envs/robot/hardware_robotiq.py | 0.606498 | 0.177526 | hardware_robotiq.py | pypi |
DESC = '''
Helper script to record/examine a rollout's openloop effects (render/ playback/ recover) on an environment\n
> Examine options:\n
- Record: Record an execution. (Useful for kinesthetic demonstrations on hardware)\n
- Render: Render back the execution. (sim.forward)\n
- Playback: Playback the rollout action sequence in openloop (sim.step(a))\n
- Recover: Playback actions recovered from the observations \n
> Render options\n
- either onscreen, or offscreen, or just rollout without rendering.\n
> Save options:\n
- save resulting paths as pickle or as 2D plots\n
USAGE:\n
$ python examine_rollout.py --env_name door-v0 \n
$ python examine_rollout.py --env_name door-v0 --rollout_path my_rollouts.pickle --repeat 10 \n
'''
import gym
from mj_envs.utils.paths_utils import plot as plotnsave_paths
from mj_envs.utils.tensor_utils import split_tensor_dict_list
from mj_envs.utils import tensor_utils
import click
import numpy as np
import pickle
import h5py
import time
import os
import skvideo.io
@click.command(help=DESC)
@click.option('-e', '--env_name', type=str, help='environment to load', required=True)
@click.option('-p', '--rollout_path', type=str, help='absolute path of the rollout', default=None)
@click.option('-m', '--mode', type=click.Choice(['record', 'render', 'playback', 'recover']), help='How to examine rollout', default='playback')
@click.option('-h', '--horizon', type=int, help='Rollout horizon, when mode is record', default=-1)
@click.option('-s', '--seed', type=int, help='seed for generating environment instances', default=123)
@click.option('-n', '--num_repeat', type=int, help='number of repeats for the rollouts', default=1)
@click.option('-r', '--render', type=click.Choice(['onscreen', 'offscreen', 'none']), help='visualize onscreen or offscreen', default='onscreen')
@click.option('-c', '--camera_name', type=str, default=None, help=('Camera name for rendering'))
@click.option('-o', '--output_dir', type=str, default='./', help=('Directory to save the outputs'))
@click.option('-on', '--output_name', type=str, default=None, help=('The name to save the outputs as'))
@click.option('-sp', '--save_paths', type=bool, default=False, help=('Save the rollout paths'))
@click.option('-cp', '--compress_paths', type=bool, default=True, help=('compress paths. Remove obs and env_info/state keys'))
@click.option('-pp', '--plot_paths', type=bool, default=False, help=('2D-plot of individual paths'))
@click.option('-ea', '--env_args', type=str, default=None, help=('env args. E.g. --env_args "{\'is_hardware\':True}"'))
@click.option('-ns', '--noise_scale', type=float, default=0.0, help=('Noise amplitude in randians}"'))
def main(env_name, rollout_path, mode, horizon, seed, num_repeat, render, camera_name, output_dir, output_name, save_paths, compress_paths, plot_paths, env_args, noise_scale):
# seed and load environments
np.random.seed(seed)
env = gym.make(env_name) if env_args==None else gym.make(env_name, **(eval(env_args)))
env.seed(seed)
# load paths
if mode == 'record':
assert horizon>0, "Rollout horizon must be specified when recording rollout"
assert output_name is not None, "Specify the name of the recording"
if save_paths is False:
print("Warning: Recording is not being saved. Enable save_paths=True to log the recorded path")
paths = [None,]*num_repeat # empty paths for recordings
else:
assert rollout_path is not None, "Rollout path is required for mode:{} ".format(mode)
if output_dir == './': # overide the default
output_dir = os.path.dirname(rollout_path)
if output_name is None:
rollout_name = os.path.split(rollout_path)[-1]
output_name, output_type = os.path.splitext(rollout_name)
# file_name = os.path.join(output_dir, output_name+"_"+"-".join(cam_names))
# resolve data format
if output_type=='.h5':
paths = h5py.File(rollout_path, 'r')
elif output_type=='.pickle':
paths = pickle.load(open(rollout_path, 'rb'))
else:
raise TypeError("Unknown path format. Check file")
# resolve rendering
if render == 'onscreen':
env.env.mujoco_render_frames = True
elif render =='offscreen':
env.mujoco_render_frames = False
frame_size=(640,480)
frames = np.zeros((env.horizon, frame_size[1], frame_size[0], 3), dtype=np.uint8)
elif render == None:
env.mujoco_render_frames = False
# playback paths
pbk_paths = []
for i_loop in range(num_repeat):
print("Starting playback loop:{}".format(i_loop))
ep_rwd = 0.0
for i_path, path in enumerate(paths):
if output_type=='.h5':
data = paths[path]['data']
path_horizon = data['ctrl_arm'].shape[0]
else:
data = path['env_infos']['obs_dict']
path_horizon = path['env_infos']['time'].shape[0]
# initialize buffers
ep_t0 = time.time()
obs = []
act = []
rewards = []
env_infos = []
states = []
# initialize env to the starting position
if path:
path['actions'] = path['action']
# recover env initial state
state_t = split_tensor_dict_list(path['env_infos']['state'])
env.env.set_env_state(state_t[0])
# reset env
if output_type=='.h5':
reset_qpos = env.init_qpos.copy()
reset_qpos[:7] = data['qp_arm'][0]
reset_qpos[7] = data['qp_ee'][0]
env.reset(reset_qpos=reset_qpos)
elif output_type=='.pickle' and "state" in path['env_infos'].keys():
env.reset(reset_qpos=path['env_infos']['state']['qpos'][0], reset_qvel=path['env_infos']['state']['qvel'][0])
else:
raise TypeError("Unknown path type")
else:
env.reset()
# Rollout
o = env.get_obs()
if output_type=='.h5':
path_horizon = horizon if mode == 'record' else data['qp_arm'].shape[0]
else:
path_horizon = horizon if mode == 'record' else path['actions'].shape[0]
for i_step in range(path_horizon):
# Record Execution. Useful for kinesthetic demonstrations on hardware
if mode=='record':
a = env.action_space.sample() # dummy random sample
onext, r, d, info = env.step(a) # t ==> t+1
# Directly create the scene
elif mode=='render':
env.env.set_env_state(state_t[i_step])
env.mj_render()
# copy over from exiting path
a = path['actions'][i_step]
if (i_step+1) < path_horizon:
onext = path['observations'][i_step+1]
r = path['rewards'][i_step+1]
info = {}
# Apply actions in open loop
elif mode=='playback':
if output_type=='.h5':
a = np.concatenate([data['ctrl_arm'][i_step], data['ctrl_ee'][i_step]])
else:
a = path['actions'][i_step] if output_type=='.pickle' else path['data']['ctrl_arm']
onext, r, d, info = env.step(a) # t ==> t+1
# Recover actions from states
elif mode=='recover':
# assumes position controls
a = path['env_infos']['obs_dict']['qp'][i_step]
if noise_scale:
a = a + env.env.np_random.uniform(high=noise_scale, low=-noise_scale, size=len(a)).astype(a.dtype)
if env.normalize_act:
a = env.robot.normalize_actions(controls=a)
onext, r, d, info = env.step(a) # t ==> t+1
# populate rollout paths
ep_rwd += r
act.append(a)
rewards.append(r)
if compress_paths:
obs.append([]); o = onext # don't save obs
if 'state' in info.keys(): del info['state'] # don't save state
else:
obs.append(o); o = onext
env_infos.append(info)
# Render offscreen
if render =='offscreen':
curr_frame = env.render_camera_offscreen(
sim=env.sim,
cameras=[camera_name],
width=frame_size[0],
height=frame_size[1],
device_id=0
)
frames[i_step,:,:,:] = curr_frame[0]
print(i_step, end=', ', flush=True)
# Create rollout outputs
pbk_path = dict(observations=np.array(obs),
actions=np.array(act),
rewards=np.array(rewards),
env_infos=tensor_utils.stack_tensor_dict_list(env_infos),
states=states)
pbk_paths.append(pbk_path)
# save offscreen buffers as video
if render =='offscreen':
file_name = output_dir + 'rollout' + str(i_path) + ".mp4"
skvideo.io.vwrite(file_name, np.asarray(frames))
print("saved", file_name)
# Finish rollout
print("-- Finished playback path %d :: Total reward = %3.3f, Total time = %2.3f" % (i_path, ep_rwd, ep_t0-time.time()))
# Finish loop
print("Finished playback loop:{}".format(i_loop))
# Save paths
time_stamp = time.strftime("%Y%m%d-%H%M%S")
if save_paths:
file_name = os.path.join(output_dir, output_name + '{}_paths.pickle'.format(time_stamp))
pickle.dump(pbk_paths, open(file_name, 'wb'))
print("Saved: "+file_name)
# plot paths
if plot_paths:
file_name = os.path.join(output_dir, output_name + '{}'.format(time_stamp))
plotnsave_paths(pbk_paths, env=env, fileName_prefix=file_name)
if __name__ == '__main__':
main() | /robohive-0.3.0-py3-none-any.whl/mj_envs/utils/examine_rollout.py | 0.526586 | 0.222415 | examine_rollout.py | pypi |
import numpy as np
# For testing whether a number is close to zero
_FLOAT_EPS = np.finfo(np.float64).eps
_EPS4 = _FLOAT_EPS * 4.0
def mulQuat(qa, qb):
res = np.zeros(4)
res[0] = qa[0]*qb[0] - qa[1]*qb[1] - qa[2]*qb[2] - qa[3]*qb[3]
res[1] = qa[0]*qb[1] + qa[1]*qb[0] + qa[2]*qb[3] - qa[3]*qb[2]
res[2] = qa[0]*qb[2] - qa[1]*qb[3] + qa[2]*qb[0] + qa[3]*qb[1]
res[3] = qa[0]*qb[3] + qa[1]*qb[2] - qa[2]*qb[1] + qa[3]*qb[0]
return res
def negQuat(quat):
return np.array([quat[0], -quat[1], -quat[2], -quat[3]])
def quat2Vel(quat, dt=1):
axis = quat[1:].copy()
sin_a_2 = np.sqrt(np.sum(axis**2))
axis = axis/(sin_a_2+1e-8)
speed = 2*np.arctan2(sin_a_2, quat[0])/dt
return speed, axis
def quatDiff2Vel(quat1, quat2, dt):
neg = negQuat(quat1)
diff = mulQuat(quat2, neg)
return quat2Vel(diff, dt)
def axis_angle2quat(axis, angle):
c = np.cos(angle/2)
s = np.sin(angle/2)
return np.array([c, s*axis[0], s*axis[1], s*axis[2]])
def euler2mat(euler):
""" Convert Euler Angles to Rotation Matrix """
euler = np.asarray(euler, dtype=np.float64)
assert euler.shape[-1] == 3, "Invalid shaped euler {}".format(euler)
ai, aj, ak = -euler[..., 2], -euler[..., 1], -euler[..., 0]
si, sj, sk = np.sin(ai), np.sin(aj), np.sin(ak)
ci, cj, ck = np.cos(ai), np.cos(aj), np.cos(ak)
cc, cs = ci * ck, ci * sk
sc, ss = si * ck, si * sk
mat = np.empty(euler.shape[:-1] + (3, 3), dtype=np.float64)
mat[..., 2, 2] = cj * ck
mat[..., 2, 1] = sj * sc - cs
mat[..., 2, 0] = sj * cc + ss
mat[..., 1, 2] = cj * sk
mat[..., 1, 1] = sj * ss + cc
mat[..., 1, 0] = sj * cs - sc
mat[..., 0, 2] = -sj
mat[..., 0, 1] = cj * si
mat[..., 0, 0] = cj * ci
return mat
def euler2quat(euler):
""" Convert Euler Angles to Quaternions """
euler = np.asarray(euler, dtype=np.float64)
assert euler.shape[-1] == 3, "Invalid shape euler {}".format(euler)
ai, aj, ak = euler[..., 2] / 2, -euler[..., 1] / 2, euler[..., 0] / 2
si, sj, sk = np.sin(ai), np.sin(aj), np.sin(ak)
ci, cj, ck = np.cos(ai), np.cos(aj), np.cos(ak)
cc, cs = ci * ck, ci * sk
sc, ss = si * ck, si * sk
quat = np.empty(euler.shape[:-1] + (4,), dtype=np.float64)
quat[..., 0] = cj * cc + sj * ss
quat[..., 3] = cj * sc - sj * cs
quat[..., 2] = -(cj * ss + sj * cc)
quat[..., 1] = cj * cs - sj * sc
return quat
def mat2euler(mat):
""" Convert Rotation Matrix to Euler Angles """
mat = np.asarray(mat, dtype=np.float64)
assert mat.shape[-2:] == (3, 3), "Invalid shape matrix {}".format(mat)
cy = np.sqrt(mat[..., 2, 2] * mat[..., 2, 2] + mat[..., 1, 2] * mat[..., 1, 2])
condition = cy > _EPS4
euler = np.empty(mat.shape[:-1], dtype=np.float64)
euler[..., 2] = np.where(condition,
-np.arctan2(mat[..., 0, 1], mat[..., 0, 0]),
-np.arctan2(-mat[..., 1, 0], mat[..., 1, 1]))
euler[..., 1] = np.where(condition,
-np.arctan2(-mat[..., 0, 2], cy),
-np.arctan2(-mat[..., 0, 2], cy))
euler[..., 0] = np.where(condition,
-np.arctan2(mat[..., 1, 2], mat[..., 2, 2]),
0.0)
return euler
def mat2quat(mat):
""" Convert Rotation Matrix to Quaternion """
mat = np.asarray(mat, dtype=np.float64)
assert mat.shape[-2:] == (3, 3), "Invalid shape matrix {}".format(mat)
Qxx, Qyx, Qzx = mat[..., 0, 0], mat[..., 0, 1], mat[..., 0, 2]
Qxy, Qyy, Qzy = mat[..., 1, 0], mat[..., 1, 1], mat[..., 1, 2]
Qxz, Qyz, Qzz = mat[..., 2, 0], mat[..., 2, 1], mat[..., 2, 2]
# Fill only lower half of symmetric matrix
K = np.zeros(mat.shape[:-2] + (4, 4), dtype=np.float64)
K[..., 0, 0] = Qxx - Qyy - Qzz
K[..., 1, 0] = Qyx + Qxy
K[..., 1, 1] = Qyy - Qxx - Qzz
K[..., 2, 0] = Qzx + Qxz
K[..., 2, 1] = Qzy + Qyz
K[..., 2, 2] = Qzz - Qxx - Qyy
K[..., 3, 0] = Qyz - Qzy
K[..., 3, 1] = Qzx - Qxz
K[..., 3, 2] = Qxy - Qyx
K[..., 3, 3] = Qxx + Qyy + Qzz
K /= 3.0
# TODO: vectorize this -- probably could be made faster
q = np.empty(K.shape[:-2] + (4,))
it = np.nditer(q[..., 0], flags=['multi_index'])
while not it.finished:
# Use Hermitian eigenvectors, values for speed
vals, vecs = np.linalg.eigh(K[it.multi_index])
# Select largest eigenvector, reorder to w,x,y,z quaternion
q[it.multi_index] = vecs[[3, 0, 1, 2], np.argmax(vals)]
# Prefer quaternion with positive w
# (q * -1 corresponds to same rotation as q)
if q[it.multi_index][0] < 0:
q[it.multi_index] *= -1
it.iternext()
return q
def quat2euler(quat):
""" Convert Quaternion to Euler Angles """
return mat2euler(quat2mat(quat))
def quat2mat(quat):
""" Convert Quaternion to Euler Angles """
quat = np.asarray(quat, dtype=np.float64)
assert quat.shape[-1] == 4, "Invalid shape quat {}".format(quat)
w, x, y, z = quat[..., 0], quat[..., 1], quat[..., 2], quat[..., 3]
Nq = np.sum(quat * quat, axis=-1)
s = 2.0 / Nq
X, Y, Z = x * s, y * s, z * s
wX, wY, wZ = w * X, w * Y, w * Z
xX, xY, xZ = x * X, x * Y, x * Z
yY, yZ, zZ = y * Y, y * Z, z * Z
mat = np.empty(quat.shape[:-1] + (3, 3), dtype=np.float64)
mat[..., 0, 0] = 1.0 - (yY + zZ)
mat[..., 0, 1] = xY - wZ
mat[..., 0, 2] = xZ + wY
mat[..., 1, 0] = xY + wZ
mat[..., 1, 1] = 1.0 - (xX + zZ)
mat[..., 1, 2] = yZ - wX
mat[..., 2, 0] = xZ - wY
mat[..., 2, 1] = yZ + wX
mat[..., 2, 2] = 1.0 - (xX + yY)
return np.where((Nq > _FLOAT_EPS)[..., np.newaxis, np.newaxis], mat, np.eye(3)) | /robohive-0.3.0-py3-none-any.whl/mj_envs/utils/quat_math.py | 0.615203 | 0.782746 | quat_math.py | pypi |
import numpy as np
import os
import glob
import pickle
import h5py
import skvideo.io
from PIL import Image
import click
from mj_envs.utils.dict_utils import flatten_dict, dict_numpify
import json
# Useful to check the horizon for teleOp / Hardware experiments
def plot_horizon(paths, env, fileName_prefix=None):
import matplotlib as mpl
mpl.use('TkAgg')
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 5})
if "time" in paths[0]['env_infos']:
horizon = np.zeros(len(paths))
# plot timesteps
plt.clf()
rl_dt_ideal = env.env.frame_skip * env.env.model.opt.timestep
for i, path in enumerate(paths):
dt = path['env_infos']['time'][1:] - path['env_infos']['time'][:-1]
horizon[i] = path['env_infos']['time'][-1] - path['env_infos'][
'time'][0]
h1 = plt.plot(
path['env_infos']['time'][1:],
dt,
'-',
label=('time=%1.2f' % horizon[i]))
h1 = plt.plot(
np.array([0, max(horizon)]),
rl_dt_ideal * np.ones(2),
'g', alpha=.5,
linewidth=2.0)
plt.legend([h1[0]], ['ideal'], loc='upper right')
plt.ylabel('time step (sec)')
plt.xlabel('time (sec)')
plt.ylim(rl_dt_ideal - 0.005, rl_dt_ideal + .005)
plt.suptitle('Timestep profile for %d rollouts' % len(paths))
file_name = fileName_prefix + '_timesteps.pdf'
plt.savefig(file_name)
print("Saved:", file_name)
# plot horizon
plt.clf()
h1 = plt.plot(
np.array([0, len(paths)]),
env.horizon * rl_dt_ideal * np.ones(2),
'g',
linewidth=5.0,
label='ideal')
plt.bar(np.arange(0, len(paths)), horizon, label='observed')
plt.ylabel('rollout duration (sec)')
plt.xlabel('rollout id')
plt.legend()
plt.suptitle('Horizon distribution for %d rollouts' % len(paths))
file_name = fileName_prefix + '_horizon.pdf'
plt.savefig(file_name)
print("Saved:", file_name)
# Plot paths to a pdf file
def plot(paths, env=None, fileName_prefix=''):
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 5})
for i, path in enumerate(paths):
plt.clf()
# observations
nplt1 = len(path['env_infos']['obs_dict'])
for iplt1, key in enumerate(
sorted(path['env_infos']['obs_dict'].keys())):
ax = plt.subplot(nplt1, 2, iplt1 * 2 + 1)
if iplt1 != (nplt1 - 1):
ax.axes.xaxis.set_ticklabels([])
if iplt1 == 0:
plt.title('Observations')
ax.yaxis.tick_right()
plt.plot(
path['env_infos']['time'],
path['env_infos']['obs_dict'][key],
label=key)
# plt.ylabel(key)
plt.text(0.01, .01, key, transform=ax.transAxes)
plt.xlabel('time (sec)')
# actions
nplt2 = 3
ax = plt.subplot(nplt2, 2, 2)
ax.set_prop_cycle(None)
# h4 = plt.plot(path['env_infos']['time'], env.env.act_mid + path['actions']*env.env.act_rng, '-', label='act') # plot scaled actions
h4 = plt.plot(
path['env_infos']['time'], path['actions'], '-',
label='act') # plot normalized actions
plt.ylabel('actions')
ax.axes.xaxis.set_ticklabels([])
ax.yaxis.tick_right()
# rewards/ scores
if "score" in path['env_infos']:
ax = plt.subplot(nplt2, 2, 6)
plt.plot(
path['env_infos']['time'],
path['env_infos']['score'],
label='score')
plt.xlabel('time')
plt.ylabel('score')
ax.yaxis.tick_right()
if "rwd_dict" in path['env_infos']:
ax = plt.subplot(nplt2, 2, 4)
ax.set_prop_cycle(None)
for key in sorted(path['env_infos']['rwd_dict'].keys()):
plt.plot(
path['env_infos']['time'],
path['env_infos']['rwd_dict'][key],
label=key)
plt.legend(
loc='upper left',
fontsize='x-small',
bbox_to_anchor=(.75, 0.25),
borderaxespad=0.)
ax.axes.xaxis.set_ticklabels([])
plt.ylabel('rewards')
ax.yaxis.tick_right()
if env and hasattr(env.env, "rwd_keys_wt"):
ax = plt.subplot(nplt2, 2, 6)
ax.set_prop_cycle(None)
for key in env.env.rwd_keys_wt.keys():
plt.plot(
path['env_infos']['time'],
path['env_infos']['rwd_dict'][key]*env.env.rwd_keys_wt[key],
label=key)
plt.legend(
loc='upper left',
fontsize='x-small',
bbox_to_anchor=(.75, 0.25),
borderaxespad=0.)
ax.axes.xaxis.set_ticklabels([])
plt.ylabel('wt*rewards')
ax.yaxis.tick_right()
file_name = fileName_prefix + '_path' + str(i) + '.pdf'
plt.savefig(file_name)
print("saved ", file_name)
# Render frames/videos
def render(rollout_path, render_format:str="mp4", cam_names:list=["left"]):
# rollout_path: Absolute path of the rollout (h5/pickle)', default=None
# format: Format to save. Choice['rgb', 'mp4']
# cam: list of cameras to render. Example ['left', 'right', 'top', 'Franka_wrist']
output_dir = os.path.dirname(rollout_path)
rollout_name = os.path.split(rollout_path)[-1]
output_name, output_type = os.path.splitext(rollout_name)
file_name = os.path.join(output_dir, output_name+"_"+"-".join(cam_names))
# resolve data format
if output_type=='.h5':
paths = h5py.File(rollout_path, 'r')
elif output_type=='.pickle':
paths = pickle.load(open(rollout_path, 'rb'))
else:
raise TypeError("Unknown path format. Check file")
# Run through all trajs in the paths
for i_path, path in enumerate(paths):
if output_type=='.h5':
data = paths[path]['data']
path_horizon = data['time'].shape[0]
else:
data = path['env_infos']['obs_dict']
path_horizon = path['env_infos']['time'].shape[0]
# find full key name
data_keys = data.keys()
cam_keys = []
for cam_name in cam_names:
cam_key = None
for key in data_keys:
if cam_name in key and 'rgb' in key:
cam_key = key
break
assert cam_key != None, "Cam: {} not found in data. Available keys: [{}]".format(cam_name, data_keys)
cam_keys.append(key)
# pre allocate buffer
if i_path==0:
height, width, _ = data[cam_keys[0]][0].shape
frame_tile = np.zeros((height, width*len(cam_keys), 3), dtype=np.uint8)
if render_format == "mp4":
frames = np.zeros((path_horizon, height, width*len(cam_keys), 3), dtype=np.uint8)
# Render
print("Recovering {} frames:".format(render_format), end="")
for t in range(path_horizon):
# render single frame
for i_cam, cam_key in enumerate(cam_keys):
frame_tile[:,i_cam*width:(i_cam+1)*width, :] = data[cam_key][t]
# process single frame
if render_format == "mp4":
frames[t,:,:,:] = frame_tile
elif render_format == "rgb":
image = Image.fromarray(frame_tile)
image.save(file_name+"_{}-{}.png".format(i_path, t))
else:
raise TypeError("Unknown format")
print(t, end=",", flush=True)
# Save video
if render_format == "mp4":
file_name_mp4 = file_name+"_{}.mp4".format(i_path)
skvideo.io.vwrite(file_name_mp4, np.asarray(frames))
print("\nSaving: " + file_name_mp4)
# parse path from robohive format into robopen dataset format
def path2dataset(path:dict, config_path=None)->dict:
"""
Convery Robohive path.pickle format into robopen dataset format
"""
obs_keys = path['env_infos']['obs_dict'].keys()
dataset = {}
# Data =====
dataset['data/time'] = path['env_infos']['obs_dict']['t']
# actions
if 'actions' in path.keys():
dataset['data/ctrl_arm'] = path['actions'][:,:7]
dataset['data/ctrl_ee'] = path['actions'][:,7:]
# states
for key in ['qp_arm', 'qv_arm', 'tau_arm', 'qp_ee', 'qv_ee']:
if key in obs_keys:
dataset['data/'+key] = path['env_infos']['obs_dict'][key]
# cams
for cam in ['left', 'right', 'top', 'wrist']:
for key in obs_keys:
if cam in key:
if 'rgb:' in key:
dataset['data/rgb_'+cam] = path['env_infos']['obs_dict'][key]
elif 'd:' in key:
dataset['data/d_'+cam] = path['env_infos']['obs_dict'][key]
# user
if 'user' in obs_keys:
dataset['data/user'] = path['env_infos']['obs_dict']['user']
# Derived =====
pose_ee = []
if 'pos_ee' in obs_keys or 'rot_ee' in obs_keys:
assert ('pos_ee' in obs_keys and 'rot_ee' in obs_keys), "Both pose_ee and rot_ee are required"
dataset['derived/pose_ee'] = np.hstack([path['env_infos']['obs_dict']['pos_ee'], path['env_infos']['obs_dict']['rot_ee']])
# Config =====
if config_path:
config = json.load(open(config_path, 'rb'))
dataset['config'] = config
if 'user_cmt' in path.keys():
dataset['config/solved'] = float(path['user_cmt'])
return dataset
# Print h5 schema
def print_h5_schema(obj):
"Recursively find all keys in an h5py.Group."
keys = (obj.name,)
if isinstance(obj, h5py.Group):
for key, value in obj.items():
if isinstance(value, h5py.Group):
keys = keys + print_h5_schema(value)
else:
print("\t", "{0:35}".format(value.name), value)
keys = keys + (value.name,)
return keys
# convert paths from pickle to h5 format
def pickle2h5(rollout_path, output_dir=None, verify_output=False, h5_format:str='path', compress_path=False, config_path=None, max_paths=1e6):
# rollout_path: Single path or folder with paths
# output_dir: Directory to save the outputs. use path location if none.
# verify_output: Verify the saved file
# h5_format: robohive path / roboset h5s
# compress_path: produce smaller outputs by removing duplicate data
# config_path: add extra configs
# resolve output dirzz
if output_dir == None: # overide the default
output_dir = os.path.dirname(rollout_path)
# resolve rollout_paths
if os.path.isfile(rollout_path):
rollout_paths = [rollout_path]
else:
rollout_paths = glob.glob(os.path.join(rollout_path, '*.pickle'))
# Parse all rollouts
n_rollouts = 0
for rollout_path in rollout_paths:
# parse all paths
print('Parsing: ', rollout_path)
if n_rollouts>=max_paths:
break
paths = pickle.load(open(rollout_path, 'rb'))
rollout_name = os.path.split(rollout_path)[-1]
output_name = os.path.splitext(rollout_name)[0]
output_path = os.path.join(output_dir, output_name + '.h5')
paths_h5 = h5py.File(output_path, "w")
# Robohive path format
if h5_format == "path":
for i_path, path in enumerate(paths):
print("parsing rollout", i_path)
trial = paths_h5.create_group('Trial'+str(i_path))
# remove duplicate infos
if compress_path:
if 'observations' in path.keys():
del path['observations']
if 'state' in path['env_infos'].keys():
del path['env_infos']['state']
# flatten dict and fix resolutions
path = flatten_dict(data=path)
path = dict_numpify(path, u_res=None, i_res=np.int8, f_res=np.float16)
# add trail
for k, v in path.items():
trial.create_dataset(k, data=v, compression='gzip', compression_opts=4)
n_rollouts+=1
if n_rollouts>=max_paths:
break
# RoboPen dataset format
elif h5_format == "dataset":
for i_path, path in enumerate(paths):
print("parsing rollout", i_path)
trial = paths_h5.create_group('Trial'+str(i_path))
dataset = path2dataset(path, config_path) # convert to robopen dataset format
dataset = flatten_dict(data=dataset)
dataset = dict_numpify(dataset, u_res=None, i_res=np.int8, f_res=np.float16) # numpify + data resolutions
for k, v in dataset.items():
trial.create_dataset(k, data=v, compression='gzip', compression_opts=4)
n_rollouts+=1
if n_rollouts>=max_paths:
break
else:
TypeError('Unsupported h5_format')
# close the h5 writer for this path
print('Saving: ', output_path)
# Read back and verify a few keys
if verify_output:
with h5py.File(output_path, "r") as h5file:
print("Printing schema read from output: ", output_path)
keys = print_h5_schema(h5file)
print("Finished Processing")
DESC="""
Script to recover images and videos from the saved pickle files
- python utils/paths_utils.py -u render -p paths.pickle -rf mp4 -cn right
- python utils/paths_utils.py -u pickle2h5 -p paths.pickle -vo True -cp True -hf dataset
"""
@click.command(help=DESC)
@click.option('-u', '--util', type=click.Choice(['plot_horizon', 'plot', 'render', 'pickle2h5', 'h5schema']), help='pick utility', required=True)
@click.option('-p', '--path', type=click.Path(exists=True), help='absolute path of the rollout (h5/pickle)', default=None)
@click.option('-e', '--env', type=str, help='Env name', default=None)
@click.option('-on', '--output_name', type=str, default=None, help=('Output name'))
@click.option('-od', '--output_dir', type=str, default=None, help=('Directory to save the outputs'))
@click.option('-vo', '--verify_output', type=bool, default=False, help=('Verify the saved file'))
@click.option('-hf', '--h5_format', type=click.Choice(['path', 'dataset']), help='format to save', default="dataset")
@click.option('-cp', '--compress_path', help='compress paths. Remove obs and env_info/state keys', default=False)
@click.option('-rf', '--render_format', type=click.Choice(['rgb', 'mp4']), help='format to save', default="mp4")
@click.option('-cn', '--cam_names', multiple=True, help='camera to render. Eg: left, right, top, Franka_wrist', default=["left", "top", "right", "wrist"])
@click.option('-ac', '--add_config', help='Add extra infos to config using as json', default=None)
@click.option('-mp', '--max_paths', type=int, help='maximum number of paths to process', default=1e6)
def util_path_cli(util, path, env, output_name, output_dir, verify_output, render_format, cam_names, h5_format, compress_path, add_config, max_paths):
if util=='plot_horizon':
fileName_prefix = os.join(output_dir, output_name)
plot_horizon(path, env, fileName_prefix)
elif util=='plot':
fileName_prefix = os.join(output_dir, output_name)
plot(path, env, fileName_prefix)
elif util=='render':
render(rollout_path=path, render_format=render_format, cam_names=cam_names)
elif util=='pickle2h5':
pickle2h5(rollout_path=path, output_dir=output_dir, verify_output=verify_output, h5_format=h5_format, compress_path=compress_path, config_path=add_config, max_paths=max_paths)
elif util=='h5schema':
with h5py.File(path, "r") as h5file:
print("Printing schema read from output: ", path)
keys = print_h5_schema(h5file)
else:
raise TypeError("Unknown utility requested")
if __name__ == '__main__':
util_path_cli() | /robohive-0.3.0-py3-none-any.whl/mj_envs/utils/paths_utils.py | 0.542621 | 0.30252 | paths_utils.py | pypi |
DESC = """
TUTORIAL: Calculate min jerk trajectory using IK \n
- NOTE: written for franka_busbin_v0.xml model and might not be too generic
EXAMPLE:\n
- python tutorials/ik_minjerk_trajectory.py --sim_path envs/arms/franka/assets/franka_busbin_v0.xml\n
"""
from mujoco_py import load_model_from_path, MjSim, MjViewer
from mj_envs.utils.inverse_kinematics import qpos_from_site_pose
from mj_envs.utils.min_jerk import *
from mj_envs.utils.quat_math import euler2quat, euler2mat
import click
import numpy as np
BIN_POS = np.array([.235, 0.5, .85])
BIN_DIM = np.array([.2, .3, 0])
BIN_TOP = 0.10
ARM_nJnt = 7
@click.command(help=DESC)
@click.option('-s', '--sim_path', type=str, help='environment to load', required= True, default='envs/arms/franka/assets/franka_busbin_v0.xml')
@click.option('-h', '--horizon', type=int, help='time (s) to simulate', default=2)
def main(sim_path, horizon):
# Prep
model = load_model_from_path(sim_path)
sim = MjSim(model)
viewer = MjViewer(sim)
# setup
target_sid = sim.model.site_name2id("drop_target")
ARM_JNT0 = np.mean(sim.model.jnt_range[:ARM_nJnt], axis=-1)
while True:
# Update targets
if sim.data.time==0:
print("Resamping new target")
# sample targets
target_pos = BIN_POS + np.random.uniform(high=BIN_DIM, low=-1*BIN_DIM) + np.array([0, 0, BIN_TOP]) # add some z offfset
target_elr = np.random.uniform(high= [3.14, 0, 0], low=[3.14, 0, -3.14])
target_quat= euler2quat(target_elr)
# propagage targets to the sim for viz
sim.model.site_pos[target_sid][:] = target_pos - np.array([0, 0, BIN_TOP])
sim.model.site_quat[target_sid][:] = target_quat
# reseed the arm for IK
sim.data.qpos[:ARM_nJnt] = ARM_JNT0
sim.forward()
# IK
ik_result = qpos_from_site_pose(
physics = sim,
site_name = "end_effector",
target_pos= target_pos,
target_quat= target_quat,
inplace=False,
regularization_strength=1.0)
print("IK:: Status:{}, total steps:{}, err_norm:{}".format(ik_result.success, ik_result.steps, ik_result.err_norm))
# generate min jerk trajectory
waypoints = generate_joint_space_min_jerk(start=ARM_JNT0, goal=ik_result.qpos[:ARM_nJnt], time_to_go=horizon, dt=sim.model.opt.timestep )
# propagate waypoint in sim
waypoint_ind = int(sim.data.time/sim.model.opt.timestep)
sim.data.ctrl[:ARM_nJnt] = waypoints[waypoint_ind]['position']
sim.step()
# update time and render
sim.data.time += sim.model.opt.timestep
viewer.render()
# reset time if horizon elapsed
if sim.data.time>horizon:
sim.data.time = 0
if __name__ == '__main__':
main() | /robohive-0.3.0-py3-none-any.whl/mj_envs/tutorials/ik_minjerk_trajectory.py | 0.561936 | 0.396594 | ik_minjerk_trajectory.py | pypi |
DESC = '''
Tutorial: Demonstrate how to use the RoboHive's Robot class in isolation. This usecase is common in scenarios where the entire env definitions aren't required for experiments. In this tutorial we demonstate how we can use the RoboHive's Franka Robot in the real world using specifications available in a hardware config file.
USAGE Help:\n
$ python tutorials/examine_robot.py --sim_path PATH_robot_sim.xml --config_path PATH_robot_configurations.config\n
EXAMPLE:\n
$ python tutorials/examine_robot.py --sim_path envs/arms/franka/assets/franka_reach_v0.xml --config_path envs/arms/franka/assets/franka_reach_v0.config
'''
from mujoco_py import MjViewer
from mj_envs.robot.robot import Robot
import numpy as np
import click
@click.command(help=DESC)
@click.option('-sp', '--sim_path', type=str, help='environment to load', required= True, default='envs/arms/franka/assets/franka_reach_v0.xml')
@click.option('-cp', '--config_path', type=str, help='Config to load', required= True, default='envs/arms/franka/assets/franka_reach_v0.config')
@click.option('-ih', '--is_hardware', type=bool, help='Use on real robot hardware', default=False)
@click.option('-ja', '--jnt_amp', type=float, help='Range for random poses. 0:mean-pose. 1:full joint range', default=.15)
@click.option('-fs', '--frame_skip', type=int, help='hardware_dt = frame_skip*sim_dt', default=40)
@click.option('-th', '--traj_horizon', type=int, help='Trajectory duration in seconds', default=4)
@click.option('-lr', '--live_render', type=bool, help='Open a rendering window?', default=True)
def main(sim_path, config_path, is_hardware, jnt_amp, frame_skip, traj_horizon, live_render):
# start robots and visualizers
robot = Robot(robot_name="Robot Demo", model_path=sim_path, config_path=config_path, act_mode='pos', is_hardware=is_hardware)
sim = robot.sim
if live_render:
viewer = MjViewer(sim)
render_cbk = viewer.render
else:
render_cbk = None
# derived variables
traj_dt = frame_skip*sim.model.opt.timestep
traj_nsteps = int(traj_horizon/traj_dt)
jnt_mean = np.mean(sim.model.jnt_range, axis=1)
djnt_mean = np.zeros(sim.model.nv)
jnt_rng = 0.5*(sim.model.jnt_range[:,1]-sim.model.jnt_range[:,0])
# Goto joint targets
while True:
# Sample a new desired position
des_jnt_pos = jnt_mean + jnt_amp*np.random.uniform(high=jnt_rng, low=-jnt_rng)
act = des_jnt_pos
# execute on robot
robot.reset(reset_pos=jnt_mean, reset_vel=djnt_mean)
sensors = robot.get_sensors() # gets latest sensors and propage it in the sim
for _ in range(traj_nsteps):
robot.step(ctrl_desired=act, step_duration=traj_dt, ctrl_normalized=False, realTimeSim=True, render_cbk=render_cbk)
sensors = robot.get_sensors() # gets current sensors and propage it in the sim
# Report progress
jnt_err_0 = des_jnt_pos-jnt_mean
jnt_err_t = des_jnt_pos-sim.data.qpos
print("Mean joint pose error:: t[0]={:.3f} => t[{}]={:.3f} rad".format(np.linalg.norm(jnt_err_0), traj_nsteps, np.linalg.norm(jnt_err_t)))
if __name__ == '__main__':
main() | /robohive-0.3.0-py3-none-any.whl/mj_envs/tutorials/examine_robot.py | 0.663451 | 0.6488 | examine_robot.py | pypi |
import gym
from gym.envs.registration import register
import collections
from copy import deepcopy
from flatten_dict import flatten, unflatten
# Update base_dict using update_dict
def update_dict(base_dict:dict, update_dict:dict, override_keys:list=None):
"""
Update a dict using another dict.
INPUTS:
base_dict: dict to update
update_dict: dict with updates (merge operation with base_dict)
override_keys: base_dict keys to override. Removes the keys from base_dict and relies on update_dict for updates, if any.
"""
if override_keys:
base_dict = {key: item for key, item in base_dict.items() if key not in override_keys}
base_dict_flat = flatten(base_dict, reducer='dot', keep_empty_types=(dict,))
update_dict_flat = flatten(update_dict, reducer='dot')
update_keyval_str = ""
for key, value in update_dict_flat.items():
base_dict_flat[key] = value
update_keyval_str = "{}-{}_{}".format(update_keyval_str, key, value)
merged_dict = unflatten(base_dict_flat, splitter='dot')
return merged_dict, update_keyval_str
# Register a variant of pre-registered environment
def register_env_variant(env_id:str, variants:dict, variant_id=None, silent=False, override_keys=None):
"""
Register a variant of pre-registered environment. Very useful for hyper-parameters sweeps when small changes are required on top of an env
INPUTS:
env_id: name of the original env
variants: dict with updates we want on the original env (merge operation with base env)
variant_id: name of the varient env. Auto populated if None
silent: prints the name of the newly registered env, if True.
override_keys: base_env keys to override. Removes the keys from base_env and relies on update_dict for updates, if any.
"""
# check if the base env is registered
assert env_id in gym.envs.registry.env_specs.keys(), "ERROR: {} not found in env registry".format(env_id)
# recover the specs of the existing env
env_variant_specs = deepcopy(gym.envs.registry.env_specs[env_id])
env_variant_id = env_variant_specs.id[:-3]
# update horizon if requested
if 'max_episode_steps' in variants.keys():
env_variant_specs.max_episode_steps = variants['max_episode_steps']
env_variant_id = env_variant_id+"-hor_{}".format(env_variant_specs.max_episode_steps)
del variants['max_episode_steps']
# merge specs._kwargs with variants
env_variant_specs._kwargs, variants_update_keyval_str = update_dict(env_variant_specs._kwargs, variants, override_keys=override_keys)
env_variant_id += variants_update_keyval_str
# finalize name and register env
env_variant_specs.id = env_variant_id+env_variant_specs.id[-3:] if variant_id is None else variant_id
register(
id=env_variant_specs.id,
entry_point=env_variant_specs._entry_point,
max_episode_steps=env_variant_specs.max_episode_steps,
kwargs=env_variant_specs._kwargs
)
if not silent:
print("Registered a new env-variant:", env_variant_specs.id)
return env_variant_specs.id
# Example usage
if __name__ == '__main__':
import mj_envs
import pprint
# Register a variant
base_env_name = "kitchen-v3"
base_env_variants={
'max_episode_steps':50, # special key
'obj_goal': {"lightswitch_joint": -0.7}, # obj_goal keys will be updated
'obs_keys_wt': { # obs_keys_wt will be updated
'robot_jnt': 5.0,
'obj_goal': 5.0,
'objs_jnt': 5.0,}
}
variant_env_name = register_env_variant(env_id=base_env_name, variants=base_env_variants)
variant_overide_env_name = register_env_variant(env_id=base_env_name, variants=base_env_variants, override_keys="obs_keys_wt") # Instead of updating via merge, obs_keys_wt key will be completeley overwritten
# Test variant
print("Base-env kwargs: ")
pprint.pprint(gym.envs.registry.env_specs[base_env_name]._kwargs)
print("Env-variant kwargs: ")
pprint.pprint(gym.envs.registry.env_specs[variant_env_name]._kwargs)
print("Env-variant (with override) kwargs: ")
pprint.pprint(gym.envs.registry.env_specs[variant_overide_env_name]._kwargs)
# Test one of the newly minted env
env = gym.make(variant_env_name)
env.reset()
env.sim.render(mode='window')
for _ in range(50):
env.step(env.action_space.sample()) # take a random action
env.close() | /robohive-0.3.0-py3-none-any.whl/mj_envs/envs/env_variants.py | 0.748812 | 0.288281 | env_variants.py | pypi |
import collections
import gym
import numpy as np
from mj_envs.envs.myo.base_v0 import BaseV0
class ReachEnvV0(BaseV0):
DEFAULT_OBS_KEYS = ['qpos', 'qvel', 'tip_pos', 'reach_err']
DEFAULT_RWD_KEYS_AND_WEIGHTS = {
"reach": 1.0,
"bonus": 4.0,
"penalty": 50,
}
def __init__(self, model_path, obsd_model_path=None, seed=None, **kwargs):
# EzPickle.__init__(**locals()) is capturing the input dictionary of the init method of this class.
# In order to successfully capture all arguments we need to call gym.utils.EzPickle.__init__(**locals())
# at the leaf level, when we do inheritance like we do here.
# kwargs is needed at the top level to account for injection of __class__ keyword.
# Also see: https://github.com/openai/gym/pull/1497
gym.utils.EzPickle.__init__(self, model_path, obsd_model_path, seed, **kwargs)
# This two step construction is required for pickling to work correctly. All arguments to all __init__
# calls must be pickle friendly. Things like sim / sim_obsd are NOT pickle friendly. Therefore we
# first construct the inheritance chain, which is just __init__ calls all the way down, with env_base
# creating the sim / sim_obsd instances. Next we run through "setup" which relies on sim / sim_obsd
# created in __init__ to complete the setup.
super().__init__(model_path=model_path, obsd_model_path=obsd_model_path, seed=seed)
self._setup(**kwargs)
def _setup(self,
target_reach_range:dict,
far_th = .35,
obs_keys:list = DEFAULT_OBS_KEYS,
weighted_reward_keys:dict = DEFAULT_RWD_KEYS_AND_WEIGHTS,
**kwargs,
):
self.far_th = far_th
self.target_reach_range = target_reach_range
super()._setup(obs_keys=obs_keys,
weighted_reward_keys=weighted_reward_keys,
sites=self.target_reach_range.keys(),
**kwargs,
)
def get_obs_vec(self):
self.obs_dict['t'] = np.array([self.sim.data.time])
self.obs_dict['qpos'] = self.sim.data.qpos[:].copy()
self.obs_dict['qvel'] = self.sim.data.qvel[:].copy()*self.dt
if self.sim.model.na>0:
self.obs_dict['act'] = self.sim.data.act[:].copy()
# reach error
self.obs_dict['tip_pos'] = np.array([])
self.obs_dict['target_pos'] = np.array([])
for isite in range(len(self.tip_sids)):
self.obs_dict['tip_pos'] = np.append(self.obs_dict['tip_pos'], self.sim.data.site_xpos[self.tip_sids[isite]].copy())
self.obs_dict['target_pos'] = np.append(self.obs_dict['target_pos'], self.sim.data.site_xpos[self.target_sids[isite]].copy())
self.obs_dict['reach_err'] = np.array(self.obs_dict['target_pos'])-np.array(self.obs_dict['tip_pos'])
t, obs = self.obsdict2obsvec(self.obs_dict, self.obs_keys)
return obs
def get_obs_dict(self, sim):
obs_dict = {}
obs_dict['t'] = np.array([sim.data.time])
obs_dict['qpos'] = sim.data.qpos[:].copy()
obs_dict['qvel'] = sim.data.qvel[:].copy()*self.dt
if sim.model.na>0:
obs_dict['act'] = sim.data.act[:].copy()
# reach error
obs_dict['tip_pos'] = np.array([])
obs_dict['target_pos'] = np.array([])
for isite in range(len(self.tip_sids)):
obs_dict['tip_pos'] = np.append(obs_dict['tip_pos'], sim.data.site_xpos[self.tip_sids[isite]].copy())
obs_dict['target_pos'] = np.append(obs_dict['target_pos'], sim.data.site_xpos[self.target_sids[isite]].copy())
obs_dict['reach_err'] = np.array(obs_dict['target_pos'])-np.array(obs_dict['tip_pos'])
return obs_dict
def get_reward_dict(self, obs_dict):
reach_dist = np.linalg.norm(obs_dict['reach_err'], axis=-1)
act_mag = np.linalg.norm(self.obs_dict['act'], axis=-1)/self.sim.model.na if self.sim.model.na !=0 else 0
far_th = self.far_th*len(self.tip_sids) if np.squeeze(obs_dict['t'])>2*self.dt else np.inf
near_th = len(self.tip_sids)*.0125
rwd_dict = collections.OrderedDict((
# Optional Keys
('reach', -1.*reach_dist),
('bonus', 1.*(reach_dist<2*near_th) + 1.*(reach_dist<near_th)),
('act_reg', -1.*act_mag),
('penalty', -1.*(reach_dist>far_th)),
# Must keys
('sparse', -1.*reach_dist),
('solved', reach_dist<near_th),
('done', reach_dist > far_th),
))
rwd_dict['dense'] = np.sum([wt*rwd_dict[key] for key, wt in self.rwd_keys_wt.items()], axis=0)
return rwd_dict
# generate a valid target
def generate_target_pose(self):
for site, span in self.target_reach_range.items():
sid = self.sim.model.site_name2id(site+'_target')
self.sim.model.site_pos[sid] = self.np_random.uniform(low=span[0], high=span[1])
self.sim.forward()
def reset(self):
self.generate_target_pose()
self.robot.sync_sims(self.sim, self.sim_obsd)
obs = super().reset()
return obs | /robohive-0.3.0-py3-none-any.whl/mj_envs/envs/myo/reach_v0.py | 0.516108 | 0.307715 | reach_v0.py | pypi |
import collections
import numpy as np
import gym
from mj_envs.envs.myo.base_v0 import BaseV0
from mj_envs.envs.env_base import get_sim
from mj_envs.utils.quat_math import euler2quat
from mj_envs.utils.vector_math import calculate_cosine
from os import sendfile
class PenTwirlFixedEnvV0(BaseV0):
DEFAULT_OBS_KEYS = ['hand_jnt', 'obj_pos', 'obj_vel', 'obj_rot', 'obj_des_rot', 'obj_err_pos', 'obj_err_rot']
DEFAULT_RWD_KEYS_AND_WEIGHTS= {
'pos_align':1.0,
'rot_align':1.0,
'act_reg':5.,
'drop':5.0,
'bonus':10.0
}
def __init__(self, model_path, obsd_model_path=None, seed=None, **kwargs):
# EzPickle.__init__(**locals()) is capturing the input dictionary of the init method of this class.
# In order to successfully capture all arguments we need to call gym.utils.EzPickle.__init__(**locals())
# at the leaf level, when we do inheritance like we do here.
# kwargs is needed at the top level to account for injection of __class__ keyword.
# Also see: https://github.com/openai/gym/pull/1497
gym.utils.EzPickle.__init__(self, model_path, obsd_model_path, seed, **kwargs)
# This two step construction is required for pickling to work correctly. All arguments to all __init__
# calls must be pickle friendly. Things like sim / sim_obsd are NOT pickle friendly. Therefore we
# first construct the inheritance chain, which is just __init__ calls all the way down, with env_base
# creating the sim / sim_obsd instances. Next we run through "setup" which relies on sim / sim_obsd
# created in __init__ to complete the setup.
super().__init__(model_path=model_path, obsd_model_path=obsd_model_path, seed=seed)
self._setup(**kwargs)
def _setup(self,
obs_keys:list = DEFAULT_OBS_KEYS,
weighted_reward_keys:list = DEFAULT_RWD_KEYS_AND_WEIGHTS,
**kwargs,
):
self.target_obj_bid = self.sim.model.body_name2id("target")
self.S_grasp_sid = self.sim.model.site_name2id('S_grasp')
self.obj_bid = self.sim.model.body_name2id('Object')
self.eps_ball_sid = self.sim.model.site_name2id('eps_ball')
self.obj_t_sid = self.sim.model.site_name2id('object_top')
self.obj_b_sid = self.sim.model.site_name2id('object_bottom')
self.tar_t_sid = self.sim.model.site_name2id('target_top')
self.tar_b_sid = self.sim.model.site_name2id('target_bottom')
self.pen_length = np.linalg.norm(self.sim.model.site_pos[self.obj_t_sid] - self.sim.model.site_pos[self.obj_b_sid])
self.tar_length = np.linalg.norm(self.sim.model.site_pos[self.tar_t_sid] - self.sim.model.site_pos[self.tar_b_sid])
super()._setup(obs_keys=obs_keys,
weighted_reward_keys=weighted_reward_keys,
**kwargs,
)
self.init_qpos[:-6] *= 0 # Use fully open as init pos
self.init_qpos[0] = -1.5 # place palm up
def get_obs_vec(self):
# qpos for hand, xpos for obj, xpos for target
self.obs_dict['t'] = np.array([self.sim.data.time])
self.obs_dict['hand_jnt'] = self.sim.data.qpos[:-6].copy()
self.obs_dict['obj_pos'] = self.sim.data.body_xpos[self.obj_bid].copy()
self.obs_dict['obj_des_pos'] = self.sim.data.site_xpos[self.eps_ball_sid].ravel()
self.obs_dict['obj_vel'] = self.sim.data.qvel[-6:].copy()*self.dt
self.obs_dict['obj_rot'] = (self.sim.data.site_xpos[self.obj_t_sid] - self.sim.data.site_xpos[self.obj_b_sid])/self.pen_length
self.obs_dict['obj_des_rot'] = (self.sim.data.site_xpos[self.tar_t_sid] - self.sim.data.site_xpos[self.tar_b_sid])/self.tar_length
self.obs_dict['obj_err_pos'] = self.obs_dict['obj_pos']-self.obs_dict['obj_des_pos']
self.obs_dict['obj_err_rot'] = self.obs_dict['obj_rot']-self.obs_dict['obj_des_rot']
if self.sim.model.na>0:
self.obs_dict['act'] = self.sim.data.act[:].copy()
t, obs = self.obsdict2obsvec(self.obs_dict, self.obs_keys)
return obs
def get_obs_dict(self, sim):
obs_dict = {}
# qpos for hand, xpos for obj, xpos for target
obs_dict['t'] = np.array([sim.data.time])
obs_dict['hand_jnt'] = sim.data.qpos[:-6].copy()
obs_dict['obj_pos'] = sim.data.body_xpos[self.obj_bid].copy()
obs_dict['obj_des_pos'] = sim.data.site_xpos[self.eps_ball_sid].ravel()
obs_dict['obj_vel'] = sim.data.qvel[-6:].copy()*self.dt
obs_dict['obj_rot'] = (sim.data.site_xpos[self.obj_t_sid] - sim.data.site_xpos[self.obj_b_sid])/self.pen_length
obs_dict['obj_des_rot'] = (sim.data.site_xpos[self.tar_t_sid] - sim.data.site_xpos[self.tar_b_sid])/self.tar_length
obs_dict['obj_err_pos'] = obs_dict['obj_pos']-obs_dict['obj_des_pos']
obs_dict['obj_err_rot'] = obs_dict['obj_rot']-obs_dict['obj_des_rot']
if sim.model.na>0:
obs_dict['act'] = sim.data.act[:].copy()
return obs_dict
def get_reward_dict(self, obs_dict):
pos_err = obs_dict['obj_err_pos']
pos_align = np.linalg.norm(pos_err, axis=-1)
rot_align = calculate_cosine(obs_dict['obj_rot'], obs_dict['obj_des_rot'])
# dropped = obs_dict['obj_pos'][:,:,2] < 0.075 if obs_dict['obj_pos'].ndim==3 else obs_dict['obj_pos'][2] < 0.075
dropped = (pos_align > 0.075)
act_mag = np.linalg.norm(self.obs_dict['act'], axis=-1)/self.sim.model.na if self.sim.model.na !=0 else 0
rwd_dict = collections.OrderedDict((
# Optional Keys
('pos_align', -1.*pos_align),
('rot_align', rot_align),
('act_reg', -1.*act_mag),
('drop', -1.*dropped),
('bonus', 1.*(rot_align > 0.9)*(pos_align<0.075) + 5.0*(rot_align > 0.95)*(pos_align<0.075) ),
# Must keys
('sparse', -1.0*pos_align+rot_align),
('solved', (rot_align > 0.95)*(~dropped)),
('done', dropped),
))
rwd_dict['dense'] = np.sum([wt*rwd_dict[key] for key, wt in self.rwd_keys_wt.items()], axis=0)
return rwd_dict
class PenTwirlRandomEnvV0(PenTwirlFixedEnvV0):
def reset(self):
# randomize target
desired_orien = np.zeros(3)
desired_orien[0] = self.np_random.uniform(low=-1, high=1)
desired_orien[1] = self.np_random.uniform(low=-1, high=1)
self.sim.model.body_quat[self.target_obj_bid] = euler2quat(desired_orien)
self.robot.sync_sims(self.sim, self.sim_obsd)
obs = super().reset()
return obs | /robohive-0.3.0-py3-none-any.whl/mj_envs/envs/myo/pen_v0.py | 0.587588 | 0.270118 | pen_v0.py | pypi |
import collections
import numpy as np
import gym
from mj_envs.envs.myo.base_v0 import BaseV0
from mj_envs.utils.quat_math import mat2euler, euler2quat
class ReorientEnvV0(BaseV0):
DEFAULT_OBS_KEYS = ['hand_qpos', 'hand_qvel', 'obj_pos', 'goal_pos', 'pos_err', 'obj_rot', 'goal_rot', 'rot_err']
DEFAULT_RWD_KEYS_AND_WEIGHTS = {
"pos_dist": 100.0,
"rot_dist": 1.0,
"bonus": 4.0,
"act_reg": 1,
"penalty": 10,
}
def __init__(self, model_path, obsd_model_path=None, seed=None, **kwargs):
# Two step construction (init+setup) is required for pickling to work correctly.
gym.utils.EzPickle.__init__(self, model_path, obsd_model_path, seed, **kwargs)
super().__init__(model_path=model_path, obsd_model_path=obsd_model_path, seed=seed)
self._setup(**kwargs)
def _setup(self,
obs_keys:list = DEFAULT_OBS_KEYS,
weighted_reward_keys:list = DEFAULT_RWD_KEYS_AND_WEIGHTS,
goal_pos = (0.0, 0.0), # goal position range (relative to initial pos)
goal_rot = (.785, .785), # goal rotation range (relative to initial rot)
pos_th = .025, # position error threshold
rot_th = 0.262, # rotation error threshold
drop_th = .200, # drop height threshold
**kwargs,
):
self.object_sid = self.sim.model.site_name2id("object_o")
self.goal_sid = self.sim.model.site_name2id("target_o")
self.success_indicator_sid = self.sim.model.site_name2id("target_ball")
self.goal_bid = self.sim.model.body_name2id("target")
self.goal_init_pos = self.sim.data.site_xpos[self.goal_sid].copy()
self.goal_obj_offset = self.sim.data.site_xpos[self.goal_sid]-self.sim.data.site_xpos[self.object_sid] # visualization offset between target and object
self.goal_pos = goal_pos
self.goal_rot = goal_rot
self.pos_th = pos_th
self.rot_th = rot_th
self.drop_th = drop_th
super()._setup(obs_keys=obs_keys,
weighted_reward_keys=weighted_reward_keys,
**kwargs,
)
self.init_qpos[:-7] *= 0 # Use fully open as init pos
self.init_qpos[0] = -1.5 # Palm up
def get_obs_dict(self, sim):
obs_dict = {}
obs_dict['t'] = np.array([sim.data.time])
obs_dict['hand_qpos'] = sim.data.qpos[:-7].copy()
obs_dict['hand_qvel'] = sim.data.qvel[:-6].copy()*self.dt
obs_dict['obj_pos'] = sim.data.site_xpos[self.object_sid]
obs_dict['goal_pos'] = sim.data.site_xpos[self.goal_sid]
obs_dict['pos_err'] = obs_dict['goal_pos'] - obs_dict['obj_pos'] - self.goal_obj_offset # correct for visualization offset between target and object
obs_dict['obj_rot'] = mat2euler(np.reshape(sim.data.site_xmat[self.object_sid],(3,3)))
obs_dict['goal_rot'] = mat2euler(np.reshape(sim.data.site_xmat[self.goal_sid],(3,3)))
obs_dict['rot_err'] = obs_dict['goal_rot'] - obs_dict['obj_rot']
if sim.model.na>0:
obs_dict['act'] = sim.data.act[:].copy()
return obs_dict
def get_reward_dict(self, obs_dict):
pos_dist = np.abs(np.linalg.norm(self.obs_dict['pos_err'], axis=-1))
rot_dist = np.abs(np.linalg.norm(self.obs_dict['rot_err'], axis=-1))
act_mag = np.linalg.norm(self.obs_dict['act'], axis=-1)/self.sim.model.na if self.sim.model.na !=0 else 0
drop = pos_dist > self.drop_th
rwd_dict = collections.OrderedDict((
# Optional Keys
('pos_dist', -1.*pos_dist),
('rot_dist', -1.*rot_dist),
('bonus', 1.*(pos_dist<2*self.pos_th) + 1.*(pos_dist<self.pos_th)),
('act_reg', -1.*act_mag),
('penalty', -1.*drop),
# Must keys
('sparse', -rot_dist-10.0*pos_dist),
('solved', (pos_dist<self.pos_th) and (rot_dist<self.rot_th) and (not drop) ),
('done', drop),
))
rwd_dict['dense'] = np.sum([wt*rwd_dict[key] for key, wt in self.rwd_keys_wt.items()], axis=0)
# Sucess Indicator
self.sim.model.site_rgba[self.success_indicator_sid, :2] = np.array([0, 2]) if rwd_dict['solved'] else np.array([2, 0])
return rwd_dict
def reset(self):
self.sim.model.body_pos[self.goal_bid] = self.goal_init_pos + \
self.np_random.uniform( high=self.goal_pos[1], low=self.goal_pos[0], size=3)
self.sim.model.body_quat[self.goal_bid] = \
euler2quat(self.np_random.uniform( high=self.goal_rot[1], low=self.goal_rot[0], size=3))
obs = super().reset()
return obs | /robohive-0.3.0-py3-none-any.whl/mj_envs/envs/myo/reorient_v0.py | 0.638159 | 0.398319 | reorient_v0.py | pypi |
import collections
import numpy as np
import gym
from mj_envs.envs.myo.base_v0 import BaseV0
class KeyTurnEnvV0(BaseV0):
DEFAULT_OBS_KEYS = ['hand_qpos', 'hand_qvel', 'key_qpos', 'key_qvel', 'IFtip_approach', 'THtip_approach']
DEFAULT_RWD_KEYS_AND_WEIGHTS= {
'key_turn':1.0,
'IFtip_approach':10.0,
'THtip_approach':10.0,
'act_reg':1.0,
'bonus':4.0,
'penalty':25.0
}
def __init__(self, model_path, obsd_model_path=None, seed=None, **kwargs):
# EzPickle.__init__(**locals()) is capturing the input dictionary of the init method of this class.
# In order to successfully capture all arguments we need to call gym.utils.EzPickle.__init__(**locals())
# at the leaf level, when we do inheritance like we do here.
# kwargs is needed at the top level to account for injection of __class__ keyword.
# Also see: https://github.com/openai/gym/pull/1497
gym.utils.EzPickle.__init__(self, model_path, obsd_model_path, seed, **kwargs)
# This two step construction is required for pickling to work correctly. All arguments to all __init__
# calls must be pickle friendly. Things like sim / sim_obsd are NOT pickle friendly. Therefore we
# first construct the inheritance chain, which is just __init__ calls all the way down, with env_base
# creating the sim / sim_obsd instances. Next we run through "setup" which relies on sim / sim_obsd
# created in __init__ to complete the setup.
super().__init__(model_path=model_path, obsd_model_path=obsd_model_path, seed=seed)
self._setup(**kwargs)
def _setup(self,
goal_th:float=3.14,
obs_keys:list = DEFAULT_OBS_KEYS,
weighted_reward_keys:list = DEFAULT_RWD_KEYS_AND_WEIGHTS,
key_init_range:tuple=(0,0),
**kwargs,
):
self.goal_th = goal_th
self.keyhead_sid = self.sim.model.site_name2id("keyhead")
self.IF_sid = self.sim.model.site_name2id("IFtip")
self.TH_sid = self.sim.model.site_name2id("THtip")
self.key_init_range = key_init_range
self.key_init_pos = self.sim.data.site_xpos[self.keyhead_sid].copy()
super()._setup(obs_keys=obs_keys,
weighted_reward_keys=weighted_reward_keys,
**kwargs,
)
self.init_qpos[:-1] *= 0 # Use fully open as init pos
def get_obs_vec(self):
self.obs_dict['t'] = np.array([self.sim.data.time])
self.obs_dict['hand_qpos'] = self.sim.data.qpos[:-1].copy()
self.obs_dict['hand_qvel'] = self.sim.data.qvel[:-1].copy()*self.dt
self.obs_dict['key_qpos'] = np.array([self.sim.data.qpos[-1]])
self.obs_dict['key_qvel'] = np.array([self.sim.data.qvel[-1]])*self.dt
self.obs_dict['IFtip_approach'] = self.sim.data.site_xpos[self.keyhead_sid]-self.sim.data.site_xpos[self.IF_sid]
self.obs_dict['THtip_approach'] = self.sim.data.site_xpos[self.keyhead_sid]-self.sim.data.site_xpos[self.TH_sid]
if self.sim.model.na>0:
self.obs_dict['act'] = self.sim.data.act[:].copy()
t, obs = self.obsdict2obsvec(self.obs_dict, self.obs_keys)
return obs
def get_obs_dict(self, sim):
obs_dict = {}
obs_dict['t'] = np.array([sim.data.time])
obs_dict['hand_qpos'] = sim.data.qpos[:-1].copy()
obs_dict['hand_qvel'] = sim.data.qvel[:-1].copy()*self.dt
obs_dict['key_qpos'] = np.array([sim.data.qpos[-1]])
obs_dict['key_qvel'] = np.array([sim.data.qvel[-1]])*self.dt
obs_dict['IFtip_approach'] = sim.data.site_xpos[self.keyhead_sid]-sim.data.site_xpos[self.IF_sid]
obs_dict['THtip_approach'] = sim.data.site_xpos[self.keyhead_sid]-sim.data.site_xpos[self.TH_sid]
if sim.model.na>0:
obs_dict['act'] = sim.data.act[:].copy()
return obs_dict
def get_reward_dict(self, obs_dict):
IF_approach_dist = np.abs(np.linalg.norm(self.obs_dict['IFtip_approach'], axis=-1)-0.030)
TH_approach_dist = np.abs(np.linalg.norm(self.obs_dict['THtip_approach'], axis=-1)-0.030)
key_pos = obs_dict['key_qpos'][:,:,0] if obs_dict['key_qpos'].ndim==3 else obs_dict['key_qpos'][0]
act_mag = np.linalg.norm(self.obs_dict['act'], axis=-1)/self.sim.model.na if self.sim.model.na !=0 else 0
far_th = 0.1
rwd_dict = collections.OrderedDict((
# Optional Keys
('key_turn', key_pos),
('IFtip_approach', -1.*IF_approach_dist),
('THtip_approach', -1.*TH_approach_dist),
('act_reg', -1.*act_mag),
('bonus', 1.*(key_pos>np.pi/2) + 1.*(key_pos>np.pi)),
('penalty', -1.*(IF_approach_dist>far_th/2)-1.*(TH_approach_dist>far_th/2) ),
# Must keys
('sparse', key_pos),
('solved', obs_dict['key_qpos']>self.goal_th),
('done', (IF_approach_dist>far_th) or (TH_approach_dist>far_th)),
))
rwd_dict['dense'] = np.sum([wt*rwd_dict[key] for key, wt in self.rwd_keys_wt.items()], axis=0)
return rwd_dict
def reset(self, reset_qpos=None, reset_qvel=None):
qpos = self.init_qpos.copy() if reset_qpos is None else reset_qpos
qvel = self.init_qvel.copy() if reset_qvel is None else reset_qvel
qpos[-1] = self.np_random.uniform(low=self.key_init_range[0], high=self.key_init_range[1])
if self.key_init_range[0]!=self.key_init_range[1]: # randomEnv
self.sim.model.body_pos[-1] = self.key_init_pos+self.np_random.uniform(low=np.array([-0.01, -0.01, -.01]), high=np.array([0.01, 0.01, 0.01]))
self.robot.reset(qpos, qvel)
return self.get_obs() | /robohive-0.3.0-py3-none-any.whl/mj_envs/envs/myo/key_turn_v0.py | 0.659295 | 0.24701 | key_turn_v0.py | pypi |
import collections
import numpy as np
import gym
from mj_envs.envs.myo.base_v0 import BaseV0
class ObjHoldFixedEnvV0(BaseV0):
DEFAULT_OBS_KEYS = ['hand_qpos', 'hand_qvel', 'obj_pos', 'obj_err']
DEFAULT_RWD_KEYS_AND_WEIGHTS = {
"goal_dist": 100.0,
"bonus": 4.0,
"penalty": 10,
}
def __init__(self, model_path, obsd_model_path=None, seed=None, **kwargs):
# EzPickle.__init__(**locals()) is capturing the input dictionary of the init method of this class.
# In order to successfully capture all arguments we need to call gym.utils.EzPickle.__init__(**locals())
# at the leaf level, when we do inheritance like we do here.
# kwargs is needed at the top level to account for injection of __class__ keyword.
# Also see: https://github.com/openai/gym/pull/1497
gym.utils.EzPickle.__init__(self, model_path, obsd_model_path, seed, **kwargs)
# This two step construction is required for pickling to work correctly. All arguments to all __init__
# calls must be pickle friendly. Things like sim / sim_obsd are NOT pickle friendly. Therefore we
# first construct the inheritance chain, which is just __init__ calls all the way down, with env_base
# creating the sim / sim_obsd instances. Next we run through "setup" which relies on sim / sim_obsd
# created in __init__ to complete the setup.
super().__init__(model_path=model_path, obsd_model_path=obsd_model_path, seed=seed)
self._setup(**kwargs)
def _setup(self,
obs_keys:list = DEFAULT_OBS_KEYS,
weighted_reward_keys:list = DEFAULT_RWD_KEYS_AND_WEIGHTS,
**kwargs,
):
self.object_sid = self.sim.model.site_name2id("object")
self.goal_sid = self.sim.model.site_name2id("goal")
self.object_init_pos = self.sim.data.site_xpos[self.object_sid].copy()
super()._setup(obs_keys=obs_keys,
weighted_reward_keys=weighted_reward_keys,
**kwargs,
)
self.init_qpos[:-7] *= 0 # Use fully open as init pos
self.init_qpos[0] = -1.5 # place palm up
def get_obs_vec(self):
self.obs_dict['t'] = np.array([self.sim.data.time])
self.obs_dict['hand_qpos'] = self.sim.data.qpos[:-7].copy()
self.obs_dict['hand_qvel'] = self.sim.data.qvel[:-6].copy()*self.dt
self.obs_dict['obj_pos'] = self.sim.data.site_xpos[self.object_sid]
self.obs_dict['obj_err'] = self.sim.data.site_xpos[self.goal_sid] - self.sim.data.site_xpos[self.object_sid]
if self.sim.model.na>0:
self.obs_dict['act'] = self.sim.data.act[:].copy()
t, obs = self.obsdict2obsvec(self.obs_dict, self.obs_keys)
return obs
def get_obs_dict(self, sim):
obs_dict = {}
obs_dict['t'] = np.array([sim.data.time])
obs_dict['hand_qpos'] = sim.data.qpos[:-7].copy()
obs_dict['hand_qvel'] = sim.data.qvel[:-6].copy()*self.dt
obs_dict['obj_pos'] = sim.data.site_xpos[self.object_sid]
obs_dict['obj_err'] = sim.data.site_xpos[self.goal_sid] - sim.data.site_xpos[self.object_sid]
if sim.model.na>0:
obs_dict['act'] = sim.data.act[:].copy()
return obs_dict
def get_reward_dict(self, obs_dict):
goal_dist = np.abs(np.linalg.norm(self.obs_dict['obj_err'], axis=-1)) #-0.040)
act_mag = np.linalg.norm(self.obs_dict['act'], axis=-1)/self.sim.model.na if self.sim.model.na !=0 else 0
gaol_th = .010
drop = goal_dist > 0.300
rwd_dict = collections.OrderedDict((
# Optional Keys
('goal_dist', -1.*goal_dist),
('bonus', 1.*(goal_dist<2*gaol_th) + 1.*(goal_dist<gaol_th)),
('act_reg', -1.*act_mag),
('penalty', -1.*drop),
# Must keys
('sparse', -goal_dist),
('solved', goal_dist<gaol_th),
('done', drop),
))
rwd_dict['dense'] = np.sum([wt*rwd_dict[key] for key, wt in self.rwd_keys_wt.items()], axis=0)
return rwd_dict
class ObjHoldRandomEnvV0(ObjHoldFixedEnvV0):
def reset(self):
# randomize target pos
self.sim.model.site_pos[self.goal_sid] = self.object_init_pos + self.np_random.uniform(high=np.array([0.030, 0.030, 0.030]), low=np.array([-.030, -.030, -.030]))
# randomize object
size = self.np_random.uniform(high=np.array([0.030, 0.030, 0.030]), low=np.array([.020, .020, .020]))
self.sim.model.geom_size[-1] = size
self.sim.model.site_size[self.goal_sid] = size
self.robot.sync_sims(self.sim, self.sim_obsd)
obs = super().reset()
return obs | /robohive-0.3.0-py3-none-any.whl/mj_envs/envs/myo/obj_hold_v0.py | 0.61855 | 0.305639 | obj_hold_v0.py | pypi |
import collections
import gym
import numpy as np
from mj_envs.envs.myo.base_v0 import BaseV0
from mj_envs.envs.env_base import get_sim
class PoseEnvV0(BaseV0):
DEFAULT_OBS_KEYS = ['qpos', 'qvel', 'pose_err']
DEFAULT_RWD_KEYS_AND_WEIGHTS = {
"pose": 1.0,
"bonus": 4.0,
"act_reg": 1.0,
"penalty": 50,
}
def __init__(self, model_path, obsd_model_path=None, seed=None, **kwargs):
# EzPickle.__init__(**locals()) is capturing the input dictionary of the init method of this class.
# In order to successfully capture all arguments we need to call gym.utils.EzPickle.__init__(**locals())
# at the leaf level, when we do inheritance like we do here.
# kwargs is needed at the top level to account for injection of __class__ keyword.
# Also see: https://github.com/openai/gym/pull/1497
gym.utils.EzPickle.__init__(self, model_path, obsd_model_path, seed, **kwargs)
# This two step construction is required for pickling to work correctly. All arguments to all __init__
# calls must be pickle friendly. Things like sim / sim_obsd are NOT pickle friendly. Therefore we
# first construct the inheritance chain, which is just __init__ calls all the way down, with env_base
# creating the sim / sim_obsd instances. Next we run through "setup" which relies on sim / sim_obsd
# created in __init__ to complete the setup.
super().__init__(model_path=model_path, obsd_model_path=obsd_model_path, seed=seed)
self._setup(**kwargs)
def _setup(self,
viz_site_targets:tuple = None, # site to use for targets visualization []
target_jnt_range:dict = None, # joint ranges as tuples {name:(min, max)}_nq
target_jnt_value:list = None, # desired joint vector [des_qpos]_nq
reset_type = "init", # none; init; random
target_type = "generate", # generate; switch; fixed
obs_keys:list = DEFAULT_OBS_KEYS,
weighted_reward_keys:dict = DEFAULT_RWD_KEYS_AND_WEIGHTS,
pose_thd = 0.35,
weight_bodyname = None,
weight_range = None,
**kwargs,
):
self.reset_type = reset_type
self.target_type = target_type
self.pose_thd = pose_thd
self.weight_bodyname = weight_bodyname
self.weight_range = weight_range
# resolve joint demands
if target_jnt_range:
self.target_jnt_ids = []
self.target_jnt_range = []
for jnt_name, jnt_range in target_jnt_range.items():
self.target_jnt_ids.append(self.sim.model.joint_name2id(jnt_name))
self.target_jnt_range.append(jnt_range)
self.target_jnt_range = np.array(self.target_jnt_range)
self.target_jnt_value = np.mean(self.target_jnt_range, axis=1) # pseudo targets for init
else:
self.target_jnt_value = target_jnt_value
super()._setup(obs_keys=obs_keys,
weighted_reward_keys=weighted_reward_keys,
sites=viz_site_targets,
**kwargs,
)
def get_obs_vec(self):
self.obs_dict['t'] = np.array([self.sim.data.time])
self.obs_dict['qpos'] = self.sim.data.qpos[:].copy()
self.obs_dict['qvel'] = self.sim.data.qvel[:].copy()*self.dt
if self.sim.model.na>0:
self.obs_dict['act'] = self.sim.data.act[:].copy()
self.obs_dict['pose_err'] = self.target_jnt_value - self.obs_dict['qpos']
t, obs = self.obsdict2obsvec(self.obs_dict, self.obs_keys)
return obs
def get_obs_dict(self, sim):
obs_dict = {}
obs_dict['t'] = np.array([sim.data.time])
obs_dict['qpos'] = sim.data.qpos[:].copy()
obs_dict['qvel'] = sim.data.qvel[:].copy()*self.dt
if sim.model.na>0:
obs_dict['act'] = sim.data.act[:].copy()
obs_dict['pose_err'] = self.target_jnt_value - obs_dict['qpos']
return obs_dict
def get_reward_dict(self, obs_dict):
pose_dist = np.linalg.norm(obs_dict['pose_err'], axis=-1)
act_mag = np.linalg.norm(self.obs_dict['act'], axis=-1)/self.sim.model.na if self.sim.model.na !=0 else 0
far_th = 4*np.pi/2
rwd_dict = collections.OrderedDict((
# Optional Keys
('pose', -1.*pose_dist),
('bonus', 1.*(pose_dist<self.pose_thd) + 1.*(pose_dist<1.5*self.pose_thd)),
('penalty', -1.*(pose_dist>far_th)),
('act_reg', -1.*act_mag),
# Must keys
('sparse', -1.0*pose_dist),
('solved', pose_dist<self.pose_thd),
('done', pose_dist>far_th),
))
rwd_dict['dense'] = np.sum([wt*rwd_dict[key] for key, wt in self.rwd_keys_wt.items()], axis=0)
return rwd_dict
# generate a valid target pose
def get_target_pose(self):
if self.target_type == "fixed":
return self.target_jnt_value
elif self.target_type == "generate":
return self.np_random.uniform(low=self.target_jnt_range[:,0], high=self.target_jnt_range[:,1])
else:
raise TypeError("Unknown Target type: {}".format(self.target_type))
# update sim with a new target pose
def update_target(self, restore_sim=False):
if restore_sim:
qpos = self.sim.data.qpos[:].copy()
qvel = self.sim.data.qvel[:].copy()
# generate targets
self.target_jnt_value = self.get_target_pose()
# update finger-tip target viz
self.sim.data.qpos[:] = self.target_jnt_value.copy()
self.sim.forward()
for isite in range(len(self.tip_sids)):
self.sim.model.site_pos[self.target_sids[isite]] = self.sim.data.site_xpos[self.tip_sids[isite]].copy()
if restore_sim:
self.sim.data.qpos[:] = qpos[:]
self.sim.data.qvel[:] = qvel[:]
self.sim.forward()
# reset_type = none; init; random
# target_type = generate; switch
def reset(self):
# udpate wegith
if self.weight_bodyname is not None:
bid = self.sim.model.body_name2id(self.weight_bodyname)
gid = self.sim.model.body_geomadr[bid]
weight = self.np_random.uniform(low=self.weight_range[0], high=self.weight_range[1])
self.sim.model.body_mass[bid] = weight
self.sim_obsd.model.body_mass[bid] = weight
# self.sim_obsd.model.geom_size[gid] = self.sim.model.geom_size[gid] * weight/10
self.sim.model.geom_size[gid][0] = 0.01 + 2.5*weight/100
# self.sim_obsd.model.geom_size[gid][0] = weight/10
# update target
if self.target_type == "generate":
# use target_jnt_range to generate targets
self.update_target(restore_sim=True)
elif self.target_type == "switch":
# switch between given target choices
# TODO: Remove hard-coded numbers
if self.target_jnt_value[0] != -0.145125:
self.target_jnt_value = np.array([-0.145125, 0.92524251, 1.08978337, 1.39425813, -0.78286243, -0.77179383, -0.15042819, 0.64445902])
self.sim.model.site_pos[self.target_sids[0]] = np.array([-0.11000209, -0.01753063, 0.20817679])
self.sim.model.site_pos[self.target_sids[1]] = np.array([-0.1825131, 0.07417956, 0.11407256])
self.sim.forward()
else:
self.target_jnt_value = np.array([-0.12756566, 0.06741454, 1.51352705, 0.91777418, -0.63884237, 0.22452487, 0.42103326, 0.4139465])
self.sim.model.site_pos[self.target_sids[0]] = np.array([-0.11647777, -0.05180014, 0.19044284])
self.sim.model.site_pos[self.target_sids[1]] = np.array([-0.17728016, 0.01489491, 0.17953786])
elif self.target_type == "fixed":
self.update_target(restore_sim=True)
else:
print("{} Target Type not found ".format(self.target_type))
# update init state
if self.reset_type is None or self.reset_type == "none":
# no reset; use last state
obs = self.get_obs()
elif self.reset_type == "init":
# reset to init state
obs = super().reset()
elif self.reset_type == "random":
# reset to random state
jnt_init = self.np_random.uniform(high=self.sim.model.jnt_range[:,1], low=self.sim.model.jnt_range[:,0])
obs = super().reset(reset_qpos=jnt_init)
else:
print("Reset Type not found")
return obs | /robohive-0.3.0-py3-none-any.whl/mj_envs/envs/myo/pose_v0.py | 0.68721 | 0.24899 | pose_v0.py | pypi |
import collections
from traceback import print_tb
import numpy as np
import gym
from mj_envs.envs.myo.base_v0 import BaseV0
from mj_envs.utils.quat_math import mat2euler, euler2quat
class ReorientEnvV0(BaseV0):
DEFAULT_OBS_KEYS = ['hand_qpos', 'hand_qvel', 'obj_pos', 'goal_pos', 'pos_err', 'obj_rot', 'goal_rot', 'rot_err']
DEFAULT_RWD_KEYS_AND_WEIGHTS = {
"pos_dist": 100.0,
"rot_dist": 1.0,
}
def __init__(self, model_path, obsd_model_path=None, seed=None, **kwargs):
# Two step construction (init+setup) is required for pickling to work correctly.
gym.utils.EzPickle.__init__(self, model_path, obsd_model_path, seed, **kwargs)
super().__init__(model_path=model_path, obsd_model_path=obsd_model_path, seed=seed)
self._setup(**kwargs)
def _setup(self,
obs_keys:list = DEFAULT_OBS_KEYS,
weighted_reward_keys:list = DEFAULT_RWD_KEYS_AND_WEIGHTS,
goal_pos = (0.0, 0.0), # goal position range (relative to initial pos)
goal_rot = (.785, .785), # goal rotation range (relative to initial rot)
obj_size_change = 0, # object size change (relative to initial size)
obj_mass_range = (.108,.108), # object size change (relative to initial size)
obj_friction_change = (0,0,0), # object friction change (relative to initial size)
pos_th = .025, # position error threshold
rot_th = 0.262, # rotation error threshold
drop_th = .200, # drop height threshold
**kwargs,
):
self.object_sid = self.sim.model.site_name2id("object_o")
self.goal_sid = self.sim.model.site_name2id("target_o")
self.success_indicator_sid = self.sim.model.site_name2id("target_ball")
self.goal_bid = self.sim.model.body_name2id("target")
self.goal_init_pos = self.sim.data.site_xpos[self.goal_sid].copy()
self.goal_obj_offset = self.sim.data.site_xpos[self.goal_sid]-self.sim.data.site_xpos[self.object_sid] # visualization offset between target and object
self.goal_pos = goal_pos
self.goal_rot = goal_rot
self.pos_th = pos_th
self.rot_th = rot_th
self.drop_th = drop_th
# setup for object randomization
self.target_gid = self.sim.model.geom_name2id('target_dice')
self.target_default_size = self.sim.model.geom_size[self.target_gid].copy()
self.object_bid = self.sim.model.body_name2id('Object')
self.object_gid0 = self.sim.model.body_geomadr[self.object_bid]
self.object_gidn = self.object_gid0 + self.sim.model.body_geomnum[self.object_bid]
self.object_default_size = self.sim.model.geom_size[self.object_gid0:self.object_gidn].copy()
self.object_default_pos = self.sim.model.geom_pos[self.object_gid0:self.object_gidn].copy()
self.obj_mass_range = {'low':obj_mass_range[0], 'high':obj_mass_range[1]}
self.obj_size_range = {'low':-obj_size_change, 'high':obj_size_change}
self.obj_friction_range = {'low':self.sim.model.geom_friction[self.object_gid0:self.object_gidn] - obj_friction_change,
'high':self.sim.model.geom_friction[self.object_gid0:self.object_gidn] + obj_friction_change}
super()._setup(obs_keys=obs_keys,
weighted_reward_keys=weighted_reward_keys,
**kwargs,
)
self.init_qpos[:-7] *= 0 # Use fully open as init pos
self.init_qpos[0] = -1.5 # Palm up
def get_obs_dict(self, sim):
obs_dict = {}
obs_dict['t'] = np.array([sim.data.time])
obs_dict['hand_qpos'] = sim.data.qpos[:-7].copy()
obs_dict['hand_qvel'] = sim.data.qvel[:-6].copy()*self.dt
obs_dict['obj_pos'] = sim.data.site_xpos[self.object_sid]
obs_dict['goal_pos'] = sim.data.site_xpos[self.goal_sid]
obs_dict['pos_err'] = obs_dict['goal_pos'] - obs_dict['obj_pos'] - self.goal_obj_offset # correct for visualization offset between target and object
obs_dict['obj_rot'] = mat2euler(np.reshape(sim.data.site_xmat[self.object_sid],(3,3)))
obs_dict['goal_rot'] = mat2euler(np.reshape(sim.data.site_xmat[self.goal_sid],(3,3)))
obs_dict['rot_err'] = obs_dict['goal_rot'] - obs_dict['obj_rot']
if sim.model.na>0:
obs_dict['act'] = sim.data.act[:].copy()
return obs_dict
def get_reward_dict(self, obs_dict):
pos_dist = np.abs(np.linalg.norm(self.obs_dict['pos_err'], axis=-1))
rot_dist = np.abs(np.linalg.norm(self.obs_dict['rot_err'], axis=-1))
act_mag = np.linalg.norm(self.obs_dict['act'], axis=-1)/self.sim.model.na if self.sim.model.na !=0 else 0
drop = pos_dist > self.drop_th
rwd_dict = collections.OrderedDict((
# Perform reward tuning here --
# Update Optional Keys section below
# Update reward keys (DEFAULT_RWD_KEYS_AND_WEIGHTS) accordingly to update final rewards
# Examples: Env comes pre-packaged with two keys pos_dist and rot_dist
# Optional Keys
('pos_dist', -1.*pos_dist),
('rot_dist', -1.*rot_dist),
# Must keys
('act_reg', -1.*act_mag),
('sparse', -rot_dist-10.0*pos_dist),
('solved', (pos_dist<self.pos_th) and (rot_dist<self.rot_th) and (not drop) ),
('done', drop),
))
rwd_dict['dense'] = np.sum([wt*rwd_dict[key] for key, wt in self.rwd_keys_wt.items()], axis=0)
# Sucess Indicator
self.sim.model.site_rgba[self.success_indicator_sid, :2] = np.array([0, 2]) if rwd_dict['solved'] else np.array([2, 0])
return rwd_dict
def get_metrics(self, paths, successful_steps=5):
"""
Evaluate paths and report metrics
"""
num_success = 0
num_paths = len(paths)
# average sucess over entire env horizon
for path in paths:
# record success if solved for provided successful_steps
if np.sum(path['env_infos']['rwd_dict']['solved'] * 1.0) > successful_steps:
num_success += 1
score = num_success/num_paths
# average activations over entire trajectory (can be shorter than horizon, if done) realized
effort = -1.0*np.mean([np.mean(p['env_infos']['rwd_dict']['act_reg']) for p in paths])
metrics = {
'score': score,
'effort':effort,
}
return metrics
def reset(self, reset_qpos=None, reset_qvel=None):
self.sim.model.body_pos[self.goal_bid] = self.goal_init_pos + \
self.np_random.uniform( high=self.goal_pos[1], low=self.goal_pos[0], size=3)
self.sim.model.body_quat[self.goal_bid] = \
euler2quat(self.np_random.uniform(high=self.goal_rot[1], low=self.goal_rot[0], size=3))
# Die friction changes
self.sim.model.geom_friction[self.object_gid0:self.object_gidn] = self.np_random.uniform(**self.obj_friction_range)
# Die mass changes
self.sim.model.body_mass[self.object_bid] = self.np_random.uniform(**self.obj_mass_range) # call to mj_setConst(m,d) is being ignored. Derive quantities wont be updated. Die is simple shape. So this is reasonable approximation.
# Die and Target size changes
del_size = self.np_random.uniform(**self.obj_size_range)
# adjust size of target
self.sim.model.geom_size[self.target_gid] = self.target_default_size + del_size
# adjust size of die
self.sim.model.geom_size[self.object_gid0:self.object_gidn-3][:,1] = self.object_default_size[:-3][:,1] + del_size
self.sim.model.geom_size[self.object_gidn-3:self.object_gidn] = self.object_default_size[-3:] + del_size
# adjust boundary of die
object_gpos = self.sim.model.geom_pos[self.object_gid0:self.object_gidn]
self.sim.model.geom_pos[self.object_gid0:self.object_gidn] = object_gpos/abs(object_gpos+1e-16) * (abs(self.object_default_pos) + del_size)
obs = super().reset(reset_qpos, reset_qvel)
return obs | /robohive-0.3.0-py3-none-any.whl/mj_envs/envs/myo/myochallenge/reorient_v0.py | 0.581303 | 0.284065 | reorient_v0.py | pypi |
from gym.envs.registration import register
import os
curr_dir = os.path.dirname(os.path.abspath(__file__))
import numpy as np
# MyoChallenge Die: Trial env
register(id='myoChallengeDieReorientDemo-v0',
entry_point='mj_envs.envs.myo.myochallenge.reorient_v0:ReorientEnvV0',
max_episode_steps=150,
kwargs={
'model_path': curr_dir+'/../assets/hand/myo_hand_die.xml',
'normalize_act': True,
'frame_skip': 5,
'pos_th': np.inf, # ignore position error threshold
'goal_pos': (0, 0), # 0 cm
'goal_rot': (-.785, .785) # +-45 degrees
}
)
# MyoChallenge Die: Phase1 env
register(id='myoChallengeDieReorientP1-v0',
entry_point='mj_envs.envs.myo.myochallenge.reorient_v0:ReorientEnvV0',
max_episode_steps=150,
kwargs={
'model_path': curr_dir+'/../assets/hand/myo_hand_die.xml',
'normalize_act': True,
'frame_skip': 5,
'goal_pos': (-.010, .010), # +- 1 cm
'goal_rot': (-1.57, 1.57) # +-90 degrees
}
)
# MyoChallenge Die: Phase2 env
register(id='myoChallengeDieReorientP2-v0',
entry_point='mj_envs.envs.myo.myochallenge.reorient_v0:ReorientEnvV0',
max_episode_steps=150,
kwargs={
'model_path': curr_dir+'/../assets/hand/myo_hand_die.xml',
'normalize_act': True,
'frame_skip': 5,
# Randomization in goals
'goal_pos': (-.020, .020), # +- 2 cm
'goal_rot': (-3.14, 3.14), # +-180 degrees
# Randomization in physical properties of the die
'obj_size_change': 0.007, # +-7mm delta change in object size
'obj_mass_range': (0.050, 0.250),# 50gms to 250 gms
'obj_friction_change': (0.2, 0.001, 0.00002) # nominal: 1.0, 0.005, 0.0001
}
)
# MyoChallenge Baoding: Phase1 env
register(id='myoChallengeBaodingP1-v1',
entry_point='mj_envs.envs.myo.myochallenge.baoding_v1:BaodingEnvV1',
max_episode_steps=200,
kwargs={
'model_path': curr_dir+'/../assets/hand/myo_hand_baoding.xml',
'normalize_act': True,
'goal_time_period': (5, 5),
'goal_xrange': (0.025, 0.025),
'goal_yrange': (0.028, 0.028),
}
)
# MyoChallenge Baoding: Phase1 env
register(id='myoChallengeBaodingP2-v1',
entry_point='mj_envs.envs.myo.myochallenge.baoding_v1:BaodingEnvV1',
max_episode_steps=200,
kwargs={
'model_path': curr_dir+'/../assets/hand/myo_hand_baoding.xml',
'normalize_act': True,
'goal_time_period': (4, 6),
'goal_xrange': (0.020, 0.030),
'goal_yrange': (0.022, 0.032),
# Randomization in physical properties of the baoding balls
'obj_size_range': (0.018, 0.024), # Object size range. Nominal 0.022
'obj_mass_range': (0.030, 0.300), # Object weight range. Nominal 43 gms
'obj_friction_change': (0.2, 0.001, 0.00002), # nominal: 1.0, 0.005, 0.0001
'task_choice': 'random'
}
) | /robohive-0.3.0-py3-none-any.whl/mj_envs/envs/myo/myochallenge/__init__.py | 0.500977 | 0.162247 | __init__.py | pypi |
import collections
import gym
import numpy as np
from mj_envs.utils.quat_math import euler2quat
from mj_envs.envs.relay_kitchen.multi_task_base_v1 import KitchenBase
class FrankaSlideFixed(KitchenBase):
OBJ_INTERACTION_SITES = (
"slide_site",
)
OBJ_JNT_NAMES = (
"slidedoor_joint",
)
ROBOT_JNT_NAMES = (
"panda0_joint1",
"panda0_joint2",
"panda0_joint3",
"panda0_joint4",
"panda0_joint5",
"panda0_joint6",
"panda0_joint7",
"panda0_finger_joint1",
"panda0_finger_joint2",
)
def __init__(
self,
model_path,
robot_jnt_names=ROBOT_JNT_NAMES,
obj_jnt_names=OBJ_JNT_NAMES,
obj_interaction_sites=OBJ_INTERACTION_SITES,
goal=None,
interact_site="end_effector",
obj_init=None,
**kwargs,
):
KitchenBase.__init__(
self,
model_path=model_path,
robot_jnt_names=robot_jnt_names,
obj_jnt_names=obj_jnt_names,
obj_interaction_sites=obj_interaction_sites,
goal=goal,
interact_site=interact_site,
obj_init=obj_init,
**kwargs,
)
class FrankaSlideRandom(FrankaSlideFixed):
def reset(self, reset_qpos=None, reset_qvel=None):
bid = self.sim.model.body_name2id('slidecabinet')
r = self.np_random.uniform(low=.4, high=.7)
theta = self.np_random.uniform(low=-1.57, high=1.57)
self.sim.model.body_pos[17][0] = r*np.sin(theta)
self.sim.model.body_pos[17][1] = 0.5 + r*np.cos(theta)
self.sim.model.body_quat[17] = euler2quat([0, 0, -theta])
if reset_qpos is None:
reset_qpos = self.init_qpos.copy()
reset_qpos[self.robot_dofs] += (
0.05
* (self.np_random.uniform(size=len(self.robot_dofs)) - 0.5)
* (self.robot_ranges[:, 1] - self.robot_ranges[:, 0])
)
return super().reset(reset_qpos=reset_qpos, reset_qvel=reset_qvel) | /robohive-0.3.0-py3-none-any.whl/mj_envs/envs/relay_kitchen/franka_slide_v1.py | 0.473414 | 0.308711 | franka_slide_v1.py | pypi |
import collections
import gym
import numpy as np
from mj_envs.envs.relay_kitchen.multi_task_base_v1 import KitchenBase
# ToDo: Get these details from key_frame
DEMO_RESET_QPOS = np.array(
[
1.01020992e-01,
-1.76349747e00,
1.88974607e00,
-2.47661710e00,
3.25189114e-01,
8.29094410e-01,
1.62463629e00,
3.99760380e-02,
3.99791002e-02,
2.45778156e-05,
2.95590127e-07,
2.45777410e-05,
2.95589217e-07,
2.45777410e-05,
2.95589217e-07,
2.45777410e-05,
2.95589217e-07,
2.16196258e-05,
5.08073663e-06,
0.00000000e00,
0.00000000e00,
0.00000000e00,
0.00000000e00,
-2.68999994e-01,
3.49999994e-01,
1.61928391e00,
6.89039584e-19,
-2.26122120e-05,
-8.87580375e-19,
]
)
DEMO_RESET_QVEL = np.array(
[
-1.24094905e-02,
3.07730486e-04,
2.10558046e-02,
-2.11170651e-02,
1.28676305e-02,
2.64535546e-02,
-7.49515183e-03,
-1.34369839e-04,
2.50969693e-04,
1.06229627e-13,
7.14243539e-16,
1.06224762e-13,
7.19794728e-16,
1.06224762e-13,
7.21644648e-16,
1.06224762e-13,
7.14243539e-16,
-1.19464428e-16,
-1.47079926e-17,
0.00000000e00,
0.00000000e00,
0.00000000e00,
0.00000000e00,
2.93530267e-09,
-1.99505748e-18,
3.42031125e-14,
-4.39396125e-17,
6.64174740e-06,
3.52969879e-18,
]
)
class KitchenFrankaFixed(KitchenBase):
OBJ_INTERACTION_SITES = (
"knob1_site",
"knob2_site",
"knob3_site",
"knob4_site",
"light_site",
"slide_site",
"leftdoor_site",
"rightdoor_site",
"microhandle_site",
"kettle_site0",
"kettle_site0",
"kettle_site0",
"kettle_site0",
"kettle_site0",
"kettle_site0",
)
OBJ_JNT_NAMES = (
"knob1_joint",
"knob2_joint",
"knob3_joint",
"knob4_joint",
"lightswitch_joint",
"slidedoor_joint",
"leftdoorhinge",
"rightdoorhinge",
"micro0joint",
"kettle0:Tx",
"kettle0:Ty",
"kettle0:Tz",
"kettle0:Rx",
"kettle0:Ry",
"kettle0:Rz",
)
ROBOT_JNT_NAMES = (
"panda0_joint1",
"panda0_joint2",
"panda0_joint3",
"panda0_joint4",
"panda0_joint5",
"panda0_joint6",
"panda0_joint7",
"panda0_finger_joint1",
"panda0_finger_joint2",
)
def _setup(
self,
robot_jnt_names=ROBOT_JNT_NAMES,
obj_jnt_names=OBJ_JNT_NAMES,
obj_interaction_sites=OBJ_INTERACTION_SITES,
**kwargs,
):
super()._setup(
robot_jnt_names=robot_jnt_names,
obj_jnt_names=obj_jnt_names,
obj_interaction_sites=obj_interaction_sites,
**kwargs,
)
class KitchenFrankaDemo(KitchenFrankaFixed):
def __init__(self, model_path, obsd_model_path=None, seed=None, **kwargs):
# EzPickle.__init__(**locals()) is capturing the input dictionary of the init method of this class.
# In order to successfully capture all arguments we need to call gym.utils.EzPickle.__init__(**locals())
# at the leaf level, when we do inheritance like we do here.
# kwargs is needed at the top level to account for injection of __class__ keyword.
# Also see: https://github.com/openai/gym/pull/1497
gym.utils.EzPickle.__init__(self, model_path, obsd_model_path, seed, **kwargs)
# This two step construction is required for pickling to work correctly. All arguments to all __init__
# calls must be pickle friendly. Things like sim / sim_obsd are NOT pickle friendly. Therefore we
# first construct the inheritance chain, which is just __init__ calls all the way down, with env_base
# creating the sim / sim_obsd instances. Next we run through "setup" which relies on sim / sim_obsd
# created in __init__ to complete the setup.
super().__init__(model_path=model_path, obsd_model_path=obsd_model_path, seed=seed)
super()._setup(**kwargs)
def reset(self, reset_qpos=None, reset_qvel=None):
if reset_qpos is None:
reset_qpos = self.init_qpos.copy()
reset_qvel = self.init_qvel.copy()
reset_qpos[self.robot_dofs] = DEMO_RESET_QPOS[self.robot_dofs]
reset_qvel[self.robot_dofs] = DEMO_RESET_QVEL[self.robot_dofs]
return super().reset(reset_qpos=reset_qpos, reset_qvel=reset_qvel)
class KitchenFrankaRandom(KitchenFrankaFixed):
def __init__(self, model_path, obsd_model_path=None, seed=None, **kwargs):
# EzPickle.__init__(**locals()) is capturing the input dictionary of the init method of this class.
# In order to successfully capture all arguments we need to call gym.utils.EzPickle.__init__(**locals())
# at the leaf level, when we do inheritance like we do here.
# kwargs is needed at the top level to account for injection of __class__ keyword.
# Also see: https://github.com/openai/gym/pull/1497
gym.utils.EzPickle.__init__(self, model_path, obsd_model_path, seed, **kwargs)
# This two step construction is required for pickling to work correctly. All arguments to all __init__
# calls must be pickle friendly. Things like sim / sim_obsd are NOT pickle friendly. Therefore we
# first construct the inheritance chain, which is just __init__ calls all the way down, with env_base
# creating the sim / sim_obsd instances. Next we run through "setup" which relies on sim / sim_obsd
# created in __init__ to complete the setup.
super().__init__(model_path=model_path, obsd_model_path=obsd_model_path, seed=seed)
super()._setup(**kwargs)
def reset(self, reset_qpos=None, reset_qvel=None):
if reset_qpos is None:
reset_qpos = self.init_qpos.copy()
reset_qpos[self.robot_dofs] += (
0.05
* (self.np_random.uniform(size=len(self.robot_dofs)) - 0.5)
* (self.robot_ranges[:, 1] - self.robot_ranges[:, 0])
)
return super().reset(reset_qpos=reset_qpos, reset_qvel=reset_qvel) | /robohive-0.3.0-py3-none-any.whl/mj_envs/envs/relay_kitchen/franka_kitchen_v1.py | 0.476336 | 0.412412 | franka_kitchen_v1.py | pypi |
import os
from gym.envs.registration import register
CURR_DIR = os.path.dirname(os.path.abspath(__file__))
# Kitchen-V3 ============================================================================
# In this version of the environment, the observations consist of the
# distance between end effector and all relevent objects in the scene
print("RS:> Registering Multi-Task (9 subtasks) Envs")
from mj_envs.envs.multi_task.common.franka_kitchen_v1 import KitchenFrankaFixed, KitchenFrankaRandom, KitchenFrankaDemo
MODEL_PATH = CURR_DIR + "/../common/kitchen/franka_kitchen.xml"
CONFIG_PATH = CURR_DIR + "/../common/kitchen/franka_kitchen.config"
DEMO_ENTRY_POINT = "mj_envs.envs.multi_task.common.franka_kitchen_v1:KitchenFrankaDemo"
RANDOM_ENTRY_POINT = "mj_envs.envs.multi_task.common.franka_kitchen_v1:KitchenFrankaRandom"
FIXED_ENTRY_POINT = "mj_envs.envs.multi_task.common.franka_kitchen_v1:KitchenFrankaFixed"
ENTRY_POINT = RANDOM_ENTRY_POINT
obs_keys_wt = {"robot_jnt": 1.0, "objs_jnt": 1.0, "obj_goal": 1.0, "end_effector": 1.0}
for site in KitchenFrankaFixed.OBJ_INTERACTION_SITES:
obs_keys_wt[site + "_err"] = 1.0
# Kitchen (close everything)
register(
id="kitchen_openall-v3",
entry_point=ENTRY_POINT,
max_episode_steps=50,
kwargs={
"model_path": MODEL_PATH,
"config_path": CONFIG_PATH,
"obj_init": {},
"obj_goal": {
"knob1_joint": -1.57,
"knob2_joint": -1.57,
"knob3_joint": -1.57,
"knob4_joint": -1.57,
"lightswitch_joint": -0.7,
"slidedoor_joint": 0.44,
"micro0joint": -1.25,
"rightdoorhinge": 1.57,
"leftdoorhinge": -1.25,
},
"obs_keys_wt": obs_keys_wt,
},
)
register(
id="kitchen_closeall-v3",
entry_point=ENTRY_POINT,
max_episode_steps=50,
kwargs={
"model_path": MODEL_PATH,
"config_path": CONFIG_PATH,
"obj_goal": {},
"obj_init": {
"knob1_joint": -1.57,
"knob2_joint": -1.57,
"knob3_joint": -1.57,
"knob4_joint": -1.57,
"lightswitch_joint": -0.7,
"slidedoor_joint": 0.44,
"micro0joint": -1.25,
"rightdoorhinge": 1.57,
"leftdoorhinge": -1.25,
},
"obs_keys_wt": obs_keys_wt,
},
) | /robohive-0.3.0-py3-none-any.whl/mj_envs/envs/multi_task/substeps9/__init__.py | 0.47658 | 0.319652 | __init__.py | pypi |
import collections
import gym
import numpy as np
from mj_envs.envs.multi_task.multi_task_base_v1 import KitchenBase
# ToDo: Get these details from key_frame
DEMO_RESET_QPOS = np.array(
[
1.01020992e-01,
-1.76349747e00,
1.88974607e00,
-2.47661710e00,
3.25189114e-01,
8.29094410e-01,
1.62463629e00,
3.99760380e-02,
3.99791002e-02,
2.45778156e-05,
2.95590127e-07,
2.45777410e-05,
2.95589217e-07,
2.45777410e-05,
2.95589217e-07,
2.45777410e-05,
2.95589217e-07,
2.16196258e-05,
5.08073663e-06,
0.00000000e00,
0.00000000e00,
0.00000000e00,
0.00000000e00,
-2.68999994e-01,
3.49999994e-01,
1.61928391e00,
6.89039584e-19,
-2.26122120e-05,
-8.87580375e-19,
]
)
DEMO_RESET_QVEL = np.array(
[
-1.24094905e-02,
3.07730486e-04,
2.10558046e-02,
-2.11170651e-02,
1.28676305e-02,
2.64535546e-02,
-7.49515183e-03,
-1.34369839e-04,
2.50969693e-04,
1.06229627e-13,
7.14243539e-16,
1.06224762e-13,
7.19794728e-16,
1.06224762e-13,
7.21644648e-16,
1.06224762e-13,
7.14243539e-16,
-1.19464428e-16,
-1.47079926e-17,
0.00000000e00,
0.00000000e00,
0.00000000e00,
0.00000000e00,
2.93530267e-09,
-1.99505748e-18,
3.42031125e-14,
-4.39396125e-17,
6.64174740e-06,
3.52969879e-18,
]
)
class KitchenFrankaFixed(KitchenBase):
OBJ_INTERACTION_SITES = (
"knob1_site",
"knob2_site",
"knob3_site",
"knob4_site",
"light_site",
"slide_site",
"leftdoor_site",
"rightdoor_site",
"microhandle_site",
"kettle_site0",
"kettle_site0",
"kettle_site0",
"kettle_site0",
"kettle_site0",
"kettle_site0",
)
OBJ_JNT_NAMES = (
"knob1_joint",
"knob2_joint",
"knob3_joint",
"knob4_joint",
"lightswitch_joint",
"slidedoor_joint",
"leftdoorhinge",
"rightdoorhinge",
"micro0joint",
"kettle0:Tx",
"kettle0:Ty",
"kettle0:Tz",
"kettle0:Rx",
"kettle0:Ry",
"kettle0:Rz",
)
ROBOT_JNT_NAMES = (
"panda0_joint1",
"panda0_joint2",
"panda0_joint3",
"panda0_joint4",
"panda0_joint5",
"panda0_joint6",
"panda0_joint7",
"panda0_finger_joint1",
"panda0_finger_joint2",
)
def _setup(
self,
robot_jnt_names=ROBOT_JNT_NAMES,
obj_jnt_names=OBJ_JNT_NAMES,
obj_interaction_site=OBJ_INTERACTION_SITES,
**kwargs,
):
super()._setup(
robot_jnt_names=robot_jnt_names,
obj_jnt_names=obj_jnt_names,
obj_interaction_site=obj_interaction_site,
**kwargs,
)
class KitchenFrankaDemo(KitchenFrankaFixed):
def __init__(self, model_path, obsd_model_path=None, seed=None, **kwargs):
# EzPickle.__init__(**locals()) is capturing the input dictionary of the init method of this class.
# In order to successfully capture all arguments we need to call gym.utils.EzPickle.__init__(**locals())
# at the leaf level, when we do inheritance like we do here.
# kwargs is needed at the top level to account for injection of __class__ keyword.
# Also see: https://github.com/openai/gym/pull/1497
gym.utils.EzPickle.__init__(self, model_path, obsd_model_path, seed, **kwargs)
# This two step construction is required for pickling to work correctly. All arguments to all __init__
# calls must be pickle friendly. Things like sim / sim_obsd are NOT pickle friendly. Therefore we
# first construct the inheritance chain, which is just __init__ calls all the way down, with env_base
# creating the sim / sim_obsd instances. Next we run through "setup" which relies on sim / sim_obsd
# created in __init__ to complete the setup.
super().__init__(model_path=model_path, obsd_model_path=obsd_model_path, seed=seed)
super()._setup(**kwargs)
def reset(self, reset_qpos=None, reset_qvel=None):
if reset_qpos is None:
reset_qpos = self.init_qpos.copy()
reset_qvel = self.init_qvel.copy()
reset_qpos[self.robot_dofs] = DEMO_RESET_QPOS[self.robot_dofs]
reset_qvel[self.robot_dofs] = DEMO_RESET_QVEL[self.robot_dofs]
return super().reset(reset_qpos=reset_qpos, reset_qvel=reset_qvel)
class KitchenFrankaRandom(KitchenFrankaFixed):
def __init__(self, model_path, obsd_model_path=None, seed=None, **kwargs):
# EzPickle.__init__(**locals()) is capturing the input dictionary of the init method of this class.
# In order to successfully capture all arguments we need to call gym.utils.EzPickle.__init__(**locals())
# at the leaf level, when we do inheritance like we do here.
# kwargs is needed at the top level to account for injection of __class__ keyword.
# Also see: https://github.com/openai/gym/pull/1497
gym.utils.EzPickle.__init__(self, model_path, obsd_model_path, seed, **kwargs)
# This two step construction is required for pickling to work correctly. All arguments to all __init__
# calls must be pickle friendly. Things like sim / sim_obsd are NOT pickle friendly. Therefore we
# first construct the inheritance chain, which is just __init__ calls all the way down, with env_base
# creating the sim / sim_obsd instances. Next we run through "setup" which relies on sim / sim_obsd
# created in __init__ to complete the setup.
super().__init__(model_path=model_path, obsd_model_path=obsd_model_path, seed=seed)
super()._setup(**kwargs)
def reset(self, reset_qpos=None, reset_qvel=None):
if reset_qpos is None:
reset_qpos = self.init_qpos.copy()
reset_qpos[self.robot_dofs] += (
0.05
* (self.np_random.uniform(size=len(self.robot_dofs)) - 0.5)
* (self.robot_ranges[:, 1] - self.robot_ranges[:, 0])
)
return super().reset(reset_qpos=reset_qpos, reset_qvel=reset_qvel) | /robohive-0.3.0-py3-none-any.whl/mj_envs/envs/multi_task/common/franka_kitchen_v1.py | 0.45302 | 0.423279 | franka_kitchen_v1.py | pypi |
import collections
import gym
import numpy as np
from mj_envs.utils.quat_math import euler2quat
from mj_envs.envs.multi_task.multi_task_base_v1 import KitchenBase
class FrankaAppliance(KitchenBase):
ROBOT_JNT_NAMES = (
"panda0_joint1",
"panda0_joint2",
"panda0_joint3",
"panda0_joint4",
"panda0_joint5",
"panda0_joint6",
"panda0_joint7",
"panda0_finger_joint1",
"panda0_finger_joint2",
)
def __init__(self, model_path, obsd_model_path=None, seed=None, **kwargs):
# EzPickle.__init__(**locals()) is capturing the input dictionary of the init method of this class.
# In order to successfully capture all arguments we need to call gym.utils.EzPickle.__init__(**locals())
# at the leaf level, when we do inheritance like we do here.
# kwargs is needed at the top level to account for injection of __class__ keyword.
# Also see: https://github.com/openai/gym/pull/1497
gym.utils.EzPickle.__init__(self, model_path, obsd_model_path, seed, **kwargs)
# This two step construction is required for pickling to work correctly. All arguments to all __init__
# calls must be pickle friendly. Things like sim / sim_obsd are NOT pickle friendly. Therefore we
# first construct the inheritance chain, which is just __init__ calls all the way down, with env_base
# creating the sim / sim_obsd instances. Next we run through "setup" which relies on sim / sim_obsd
# created in __init__ to complete the setup.
super().__init__(model_path=model_path, obsd_model_path=obsd_model_path, seed=seed)
self._setup(**kwargs)
def _setup(
self,
obj_body_randomize=None,
robot_jnt_names=ROBOT_JNT_NAMES,
**kwargs,
):
self.obj_body_randomize = obj_body_randomize
super()._setup(
robot_jnt_names=robot_jnt_names,
**kwargs,
)
def reset(self, reset_qpos=None, reset_qvel=None):
# randomize object bodies, if requested
if self.obj_body_randomize:
for body_name in self.obj_body_randomize:
bid = self.sim.model.body_name2id(body_name)
r = self.np_random.uniform(low=.4, high=.7)
theta = self.np_random.uniform(low=-1.57, high=1.57)
self.sim.model.body_pos[bid][0] = r*np.sin(theta)
self.sim.model.body_pos[bid][1] = 0.5 + r*np.cos(theta)
self.sim.model.body_quat[bid] = euler2quat([0, 0, -theta])
# resample init and goal state
self.set_obj_init(self.input_obj_init)
self.set_obj_goal(obj_goal=self.input_obj_goal, interact_site=self.interact_sid)
# Noisy robot reset
if reset_qpos is None:
reset_qpos = self.init_qpos.copy()
reset_qpos[self.robot_dofs] += (
0.05
* (self.np_random.uniform(size=len(self.robot_dofs)) - 0.5)
* (self.robot_ranges[:, 1] - self.robot_ranges[:, 0])
)
return super().reset(reset_qpos=reset_qpos, reset_qvel=reset_qvel) | /robohive-0.3.0-py3-none-any.whl/mj_envs/envs/multi_task/common/franka_appliance_v1.py | 0.716219 | 0.320462 | franka_appliance_v1.py | pypi |
import gym
import numpy as np
from mj_envs.envs import env_base
from mj_envs.utils.xml_utils import reassign_parent
import os
import collections
class FrankaRobotiqData(env_base.MujocoEnv):
DEFAULT_OBS_KEYS = [
't' # dummy key
]
DEFAULT_RWD_KEYS_AND_WEIGHTS = {
"none": -0.0, # dummy key
}
def __init__(self, model_path, obsd_model_path=None, seed=None, **kwargs):
curr_dir = os.path.dirname(os.path.abspath(__file__))
# Process model to use DManus as end effector
raw_sim = env_base.get_sim(model_path=curr_dir+model_path)
raw_xml = raw_sim.model.get_xml()
processed_xml = reassign_parent(xml_str=raw_xml, receiver_node="panda0_link7", donor_node="ee_mount")
processed_model_path = curr_dir+model_path[:-4]+"_processed.xml"
with open(processed_model_path, 'w') as file:
file.write(processed_xml)
# Process model to use DManus as end effector
if obsd_model_path == model_path:
processed_obsd_model_path = processed_model_path
elif obsd_model_path:
raw_sim = env_base.get_sim(model_path=curr_dir+obsd_model_path)
raw_xml = raw_sim.model.get_xml()
processed_xml = reassign_parent(xml_str=raw_xml, receiver_node="panda0_link7", donor_node="ee_mount")
processed_obsd_model_path = curr_dir+obsd_model_path[:-4]+"_processed.xml"
with open(processed_obsd_model_path, 'w') as file:
file.write(processed_xml)
else:
processed_obsd_model_path = None
# EzPickle.__init__(**locals()) is capturing the input dictionary of the init method of this class.
# In order to successfully capture all arguments we need to call gym.utils.EzPickle.__init__(**locals())
# at the leaf level, when we do inheritance like we do here.
# kwargs is needed at the top level to account for injection of __class__ keyword.
# Also see: https://github.com/openai/gym/pull/1497
gym.utils.EzPickle.__init__(self, model_path, obsd_model_path, seed, **kwargs)
# This two step construction is required for pickling to work correctly. All arguments to all __init__
# calls must be pickle friendly. Things like sim / sim_obsd are NOT pickle friendly. Therefore we
# first construct the inheritance chain, which is just __init__ calls all the way down, with env_base
# creating the sim / sim_obsd instances. Next we run through "setup" which relies on sim / sim_obsd
# created in __init__ to complete the setup.
super().__init__(model_path=processed_model_path, obsd_model_path=processed_obsd_model_path, seed=seed)
os.remove(processed_model_path)
if processed_obsd_model_path and processed_obsd_model_path!=processed_model_path:
os.remove(processed_obsd_model_path)
self._setup(**kwargs)
def _setup(self,
nq_arm,
nq_ee,
name_ee,
obs_keys=DEFAULT_OBS_KEYS,
weighted_reward_keys=DEFAULT_RWD_KEYS_AND_WEIGHTS,
**kwargs):
self.nq_arm = nq_arm
self.nq_ee = nq_ee
self.ee_sid = self.sim.model.site_name2id(name_ee)
super()._setup(obs_keys=obs_keys,
weighted_reward_keys=weighted_reward_keys,
frame_skip=40,
**kwargs)
def get_obs_dict(self, sim):
obs_dict = {}
obs_dict['t'] = np.array([self.sim.data.time])
obs_dict['qp'] = sim.data.qpos[:self.nq_arm+self.nq_ee].copy()
obs_dict['qp_arm'] = sim.data.qpos[:self.nq_arm].copy()
obs_dict['qv_arm'] = sim.data.qvel[:self.nq_arm].copy()
obs_dict['qp_ee'] = sim.data.qpos[self.nq_arm:self.nq_arm+self.nq_ee].copy()
obs_dict['qv_ee'] = sim.data.qvel[self.nq_arm:self.nq_arm+self.nq_ee].copy()
obs_dict['pos_ee'] = sim.data.site_xpos[self.ee_sid]
obs_dict['rot_ee'] = sim.data.site_xmat[self.ee_sid]
return obs_dict
def get_reward_dict(self, obs_dict):
rwd_dict = collections.OrderedDict((
# Optional Keys
('none', 0.0),
# Must keys
('sparse', 0.0),
('solved', False),
('done', False),
))
rwd_dict['dense'] = np.sum([wt*rwd_dict[key] for key, wt in self.rwd_keys_wt.items()], axis=0)
return rwd_dict | /robohive-0.3.0-py3-none-any.whl/mj_envs/envs/fm/franka_robotiq_data_v0.py | 0.436142 | 0.184584 | franka_robotiq_data_v0.py | pypi |
import gym
import numpy as np
from mj_envs.envs import env_base
from mj_envs.utils.xml_utils import reassign_parent
import os
import collections
class FrankaEEPose(env_base.MujocoEnv):
DEFAULT_OBS_KEYS = [
'qp', 'qv', 'pose_err'
]
DEFAULT_RWD_KEYS_AND_WEIGHTS = {
"pose": -1.0,
"bonus": 4.0,
"penalty": -50,
}
def __init__(self, model_path, obsd_model_path=None, seed=None, **kwargs):
curr_dir = os.path.dirname(os.path.abspath(__file__))
# Process model to use DManus as end effector
raw_sim = env_base.get_sim(model_path=curr_dir+model_path)
raw_xml = raw_sim.model.get_xml()
processed_xml = reassign_parent(xml_str=raw_xml, receiver_node="panda0_link7", donor_node="ee_mount")
processed_model_path = curr_dir+model_path[:-4]+"_processed.xml"
with open(processed_model_path, 'w') as file:
file.write(processed_xml)
# Process model to use DManus as end effector
if obsd_model_path == model_path:
processed_obsd_model_path = processed_model_path
elif obsd_model_path:
raw_sim = env_base.get_sim(model_path=curr_dir+obsd_model_path)
raw_xml = raw_sim.model.get_xml()
processed_xml = reassign_parent(xml_str=raw_xml, receiver_node="panda0_link7", donor_node="ee_mount")
processed_obsd_model_path = curr_dir+obsd_model_path[:-4]+"_processed.xml"
with open(processed_obsd_model_path, 'w') as file:
file.write(processed_xml)
else:
processed_obsd_model_path = None
# EzPickle.__init__(**locals()) is capturing the input dictionary of the init method of this class.
# In order to successfully capture all arguments we need to call gym.utils.EzPickle.__init__(**locals())
# at the leaf level, when we do inheritance like we do here.
# kwargs is needed at the top level to account for injection of __class__ keyword.
# Also see: https://github.com/openai/gym/pull/1497
gym.utils.EzPickle.__init__(self, model_path, obsd_model_path, seed, **kwargs)
# This two step construction is required for pickling to work correctly. All arguments to all __init__
# calls must be pickle friendly. Things like sim / sim_obsd are NOT pickle friendly. Therefore we
# first construct the inheritance chain, which is just __init__ calls all the way down, with env_base
# creating the sim / sim_obsd instances. Next we run through "setup" which relies on sim / sim_obsd
# created in __init__ to complete the setup.
super().__init__(model_path=processed_model_path, obsd_model_path=processed_obsd_model_path, seed=seed)
os.remove(processed_model_path)
if processed_obsd_model_path and processed_obsd_model_path!=processed_model_path:
os.remove(processed_obsd_model_path)
self._setup(**kwargs)
def _setup(self,
target_pose,
obs_keys=DEFAULT_OBS_KEYS,
weighted_reward_keys=DEFAULT_RWD_KEYS_AND_WEIGHTS,
**kwargs):
if isinstance(target_pose,np.ndarray):
self.target_type = 'fixed'
self.target_pose = target_pose
elif target_pose == 'random':
self.target_type = 'random'
self.target_pose = self.get_target_pose() # fake target for setup
super()._setup(obs_keys=obs_keys,
weighted_reward_keys=weighted_reward_keys,
frame_skip=40,
**kwargs)
def get_obs_dict(self, sim):
obs_dict = {}
obs_dict['t'] = np.array([self.sim.data.time])
obs_dict['qp'] = sim.data.qpos.copy()
obs_dict['qv'] = sim.data.qvel.copy()
obs_dict['pose_err'] = obs_dict['qp'] - self.target_pose
return obs_dict
def get_reward_dict(self, obs_dict):
pose_dist = np.linalg.norm(obs_dict['pose_err'], axis=-1)
far_th = 10
rwd_dict = collections.OrderedDict((
# Optional Keys
('pose', pose_dist),
('bonus', (pose_dist<1) + (pose_dist<2)),
('penalty', (pose_dist>far_th)),
# Must keys
('sparse', -1.0*pose_dist),
('solved', pose_dist<.5),
('done', pose_dist > far_th),
))
rwd_dict['dense'] = np.sum([wt*rwd_dict[key] for key, wt in self.rwd_keys_wt.items()], axis=0)
return rwd_dict
def get_target_pose(self):
if self.target_type == 'fixed':
return self.target_pose
elif self.target_type == 'random':
return self.np_random.uniform(low=self.sim.model.actuator_ctrlrange[:,0], high=self.sim.model.actuator_ctrlrange[:,1])
def reset(self, reset_qpos=None, reset_qvel=None):
self.target_pose = self.get_target_pose()
obs = super().reset(reset_qpos, reset_qvel)
return obs
class FrankaRobotiqPose(FrankaEEPose):
def __init__(self, model_path, obsd_model_path=None, seed=None, **kwargs):
gym.utils.EzPickle.__init__(self, model_path, obsd_model_path, seed, **kwargs)
self.nqp = 8
super().__init__(model_path=model_path, obsd_model_path=obsd_model_path, seed=seed, **kwargs)
def get_obs_dict(self, sim):
obs_dict = {}
obs_dict['t'] = np.array([self.sim.data.time])
obs_dict['qp'] = sim.data.qpos[:self.nqp].copy()
obs_dict['qv'] = sim.data.qvel[:self.nqp].copy()
obs_dict['pose_err'] = obs_dict['qp'] - self.target_pose
return obs_dict
def get_target_pose(self):
if self.target_type == 'fixed':
return self.target_pose
elif self.target_type == 'random':
return self.np_random.uniform(low=self.sim.model.actuator_ctrlrange[:self.nqp,0], high=self.sim.model.actuator_ctrlrange[:self.nqp,1]) | /robohive-0.3.0-py3-none-any.whl/mj_envs/envs/fm/franka_ee_pose_v0.py | 0.539469 | 0.232397 | franka_ee_pose_v0.py | pypi |
from gym.envs.registration import register
import numpy as np
import os
curr_dir = os.path.dirname(os.path.abspath(__file__))
# Reach to fixed target
# register(
# id='DManusReachFixed-v0',
# entry_point='mj_envs.envs.fm.franka_ee_pose_v0:FMReachEnvFixed',
# max_episode_steps=50, #50steps*40Skip*2ms = 4s
# kwargs={
# 'model_path': '/assets/dmanus.xml',
# 'config_path': curr_dir+'/assets/dmanus.config',
# 'target_pose': np.array([0, 1, 1, 0, 1, 1, 0, 1, 1]),
# }
# )
# Pose to fixed target
register(
id='rpFrankaDmanusPoseFixed-v0',
entry_point='mj_envs.envs.fm.franka_ee_pose_v0:FrankaEEPose',
max_episode_steps=50, #50steps*40Skip*2ms = 4s
kwargs={
'model_path': '/assets/franka_dmanus.xml',
'config_path': curr_dir+'/assets/franka_dmanus.config',
'target_pose': np.array([0, 0, 0, -1.57, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1]),
}
)
# Pose to random target
register(
id='rpFrankaDmanusPoseRandom-v0',
entry_point='mj_envs.envs.fm.franka_ee_pose_v0:FrankaEEPose',
max_episode_steps=50, #50steps*40Skip*2ms = 4s
kwargs={
'model_path': '/assets/franka_dmanus.xml',
'config_path': curr_dir+'/assets/franka_dmanus.config',
'target_pose': 'random'
}
)
# Pose to fixed target
register(
id='rpFrankaRobotiqPoseFixed-v0',
entry_point='mj_envs.envs.fm.franka_ee_pose_v0:FrankaRobotiqPose',
max_episode_steps=50, #50steps*40Skip*2ms = 4s
kwargs={
'model_path': '/assets/franka_robotiq.xml',
'config_path': curr_dir+'/assets/franka_robotiq.config',
'target_pose': np.array([0, 0, 0, -1.57, 0, 0, 0, 0]),
}
)
# Pose to fixed target
register(
id='rpFrankaRobotiqPoseRandom-v0',
entry_point='mj_envs.envs.fm.franka_ee_pose_v0:FrankaRobotiqPose',
max_episode_steps=50, #50steps*40Skip*2ms = 4s
kwargs={
'model_path': '/assets/franka_robotiq.xml',
'config_path': curr_dir+'/assets/franka_robotiq.config',
'target_pose': 'random',
}
)
# Pose to fixed target
encoder_type = "2d"
# img_res="480x640"
img_res="240x424"
register(
id='rpFrankaRobotiqData-v0',
entry_point='mj_envs.envs.fm.franka_robotiq_data_v0:FrankaRobotiqData',
max_episode_steps=50, #50steps*40Skip*2ms = 4s
kwargs={
'model_path': '/assets/franka_robotiq.xml',
'config_path': curr_dir+'/assets/franka_robotiq.config',
'nq_arm':7,
'nq_ee':1,
'name_ee':'end_effector',
'obs_keys':[
# customize the visual keys
"rgb:left_cam:{}:{}".format(img_res, encoder_type),
"rgb:right_cam:{}:{}".format(img_res, encoder_type),
"rgb:top_cam:{}:{}".format(img_res, encoder_type),
"rgb:Franka_wrist_cam:{}:{}".format(img_res, encoder_type),
"d:left_cam:{}:{}".format(img_res, encoder_type),
"d:right_cam:{}:{}".format(img_res, encoder_type),
"d:top_cam:{}:{}".format(img_res, encoder_type),
"d:Franka_wrist_cam:{}:{}".format(img_res, encoder_type),
]
}
) | /robohive-0.3.0-py3-none-any.whl/mj_envs/envs/fm/__init__.py | 0.453988 | 0.208985 | __init__.py | pypi |
import gym
import numpy as np
from mj_envs.envs import env_base
from mj_envs.utils.xml_utils import reassign_parent
import os
import collections
class TrackEnv(env_base.MujocoEnv):
DEFAULT_OBS_KEYS = [
'qp', 'qv', 'pose_err'
]
DEFAULT_RWD_KEYS_AND_WEIGHTS = {
"pose": -1.0,
"bonus": 4.0,
"penalty": -50,
}
def __init__(self, object_name, model_path, obsd_model_path=None, seed=None, **kwargs):
curr_dir = os.path.dirname(os.path.abspath(__file__))
# Process model_path to import the right object
with open(curr_dir+model_path, 'r') as file:
processed_xml = file.read()
processed_xml = processed_xml.replace('OBJECT_NAME', object_name)
processed_model_path = curr_dir+model_path[:-4]+"_processed.xml"
with open(processed_model_path, 'w') as file:
file.write(processed_xml)
# Process obsd_model_path to import the right object
if obsd_model_path == model_path:
processed_obsd_model_path = processed_model_path
elif obsd_model_path:
with open(curr_dir+obsd_model_path, 'r') as file:
processed_xml = file.read()
processed_xml = processed_xml.replace('OBJECT_NAME', object_name)
processed_obsd_model_path = curr_dir+model_path[:-4]+"_processed.xml"
with open(processed_obsd_model_path, 'w') as file:
file.write(processed_xml)
else:
processed_obsd_model_path = None
# EzPickle.__init__(**locals()) is capturing the input dictionary of the init method of this class.
# In order to successfully capture all arguments we need to call gym.utils.EzPickle.__init__(**locals())
# at the leaf level, when we do inheritance like we do here.
# kwargs is needed at the top level to account for injection of __class__ keyword.
# Also see: https://github.com/openai/gym/pull/1497
gym.utils.EzPickle.__init__(self, object_name, model_path, obsd_model_path, seed, **kwargs)
# This two step construction is required for pickling to work correctly. All arguments to all __init__
# calls must be pickle friendly. Things like sim / sim_obsd are NOT pickle friendly. Therefore we
# first construct the inheritance chain, which is just __init__ calls all the way down, with env_base
# creating the sim / sim_obsd instances. Next we run through "setup" which relies on sim / sim_obsd
# created in __init__ to complete the setup.
super().__init__(model_path=processed_model_path, obsd_model_path=processed_obsd_model_path, seed=seed)
os.remove(processed_model_path)
if processed_obsd_model_path and processed_obsd_model_path!=processed_model_path:
os.remove(processed_obsd_model_path)
self._setup(**kwargs)
def _setup(self,
target_pose,
obs_keys=DEFAULT_OBS_KEYS,
weighted_reward_keys=DEFAULT_RWD_KEYS_AND_WEIGHTS,
**kwargs):
if isinstance(target_pose, np.ndarray):
self.target_type = 'fixed'
self.target_pose = target_pose
elif target_pose == 'random':
self.target_type = 'random'
self.target_pose = self.sim.data.qpos.copy() # fake target for setup
elif target_pose == None:
self.target_type = 'track'
self.target_pose = self.sim.data.qpos.copy() # fake target until we get the reference motion setup
super()._setup(obs_keys=obs_keys,
weighted_reward_keys=weighted_reward_keys,
frame_skip=40,
**kwargs)
if self.sim.model.nkey>0:
self.init_qpos[:] = self.sim.model.key_qpos[0,:]
def get_obs_dict(self, sim):
obs_dict = {}
obs_dict['t'] = np.array([self.sim.data.time])
obs_dict['qp'] = sim.data.qpos.copy()
obs_dict['qv'] = sim.data.qvel.copy()
obs_dict['pose_err'] = obs_dict['qp'] - self.target_pose
return obs_dict
def get_reward_dict(self, obs_dict):
pose_dist = np.linalg.norm(obs_dict['pose_err'], axis=-1)
far_th = 10
rwd_dict = collections.OrderedDict((
# Optional Keys
('pose', pose_dist),
('bonus', (pose_dist<1) + (pose_dist<2)),
('penalty', (pose_dist>far_th)),
# Must keys
('sparse', -1.0*pose_dist),
('solved', pose_dist<.5),
('done', pose_dist > far_th),
))
rwd_dict['dense'] = np.sum([wt*rwd_dict[key] for key, wt in self.rwd_keys_wt.items()], axis=0)
return rwd_dict
def get_target_pose(self):
if self.target_type == 'fixed':
return self.target_pose
elif self.target_type == 'random':
return self.np_random.uniform(low=self.sim.model.actuator_ctrlrange[:,0], high=self.sim.model.actuator_ctrlrange[:,1])
elif self.target_type == 'track':
return self.target_pose # ToDo: Update the target pose as per the tracking trajectory
def reset(self):
self.target_pose = self.get_target_pose()
obs = super().reset(self.init_qpos, self.init_qvel)
return obs | /robohive-0.3.0-py3-none-any.whl/mj_envs/envs/dexman/track.py | 0.436142 | 0.250317 | track.py | pypi |
import collections
import gym
import numpy as np
from mj_envs.envs import env_base
from mj_envs.utils.quat_math import mat2euler, euler2quat
class ReorientBaseV0(env_base.MujocoEnv):
DEFAULT_OBS_KEYS = [
'qp', 'qv', 'reach_pos_err', 'reach_rot_err'
]
DEFAULT_RWD_KEYS_AND_WEIGHTS = {
"reach_pos": -1.0,
"reach_rot": -0.01,
"bonus": 4.0,
"penalty": -50,
}
def __init__(self, model_path, obsd_model_path=None, seed=None, **kwargs):
# EzPickle.__init__(**locals()) is capturing the input dictionary of the init method of this class.
# In order to successfully capture all arguments we need to call gym.utils.EzPickle.__init__(**locals())
# at the leaf level, when we do inheritance like we do here.
# kwargs is needed at the top level to account for injection of __class__ keyword.
# Also see: https://github.com/openai/gym/pull/1497
gym.utils.EzPickle.__init__(self, model_path, obsd_model_path, seed, **kwargs)
# This two step construction is required for pickling to work correctly. All arguments to all __init__
# calls must be pickle friendly. Things like sim / sim_obsd are NOT pickle friendly. Therefore we
# first construct the inheritance chain, which is just __init__ calls all the way down, with env_base
# creating the sim / sim_obsd instances. Next we run through "setup" which relies on sim / sim_obsd
# created in __init__ to complete the setup.
super().__init__(model_path=model_path, obsd_model_path=obsd_model_path, seed=seed)
self._setup(**kwargs)
def _setup(self,
object_site_name,
target_site_name,
target_xyz_range,
target_euler_range,
frame_skip = 40,
reward_mode = "dense",
obs_keys=DEFAULT_OBS_KEYS,
weighted_reward_keys=DEFAULT_RWD_KEYS_AND_WEIGHTS,
**kwargs,
):
# ids
self.object_sid = self.sim.model.site_name2id(object_site_name)
self.target_sid = self.sim.model.site_name2id(target_site_name)
self.target_xyz_range = target_xyz_range
self.target_euler_range = target_euler_range
super()._setup(obs_keys=obs_keys,
weighted_reward_keys=weighted_reward_keys,
reward_mode=reward_mode,
frame_skip=frame_skip,
**kwargs)
def get_obs_dict(self, sim):
obs_dict = {}
obs_dict['t'] = np.array([self.sim.data.time])
obs_dict['qp'] = sim.data.qpos.copy()
obs_dict['qv'] = sim.data.qvel.copy()
obs_dict['reach_pos_err'] = sim.data.site_xpos[self.target_sid]-sim.data.site_xpos[self.object_sid]
obs_dict['reach_rot_err'] = mat2euler(np.reshape(sim.data.site_xmat[self.target_sid],(3,3))) - mat2euler(np.reshape(sim.data.site_xmat[self.object_sid],(3,3)))
return obs_dict
def get_reward_dict(self, obs_dict):
reach_pos_dist = np.linalg.norm(obs_dict['reach_pos_err'], axis=-1)
reach_rot_dist = np.linalg.norm(obs_dict['reach_rot_err'], axis=-1)
far_th = 1.0
rwd_dict = collections.OrderedDict((
# Optional Keys
('reach_pos', reach_pos_dist),
('reach_rot', reach_rot_dist),
('bonus', (reach_pos_dist<.1) + (reach_pos_dist<.05) + (reach_rot_dist<.3) + (reach_rot_dist<.1)),
('penalty', (reach_pos_dist>far_th)),
# Must keys
('sparse', -1.0*reach_pos_dist-1.0*reach_rot_dist),
('solved', (reach_pos_dist<.050) and (reach_rot_dist<.1)),
('done', reach_pos_dist > far_th),
))
rwd_dict['dense'] = np.sum([wt*rwd_dict[key] for key, wt in self.rwd_keys_wt.items()], axis=0)
return rwd_dict
def reset(self):
desired_pos = self.np_random.uniform(high=self.target_xyz_range['high'], low=self.target_xyz_range['low'])
self.sim.model.site_pos[self.target_sid] = desired_pos
self.sim_obsd.model.site_pos[self.target_sid] = desired_pos
desired_orien = np.zeros(3)
desired_orien = self.np_random.uniform(high=self.target_euler_range['high'], low=self.target_euler_range['low'])
self.sim.model.site_quat[self.target_sid] = euler2quat(desired_orien)
self.sim_obsd.model.site_quat[self.target_sid] = euler2quat(desired_orien)
obs = super().reset(self.init_qpos, self.init_qvel)
return obs | /robohive-0.3.0-py3-none-any.whl/mj_envs/envs/claws/reorient_v0.py | 0.64646 | 0.362715 | reorient_v0.py | pypi |
import collections
import gym
import numpy as np
from mj_envs.envs import env_base
class ReachBaseV0(env_base.MujocoEnv):
DEFAULT_OBS_KEYS = [
'qp', 'qv', 'reach_err'
]
DEFAULT_RWD_KEYS_AND_WEIGHTS = {
"reach": -1.0,
"bonus": 4.0,
"penalty": -50,
}
def __init__(self, model_path, obsd_model_path=None, seed=None, **kwargs):
# EzPickle.__init__(**locals()) is capturing the input dictionary of the init method of this class.
# In order to successfully capture all arguments we need to call gym.utils.EzPickle.__init__(**locals())
# at the leaf level, when we do inheritance like we do here.
# kwargs is needed at the top level to account for injection of __class__ keyword.
# Also see: https://github.com/openai/gym/pull/1497
gym.utils.EzPickle.__init__(self, model_path, obsd_model_path, seed, **kwargs)
# This two step construction is required for pickling to work correctly. All arguments to all __init__
# calls must be pickle friendly. Things like sim / sim_obsd are NOT pickle friendly. Therefore we
# first construct the inheritance chain, which is just __init__ calls all the way down, with env_base
# creating the sim / sim_obsd instances. Next we run through "setup" which relies on sim / sim_obsd
# created in __init__ to complete the setup.
super().__init__(model_path=model_path, obsd_model_path=obsd_model_path, seed=seed)
self._setup(**kwargs)
def _setup(self,
robot_site_name,
target_site_name,
target_xyz_range,
frame_skip = 40,
reward_mode = "dense",
obs_keys=DEFAULT_OBS_KEYS,
weighted_reward_keys=DEFAULT_RWD_KEYS_AND_WEIGHTS,
**kwargs,
):
# ids
self.grasp_sid = self.sim.model.site_name2id(robot_site_name)
self.target_sid = self.sim.model.site_name2id(target_site_name)
self.target_xyz_range = target_xyz_range
super()._setup(obs_keys=obs_keys,
weighted_reward_keys=weighted_reward_keys,
reward_mode=reward_mode,
frame_skip=frame_skip,
**kwargs)
def get_obs_dict(self, sim):
obs_dict = {}
obs_dict['t'] = np.array([self.sim.data.time])
obs_dict['qp'] = sim.data.qpos.copy()
obs_dict['qv'] = sim.data.qvel.copy()
obs_dict['reach_err'] = sim.data.site_xpos[self.target_sid]-sim.data.site_xpos[self.grasp_sid]
return obs_dict
def get_reward_dict(self, obs_dict):
reach_dist = np.linalg.norm(obs_dict['reach_err'], axis=-1)
far_th = 1.0
rwd_dict = collections.OrderedDict((
# Optional Keys
('reach', reach_dist),
('bonus', (reach_dist<.1) + (reach_dist<.05)),
('penalty', (reach_dist>far_th)),
# Must keys
('sparse', -1.0*reach_dist),
('solved', reach_dist<.050),
('done', reach_dist > far_th),
))
rwd_dict['dense'] = np.sum([wt*rwd_dict[key] for key, wt in self.rwd_keys_wt.items()], axis=0)
return rwd_dict
def reset(self, reset_qpos=None, reset_qvel=None):
self.sim.model.site_pos[self.target_sid] = self.np_random.uniform(high=self.target_xyz_range['high'], low=self.target_xyz_range['low'])
self.sim_obsd.model.site_pos[self.target_sid] = self.sim.model.site_pos[self.target_sid]
obs = super().reset(reset_qpos, reset_qvel)
return obs | /robohive-0.3.0-py3-none-any.whl/mj_envs/envs/arms/reach_base_v0.py | 0.546738 | 0.266375 | reach_base_v0.py | pypi |
import collections
import gym
import numpy as np
from mj_envs.envs import env_base
class PushBaseV0(env_base.MujocoEnv):
DEFAULT_OBS_KEYS = [
'qp', 'qv', 'object_err', 'target_err'
]
DEFAULT_RWD_KEYS_AND_WEIGHTS = {
"object_dist": -1.0,
"target_dist": -1.0,
"bonus": 4.0,
"penalty": -50,
}
def __init__(self, model_path, obsd_model_path=None, seed=None, **kwargs):
# EzPickle.__init__(**locals()) is capturing the input dictionary of the init method of this class.
# In order to successfully capture all arguments we need to call gym.utils.EzPickle.__init__(**locals())
# at the leaf level, when we do inheritance like we do here.
# kwargs is needed at the top level to account for injection of __class__ keyword.
# Also see: https://github.com/openai/gym/pull/1497
gym.utils.EzPickle.__init__(self, model_path, obsd_model_path, seed, **kwargs)
# This two step construction is required for pickling to work correctly. All arguments to all __init__
# calls must be pickle friendly. Things like sim / sim_obsd are NOT pickle friendly. Therefore we
# first construct the inheritance chain, which is just __init__ calls all the way down, with env_base
# creating the sim / sim_obsd instances. Next we run through "setup" which relies on sim / sim_obsd
# created in __init__ to complete the setup.
super().__init__(model_path=model_path, obsd_model_path=obsd_model_path, seed=seed)
self._setup(**kwargs)
def _setup(self,
robot_site_name,
object_site_name,
target_site_name,
target_xyz_range,
frame_skip=40,
reward_mode="dense",
obs_keys=DEFAULT_OBS_KEYS,
weighted_reward_keys=DEFAULT_RWD_KEYS_AND_WEIGHTS,
**kwargs,
):
# ids
self.grasp_sid = self.sim.model.site_name2id(robot_site_name)
self.object_sid = self.sim.model.site_name2id(object_site_name)
self.target_sid = self.sim.model.site_name2id(target_site_name)
self.target_xyz_range = target_xyz_range
super()._setup(obs_keys=obs_keys,
weighted_reward_keys=weighted_reward_keys,
reward_mode=reward_mode,
frame_skip=frame_skip,
**kwargs)
def get_obs_dict(self, sim):
obs_dict = {}
obs_dict['t'] = np.array([self.sim.data.time])
obs_dict['qp'] = sim.data.qpos.copy()
obs_dict['qv'] = sim.data.qvel.copy()
obs_dict['object_err'] = sim.data.site_xpos[self.object_sid]-sim.data.site_xpos[self.grasp_sid]
obs_dict['target_err'] = sim.data.site_xpos[self.target_sid]-sim.data.site_xpos[self.object_sid]
return obs_dict
def get_reward_dict(self, obs_dict):
object_dist = np.linalg.norm(obs_dict['object_err'], axis=-1)
target_dist = np.linalg.norm(obs_dict['target_err'], axis=-1)
far_th = 1.25
rwd_dict = collections.OrderedDict((
# Optional Keys
('object_dist', object_dist),
('target_dist', target_dist),
('bonus', (object_dist<.1) + (target_dist<.1) + (target_dist<.05)),
('penalty', (object_dist>far_th)),
# Must keys
('sparse', -1.0*target_dist),
('solved', target_dist<.050),
('done', object_dist > far_th),
))
rwd_dict['dense'] = np.sum([wt*rwd_dict[key] for key, wt in self.rwd_keys_wt.items()], axis=0)
return rwd_dict
def reset(self):
self.sim.model.site_pos[self.target_sid] = self.np_random.uniform(high=self.target_xyz_range['high'], low=self.target_xyz_range['low'])
self.sim_obsd.model.site_pos[self.target_sid] = self.sim.model.site_pos[self.target_sid]
obs = super().reset(self.init_qpos, self.init_qvel)
return obs | /robohive-0.3.0-py3-none-any.whl/mj_envs/envs/arms/push_base_v0.py | 0.610686 | 0.238495 | push_base_v0.py | pypi |
import collections
import gym
import numpy as np
from mj_envs.envs import env_base
from mj_envs.utils.quat_math import euler2quat
class PickPlaceV0(env_base.MujocoEnv):
DEFAULT_OBS_KEYS = [
'qp', 'qv', 'object_err', 'target_err'
]
DEFAULT_RWD_KEYS_AND_WEIGHTS = {
"object_dist": -1.0,
"target_dist": -1.0,
"bonus": 4.0,
"penalty": -50,
}
def __init__(self, model_path, obsd_model_path=None, seed=None, **kwargs):
# EzPickle.__init__(**locals()) is capturing the input dictionary of the init method of this class.
# In order to successfully capture all arguments we need to call gym.utils.EzPickle.__init__(**locals())
# at the leaf level, when we do inheritance like we do here.
# kwargs is needed at the top level to account for injection of __class__ keyword.
# Also see: https://github.com/openai/gym/pull/1497
gym.utils.EzPickle.__init__(self, model_path, obsd_model_path, seed, **kwargs)
# This two step construction is required for pickling to work correctly. All arguments to all __init__
# calls must be pickle friendly. Things like sim / sim_obsd are NOT pickle friendly. Therefore we
# first construct the inheritance chain, which is just __init__ calls all the way down, with env_base
# creating the sim / sim_obsd instances. Next we run through "setup" which relies on sim / sim_obsd
# created in __init__ to complete the setup.
super().__init__(model_path=model_path, obsd_model_path=obsd_model_path, seed=seed)
self._setup(**kwargs)
def _setup(self,
robot_site_name,
object_site_name,
target_site_name,
target_xyz_range,
frame_skip=40,
reward_mode="dense",
obs_keys=DEFAULT_OBS_KEYS,
weighted_reward_keys=DEFAULT_RWD_KEYS_AND_WEIGHTS,
randomize=False,
geom_sizes={'high':[.05, .05, .05], 'low':[.2, 0.2, 0.2]},
**kwargs,
):
# ids
self.grasp_sid = self.sim.model.site_name2id(robot_site_name)
self.object_sid = self.sim.model.site_name2id(object_site_name)
self.target_sid = self.sim.model.site_name2id(target_site_name)
self.target_xyz_range = target_xyz_range
self.randomize = randomize
self.geom_sizes = geom_sizes
super()._setup(obs_keys=obs_keys,
weighted_reward_keys=weighted_reward_keys,
reward_mode=reward_mode,
frame_skip=frame_skip,
**kwargs)
def get_obs_dict(self, sim):
obs_dict = {}
obs_dict['t'] = np.array([self.sim.data.time])
obs_dict['qp'] = sim.data.qpos.copy()
obs_dict['qv'] = sim.data.qvel.copy()
obs_dict['object_err'] = sim.data.site_xpos[self.object_sid]-sim.data.site_xpos[self.grasp_sid]
obs_dict['target_err'] = sim.data.site_xpos[self.target_sid]-sim.data.site_xpos[self.object_sid]
return obs_dict
def get_reward_dict(self, obs_dict):
object_dist = np.linalg.norm(obs_dict['object_err'], axis=-1)
target_dist = np.linalg.norm(obs_dict['target_err'], axis=-1)
far_th = 1.25
rwd_dict = collections.OrderedDict((
# Optional Keys
('object_dist', object_dist),
('target_dist', target_dist),
('bonus', (object_dist<.1) + (target_dist<.1) + (target_dist<.05)),
('penalty', (object_dist>far_th)),
# Must keys
('sparse', -1.0*target_dist),
('solved', target_dist<.050),
('done', object_dist > far_th),
))
rwd_dict['dense'] = np.sum([wt*rwd_dict[key] for key, wt in self.rwd_keys_wt.items()], axis=0)
return rwd_dict
def reset(self):
if self.randomize:
# target location
self.sim.model.site_pos[self.target_sid] = self.np_random.uniform(high=self.target_xyz_range['high'], low=self.target_xyz_range['low'])
self.sim_obsd.model.site_pos[self.target_sid] = self.sim.model.site_pos[self.target_sid]
# object shapes and locations
for body in ["obj0", "obj1", "obj2"]:
bid = self.sim.model.body_name2id(body)
self.sim.model.body_pos[bid] += self.np_random.uniform(low=[-.010, -.010, -.010], high=[-.010, -.010, -.010])# random pos
self.sim.model.body_quat[bid] = euler2quat(self.np_random.uniform(low=(-np.pi/2, -np.pi/2, -np.pi/2), high=(np.pi/2, np.pi/2, np.pi/2)) ) # random quat
for gid in range(self.sim.model.body_geomnum[bid]):
gid+=self.sim.model.body_geomadr[bid]
self.sim.model.geom_type[gid]=self.np_random.randint(low=2, high=7) # random shape
self.sim.model.geom_size[gid]=self.np_random.uniform(low=self.geom_sizes['low'], high=self.geom_sizes['high']) # random size
self.sim.model.geom_pos[gid]=self.np_random.uniform(low=-1*self.sim.model.geom_size[gid], high=self.sim.model.geom_size[gid]) # random pos
self.sim.model.geom_quat[gid]=euler2quat(self.np_random.uniform(low=(-np.pi/2, -np.pi/2, -np.pi/2), high=(np.pi/2, np.pi/2, np.pi/2)) ) # random quat
self.sim.model.geom_rgba[gid]=self.np_random.uniform(low=[.2, .2, .2, 1], high=[.9, .9, .9, 1]) # random color
self.sim.forward()
obs = super().reset(self.init_qpos, self.init_qvel)
return obs | /robohive-0.3.0-py3-none-any.whl/mj_envs/envs/arms/pick_place_v0.py | 0.579995 | 0.282495 | pick_place_v0.py | pypi |
import collections
import gym
import numpy as np
from mj_envs.utils.quat_math import *
from mj_envs.envs import env_base
# OBS_KEYS = ['hand_jnt', 'obj_vel', 'palm_pos', 'obj_pos', 'obj_rot', 'target_pos', 'nail_impact'] # DAPG
DEFAULT_OBS_KEYS = ['hand_jnt', 'obj_vel', 'palm_pos', 'obj_pos', 'obj_rot', 'target_pos', 'nail_impact', 'tool_pos', 'goal_pos', 'hand_vel']
DEFAULT_RWD_KEYS_AND_WEIGHTS = {
'palm_obj': 1.0,
'tool_target': 1.0,
'target_goal': 1.0,
'smooth': 1.0,
'bonus': 1.0
} # DAPG
class HammerEnvV1(env_base.MujocoEnv):
def __init__(self, model_path, obsd_model_path=None, seed=None, **kwargs):
gym.utils.EzPickle.__init__(self, model_path, obsd_model_path, seed, **kwargs)
super().__init__(model_path=model_path, obsd_model_path=obsd_model_path, seed=seed)
self._setup(**kwargs)
def _setup(self,
frame_skip=5,
reward_mode="dense",
obs_keys=DEFAULT_OBS_KEYS,
weighted_reward_keys=DEFAULT_RWD_KEYS_AND_WEIGHTS,
**kwargs
):
# ids
sim = self.sim
self.target_obj_sid = sim.model.site_name2id('S_target')
self.S_grasp_sid = sim.model.site_name2id('S_grasp')
self.obj_bid = sim.model.body_name2id('object')
self.tool_sid = sim.model.site_name2id('tool')
self.goal_sid = sim.model.site_name2id('nail_goal')
self.target_bid = sim.model.body_name2id('nail_board')
self.nail_rid = sim.model.sensor_name2id('S_nail')
# change actuator sensitivity
sim.model.actuator_gainprm[sim.model.actuator_name2id('A_WRJ1'):sim.model.actuator_name2id('A_WRJ0')+1,:3] = np.array([10, 0, 0])
sim.model.actuator_gainprm[sim.model.actuator_name2id('A_FFJ3'):sim.model.actuator_name2id('A_THJ0')+1,:3] = np.array([1, 0, 0])
sim.model.actuator_biasprm[sim.model.actuator_name2id('A_WRJ1'):sim.model.actuator_name2id('A_WRJ0')+1,:3] = np.array([0, -10, 0])
sim.model.actuator_biasprm[sim.model.actuator_name2id('A_FFJ3'):sim.model.actuator_name2id('A_THJ0')+1,:3] = np.array([0, -1, 0])
# scales
self.act_mid = np.mean(sim.model.actuator_ctrlrange, axis=1)
self.act_rng = 0.5*(sim.model.actuator_ctrlrange[:,1]-sim.model.actuator_ctrlrange[:,0])
super()._setup(obs_keys=obs_keys,
weighted_reward_keys=weighted_reward_keys,
reward_mode=reward_mode,
frame_skip=frame_skip,
**kwargs)
def get_reward_dict(self, obs_dict):
# get to hammer
palm_obj_dist = np.linalg.norm(obs_dict['palm_pos'] - obs_dict['obj_pos'], axis=-1)
# take hammer head to nail
tool_target_dist = np.linalg.norm(obs_dict['tool_pos'] - obs_dict['target_pos'], axis=-1)
# make nail go inside
target_goal_dist = np.linalg.norm(obs_dict['target_pos'] - obs_dict['goal_pos'], axis=-1)
# vel magnitude (handled differently in DAPG)
hand_vel_mag = np.linalg.norm(obs_dict['hand_vel'], axis=-1)
obj_vel_mag = np.linalg.norm(obs_dict['obj_vel'], axis=-1)
# lifting tool
obj_pos = obs_dict['obj_pos'][:,:,2] if obs_dict['obj_pos'].ndim==3 else obs_dict['obj_pos'][2]
lifted = (obj_pos > 0.04) * (obj_pos > 0.04)
rwd_dict = collections.OrderedDict((
# Optional Keys
('palm_obj', - 0.1 * palm_obj_dist),
('tool_target', -1.0 * tool_target_dist),
('target_goal', -10.0 * target_goal_dist),
('smooth', -1e-2 * (hand_vel_mag + obj_vel_mag)),
('bonus', 2.0*lifted + 25.0*(target_goal_dist<0.020) + 75.0*(target_goal_dist<0.010)),
# Must keys
('sparse', -1.0*target_goal_dist),
('solved', target_goal_dist<0.010),
('done', palm_obj_dist > 1.0),
))
rwd_dict['dense'] = np.sum([wt*rwd_dict[key] for key, wt in self.rwd_keys_wt.items()], axis=0)
return rwd_dict
def get_obs_dict(self, sim):
# qpos for hand, xpos for obj, xpos for target
obs_dict = {}
obs_dict['t'] = np.array([sim.data.time])
obs_dict['hand_jnt'] = sim.data.qpos[:-6].copy()
obs_dict['obj_vel'] = np.clip(sim.data.qvel[-6:].copy(), -1.0, 1.0)
obs_dict['palm_pos'] = sim.data.site_xpos[self.S_grasp_sid].copy()
obs_dict['obj_pos'] = sim.data.body_xpos[self.obj_bid].copy()
obs_dict['obj_rot'] = quat2euler(sim.data.body_xquat[self.obj_bid].copy())
obs_dict['target_pos'] = sim.data.site_xpos[self.target_obj_sid].copy()
obs_dict['nail_impact'] = np.clip(sim.data.sensordata[self.nail_rid], [-1.0], [1.0])
# keys missing from DAPG-env but needed for rewards calculations
obs_dict['tool_pos'] = sim.data.site_xpos[self.tool_sid].copy()
obs_dict['goal_pos'] = sim.data.site_xpos[self.goal_sid].copy()
obs_dict['hand_vel'] = np.clip(sim.data.qvel[:-6].copy(), -1.0, 1.0)
return obs_dict
def reset_model(self):
self.sim.reset()
self.sim.model.body_pos[self.target_bid,2] = self.np_random.uniform(low=0.1, high=0.25)
self.sim.forward()
return self.get_obs()
def get_env_state(self):
"""
Get state of hand as well as objects and targets in the scene
"""
qpos = self.sim.data.qpos.ravel().copy()
qvel = self.sim.data.qvel.ravel().copy()
board_pos = self.sim.model.body_pos[self.sim.model.body_name2id('nail_board')].copy()
target_pos = self.sim.data.site_xpos[self.target_obj_sid].ravel().copy()
return dict(qpos=qpos, qvel=qvel, board_pos=board_pos, target_pos=target_pos)
def set_env_state(self, state_dict):
"""
Set the state which includes hand as well as objects and targets in the scene
"""
qp = state_dict['qpos']
qv = state_dict['qvel']
board_pos = state_dict['board_pos']
self.set_state(qp, qv)
self.sim.model.body_pos[self.sim.model.body_name2id('nail_board')] = board_pos
self.sim.forward() | /robohive-0.3.0-py3-none-any.whl/mj_envs/envs/hand_manipulation_suite/hammer_v1.py | 0.465145 | 0.255634 | hammer_v1.py | pypi |
import collections
import gym
import numpy as np
from mj_envs.utils.vector_math import calculate_cosine
from mj_envs.utils.quat_math import euler2quat
from mj_envs.envs import env_base
DEFAULT_OBS_KEYS = ['hand_jnt', 'obj_pos', 'obj_vel', 'obj_rot', 'obj_des_rot', 'obj_err_pos', 'obj_err_rot']
DEFAULT_RWD_KEYS_AND_WEIGHTS = {'pos_align':1.0, 'rot_align':1.0, 'drop':1.0, 'bonus':1.0}
class PenEnvV1(env_base.MujocoEnv):
def __init__(self, model_path, obsd_model_path=None, seed=None, **kwargs):
gym.utils.EzPickle.__init__(self, model_path, obsd_model_path, seed, **kwargs)
super().__init__(model_path=model_path, obsd_model_path=obsd_model_path, seed=seed)
self._setup(**kwargs)
def _setup(self,
frame_skip=5,
reward_mode="dense",
obs_keys=DEFAULT_OBS_KEYS,
weighted_reward_keys=DEFAULT_RWD_KEYS_AND_WEIGHTS,
**kwargs):
# ids
sim = self.sim
self.target_obj_bid = sim.model.body_name2id("target")
self.S_grasp_sid = sim.model.site_name2id('S_grasp')
self.obj_bid = sim.model.body_name2id('Object')
self.eps_ball_sid = sim.model.site_name2id('eps_ball')
self.obj_t_sid = sim.model.site_name2id('object_top')
self.obj_b_sid = sim.model.site_name2id('object_bottom')
self.tar_t_sid = sim.model.site_name2id('target_top')
self.tar_b_sid = sim.model.site_name2id('target_bottom')
self.pen_length = np.linalg.norm(sim.model.site_pos[self.obj_t_sid] - sim.model.site_pos[self.obj_b_sid])
self.tar_length = np.linalg.norm(sim.model.site_pos[self.tar_t_sid] - sim.model.site_pos[self.tar_b_sid])
# change actuator sensitivity
sim.model.actuator_gainprm[sim.model.actuator_name2id('A_WRJ1'):sim.model.actuator_name2id('A_WRJ0')+1,:3] = np.array([10, 0, 0])
sim.model.actuator_gainprm[sim.model.actuator_name2id('A_FFJ3'):sim.model.actuator_name2id('A_THJ0')+1,:3] = np.array([1, 0, 0])
sim.model.actuator_biasprm[sim.model.actuator_name2id('A_WRJ1'):sim.model.actuator_name2id('A_WRJ0')+1,:3] = np.array([0, -10, 0])
sim.model.actuator_biasprm[sim.model.actuator_name2id('A_FFJ3'):sim.model.actuator_name2id('A_THJ0')+1,:3] = np.array([0, -1, 0])
# scales
self.act_mid = np.mean(sim.model.actuator_ctrlrange, axis=1)
self.act_rng = 0.5*(sim.model.actuator_ctrlrange[:,1]-sim.model.actuator_ctrlrange[:,0])
super()._setup(obs_keys=obs_keys,
weighted_reward_keys=weighted_reward_keys,
reward_mode=reward_mode,
frame_skip=frame_skip,
**kwargs)
def get_obs_dict(self, sim):
# qpos for hand, xpos for obj, xpos for target
obs_dict = {}
obs_dict['t'] = np.array([sim.data.time])
obs_dict['hand_jnt'] = sim.data.qpos[:-6].copy()
obs_dict['obj_pos'] = sim.data.body_xpos[self.obj_bid].copy()
obs_dict['obj_des_pos'] = sim.data.site_xpos[self.eps_ball_sid].ravel()
obs_dict['obj_vel'] = sim.data.qvel[-6:].copy()
obs_dict['obj_rot'] = (sim.data.site_xpos[self.obj_t_sid] - sim.data.site_xpos[self.obj_b_sid])/self.pen_length
obs_dict['obj_des_rot'] = (sim.data.site_xpos[self.tar_t_sid] - sim.data.site_xpos[self.tar_b_sid])/self.tar_length
obs_dict['obj_err_pos'] = obs_dict['obj_pos']-obs_dict['obj_des_pos']
obs_dict['obj_err_rot'] = obs_dict['obj_rot']-obs_dict['obj_des_rot']
return obs_dict
def get_reward_dict(self, obs_dict):
pos_err = obs_dict['obj_err_pos']
pos_align = np.linalg.norm(pos_err, axis=-1)
rot_align = calculate_cosine(obs_dict['obj_rot'], obs_dict['obj_des_rot'])
# dropped = obs_dict['obj_pos'][:,:,2] < 0.075
obj_pos = obs_dict['obj_pos'][:,:,2] if obs_dict['obj_pos'].ndim==3 else obs_dict['obj_pos'][2]
dropped = obj_pos < 0.075
rwd_dict = collections.OrderedDict((
# Optional Keys
('pos_align', -1.0*pos_align),
('rot_align', 1.0*rot_align),
('drop', -5.0*dropped),
('bonus', 10.0*(rot_align > 0.9)*(pos_align<0.075) + 50.0*(rot_align > 0.95)*(pos_align<0.075) ),
# Must keys
('sparse', -1.0*pos_align+rot_align),
('solved', (rot_align > 0.95)*(~dropped)),
('done', dropped),
))
rwd_dict['dense'] = np.sum([wt*rwd_dict[key] for key, wt in self.rwd_keys_wt.items()], axis=0)
return rwd_dict
def reset_model(self):
qp = self.init_qpos.copy()
qv = self.init_qvel.copy()
self.set_state(qp, qv)
desired_orien = np.zeros(3)
desired_orien[0] = self.np_random.uniform(low=-1, high=1)
desired_orien[1] = self.np_random.uniform(low=-1, high=1)
self.sim.model.body_quat[self.target_obj_bid] = euler2quat(desired_orien)
self.sim.forward()
return self.get_obs()
def get_env_state(self):
"""
Get state of hand as well as objects and targets in the scene
"""
qp = self.sim.data.qpos.ravel().copy()
qv = self.sim.data.qvel.ravel().copy()
desired_orien = self.sim.model.body_quat[self.target_obj_bid].ravel().copy()
return dict(qpos=qp, qvel=qv, desired_orien=desired_orien)
def set_env_state(self, state_dict):
"""
Set the state which includes hand as well as objects and targets in the scene
"""
qp = state_dict['qpos']
qv = state_dict['qvel']
desired_orien = state_dict['desired_orien']
self.set_state(qp, qv)
self.sim.model.body_quat[self.target_obj_bid] = desired_orien
self.sim.forward() | /robohive-0.3.0-py3-none-any.whl/mj_envs/envs/hand_manipulation_suite/pen_v1.py | 0.478773 | 0.249299 | pen_v1.py | pypi |
import collections
import gym
import numpy as np
from mj_envs.envs import env_base
# NOTES:
# 1. why is qpos[0] not a part of the obs? ==> Hand translation isn't consistent due to randomization. Palm pos is a good substitute
# OBS_KEYS = ['hand_jnt', 'latch_pos', 'door_pos', 'palm_pos', 'handle_pos', 'reach_err', 'door_open'] # DAPG
DEFAULT_OBS_KEYS = ['hand_jnt', 'latch_pos', 'door_pos', 'palm_pos', 'handle_pos', 'reach_err']
# RWD_KEYS = ['reach', 'open', 'smooth', 'bonus'] # DAPG
DEFAULT_RWD_KEYS_AND_WEIGHTS = {'reach':1.0, 'open':1.0, 'bonus':1.0}
class DoorEnvV1(env_base.MujocoEnv):
def __init__(self, model_path, obsd_model_path=None, seed=None, **kwargs):
gym.utils.EzPickle.__init__(self, model_path, obsd_model_path, seed, **kwargs)
super().__init__(model_path=model_path, obsd_model_path=obsd_model_path, seed=seed)
self._setup(**kwargs)
def _setup(self,
frame_skip=5,
reward_mode="dense",
obs_keys=DEFAULT_OBS_KEYS,
weighted_reward_keys=DEFAULT_RWD_KEYS_AND_WEIGHTS,
**kwargs
):
# ids
sim = self.sim
self.door_hinge_did = sim.model.jnt_dofadr[sim.model.joint_name2id('door_hinge')]
self.grasp_sid = sim.model.site_name2id('S_grasp')
self.handle_sid = sim.model.site_name2id('S_handle')
self.door_bid = sim.model.body_name2id('frame')
# change actuator sensitivity
sim.model.actuator_gainprm[sim.model.actuator_name2id('A_WRJ1'):sim.model.actuator_name2id('A_WRJ0')+1,:3] = np.array([10, 0, 0])
sim.model.actuator_gainprm[sim.model.actuator_name2id('A_FFJ3'):sim.model.actuator_name2id('A_THJ0')+1,:3] = np.array([1, 0, 0])
sim.model.actuator_biasprm[sim.model.actuator_name2id('A_WRJ1'):sim.model.actuator_name2id('A_WRJ0')+1,:3] = np.array([0, -10, 0])
sim.model.actuator_biasprm[sim.model.actuator_name2id('A_FFJ3'):sim.model.actuator_name2id('A_THJ0')+1,:3] = np.array([0, -1, 0])
# scales
self.act_mid = np.mean(sim.model.actuator_ctrlrange, axis=1)
self.act_rng = 0.5*(sim.model.actuator_ctrlrange[:,1]-sim.model.actuator_ctrlrange[:,0])
super()._setup(obs_keys=obs_keys,
weighted_reward_keys=weighted_reward_keys,
reward_mode=reward_mode,
frame_skip=frame_skip,
**kwargs)
def get_obs_dict(self, sim):
# qpos for hand, xpos for obj, xpos for target
obs_dict = {}
obs_dict['t'] = np.array([sim.data.time])
obs_dict['hand_jnt'] = sim.data.qpos[1:-2].copy()
obs_dict['hand_vel'] = sim.data.qvel[:-2].copy()
obs_dict['handle_pos'] = sim.data.site_xpos[self.handle_sid].copy()
obs_dict['palm_pos'] = sim.data.site_xpos[self.grasp_sid].copy()
obs_dict['reach_err'] = obs_dict['palm_pos']-obs_dict['handle_pos']
obs_dict['door_pos'] = np.array([sim.data.qpos[self.door_hinge_did]])
obs_dict['latch_pos'] = np.array([sim.data.qpos[-1]])
obs_dict['door_open'] = 2.0*(obs_dict['door_pos'] > 1.0) -1.0
return obs_dict
def get_reward_dict(self, obs_dict):
reach_dist = np.linalg.norm(self.obs_dict['reach_err'], axis=-1)
door_pos = obs_dict['door_pos'][:,:,0] if obs_dict['door_pos'].ndim==3 else obs_dict['door_pos'][0]
rwd_dict = collections.OrderedDict((
# Optional Keys
('reach', -0.1* reach_dist),
('open', -0.1*(door_pos - 1.57)*(door_pos - 1.57)),
('bonus', 2*(door_pos > 0.2) + 8*(door_pos > 1.0) + 10*(door_pos > 1.35)),
# Must keys
('sparse', door_pos),
('solved', door_pos > 1.35),
('done', reach_dist > 1.0),
))
rwd_dict['dense'] = np.sum([wt*rwd_dict[key] for key, wt in self.rwd_keys_wt.items()], axis=0)
return rwd_dict
def reset_model(self):
qp = self.init_qpos.copy()
qv = self.init_qvel.copy()
self.set_state(qp, qv)
self.sim.model.body_pos[self.door_bid,0] = self.np_random.uniform(low=-0.3, high=-0.2)
self.sim.model.body_pos[self.door_bid, 1] = self.np_random.uniform(low=0.25, high=0.35)
self.sim.model.body_pos[self.door_bid,2] = self.np_random.uniform(low=0.252, high=0.35)
self.sim.forward()
return self.get_obs()
def get_env_state(self):
"""
Get state of hand as well as objects and targets in the scene
"""
qp = self.sim.data.qpos.ravel().copy()
qv = self.sim.data.qvel.ravel().copy()
door_body_pos = self.sim.model.body_pos[self.door_bid].ravel().copy()
return dict(qpos=qp, qvel=qv, door_body_pos=door_body_pos)
def set_env_state(self, state_dict):
"""
Set the state which includes hand as well as objects and targets in the scene
"""
qp = state_dict['qpos']
qv = state_dict['qvel']
self.set_state(qp, qv)
self.sim.model.body_pos[self.door_bid] = state_dict['door_body_pos']
self.sim.forward() | /robohive-0.3.0-py3-none-any.whl/mj_envs/envs/hand_manipulation_suite/door_v1.py | 0.475849 | 0.364042 | door_v1.py | pypi |
import collections
import enum
import gym
import numpy as np
from mj_envs.envs import env_base
# Define the task enum
class Task(enum.Enum):
MOVE_TO_LOCATION = 0
BAODING_CW = 1
BAODING_CCW = 2
# Choose task
WHICH_TASK = Task.BAODING_CCW
class BaodingFixedEnvV1(env_base.MujocoEnv):
DEFAULT_OBS_KEYS = [
'hand_pos',
'object1_pos', 'object1_velp',
'object2_pos', 'object2_velp',
'target1_pos', 'target2_pos', # can be removed since we are adding err
'target1_err', 'target2_err', # New in V1 (not tested for any adverse effects)
]
DEFAULT_RWD_KEYS_AND_WEIGHTS = {
'pos_dist_1':5.0,
'pos_dist_2':5.0,
'drop_penalty':500.0,
'wrist_angle':0.5, # V0: had -10 if wrist>0.15, V1 has contineous rewards (not tested for any adverse effects)
'act_reg':1.0, # new in V1
'bonus':1.0 # new in V1
}
MOVE_TO_LOCATION_RWD_KEYS_AND_WEIGHTS = {
'pos_dist_1':5.0,
'drop_penalty':500.0,
'act_reg':1.0, # new in V1
'bonus':1.0 # new in V1
}
def __init__(self, model_path, obsd_model_path=None, seed=None, **kwargs):
# EzPickle.__init__(**locals()) is capturing the input dictionary of the init method of this class.
# In order to successfully capture all arguments we need to call gym.utils.EzPickle.__init__(**locals())
# at the leaf level, when we do inheritance like we do here.
# kwargs is needed at the top level to account for injection of __class__ keyword.
# Also see: https://github.com/openai/gym/pull/1497
gym.utils.EzPickle.__init__(self, model_path, obsd_model_path, seed, **kwargs)
# This two step construction is required for pickling to work correctly. All arguments to all __init__
# calls must be pickle friendly. Things like sim / sim_obsd are NOT pickle friendly. Therefore we
# first construct the inheritance chain, which is just __init__ calls all the way down, with env_base
# creating the sim / sim_obsd instances. Next we run through "setup" which relies on sim / sim_obsd
# created in __init__ to complete the setup.
super().__init__(model_path=model_path, obsd_model_path=obsd_model_path, seed=seed)
self._setup(**kwargs)
def _setup(self,
frame_skip:int=40,
n_shifts_per_period:int=-1, # n_shifts/ rotation for target update (-1 contineous)
obs_keys:list = DEFAULT_OBS_KEYS,
weighted_reward_keys:list = DEFAULT_RWD_KEYS_AND_WEIGHTS,
**kwargs,
):
# user parameters
self.which_task = Task(WHICH_TASK)
self.drop_height_threshold = 0.06
self.proximity_threshold = 0.015
self.n_shifts_per_period = n_shifts_per_period
# balls start at these angles
# 1= yellow = bottom right
# 2= pink = top left
self.ball_1_starting_angle = 3.*np.pi/4.0
self.ball_2_starting_angle = -1.*np.pi/4.0
# init desired trajectory, for rotations
if self.which_task!=Task.MOVE_TO_LOCATION:
self.x_radius = 0.03
self.y_radius = 0.02 * 1.5 * 1.2
self.center_pos = [0.005, 0.06]
self.counter=0
self.goal = self.create_goal_trajectory(time_step=frame_skip*self.sim.model.opt.timestep, time_period=6)
# init target and body sites
self.object1_sid = self.sim.model.site_name2id('ball1_site')
self.object2_sid = self.sim.model.site_name2id('ball2_site')
self.target1_sid = self.sim.model.site_name2id('target1_site')
self.target2_sid = self.sim.model.site_name2id('target2_site')
# configure envs
if self.which_task==Task.MOVE_TO_LOCATION:
# move baoding targets out of the way
for sim in [self.sim, self.sim_obsd]:
sim.model.site_pos[self.target1_sid, 0] = 0
sim.model.site_pos[self.target1_sid, 2] = 0.05
sim.model.site_pos[self.target1_sid, 1] = 0
sim.model.site_pos[self.target2_sid, 0] = 0
sim.model.site_pos[self.target2_sid, 2] = 0.05
sim.model.site_pos[self.target2_sid, 1] = 0
# make 2nd object invisible
object2_gid = self.sim.model.geom_name2id('ball2')
sim.model.geom_rgba[object2_gid,3] = 0
sim.model.site_rgba[self.object2_sid,3] = 0
# update target1 to be move target
self.target1_sid = self.sim.model.site_name2id('move_target_site')
# update rewards to be move rewards
weighted_reward_keys = self.MOVE_TO_LOCATION_RWD_KEYS_AND_WEIGHTS
super()._setup(obs_keys=obs_keys,
weighted_reward_keys=weighted_reward_keys,
frame_skip=frame_skip,
**kwargs,
)
# reset position
self.init_qpos = self.sim.model.key_qpos[0].copy()
# self.init_qpos[:-14] *= 0 # Use fully open as init pos
# V0: Centered the action space around key_qpos[0]. Not sure if it matter.
# self.act_mid = self.init_qpos[:self.n_jnt].copy()
# self.upper_rng = 0.9*(self.sim.model.actuator_ctrlrange[:,1]-self.act_mid)
# self.lower_rng = 0.9*(self.act_mid-self.sim.model.actuator_ctrlrange[:,0])
# V0: Used strict pos and velocity bounds (This matter and needs to implemented ???)
# pos_bounds=[[-40, 40]] * self.n_dofs, #dummy
# vel_bounds=[[-3, 3]] * self.n_dofs,
def step(self, a):
if self.which_task==Task.MOVE_TO_LOCATION:
desired_pos = self.goal[self.counter].copy()
# update both sims with desired targets
for sim in [self.sim, self.sim_obsd]:
# update target 1
sim.model.site_pos[self.target1_sid, 0] = desired_pos[0]
sim.model.site_pos[self.target1_sid, 1] = desired_pos[1]
sim.model.site_pos[self.target1_sid, 2] = desired_pos[2]+0.02
else :
desired_angle_wrt_palm = self.goal[self.counter].copy()
desired_angle_wrt_palm[0] = desired_angle_wrt_palm[0] + self.ball_1_starting_angle
desired_angle_wrt_palm[1] = desired_angle_wrt_palm[1] + self.ball_2_starting_angle
desired_positions_wrt_palm = [0,0,0,0]
desired_positions_wrt_palm[0] = self.x_radius*np.cos(desired_angle_wrt_palm[0]) + self.center_pos[0]
desired_positions_wrt_palm[1] = self.y_radius*np.sin(desired_angle_wrt_palm[0]) + self.center_pos[1]
desired_positions_wrt_palm[2] = self.x_radius*np.cos(desired_angle_wrt_palm[1]) + self.center_pos[0]
desired_positions_wrt_palm[3] = self.y_radius*np.sin(desired_angle_wrt_palm[1]) + self.center_pos[1]
# update both sims with desired targets
for sim in [self.sim, self.sim_obsd]:
sim.model.site_pos[self.target1_sid, 0] = desired_positions_wrt_palm[0]
sim.model.site_pos[self.target1_sid, 2] = desired_positions_wrt_palm[1]
sim.model.site_pos[self.target2_sid, 0] = desired_positions_wrt_palm[2]
sim.model.site_pos[self.target2_sid, 2] = desired_positions_wrt_palm[3]
# move upward, to be seen
sim.model.site_pos[self.target1_sid, 1] = -0.07
sim.model.site_pos[self.target2_sid, 1] = -0.07
self.counter +=1
# V0: mean center and scaled differently
# a[a>0] = self.act_mid[a>0] + a[a>0]*self.upper_rng[a>0]
# a[a<=0] = self.act_mid[a<=0] + a[a<=0]*self.lower_rng[a<=0]
return super().step(a)
def get_obs_dict(self, sim):
obs_dict = {}
obs_dict['t'] = np.array([sim.data.time])
obs_dict['hand_pos'] = sim.data.qpos[:-14].copy() # 7*2 for ball's free joint, rest should be hand
# object positions
obs_dict['object1_pos'] = sim.data.site_xpos[self.object1_sid].copy()
obs_dict['object2_pos'] = sim.data.site_xpos[self.object2_sid].copy()
# object translational velocities (V0 didn't normalize with dt)
obs_dict['object1_velp'] = sim.data.qvel[-12:-9].copy()*self.dt
obs_dict['object2_velp'] = sim.data.qvel[-6:-3].copy()*self.dt # there was a bug in V0 here
# site locations in world frame, populated after the step/forward call
obs_dict['target1_pos'] = sim.data.site_xpos[self.target1_sid].copy() # V0 was <x,y>, no z dim
obs_dict['target2_pos'] = sim.data.site_xpos[self.target2_sid].copy() # V0 was <x,y>, no z dim
# object position error
obs_dict['target1_err'] = obs_dict['target1_pos'] - obs_dict['object1_pos'] # this wasn't a part of V0
obs_dict['target2_err'] = obs_dict['target2_pos'] - obs_dict['object2_pos'] # this wasn't a part of V0
# muscle activations
if sim.model.na>0:
obs_dict['act'] = sim.data.act[:].copy()
return obs_dict
def get_reward_dict(self, obs_dict):
# tracking error
target1_dist = np.linalg.norm(obs_dict['target1_err'], axis=-1)
target2_dist = np.linalg.norm(obs_dict['target2_err'], axis=-1)
target_dist = target1_dist if self.which_task==Task.MOVE_TO_LOCATION else (target1_dist+target2_dist)
act_mag = np.linalg.norm(self.obs_dict['act'], axis=-1)/self.sim.model.na if self.sim.model.na !=0 else 0
# wrist pose err (New in V1)
hand_pos = obs_dict['hand_pos'][:,:,:3] if obs_dict['hand_pos'].ndim==3 else obs_dict['hand_pos'][:3]
wrist_pose_err = np.linalg.norm(hand_pos*np.array([5,0.5,1]), axis=-1)
# V0: penalize wrist angle for lifting up (positive) too much
# wrist_threshold = 0.15
# wrist_too_high = zeros
# wrist_too_high[wrist_angle>wrist_threshold] = 1
# self.reward_dict['wrist_angle'] = -10 * wrist_too_high
# detect fall
object1_pos = obs_dict['object1_pos'][:,:,2] if obs_dict['object1_pos'].ndim==3 else obs_dict['object1_pos'][2]
object2_pos = obs_dict['object2_pos'][:,:,2] if obs_dict['object2_pos'].ndim==3 else obs_dict['object2_pos'][2]
is_fall_1 = object1_pos < self.drop_height_threshold
is_fall_2 = object2_pos < self.drop_height_threshold
is_fall = is_fall_1 if self.which_task==Task.MOVE_TO_LOCATION else np.logical_or(is_fall_1, is_fall_2) # keep single/ both balls up
rwd_dict = collections.OrderedDict((
# Optional Keys
('pos_dist_1', -1.*target1_dist), # V0 had only xy, V1 has xyz
('pos_dist_2', -1.*target2_dist), # V0 had only xy, V1 has xyz
('drop_penalty', -1.*is_fall),
('wrist_angle', -1.*wrist_pose_err), # V0 had -10 if wrist>0.15
('act_reg', -1.*act_mag),
('bonus', 1.*(target1_dist < self.proximity_threshold)+1.*(target2_dist < self.proximity_threshold)+4.*(target1_dist < self.proximity_threshold)*(target2_dist < self.proximity_threshold)),
# Must keys
('sparse', -target_dist), # V0 had only xy, V1 has xyz
('solved', (target1_dist < self.proximity_threshold)*(target2_dist < self.proximity_threshold)*(~is_fall)),
('done', is_fall),
))
rwd_dict['dense'] = np.sum([wt*rwd_dict[key] for key, wt in self.rwd_keys_wt.items()], axis=0)
return rwd_dict
def reset(self, reset_pose=None, reset_vel=None, reset_goal=None, time_period=6):
# reset counters
self.counter=0
# reset goal
self.goal = self.create_goal_trajectory(time_period=time_period) if reset_goal is None else reset_goal.copy()
# reset scene
obs = super().reset(reset_qpos=reset_pose, reset_qvel=reset_vel)
return obs
def create_goal_trajectory(self, time_step=.1, time_period=6):
len_of_goals = 1000 # assumes that its greator than env horizon
# populate go-to task with a target location
if self.which_task==Task.MOVE_TO_LOCATION:
goal_pos = np.random.randint(4)
desired_position = []
if goal_pos==0:
desired_position.append(0.01) #x
desired_position.append(0.04) #y
desired_position.append(0.2) #z
elif goal_pos==1:
desired_position.append(0)
desired_position.append(-0.06)
desired_position.append(0.24)
elif goal_pos==2:
desired_position.append(-0.02)
desired_position.append(-0.02)
desired_position.append(0.2)
else:
desired_position.append(0.03)
desired_position.append(-0.02)
desired_position.append(0.2)
goal_traj = np.tile(desired_position, (len_of_goals, 1))
# populate baoding task with a trajectory of goals to hit
else:
goal_traj = []
if self.which_task==Task.BAODING_CW:
sign = -1
if self.which_task==Task.BAODING_CCW:
sign = 1
# Target updates in continuous circle
if self.n_shifts_per_period==-1:
t = 0
while t < len_of_goals:
angle_before_shift = sign * 2 * np.pi * (t * time_step / time_period)
goal_traj.append(np.array([angle_before_shift, angle_before_shift]))
t += 1
# Target updates in shifts (e.g. half turns, quater turns)
else:
t = 0
angle_before_shift = 0
steps_per_shift = np.ceil(time_period/(self.n_shifts_per_period*time_step)) # V0 had steps_per_fourth=7, which was twice faster than the period=60step
while t < len_of_goals:
if(t>0 and t%steps_per_shift==0):
angle_before_shift += 2.0*np.pi/self.n_shifts_per_period
goal_traj.append(np.array([angle_before_shift, angle_before_shift]))
t += 1
goal_traj = np.array(goal_traj)
return goal_traj
class BaodingRandomEnvV1(BaodingFixedEnvV1):
def reset(self):
obs = super().reset(time_period = self.np_random.uniform(high=5, low=7))
return obs | /robohive-0.3.0-py3-none-any.whl/mj_envs/envs/hand_manipulation_suite/baoding_v1.py | 0.590425 | 0.262109 | baoding_v1.py | pypi |
import requests
import os
import sseclient
from datetime import datetime
import urllib.parse
class RKClient:
"""Main class for interacting with IDM. See Documentation for details."""
def __init__(
self, creds: dict, server: str = "https://idm.robokami.com", **kwargs
) -> None:
"""Initializes the client.
Args:
creds: Dictionary containing credentials. Required keys are: username and password.
server: Server address. Defaults to https://idm.robokami.com.
initiate_login: If True, initiates login upon initialization. Defaults to True.
initiate_stream: If True, initiates stream upon initialization. Defaults to False.
"""
self.server = server
self.creds = creds
if kwargs.get("initiate_login", True):
self.authorize()
if kwargs.get("initiate_stream", False) and self.session_token is not None:
self.stream()
def authorize(self):
"""
Authorizes the client to IDM. If successful, session_token is set. If not, session_token is set to None.
"""
res = requests.get(
os.path.join(self.server, "login"),
json={"credentials": self.creds},
timeout=15,
)
if res.status_code == 200:
self.session_token = res.json()["session_token"]
self.iat = datetime.now().timestamp()
else:
print("Login failed")
self.session_token = None
def renew_session(self):
"""
Renews the session token.
"""
self.authorize()
def place_order(self, d: dict) -> dict:
"""
Places an order to IDM. Order details are given as a dictionary. Wrapper to trade_command function.
Args:
d: Dictionary containing order details. Required keys are: c (contract), position ('bid' or 'ask'), price, and lots. order_status and order_note are optional.
"""
d["order_status"] = d.get("order_status", "active")
d["order_note"] = d.get("order_note", "RK-TRADER")
return self.trade_command("place_order", d)
def update_order(self, d) -> dict:
"""
Updates an order existing in IDM. Update details are given as a dictionary. Wrapper to trade_command function.
Args:
d: Dictionary containing order update details. Required keys are: order_id and c.
"""
if "order_id" not in d.keys():
return {"status": "error", "message": "order_id is required"}
d["contract_type"] = "hourly" if d["c"].startswith("PH") else "block"
d["order_note"] = d.get("order_note", "RK-TRADER")
return self.trade_command("update_order", d)
def get_net_positions(self, is_block: bool = False) -> dict:
"""
Gets the net positions of the user. Wrapper to trade_command function.
Args:
is_block: If True, returns block positions. If False, returns hourly positions. Defaults to False.
"""
return self.trade_command(
"net_positions", {"contract_type": "block" if is_block else "hourly"}
)
def trade_command(self, command: str, d: dict) -> dict:
"""
Main trade commands function. See Documentation for details.
Args:
command: Name of the command.
d: Dictionary containing command details.
Returns:
Dictionary containing the response from IDM.
"""
command_phrase = urllib.parse.urljoin(self.server, ("trade/" + command))
res = requests.post(
command_phrase,
headers={"Authorization": self.session_token},
json=d,
)
if res.status_code == 200:
return res.json()
else:
print("Failed response code: " + str(res.status_code))
print(res.json())
return res.json()
def stream(self):
"""
Initiates a stream connection to IDM. Stream is available at self.stream_client.
"""
response = requests.get(
os.path.join(self.server, "stream"),
headers={"Authorization": self.session_token},
stream=True,
)
self.stream_client = sseclient.SSEClient(response) | /robokami_py-0.12-py3-none-any.whl/robokami/main.py | 0.797478 | 0.222151 | main.py | pypi |
import copy
from datetime import datetime
from robokami.main import RKClient
def prepare_order(
d: dict,
price: int | float = None,
lots: int = None,
demo: bool = True,
order_note: str = None,
order_status: str = None,
):
"""Wrapper function for sending trade order.
Args:
d (dict): Order dictionary
price (int | float, optional): Price of the order. Defaults to None.
lots (int, optional): Number of lots. Defaults to None.
demo (bool, optional): If True, order is placed in demo mode. Defaults to True.
order_note (str, optional): Order note. Defaults to None.
order_status (str, optional): Order status. Defaults to None.
Returns:
dict: Order dictionary
str: Command phrase
"""
d2 = copy.deepcopy(d)
trade_command_phrase = "pass"
order_id = d2.get("order_id", None)
if demo or order_id is None or order_id == "DEMO":
d2["order_id"] = "DEMO" if demo else None
d2["price"] = price
d2["lots"] = lots
d2["order_status"] = "active" if order_status is None else order_status
if demo:
d2["ts"] = datetime.now().timestamp()
else:
trade_command_phrase = "place_order"
else:
update_order = False
if price != d["price"] and price is not None:
d2["price"] = price
update_order = True
if lots != d["lots"] and lots is not None:
d2["lots"] = lots
update_order = True
if order_status != d["order_status"] and order_status is not None:
d2["order_status"] = order_status
update_order = True
if update_order:
trade_command_phrase = "update_order"
if order_note is not None:
d["order_note"] = order_note
return d2, trade_command_phrase
def send_trade_order(client: RKClient, d: dict, command_phrase: str):
"""Wrapper function for sending trade order.
Args:
client (Client): Client object
d (dict): Order dictionary
command_phrase (str): Command phrase
"""
if command_phrase == "place_order":
res = client.place_order(d)
if res["status"] == "success":
d["order_id"] = res["order_id"]
d["ts"] = datetime.now().timestamp()
else:
raise Exception("Order could not be placed")
elif command_phrase == "update_order":
res = client.update_order(
d=d,
)
if res["status"] == "success":
# d["order_id"] = res[0]["response"]
d["ts"] = datetime.now().timestamp()
elif res["detail"] == "partial_match_occured":
d["order_status"] = "cancelled"
res = client.update_order(d=d)
if res["status"] == "success":
d["order_id"] = None
elif res["detail"] == "order_cannot_be_updated":
d["order_id"] = None
else:
raise Exception("Order could not be placed")
elif command_phrase == "cancel_order":
d["order_status"] = "cancelled"
res = client.update_order(d=d)
if res["status"] == "success":
d["order_id"] = None
else:
raise Exception("Invalid command phrase")
return res, d | /robokami_py-0.12-py3-none-any.whl/robokami/order.py | 0.68763 | 0.259268 | order.py | pypi |
from greent.node_types import node_types, DRUG_NAME, DISEASE_NAME, UNSPECIFIED
from greent.util import Text
from greent.program import Program
from greent.program import QueryDefinition
class Transition:
def __init__(self, last_type, next_type, min_path_length, max_path_length):
self.in_type = last_type
self.out_type = next_type
self.min_path_length = min_path_length
self.max_path_length = max_path_length
self.in_node = None
self.out_node = None
def generate_reverse(self):
return Transition(self.out_type, self.in_type, self.min_path_length, self.max_path_length)
@staticmethod
def get_fstring(ntype):
if ntype == DRUG_NAME or ntype == DISEASE_NAME:
return 'n{0}{{name:"{1}"}}'
if ntype is None:
return 'n{0}'
else:
return 'n{0}:{1}'
def generate_concept_cypher_pathstring(self, t_number):
end = f'(c{t_number+1}:Concept {{name: "{self.out_type}" }})'
pstring = ''
if t_number == 0:
start = f'(c{t_number}:Concept {{name: "{self.in_type}" }})\n'
pstring += start
if self.max_path_length > 1:
pstring += f'-[:translation*{self.min_path_length}..{self.max_path_length}]-\n'
else:
pstring += '--\n'
pstring += f'{end}\n'
return pstring
class UserQuery:
"""This is the class that the rest of builder uses to interact with a query."""
def __init__(self, start_values, start_type, lookup_node):
"""Create an instance of UserQuery. Takes a starting value and the type of that value"""
self.query = None
self.definition = QueryDefinition()
# Value for the original node
self.definition.start_values = start_values
self.definition.start_type = start_type
self.definition.end_values = None
# The term used to create the initial point
self.definition.start_lookup_node = lookup_node
# List of user-level types that we must pass through
self.add_node(start_type)
def add_node(self, node_type):
"""Add a node to the node list, validating the type
20180108: node_type may be None"""
# Our start node is more specific than this... Need to have another validation method
if node_type is not None and node_type not in node_types:
raise Exception('node type must be one of greent.node_types')
self.definition.node_types.append(node_type)
def add_transition(self, next_type, min_path_length=1, max_path_length=1, end_values=None):
"""Add another required node type to the path.
When a new node is added to the user query, the user is asserting that
the returned path must go through a node of this type. The default is
that the next node should be directly related to the previous. That is,
no other node types should be between the previous node and the current
node. There may be other nodes, but they will represent synonyms of
the previous or current node. This is defined using the
max_path_length input, which defaults to 1. On the other hand, a user
may wish to define that some number of other node types must be between
one node and another. This can be specified by the min_path_length,
which also defaults to 1. If indirect edges are demanded, this
parameter is set higher. If this is the final transition, a value for
the terminal node may be added. Attempting to add more transitions
after setting an end value will result in an exception. If this is the
terminal node, but it does not have a specified value, then no
end_value needs to be specified.
arguments: next_type: type of the output node from the transition.
Must be an element of reasoner.node_types.
min_path_length: The minimum number of non-synonym transitions
to get from the previous node to the added node
max_path_length: The maximum number of non-synonym transitions to get
from the previous node to the added node
end_value: Value of this node (if this is the terminal node, otherwise None)
"""
# validate some inputs
# TODO: subclass Exception
if min_path_length > max_path_length:
raise Exception('Maximum path length cannot be shorter than minimum path length')
if self.definition.end_values is not None:
raise Exception('Cannot add more transitions to a path with a terminal node')
# Add the node to the type list
self.add_node(next_type)
# Add the transition
t = Transition(self.definition.node_types[-2], next_type, min_path_length, max_path_length)
self.definition.transitions.append(t)
# Add the end_value
if end_values is not None:
self.definition.end_values = end_values
def add_end_lookup_node(self, lookup_node):
self.definition.end_lookup_node = lookup_node
def generate_cypher(self):
"""Generate a cypher query to find paths through the concept-level map."""
cypherbuffer = ['MATCH p=\n']
paths_parts = []
for t_number, transition in enumerate(self.definition.transitions):
paths_parts.append(transition.generate_concept_cypher_pathstring(t_number))
cypherbuffer.append( ''.join(paths_parts) )
last_node_i = len(self.definition.transitions)
cypherbuffer.append('FOREACH (n in relationships(p) | SET n.marked = TRUE)\n')
cypherbuffer.append(f'WITH p,c0,c{last_node_i}\n')
if self.definition.end_values is None:
cypherbuffer.append(f'MATCH q=(c0:Concept)-[*0..{last_node_i} {{marked:True}}]->(c{last_node_i}:Concept)\n')
else:
cypherbuffer.append(f'MATCH q=(c0:Concept)-[*0..{last_node_i} {{marked:True}}]->()<-[*0..{last_node_i} {{marked:True}}]-(c{last_node_i}:Concept)\n')
cypherbuffer.append('WHERE p=q\n')
#This is to make sure that we don't get caught up in is_a and other funky relations.:
cypherbuffer.append('AND ALL( r in relationships(p) WHERE EXISTS(r.op) )')
cypherbuffer.append('FOREACH (n in relationships(p) | SET n.marked = FALSE)\n')
cypherbuffer.append('RETURN p, EXTRACT( r in relationships(p) | startNode(r) ) \n')
return ''.join(cypherbuffer)
def compile_query(self, rosetta):
self.cypher = self.generate_cypher()
print(self.cypher)
plans = rosetta.type_graph.get_transitions(self.cypher)
self.programs = [Program(plan, self.definition, rosetta, i) for i,plan in enumerate(plans)]
return len(self.programs) > 0
def get_programs(self):
return self.programs
def get_terminal_nodes(self):
starts = set()
ends = set()
return self.query.get_terminal_types() | /robokop-interfaces-0.91.tar.gz/robokop-interfaces-0.91/greent/userquery.py | 0.645455 | 0.265953 | userquery.py | pypi |
import json
import graphene
from graphene import resolve_only_args
from datetime import datetime, timedelta
from dateutil.parser import parse as parse_date
from greent.core import GreenT
from greent.translator import Translation
# http://graphql.org/learn/introspection/
'''
class ExposureInterface (graphene.Interface):
start_time = graphene.String ()
end_time = graphene.String ()
exposure_type = graphene.String ()
latitude = graphene.String ()
longitude = graphene.String ()
units = graphene.String ()
value = graphene.String ()
'''
class ExposureInterface (graphene.Interface):
date_time = graphene.String ()
latitude = graphene.String ()
longitude = graphene.String ()
value = graphene.String ()
class ExposureScore (graphene.ObjectType):
class Meta:
interfaces = (ExposureInterface, )
class ExposureValue (graphene.ObjectType):
class Meta:
interfaces = (ExposureInterface, )
class ExposureCondition (graphene.ObjectType):
chemical = graphene.String ()
gene = graphene.String ()
pathway = graphene.String ()
pathName = graphene.String ()
pathID = graphene.String ()
human = graphene.String ()
class Drug(graphene.ObjectType):
generic_name = graphene.String ()
class GenePath(graphene.ObjectType):
uniprot_gene = graphene.String ()
kegg_path = graphene.String ()
path_name = graphene.String ()
human = graphene.String ()
# No map type in GraphQL: https://github.com/facebook/graphql/issues/101
class PatientVisit(graphene.ObjectType):
date = graphene.String ()
visit_type = graphene.String ()
class Location(graphene.ObjectType):
latitude = graphene.String ()
longitude = graphene.String ()
class Prescription(graphene.ObjectType):
medication = graphene.String ()
date = graphene.String ()
class Diagnosis(graphene.ObjectType):
diagnosis = graphene.String ()
visit = graphene.Field (PatientVisit)
class Patient(graphene.ObjectType):
birth_date = graphene.String ()
race = graphene.String ()
sex = graphene.String ()
patient_id = graphene.String ()
geo_code = graphene.Field (Location)
prescriptions = graphene.List (Prescription)
diagnoses = graphene.List (Diagnosis)
class DrugToDisease (graphene.ObjectType):
drug_name = graphene.String ()
target_name = graphene.String ()
disease_name = graphene.String ()
class Attribute(graphene.ObjectType):
key = graphene.String ()
value = graphene.String ()
class Thing(graphene.ObjectType):
value = graphene.String ()
attributes = graphene.List (of_type=Attribute)
greenT = GreenT (config='greent.conf', override={
"clinical_url" : "http://localhost:5000/patients"
})
class GreenQuery (graphene.ObjectType):
endotype = graphene.List(of_type=graphene.String,
query=graphene.String ())
exposure_score = graphene.List (of_type=ExposureScore,
exposureType = graphene.String (),
startDate = graphene.String (),
endDate = graphene.String (),
exposurePoint = graphene.String ())
exposure_value = graphene.List (of_type=ExposureValue,
exposureType = graphene.String (),
startDate = graphene.String (),
endDate = graphene.String (),
exposurePoint = graphene.String ())
patients = graphene.List (of_type=Patient,
age=graphene.Int (),
race=graphene.String (),
sex=graphene.String ())
exposure_conditions = graphene.List (of_type=ExposureCondition,
chemicals = graphene.List(graphene.String))
drugs_by_condition = graphene.List(of_type=Drug,
conditions = graphene.List(graphene.String))
gene_paths_by_disease = graphene.List (of_type=GenePath,
diseases = graphene.List(graphene.String))
drug_gene_disease = graphene.List (of_type=DrugToDisease,
drug_name = graphene.String (),
disease_name = graphene.String ())
translate = graphene.List (of_type = Thing,
thing = graphene.String (),
domainA = graphene.String (),
domainB = graphene.String ())
'''
class Attribute(graphene.ObjectType):
key = graphene.String ()
value = graphene.String ()
class Thing(graphene.ObjectType):
value = graphene.String ()
attributes = graphene.List (of_type=Attribute)
'''
def resolve_endotype (obj, args, context, info):
return greenT.endotype.get_endotype (json.loads (args.get("query")))
def resolve_translate (obj, args, context, info):
translation = Translation (obj=args.get("thing"),
type_a=args.get("domainA"),
type_b=args.get("domainB"))
return list (map (lambda v : Thing (value=v), greenT.translator.translate (translation)))
def resolve_exposure_score (obj, args, context, info):
result = None
result = greenT.get_exposure_scores (
exposure_type = args.get ("exposureType"),
start_date = args.get ("startDate"),
end_date = args.get ("endDate"),
exposure_point = args.get ("exposurePoint"))
out = []
for r in result['scores']:
latitude, longitude = r['latLon'].split (",")
out.append (ExposureValue (date_time = datetime.strftime (r['dateTime'], "%Y-%m-%d"),
latitude = latitude,
longitude = longitude,
value = r['score']))
return out
def resolve_exposure_value (obj, args, context, info):
result = None
result = greenT.get_exposure_values (
exposure_type = args.get ("exposureType"),
start_date = args.get ("startDate"),
end_date = args.get ("endDate"),
exposure_point = args.get ("exposurePoint"))
out = []
for r in result['values']:
latitude, longitude = r['latLon'].split (",")
out.append (ExposureValue (date_time = datetime.strftime (r['dateTime'], "%Y-%m-%d"),
latitude = latitude,
longitude = longitude,
value = r['value']))
return out
def resolve_patients (obj, args, context, info):
result = None
result = greenT.get_patients (
age = args.get ("age"),
sex = args.get ("sex"),
race = args.get ("race"),
location = args.get ("location"))
out = []
for r in result:
diagnoses = []
for key, value in r['diag'].items ():
visit_date = list (value)[0]
visit_type = value[visit_date]
diagnosis = list (list (value)[0])[1]
diagnoses.append (Diagnosis (
diagnosis = key,
visit = PatientVisit (date = visit_date,
visit_type = visit_type)))
prescriptions = []
for key, value in r['medList'].items ():
prescriptions.append (Prescription (
medication = key,
date = value))
out.append (Patient (
birth_date = r['birth_date'],
race = r['race'],
sex = r['sex'],
patient_id = r['patient_id'],
geo_code = Location (r['geoCode']['GEO:LAT'], r['geoCode']['GEO:LONG']),
prescriptions = prescriptions,
diagnoses = diagnoses))
return out
def resolve_exposure_conditions (obj, args, context, info):
obj = args.get ("chemicals")
result = greenT.get_exposure_conditions (chemicals = obj)
if result:
out = []
for r in result:
out.append (ExposureCondition (
chemical = r["chemical"],
gene = r["gene"],
pathway = r["pathway"],
pathName = r["pathName"],
pathID = r["pathID"],
human = r["human"] ))
result = out
return result
def resolve_drugs_by_condition (obj, args, context, info):
conditions = args.get ("conditions")
diseases = greenT.get_drugs_by_condition (conditions = conditions)
return list(map(lambda s : Drug(s), diseases))
def resolve_gene_paths_by_disease (obj, args, context, info):
diseases = args.get ("diseases")
gene_paths = greenT.get_genes_pathways_by_disease (diseases = diseases)
return list(map(lambda g : GenePath (
uniprot_gene = g['uniprotGene'],
kegg_path = g['keggPath'],
path_name = g['pathName'],
human = g['human']), gene_paths))
def resolve_drug_gene_disease (obj, args, context, info):
drug_name = args.get ("drug_name")
disease_name = args.get ("disease_name")
paths = greenT.get_drug_gene_disease (disease_name=disease_name, drug_name=drug_name)
return list(map(lambda dd : DrugToDisease (
drug_name = drug_name,
target_name = dd['uniprotSymbol'],
disease_name = disease_name), paths))
Schema = graphene.Schema(query=GreenQuery)
'''
{
translate (thing:"Imatinib",
domainA: "http://chem2bio2rdf.org/drugbank/resource/Generic_Name",
domainB: "http://chem2bio2rdf.org/uniprot/resource/gene")
{
type
value
}
}
{
translate (thing:"DOID:2841",
domainA: "http://identifiers.org/doid/",
domainB: "http://identifiers.org/mesh/disease/id")
{
type
value
}
}
{
translate (thing:"Asthma",
domainA : "http://identifiers.org/mesh/disease/name/",
domainB : "http://identifiers.org/mesh/drug/name/")
{
type
value
}
}
{
exposureValue(exposureType: "pm25",
startDate: "2010-01-06",
endDate: "2010-01-06",
exposurePoint: "35.9131996,-79.0558445") {
value
}
}
{
exposureScore(exposureType: "pm25",
startDate: "2010-01-06",
endDate: "2010-01-06",
exposurePoint: "35.9131996,-79.0558445") {
value
}
}
{
exposureConditions (chemicals: [ "D052638" ] ) {
chemical
gene
pathway
pathName
pathID
human
}
}
{
drugsByCondition (conditions: [ "d001249" ] ) {
genericName
}
}
{
genePathsByDisease (diseases: [ "d001249" ] ) {
uniprotGene
keggPath
pathName
}
}
{
patients {
birthDate
patientId
geoCode {
latitude
longitude
}
prescriptions {
date
medication
}
diagnoses {
diagnosis
visit {
date
visitType
}
}
}
}
{
__type(name: "Patient") {
name
fields {
name
type {
name
kind
}
}
}
}
''' | /robokop-interfaces-0.91.tar.gz/robokop-interfaces-0.91/greent/schema.py | 0.567218 | 0.195306 | schema.py | pypi |
import logging
import traceback
from collections import defaultdict
from greent.graph_components import KNode, KEdge
from greent.util import LoggingUtil
logger = LoggingUtil.init_logging(__file__, level=logging.DEBUG)
class QueryDefinition:
"""Defines a query"""
def __init__(self):
self.start_values = None
self.start_type = None
self.end_values = None
self.node_types = []
self.transitions = []
self.start_lookup_node = None
self.end_lookup_node = None
class Program:
def __init__(self, plan, query_definition, rosetta, program_number):
self.program_number = program_number
self.concept_nodes = plan[0]
self.transitions = plan[1]
self.rosetta = rosetta
self.unused_instance_nodes = set()
self.all_instance_nodes = set()
self.initialize_instance_nodes(query_definition)
self.linked_results = []
self.start_nodes = []
self.end_nodes = []
def initialize_instance_nodes(self, query_definition):
logger.debug("Initializing program {}".format(self.program_number))
t_node_ids = self.get_fixed_concept_nodes()
self.start_nodes = [KNode(start_identifier, self.concept_nodes[t_node_ids[0]]) for start_identifier in
query_definition.start_values]
self.add_instance_nodes(self.start_nodes, t_node_ids[0])
if len(t_node_ids) == 1:
if query_definition.end_values is not None:
raise Exception(
"We only have one set of fixed nodes in the query plan, but multiple sets of fixed instances")
return
if len(t_node_ids) == 2:
if query_definition.end_values is None:
raise Exception(
"We have multiple fixed nodes in the query plan but only one set of fixed instances")
self.end_nodes = [KNode(start_identifier, self.concept_nodes[t_node_ids[-1]]) for start_identifier in
query_definition.end_values]
self.add_instance_nodes(self.end_nodes, t_node_ids[-1])
return
raise Exception("We don't yet support more than 2 instance-specified nodes")
def get_fixed_concept_nodes(self):
"""Fixed concept nodes are those that only have outputs"""
nodeset = set(self.transitions.keys())
for transition in self.transitions.values():
nodeset.discard(transition['to']) #Discard doesn't raise error if 'to' not in nodeset
fixed_node_identifiers = list(nodeset)
fixed_node_identifiers.sort()
return fixed_node_identifiers
def add_instance_nodes(self, nodelist, context):
"""We've got a new set of nodes (either initial nodes or from a query). They are attached
to a particular concept in our query plan. We make sure that they're synonymized and then
add them to both all_instance_nodes as well as the unused_instance_nodes"""
for node in nodelist:
self.rosetta.synonymizer.synonymize(node)
node.add_context(self.program_number, context)
self.all_instance_nodes.update(nodelist)
self.unused_instance_nodes.update([(node, context) for node in nodelist])
def run_program(self):
"""Loop over unused nodes, send them to the appropriate operator, and collect the results.
Keep going until there's no nodes left to process."""
logger.debug(f"Running program {self.program_number}")
while len(self.unused_instance_nodes) > 0:
source_node, context = self.unused_instance_nodes.pop()
if context not in self.transitions:
continue
link = self.transitions[context]
next_context = link['to']
op_name = link['op']
key = f"{op_name}({source_node.identifier})"
log_text = " -- {key}"
try:
results = self.rosetta.cache.get (key)
if results is not None:
logger.info (f"cache hit: {key} size:{len(results)}")
else:
logger.info (f"exec op: {key}")
op = self.rosetta.get_ops(op_name)
results = op(source_node)
self.rosetta.cache.set (key, results)
newnodes = []
for r in results:
edge = r[0]
if isinstance(edge, KEdge):
edge.predicate = link['link']
edge.source_node = source_node
edge.target_node = r[1]
self.linked_results.append(edge)
newnodes.append(r[1])
logger.debug(f" {newnodes}")
logger.debug (f"cache.set-> {key} length:{len(results)}")
self.add_instance_nodes(newnodes,next_context)
except Exception as e:
traceback.print_exc()
logger.error("Error invoking> {0}".format(log_text))
logger.debug(" {} nodes remaining.".format(len(self.unused_instance_nodes)))
return self.linked_results
def get_results(self):
return self.linked_results
def get_path_descriptor(self):
"""Return a description of valid paths at the concept level. The point is to have a way to
find paths in the final graph. By starting at one end of this, you can get to the other end(s).
So it assumes an acyclic graph, which may not be valid in the future. What it should probably
return in the future (if we still need it) is a cypher query to find all the paths this program
might have made."""
path={}
used = set()
node_num = 0
used.add(node_num)
while len(used) != len(self.concept_nodes):
next = None
if node_num in self.transitions:
putative_next = self.transitions[node_num]['to']
if putative_next not in used:
next = putative_next
dir = 1
if next is None:
for putative_next in self.transitions:
ts = self.transitions[putative_next]
if ts['to'] == node_num:
next = putative_next
dir = -1
if next is None:
logger.error("How can this be? No path across the data?")
raise Exception()
path[node_num] = (next, dir)
node_num = next
used.add(node_num)
return path | /robokop-interfaces-0.91.tar.gz/robokop-interfaces-0.91/greent/program.py | 0.649467 | 0.223292 | program.py | pypi |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.