code stringlengths 114 1.05M | path stringlengths 3 312 | quality_prob float64 0.5 0.99 | learning_prob float64 0.2 1 | filename stringlengths 3 168 | kind stringclasses 1
value |
|---|---|---|---|---|---|
# rokso-migrations
A NOT so simple database migrations utility for Postgresql based database migration in python.
Rokso for Postgresql supports multi-schema migrations of a single database.
## Features
* Create your migrations simply with CLI.
* Suitable for large projects because we maintain migration files under a dedicated directory of a database object
* Reverse engineer your migrations from existing database.
* Check database state like `git status`.
## Installation
**This is work in progress and the package is still not properly published.**
```
pip install roksopsql
```
or
```
pip3 install roksopsql
```
## Usage
To see what rokso can do:
```
➜ roksopsql --help
Usage: roksopsql [OPTIONS] COMMAND [ARGS]...
Options:
--help Show this message and exit.
Commands:
create ➕ create a database migration.
init 🚀 init your migration project. configures db connection
parameters
last-success ⤵️ last successful migration version number
migrate ⤴️ Apply all outstanding migrations to database.
remap 🔄 Reverse engineer your DB migrations from existing database.
rollback ⤵️ Rollback last applied migration
status ✅ checks the current state of database and pending migrations
```
### Setup
#### DB setup
Let's say for a database `tutorial` we have two schemas. `online` and `offline`, offline being primary schema. We'll create one table and one database function in `offline` schema and another table in `online` schema.
```
> psql
postgres=# create database tutorial;
postgres=# \c tutorial
tutorial=# create schema offline;
tutorial=# create schema online;
```
There are many ways to initiate your project.
To start create a directory where you want to create project
```
➜ mkdir tutorial
➜ cd tutorial
➜ tutorial ✗ roksopsql init
Enter path to setup project: .
Enter database hostname : /var/www/projects/python/rokso/tutorial
Enter database port [Default:5432] [5432]:
Enter database name : tutorial
Enter database username : pguser
Enter database password:
Enter a schema name [Default:public] [public]: offline
working directory:: /var/www/projects/python/rokso/tutorial
[*] Checking state of config file in CWD
[*] Config file has been created
[#] Generating required dir(s) if not exist
PostgreSQL server information
{'user': 'pguser', 'dbname': 'tutorial', 'host': 'localhost', 'port': '5432', 'tty': '', 'options': '', 'sslmode': 'prefer', 'sslcompression': '0', 'krbsrvname': 'postgres', 'target_session_attrs': 'any'}
You are connected to - PostgreSQL 13.2 on x86_64-apple-darwin19.6.0, compiled by Apple clang version 12.0.0 (clang-1200.0.32.29), 64-bit
Executing>>
CREATE TABLE IF NOT EXISTS offline.rokso_db_version (
id serial PRIMARY KEY,
filename text NOT NULL,
version varchar(100) NOT NULL,
status VARCHAR(20) DEFAULT 'pending' NOT NULL,
scheduledAt timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
executedAt timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT filename_UNQ UNIQUE (filename)
);
query completed successfully..
>> Time taken: 0.0203 secs
```
The above command does following things:
- Creates a directory `migration` under the project directory. This directory holds the migration sqls for database.
- Creates a file `config.json` which holds the connection string to Database.
- Creates a version control table `rokso_db_version` in the database with given schema.
Check all contents now
```
➜ tutorial ✗ ll
-rw-r--r-- 1 user staff 192B 29 Mar 19:11 config.json
drwxr-xr-x 2 user staff 64B 29 Mar 19:11 migration
```
Check the table in database
```
psql>\d+ offline.rokso_db_version;
+-------------+--------------+------+-----+-------------------+-------------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+--------------+------+-----+-------------------+-------------------+
| id | int | NO | PRI | NULL | auto_increment |
| filename | varchar(255) | NO | UNI | NULL | |
| version | varchar(100) | NO | | NULL | |
| status | varchar(20) | NO | | pending | |
| scheduledAt | timestamp | NO | | CURRENT_TIMESTAMP | DEFAULT_GENERATED |
| executedAt | timestamp | NO | | CURRENT_TIMESTAMP | DEFAULT_GENERATED |
+-------------+--------------+------+-----+-------------------+-------------------+
```
Now we are ready for creating our new migrations
### Create migrations
Rokso can generate migrations for your tables, materialized views, views, functions and custom data types. They all are organized under their respective directories.
#### NOTE: For a multi-schema migrations Rokso assumes that database exits with all schemas that you want to manage.
To create a new migration run following command:
```
➜ tutorial git:(master) ✗ roksopsql create
Enter the schema name [public]: offline
Do you want to create a
[T]able
[V]iew
[M]aterialized View
[F]unctions
[D]atabase Type: (T, V, M, F, D) [T]: T
Enter table/procedure/function name that you want to create this migration for.: user_master
Enter a file name for this migration.: create_table_user_master
creating a migration ...........
working directory:: /var/www/projects/python/rokso/tutorial
migration filepath:: /var/www/projects/python/rokso/tutorial/migration/offline/200.tables/user_master
[*] migration file 2021_03_29__19_14_28_create_table_user_master.py has been generated
```
Now you can see a new file under migration directory has been generated:
```
➜ tutorial git:(master) ✗ ll migration
total 0
drwxr-xr-x 3 user staff 29 Mar 19:14 offline
➜ tutorial git:(master) ✗ ll migration/offline
total 0
drwxr-xr-x 3 user staff 96B 29 Mar 19:14 200.tables
➜ tutorial git:(master) ✗ ll migration/offline/200.tables
total 0
drwxr-xr-x 3 user staff 96B 29 Mar 19:14 user_master
➜ tutorial git:(master) ✗ ll migration/offline/200.tables/user_master
total 8
-rw-r--r-- 1 user staff 171B 29 Mar 19:14 2021_03_29__19_14_28_create_table_user_master.py
➜ tutorial git:(master) ✗ cat migration/offline/200.tables/user_master/2021_03_29__19_14_28_create_table_user_master.py
apply_sql = """
WRITE your DDL/DML query here
"""
rollback_sql = "WRITE your ROLLBACK query here."
migrations = {
"apply": apply_sql,
"rollback": rollback_sql
}
```
Now you can edit this file and add the DDL/INSERTS/UPDATES in `apply_sql` and its extremely important to write `rollback_sql`. However if you do not want a rollback statement then leave the `rollback_sql` empty and Rokso will not report an error while executing or rolling back migrations.
### Apply/Run migrations
After you have written your DDLs/DMLs in migration files, we are ready to carry out the migration, i.e. make a database change.
Let's create more migrations.
```
# create a migration for a database function.
➜ tutorial git:(master) ✗ roksopsql create
Enter the schema name [public]: offline
Do you want to create a
[T]able
[V]iew
[M]aterialized View
[F]unctions
[D]atabase Type: (T, V, M, F, D) [T]: F
Enter table/procedure/function name that you want to create this migration for.: generate_booking_number
Enter a file name for this migration: create_function_generate_booking_number
creating a migration ...........
working directory:: /var/www/projects/python/rokso/tutorial
migration filepath:: /var/www/projects/python/rokso/tutorial/migration/offline/500.functions/generate_booking_number
[*] migration file 2021_03_29__19_34_09_create_function_generate_booking_number.py has been generated
```
One more migration for another schema `online`
```
➜ tutorial git:(master) ✗ roksopsql create
Enter the schema name [public]: online
Do you want to create a
[T]able
[V]iew
[M]aterialized View
[F]unctions
[D]atabase Type: (T, V, M, F, D) [T]: T
Enter table/procedure/function name that you want to create this migration for: website_user
Enter a file name for this migration.: create_table_website_user
creating a migration ...........
working directory:: /var/www/projects/python/rokso/tutorial
migration filepath:: /var/www/projects/python/rokso/tutorial/migration/online/200.tables/website_user
[*] migration file 2021_03_29__19_37_31_create_table_website_user.py has been generated
```
After the files are generated, write your DDLs/DMLs into those files.
#### Now check database status
Rokso shows you last few successful migrations and also pending migrations if any.
```
➜ tutorial git:(master) ✗ roksopsql status
Executing>> SELECT * FROM offline.rokso_db_version
>> Time taken: 0.0012secs
Last few successful migrations:
id filename version status scheduledat executedat
---- ---------- --------- -------- ------------- ------------
Pending migrations for application:
filename version status
------------------------------------------------------------------------------------------------------------- --------- --------
offline/200.tables/user_master/2021_03_29__19_14_28_create_table_user_master.py NA pending
offline/500.functions/generate_booking_number/2021_03_29__19_34_09_create_function_generate_booking_number.py NA pending
online/200.tables/website_user/2021_03_29__19_37_31_create_table_website_user.py NA pending
```
In this case we don't have any prior migrations recorded in DB because we started with fresh database.
#### Applying single migration
```
➜ tutorial git:(master) ✗ roksopsql migrate --migration offline/200.tables/user_master/2021_03_29__19_14_28_create_table_user_master.py
Executing>> SELECT * FROM offline.rokso_db_version
>> Time taken: 0.0017secs
🌀Applying migration file: offline/200.tables/user_master/2021_03_29__19_14_28_create_table_user_master.py
Executing>>
CREATE TABLE IF NOT EXISTS offline.user_master (
id serial PRIMARY KEY,
user_name varchar(255) NOT NULL,
email varchar(100) NOT NULL,
user_password VARCHAR(50) NOT NULL,
createdAt timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
updatedAt timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT email_UNQ UNIQUE (email)
);
query completed successfully..
>> Time taken: 0.0321 secs
Executing>>
INSERT INTO offline.rokso_db_version
(filename, version, status, scheduledAt, executedAt)
VALUES('offline/200.tables/user_master/2021_03_29__19_14_28_create_table_user_master.py', 'b23b4a2d', 'complete', CURRENT_TIMESTAMP, CURRENT_TIMESTAMP)
ON CONFLICT (filename) DO UPDATE SET status = 'complete', version = 'b23b4a2d', executedAt=CURRENT_TIMESTAMP;
query completed successfully..
>> Time taken: 0.0077 secs
✅ Your database is at revision# b23b4a2d
```
Checking status again:
```
➜ tutorial git:(master) ✗ roksopsql status
Executing>> SELECT * FROM offline.rokso_db_version
>> Time taken: 0.004secs
Last few successful migrations:
id filename version status scheduledat executedat
---- ------------------------------------------------------------------------------- --------- -------- -------------------------- --------------------------
1 offline/200.tables/user_master/2021_03_29__19_14_28_create_table_user_master.py b23b4a2d complete 2021-03-29 20:04:01.691033 2021-03-29 20:04:01.691033
Pending migrations for application:
filename version status
------------------------------------------------------------------------------------------------------------- --------- --------
offline/500.functions/generate_booking_number/2021_03_29__19_34_09_create_function_generate_booking_number.py NA pending
online/200.tables/website_user/2021_03_29__19_37_31_create_table_website_user.py NA pending
```
Now we have prior migration with revision number and rest of the pending migrations.
#### Applying all outstanding migrations
```
➜ tutorial git:(master) ✗ roksopsql migrate
Executing>> SELECT * FROM offline.rokso_db_version
>> Time taken: 0.0054secs
🌀Applying migration file: offline/500.functions/generate_booking_number/2021_03_29__19_34_09_create_function_generate_booking_number.py
Executing>>
CREATE OR REPLACE FUNCTION offline.generate_booking_number()
RETURNS character varying
LANGUAGE plpgsql
AS $function$
declare
str_str varchar;
output_str varchar;
year_var integer;
day_var integer;
begin
SELECT array_to_string(ARRAY(SELECT chr((65 + round(random() * 25)) :: integer) into str_str
FROM generate_series(1,15)), '');
select substring(str_str, 2, 4) into str_str;
SELECT date_part('year', CURRENT_TIMESTAMP) into year_var;
SELECT 700 + date_part('doy', CURRENT_TIMESTAMP) into day_var;
select concat(year_var, '-', day_var, '-', str_str) into output_str;
return output_str;
END;
$function$
;
query completed successfully..
>> Time taken: 0.0823 secs
Executing>>
INSERT INTO offline.rokso_db_version
(filename, version, status, scheduledAt, executedAt)
VALUES('offline/500.functions/generate_booking_number/2021_03_29__19_34_09_create_function_generate_booking_number.py', '22d0747c', 'complete', CURRENT_TIMESTAMP, CURRENT_TIMESTAMP)
ON CONFLICT (filename) DO UPDATE SET status = 'complete', version = '22d0747c', executedAt=CURRENT_TIMESTAMP;
query completed successfully..
>> Time taken: 0.0023 secs
🌀Applying migration file: online/200.tables/website_user/2021_03_29__19_37_31_create_table_website_user.py
Executing>>
CREATE TABLE IF NOT EXISTS online.website_user (
id serial PRIMARY KEY,
user_name varchar(255) NOT NULL,
email varchar(100) NOT NULL,
user_password VARCHAR(50) NOT NULL,
phone_number varchar(20) NOT NULL,
img_url varchar(250) NOT NULL,
createdAt timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
updatedAt timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT email_UNQ UNIQUE (email)
);
query completed successfully..
>> Time taken: 0.0202 secs
Executing>>
INSERT INTO offline.rokso_db_version
(filename, version, status, scheduledAt, executedAt)
VALUES('online/200.tables/website_user/2021_03_29__19_37_31_create_table_website_user.py', '22d0747c', 'complete', CURRENT_TIMESTAMP, CURRENT_TIMESTAMP)
ON CONFLICT (filename) DO UPDATE SET status = 'complete', version = '22d0747c', executedAt=CURRENT_TIMESTAMP;
query completed successfully..
>> Time taken: 0.0011 secs
✅ Your database is at revision# 22d0747c
```
Check the status again
```
➜ tutorial git:(master) ✗ roksopsql status
Executing>> SELECT * FROM offline.rokso_db_version
>> Time taken: 0.0049secs
Last few successful migrations:
id filename version status scheduledat executedat
---- ------------------------------------------------------------------------------------------------------------- --------- -------- -------------------------- --------------------------
1 offline/200.tables/user_master/2021_03_29__19_14_28_create_table_user_master.py b23b4a2d complete 2021-03-29 20:04:01.691033 2021-03-29 20:04:01.691033
2 offline/500.functions/generate_booking_number/2021_03_29__19_34_09_create_function_generate_booking_number.py 22d0747c complete 2021-03-29 20:09:50.909159 2021-03-29 20:09:50.909159
3 online/200.tables/website_user/2021_03_29__19_37_31_create_table_website_user.py 22d0747c complete 2021-03-29 20:09:50.932331 2021-03-29 20:09:50.932331
No new migration to process.
```
If all migrations are already carried out and you run `migrate` command again then rokso will do nothing, very much like `git commit`. **Also note that the revision number will be same to all files which are applied together.**
```
➜ tutorial git:(master) ✗ roksopsql migrate
Executing>> SELECT * FROM offline.rokso_db_version
>> Time taken: 0.0021secs
Nothing to migrate ....
```
It may happen while executing a series of migrations an error can occur in-between. e.g. Let's say 5 migrations(files) were in pipeline and then while execution third migration failed, in this case Rokso will still mark first two files as successful and further migration will stop.
### Rollback migrations
For rolling back migrations, rokso support two modes: last successful migration and rolling back to a particular version, just like `git reset`. To ensure rolling back actually works, make sure all the rollback SQLs are properly written in migration files.
#### Rolling back last migration
This step is simple enough.
```
➜ tutorial git:(master) ✗ roksopsql rollback
Executing>> SELECT * from offline.rokso_db_version ORDER BY id DESC LIMIT 1;
>> Time taken: 0.0055secs
Executing>> SELECT * FROM offline.rokso_db_version WHERE version = '22d0747c' ORDER BY id desc
>> Time taken: 0.0157secs
Following files will be rolledback:
id filename version status scheduledat executedat
---- -------------------------------------------------------------------------------- --------- -------- -------------------------- --------------------------
3 online/200.tables/website_user/2021_03_29__19_37_31_create_table_website_user.py 22d0747c complete 2021-03-29 20:09:50.932331 2021-03-29 20:09:50.932331
Please confirm to proceed(y/yes):y
🔄 Rolling back file:: online/200.tables/website_user/2021_03_29__19_37_31_create_table_website_user.py
Executing>> DROP TABLE IF EXISTS online.website_user;
query completed successfully..
>> Time taken: 0.0318 secs
Executing>> DELETE FROM offline.rokso_db_version WHERE id = 3 ;
query completed successfully..
>> Time taken: 0.0023 secs
✅ Rollback complete.
```
#### Rolling back to a specific version
Get status to identify which version to rollback
```
➜ tutorial git:(master) ✗ roksopsql status
Executing>> SELECT * FROM offline.rokso_db_version
>> Time taken: 0.001secs
Last few successful migrations:
id filename version status scheduledat executedat
---- ------------------------------------------------------------------------------------------------------------- --------- -------- -------------------------- --------------------------
7 offline/200.tables/user_master/2021_03_29__19_14_28_create_table_user_master.py bc5c6eb7 complete 2021-03-29 20:34:42.248132 2021-03-29 20:34:42.248132
8 offline/500.functions/generate_booking_number/2021_03_29__19_34_09_create_function_generate_booking_number.py 5fc1fec2 complete 2021-03-29 20:36:32.758463 2021-03-29 20:36:32.758463
9 online/200.tables/website_user/2021_03_29__19_37_31_create_table_website_user.py 5fc1fec2 complete 2021-03-29 20:36:32.765727 2021-03-29 20:36:32.765727
No new migration to process.
```
Choose a version number from output and supply it as argument.
```
➜ tutorial git:(master) ✗ roksopsql rollback --version bc5c6eb7
Executing>> SELECT * FROM offline.rokso_db_version WHERE scheduledAt > (SELECT scheduledAt FROM offline.rokso_db_version WHERE version = 'bc5c6eb7' ORDER BY id desc LIMIT 1) ORDER BY id DESC;
>> Time taken: 0.0058secs
Following files will be rolledback:
id filename version status scheduledat executedat
---- ------------------------------------------------------------------------------------------------------------- --------- -------- -------------------------- --------------------------
9 online/200.tables/website_user/2021_03_29__19_37_31_create_table_website_user.py 5fc1fec2 complete 2021-03-29 20:36:32.765727 2021-03-29 20:36:32.765727
8 offline/500.functions/generate_booking_number/2021_03_29__19_34_09_create_function_generate_booking_number.py 5fc1fec2 complete 2021-03-29 20:36:32.758463 2021-03-29 20:36:32.758463
Please confirm to proceed(y/yes):y
🔄 Rolling back file:: online/200.tables/website_user/2021_03_29__19_37_31_create_table_website_user.py
Executing>> DROP TABLE IF EXISTS online.website_user;
query completed successfully..
>> Time taken: 0.0167 secs
Executing>> DELETE FROM offline.rokso_db_version WHERE id = 9 ;
query completed successfully..
>> Time taken: 0.0007 secs
🔄 Rolling back file:: offline/500.functions/generate_booking_number/2021_03_29__19_34_09_create_function_generate_booking_number.py
Executing>> DROP FUNCTION IF EXISTS offline.generate_booking_number;
query completed successfully..
>> Time taken: 0.0088 secs
Executing>> DELETE FROM offline.rokso_db_version WHERE id = 8 ;
query completed successfully..
>> Time taken: 0.0051 secs
✅ Rollback complete.
```
### Reverse engineer your migrations
## Troubleshooting
**This code is not tested on windows machine.**
Some times when you run `rokso` over ssh or in some linux system you may get an error as follows:
```
$ roksopsql init --help
Traceback (most recent call last):
File "/usr/local/bin/roksopsql", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python3.6/site-packages/rokso/roksopsql.py", line 102, in main
return cli()
File "/usr/local/lib/python3.6/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/click/core.py", line 760, in main
_verify_python3_env()
File "/usr/local/lib/python3.6/site-packages/click/_unicodefun.py", line 130, in _verify_python3_env
" mitigation steps.{}".format(extra)
RuntimeError: Click will abort further execution because Python 3 was configured to use ASCII as encoding for the environment. Consult https://click.palletsprojects.com/python3/ for mitigation steps.
This system lists a couple of UTF-8 supporting locales that you can pick from. The following suitable locales were discovered: aa_DJ.utf8, aa_ER.utf8, aa_ET.utf8, af_ZA.utf8, am_ET.utf8, an_ES.utf8, ar_AE.utf8, ar_BH.utf8,
..............
..............
Click discovered that you exported a UTF-8 locale but the locale system could not pick up from it because it does not exist. The exported locale is 'en_US.UTF-8' but it is not supported
```
An easy fix could be set proper locale. Check available locales in system:
```
locale -a
```
or
```
locale -a |grep 'en_.*utf'
```
For us `en_US.utf8` worked. This can not be configured as below:
```
export LC_ALL=en_US.utf8
export LANG=en_US.utf8
```
| /roksopsql-0.2.4.tar.gz/roksopsql-0.2.4/README.md | 0.511717 | 0.819244 | README.md | pypi |
import socket
from collections import ChainMap
from typing import Dict, List
from .custom_types import DiscoveryData, SocketConnection
class Scanner:
"""
Handles device discovery and socket connection data parsing.
*Attributes:
discovery_timeout (int): Timeout for each device's discovery ping return.
discovered_devices (list[DiscoveryData]): List of any discovered devices data.
search_target (str): Determines whether M:Search will search for only Roku or any device UPnP capable. See Note
*Note:
only rokus: roku:ecp
all devices: upnp:rootdevice
"""
def __init__(self, discovery_timeout: int = 2, search_target: str = 'roku:ecp'):
self.discovery_timeout: int = discovery_timeout
self.discovered_devices: list = []
self.search_target: str = search_target
def discover(self, verbose: bool = False) -> List[DiscoveryData]:
"""
Sets up socket connection for SSDP discovery and handles formatting responses into a list of DiscoveryData
*Returns:
list[DiscoveryData] : A list of any discovered devices data
"""
ssdp_message: str = f'M-SEARCH * HTTP/1.1\r\n' \
f'HOST:239.255.255.250:1900\r\n' \
f'ST:{self.search_target}\r\n' \
f'MX:2\r\n' \
f'MAN:"ssdp:discover"\r\n' \
f'\r\n' \
socket_connection: SocketConnection = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
socket_connection.settimeout(self.discovery_timeout)
socket_connection.sendto(bytes(ssdp_message, 'utf8'), ('239.255.255.250', 1900))
try:
while True:
raw_data: tuple = socket_connection.recvfrom(65507)
data: bytes = raw_data[0]
device_data: DiscoveryData = self.parse_data(data=data)
if verbose:
print(f'Found Device {device_data["LOCATION"]}')
self.discovered_devices.append(device_data)
except socket.timeout:
pass
socket_connection.close()
return self.discovered_devices
def parse_data(self, data: bytes) -> Dict[str, str]:
"""
Parses raw byte data from socket connection headers into a dictionary. Does not add connection status code,
line 1 data example.
*Args:
data (bytes): raw bytes data from connection
*Returns:
dict (str, str)
*Example:
input:
b'HTTP/1.1 200 OK\r\n
Cache-Control: max-age=3600\r\n
ST: roku:ecp\r\n
USN: uuid:roku:ecp:YN00XF7876856\r\n
Ext:\r\n
Server: Roku/9.2.0 UPnP/1.0 Roku/9.2.0\r\n
LOCATION: http://127.0.0.1:8060/\r\n
device-group.roku.com: DD45456B11E45456E51\r\n
WAKEUP: MAC=e6-48-b0-c7-42-5c;Timeout=10\r\n
\r\n
\r\n'
output: {
'Cache-Control': 'max-age=3600',
'ST': 'roku:ecp',
'USN': 'uuid:roku:ecp:YN00XF7876856',
'Ext': '',
'Server': 'Roku/9.2.0 UPnP/1.0 Roku/9.2.0'
'LOCATION': 'http://127.0.0.1:8060/'
'device-group.roku.com': 'DD45456B11E45456E51'
'WAKEUP': 'MAC=e6-48-b0-c7-42-5c;Timeout=10'
}
"""
decoded_data: str = data.decode('utf8')
header_list: list = decoded_data.split('\n')
formatted_headers = [self.header_str_to_header_dict(header_str=header_str) for header_str in header_list[1:-2]]
return dict(ChainMap(*formatted_headers))
@staticmethod
def header_str_to_header_dict(header_str: str) -> Dict[str, str]:
"""
Formats socket connection header into a dictionary split at the :.
*Args:
header_str (str): header from connection response
*Returns:
dict (str, str): dictionary from header str
"""
key, value = header_str.split(':', 1)
return {key: value.strip()} | /roku-scanner-1.0.4.tar.gz/roku-scanner-1.0.4/roku_scanner/scanner.py | 0.830525 | 0.234002 | scanner.py | pypi |
import json
import copy
from src.utils.file_to_string_utils import FileToStringUtils
from src.driver.element import Element
from src.driver.by import By
from src.exceptions.no_such_element_exception import NoSuchElementException
class Finder:
session = None
http_client = None
def __init__(self, http_client, session):
self.session = session
self.http_client = http_client
"""
Searches for the existence of a locator within the device screen.
:param by: The locator to search for.
:returns: Element - An Element object containing details about the elements location and contents.
:raises NoSuchElementException: If the locator can not be found on the screen.
"""
def find_element(self, by):
by = FileToStringUtils().prepare_locator(by)
session = copy.deepcopy(self.session)
session['action'] = 'find'
session['element_locator'] = str(by)
element_json = self.__handler(self.http_client.post_to_server('element', session))
return Element(element_json)
"""
Searches for the existence of a locator within the device sub screen starting at
subScreenX, subScreenY, and with subScreenWidth and subScreenHeight.
:param by: The locator to search for in the device subscreen.
:param sub_screen_x: int - The x coordinate starting point of the subscreen.
:param sub_screen_y: int - The y coordinate starting point of the subscreen.
:param sub_screen_width: int - The subscreen width ending point.
:param sub_screen_height: int - The subscreen height ending point.
:returns: Element - An Element object containing details about the elements location and contents.
:raises NoSuchElementException: If the locator can not be found on the screen.
"""
def find_element_sub_screen(self, by, sub_screen_x, sub_screen_y, sub_screen_width, sub_screen_height):
by = FileToStringUtils().prepare_locator(by)
session = copy.deepcopy(self.session)
session['action'] = 'find'
session['element_locator'] = str(by)
session['sub_screen_x'] = int(sub_screen_x)
session['sub_screen_y'] = int(sub_screen_y)
session['sub_screen_width'] = int(sub_screen_width)
session['sub_screen_height'] = int(sub_screen_height)
element_json = self.__handler(self.http_client.post_to_server('element', session))
return Element(element_json)
"""
Searches for the existence of a locator within the device screen and
will return a collection of all matches. A NoSuchElementException will
NOT be thrown if an element is not found.
:param by: The locator to search for.
:returns: Element - A collection of Element objects containing details about the elements location and contents.
"""
def find_elements(self, by):
by = FileToStringUtils().prepare_locator(by)
session = copy.deepcopy(self.session)
session['action'] = 'find_all'
session['element_locator'] = str(by)
element_json = self.__handler(self.http_client.post_to_server('element', session))
return self.__get_elements(element_json)
def __handler(self, element_json):
if element_json['results'] != 'success':
raise NoSuchElementException(element_json['results'])
return element_json
def __get_elements(self, element_json):
if not 'all_elements' in element_json.keys():
return {}
element = element_json['all_elements']
all_elements = []
i = 0
count = len(element)
while i < count:
value = element[i]
value['session_id'] = self.session['session_id']
all_elements.append(Element(value))
i = i + 1
return all_elements | /rokuality-python-1.5.3.tar.gz/rokuality-python-1.5.3/src/driver/finder.py | 0.715921 | 0.214897 | finder.py | pypi |
import json
import copy
from src.exceptions.screen_exception import ScreenException
from src.utils.file_to_string_utils import FileToStringUtils
from src.driver.screen_text import ScreenText
from src.driver.screen_size import ScreenSize
class Screen:
session = None
http_client = None
def __init__(self, http_client, session):
self.session = session
self.http_client = http_client
"""
Gets the device screen image and saves it to a temporary file on your machine.
:returns: Path to the saved file of the device screen image at the time of capture.
:raises ScreenException: If the device screen fails to capture.
"""
def get_image(self):
session = copy.deepcopy(self.session)
session['action'] = 'get_screen_image'
screen_json = self.__handler(self.http_client.post_to_server('screen', session))
return FileToStringUtils().convert_to_file(screen_json["screen_image"], screen_json["screen_image_extension"])
"""
Gets the device sub screen image and saves it to a temporary file on your machine.
:param sub_screen_x: int - The x coordinate starting point of the subscreen to capture.
:param sub_screen_y: int - The y coordinate starting point of the subscreen to capture.
:param sub_screen_width: int - The subscreen width ending point to capture.
:param sub_screen_height: - int The subscreen height ending point to capture.
:returns: Path to the saved file of the device sub screen image at the time of capture.
:raises ScreenException: If the device sub screen fails to capture.
"""
def get_image_sub_screen(self, sub_screen_x, sub_screen_y, sub_screen_width, sub_screen_height):
session = copy.deepcopy(self.session)
session['action'] = 'get_screen_image'
session['sub_screen_x'] = int(sub_screen_x)
session['sub_screen_y'] = int(sub_screen_y)
session['sub_screen_width'] = int(sub_screen_width)
session['sub_screen_height'] = int(sub_screen_height)
screen_json = self.__handler(self.http_client.post_to_server('screen', session))
return FileToStringUtils().convert_to_file(screen_json["screen_image"], screen_json["screen_image_extension"])
"""
Gets the device screen text as a ScreenText collection with details about each found word on the screen.
:returns: ScreenText - A collection of ScreenText objects containing details of every found word on the device screen.
:raises ScreenException: If the device screen text fails to capture.
"""
def get_text(self):
session = copy.deepcopy(self.session)
session['action'] = 'get_screen_text'
screen_json = self.__handler(self.http_client.post_to_server('screen', session))
return self.__get_screen_texts(screen_json)
"""
Gets the device screen text from the identified device sub screen as a ScreenText
collection with details about each found word on the screen.
:param sub_screen_x: int - The x coordinate starting point of the subscreen to capture.
:param sub_screen_y: int - The y coordinate starting point of the subscreen to capture.
:param sub_screen_width: int - The subscreen width ending point to capture.
:param sub_screen_height: - int The subscreen height ending point to capture.
:returns: ScreenText - A collection of ScreenText objects containing details of every found word on the device sub screen.
:raises ScreenException: If the device sub screen text fails to capture.
"""
def get_text_sub_screen(self, sub_screen_x, sub_screen_y, sub_screen_width, sub_screen_height):
session = copy.deepcopy(self.session)
session['action'] = 'get_screen_text'
session['sub_screen_x'] = int(sub_screen_x)
session['sub_screen_y'] = int(sub_screen_y)
session['sub_screen_width'] = int(sub_screen_width)
session['sub_screen_height'] = int(sub_screen_height)
screen_json = self.__handler(self.http_client.post_to_server('screen', session))
return self.__get_screen_texts(screen_json)
"""
Gets the device screen text as a complete String.
:returns: String - A complete string of every word found on the screen.
:raises ScreenException: If the device screen text fails to capture.
"""
def get_text_as_string(self):
screen_texts = self.get_text()
constructed_screen_text = ""
for screen_text in screen_texts:
constructed_screen_text = constructed_screen_text + screen_text.get_text() + " "
return constructed_screen_text
"""
Gets the device sub screen text as a complete String.
:param sub_screen_x: int - The x coordinate starting point of the subscreen to capture.
:param sub_screen_y: int - The y coordinate starting point of the subscreen to capture.
:param sub_screen_width: int - The subscreen width ending point to capture.
:param sub_screen_height: - int The subscreen height ending point to capture.
:returns: String - A complete string of every word found on the sub screen.
:raises ScreenException: If the device screen text fails to capture.
"""
def get_text_as_string_sub_screen(self, sub_screen_x, sub_screen_y, sub_screen_width, sub_screen_height):
screen_texts = self.get_text_sub_screen(sub_screen_x, sub_screen_y, sub_screen_width, sub_screen_height)
constructed_screen_text = ""
for screen_text in screen_texts:
constructed_screen_text = constructed_screen_text + screen_text.get_text() + " "
return constructed_screen_text
"""
Gets the device screen size.
:returns: ScreenSize - The size of the device under test.
:raises ScreenException: If the device screen size is not determined.
"""
def get_screen_size(self):
session = copy.deepcopy(self.session)
session['action'] = 'get_screen_size'
screen_json = self.__handler(self.http_client.post_to_server('screen', session))
return ScreenSize(screen_json)
"""
Gets the device screen recording from the driver start to current. Note the recording is generated
in .mp4 format but is done through stitching the collected device screenshots together from the
start of the driver seesion - and the quality of the capture won't be the best. But very useful
for reporting and debugging.
:returns: Video - An .mp4 video of the driver session from start until current.
:raises ScreenException: If the video recording cannot be captured.
"""
def get_recording(self):
session = copy.deepcopy(self.session)
session['action'] = 'get_screen_recording'
screen_json = self.__handler(self.http_client.post_to_server('screen', session))
return FileToStringUtils().convert_to_file(screen_json["screen_video"], screen_json["screen_video_extension"])
def __get_screen_texts(self, screen_json):
screen_text_json = screen_json['screen_text']
all_screen_text = []
json_dict = json.loads(screen_text_json)
i = 0
count = len(json_dict)
while i < count:
value = json_dict[i]
all_screen_text.append(ScreenText(value))
i = i + 1
return all_screen_text
def __handler(self, element_json):
if element_json['results'] != 'success':
raise ScreenException(element_json['results'])
return element_json | /rokuality-python-1.5.3.tar.gz/rokuality-python-1.5.3/src/driver/screen.py | 0.745769 | 0.202818 | screen.py | pypi |
class Element:
element_y = None
element_x = None
element_width = None
element_height = None
element_confidence = None
element_text = None
session_id = None
element_id = None
def __init__(self, element_json):
self.element_x = element_json['element_x']
self.element_y = element_json["element_y"]
self.element_width = int(element_json['element_width'])
self.element_height = int(element_json['element_height'])
self.element_confidence = element_json['element_confidence']
self.element_text = element_json['element_text']
if 'session_id' in element_json:
self.session_id = element_json['session_id']
self.element_id = element_json['element_id']
"""
Gets the session id that the element belongs to.
:returns: String - the session id
"""
def get_session_id(self):
return self.session_id
"""
Gets the element id.
:returns: String - the element id
"""
def get_element_id(self):
return self.element_id
"""
Gets the text of the element. If an image based
locator it will be any found text within the matched element.
:returns: String - the text of the element
"""
def get_text(self):
return self.element_text
"""
Gets the width of the element.
:returns: int - the width of the element
"""
def get_width(self):
return self.element_width
"""
Gets the height of the element.
:returns: int - the height of the element
"""
def get_height(self):
return self.element_height
"""
Gets the starting x of the element.
:returns: int - the starting x position of the element
"""
def get_x(self):
return self.element_x
"""
Gets the starting y of the element.
:returns: int - the starting x position of the element
"""
def get_y(self):
return self.element_y
"""
Gets the confidence score of the element match.
:returns: float - the confidence of the match
"""
def get_confidence(self):
return self.element_confidence | /rokuality-python-1.5.3.tar.gz/rokuality-python-1.5.3/src/driver/element.py | 0.813794 | 0.358493 | element.py | pypi |
import copy
from src.exceptions.server_failure_exception import ServerFailureException
from src.enums.session_status import SessionStatus
class Options:
session = None
http_client = None
def __init__(self, http_client, session):
self.session = session
self.http_client = http_client
"""
Overrides the default Image Match Similarity value for all Image based elements. It will last for the duration
of the driver session, or until a new value is set. A lower value
will increase the likelihood that your image locator will find a match, but too low a value
and you can introduce false positives.
:param image_match_similarity: float - the image match similarity to apply to all image elements i.e. '0.95'
:raises ServerFailureException: If the image match similarity cannot be applied.
"""
def set_image_match_similarity(self, image_match_similarity):
session = copy.deepcopy(self.session)
session['action'] = 'image_match_similarity'
session['image_match_similarity'] = str(image_match_similarity)
element_json = self.__handler(self.http_client.post_to_server('settings', session))
"""
Sets an implicit wait for all elements in milliseconds. It will last for the duration
of the driver session, or until a new value is set. By default, when performing a finder().findElement
command, the locator find will be evaluated immediately and throw an exception if the element is not immediately found.
By setting this value, the server will search for the element repeatedly until the element is found, or will throw
the NoSuchElementException if the element is not found after the duration expires. Setting this timeout is recommended
but setting too high a value can result in increased test time.
:param timeout_in_milliseconds: long - the timeout in milliseconds to wait for an element before throwing an exception.
:raises ServerFailureException: If the timeout cannot be applied.
"""
def set_element_timeout(self, timeout_in_milliseconds):
session = copy.deepcopy(self.session)
session['action'] = 'element_find_timeout'
session['element_find_timeout'] = str(timeout_in_milliseconds)
element_json = self.__handler(self.http_client.post_to_server('settings', session))
"""
Sets a dely in milliseconds for remote control interactions. By default there is no pause between
remote control commands so remote interactions can happen very fast and may lead to test flake
depending on the test scenario. This option allows you to throttle those remote control commands.
It will last for the duration of the driver session, or until a new value is set.
:param delay_in_milliseconds: long - The pause between remote commands in milliseconds. i.e. '1000'.
:raises ServerFailureException: If the interact delay cannot be applied.
"""
def set_remote_interact_delay(self, delay_in_milliseconds):
session = copy.deepcopy(self.session)
session['action'] = 'remote_interact_delay'
session['remote_interact_delay'] = str(delay_in_milliseconds)
element_json = self.__handler(self.http_client.post_to_server('settings', session))
"""
Sets a poll interval for all elements in milliseconds. It will last for the duration
of the driver session, or until a new value is set. Only applicable if an element timeout has been applied. By
default the element poll interval is 250 milliseconds.
:param poll_interval_in_milliseconds: long - the poll interval in milliseconds.
:raises ServerFailureException: If the poll interval cannot be applied.
"""
def set_element_poll_interval(self, poll_interval_in_milliseconds):
session = copy.deepcopy(self.session)
session['action'] = 'element_polling_interval'
session['element_polling_interval'] = str(poll_interval_in_milliseconds)
element_json = self.__handler(self.http_client.post_to_server('settings', session))
"""
Sets a session status that can be later retrieved during the course of a session. By default the session status is 'In Progress'.
Useful if you want to set a pass/fail/broken status during the course of a test run and then later retrieve the status
for communicating with a 3rd party service. The status will last only so long as the session is active and will be lost
once the user stops the session.
:param status: SessionStatus - the session status.
:raises ServerFailureException: If the session status cannot be applied.
"""
def set_session_status(self, status):
session = copy.deepcopy(self.session)
session['action'] = 'set_session_status'
session['session_status'] = status.value
self.__handler(self.http_client.post_to_server('settings', session))
"""
Gets the session status.
:returns: SessionStatus - the session status as set by the user during the course of the session.
:raises ServerFailureException: If the session status cannot be retrieved.
"""
def get_session_status(self):
session = copy.deepcopy(self.session)
session['action'] = 'get_session_status'
option_json = self.__handler(self.http_client.post_to_server('settings', session))
return option_json['session_status']
"""
Gets the session server logs for the session under test.
:returns: The session from test start to now.
:raises ServerFailureException: If the session logs cannot be retrieved.
"""
def get_session_logs(self):
session = copy.deepcopy(self.session)
session['action'] = 'get_server_logs'
option_json = self.__handler(self.http_client.post_to_server('info', session))
return option_json['log_content']
def __handler(self, element_json):
if element_json['results'] != 'success':
raise ServerFailureException(element_json['results'])
return element_json | /rokuality-python-1.5.3.tar.gz/rokuality-python-1.5.3/src/driver/options.py | 0.68437 | 0.248919 | options.py | pypi |
import os
import sys
import glob
# returns the path of the executed file
def get_dir():
""" returns the directory path of the python file executed """
return os.path.dirname(sys.argv[0])
def get_file_paths_from_wildcard(wildcard):
""" returns all file paths which fit the wildcard """
if not os.path.isabs(wildcard):
wildcard = os.path.join(get_dir(), wildcard)
return glob.glob(wildcard)
def get_file_names_from_wildcard(wildcard):
""" returns only the file names which fit the wildcard """
paths = get_file_paths_from_wildcard(wildcard)
return [os.path.basename(p) for p in paths]
def replace_file_name(path, newName):
""" replace the file name in a path
Args:
path: path or filename to replace
newName: the new file name
Returns:
the new path
Raises:
ValueError: path does not contain a file name
"""
name = os.path.basename(path)
if name == '' or name == None:
raise ValueError('path does not contain a file name')
else:
return path.replace(name, newName)
def process_files(func, file_wildcard):
""" quick way to call a function for every file specified by a wildcard
eg:
```
def f(path):
with open(path, 'r') as file:
print(file.read())
process_files(f, '*.py')
```
Args:
func: the processing function. Must take a path as argument (func(path) -> None)
file_wildcard: files specified here will be processed
"""
files = get_file_paths_from_wildcard(file_wildcard)
for path in files:
func(path)
def mkdir_if_nonexisting(path):
""" Will create the given directory if it does not exist already
Args:
path: the path for the directory in question
Returns:
True if the directory was created else False
"""
if os.path.isdir(path) == False:
os.mkdir(path)
return True
return False
def remove_if_existing(path):
""" Will delete the given file if it exists
Args:
path: the path for the file in question
Returns:
True if the file was deleted else False
"""
if os.path.isfile(path):
os.remove(path)
return True
return False
def rename_if_existing(old_name, new_name):
""" Will rename the given file if it exists
Args:
old_name: the current name of the file in question
new_name: the new name
Returns:
True if the file was renamed else False
"""
if os.path.isfile(old_name):
os.rename(old_name, new_name)
return True
return False | /rol1510_utility-0.2.2-py3-none-any.whl/utility/file_io.py | 0.444806 | 0.305192 | file_io.py | pypi |
from collections import deque
import copy
def rotate_list(data, amount):
""" rotates a list by the specified amount
eg: rotate_list([1,2,3], 1) -> returns [3,1,2]
Args:
data: the list to rotate
amount: how many times
Returns:
the rotated list
"""
d = deque(data)
d.rotate(amount)
return list(d)
def pad_1d_list(data, element, thickness=1):
""" Adds padding at the start and end of a list
This will make a shallow copy of the original
eg: pad_1d_list([1,2], 0) -> returns [0,1,2,0]
Args:
data: the list to pad
element: gets added as padding (if its an object, it won't be instanced, just referenced in the lists)
thickness: how many layers of padding
Returns:
the padded list
"""
# shallow copy
data = copy.copy(data)
for i in range(thickness):
data.insert(0, element)
data.append(element)
return data
def pad_2d_list(data, element, thickness=1):
""" Adds padding "around" a list of lists
List should be a "rectangle".
For the result, every list will be a copied singel instance. Contants will be the same reference
eg: pad_2d_list([[1]], 0) -> returns [[0,0,0], [0,1,0], [0,0,0]]
Args:
data: the list to pad
element: gets added as padding (if its an object, it won't be instanced, just referenced in the lists)
thickness: how many layers of padding
Returns:
the padded list
"""
res = []
# pad sides
for i in range(len(data)):
res.append(pad_1d_list(data[i], element, thickness))
# pad top and bottom
width = len(res[0])
padding = [element] * width
# copy the padding first, else both sides will reference the same list
return pad_1d_list(res, padding.copy(), thickness)
def pad_3d_list(data, element, thickness=1):
""" adds padding "around" a list of lists of lists
List should be a "box".
For the result, every list will be a copied singel instance. Contants will be the same reference
Args:
data: the list to pad
element: gets added as padding (if its an object, it won't be instanced, just referenced in the lists)
thickness: how many layers of padding
Returns:
the padded list
"""
res = []
# pad sides
for i in range(len(data)):
res.append(pad_2d_list(data[i], copy.copy(element), thickness))
# pad top and bottom
width = len(res[0])
height = len(res[0][0])
padding = [[element] * height] * width
# copy the padding first, else both sides will reference the same list
x = pad_1d_list(res, padding.copy(), thickness)
return x | /rol1510_utility-0.2.2-py3-none-any.whl/utility/misc.py | 0.896126 | 0.599602 | misc.py | pypi |
from dataclasses import InitVar, dataclass, field
import logging
import mido
from .roland_instruments import Instruments
from .roland_address_map import RolandAddressMap
from .roland_utils import (
ROLAND_ID_BYTES,
InvalidMessageException,
RolandCmd,
get_checksum,
)
logger = logging.getLogger(__name__)
logger.addHandler(logging.NullHandler())
@dataclass
class RolandMessageRequest:
register: RolandAddressMap
cmd: RolandCmd
data_as_int: int = None
@staticmethod
def int_to_byte(num):
return num.to_bytes(1, byteorder="big")
@property
def data_as_bytes(self):
parsers = {
"sequencerTempoWO": lambda x: self.int_to_byte((x & 0xFF80) >> 7) + self.int_to_byte(x & 0x7F),
"keyTransposeRO": lambda x: self.int_to_byte(x + 64),
"toneForSingle": lambda x: self.int_to_byte((x & 0xFF000) >> 16) + b"\00" + self.int_to_byte(x & 0xFF),
}
if self.register.name in parsers:
ret = parsers[self.register.name](self.data_as_int)
logger.debug(ret, self.data_as_int)
return ret
else:
return self.int_to_byte(self.data_as_int)
def __post_init__(self):
if self.cmd == RolandCmd.WRITE:
if self.data_as_int is None:
logger.error(f'Data for {RolandCmd.write} message cannot be "None"')
self.data = self.data_as_bytes
else:
self.data = self.register.size
self.checksum = get_checksum(self.register, self.data)
@property
def as_mido_message(self) -> mido.Message:
return mido.Message(
"sysex",
data=bytearray(ROLAND_ID_BYTES + self.cmd.value + self.register.address + self.data + self.checksum),
)
@dataclass
class RolandMessageResponse:
message: InitVar[mido.Message]
register: RolandAddressMap = field(init=False)
data: bytes = field(init=False)
def parse_data(self):
parsers = {
"sequencerTempoRO": lambda data: (data[1] & b"\x7F"[0]) | ((data[0] & b"\x7F"[0]) << 7),
"keyTransposeRO": lambda x: x[0] - 64,
"toneForSingle": lambda x: Instruments((x[0], x[2])),
"uptime": lambda x: x[0] << 64
| x[1] << 56
| x[2] << 48
| x[3] << 40
| x[4] << 32
| x[5] << 24
| x[5] << 16
| x[6] << 8
| x[7] << 0,
}
if self.register.name in parsers:
ret = parsers[self.register.name](self.data)
logger.debug(f"{self.register.name=}, {ret=}, {self.data=}")
return ret
else:
return int.from_bytes(self.data, byteorder="big")
def __post_init__(self, message: mido.Message):
roland_id = b"".join(map(lambda i: i.to_bytes(1, "little"), message.data[0:6]))
# TODO: check if message starts with F07E10060241, this indicates model id
if roland_id != ROLAND_ID_BYTES:
raise InvalidMessageException(f"id bytes mismatch from roland_id: {roland_id=} {message=} {message.hex()}")
# cmd = RolandCmd(message.data[6].to_bytes(1, "big"))
self.register = RolandAddressMap(bytes(message.data[7 : 7 + 4]))
msg_len = len(message.data)
self.data = b"".join(d.to_bytes(1, "big") for d in message.data[11 : msg_len - 1])
received_checksum = message.data[msg_len - 1].to_bytes(1, "big")
checksum = get_checksum(self.register, self.data)
if received_checksum != checksum:
raise InvalidMessageException(f"Checksum mismatch, {received_checksum=} {self.checksum}")
pass | /roland_piano-0.4.0.tar.gz/roland_piano-0.4.0/src/roland_piano/roland_messages.py | 0.516595 | 0.296756 | roland_messages.py | pypi |
from enum import Enum
class RolandAddressMap(Enum):
# 010000xx
serverSetupFileName = b"\x01\x00\x00\x00"
# 010001xx
songToneLanguage = b"\x01\x00\x01\x00"
keyTransposeRO = b"\x01\x00\x01\x01"
songTransposeRO = b"\x01\x00\x01\x02"
sequencerStatus = b"\x01\x00\x01\x03"
sequencerMeasure = b"\x01\x00\x01\x05"
sequencerTempoNotation = b"\x01\x00\x01\x07"
sequencerTempoRO = b"\x01\x00\x01\x08"
sequencerBeatNumerator = b"\x01\x00\x01\x0A"
sequencerBeatDenominator = b"\x01\x00\x01\x0B"
sequencerPartSwAccomp = b"\x01\x00\x01\x0C"
sequencerPartSwLeft = b"\x01\x00\x01\x0D"
sequencerPartSwRight = b"\x01\x00\x01\x0E"
metronomeStatus = b"\x01\x00\x01\x0F"
headphonesConnection = b"\x01\x00\x01\x10"
# 010002xx
keyBoardMode = b"\x01\x00\x02\x00"
splitPoint = b"\x01\x00\x02\x01"
splitOctaveShift = b"\x01\x00\x02\x02"
splitBalance = b"\x01\x00\x02\x03"
dualOctaveShift = b"\x01\x00\x02\x04"
dualBalance = b"\x01\x00\x02\x05"
twinPianoMode = b"\x01\x00\x02\x06"
toneForSingle = b"\x01\x00\x02\x07"
toneForSplit = b"\x01\x00\x02\x0A"
toneForDual = b"\x01\x00\x02\x0D"
songNumber = b"\x01\x00\x02\x10"
masterVolume = b"\x01\x00\x02\x13"
masterVolumeLimit = b"\x01\x00\x02\x14"
allSongPlayMode = b"\x01\x00\x02\x15"
splitRightOctaveShift = b"\x01\x00\x02\x16"
dualTone1OctaveShift = b"\x01\x00\x02\x17"
masterTuning = b"\x01\x00\x02\x18"
ambience = b"\x01\x00\x02\x1A"
headphones3DAmbience = b"\x01\x00\x02\x1B"
brilliance = b"\x01\x00\x02\x1C"
keyTouch = b"\x01\x00\x02\x1D"
transposeMode = b"\x01\x00\x02\x1E"
metronomeBeat = b"\x01\x00\x02\x1F"
metronomePattern = b"\x01\x00\x02\x20"
metronomeVolume = b"\x01\x00\x02\x21"
metronomeTone = b"\x01\x00\x02\x22"
metronomeDownBeat = b"\x01\x00\x02\x23"
# 010003xx
applicationMode = b"\x01\x00\x03\x00"
scorePageTurn = b"\x01\x00\x03\x02"
arrangerPedalFunction = b"\x01\x00\x03\x03"
arrangerBalance = b"\x01\x00\x03\x05"
connection = b"\x01\x00\x03\x06"
keyTransposeWO = b"\x01\x00\x03\x07"
songTransposeWO = b"\x01\x00\x03\x08"
sequencerTempoWO = b"\x01\x00\x03\x09"
tempoReset = b"\x01\x00\x03\x0B"
# 010004xx
soundEffect = b"\x01\x00\x04\x00"
soundEffectStopAll = b"\x01\x00\x04\x02"
# 010005xx
sequencerREW = b"\x01\x00\x05\x00"
sequencerFF = b"\x01\x00\x05\x01"
sequencerReset = b"\x01\x00\x05\x02"
sequencerTempoDown = b"\x01\x00\x05\x03"
sequencerTempoUp = b"\x01\x00\x05\x04"
sequencerPlayStopToggle = b"\x01\x00\x05\x05"
sequencerAccompPartSwToggle = b"\x01\x00\x05\x06"
sequencerLeftPartSwToggle = b"\x01\x00\x05\x07"
sequencerRightPartSwToggle = b"\x01\x00\x05\x08"
metronomeSwToggle = b"\x01\x00\x05\x09"
sequencerPreviousSong = b"\x01\x00\x05\x0A"
sequencerNextSong = b"\x01\x00\x05\x0B"
# 010006xx
pageTurnPreviousPage = b"\x01\x00\x06\x00"
pageTurnNextPage = b"\x01\x00\x06\x01"
# 010007xx
uptime = b"\x01\x00\x07\x00"
# 010008xx
addressMapVersion = b"\x01\x00\x08\x00"
@property
def size(self) -> bytes:
size_map = { # consider implementing this to read all registers
"serverSetupFileName": 32,
"sequencerMeasure": 2,
"sequencerTempoRO": 2,
"toneForSingle": 3,
"toneForSplit": 3,
"toneForDual": 3,
"songNumber": 3,
"masterTuning": 2,
"arrangerPedalFunction": 2,
"sequencerTempoWO": 2,
"uptime": 8,
}
if self.name in size_map:
size = size_map[self.name]
else:
size = 1
return b"\x00\x00\x00" + size.to_bytes(1, byteorder="big")
@property
def address(self) -> bytes:
return self.value | /roland_piano-0.4.0.tar.gz/roland_piano-0.4.0/src/roland_piano/roland_address_map.py | 0.436382 | 0.392453 | roland_address_map.py | pypi |
***********
Atmospheres
***********
Using Model Atmospheres in Roland
=================================
|roland| has an :class:`.Abund` class for single abundance values and an :class:`.Abundances` class that holds the abundances for
all elements. These are handy for modifying abundances when running synthesis.
Let's import the :class:`.Abundances` class from the :mod:`.abundance` module::
>>> from roland.abundance import Abundances
Internal Values and Hydrogen/Helium Abundances
----------------------------------------------
Internally, an :class:`.Abundances` object stores the abundances values in linear `N(X)/N(H)` format.
The Hydrogen and Helium abundances are special and are held constant at `N(H)/N(H)=1.0` and `N(He)/N(H)=0.0851138`.
Solar Abundance Values
----------------------
The :class:`.Abundances` object stored internal values for solar abundances. The type of values can be
selected at initialization time. The options are:
- "asplund: Asplund, Grevesse and Sauval (2005)
- "husser": Asplund et al. 2009 chosen for the Husser et al. (2013) Phoenix model atmospheres
- "kurucz": abundances for the Kurucz model atmospheres (Grevesse+Sauval 1998)
- "marcs": abundances for the MARCS model atmospheres (Grevesse+2007 values with CNO abundances from Grevesse+Sauval 2008)
The "asplund" values are used by default.
Initializing
------------
You can initialize an :class:`.Abundances` object using 1) a list of abundances, 2) a dictionary of abundance values, 3) a single
metallicity or 4) just with the default solar values (no input).
This will use the default solar values::
>>> abu = Abundances()
>>> abu
Abundances([M/H]=0.00,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-11 1.050 [ 0.000]
4 Be 2.398833e-11 1.380 [ 0.000]
5 B 5.011872e-10 2.700 [ 0.000]
6 C 2.454709e-04 8.390 [ 0.000]
...
96 Cm 1.023293e-22 -9.990 [ 0.000]
97 Bk 1.023293e-22 -9.990 [ 0.000]
98 Cf 1.023293e-22 -9.990 [ 0.000]
99 Es 1.023293e-22 -9.990 [ 0.000]
Let's start with `[M/H]=-1.0` instead this time:
>>> abu = Abundances(-1.0)
>>> abu
Abundances([M/H]=-1.00,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-12 0.050 [-1.000]
4 Be 2.398833e-12 0.380 [-1.000]
5 B 5.011872e-11 1.700 [-1.000]
6 C 2.454709e-05 7.390 [-1.000]
...
96 Cm 1.023293e-23 -10.990 [-1.000]
97 Bk 1.023293e-23 -10.990 [-1.000]
98 Cf 1.023293e-23 -10.990 [-1.000]
99 Es 1.023293e-23 -10.990 [-1.000]
Now, let's give it a dictionary of abundances values:
>>> abu = Abundances({"MG_H":-0.5,"SI_H":0.24})
>>> abu
Abundances([M/H]=0.00,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-11 1.050 [ 0.000]
4 Be 2.398833e-11 1.380 [ 0.000]
5 B 5.011872e-10 2.700 [ 0.000]
6 C 2.454709e-04 8.390 [ 0.000]
7 N 6.025596e-05 7.780 [ 0.000]
8 O 4.570882e-04 8.660 [ 0.000]
9 F 3.630781e-08 4.560 [ 0.000]
10 Ne 6.918310e-05 7.840 [ 0.000]
11 Na 1.479108e-06 6.170 [ 0.000]
12 Mg 1.071519e-05 7.030 [-0.500]
13 Al 2.344229e-06 6.370 [ 0.000]
14 Si 5.623413e-05 7.750 [ 0.240]
...
Finally, we can give an entire array or list of abundances values. You have give the type of abundance
values you are giving in the second parameter. The options are `linear`, `log`, `logeps`, or `x_h`::
>>> abu = Abundances([12. , 10.93, 1.05, 1.38, 2.7 , 8.39, 7.78,
8.66, 4.56, 7.84, 6.17, 7.03, 6.37, 7.75,
5.36, 7.14, 5.5 , 6.18],'logeps')
>>> abu
Abundances([M/H]=0.00,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-11 1.050 [ 0.000]
4 Be 2.398833e-11 1.380 [ 0.000]
5 B 5.011872e-10 2.700 [ 0.000]
6 C 2.454709e-04 8.390 [ 0.000]
...
97 Bk 1.023293e-22 -9.990 [ 0.000]
98 Cf 1.023293e-22 -9.990 [ 0.000]
99 Es 1.023293e-22 -9.990 [ 0.000]
Modifying an Abundances Object
------------------------------
You can always modify an :class:`.Abundances` object `in place`::
>>> abu['O_H'] = -0.5
Abundances([M/H]=-0.17,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-11 1.050 [ 0.000]
4 Be 2.398833e-11 1.380 [ 0.000]
5 B 5.011872e-10 2.700 [ 0.000]
6 C 2.454709e-04 8.390 [ 0.000]
7 N 6.025596e-05 7.780 [ 0.000]
8 O 1.445440e-04 8.160 [-0.500]
9 F 3.630781e-08 4.560 [ 0.000]
...
96 Cm 1.023293e-22 -9.990 [ 0.000]
97 Bk 1.023293e-22 -9.990 [ 0.000]
98 Cf 1.023293e-22 -9.990 [ 0.000]
99 Es 1.023293e-22 -9.990 [ 0.000]
Or change the metallicity::
>>> abu['M_H'] = -0.5
Abundances([M/H]=-0.50,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 3.548134e-12 0.550 [-0.500]
4 Be 7.585776e-12 0.880 [-0.500]
5 B 1.584893e-10 2.200 [-0.500]
6 C 7.762471e-05 7.890 [-0.500]
7 N 1.905461e-05 7.280 [-0.500]
8 O 1.445440e-04 8.160 [-0.500]
...
95 Am 3.235937e-23 -10.490 [-0.500]
96 Cm 3.235937e-23 -10.490 [-0.500]
97 Bk 3.235937e-23 -10.490 [-0.500]
98 Cf 3.235937e-23 -10.490 [-0.500]
99 Es 3.235937e-23 -10.490 [-0.500]
You can also change the entire metallicity by an increment amount::
>>> abu += 0.5
Abundances([M/H]=0.50,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 3.548134e-11 1.550 [ 0.500]
4 Be 7.585776e-11 1.880 [ 0.500]
5 B 1.584893e-09 3.200 [ 0.500]
6 C 7.762471e-04 8.890 [ 0.500]
7 N 1.905461e-04 8.280 [ 0.500]
...
96 Cm 3.235937e-22 -9.490 [ 0.500]
97 Bk 3.235937e-22 -9.490 [ 0.500]
98 Cf 3.235937e-22 -9.490 [ 0.500]
99 Es 3.235937e-22 -9.490 [ 0.500]
Or the alpha abundances::
>>> abu['alpha'] -= 0.5
Abundances([M/H]=-0.25,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-11 1.050 [ 0.000]
4 Be 2.398833e-11 1.380 [ 0.000]
5 B 5.011872e-10 2.700 [ 0.000]
6 C 2.454709e-04 8.390 [ 0.000]
7 N 6.025596e-05 7.780 [ 0.000]
8 O 1.445440e-04 8.160 [-0.500]
9 F 3.630781e-08 4.560 [ 0.000]
10 Ne 2.187762e-05 7.340 [-0.500]
11 Na 1.479108e-06 6.170 [ 0.000]
12 Mg 1.071519e-05 7.030 [-0.500]
13 Al 2.344229e-06 6.370 [ 0.000]
14 Si 1.023293e-05 7.010 [-0.500]
15 P 2.290868e-07 5.360 [ 0.000]
16 S 4.365158e-06 6.640 [-0.500]
17 Cl 3.162278e-07 5.500 [ 0.000]
18 Ar 4.786301e-07 5.680 [-0.500]
19 K 1.202264e-07 5.080 [ 0.000]
20 Ca 6.456542e-07 5.810 [-0.500]
21 Sc 1.122018e-09 3.050 [ 0.000]
22 Ti 2.511886e-08 4.400 [-0.500]
23 V 1.000000e-08 4.000 [ 0.000]
...
96 Cm 1.023293e-22 -9.990 [ 0.000]
97 Bk 1.023293e-22 -9.990 [ 0.000]
98 Cf 1.023293e-22 -9.990 [ 0.000]
99 Es 1.023293e-22 -9.990 [ 0.000]
Creating a New, Modified Abundances Object
------------------------------------------
You can also `call` the object and create a new, modified object.
Create a new :class:`.Abundances` object with a metallicity of -1.5::
>>> abu2 = abu(-1.5)
>>> abu2
Abundances([M/H]=-1.50,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 3.548134e-13 -0.450 [-1.500]
4 Be 7.585776e-13 -0.120 [-1.500]
5 B 1.584893e-11 1.200 [-1.500]
6 C 7.762471e-06 6.890 [-1.500]
...
96 Cm 3.235937e-24 -11.490 [-1.500]
97 Bk 3.235937e-24 -11.490 [-1.500]
98 Cf 3.235937e-24 -11.490 [-1.500]
99 Es 3.235937e-24 -11.490 [-1.500]
You can also input a dictionary of abundances values::
>>> abu2 = abu({"c_h":-1.5})
>>> abu2
Abundances([M/H]=-0.12,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-11 1.050 [ 0.000]
4 Be 2.398833e-11 1.380 [ 0.000]
5 B 5.011872e-10 2.700 [ 0.000]
6 C 7.762471e-06 6.890 [-1.500]
7 N 6.025596e-05 7.780 [ 0.000]
...
96 Cm 1.023293e-22 -9.990 [ 0.000]
97 Bk 1.023293e-22 -9.990 [ 0.000]
98 Cf 1.023293e-22 -9.990 [ 0.000]
99 Es 1.023293e-22 -9.990 [ 0.000]
Abundances Output
-----------------
The :class:`.Abundances` class can output the information in several ways.
If you select a single element (by element name or index), an :class:`.Abund` object will be returned.::
>>> abu['Ca']
Abund(20 Ca N(Ca)/N(H)=2.042e-06 log(eps)=6.310)
>>> abu[10]
Abund(11 Na N(Na)/N(H)=1.479e-06 log(eps)=6.170)
Selecting values in bracket notation will only return the value. Abundance versus H::
>>> abu['Ca_H']
0.0
Abundance versus M::
>>> abu['Ca_M']
-0.0003221051142099841
There are several useful properties that will print out **all** of the abundances.
Print the linear or `N(X)/N(H)` values with `linear`.::
>>> abu.linear
array([1.00000000e+00, 8.51138038e-02, 1.12201845e-11, 2.39883292e-11,
5.01187234e-10, 2.45470892e-04, 6.02559586e-05, 4.57088190e-04,
3.63078055e-08, 6.91830971e-05, 1.47910839e-06, 1.07151931e-05,
...
1.02329299e-22, 1.14815362e-12, 1.02329299e-22, 3.01995172e-13,
1.02329299e-22, 1.02329299e-22, 1.02329299e-22, 1.02329299e-22,
1.02329299e-22, 1.02329299e-22, 1.02329299e-22])
Or you can also use `log`, `logeps`, `xh`, or `xm`. The `log` abundances::
>>> abu.log
array([ 0. , -1.07, -10.95, -10.62, -9.3 , -3.61, -4.22, -3.34,
-7.44, -4.16, -5.83, -4.97, -5.63, -4.25, -6.64, -4.86,
-6.5 , -5.82, -6.92, -5.69, -8.95, -7.1 , -8. , -6.36,
...
-11.1 , -10. , -11.35, -21.99, -21.99, -21.99, -21.99, -21.99,
-21.99, -11.94, -21.99, -12.52, -21.99, -21.99, -21.99, -21.99,
-21.99, -21.99, -21.99])
Abundances in `log(eps)` notation, or `log(N(X)/N(H))+12.0`::
>>> abu.logeps
array([12. , 10.93, 1.05, 1.38, 2.7 , 8.39, 7.78, 8.66, 4.56,
7.84, 6.17, 7.03, 6.37, 7.75, 5.36, 7.14, 5.5 , 6.18,
5.08, 6.31, 3.05, 4.9 , 4. , 5.64, 5.39, 7.45, 4.92,
...
-0.17, 1.11, 0.23, 1.45, 1.38, 1.64, 1.01, 1.13, 0.9 ,
2. , 0.65, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99, 0.06,
-9.99, -0.52, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99])
Bracket notation, relative to H::
>>> abu.xh
array([ 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , -0.5 , 0. , 0.24, 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,
...
0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ])
Bracket notation, relative to M::
>>> abu.xm
array([-3.22105114e-04, -3.22105114e-04, -3.22105114e-04, -3.22105114e-04,
-3.22105114e-04, -3.22105114e-04, -3.22105114e-04, -3.22105114e-04,
-3.22105114e-04, -3.22105114e-04, -3.22105114e-04, -5.00322105e-01,
...
-3.22105114e-04, -3.22105114e-04, -3.22105114e-04, -3.22105114e-04,
-3.22105114e-04, -3.22105114e-04, -3.22105114e-04, -3.22105114e-04,
-3.22105114e-04, -3.22105114e-04, -3.22105114e-04])
You can also return the abundances in formats that are useful for model atmospheres.
Return abundance values in the format for Kurucz model atmospheres::
>>> abu.to_kurucz()
array([ 0.92075543, 0.07836899, -10.98585571, -10.65585571,
-9.33585571, -3.64585571, -4.25585571, -3.37585571,
-7.47585571, -4.19585571, -5.86585571, -4.50585571,
-6.64585571, -4.58585571, -7.11585571, -5.80585571,
...
-20. , -11.97585571, -20. , -12.55585571,
-20. , -20. , -20. , -20. ,
-20. , -20. , -20. ]
Return abundance values in the format for MARCS model atmospheres::
>>> abu.to_marcs()
array([12. , 10.93, 1.05, 1.38, 2.7 , 8.39, 7.78, 8.66, 4.56,
7.84, 6.17, 7.53, 6.37, 7.51, 5.36, 7.14, 5.5 , 6.18,
5.08, 6.31, 3.05, 4.9 , 4. , 5.64, 5.39, 7.45, 4.92,
...
-0.17, 1.11, 0.23, 1.45, 1.38, 1.64, 1.01, 1.13, 0.9 ,
2. , 0.65, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99, 0.06,
-9.99, -0.52, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99])
Other useful properties and methods::
# Return the metallicity as [M/H]
>>> abu.metallicity
0.0
# Return the metallicity as Sum(N(X)/N(H) over all metals
>>> abu.metals
0.0009509329459494126
# Return all of the element symbols
>>> abu.symbol
['H','He','Li','Be','B','C','N','O','F',
...
'U','Np','Pu','Am','Cm','Bk','Cf','Es']
# Return all of the element mass values (in amu).
>>> abu.mass
[1.00794, 4.0026, 6.941, 9.01218, 10.811, 12.0107,
...
244.0, 243.0, 247.0, 247.0, 251.0, 252.0]
| /roland-1.0.0.tar.gz/roland-1.0.0/docs/atmosphere.rst | 0.830594 | 0.807461 | atmosphere.rst | pypi |
**********
Abundances
**********
Using Abundances in Roland
==========================
|roland| has an :class:`.Abund` class for single abundance values and an :class:`.Abundances` class that holds the abundances for
all elements. These are handy for modifying abundances when running synthesis.
Let's import the :class:`.Abundances` class from the :mod:`.abundance` module::
>>> from roland.abundance import Abundances
Internal Values and Hydrogen/Helium Abundances
----------------------------------------------
Internally, an :class:`.Abundances` object stores the abundances values in linear `N(X)/N(H)` format.
The Hydrogen and Helium abundances are special and are held constant at `N(H)/N(H)=1.0` and `N(He)/N(H)=0.0851138`.
Solar Abundance Values
----------------------
The :class:`.Abundances` object stored internal values for solar abundances. The type of values can be
selected at initialization time. The options are:
- "asplund: Asplund, Grevesse and Sauval (2005)
- "husser": Asplund et al. 2009 chosen for the Husser et al. (2013) Phoenix model atmospheres
- "kurucz": abundances for the Kurucz model atmospheres (Grevesse+Sauval 1998)
- "marcs": abundances for the MARCS model atmospheres (Grevesse+2007 values with CNO abundances from Grevesse+Sauval 2008)
The "asplund" values are used by default.
Initializing
------------
You can initialize an :class:`.Abundances` object using 1) a list of abundances, 2) a dictionary of abundance values, 3) a single
metallicity or 4) just with the default solar values (no input).
This will use the default solar values::
>>> abu = Abundances()
>>> abu
Abundances([M/H]=0.00,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-11 1.050 [ 0.000]
4 Be 2.398833e-11 1.380 [ 0.000]
5 B 5.011872e-10 2.700 [ 0.000]
6 C 2.454709e-04 8.390 [ 0.000]
...
96 Cm 1.023293e-22 -9.990 [ 0.000]
97 Bk 1.023293e-22 -9.990 [ 0.000]
98 Cf 1.023293e-22 -9.990 [ 0.000]
99 Es 1.023293e-22 -9.990 [ 0.000]
Let's start with `[M/H]=-1.0` instead this time:
>>> abu = Abundances(-1.0)
>>> abu
Abundances([M/H]=-1.00,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-12 0.050 [-1.000]
4 Be 2.398833e-12 0.380 [-1.000]
5 B 5.011872e-11 1.700 [-1.000]
6 C 2.454709e-05 7.390 [-1.000]
...
96 Cm 1.023293e-23 -10.990 [-1.000]
97 Bk 1.023293e-23 -10.990 [-1.000]
98 Cf 1.023293e-23 -10.990 [-1.000]
99 Es 1.023293e-23 -10.990 [-1.000]
Now, let's give it a dictionary of abundances values:
>>> abu = Abundances({"MG_H":-0.5,"SI_H":0.24})
>>> abu
Abundances([M/H]=0.00,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-11 1.050 [ 0.000]
4 Be 2.398833e-11 1.380 [ 0.000]
5 B 5.011872e-10 2.700 [ 0.000]
6 C 2.454709e-04 8.390 [ 0.000]
7 N 6.025596e-05 7.780 [ 0.000]
8 O 4.570882e-04 8.660 [ 0.000]
9 F 3.630781e-08 4.560 [ 0.000]
10 Ne 6.918310e-05 7.840 [ 0.000]
11 Na 1.479108e-06 6.170 [ 0.000]
12 Mg 1.071519e-05 7.030 [-0.500]
13 Al 2.344229e-06 6.370 [ 0.000]
14 Si 5.623413e-05 7.750 [ 0.240]
...
Finally, we can give an entire array or list of abundances values. You have give the type of abundance
values you are giving in the second parameter. The options are `linear`, `log`, `logeps`, or `x_h`::
>>> abu = Abundances([12. , 10.93, 1.05, 1.38, 2.7 , 8.39, 7.78,
8.66, 4.56, 7.84, 6.17, 7.03, 6.37, 7.75,
5.36, 7.14, 5.5 , 6.18],'logeps')
>>> abu
Abundances([M/H]=0.00,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-11 1.050 [ 0.000]
4 Be 2.398833e-11 1.380 [ 0.000]
5 B 5.011872e-10 2.700 [ 0.000]
6 C 2.454709e-04 8.390 [ 0.000]
...
97 Bk 1.023293e-22 -9.990 [ 0.000]
98 Cf 1.023293e-22 -9.990 [ 0.000]
99 Es 1.023293e-22 -9.990 [ 0.000]
Modifying an Abundances Object
------------------------------
You can always modify an :class:`.Abundances` object `in place`::
>>> abu['O_H'] = -0.5
Abundances([M/H]=-0.17,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-11 1.050 [ 0.000]
4 Be 2.398833e-11 1.380 [ 0.000]
5 B 5.011872e-10 2.700 [ 0.000]
6 C 2.454709e-04 8.390 [ 0.000]
7 N 6.025596e-05 7.780 [ 0.000]
8 O 1.445440e-04 8.160 [-0.500]
9 F 3.630781e-08 4.560 [ 0.000]
...
96 Cm 1.023293e-22 -9.990 [ 0.000]
97 Bk 1.023293e-22 -9.990 [ 0.000]
98 Cf 1.023293e-22 -9.990 [ 0.000]
99 Es 1.023293e-22 -9.990 [ 0.000]
Or change the metallicity::
>>> abu['M_H'] = -0.5
Abundances([M/H]=-0.50,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 3.548134e-12 0.550 [-0.500]
4 Be 7.585776e-12 0.880 [-0.500]
5 B 1.584893e-10 2.200 [-0.500]
6 C 7.762471e-05 7.890 [-0.500]
7 N 1.905461e-05 7.280 [-0.500]
8 O 1.445440e-04 8.160 [-0.500]
...
95 Am 3.235937e-23 -10.490 [-0.500]
96 Cm 3.235937e-23 -10.490 [-0.500]
97 Bk 3.235937e-23 -10.490 [-0.500]
98 Cf 3.235937e-23 -10.490 [-0.500]
99 Es 3.235937e-23 -10.490 [-0.500]
You can also change the entire metallicity by an increment amount::
>>> abu += 0.5
Abundances([M/H]=0.50,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 3.548134e-11 1.550 [ 0.500]
4 Be 7.585776e-11 1.880 [ 0.500]
5 B 1.584893e-09 3.200 [ 0.500]
6 C 7.762471e-04 8.890 [ 0.500]
7 N 1.905461e-04 8.280 [ 0.500]
...
96 Cm 3.235937e-22 -9.490 [ 0.500]
97 Bk 3.235937e-22 -9.490 [ 0.500]
98 Cf 3.235937e-22 -9.490 [ 0.500]
99 Es 3.235937e-22 -9.490 [ 0.500]
Or the alpha abundances::
>>> abu['alpha'] -= 0.5
Abundances([M/H]=-0.25,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-11 1.050 [ 0.000]
4 Be 2.398833e-11 1.380 [ 0.000]
5 B 5.011872e-10 2.700 [ 0.000]
6 C 2.454709e-04 8.390 [ 0.000]
7 N 6.025596e-05 7.780 [ 0.000]
8 O 1.445440e-04 8.160 [-0.500]
9 F 3.630781e-08 4.560 [ 0.000]
10 Ne 2.187762e-05 7.340 [-0.500]
11 Na 1.479108e-06 6.170 [ 0.000]
12 Mg 1.071519e-05 7.030 [-0.500]
13 Al 2.344229e-06 6.370 [ 0.000]
14 Si 1.023293e-05 7.010 [-0.500]
15 P 2.290868e-07 5.360 [ 0.000]
16 S 4.365158e-06 6.640 [-0.500]
17 Cl 3.162278e-07 5.500 [ 0.000]
18 Ar 4.786301e-07 5.680 [-0.500]
19 K 1.202264e-07 5.080 [ 0.000]
20 Ca 6.456542e-07 5.810 [-0.500]
21 Sc 1.122018e-09 3.050 [ 0.000]
22 Ti 2.511886e-08 4.400 [-0.500]
23 V 1.000000e-08 4.000 [ 0.000]
...
96 Cm 1.023293e-22 -9.990 [ 0.000]
97 Bk 1.023293e-22 -9.990 [ 0.000]
98 Cf 1.023293e-22 -9.990 [ 0.000]
99 Es 1.023293e-22 -9.990 [ 0.000]
Creating a New, Modified Abundances Object
------------------------------------------
You can also `call` the object and create a new, modified object.
Create a new :class:`.Abundances` object with a metallicity of -1.5::
>>> abu2 = abu(-1.5)
>>> abu2
Abundances([M/H]=-1.50,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 3.548134e-13 -0.450 [-1.500]
4 Be 7.585776e-13 -0.120 [-1.500]
5 B 1.584893e-11 1.200 [-1.500]
6 C 7.762471e-06 6.890 [-1.500]
...
96 Cm 3.235937e-24 -11.490 [-1.500]
97 Bk 3.235937e-24 -11.490 [-1.500]
98 Cf 3.235937e-24 -11.490 [-1.500]
99 Es 3.235937e-24 -11.490 [-1.500]
You can also input a dictionary of abundances values::
>>> abu2 = abu({"c_h":-1.5})
>>> abu2
Abundances([M/H]=-0.12,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-11 1.050 [ 0.000]
4 Be 2.398833e-11 1.380 [ 0.000]
5 B 5.011872e-10 2.700 [ 0.000]
6 C 7.762471e-06 6.890 [-1.500]
7 N 6.025596e-05 7.780 [ 0.000]
...
96 Cm 1.023293e-22 -9.990 [ 0.000]
97 Bk 1.023293e-22 -9.990 [ 0.000]
98 Cf 1.023293e-22 -9.990 [ 0.000]
99 Es 1.023293e-22 -9.990 [ 0.000]
Abundances Output
-----------------
The :class:`.Abundances` class can output the information in several ways.
If you select a single element (by element name or index), an :class:`.Abund` object will be returned.::
>>> abu['Ca']
Abund(20 Ca N(Ca)/N(H)=2.042e-06 log(eps)=6.310)
>>> abu[10]
Abund(11 Na N(Na)/N(H)=1.479e-06 log(eps)=6.170)
Selecting values in bracket notation will only return the value. Abundance versus H::
>>> abu['Ca_H']
0.0
Abundance versus M::
>>> abu['Ca_M']
-0.0003221051142099841
There are several useful properties that will print out **all** of the abundances.
Print the linear or `N(X)/N(H)` values with `linear`.::
>>> abu.linear
array([1.00000000e+00, 8.51138038e-02, 1.12201845e-11, 2.39883292e-11,
5.01187234e-10, 2.45470892e-04, 6.02559586e-05, 4.57088190e-04,
3.63078055e-08, 6.91830971e-05, 1.47910839e-06, 1.07151931e-05,
...
1.02329299e-22, 1.14815362e-12, 1.02329299e-22, 3.01995172e-13,
1.02329299e-22, 1.02329299e-22, 1.02329299e-22, 1.02329299e-22,
1.02329299e-22, 1.02329299e-22, 1.02329299e-22])
Or you can also use `log`, `logeps`, `xh`, or `xm`. The `log` abundances::
>>> abu.log
array([ 0. , -1.07, -10.95, -10.62, -9.3 , -3.61, -4.22, -3.34,
-7.44, -4.16, -5.83, -4.97, -5.63, -4.25, -6.64, -4.86,
-6.5 , -5.82, -6.92, -5.69, -8.95, -7.1 , -8. , -6.36,
...
-11.1 , -10. , -11.35, -21.99, -21.99, -21.99, -21.99, -21.99,
-21.99, -11.94, -21.99, -12.52, -21.99, -21.99, -21.99, -21.99,
-21.99, -21.99, -21.99])
Abundances in `log(eps)` notation, or `log(N(X)/N(H))+12.0`::
>>> abu.logeps
array([12. , 10.93, 1.05, 1.38, 2.7 , 8.39, 7.78, 8.66, 4.56,
7.84, 6.17, 7.03, 6.37, 7.75, 5.36, 7.14, 5.5 , 6.18,
5.08, 6.31, 3.05, 4.9 , 4. , 5.64, 5.39, 7.45, 4.92,
...
-0.17, 1.11, 0.23, 1.45, 1.38, 1.64, 1.01, 1.13, 0.9 ,
2. , 0.65, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99, 0.06,
-9.99, -0.52, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99])
Bracket notation, relative to H::
>>> abu.xh
array([ 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , -0.5 , 0. , 0.24, 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,
...
0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ])
Bracket notation, relative to M::
>>> abu.xm
array([-3.22105114e-04, -3.22105114e-04, -3.22105114e-04, -3.22105114e-04,
-3.22105114e-04, -3.22105114e-04, -3.22105114e-04, -3.22105114e-04,
-3.22105114e-04, -3.22105114e-04, -3.22105114e-04, -5.00322105e-01,
...
-3.22105114e-04, -3.22105114e-04, -3.22105114e-04, -3.22105114e-04,
-3.22105114e-04, -3.22105114e-04, -3.22105114e-04, -3.22105114e-04,
-3.22105114e-04, -3.22105114e-04, -3.22105114e-04])
You can also return the abundances in formats that are useful for model atmospheres.
Return abundance values in the format for Kurucz model atmospheres::
>>> abu.to_kurucz()
array([ 0.92075543, 0.07836899, -10.98585571, -10.65585571,
-9.33585571, -3.64585571, -4.25585571, -3.37585571,
-7.47585571, -4.19585571, -5.86585571, -4.50585571,
-6.64585571, -4.58585571, -7.11585571, -5.80585571,
...
-20. , -11.97585571, -20. , -12.55585571,
-20. , -20. , -20. , -20. ,
-20. , -20. , -20. ]
Return abundance values in the format for MARCS model atmospheres::
>>> abu.to_marcs()
array([12. , 10.93, 1.05, 1.38, 2.7 , 8.39, 7.78, 8.66, 4.56,
7.84, 6.17, 7.53, 6.37, 7.51, 5.36, 7.14, 5.5 , 6.18,
5.08, 6.31, 3.05, 4.9 , 4. , 5.64, 5.39, 7.45, 4.92,
...
-0.17, 1.11, 0.23, 1.45, 1.38, 1.64, 1.01, 1.13, 0.9 ,
2. , 0.65, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99, 0.06,
-9.99, -0.52, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99])
Other useful properties and methods::
# Return the metallicity as [M/H]
>>> abu.metallicity
0.0
# Return the metallicity as Sum(N(X)/N(H) over all metals
>>> abu.metals
0.0009509329459494126
# Return all of the element symbols
>>> abu.symbol
['H','He','Li','Be','B','C','N','O','F',
...
'U','Np','Pu','Am','Cm','Bk','Cf','Es']
# Return all of the element mass values (in amu).
>>> abu.mass
[1.00794, 4.0026, 6.941, 9.01218, 10.811, 12.0107,
...
244.0, 243.0, 247.0, 247.0, 251.0, 252.0]
| /roland-1.0.0.tar.gz/roland-1.0.0/docs/abundance.rst | 0.813794 | 0.690839 | abundance.rst | pypi |
*********
Linelists
*********
Using Linelists in Roland
=========================
|roland| has an :class:`.Abund` class for single abundance values and an :class:`.Abundances` class that holds the abundances for
all elements. These are handy for modifying abundances when running synthesis.
Let's import the :class:`.Abundances` class from the :mod:`.abundance` module::
>>> from roland.abundance import Abundances
Internal Values and Hydrogen/Helium Abundances
----------------------------------------------
Internally, an :class:`.Abundances` object stores the abundances values in linear `N(X)/N(H)` format.
The Hydrogen and Helium abundances are special and are held constant at `N(H)/N(H)=1.0` and `N(He)/N(H)=0.0851138`.
Solar Abundance Values
----------------------
The :class:`.Abundances` object stored internal values for solar abundances. The type of values can be
selected at initialization time. The options are:
- "asplund: Asplund, Grevesse and Sauval (2005)
- "husser": Asplund et al. 2009 chosen for the Husser et al. (2013) Phoenix model atmospheres
- "kurucz": abundances for the Kurucz model atmospheres (Grevesse+Sauval 1998)
- "marcs": abundances for the MARCS model atmospheres (Grevesse+2007 values with CNO abundances from Grevesse+Sauval 2008)
The "asplund" values are used by default.
Initializing
------------
You can initialize an :class:`.Abundances` object using 1) a list of abundances, 2) a dictionary of abundance values, 3) a single
metallicity or 4) just with the default solar values (no input).
This will use the default solar values::
>>> abu = Abundances()
>>> abu
Abundances([M/H]=0.00,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-11 1.050 [ 0.000]
4 Be 2.398833e-11 1.380 [ 0.000]
5 B 5.011872e-10 2.700 [ 0.000]
6 C 2.454709e-04 8.390 [ 0.000]
...
96 Cm 1.023293e-22 -9.990 [ 0.000]
97 Bk 1.023293e-22 -9.990 [ 0.000]
98 Cf 1.023293e-22 -9.990 [ 0.000]
99 Es 1.023293e-22 -9.990 [ 0.000]
Let's start with `[M/H]=-1.0` instead this time:
>>> abu = Abundances(-1.0)
>>> abu
Abundances([M/H]=-1.00,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-12 0.050 [-1.000]
4 Be 2.398833e-12 0.380 [-1.000]
5 B 5.011872e-11 1.700 [-1.000]
6 C 2.454709e-05 7.390 [-1.000]
...
96 Cm 1.023293e-23 -10.990 [-1.000]
97 Bk 1.023293e-23 -10.990 [-1.000]
98 Cf 1.023293e-23 -10.990 [-1.000]
99 Es 1.023293e-23 -10.990 [-1.000]
Now, let's give it a dictionary of abundances values:
>>> abu = Abundances({"MG_H":-0.5,"SI_H":0.24})
>>> abu
Abundances([M/H]=0.00,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-11 1.050 [ 0.000]
4 Be 2.398833e-11 1.380 [ 0.000]
5 B 5.011872e-10 2.700 [ 0.000]
6 C 2.454709e-04 8.390 [ 0.000]
7 N 6.025596e-05 7.780 [ 0.000]
8 O 4.570882e-04 8.660 [ 0.000]
9 F 3.630781e-08 4.560 [ 0.000]
10 Ne 6.918310e-05 7.840 [ 0.000]
11 Na 1.479108e-06 6.170 [ 0.000]
12 Mg 1.071519e-05 7.030 [-0.500]
13 Al 2.344229e-06 6.370 [ 0.000]
14 Si 5.623413e-05 7.750 [ 0.240]
...
Finally, we can give an entire array or list of abundances values. You have give the type of abundance
values you are giving in the second parameter. The options are `linear`, `log`, `logeps`, or `x_h`::
>>> abu = Abundances([12. , 10.93, 1.05, 1.38, 2.7 , 8.39, 7.78,
8.66, 4.56, 7.84, 6.17, 7.03, 6.37, 7.75,
5.36, 7.14, 5.5 , 6.18],'logeps')
>>> abu
Abundances([M/H]=0.00,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-11 1.050 [ 0.000]
4 Be 2.398833e-11 1.380 [ 0.000]
5 B 5.011872e-10 2.700 [ 0.000]
6 C 2.454709e-04 8.390 [ 0.000]
...
97 Bk 1.023293e-22 -9.990 [ 0.000]
98 Cf 1.023293e-22 -9.990 [ 0.000]
99 Es 1.023293e-22 -9.990 [ 0.000]
Modifying an Abundances Object
------------------------------
You can always modify an :class:`.Abundances` object `in place`::
>>> abu['O_H'] = -0.5
Abundances([M/H]=-0.17,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-11 1.050 [ 0.000]
4 Be 2.398833e-11 1.380 [ 0.000]
5 B 5.011872e-10 2.700 [ 0.000]
6 C 2.454709e-04 8.390 [ 0.000]
7 N 6.025596e-05 7.780 [ 0.000]
8 O 1.445440e-04 8.160 [-0.500]
9 F 3.630781e-08 4.560 [ 0.000]
...
96 Cm 1.023293e-22 -9.990 [ 0.000]
97 Bk 1.023293e-22 -9.990 [ 0.000]
98 Cf 1.023293e-22 -9.990 [ 0.000]
99 Es 1.023293e-22 -9.990 [ 0.000]
Or change the metallicity::
>>> abu['M_H'] = -0.5
Abundances([M/H]=-0.50,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 3.548134e-12 0.550 [-0.500]
4 Be 7.585776e-12 0.880 [-0.500]
5 B 1.584893e-10 2.200 [-0.500]
6 C 7.762471e-05 7.890 [-0.500]
7 N 1.905461e-05 7.280 [-0.500]
8 O 1.445440e-04 8.160 [-0.500]
...
95 Am 3.235937e-23 -10.490 [-0.500]
96 Cm 3.235937e-23 -10.490 [-0.500]
97 Bk 3.235937e-23 -10.490 [-0.500]
98 Cf 3.235937e-23 -10.490 [-0.500]
99 Es 3.235937e-23 -10.490 [-0.500]
You can also change the entire metallicity by an increment amount::
>>> abu += 0.5
Abundances([M/H]=0.50,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 3.548134e-11 1.550 [ 0.500]
4 Be 7.585776e-11 1.880 [ 0.500]
5 B 1.584893e-09 3.200 [ 0.500]
6 C 7.762471e-04 8.890 [ 0.500]
7 N 1.905461e-04 8.280 [ 0.500]
...
96 Cm 3.235937e-22 -9.490 [ 0.500]
97 Bk 3.235937e-22 -9.490 [ 0.500]
98 Cf 3.235937e-22 -9.490 [ 0.500]
99 Es 3.235937e-22 -9.490 [ 0.500]
Or the alpha abundances::
>>> abu['alpha'] -= 0.5
Abundances([M/H]=-0.25,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-11 1.050 [ 0.000]
4 Be 2.398833e-11 1.380 [ 0.000]
5 B 5.011872e-10 2.700 [ 0.000]
6 C 2.454709e-04 8.390 [ 0.000]
7 N 6.025596e-05 7.780 [ 0.000]
8 O 1.445440e-04 8.160 [-0.500]
9 F 3.630781e-08 4.560 [ 0.000]
10 Ne 2.187762e-05 7.340 [-0.500]
11 Na 1.479108e-06 6.170 [ 0.000]
12 Mg 1.071519e-05 7.030 [-0.500]
13 Al 2.344229e-06 6.370 [ 0.000]
14 Si 1.023293e-05 7.010 [-0.500]
15 P 2.290868e-07 5.360 [ 0.000]
16 S 4.365158e-06 6.640 [-0.500]
17 Cl 3.162278e-07 5.500 [ 0.000]
18 Ar 4.786301e-07 5.680 [-0.500]
19 K 1.202264e-07 5.080 [ 0.000]
20 Ca 6.456542e-07 5.810 [-0.500]
21 Sc 1.122018e-09 3.050 [ 0.000]
22 Ti 2.511886e-08 4.400 [-0.500]
23 V 1.000000e-08 4.000 [ 0.000]
...
96 Cm 1.023293e-22 -9.990 [ 0.000]
97 Bk 1.023293e-22 -9.990 [ 0.000]
98 Cf 1.023293e-22 -9.990 [ 0.000]
99 Es 1.023293e-22 -9.990 [ 0.000]
Creating a New, Modified Abundances Object
------------------------------------------
You can also `call` the object and create a new, modified object.
Create a new :class:`.Abundances` object with a metallicity of -1.5::
>>> abu2 = abu(-1.5)
>>> abu2
Abundances([M/H]=-1.50,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 3.548134e-13 -0.450 [-1.500]
4 Be 7.585776e-13 -0.120 [-1.500]
5 B 1.584893e-11 1.200 [-1.500]
6 C 7.762471e-06 6.890 [-1.500]
...
96 Cm 3.235937e-24 -11.490 [-1.500]
97 Bk 3.235937e-24 -11.490 [-1.500]
98 Cf 3.235937e-24 -11.490 [-1.500]
99 Es 3.235937e-24 -11.490 [-1.500]
You can also input a dictionary of abundances values::
>>> abu2 = abu({"c_h":-1.5})
>>> abu2
Abundances([M/H]=-0.12,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-11 1.050 [ 0.000]
4 Be 2.398833e-11 1.380 [ 0.000]
5 B 5.011872e-10 2.700 [ 0.000]
6 C 7.762471e-06 6.890 [-1.500]
7 N 6.025596e-05 7.780 [ 0.000]
...
96 Cm 1.023293e-22 -9.990 [ 0.000]
97 Bk 1.023293e-22 -9.990 [ 0.000]
98 Cf 1.023293e-22 -9.990 [ 0.000]
99 Es 1.023293e-22 -9.990 [ 0.000]
Abundances Output
-----------------
The :class:`.Abundances` class can output the information in several ways.
If you select a single element (by element name or index), an :class:`.Abund` object will be returned.::
>>> abu['Ca']
Abund(20 Ca N(Ca)/N(H)=2.042e-06 log(eps)=6.310)
>>> abu[10]
Abund(11 Na N(Na)/N(H)=1.479e-06 log(eps)=6.170)
Selecting values in bracket notation will only return the value. Abundance versus H::
>>> abu['Ca_H']
0.0
Abundance versus M::
>>> abu['Ca_M']
-0.0003221051142099841
There are several useful properties that will print out **all** of the abundances.
Print the linear or `N(X)/N(H)` values with `linear`.::
>>> abu.linear
array([1.00000000e+00, 8.51138038e-02, 1.12201845e-11, 2.39883292e-11,
5.01187234e-10, 2.45470892e-04, 6.02559586e-05, 4.57088190e-04,
3.63078055e-08, 6.91830971e-05, 1.47910839e-06, 1.07151931e-05,
...
1.02329299e-22, 1.14815362e-12, 1.02329299e-22, 3.01995172e-13,
1.02329299e-22, 1.02329299e-22, 1.02329299e-22, 1.02329299e-22,
1.02329299e-22, 1.02329299e-22, 1.02329299e-22])
Or you can also use `log`, `logeps`, `xh`, or `xm`. The `log` abundances::
>>> abu.log
array([ 0. , -1.07, -10.95, -10.62, -9.3 , -3.61, -4.22, -3.34,
-7.44, -4.16, -5.83, -4.97, -5.63, -4.25, -6.64, -4.86,
-6.5 , -5.82, -6.92, -5.69, -8.95, -7.1 , -8. , -6.36,
...
-11.1 , -10. , -11.35, -21.99, -21.99, -21.99, -21.99, -21.99,
-21.99, -11.94, -21.99, -12.52, -21.99, -21.99, -21.99, -21.99,
-21.99, -21.99, -21.99])
Abundances in `log(eps)` notation, or `log(N(X)/N(H))+12.0`::
>>> abu.logeps
array([12. , 10.93, 1.05, 1.38, 2.7 , 8.39, 7.78, 8.66, 4.56,
7.84, 6.17, 7.03, 6.37, 7.75, 5.36, 7.14, 5.5 , 6.18,
5.08, 6.31, 3.05, 4.9 , 4. , 5.64, 5.39, 7.45, 4.92,
...
-0.17, 1.11, 0.23, 1.45, 1.38, 1.64, 1.01, 1.13, 0.9 ,
2. , 0.65, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99, 0.06,
-9.99, -0.52, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99])
Bracket notation, relative to H::
>>> abu.xh
array([ 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , -0.5 , 0. , 0.24, 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,
...
0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ])
Bracket notation, relative to M::
>>> abu.xm
array([-3.22105114e-04, -3.22105114e-04, -3.22105114e-04, -3.22105114e-04,
-3.22105114e-04, -3.22105114e-04, -3.22105114e-04, -3.22105114e-04,
-3.22105114e-04, -3.22105114e-04, -3.22105114e-04, -5.00322105e-01,
...
-3.22105114e-04, -3.22105114e-04, -3.22105114e-04, -3.22105114e-04,
-3.22105114e-04, -3.22105114e-04, -3.22105114e-04, -3.22105114e-04,
-3.22105114e-04, -3.22105114e-04, -3.22105114e-04])
You can also return the abundances in formats that are useful for model atmospheres.
Return abundance values in the format for Kurucz model atmospheres::
>>> abu.to_kurucz()
array([ 0.92075543, 0.07836899, -10.98585571, -10.65585571,
-9.33585571, -3.64585571, -4.25585571, -3.37585571,
-7.47585571, -4.19585571, -5.86585571, -4.50585571,
-6.64585571, -4.58585571, -7.11585571, -5.80585571,
...
-20. , -11.97585571, -20. , -12.55585571,
-20. , -20. , -20. , -20. ,
-20. , -20. , -20. ]
Return abundance values in the format for MARCS model atmospheres::
>>> abu.to_marcs()
array([12. , 10.93, 1.05, 1.38, 2.7 , 8.39, 7.78, 8.66, 4.56,
7.84, 6.17, 7.53, 6.37, 7.51, 5.36, 7.14, 5.5 , 6.18,
5.08, 6.31, 3.05, 4.9 , 4. , 5.64, 5.39, 7.45, 4.92,
...
-0.17, 1.11, 0.23, 1.45, 1.38, 1.64, 1.01, 1.13, 0.9 ,
2. , 0.65, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99, 0.06,
-9.99, -0.52, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99])
Other useful properties and methods::
# Return the metallicity as [M/H]
>>> abu.metallicity
0.0
# Return the metallicity as Sum(N(X)/N(H) over all metals
>>> abu.metals
0.0009509329459494126
# Return all of the element symbols
>>> abu.symbol
['H','He','Li','Be','B','C','N','O','F',
...
'U','Np','Pu','Am','Cm','Bk','Cf','Es']
# Return all of the element mass values (in amu).
>>> abu.mass
[1.00794, 4.0026, 6.941, 9.01218, 10.811, 12.0107,
...
244.0, 243.0, 247.0, 247.0, 251.0, 252.0]
| /roland-1.0.0.tar.gz/roland-1.0.0/docs/linelist.rst | 0.808408 | 0.709497 | linelist.rst | pypi |
*********
Synthesis
*********
Using Synthesis in Roland
==========================
|roland| has an :class:`.Abund` class for single abundance values and an :class:`.Abundances` class that holds the abundances for
all elements. These are handy for modifying abundances when running synthesis.
Let's import the :class:`.Abundances` class from the :mod:`.abundance` module::
>>> from roland.abundance import Abundances
Internal Values and Hydrogen/Helium Abundances
----------------------------------------------
Internally, an :class:`.Abundances` object stores the abundances values in linear `N(X)/N(H)` format.
The Hydrogen and Helium abundances are special and are held constant at `N(H)/N(H)=1.0` and `N(He)/N(H)=0.0851138`.
Solar Abundance Values
----------------------
The :class:`.Abundances` object stored internal values for solar abundances. The type of values can be
selected at initialization time. The options are:
- "asplund: Asplund, Grevesse and Sauval (2005)
- "husser": Asplund et al. 2009 chosen for the Husser et al. (2013) Phoenix model atmospheres
- "kurucz": abundances for the Kurucz model atmospheres (Grevesse+Sauval 1998)
- "marcs": abundances for the MARCS model atmospheres (Grevesse+2007 values with CNO abundances from Grevesse+Sauval 2008)
The "asplund" values are used by default.
Initializing
------------
You can initialize an :class:`.Abundances` object using 1) a list of abundances, 2) a dictionary of abundance values, 3) a single
metallicity or 4) just with the default solar values (no input).
This will use the default solar values::
>>> abu = Abundances()
>>> abu
Abundances([M/H]=0.00,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-11 1.050 [ 0.000]
4 Be 2.398833e-11 1.380 [ 0.000]
5 B 5.011872e-10 2.700 [ 0.000]
6 C 2.454709e-04 8.390 [ 0.000]
...
96 Cm 1.023293e-22 -9.990 [ 0.000]
97 Bk 1.023293e-22 -9.990 [ 0.000]
98 Cf 1.023293e-22 -9.990 [ 0.000]
99 Es 1.023293e-22 -9.990 [ 0.000]
Let's start with `[M/H]=-1.0` instead this time:
>>> abu = Abundances(-1.0)
>>> abu
Abundances([M/H]=-1.00,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-12 0.050 [-1.000]
4 Be 2.398833e-12 0.380 [-1.000]
5 B 5.011872e-11 1.700 [-1.000]
6 C 2.454709e-05 7.390 [-1.000]
...
96 Cm 1.023293e-23 -10.990 [-1.000]
97 Bk 1.023293e-23 -10.990 [-1.000]
98 Cf 1.023293e-23 -10.990 [-1.000]
99 Es 1.023293e-23 -10.990 [-1.000]
Now, let's give it a dictionary of abundances values:
>>> abu = Abundances({"MG_H":-0.5,"SI_H":0.24})
>>> abu
Abundances([M/H]=0.00,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-11 1.050 [ 0.000]
4 Be 2.398833e-11 1.380 [ 0.000]
5 B 5.011872e-10 2.700 [ 0.000]
6 C 2.454709e-04 8.390 [ 0.000]
7 N 6.025596e-05 7.780 [ 0.000]
8 O 4.570882e-04 8.660 [ 0.000]
9 F 3.630781e-08 4.560 [ 0.000]
10 Ne 6.918310e-05 7.840 [ 0.000]
11 Na 1.479108e-06 6.170 [ 0.000]
12 Mg 1.071519e-05 7.030 [-0.500]
13 Al 2.344229e-06 6.370 [ 0.000]
14 Si 5.623413e-05 7.750 [ 0.240]
...
Finally, we can give an entire array or list of abundances values. You have give the type of abundance
values you are giving in the second parameter. The options are `linear`, `log`, `logeps`, or `x_h`::
>>> abu = Abundances([12. , 10.93, 1.05, 1.38, 2.7 , 8.39, 7.78,
8.66, 4.56, 7.84, 6.17, 7.03, 6.37, 7.75,
5.36, 7.14, 5.5 , 6.18],'logeps')
>>> abu
Abundances([M/H]=0.00,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-11 1.050 [ 0.000]
4 Be 2.398833e-11 1.380 [ 0.000]
5 B 5.011872e-10 2.700 [ 0.000]
6 C 2.454709e-04 8.390 [ 0.000]
...
97 Bk 1.023293e-22 -9.990 [ 0.000]
98 Cf 1.023293e-22 -9.990 [ 0.000]
99 Es 1.023293e-22 -9.990 [ 0.000]
Modifying an Abundances Object
------------------------------
You can always modify an :class:`.Abundances` object `in place`::
>>> abu['O_H'] = -0.5
Abundances([M/H]=-0.17,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-11 1.050 [ 0.000]
4 Be 2.398833e-11 1.380 [ 0.000]
5 B 5.011872e-10 2.700 [ 0.000]
6 C 2.454709e-04 8.390 [ 0.000]
7 N 6.025596e-05 7.780 [ 0.000]
8 O 1.445440e-04 8.160 [-0.500]
9 F 3.630781e-08 4.560 [ 0.000]
...
96 Cm 1.023293e-22 -9.990 [ 0.000]
97 Bk 1.023293e-22 -9.990 [ 0.000]
98 Cf 1.023293e-22 -9.990 [ 0.000]
99 Es 1.023293e-22 -9.990 [ 0.000]
Or change the metallicity::
>>> abu['M_H'] = -0.5
Abundances([M/H]=-0.50,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 3.548134e-12 0.550 [-0.500]
4 Be 7.585776e-12 0.880 [-0.500]
5 B 1.584893e-10 2.200 [-0.500]
6 C 7.762471e-05 7.890 [-0.500]
7 N 1.905461e-05 7.280 [-0.500]
8 O 1.445440e-04 8.160 [-0.500]
...
95 Am 3.235937e-23 -10.490 [-0.500]
96 Cm 3.235937e-23 -10.490 [-0.500]
97 Bk 3.235937e-23 -10.490 [-0.500]
98 Cf 3.235937e-23 -10.490 [-0.500]
99 Es 3.235937e-23 -10.490 [-0.500]
You can also change the entire metallicity by an increment amount::
>>> abu += 0.5
Abundances([M/H]=0.50,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 3.548134e-11 1.550 [ 0.500]
4 Be 7.585776e-11 1.880 [ 0.500]
5 B 1.584893e-09 3.200 [ 0.500]
6 C 7.762471e-04 8.890 [ 0.500]
7 N 1.905461e-04 8.280 [ 0.500]
...
96 Cm 3.235937e-22 -9.490 [ 0.500]
97 Bk 3.235937e-22 -9.490 [ 0.500]
98 Cf 3.235937e-22 -9.490 [ 0.500]
99 Es 3.235937e-22 -9.490 [ 0.500]
Or the alpha abundances::
>>> abu['alpha'] -= 0.5
Abundances([M/H]=-0.25,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-11 1.050 [ 0.000]
4 Be 2.398833e-11 1.380 [ 0.000]
5 B 5.011872e-10 2.700 [ 0.000]
6 C 2.454709e-04 8.390 [ 0.000]
7 N 6.025596e-05 7.780 [ 0.000]
8 O 1.445440e-04 8.160 [-0.500]
9 F 3.630781e-08 4.560 [ 0.000]
10 Ne 2.187762e-05 7.340 [-0.500]
11 Na 1.479108e-06 6.170 [ 0.000]
12 Mg 1.071519e-05 7.030 [-0.500]
13 Al 2.344229e-06 6.370 [ 0.000]
14 Si 1.023293e-05 7.010 [-0.500]
15 P 2.290868e-07 5.360 [ 0.000]
16 S 4.365158e-06 6.640 [-0.500]
17 Cl 3.162278e-07 5.500 [ 0.000]
18 Ar 4.786301e-07 5.680 [-0.500]
19 K 1.202264e-07 5.080 [ 0.000]
20 Ca 6.456542e-07 5.810 [-0.500]
21 Sc 1.122018e-09 3.050 [ 0.000]
22 Ti 2.511886e-08 4.400 [-0.500]
23 V 1.000000e-08 4.000 [ 0.000]
...
96 Cm 1.023293e-22 -9.990 [ 0.000]
97 Bk 1.023293e-22 -9.990 [ 0.000]
98 Cf 1.023293e-22 -9.990 [ 0.000]
99 Es 1.023293e-22 -9.990 [ 0.000]
Creating a New, Modified Abundances Object
------------------------------------------
You can also `call` the object and create a new, modified object.
Create a new :class:`.Abundances` object with a metallicity of -1.5::
>>> abu2 = abu(-1.5)
>>> abu2
Abundances([M/H]=-1.50,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 3.548134e-13 -0.450 [-1.500]
4 Be 7.585776e-13 -0.120 [-1.500]
5 B 1.584893e-11 1.200 [-1.500]
6 C 7.762471e-06 6.890 [-1.500]
...
96 Cm 3.235937e-24 -11.490 [-1.500]
97 Bk 3.235937e-24 -11.490 [-1.500]
98 Cf 3.235937e-24 -11.490 [-1.500]
99 Es 3.235937e-24 -11.490 [-1.500]
You can also input a dictionary of abundances values::
>>> abu2 = abu({"c_h":-1.5})
>>> abu2
Abundances([M/H]=-0.12,solar=asplund)
Num Name N(X)/N(H) log(eps) [X/H]
1 H 1.000000e+00 12.000 [ ---- ]
2 He 8.511380e-02 10.930 [ ---- ]
3 Li 1.122018e-11 1.050 [ 0.000]
4 Be 2.398833e-11 1.380 [ 0.000]
5 B 5.011872e-10 2.700 [ 0.000]
6 C 7.762471e-06 6.890 [-1.500]
7 N 6.025596e-05 7.780 [ 0.000]
...
96 Cm 1.023293e-22 -9.990 [ 0.000]
97 Bk 1.023293e-22 -9.990 [ 0.000]
98 Cf 1.023293e-22 -9.990 [ 0.000]
99 Es 1.023293e-22 -9.990 [ 0.000]
Abundances Output
-----------------
The :class:`.Abundances` class can output the information in several ways.
If you select a single element (by element name or index), an :class:`.Abund` object will be returned.::
>>> abu['Ca']
Abund(20 Ca N(Ca)/N(H)=2.042e-06 log(eps)=6.310)
>>> abu[10]
Abund(11 Na N(Na)/N(H)=1.479e-06 log(eps)=6.170)
Selecting values in bracket notation will only return the value. Abundance versus H::
>>> abu['Ca_H']
0.0
Abundance versus M::
>>> abu['Ca_M']
-0.0003221051142099841
There are several useful properties that will print out **all** of the abundances.
Print the linear or `N(X)/N(H)` values with `linear`.::
>>> abu.linear
array([1.00000000e+00, 8.51138038e-02, 1.12201845e-11, 2.39883292e-11,
5.01187234e-10, 2.45470892e-04, 6.02559586e-05, 4.57088190e-04,
3.63078055e-08, 6.91830971e-05, 1.47910839e-06, 1.07151931e-05,
...
1.02329299e-22, 1.14815362e-12, 1.02329299e-22, 3.01995172e-13,
1.02329299e-22, 1.02329299e-22, 1.02329299e-22, 1.02329299e-22,
1.02329299e-22, 1.02329299e-22, 1.02329299e-22])
Or you can also use `log`, `logeps`, `xh`, or `xm`. The `log` abundances::
>>> abu.log
array([ 0. , -1.07, -10.95, -10.62, -9.3 , -3.61, -4.22, -3.34,
-7.44, -4.16, -5.83, -4.97, -5.63, -4.25, -6.64, -4.86,
-6.5 , -5.82, -6.92, -5.69, -8.95, -7.1 , -8. , -6.36,
...
-11.1 , -10. , -11.35, -21.99, -21.99, -21.99, -21.99, -21.99,
-21.99, -11.94, -21.99, -12.52, -21.99, -21.99, -21.99, -21.99,
-21.99, -21.99, -21.99])
Abundances in `log(eps)` notation, or `log(N(X)/N(H))+12.0`::
>>> abu.logeps
array([12. , 10.93, 1.05, 1.38, 2.7 , 8.39, 7.78, 8.66, 4.56,
7.84, 6.17, 7.03, 6.37, 7.75, 5.36, 7.14, 5.5 , 6.18,
5.08, 6.31, 3.05, 4.9 , 4. , 5.64, 5.39, 7.45, 4.92,
...
-0.17, 1.11, 0.23, 1.45, 1.38, 1.64, 1.01, 1.13, 0.9 ,
2. , 0.65, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99, 0.06,
-9.99, -0.52, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99])
Bracket notation, relative to H::
>>> abu.xh
array([ 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , -0.5 , 0. , 0.24, 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,
...
0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ])
Bracket notation, relative to M::
>>> abu.xm
array([-3.22105114e-04, -3.22105114e-04, -3.22105114e-04, -3.22105114e-04,
-3.22105114e-04, -3.22105114e-04, -3.22105114e-04, -3.22105114e-04,
-3.22105114e-04, -3.22105114e-04, -3.22105114e-04, -5.00322105e-01,
...
-3.22105114e-04, -3.22105114e-04, -3.22105114e-04, -3.22105114e-04,
-3.22105114e-04, -3.22105114e-04, -3.22105114e-04, -3.22105114e-04,
-3.22105114e-04, -3.22105114e-04, -3.22105114e-04])
You can also return the abundances in formats that are useful for model atmospheres.
Return abundance values in the format for Kurucz model atmospheres::
>>> abu.to_kurucz()
array([ 0.92075543, 0.07836899, -10.98585571, -10.65585571,
-9.33585571, -3.64585571, -4.25585571, -3.37585571,
-7.47585571, -4.19585571, -5.86585571, -4.50585571,
-6.64585571, -4.58585571, -7.11585571, -5.80585571,
...
-20. , -11.97585571, -20. , -12.55585571,
-20. , -20. , -20. , -20. ,
-20. , -20. , -20. ]
Return abundance values in the format for MARCS model atmospheres::
>>> abu.to_marcs()
array([12. , 10.93, 1.05, 1.38, 2.7 , 8.39, 7.78, 8.66, 4.56,
7.84, 6.17, 7.53, 6.37, 7.51, 5.36, 7.14, 5.5 , 6.18,
5.08, 6.31, 3.05, 4.9 , 4. , 5.64, 5.39, 7.45, 4.92,
...
-0.17, 1.11, 0.23, 1.45, 1.38, 1.64, 1.01, 1.13, 0.9 ,
2. , 0.65, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99, 0.06,
-9.99, -0.52, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99])
Other useful properties and methods::
# Return the metallicity as [M/H]
>>> abu.metallicity
0.0
# Return the metallicity as Sum(N(X)/N(H) over all metals
>>> abu.metals
0.0009509329459494126
# Return all of the element symbols
>>> abu.symbol
['H','He','Li','Be','B','C','N','O','F',
...
'U','Np','Pu','Am','Cm','Bk','Cf','Es']
# Return all of the element mass values (in amu).
>>> abu.mass
[1.00794, 4.0026, 6.941, 9.01218, 10.811, 12.0107,
...
244.0, 243.0, 247.0, 247.0, 251.0, 252.0]
| /roland-1.0.0.tar.gz/roland-1.0.0/docs/spectrumizer.rst | 0.800809 | 0.777849 | spectrumizer.rst | pypi |
import pydot
from spacy.tokens import Token
import visualise_spacy_tree
import visualise_spacy_pattern
from role_pattern_nlp import util
ROLE_COLOURS = [
'deeppink1',
'purple',
'dodgerblue',
'cyan',
]
NULL_COLOUR = 'grey'
DEFAULT_COLOUR = 'aquamarine'
DEFAULT_STYLE_ATTRS = {
'fontname': 'palatino',
'fontsize': 10,
}
DEFAULT_SUBGRAPH_ATTRS = {
# 'color': 'grey',
# 'style': 'solid',
'penwidth': 1,
}
DEFAULT_NODE_ATTRS = {
'color': DEFAULT_COLOUR,
'shape': 'box',
'style': 'rounded',
'penwidth': 2,
'margin': 0.25,
**DEFAULT_STYLE_ATTRS,
}
LEGEND_ATTRS = {
'ranksep': 0,
'penwidth': 0,
}
def get_label_colour_dict(labels):
labels = [l for l in labels if l] # Ignore null labels
labels = util.unique_list(labels)
label2colour = {label: ROLE_COLOURS[i] for i, label in enumerate(labels)}
return label2colour
def assign_role_colours(graph, token_labels, label2colour):
nodes = graph.get_nodes()
for node, label in zip(nodes, token_labels):
if label:
colour = label2colour[label]
node.set_color(colour)
else:
node.set_color(NULL_COLOUR)
return graph
def create_legend(label2colour):
legend = pydot.Dot(graph_type='graph', **DEFAULT_STYLE_ATTRS)
legend_cluster = pydot.Subgraph(graph_name='cluster_legend', **DEFAULT_STYLE_ATTRS, **LEGEND_ATTRS)
rows = []
for label, colour in label2colour.items():
row = '<td>{0}</td><td bgcolor="{1}" width="30"></td>'.format(label, colour)
row = '<tr>{}</tr>'.format(row)
rows.append(row)
row = '<tr><td>Null</td><td bgcolor="{}" width="30"></td></tr>'.format(NULL_COLOUR)
rows.append(row)
row = '<tr><td width="100">No label</td><td bgcolor="{}" width="30"></td></tr>'.format(DEFAULT_COLOUR)
rows.append(row)
table = '<table border="0" cellborder="1" cellspacing="0" cellpadding="4">{}</table>'.format('\n'.join(rows))
table = '<font face="{0}" size="{1}">{2}</font>'.format(
DEFAULT_STYLE_ATTRS['fontname'],
DEFAULT_STYLE_ATTRS['fontsize'] - 2,
table,
)
html = '<{}>'.format(table)
html = pydot.Node(name='legend_table', label=html, shape='none')
legend_cluster.add_node(html)
legend.add_subgraph(legend_cluster)
return legend
def nodes_with_label(nodes, labels, with_label):
nodes_with_label = []
for node, label in zip(nodes, labels):
if label == with_label:
nodes_with_label.append(node)
return nodes_with_label
def add_role_label_clusters(graph, labels):
new_graph = pydot.Dot(graph_type='graph', **DEFAULT_STYLE_ATTRS)
all_nodes = graph.get_nodes()
for label in util.unique_list(labels):
nodes = nodes_with_label(all_nodes, labels, label)
if not label:
for node in nodes:
new_graph.add_node(node)
else:
subgraph_name = 'cluster_' + label
subgraph = pydot.Subgraph(graph_name=subgraph_name, **DEFAULT_SUBGRAPH_ATTRS)
subgraph.set_label(label)
for node in nodes:
subgraph.add_node(node)
new_graph.add_subgraph(subgraph)
for edge in graph.get_edges():
new_graph.add_edge(edge)
return new_graph
def pattern_to_pydot(pattern, legend=False):
spacy_dep_pattern = pattern.spacy_dep_pattern
labels_depth_order = pattern.token_labels_depth_order
labels_original_order = pattern.token_labels
graph = visualise_spacy_pattern.to_pydot(spacy_dep_pattern)
for node in graph.get_nodes():
for k, v in DEFAULT_NODE_ATTRS.items():
node.set(k, v)
if pattern.label2colour:
label2colour = pattern.label2colour
else:
label2colour = get_label_colour_dict(labels_original_order)
pattern.label2colour = label2colour
graph = assign_role_colours(graph, labels_depth_order, label2colour)
if legend:
legend = create_legend(label2colour)
return graph, legend
return graph
def match_to_pydot(match, label2colour={}, legend=False):
labels = match.keys()
if not label2colour:
label2colour = get_label_colour_dict(labels)
doc = util.doc_from_match(match)
try:
Token.set_extension('plot', default=DEFAULT_NODE_ATTRS)
except:
pass
for token in doc:
colour = DEFAULT_COLOUR
for match_token in match.match_tokens:
if match_token.i == token.i:
colour = NULL_COLOUR
for label, labelled_tokens in match.items():
if token.i in [t.i for t in labelled_tokens]:
colour = label2colour[label]
token._.plot['color'] = colour
graph = visualise_spacy_tree.to_pydot(doc)
for edge in graph.get_edges():
for k, v in DEFAULT_STYLE_ATTRS.items():
edge.set(k, v)
if legend:
legend = create_legend(label2colour)
return graph, legend
return graph | /role_pattern_nlp-0.2.0-py3-none-any.whl/role_pattern_nlp/role_pattern_vis.py | 0.407333 | 0.285958 | role_pattern_vis.py | pypi |
from pprint import pformat, pprint
import itertools
import spacy_pattern_builder
from role_pattern_nlp.role_pattern import RolePattern
from role_pattern_nlp import mutate
from role_pattern_nlp import validate
from role_pattern_nlp import util
from role_pattern_nlp.constants import (
DEFAULT_BUILD_PATTERN_TOKEN_FEATURE_DICT,
DEFAULT_REFINE_PATTERN_FEATURE_COMBINATIONS,
DEFAULT_REFINE_PATTERN_FEATURE_DICT,
)
from role_pattern_nlp.exceptions import (
FeaturesNotInFeatureDictError,
RolePatternDoesNotMatchExample,
)
class RolePatternBuilder:
def __init__(self, feature_dict):
self.feature_dict = feature_dict
def build(self, match_example, features=[], **kwargs):
if not features:
features = self.feature_dict.keys()
self.validate_features(features)
feature_dict = {k: v for k, v in self.feature_dict.items() if k in features}
role_pattern = build_role_pattern(
match_example, feature_dict=feature_dict, **kwargs
)
role_pattern.builder = self
return role_pattern
def refine(
self,
pattern,
pos_matches,
neg_matches,
feature_dicts=[DEFAULT_BUILD_PATTERN_TOKEN_FEATURE_DICT],
fitness_func=mutate.pattern_fitness,
tree_extension_depth=2,
):
training_match = pos_matches[0] # Take first pos_match as blue print
all_matches = pos_matches + neg_matches
docs = [util.doc_from_match(match) for match in all_matches]
docs = util.unique_list(docs)
new_pattern_feature_dict = feature_dicts[0]
def get_matches(pattern):
matches = [pattern.match(d) for d in docs]
matches = util.flatten_list(matches)
return matches
def get_fitnesses(patterns, pos_matches, neg_matches):
matches = [get_matches(variant) for variant in patterns]
fitnesses = [
fitness_func(variant, matches, pos_matches, neg_matches)
for variant, matches in zip(patterns, matches)
]
return fitnesses
def get_best_fitness_score(fitnesses):
best_fitness_score = max([fitness['score'] for fitness in fitnesses])
return best_fitness_score
def get_node_level_variants(patterns):
pattern_variants = [
mutate.yield_node_level_pattern_variants(
variant, variant.training_match, feature_dicts
)
for variant in patterns
]
pattern_variants = util.flatten_list(pattern_variants)
return pattern_variants
def get_tree_level_variants(patterns):
pattern_variants = [
mutate.yield_tree_level_pattern_variants(
variant, variant.training_match, new_pattern_feature_dict
)
for variant in patterns
]
pattern_variants = util.flatten_list(pattern_variants)
return pattern_variants
def get_best_variants(variants, fitnesses, best_fitness_score):
best_variants = [
variant
for variant, fitness in zip(pattern_variants, fitnesses)
if fitness['score'] == best_fitness_score
]
return best_variants
def get_shortest_variants(variants):
shortest_length = min(
[len(variant.spacy_dep_pattern) for variant in variants]
)
shortest_variants = [
variant
for variant in variants
if len(variant.spacy_dep_pattern) == shortest_length
]
return shortest_variants
def remove_duplicates(variants):
unique_variants = []
dep_patterns_already = []
for variant in variants:
if variant.spacy_dep_pattern not in dep_patterns_already:
unique_variants.append(variant)
dep_patterns_already.append(variant.spacy_dep_pattern)
return unique_variants
pattern.training_match = training_match
pattern_variants = [pattern]
for i in range(tree_extension_depth):
pattern_variants += get_tree_level_variants(pattern_variants)
pattern_variants = remove_duplicates(pattern_variants)
fitnesses = get_fitnesses(pattern_variants, pos_matches, neg_matches)
best_fitness_score = get_best_fitness_score(fitnesses)
if best_fitness_score == 1.0:
pattern_variants = get_best_variants(
pattern_variants, fitnesses, best_fitness_score
)
pattern_variants = get_shortest_variants(pattern_variants)
return pattern_variants
pattern_variants = get_node_level_variants(pattern_variants)
fitnesses = get_fitnesses(pattern_variants, pos_matches, neg_matches)
best_fitness_score = get_best_fitness_score(fitnesses)
# util.interactive_pattern_evaluation(
# pattern_variants, fitnesses, fitness_floor=0
# )
pattern_variants = get_best_variants(
pattern_variants, fitnesses, best_fitness_score
)
pattern_variants = get_shortest_variants(pattern_variants)
return pattern_variants
def validate_features(self, features):
features_not_in_feature_dict = [
f for f in features if f not in self.feature_dict
]
if features_not_in_feature_dict:
raise FeaturesNotInFeatureDictError(
'RolePatternBuilder received a list of features which includes features that are not present in the feature_dict. Features not present: {}'.format(
', '.join(features_not_in_feature_dict)
)
)
def build_role_pattern(
match_example,
feature_dict=DEFAULT_BUILD_PATTERN_TOKEN_FEATURE_DICT,
validate_pattern=True,
):
doc = util.doc_from_match(match_example)
util.annotate_token_depth(doc)
tokens = util.flatten_list(match_example.values())
tokens = [
doc[idx] for idx in util.token_idxs(tokens)
] # Ensure that tokens have the newly added depth attribute
nx_graph = util.doc_to_nx_graph(doc)
match_tokens = util.smallest_connected_subgraph(tokens, nx_graph, doc)
spacy_dep_pattern = spacy_pattern_builder.build_dependency_pattern(
doc, match_tokens, feature_dict=feature_dict
)
token_labels = build_pattern_label_list(match_tokens, match_example)
role_pattern = RolePattern(spacy_dep_pattern, token_labels)
match_tokens_depth_order = spacy_pattern_builder.util.sort_by_depth(
match_tokens
) # Should be same order as the dependency pattern
token_labels_depth_order = build_pattern_label_list(
match_tokens_depth_order, match_example
)
role_pattern.token_labels_depth_order = token_labels_depth_order
if validate_pattern:
pattern_does_match_example, matches = validate.pattern_matches_example(
role_pattern, match_example
)
if not pattern_does_match_example:
spacy_dep_pattern = role_pattern.spacy_dep_pattern
message = [
'Unable to match example: \n{}'.format(pformat(match_example)),
'From doc: {}'.format(doc),
'Constructed role pattern: \n',
'spacy_dep_pattern: \n{}'.format(pformat(spacy_dep_pattern)),
'token_labels: \n{}'.format(
pformat(role_pattern.token_labels_depth_order)
),
]
if matches:
message.append('Matches found:')
for match in matches:
message += [
'Match tokens: \n{}'.format(pformat(match.match_tokens)),
'Slots: \n{}'.format(pformat(match)),
]
else:
message.append('Matches found: None')
message = '\n{}'.format('\n'.join(message))
raise RolePatternDoesNotMatchExample(message)
return role_pattern
def build_pattern_label_list(match_tokens, match_example):
match_tokens = sorted(match_tokens, key=lambda t: t.i)
match_token_labels = []
for w in match_tokens:
label = None
for label_, tokens in match_example.items():
token_idxs = [t.i for t in tokens]
if (
w.i in token_idxs
): # Use idxs to prevent false inequality caused by state changes
label = label_
match_token_labels.append(label)
return match_token_labels | /role_pattern_nlp-0.2.0-py3-none-any.whl/role_pattern_nlp/role_pattern_builder.py | 0.568775 | 0.255239 | role_pattern_builder.py | pypi |
from pprint import pprint
import spacy_pattern_builder
from role_pattern_nlp import role_pattern_builder
from role_pattern_nlp.role_pattern import RolePattern
from role_pattern_nlp.match import RolePatternMatch
from role_pattern_nlp import util
def pattern_fitness(pattern, matches, pos_matches, neg_matches):
true_pos = [m for m in pos_matches if m in matches]
true_neg = [m for m in neg_matches if m not in matches]
n_true_pos = len(true_pos)
n_true_neg = len(true_neg)
pos_score = n_true_pos / len(pos_matches)
neg_score = n_true_neg / len(neg_matches)
pos_score_weighted = pos_score * 0.5
neg_score_weighted = neg_score * 0.5
fitness_score = pos_score_weighted + neg_score_weighted
return {
'score': fitness_score,
'pos_score': pos_score,
'neg_score': neg_score,
'true_pos': true_pos,
'true_neg': true_neg,
}
def yield_node_level_pattern_variants(role_pattern, training_match, feature_dicts):
spacy_dependency_pattern = role_pattern.spacy_dep_pattern
match_token_labels = role_pattern.token_labels
match_tokens = training_match.match_tokens
dependency_pattern_variants = spacy_pattern_builder.yield_node_level_pattern_variants(
spacy_dependency_pattern, match_tokens, feature_dicts
)
for dependency_pattern_variant in dependency_pattern_variants:
assert len(dependency_pattern_variant) == len(spacy_dependency_pattern)
role_pattern_variant = RolePattern(
dependency_pattern_variant, match_token_labels
)
role_pattern_variant.training_match = training_match
yield role_pattern_variant
def yield_tree_level_pattern_variants(role_pattern, training_match, feature_dict):
match_tokens = training_match.match_tokens
extended_match_tokens = spacy_pattern_builder.yield_extended_trees(match_tokens)
doc = util.doc_from_match(training_match)
for match_tokens in extended_match_tokens:
token_labels = role_pattern_builder.build_pattern_label_list(
match_tokens, training_match
)
dependency_pattern_variant = spacy_pattern_builder.build_dependency_pattern(
doc, match_tokens, feature_dict=feature_dict
)
assert (
len(dependency_pattern_variant) == len(role_pattern.spacy_dep_pattern) + 1
)
assert len(token_labels) == len(role_pattern.token_labels) + 1
role_pattern_variant = RolePattern(dependency_pattern_variant, token_labels)
role_pattern_variant.builder = role_pattern.builder
new_training_match = RolePatternMatch(training_match)
new_training_match.match_tokens = match_tokens
role_pattern_variant.training_match = new_training_match
yield role_pattern_variant | /role_pattern_nlp-0.2.0-py3-none-any.whl/role_pattern_nlp/mutate.py | 0.440229 | 0.439807 | mutate.py | pypi |
# Role Questions
This repository contains the official implementation of the method described in: [Asking It All: Generating Contextualized Questions for Any Semantic Role](http://https://aclanthology.org/2021.emnlp.XXXX)
by [Valentina Pyatkin](https://valentinapy.github.io) (BIU), [Paul Roit](https://paulroit.com) (BIU), [Julian Michael](https://julianmichael.org), [Reut Tsarfaty](https://nlp.biu.ac.il/~rtsarfaty/) (BIU, AI2) [Yoav Goldberg](https://u.cs.biu.ac.il/~yogo/) (BIU, AI2) and [Ido Dagan](https://u.cs.biu.ac.il/~dagani/) (BIU)
Paper Abstract:
> Asking questions about a situation is an inherent step towards understanding it.
> To this end, we introduce the task of role question generation, which, given a predicate mention and a passage, requires producing a set of questions asking about all possible semantic roles of the predicate. We develop a two-stage model for this task, which first produces a context-independent question prototype for each role and then revises it to be contextually appropriate for the passage.
> Unlike most existing approaches to question generation, our approach does not require conditioning on existing answers in the text.
> Instead, we condition on the type of information to inquire about, regardless of whether the answer appears explicitly in the text, could be inferred from it, or should be sought elsewhere.
> Our evaluation demonstrates that we generate diverse and well-formed questions for a large, broad-coverage ontology of predicates and roles.
## Introduction
This paper presents a method to generate questions that inquire about certain semantic concepts that may appear in a text.
For example, in the text:
> John **sold** the pen to Mary
the predicate word sold evokes a semantic frame with the
following roles fulfilled by the verb's explicit arguments: _John_ as <u>the seller</u>, _the pen_ as <u>the thing sold</u> and _Mary_ as <u><the buyer</u>.
The predicate sell also evokes another semantic role which is not fulfilled in the context of this sentence: <u>The price paid</u>.
In this work we would like to create grammatical, fluent and fit-to-context questions that target each such role, whether it is fulfilled or not.
Given the source sentence and the target role, we would like to create the following questions:
* John **sold** the pen to Mary; <u>The seller</u> ==> Who sold the pen to Mary?
* John **sold** the pen to Mary; <u>The buyer</u> ==> Who did John sell the pen to?
* John **sold** the pen to Mary; <u>The thing sold</u> ==> What did John sell?
* John **sold** the pen to Mary; <u>The price paid</u> ==> What did John sell the pen for?
This work relies on a semantic ontology to list and identify all semantic roles, and is implemented on top of [PropBank](https://github.com/propbank/).
If you simply want to predict questions using our method, check the installation requirements and the "Easy Way to Predict Role Questions" section.
For reproducibility reasons we also detail how to obtain various steps of our pipeline (you don't need to follow these if you only want to use the model for inference).
## Installation Requirements:
The following python libraries are required:
- torch==1.7.1
- spacy==2.3.2
- transformers==4.1.1
- allennlp==1.2.0rc1
This project uses data and code from: the [QA based Nominal SRL project](https://github.com/kleinay/QANom)
This project (QANom) can be installed with pip (pip install qanom).
## Easy Way to Predict Role Questions:
If you just want to predict Role Questions for a given context and predicate(s), you can use a simple script that we prepared.
You should download and unzip the [contextualizer model](https://nlp.biu.ac.il/~pyatkiv/roleqsmodels/question_transformation.tar.gz) .
To run the script you can use the following command:
> python predict_questions.py --infile <INPUT_FILE_PATH> --outfile <OUTPUT_FILE_PATH> --transformation_model_path <PATH_TO_DOWNLOADED_CONTEXTUALIZER_MODEL> --device_number <NUMBER_OF_CUDA_DEVICE> --with_adjuncts <TRUE or FALSE>
The input file should be a jsonl file (check debug_file.jsonl for an example), containing the following information: an instance id (id),
the sentence the target predicate appears in (sentence), the target index of the predicate in the sentence (target_idx), the target lemma (target_lemma),
the POS of the target (target_pos), the predicate sense in terms of OntoNotes (predicate_sense). Our model works best with disambiguated predicate senses
in terms of OntoNotes, so if you have a predicate disambiguation system or gold sense information please include it. Otherwise you could simply choose the first
sense by putting 1 in that field, with some performance tradeoffs.
## Data Dependencies
The following datasets are required in-order to re-create our data and evaluation.
Our scripts will refer to the root directories after downloading or after extracting these resources.
- For collecting prototypes and training the contextualizer:
-- [OntoNotes 5.0](https://catalog.ldc.upenn.edu/LDC2013T19) (The frame files under English metadata) _Download from the LDC_
-- [OntoNotes 5.0 (2012) in CoNLL format](https://github.com/ontonotes/conll-formatted-ontonotes-5.0/). _Please convert the gold skeleton files to gold conll files_
-- [QA-SRL Bank 2.0](http://qasrl.org/data/qasrl-v2.tar).
-- [QANom](https://github.com/kleinay/QANom)
-- [NomBank 1.0](https://nlp.cs.nyu.edu/meyers/nombank/nombank.1.0.tgz)
-- [PennTreeBank v3](https://catalog.ldc.upenn.edu/LDC99T42) _Download from the LDC_
- For evaluation:
-- [Gerber and Chai](http://lair.cse.msu.edu/projects/implicit_annotations.html)
-- [ON5V](http://projects.cl.uni-heidelberg.de/india/)
## Model Dependencies
We have used the publicly available verbal SRL parser by AllenNLP, the download link is:
[bert-base-srl-2020.03.24](https://storage.googleapis.com/allennlp-public-models/bert-base-srl-2020.03.24.tar.gz)
AllenNLP may change storage or re-train this model with version changes, the latest can be found here: [SRL in AllenNLP](https://demo.allennlp.org/semantic-role-labeling)
We are also extensively using BERT and BART models via the huggingface python library, but they aren't explicitly mentioned due to their ubiquitousness.
## Data Preprocessing Instructions
All scripts are run from the root directory of this project.
#### OntoNotes
We preprocess OntoNotes into a json-lines format, where each output line contains a sentence,
its frames (predicate and arguments). Given co-ref data we enrich each frame with other mentions
of explicit arguments from any sentence in the document.
These would be marked as implicit arguments bearing the same role as their explicit mentions.
Note that more implicit arguments could be present in the text, but they are unmarked in OntoNotes.
> python ./ontonotes/preprocess_ontonotes.py --conll_onto_path <CONLL_FORMATTED_ONTONOTES_DIR>
This will generate ./ontonotes/ontonotes.[train|dev|test].jsonl
#### QA-SRL and QANom
> python ./qasrl/preprocess_qasrl_bank.py --qasrl_path <QASRL_V2_DIR>
> pyhton ./qasrl/preprocess_qanom --qanom_dir <QA_NOM_DIR>
#### Frames
We process the PropBank frame files that were distributed with the OntoNotes 5.0 release in 2012.
Since then PropBank frame files had another large release (The Unified PropBank of 2016)
but to stay compatible with the predicate-sense annotation in OntoNotes we refrain from using it.
> python ./role_lexicon/preprocess_ontonotes_roles.py --onto_dir <ONTONOTES_RELEASE_DIR>
#### NomBank
We use NomBank to train a nominal SRL parser, since the annotations of NomBank are provided on top of the Penn TreeBank
we have to preprocess both external resources together.
_Note_: Verify that the Penn TreeBank directory contains the trees in MRG format.
> python ./nombank/preprocess_nombank2.py --nombank_dir <NOMBANK_DIR> --penntreebank_dir <PTB_DIR>
## Aligning QA-SRL with PropBank
This part is described in section 4.1 of our paper.
It consists of joint processing of two datasets, OntoNotes (annotated with PropBank SRL)
and QA-SRL-Bank (annotated with questions and answers) each with parsers of the other formalism.
- QA-SRL Bank is parsed with PropBank SRL parser, producing sentences with role-labeled spans.
Then, the annotated answers are aligned with the labelled spans, and the label (A0) is applied to the QA-SRL question-answer pair.
- OntoNotes gold arguments are re-labelled with QA-SRL parser's questions.
The produced question is added to the gold argument and role label.
However, the SRL parser actually consists of two different models, one for Verbal SRL (publicly available)
and one for Nominal SRL (which we trained using the same model architecture and hyper-params as the verbal one).
We run the verbal SRL parser on top of QA-SRL Bank, which only contains verbal predicates,
and the nominal SRL parser on top of QA-Nom, which has only deverbal noun predicates.
Moreover, while detecting verbal predicates is rather easy with a simple PoS tagger,
detecting noun predicates is a more delicate task.
For this purpose we train a nominal predicate classifier using nominal predicates in OntoNotes.
#### Training Nominal Predicate Classifier with OntoNotes 5 Data
The following command will use pre-processed ontonotes.[train|dev].jsonl files
to train a nominal predicate identifier:
> python ./ontonotes/finetune_predicate_indicator.py --model_name bert-base-uncased
The trained model will be saved under: ./experiments/nom_pred
#### Running nominal predicate detector using the trained model.
The following will take in sentences and output the same sentences with their detected noun predicates.
The input and output should both be .jsonl files (JSON-lines, JSON object per line)
with a property named 'text'. The output file will contain all the fields of the input with
the added property: "target_indicator" that would contain a list of token indices that correspond to a predicate.
> python ./ontonotes/predict_predicate_indicators.py --in_path <IN_FILE.jsonl> --out_path <OUT_FILE.jsonl>
#### Training Nominal SRL Parser using NomBank Data
To train our nominal SRL parser we use (and adjust) the following [config file from AllenNLP](https://github.com/allenai/allennlp-models/blob/main/training_config/structured_prediction/bert_base_srl.jsonnet).
#### Predicting questions for OntoNotes
We predict questions for OntoNotes by using the question generation part of the [QA-SRL model of Fitzgerald et al.](https://github.com/nafitzgerald/nrl-qasrl).
#### Aligning Predicted SRL arguments with QA-SRL question answer pairs.
- Aligning annotated QA-SRL question-answer pairs with predicted (verbal) SRL spans and labels
> python align_QASRL.py
- Aligning annotated QA-Nom question-answer pairs with predicated (nominal) SRL spans and labels
> python align_QANOM.py
#### Merging and Unifying and further Processing QA-SRL datasets.
The previous step results in multiple intermediate results, we would like to unify these:
- QA-SRL and QA-Nom that are aligned to a PropBank role and unified into a single file.
- Merged with QA-SRL Frame-Aligned Bank
- Decompose Questions to its prototype form
TODO @plroit
> python ./qasrl/unify_qasrl_qanom_roles_placeholder_fillers.py
## Training the Contextualizer
#### Preparing Training Data
This step creates source and target training files for the seq2seq model employed by the contextualizer.
> python ./bart_transformation/create_training_data_question_transf.py
#### Running the Training Process
We trained the contextualizer using Huggingface's [BART seq2seq implementation (legacy)](https://github.com/huggingface/transformers/tree/master/examples/legacy/seq2seq).
## Running Prototype Selector
This step may run for a long time, it is designed to run on parallel on 10 nodes
by supplying an index from 0 to 9 (10 is a hardcoded value).
Then you can run this script with a special option to merge and process all outputs.
TODO @plroit
> python ./prototypes/calc_prototype_accuracy.py --index 0
> python ./prototypes/calc_prototype_accuracy.py --index 1
> ...
> python ./prototypes/calc_prototype_accuracy.py --index 9
> python ./prototypes/calc_prototype_accuracy.py --merge_all
| /roleqgen-0.0.1.tar.gz/roleqgen-0.0.1/README.md | 0.846895 | 0.903038 | README.md | pypi |
import networkx as nx
import numpy as np
from scipy.sparse import identity
from sklearn.preprocessing import normalize
def compute_iter(X, T_indptr, T_data, theta, n, w, dim):
for i in range(n):
a, b = T_indptr[i:i+2]
probabilities = np.expand_dims(T_data[a:b], -1)
phi = np.mean(np.exp(1j * probabilities * theta), axis=0)
X[i, w*dim:(w+1)*dim] = np.concatenate([phi.real, phi.imag])
def compute_embedding(X, H, n, theta, walk_len=3, offset=0):
dim = 2 * theta.shape[1]
T = H.copy()
for w in range(walk_len):
T @= H
compute_iter(X, T.indptr, T.data, theta, n, w + offset*walk_len, dim)
class RoleWalk:
def __init__(
self, walk_len=3, n_samples=10, bounds=(1e-3, 100),
embedding_dim=2, random_state=0
):
self.walk_len = walk_len
self.n_samples = n_samples
self.bounds = bounds
self.embedding_dim = embedding_dim
# timesteps
theta = np.linspace(bounds[0], bounds[1], n_samples)
theta = theta[None, :].astype(np.float32)
self.theta = theta
self.random_state = random_state
def fit_transform(self, G):
n = len(G.nodes)
A = nx.to_scipy_sparse_matrix(G)
# extract raw embedding from sampling the characteristic function
if nx.is_directed(G):
dim = 4 * self.n_samples * self.walk_len
X = np.zeros((n, dim), dtype=np.float32)
H = normalize(identity(n) + A, norm="l1")
H_T = normalize(identity(n) + A.T, norm="l1")
del A
compute_embedding(X, H, n, self.theta, self.walk_len, offset=0)
compute_embedding(X, H_T, n, self.theta, self.walk_len, offset=1)
else:
dim = 2 * self.n_samples * self.walk_len
X = np.zeros((n, dim), dtype=np.float32)
H = normalize(A, norm="l1")
compute_embedding(X, H, n, self.theta, self.walk_len)
# random projection
if self.embedding_dim is not None:
r = np.random.RandomState(self.random_state)
U = r.random(
size=(X.shape[1], self.embedding_dim)).astype(np.float32)
Q, _ = np.linalg.qr(U)
X = X @ Q
return X
def fit_predict(
self, G,
min_n_roles=2,
max_n_roles=10,
method="kmeans",
metric="silhouette"
):
if not isinstance(G, np.ndarray):
X = self.fit_transform(G)
else:
X = G
if metric == "silhouette":
from sklearn.metrics import silhouette_score as get_score
elif metric == "calinski_harabasz":
from sklearn.metrics import calinski_harabasz_score as get_score
if method == "kmeans":
from sklearn.cluster import KMeans as Clusterer
X += np.random.normal(size=X.shape, scale=1e-5) # avoid duplicates
elif method == "agglomerative":
from sklearn.cluster import AgglomerativeClustering as Clusterer
elif method == "spectral":
from sklearn.cluster import SpectralClustering as Clusterer
best_score = -float("inf")
for i in range(min_n_roles, max_n_roles+1):
labels = Clusterer(n_clusters=i).fit_predict(X)
score = get_score(X, labels)
if score > best_score:
best_score = score
best_y = labels[:]
return best_y | /rolewalk-1.0.1-py3-none-any.whl/rolewalk.py | 0.651909 | 0.557665 | rolewalk.py | pypi |
# RolexBoost
Unofficial implementation of D. Yang, H. Lee and D. Lim, "RolexBoost: A Rotation-Based Boosting Algorithm With Adaptive Loss Functions," in IEEE Access, vol. 8, pp. 41037-41044, 2020, doi: 10.1109/ACCESS.2020.2976822.
This is the course project of Fundamentals of Machine Learning, Tsinghua University, 2020 Autumn.
## Installation
```bash
pip install rolexboost
```
## API reference
We provided scikit-learn-like API for the RolexBoost algorithm proposed in the paper,
together with the RotationForest and FlexBoost, which are the source of RolexBoost's idea.
Note that
1. Only classifiers are provided. We did not implemented the regressors because they are not mentioned in the paper, while this project is intended as a reproduction.
2. We only ensures that the `fit` and `predict` API works well. Some others, such as `score`, may be functional thanks to the scikit-learn `BaseEstimator` and `ClassifierMixin` base classes, but still others, such as `fit_predict` or `predict_proba`, are currently unavailable.
We might implement those two in the future if someone is interested in this project.
### Basic Example
```python
>>> import pandas as pd
>>> import numpy as np
>>> from rolexboost import RolexBoostClassifier, FlexBoostClassifier, RotationForestClassifier
>>> clf = RolexBoostClassifier() # Or the other two classifiers
>>> df = pd.DataFrame({"A": [2,6,5,7,1,8], "B":[8,5,2,3,4,6], "C": [3,9,5,4,6,1], "Label": [0,1,1,0,0,1]})
>>> df
A B C Label
0 2 8 3 0
1 6 5 9 1
2 5 2 5 1
3 7 3 4 0
4 1 4 6 0
5 8 6 1 1
>>> X = df[["A", "B", "C"]]
>>> y = df["Label"]
>>> clf.fit(X, y)
RolexBoostClassifier()
>>> clf.predict(X)
array([0, 1, 1, 0, 0, 1], dtype=int64)
>>> test_X = np.array([
... [3,1,2],
... [2,5,1],
... [5,1,7]
... ])
>>> clf.predict(test_X)
array([1, 0, 1], dtype=int64)
```
### API References
#### Rotation Forest
```python
RotationForestClassifier(
n_estimators=100,
n_features_per_subset=3,
bootstrap_rate=0.75,
criterion="gini",
splitter="best",
max_depth=None,
min_samples_split=2,
min_samples_leaf=1,
min_weight_fraction_leaf=0.0,
max_features=None,
random_state=None,
max_leaf_nodes=None,
min_impurity_decrease=0.0,
min_impurity_split=None,
class_weight=None,
presort="deprecated",
ccp_alpha=0.0,
)
```
- `n_estimators`: number of base estimators
- `n_features_per_subset`: number of features in each subset
- `bootstrap_rate`: ratio of samples bootstrapped in the original dataset
All other parameters are passed to the `DecisionTreeClassifier` of scikit-learn. Please refer to [their documentation](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html#sklearn.tree.DecisionTreeClassifier) for details.
Note:
In the algorithm description, a parameter controls the number of subsets, and the number of features is derived from it.
However, the validation part of the paper says that "the number of features in each subset was set to three".
In our framework, the parameter `n_samples_per_subset` is thus formulated in this way to make the benchmark evaluation easier.
#### FlexBoost
```python
FlexBoostClassifier(
n_estimators=100,
K=0.5,
criterion="gini",
splitter="best",
max_depth=1,
min_samples_split=2,
min_samples_leaf=1,
min_weight_fraction_leaf=0.0,
max_features=None,
random_state=None,
max_leaf_nodes=None,
min_impurity_decrease=0.0,
min_impurity_split=None,
class_weight=None,
presort="deprecated",
ccp_alpha=0.0,
)
```
- `n_estimators`: number of base estimators
- `K`: the parameter to control the "aggressiveness" and "conservativeness" in the adaptive loss function choosing process. It should be a number between 0 and 1.
All other parameters are passed to the `DecisionTreeClassifier` of scikit-learn. Please refer to [their documentation](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html#sklearn.tree.DecisionTreeClassifier) for details.
The default parameter for `max_depth` is 1, because FlexBoost is a modification of AdaBoost, and they should converge to the same result when `K=1`.
In [scikit-learn implementation of AdaBoost](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html#sklearn.ensemble.AdaBoostClassifier), the default `max_depth` for the DecisionTreeClassifier is 1.
#### RolexBoost
```python
RolexBoostClassifier(
n_estimators=100,
n_features_per_subset=3,
bootstrap_rate=0.75,
K=0.5,
criterion="gini",
splitter="best",
max_depth=1,
min_samples_split=2,
min_samples_leaf=1,
min_weight_fraction_leaf=0.0,
max_features=None,
random_state=None,
max_leaf_nodes=None,
min_impurity_decrease=0.0,
min_impurity_split=None,
class_weight=None,
presort="deprecated",
ccp_alpha=0.0
)
```
- `n_estimators`: number of base estimators
- `n_features_per_subset`: number of features in each subset
- `bootstrap_rate`: ratio of samples bootstrapped in the original dataset
- `K`: the parameter to control the "aggressiveness" and "conservativeness" in the adaptive loss function choosing process. It should be a number between 0 and 1.
All other parameters are passed to the `DecisionTreeClassifier` of scikit-learn. Please refer to [their documentation](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html#sklearn.tree.DecisionTreeClassifier) for details.
Note:
In the algorithm description, a parameter controls the number of subsets, and the number of features is derived from it.
However, the validation part of the paper says that "the number of features in each subset was set to three".
In our framework, the parameter `n_samples_per_subset` is thus formulated in this way to make the benchmark evaluation easier.
The default parameter for `max_depth` is 1, because RolexBoost integrates the idea of FlexBoost. Please refer to the last section about why FlexBoost has a default `max_depth` of 1.
## Performance Benchmarks
We have tested the three algorithms on 13 datasets mentioned in the paper.
Here is the result:
| algorithm | accuracy | benchmark | ratio |
| -------------- | -------- | --------- | ------ |
| RotationForest | 0.7898 | 0.7947 | 0.9938 |
| FlexBoost | 0.7976 | 0.8095 | 0.9853 |
| RolexBoost | 0.7775 | 0.8167 | 0.9520 |
- `accuracy` refers to the average accuracy of our implementation
- `benchmark` refers to the average accuracy reported in the paper
- `ratio` is accuracy/benchmark
For the detail of each algorithm on each dataset, please run the tests/accuracy-test.py.
The test may take ~1 hour to finish.
Some datasets reported in the paper are not involved in the benchmark testing for the following two reasons:
1. We cannot find the corresponding dataset in the UCI Machine Learning Repository
2. The 3-class problems are each divided into three 2-class problems. We are not sure about how such division is done.
| /rolexboost-0.0.1.tar.gz/rolexboost-0.0.1/README.md | 0.623262 | 0.911061 | README.md | pypi |
from __future__ import annotations
from math import e
from math import pi
from typing import Any
from typing import Callable
from pyparsing import CaselessKeyword
from pyparsing import CaselessLiteral
from pyparsing import infixNotation
from pyparsing import Literal
from pyparsing import oneOf
from pyparsing import opAssoc
from pyparsing import ParserElement
from pyparsing import ParseResults
from pyparsing import pyparsing_common
from pyparsing.exceptions import ParseException
from .operations import add
from .operations import expo
from .operations import factorial
from .operations import floor_div
from .operations import mod
from .operations import mult
from .operations import roll_dice
from .operations import sqrt
from .operations import sub
from .operations import true_div
from .types import EvaluationResults
from .types import RollOption
from .types import RollResults
ParserElement.enablePackrat()
ROLL_TYPE: RollOption
class DiceParser:
"""Parser for evaluating dice strings."""
OPERATIONS: dict[
str,
Callable[
# Arguments
[
int | float | EvaluationResults,
int | float | EvaluationResults,
],
# Output
int | float | EvaluationResults,
],
] = {
"+": add,
"-": sub,
"*": mult,
"/": true_div,
"//": floor_div,
"%": mod,
"d": roll_dice,
}
def __init__(self: DiceParser) -> None:
"""Initialize a parser to handle dice strings."""
self._parser = self._create_parser()
@staticmethod
def _create_parser() -> ParserElement:
"""Create an instance of a dice roll string parser."""
atom = (
CaselessLiteral("d%").setParseAction(
lambda: DiceParser._handle_roll(1, 100)
)
| pyparsing_common.number
| CaselessKeyword("pi").setParseAction(lambda: pi)
| CaselessKeyword("e").setParseAction(lambda: e)
)
expression = infixNotation(
atom,
[
# Unary minus
(Literal("-"), 1, opAssoc.RIGHT, DiceParser._handle_unary_minus),
# Square root
(CaselessLiteral("sqrt"), 1, opAssoc.RIGHT, DiceParser._handle_sqrt),
# Exponents
(oneOf("^ **"), 2, opAssoc.RIGHT, DiceParser._handle_expo),
# Unary minus (#2)
(Literal("-"), 1, opAssoc.RIGHT, DiceParser._handle_unary_minus),
# Factorial
(Literal("!"), 1, opAssoc.LEFT, DiceParser._handle_factorial),
# Dice notations
(
CaselessLiteral("d%"),
1,
opAssoc.LEFT,
lambda toks: DiceParser._handle_roll(toks[0][0], 100),
),
(
CaselessLiteral("d"),
2,
opAssoc.RIGHT,
lambda toks: DiceParser._handle_roll(toks[0][0], toks[0][2]),
),
# This line causes the recursion debug to go off.
# Will have to find a way to have an optional left
# operand in this case.
(
CaselessLiteral("d"),
1,
opAssoc.RIGHT,
lambda toks: DiceParser._handle_roll(1, toks[0][1]),
),
# Keep notation
(oneOf("k K"), 2, opAssoc.LEFT, DiceParser._handle_keep),
(oneOf("k K"), 1, opAssoc.LEFT, DiceParser._handle_keep),
# Multiplication and division
(
oneOf("* / % //"),
2,
opAssoc.LEFT,
DiceParser._handle_standard_operation,
),
# Addition and subtraction
(oneOf("+ -"), 2, opAssoc.LEFT, DiceParser._handle_standard_operation),
# TODO: Use this to make a pretty exception message
# where we point out and explain the issue.
],
).setFailAction(lambda s, loc, expr, err: print(err))
return expression
@staticmethod
def _handle_unary_minus(
toks: list[list[int | float | EvaluationResults]],
) -> int | float | EvaluationResults:
return -toks[0][1]
@staticmethod
def _handle_sqrt(
toks: list[list[str | int | float | EvaluationResults]],
) -> EvaluationResults:
value: int | float | EvaluationResults | str = toks[0][1]
if not isinstance(value, (int, float, EvaluationResults)):
raise TypeError("The given value must be int, float, or EvaluationResults")
return sqrt(value)
@staticmethod
def _handle_expo(
toks: list[list[int | float | EvaluationResults]],
) -> int | float | EvaluationResults:
return expo(toks[0][0], toks[0][2])
@staticmethod
def _handle_factorial(
toks: list[list[int | float | EvaluationResults]],
) -> int | float | EvaluationResults:
return factorial(toks[0][0])
@staticmethod
def _handle_roll(
sides: int | float | EvaluationResults,
num: int | float | EvaluationResults,
) -> int | float | EvaluationResults:
global ROLL_TYPE
roll_option: RollOption = ROLL_TYPE
return roll_dice(sides, num, roll_option)
@staticmethod
def _handle_keep(
toks: list[list[int | float | EvaluationResults | str]],
) -> int | float | EvaluationResults:
tokens: list[int | float | EvaluationResults | str] = toks[0]
if not isinstance(tokens[0], EvaluationResults):
raise TypeError("Left value must contain a dice roll.")
# We initialize our result with the left-most value.
# As we perform operations, this value will be continuously
# updated and used as the left-hand side.
result: EvaluationResults = tokens[0]
# If it's the case that we have an implied keep amount, we
# need to manually add it to the end here.
if len(tokens) % 2 == 0:
tokens.append(1)
# Because we get things like [[1, "+", 2, "+", 3]], we have
# to be able to handle additional operations beyond a single
# left/right pair.
for i in range(1, len(tokens), 2):
op_index = i
right_index = i + 1
operation_string: str = str(tokens[op_index])
right: EvaluationResults | float | int | str = tokens[right_index]
if isinstance(right, EvaluationResults):
result += right
result.total -= right.total
right = right.total
last_roll: RollResults = result.rolls[-1]
lower_total_by: int | float = 0
if operation_string == "k":
lower_total_by = last_roll.keep_lowest(float(right))
result.history.append(f"Keeping lowest: {right}: {last_roll.rolls}")
else:
lower_total_by = last_roll.keep_highest(float(right))
result.history.append(f"Keeping highest: {right}: {last_roll.rolls}")
result.total -= lower_total_by
return result
@staticmethod
def _handle_standard_operation(
toks: list[list[str | int | float | EvaluationResults]],
) -> str | int | float | EvaluationResults:
# We initialize our result with the left-most value.
# As we perform operations, this value will be continuously
# updated and used as the left-hand side.
left_hand_side: int | float | EvaluationResults | str = toks[0][0]
if isinstance(left_hand_side, str):
left_hand_side = float(left_hand_side)
result: int | float | EvaluationResults = left_hand_side
# Because we get things like [[1, "+", 2, "+", 3]], we have
# to be able to handle additional operations beyond a single
# left/right pair.
for pair in range(1, len(toks[0]), 2):
right_hand_side: int | float | EvaluationResults | str = toks[0][pair + 1]
if isinstance(right_hand_side, str):
right_hand_side = float(right_hand_side)
operation_string: str = str(toks[0][pair])
op: Callable[
[
int | float | EvaluationResults,
int | float | EvaluationResults,
],
int | float | EvaluationResults,
] = DiceParser.OPERATIONS[operation_string]
result = op(result, right_hand_side)
return result
def parse(
self: DiceParser, dice_string: str, roll_option: RollOption = RollOption.Normal
) -> ParseResults:
"""Parse well-formed dice roll strings."""
global ROLL_TYPE
ROLL_TYPE = roll_option
try:
result: ParseResults = self._parser.parseString(dice_string, parseAll=True)
except ParseException as err:
raise SyntaxError("Unable to parse input string: " + dice_string) from err
return result
def evaluate(
self: DiceParser,
dice_string: str,
roll_option: RollOption = RollOption.Normal,
) -> int | float | EvaluationResults:
"""Parse and evaluate the given dice string."""
parse_result: Any = self.parse(dice_string, roll_option)[0]
if not isinstance(parse_result, (int, float, EvaluationResults)):
raise TypeError(f"Invalid return type given in result: {parse_result}")
return parse_result | /roll-cli-2.0.0.tar.gz/roll-cli-2.0.0/src/roll_cli/parser/diceparser.py | 0.843283 | 0.29041 | diceparser.py | pypi |
from __future__ import annotations
from enum import Enum
from math import ceil
from math import factorial
from math import sqrt
from .rollresults import RollResults
class Operators(Enum):
"""Helper operator names for handling operations.
This makes it so that we don't have magic strings
everywhere. Instead, we can do things like:
Operators.add
"""
add = "+"
sub = "-"
mul = "*"
truediv = "/"
floordiv = "//"
mod = "%"
expo = "^"
expo2 = "**"
die = "d"
class EvaluationResults:
"""Hold the current state of all rolls and the total from other ops."""
def __init__(
self: EvaluationResults,
value: int | float | EvaluationResults = 0,
rolls: list[RollResults] | None = None,
) -> None:
"""Initialize an EvaluationResults object."""
if rolls is None:
rolls = []
total: int | float | None = None
history: list[str] = []
if isinstance(value, EvaluationResults):
rolls.extend(value.rolls)
history = value.history
total = value.total
else:
total = value
self.total: int | float = total
self.rolls: list[RollResults] = rolls
self.history: list[str] = history
def add_roll(self: EvaluationResults, roll: RollResults) -> None:
"""Add the results of a roll to the total evaluation results."""
self.total += roll.total()
self.rolls.append(roll)
self.history.append(f"Rolled: {roll.dice}: {roll.rolls}")
def sqrt(self: EvaluationResults) -> None:
"""Take the square root of the total value."""
new_total = sqrt(self.total)
self.history.append(f"Square Root: {self.total}: {new_total}")
self.total = new_total
def factorial(self: EvaluationResults) -> None:
"""Factorial the total value."""
new_total = factorial(ceil(self.total))
self.history.append(f"Factorial: {self.total}! = {new_total}")
self.total = new_total
def _process_right_hand_value(
self: EvaluationResults, x: EvaluationResults | int | float
) -> int | float:
"""Allows two ER objects to have their dice and history combined.
This is important so that the history and rolls are preserved
otherwise we lose key pieces of information that would disallow us
from being able to make modifications to rolls, i.e., keep notation,
as well as being able to give a complete verbose output when finished.
"""
right_hand_value: int | float | EvaluationResults
if isinstance(x, (int, float)):
right_hand_value = x
elif isinstance(x, EvaluationResults):
self._collect_rolls(x)
self.history.extend(x.history)
right_hand_value = x.total
else:
raise TypeError("The supplied type is not valid: " + type(x).__name__)
return right_hand_value
def _collect_rolls(self: EvaluationResults, er: EvaluationResults) -> None:
"""Add all rolls together if both objects are EvaluationResults.
The way that we do this is by extending the other object's rolls
in place for efficiency's sake and then we set our object to that
one instead of the other way around so that we hopefully are going
to have the most recently roll as our last.
"""
er.rolls.extend(self.rolls)
self.rolls = er.rolls
def __str__(self: EvaluationResults) -> str:
"""Return a string representation of the eval results.
This will output all of the history of the dice roll
along with the total sum.
e.g.
Roll: 4d6 + 2
4d6: [3, 1, 1, 2]
7 + 2: 9
9
Roll: 10d10K5 + 4
10d10: [10, 6, 2, 1, 3, 3, 4, 8, 6, 4]
K5: [4, 6, 6, 8, 10]
34 + 4: 38
38
"""
history_string: str = "\n".join([h for h in self.history]) + "\n"
total_string: str = f"{self.total}"
if isinstance(self.total, float) and self.total % 1 == 0:
total_string = f"{int(self.total)}"
return f"{history_string}{total_string}"
def __int__(self: EvaluationResults) -> int:
"""Change the evaluation result total to an integer."""
return int(self.total)
def __float__(self: EvaluationResults) -> float:
"""Change the evaluation result total to a float."""
return float(self.total)
def __len__(self: EvaluationResults) -> int:
"""Return the number of rolls that have been rolled."""
return len(self.rolls)
def __eq__(self: EvaluationResults, other: object) -> bool:
"""Return whether or not a given value is numerically equal."""
if not isinstance(other, (int, float, EvaluationResults)):
return False
if isinstance(other, (int, float)):
return other == self.total
return self.total == other.total
def __neg__(self: EvaluationResults) -> EvaluationResults:
"""Negate the total value."""
self.total = -self.total
return self
def __add__(
self: EvaluationResults, x: int | float | EvaluationResults
) -> EvaluationResults:
"""Add a given value to the evaluation result total."""
right_hand_value: int | float = self._process_right_hand_value(x)
previous_total = self.total
self.total += right_hand_value
self.history.append(
f"Adding: {previous_total} + {right_hand_value} = {self.total}"
)
return self
def __iadd__(
self: EvaluationResults, x: int | float | EvaluationResults
) -> EvaluationResults:
"""Add a given value to the evaluation result total."""
return self.__add__(x)
def __radd__(
self: EvaluationResults, x: int | float | EvaluationResults
) -> EvaluationResults:
"""Add a given value to the evaluation result total."""
return self.__add__(x)
def __sub__(
self: EvaluationResults, x: int | float | EvaluationResults
) -> EvaluationResults:
"""Subtract a given value from the evaluation result total."""
right_hand_value: int | float = self._process_right_hand_value(x)
previous_total = self.total
self.total -= right_hand_value
self.history.append(
f"Subtracting: {previous_total} - {right_hand_value} = {self.total}"
)
return self
def __isub__(
self: EvaluationResults, x: int | float | EvaluationResults
) -> EvaluationResults:
"""Subtract a given value from the evaluation result total."""
return self.__sub__(x)
def __rsub__(self: EvaluationResults, x: int | float) -> EvaluationResults:
"""Subtract a given value from the evaluation result total."""
right_hand_value: int | float = self._process_right_hand_value(x)
previous_total = self.total
self.total = right_hand_value - self.total
self.history.append(
f"Adding: {right_hand_value} - {previous_total} = {self.total}"
)
return self
def __mul__(
self: EvaluationResults, x: int | float | EvaluationResults
) -> EvaluationResults:
"""Multiply the evaluation result total by a given value."""
right_hand_value: int | float = self._process_right_hand_value(x)
previous_total = self.total
self.total *= right_hand_value
self.history.append(
f"Multiplying: {previous_total} * {right_hand_value} = {self.total}"
)
return self
def __imul__(
self: EvaluationResults, x: int | float | EvaluationResults
) -> EvaluationResults:
"""Multiply the evaluation result total by a given value."""
return self.__mul__(x)
def __rmul__(
self: EvaluationResults, x: int | float | EvaluationResults
) -> EvaluationResults:
"""Multiply the evaluation result total by a given value."""
return self.__mul__(x)
def __truediv__(
self: EvaluationResults, x: int | float | EvaluationResults
) -> EvaluationResults:
"""Divide the evaluation result total by a given number."""
right_hand_value: int | float = self._process_right_hand_value(x)
previous_total = self.total
self.total /= right_hand_value
self.history.append(
f"Dividing: {previous_total} / {right_hand_value} = {self.total}"
)
return self
def __itruediv__(
self: EvaluationResults, x: int | float | EvaluationResults
) -> EvaluationResults:
"""Divide the evaluation result total by a given number."""
return self.__truediv__(x)
def __rtruediv__(
self: EvaluationResults, x: int | float | EvaluationResults
) -> EvaluationResults:
"""Divide the evaluation result total by a given number."""
right_hand_value: int | float = self._process_right_hand_value(x)
previous_total = self.total
self.total = right_hand_value / self.total
self.history.append(
f"Dividing: {right_hand_value} / {previous_total} = {self.total}"
)
return self
def __floordiv__(
self: EvaluationResults, x: int | float | EvaluationResults
) -> EvaluationResults:
"""Divide the evaluation result total by a given number and floor."""
right_hand_value: int | float = self._process_right_hand_value(x)
previous_total = self.total
self.total //= right_hand_value
self.history.append(
f"Floor dividing: {previous_total} // {right_hand_value} = {self.total}"
)
return self
def __ifloordiv__(
self: EvaluationResults, x: int | float | EvaluationResults
) -> EvaluationResults:
"""Divide the evaluation result total by a given number and floor."""
return self.__floordiv__(x)
def __rfloordiv__(
self: EvaluationResults, x: int | float | EvaluationResults
) -> EvaluationResults:
"""Divide the evaluation result total by a given number and floor."""
right_hand_value: int | float = self._process_right_hand_value(x)
previous_total = self.total
self.total = right_hand_value // self.total
self.history.append(
f"Floor dividing: {right_hand_value} // {previous_total} = {self.total}"
)
return self
def __mod__(
self: EvaluationResults, x: int | float | EvaluationResults
) -> EvaluationResults:
"""Perform modulus divison on the evaluation total with given value."""
right_hand_value: int | float = self._process_right_hand_value(x)
previous_total = self.total
self.total %= right_hand_value
self.history.append(
f"Modulus dividing: {previous_total} % {right_hand_value} = {self.total}"
)
return self
def __imod__(
self: EvaluationResults, x: int | float | EvaluationResults
) -> EvaluationResults:
"""Perform modulus divison on the evaluation total with given value."""
return self.__mod__(x)
def __rmod__(
self: EvaluationResults, x: int | float | EvaluationResults
) -> EvaluationResults:
"""Perform modulus divison on the evaluation total with given value."""
right_hand_value: int | float = self._process_right_hand_value(x)
previous_total = self.total
self.total = right_hand_value % self.total
self.history.append(
f"Modulus dividing: {right_hand_value} % {previous_total} = {self.total}"
)
return self
def __pow__(
self: EvaluationResults, x: int | float | EvaluationResults
) -> EvaluationResults:
"""Exponentiate the evaluation results by the given value."""
right_hand_value: int | float = self._process_right_hand_value(x)
previous_total = self.total
self.total **= right_hand_value
self.history.append(
f"Exponentiating: {previous_total} ** {right_hand_value} = {self.total}"
)
return self
def __ipow__(
self: EvaluationResults, x: int | float | EvaluationResults
) -> EvaluationResults:
"""Exponentiate the evaluation results by the given value."""
return self.__pow__(x)
def __rpow__(
self: EvaluationResults, x: int | float | EvaluationResults
) -> EvaluationResults:
"""Exponentiate the evaluation results by the given value."""
right_hand_value: int | float = self._process_right_hand_value(x)
previous_total = self.total
self.total = right_hand_value**self.total
self.history.append(
f"Exponentiating: {right_hand_value} ** {previous_total} = {self.total}"
)
return self | /roll-cli-2.0.0.tar.gz/roll-cli-2.0.0/src/roll_cli/parser/types/evaluationresults.py | 0.923368 | 0.409752 | evaluationresults.py | pypi |
from bisect import bisect_left, bisect_right
from typing import Iterator, Tuple
class InitiativeQueue:
"""Initiative tracking queue.
Its external interface resembles that of a list, supporting
iteration and item accessing.
This is effectively a priority queue, but is implemented with O(n)
insertion, deletion and lookup. This is not supposed to be an efficient
data structure, just one that is easy to use in the RFI REPL.
See add, remove, move_up and move_down as the main functions.
"""
def __init__(self):
"""See help(InitiativeQueue) for more information."""
# This split is necessary due to how bisect works.
# There is no key function, and creating a new list every time would be
# unnecessary.
self.names = []
self.initiatives = []
def add(self, name: str, initiative: int) -> None:
"""Add a name to the initiative queue.
Arguments:
name (str): name of the new entry.
initiative (int): entry initiative, with higher values coming first.
Raises:
ValueError: if called with a name that is already in the queue.
"""
if name in self.names:
raise ValueError("Duplicate name in initiative queue.")
insertion_idx = bisect_left(self.initiatives, initiative)
self._add(name, initiative, insertion_idx)
def remove(self, name: str) -> None:
"""Remove an entry from the queue.
Arguments:
name (str): name of entry to be removed.
Raises:
ValueError: if the name is not in the queue.
"""
removal_idx = self._get_index(name)
self._remove(removal_idx)
def update(self, name: str, new_initiative: int) -> None:
"""Update the initiative of an entry.
Equivalent to
self.remove(name)
self.add(name, new_initiative)
Arguments:
name (str): name of entry to be updated.
new_initiative (int): new value for entry initiative.
Raises:
ValueError: if the name is not in the queue.
"""
self.remove(name)
self.add(name, new_initiative)
def update_name(self, name: str, new_name: str) -> None:
"""Change the name of an entry.
Argumnents:
name (str): current name of entry to be updated.
new_name (str): desired name of entry.
Raises:
ValueError: if the current name is not in the queue.
ValueError: if the new name is already in the queue.
"""
idx = self._get_index(name)
if new_name in self.names:
raise ValueError("Desired name already exists in queue.")
self.names[idx] = new_name
def move_up(self, name: str) -> None:
"""Move a name up (closer to index 0) in case of a tie.
This is only to be used when two or more entries have the same
initiative, but their relative order has to be changed.
Arguments:
name (str): name of entry to be moved.
Raises:
ValueError: if the name is not in the queue.
ValueError: if moving the entry up would violate initiative order.
"""
original_idx = self._get_index(name)
initiative = self.initiatives[original_idx]
max_valid_idx = bisect_right(self.initiatives, initiative) - 1
if max_valid_idx > original_idx:
self._move(original_idx, original_idx + 1)
else:
raise ValueError(f"Can't move {name} up without violating initiative order.")
def move_down(self, name: str) -> None:
"""Move a name down (further from index 0) in case of a tie.
This is only to be used when two or more entries have the same
initiative, but their relative order has to be changed.
Arguments:
name (str): name of entry to be moved.
Raises:
ValueError: if the name is not in the queue.
ValueError: if moving the entry down would violate initiative order.
"""
original_idx = self._get_index(name)
initiative = self.initiatives[original_idx]
min_valid_idx = bisect_left(self.initiatives, initiative)
if min_valid_idx < original_idx:
self._move(original_idx, original_idx - 1)
else:
raise ValueError(f"Can't move {name} down without violating initiative order.")
def position_of(self, name: str) -> int:
"""Find position of name in queue.
Arguments:
name (str): name of entry to be found.
Raises:
ValueError: if the name is not in the list.
"""
idx = self._get_index(name)
return len(self) - idx - 1
def clear(self) -> None:
"""Clear queue entries."""
self.names.clear()
self.initiatives.clear()
def _get_index(self, name: str) -> int:
try:
return self.names.index(name)
except ValueError:
raise ValueError(f"Name not in initiative queue: {name}")
def _add(self, name: str, initiative: int, idx: int) -> None:
self.names.insert(idx, name)
self.initiatives.insert(idx, initiative)
def _remove(self, idx: int) -> (str, int):
name, initiative = self.names[idx], self.initiatives[idx]
del self.names[idx]
del self.initiatives[idx]
return name, initiative
def _move(self, original_idx: int, final_idx: int) -> None:
name, initiative = self._remove(original_idx)
self._add(name, initiative, final_idx)
def __iter__(self) -> Iterator[Tuple[str, int]]:
"""Iterate over self[idx] without looping."""
max_idx = len(self)
for idx in range(max_idx):
yield self[idx]
def __len__(self) -> int:
"""Retrieve size of queue."""
return len(self.names)
def __contains__(self, name: str) -> bool:
"""Check if there is an entry with the given name."""
return name in self.names
def __bool__(self) -> bool:
"""Check if the queue has elements in it."""
return bool(self.names)
def __getitem__(self, n: int) -> (str, int): # pylint: disable=invalid-name
"""Retrieve the entry with the nth greatest initiative.
Arguments:
n (int): position of the list to retrieve.
Returns:
name (str): entry name.
initiative (int): entry initiative.
"""
# Invert index so self[0] returns the item with greatest initiative.
idx = -n - 1
return self.names[idx], self.initiatives[idx] | /roll-for-initiative-0.16.1.tar.gz/roll-for-initiative-0.16.1/rfi/initiative.py | 0.915351 | 0.526221 | initiative.py | pypi |
import sympy as sp
import sympy.physics.mechanics as me
from inspect import signature
import pandas as pd
from sympy.core.numbers import Float
import numpy as np
def substitute_dynamic_symbols(expression):
dynamic_symbols = me.find_dynamicsymbols(expression)
derivatives = find_derivatives(dynamic_symbols)
derivative_list = []
# First susbtitute the Derivatives starting with the highest order (since higher order could be broken up in lower order)
subs = []
for order in reversed(sorted(derivatives.keys())):
for derivative in list(derivatives[order]):
name = find_name(derivative)
symbol = sp.Symbol(name)
subs.append((derivative, symbol))
derivative_list.append(derivative)
new_expression_derivatives = expression.subs(subs)
none_derivatives = dynamic_symbols - set(derivative_list)
# ...Then substitute the dynamic symbols
subs = []
for dynamic_symbol in list(none_derivatives):
name = find_name(dynamic_symbol=dynamic_symbol)
symbol = sp.Symbol(name)
subs.append((dynamic_symbol, symbol))
new_expression = new_expression_derivatives.subs(subs)
return new_expression
def find_name(dynamic_symbol):
if isinstance(dynamic_symbol, sp.Derivative):
name = find_derivative_name(dynamic_symbol)
else:
name = dynamic_symbol.name
return name
def find_derivatives(dynamic_symbols:set)->dict:
derivatives = {}
for dynamic_symbol in list(dynamic_symbols):
if isinstance(dynamic_symbol, sp.Derivative):
order = dynamic_symbol.args[1][1]
if not order in derivatives:
derivatives[order] = []
derivatives[order].append(dynamic_symbol)
return derivatives
def find_derivative_name(derivative):
if not isinstance(derivative, sp.Derivative):
raise ValueError('%s must be an instance of sympy.Derivative' % derivative)
order = derivative.args[1][1]
symbol = derivative.expr
name = '%s%id' % (symbol.name, order)
return name
def lambdify(expression):
new_expression = substitute_dynamic_symbols(expression)
args = new_expression.free_symbols
# Rearranging to get the parameters in alphabetical order:
symbol_dict = {symbol.name: symbol for symbol in args}
symbols = []
for symbol_name in sorted(symbol_dict.keys()):
symbols.append(symbol_dict[symbol_name])
lambda_function = sp.lambdify(symbols, new_expression, modules='numpy')
return lambda_function
def run(function,inputs, **kwargs):
inputs=inputs.copy()
if isinstance(inputs,dict):
inputs = pd.Series(inputs)
constants = pd.Series(dict(**kwargs))
if isinstance(inputs, pd.Series):
inputs_columns = inputs.index
elif isinstance(inputs, pd.DataFrame):
inputs_columns = inputs.columns
else:
raise ValueError('inputs should be wither pd.Series or pd.DataFrame')
constant_columns = constants.index
constant_columns = list(set(constant_columns) - set(inputs_columns))
for constant_column in constant_columns:
inputs[constant_column] = constants[constant_column]
s = signature(function)
input_names = set(s.parameters.keys())
missing = list(input_names - set(inputs_columns) - set(constant_columns))
if len(missing) > 0:
raise ValueError('Sympy lambda function misses:%s' % (missing))
return function(**inputs[input_names])
def significant(number, precision=3):
"""
Get the number with significant figures
Parameters
----------
number
Sympy Float
precision
number of significant figures
Returns
-------
Sympy Float with significant figures.
"""
number_string = np.format_float_positional(float(number), precision=precision,
unique=False, fractional=False, trim='k')
return Float(number_string)
def significant_numbers(expression, precision=3):
"""
Change to a wanted number of significant figures in the expression
Parameters
----------
expression
Sympy expression
precision
number of significant figures
Returns
-------
Sympy expression with significant figures.
"""
new_expression = expression.copy()
return _significant_numbers(new_expression, precision=precision)
def _significant_numbers(new_expression, precision=3):
for part in new_expression.args:
if isinstance(part, Float):
new_expression = new_expression.subs(part, significant(part, precision=precision))
elif hasattr(part, 'args'):
new_part = _significant_numbers(part, precision=precision)
new_expression = new_expression.subs(part, new_part)
return new_expression | /rolldecay_estimators-0.6.0-py3-none-any.whl/rolldecayestimators/substitute_dynamic_symbols.py | 0.656658 | 0.382776 | substitute_dynamic_symbols.py | pypi |
import sympy as sp
import sympy.physics.mechanics as me
import rolldecayestimators.special_symbol as ss
import sympy as sp
import sympy.physics.mechanics as me
Fn = ss.Symbol(name='F_n',description='Froude number',unit='-')
t = ss.Symbol(name='t',description='time',unit='s')
I = ss.Symbol(name='I',description='total roll inertia',unit='kg*m**2')
B = ss.Symbol(name='B',description='total roll damping',unit='kg*m*/s')
rho = ss.Symbol(name='rho',description='water density',unit='kg/m**3')
g = ss.Symbol(name='g',description='acceleration of gravity',unit='m/s**2')
Disp = ss.Symbol(name='Disp',description='displacement',unit='m**3')
M_x = ss.Symbol(name='M_x',description='External roll moment',unit='Nm')
m = ss.Symbol(name='m',description='mass of ship',unit='kg')
GM = ss.Symbol(name='GM', description='metacentric height', unit='m')
dGM = ss.Symbol(name='dGM', description='metacentric height correction', unit='m/rad')
omega = ss.Symbol(name='omega', description='Angular velocity of external moment', unit='rad/s')
L_pp = ss.Symbol(name='L_pp',description='ship perpendicular length',unit='m')
b = ss.Symbol(name='b', description='ship beam', unit='m')
x_s = ss.Symbol(name='x_s',description='section x-coordinate',unit='m')
AP = ss.Symbol(name='AP',description='ship perpendiculars',unit='-')
FP = ss.Symbol(name='FP',description='ship perpendiculars',unit='-')
r2 = ss.Symbol(name='r^2',description='coefficient of determination',unit='-')
C_p = ss.Symbol(name='C_p',description='Prismatic coefficient',unit='-')
C_b = ss.Symbol(name='C_b',description='Block coefficient',unit='-')
I_RUD = ss.Symbol(name='I_RUD',description='Number of rudders',unit='-')
BK_L = ss.Symbol(name='BK_L',description='Bilge keel length',unit='m')
BK_B = ss.Symbol(name='BK_B',description='Bilge keel height',unit='m')
A_0 = ss.Symbol(name='A_0',description='Mid ship area coefficient',unit='-')
ship_type_id = ss.Symbol(name='ship_type',description='Ship type',unit='-')
I_xx = ss.Symbol(name='I_xx',description='Roll intertia',unit='kg*m**2')
K_xx = ss.Symbol(name='K_xx',description='Nondimensional roll radius of gyration',unit='-')
R_h = ss.Symbol(name='R_h',description='Rudder height',unit='m')
A_R = ss.Symbol(name='A_R',description='Rudder area',unit='m**2')
TWIN = ss.Symbol(name='twin',description='Twin skrew',unit='True/False')
kg = ss.Symbol(name='kg',description='Keel to g',unit='m')
C_W = ss.Symbol(name='C_W',description='Water area coefficient',unit='-')
T_F = ss.Symbol(name='T_F',description='Draught forward',unit='m')
T_A = ss.Symbol(name='T_A',description='Draught aft',unit='m')
T = ss.Symbol(name='T',description='Mean draught',unit='m')
S = ss.Symbol(name='S',description='Wetted surface',unit='m**2')
V = ss.Symbol(name='V',description='Ship speed',unit='m/s')
OG = ss.Symbol(name='OG',description='Distance into water from still water to centre of gravity',unit='m')
# Sections:
B_E0s = ss.Symbol(name="B'_E0", description='Zero speed sectional eddy damping', unit='Nm*s/(m)')
T_s = ss.Symbol(name='T_s',description='Section draught',unit='m')
B_s = ss.Symbol(name='B_s',description='Section beam',unit='m')
sigma = ss.Symbol(name='sigma',description='Section area coefficient',unit='-')
phi = me.dynamicsymbols('phi') # Roll angle
#phi = ss.Symbol(name='phi', description='Roll angle', unit='rad') # Roll angle
phi_dot = phi.diff()
phi_dot_dot = phi_dot.diff()
phi_a = ss.Symbol(name='phi_a', description='roll amplitude', unit='rad')
zeta = ss.Symbol(name='zeta', description='Damping coefficient', unit = '-') # Linear roll damping coefficeint
omega0 = ss.Symbol(name='omega0',description='Natural angular velocity',unit='rad/s') # Natural roll frequency
d = sp.Symbol('d') # Nonlinear roll damping coefficient
A_44 = ss.Symbol(name='A_44', description='Total mass moment of inertia', unit='kg*m**2')
B_1 = ss.Symbol(name='B_1',description='Linear damping coefficient',unit='Nm*s') # Natural roll frequency
B_2 = ss.Symbol(name='B_2',description='Quadratic damping coefficient',unit='Nm*s**2') # Natural roll frequency
B_3 = ss.Symbol(name='B_3',description='Cubic damping coefficient',unit='Nm*s**3') # Natural roll frequency
C = ss.Symbol(name='C', description='General stiffness coefficient', unit=r'Nm/rad') # Introducing a helper coefficient C
C_1 = ss.Symbol(name='C_1', description='Linear stiffness coefficient', unit=r'Nm')
C_3 = ss.Symbol(name='C_3',description='Stiffness coefficient', unit=r'Nm')
C_5 = ss.Symbol(name='C_5',description='Stiffness coefficient', unit=r'Nm')
B_e = ss.Symbol(name='B_e', description='Equivalen linearized damping', unit='Nm*s')
B_44_hat = ss.Symbol(name='B_44_hat', description='Nondimensional damping', unit='-')
B_e_hat = ss.Symbol(name='B_e_hat', description='Nondimensional damping', unit='-')
B_e_hat_0 = ss.Symbol(name='B_e_hat_0', description='Nondimensional damping', unit='-')
B_e_factor = ss.Symbol(name='B_e_factor', description='Nondimensional damping', unit='-')
B_W_e_hat = ss.Symbol(name='B_W_e_hat', description='Nondimensional damping', unit='-')
B_F_e_hat = ss.Symbol(name='B_F_e_hat', description='Nondimensional damping', unit='-')
B_BK_e_hat = ss.Symbol(name='B_BK_e_hat', description='Nondimensional damping', unit='-')
B_E_e_hat = ss.Symbol(name='B_E_e_hat', description='Nondimensional damping', unit='-')
B_L_e_hat = ss.Symbol(name='B_L_e_hat', description='Nondimensional damping', unit='-')
B_1_hat = ss.Symbol(name='B_1_hat', description='Nondimensional damping', unit='-')
B_2_hat = ss.Symbol(name='B_2_hat', description='Nondimensional damping', unit='-')
omega_hat = ss.Symbol(name='omega_hat', description='Nondimensional roll frequency', unit='-')
omega0_hat = ss.Symbol(name='omega0_hat', description='Nondimensional roll frequency', unit='-')
B_1_hat0 = ss.Symbol(name='B_1_hat0', description='Nondimensional damping at zero speed', unit='-')
B_44_ = ss.Symbol(name='B_44', description='Total roll damping at a certain roll amplitude', unit='Nm*s')
B_F = ss.Symbol(name='B_F', description='Friction roll damping', unit='Nm*s')
B_W = ss.Symbol(name='B_W', description='Wave roll damping', unit='Nm*s')
B_E = ss.Symbol(name='B_E', description='Eddy roll damping', unit='Nm*s')
B_E0 = ss.Symbol(name='B_E0', description='Zero speed eddy damping ', unit='Nm*s')
B_BK = ss.Symbol(name='B_{BK}', description='Bilge keel roll damping', unit='Nm*s')
B_L = ss.Symbol(name='B_L', description='Hull lift roll damping', unit='Nm*s')
B_44_HAT = ss.Symbol(name='B_44_HAT', description='Total roll damping at a certain roll amplitude', unit='-')
B_F_HAT = B_F_hat = ss.Symbol(name='B_F_HAT', description='Friction roll damping', unit='-')
B_W_HAT = B_W_hat = ss.Symbol(name='B_W_HAT', description='Wave roll damping', unit='-')
B_E_HAT = B_E_hat = ss.Symbol(name='B_E_HAT', description='Eddy roll damping', unit='-')
B_E0_hat = ss.Symbol(name='B_E0_HAT', description='Eddy roll damping at zero speed', unit='-')
B_BK_HAT = B_BK_hat = ss.Symbol(name='B_BK_HAT', description='Bilge keel roll damping', unit='-')
B_L_HAT = B_L_hat = ss.Symbol(name='B_L_HAT', description='Hull lift roll damping', unit='-')
## Functions:
GZ = sp.Function('GZ')(phi)
B_44 = sp.Function('B_{44}')(phi_dot)
C_44 = sp.Function('C_{44}')(phi)
M_44 = sp.Function('M_{44}')(omega*t)
## Analytical
y = me.dynamicsymbols('y')
y0 = me.dynamicsymbols('y0')
y0_dot = y0.diff()
y0_dotdot = y0_dot.diff()
D = sp.symbols('D')
phi_0 = me.dynamicsymbols('phi_0')
phi_0_dot = phi_0.diff()
phi_0_dotdot = phi_0_dot.diff()
ikeda_simplified = sp.Function('f')(L_pp, b, C_b, A_0,
OG, phi_a, BK_L, BK_B, omega,
T, V)
"""
Ikeda, Y.,
1978. On eddy making component of roll damping force on naked hull. University of Osaka Prefacture,
Department of Naval Architecture, Japan, Report No. 00403,
Published in: Journal of Society of Naval Architects of Japan, Volume 142.
"""
C_P = ss.Symbol(name='C_P', description='Pressure difference coefficient', unit='-')
C_r = ss.Symbol(name='C_r', description='Eddy damping coefficient', unit='-')
r_max = ss.Symbol(name='r_max', description='Maximum distance from roll axis to hull', unit='m')
P_m = ss.Symbol(name='P_m', description='Pressure difference', unit='N/m**2')
R_b = ss.Symbol(name='R_b', description='Bilge radius', unit='m')
f_1 = ss.Symbol(name='f_1', description='Difference of flow factor', unit='-')
f_2 = ss.Symbol(name='f_2', description='Modification factor', unit='-')
H_0 = ss.Symbol(name='H_0', description='Half b-draft ratio', unit='-')
B_E_star_hat = ss.Symbol(name='B_E_star_hat', description='Only nonlinear nondimensional eddy damping', unit='-')
B_F_star_hat = ss.Symbol(name='B_F_star_hat', description='Only nonlinear nondimensional friction damping', unit='-')
B_W_star_hat = ss.Symbol(name='B_W_star_hat', description='Only nonlinear nondimensional wave damping', unit='-')
B_star_hat = ss.Symbol(name='B_star_hat', description='Only nonlinear nondimensional damping', unit='-')
"""[summary]
Ikeda, Y., Himeno, Y., Tanaka, N., 1978.
Components of Roll Damping of Ship at Forward Speed.
J. SNAJ, Nihon zousen gakkai ronbunshu 1978, 113–125. https://doi.org/10.2534/jjasnaoe1968.1978.113
"""
K = ss.Symbol(name='K', description='Reduced frequency', unit='-')
alpha = ss.Symbol(name='alpha', description='Eddy damping speed dependancy factor', unit='-')
## Lewis
a_1 = ss.Symbol(name='a_1', description='Lewis section coefficient', unit='-')
a_3 = ss.Symbol(name='a_3', description='Lewis section coefficient', unit='-')
D_1= ss.Symbol(name='D_1', description='Lewis section coefficient', unit='-') | /rolldecay_estimators-0.6.0-py3-none-any.whl/rolldecayestimators/symbols.py | 0.658966 | 0.454291 | symbols.py | pypi |
import numpy as np
import pandas as pd
from rolldecayestimators import lambdas
def sample_increase(X, increase=5):
N = len(X) * increase
t_interpolated = np.linspace(X.index[0], X.index[-1], N)
X_interpolated = pd.DataFrame(index=t_interpolated)
for key, values in X.items():
X_interpolated[key] = np.interp(t_interpolated, values.index, values)
return X_interpolated
def get_peaks(X:pd.DataFrame, key='phi1d')->pd.DataFrame:
"""
Find the peaks in the signal by finding zero roll angle velocity
Parameters
----------
X
DataFrame with roll signal as "phi"
key = 'phi1d'
Returns
-------
Dataframe with rows from X where phi1d is close to 0.
"""
phi1d = np.array(X[key])
index = np.arange(0, len(X.index))
index_later = np.roll(index, shift=-1)
index_later[-1] = index[-1]
mask = (
((phi1d[index] > 0) &
(phi1d[index_later] < 0)) |
((phi1d[index] < 0) &
(phi1d[index_later] > 0))
)
index_first = index[mask]
index_second = index[mask] + 1
# y = m + k*x
# k = (y2-y1)/(x2-x1)
# m = y1 - k*x1
# y = 0 --> x = -m/k
X_1 = X.iloc[index_first].copy()
X_2 = X.iloc[index_second].copy()
rows, cols = X_1.shape
x1 = np.array(X_1.index)
x2 = np.array(X_2.index)
y1 = np.array(X_1['phi1d'])
y2 = np.array(X_2['phi1d'])
k = (y2 - y1) / (x2 - x1)
m = y1 - k * x1
x = -m / k
X_1 = np.array(X_1)
X_2 = np.array(X_2)
factor = (x - x1) / (x2 - x1)
factor = np.tile(factor, [cols, 1]).T
X_zero = X_1 + (X_2 - X_1) * factor
X_zerocrossings = pd.DataFrame(data=X_zero, columns=X.columns, index=x)
return X_zerocrossings
def calculate_amplitudes(X_zerocrossings):
X_amplitudes = pd.DataFrame()
for i in range(len(X_zerocrossings) - 1):
s1 = X_zerocrossings.iloc[i]
s2 = X_zerocrossings.iloc[i + 1]
amplitude = (s2 - s1).abs()
amplitude.name = (s1.name + s2.name)/2 # mean time
X_amplitudes = X_amplitudes.append(amplitude)
X_amplitudes['phi']/=2
X_amplitudes['phi_a'] = X_amplitudes['phi']
return X_amplitudes
def calculate_amplitudes_and_damping(X:pd.DataFrame):
X_interpolated = sample_increase(X=X)
X_zerocrossings = get_peaks(X=X_interpolated)
X_amplitudes = calculate_amplitudes(X_zerocrossings=X_zerocrossings)
X_amplitudes = calculate_damping(X_amplitudes=X_amplitudes)
T0 = 2*X_amplitudes.index
X_amplitudes['omega0'] = 2 * np.pi/np.gradient(T0)
#X_amplitudes['time'] = np.cumsum(X_amplitudes.index)
return X_amplitudes
def calculate_damping(X_amplitudes):
df_decrements = pd.DataFrame()
for i in range(len(X_amplitudes) - 2):
s1 = X_amplitudes.iloc[i]
s2 = X_amplitudes.iloc[i + 2]
decrement = s1 / s2
decrement.name = s1.name
df_decrements = df_decrements.append(decrement)
df_decrements['zeta_n'] = 1 / (2 * np.pi) * np.log(df_decrements['phi'])
X_amplitudes_new = X_amplitudes.copy()
X_amplitudes_new = X_amplitudes_new.iloc[0:-1].copy()
X_amplitudes_new['zeta_n'] = df_decrements['zeta_n'].copy()
X_amplitudes_new['B_n'] = 2*X_amplitudes_new['zeta_n'] # [Nm*s]
return X_amplitudes_new
def fft_omega0(frequencies, dft):
index = np.argmax(dft)
natural_frequency = frequencies[index]
omega0 = 2 * np.pi * natural_frequency
return omega0
def fft(series):
"""
FFT of a series
Parameters
----------
series
Returns
-------
"""
signal = series.values
time = series.index
dt = np.mean(np.diff(time))
#n = 11*len(time)
n = 50000
frequencies = np.fft.rfftfreq(n=n, d=dt) # [Hz]
dft = np.abs(np.fft.rfft(signal, n=n))
return frequencies, dft
def linearized_matrix(df_rolldecay, df_ikeda, phi_as = np.deg2rad(np.linspace(1,10,10)), g=9.81, rho=1000,
do_hatify=True, suffixes=('','_ikeda')):
"""
Calculate B_e equivalent linearized damping for a range of roll amplitudes for both model tests and simplified ikeda.
Parameters
----------
df_rolldecay
df_ikeda
phi_as
Returns
-------
"""
df = pd.DataFrame()
for phi_a in phi_as:
df_ = linearize(phi_a=phi_a, df_rolldecay=df_rolldecay, df_ikeda=df_ikeda, g=g, rho=rho, do_hatify=do_hatify,
suffixes=suffixes)
df_['phi_a']=phi_a
df =df.append(df_, ignore_index=True)
return df
def linearize_si(phi_a, df_ikeda, components = ['B_44', 'B_F', 'B_W', 'B_E', 'B_BK', 'B_L'], do_hatify=True):
"""
Calculate the equivalent linearized damping B_e
Parameters
----------
phi_a
df_ikeda
g
rho
Returns
-------
"""
df_ikeda = df_ikeda.copy()
for component in components:
new_key = '%s_e' % component
B1_key = '%s_1' % component
B2_key = '%s_2' % component
df_ikeda[new_key] = lambdas.B_e_lambda(B_1=df_ikeda[B1_key],
B_2=df_ikeda[B2_key],
omega0=df_ikeda['omega0'],
phi_a=phi_a)
if do_hatify:
df_ikeda['B_e'] = df_ikeda['B_44_e']
else:
df_ikeda['B_e_hat'] = df_ikeda['B_44_hat_e']
return df_ikeda
def hatify(df_ikeda, g=9.81, rho=1000, components = ['B','B_44', 'B_F', 'B_W', 'B_E', 'B_BK', 'B_L']):
df_ikeda=df_ikeda.copy()
new_keys = ['%s_e' % key for key in components]
new_hat_keys = ['%s_e_hat' % key for key in components]
Disp = np.tile(df_ikeda['Disp'],[len(components),1]).T
beam = np.tile(df_ikeda['b'],[len(components),1]).T
df_ikeda[new_hat_keys] = lambdas.B_e_hat_lambda(B_e=df_ikeda[new_keys],
Disp=Disp,
beam=beam,
g=g, rho=rho)
df_ikeda['B_e_hat'] = df_ikeda['B_44_e_hat']
return df_ikeda
def linearize_model_test(phi_a, df_rolldecay, g=9.81, rho=1000):
"""
Calculate the equivalent linearized damping B_e
Parameters
----------
phi_a
df_rolldecay
g
rho
Returns
-------
"""
df_rolldecay = df_rolldecay.copy()
df_rolldecay['B_e'] = lambdas.B_e_lambda(B_1=df_rolldecay['B_1'],
B_2=df_rolldecay['B_2'],
omega0=df_rolldecay['omega0'],
phi_a=phi_a)
df_rolldecay['B_e_hat'] = lambdas.B_e_hat_lambda(B_e=df_rolldecay['B_e'],
Disp=df_rolldecay['Disp'],
beam=df_rolldecay['b'],
g=g, rho=rho)
return df_rolldecay
def linearize(phi_a:float, df_rolldecay:pd.DataFrame, df_ikeda:pd.DataFrame, g=9.81, rho=1000,
components = ['B_44', 'B_F', 'B_W', 'B_E', 'B_BK', 'B_L'], do_hatify=True, suffixes=('','_ikeda')):
if not do_hatify:
components = ['%s_hat' % key for key in components]
df_rolldecay = linearize_model_test(phi_a=phi_a, df_rolldecay=df_rolldecay, g=g, rho=rho)
df_ikeda = linearize_si(phi_a=phi_a, df_ikeda=df_ikeda, components=components, do_hatify=do_hatify)
if do_hatify:
df_ikeda = hatify(df_ikeda=df_ikeda,g=g, rho=rho, components=components)
df_compare = pd.merge(left=df_rolldecay, right=df_ikeda, how='inner', left_index=True, right_index=True,
suffixes=suffixes)
return df_compare | /rolldecay_estimators-0.6.0-py3-none-any.whl/rolldecayestimators/measure.py | 0.763836 | 0.630856 | measure.py | pypi |
import numpy as np
import pandas as pd
from scipy.integrate import odeint
from rolldecayestimators import DirectEstimator
from rolldecayestimators.symbols import *
from rolldecayestimators.substitute_dynamic_symbols import lambdify
dGM = sp.symbols('dGM')
lhs = phi_dot_dot + 2*zeta*omega0*phi_dot + omega0**2*phi+(dGM*phi*sp.Abs(phi)) + d*sp.Abs(phi_dot)*phi_dot
roll_diff_equation = sp.Eq(lhs=lhs, rhs=0)
acceleration = sp.Eq(lhs=phi, rhs=sp.solve(roll_diff_equation, phi.diff().diff())[0])
calculate_acceleration = lambdify(acceleration.rhs)
# Defining the diff equation for this estimator:
class DirectEstimatorImproved(DirectEstimator):
""" A template estimator to be used as a reference implementation.
For more information regarding how to build your own estimator, read more
in the :ref:`User Guide <user_guide>`.
Parameters
----------
demo_param : str, default='demo_param'
A parameter used for demonstation of how to pass and store paramters.
"""
@staticmethod
def estimator(df, omega0, zeta, dGM, d):
phi = df['phi']
phi1d = df['phi1d']
phi2d = calculate_acceleration(omega0=omega0, phi=phi, phi1d=phi1d, zeta=zeta, dGM=dGM, d=d)
return phi2d
@staticmethod
def roll_decay_time_step(states, t, omega0, zeta, dGM, d):
# states:
# [phi,phi1d]
phi_old = states[0]
p_old = states[1]
phi1d = p_old
phi2d = calculate_acceleration(omega0=omega0, phi=phi_old, phi1d=p_old, zeta=zeta, dGM=dGM, d=d)
d_states_dt = np.array([phi1d, phi2d])
return d_states_dt
def simulate(self, t: np.ndarray, phi0: float, phi1d0: float, omega0: float, zeta: float, dGM: float,
d: float) -> pd.DataFrame:
"""
Simulate a roll decay test using the quadratic method.
:param t: time vector to be simulated [s]
:param phi0: initial roll angle [rad]
:param phi1d0: initial roll speed [rad/s]
:param omega0: roll natural frequency[rad/s]
:param d: quadratic roll damping [-]
:param zeta:linear roll damping [-]
:return: pandas data frame with time series of 'phi' and 'phi1d'
"""
states0 = [phi0, phi1d0]
args = (
omega0,
zeta,
dGM,
d,
)
states = odeint(func=self.roll_decay_time_step, y0=states0, t=t, args=args)
df = pd.DataFrame(index=t)
df['phi'] = states[:, 0]
df['phi1d'] = states[:, 1]
return df | /rolldecay_estimators-0.6.0-py3-none-any.whl/rolldecayestimators/direct_estimator_improved.py | 0.922126 | 0.550547 | direct_estimator_improved.py | pypi |
import sympy as sp
from rolldecayestimators.symbols import *
from sympy import pi,sqrt
from rolldecayestimators import equations
from rolldecayestimators import symbols
eq_B_E_star_hat = sp.Eq(B_E0_hat,
8/(3*pi)*B_E_star_hat # (6) (part 1)
)
eq_B_E0_hat = sp.Eq(B_E0_hat,
4 * L_pp * T_s ** 4 * omega_hat * phi_a / (3 * pi * Disp * b ** 2) * C_r # (6) (part 2)
)
eq_volume = sp.Eq(Disp,
T_s*B_s*sigma*L_pp,
)
solution = sp.solve([eq_B_E0_hat, eq_volume, eq_B_E_star_hat],
C_r,Disp,B_E_star_hat,
dict=True
)
eq_C_r_2 = sp.Eq(C_r, solution[0][C_r])
eqs = [eq_C_r_2,
equations.omega_hat_equation]
eq_C_r_omega =sp.Eq(symbols.C_r,
sp.solve(eqs, symbols.C_r, symbols.omega_hat, dict=True)[0][symbols.C_r])
eq_C_r = sp.Eq(C_r,
2/(rho*T_s**2)*((1 - f_1*R_b/T_s)*(1 - OG/T_s - f_1*R_b/T_s) + f_2*(H_0 - f_1*R_b/T_s)**2)*P_m/3 # (10)
)
eq_P_m = sp.Eq(P_m,
3*1/2*rho*r_max**2*C_P*sp.Abs(phi.diff())*phi.diff().diff() # (13)
)
eq_B_E0s = sp.Eq(B_E0s,
4 * rho * T_s**4*omega*phi_a * C_r / (3 * pi) # (2.19) [2]
)
eq_B_E0 = sp.Eq(B_E0,
sp.Integral(B_E0s,(x_s,AP,FP))
)
eq_R_b = sp.Eq(R_b,
sqrt((B_s*T_s*(1-sigma))/(1-pi/4)) # Martin R_b approximations
)
eq_B_star_hat = sp.Eq(B_star_hat,
B_E_star_hat + B_W_star_hat + B_F_star_hat)
## Speed dependancy
eq_K = sp.Eq(K,
L_pp*omega/V
)
eq_eddy_speed_general = sp.Eq(B_E/B_E0,
(alpha*K)**2 / ((alpha*K)**2 + 1)
)
eq_eddy_speed = eq_eddy_speed_general.subs(alpha, 0.04) ## alpha from Ikeda paper.
## Lewis:
eq_D_1 = sp.Eq(D_1,
(3 + 4 * sigma / pi) + (1 - 4 * sigma / pi) * ((H_0 - 1) / (H_0 + 1)) ** 2
)
eq_a_3 = sp.Eq(a_3,
(-D_1 + 3 + sqrt(9 - 2 * D_1)) / D_1
)
eq_a_1 = sp.Eq(a_1,
(1 + a_3) * (H_0 - 1) / (H_0 + 1)
) | /rolldecay_estimators-0.6.0-py3-none-any.whl/rolldecayestimators/equations_ikeda_naked.py | 0.635675 | 0.308164 | equations_ikeda_naked.py | pypi |
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import KFold
import matplotlib.pyplot as plt
def model_filter(group, models):
return group.name in models
def cross_validates(model, data, features, label='B_e_hat', n_splits=5, itterations=10):
scores = []
for i in range(itterations):
scores_ = cross_validate(model, data, features=features, label=label, n_splits=n_splits)
scores.append(scores_)
return np.array(scores)
def cross_validate(model, data, features, label='B_e_hat', n_splits=5):
groups_model = data.groupby(by='model_number')
models = data['model_number'].unique()
np.random.shuffle(models) # Inplace
kf = KFold(n_splits=n_splits)
scores = []
model_test = clone(model)
for train_index, test_index in kf.split(models):
models_train = models[train_index]
models_test = models[test_index]
data_train = groups_model.filter(func=model_filter, models=models_train)
data_test = X_test = groups_model.filter(func=model_filter, models=models_test)
X_train = data_train[features]
X_test = data_test[features]
y_train = data_train[label]
y_test = data_test[label]
model_test.fit(X=X_train, y=y_train)
score = model_test.score(X=X_test, y=y_test)
scores.append(score)
return scores
def plot_validate(model, data, features, label='B_e_hat', n_splits=5):
groups_model = data.groupby(by='model_number')
models = data['model_number'].unique()
np.random.shuffle(models) # Inplace
kf = KFold(n_splits=n_splits)
model_test = clone(model)
nrows=2
ncols = int(np.ceil(n_splits/nrows))
fig,axes=plt.subplots(ncols=ncols, nrows=nrows)
axess=axes.flatten()
not_used = axess[n_splits:]
for ax in not_used:
ax.remove()
axess=axess[0:n_splits]
fold=0
for (train_index, test_index),ax in zip(kf.split(models),axess):
models_train = models[train_index]
models_test = models[test_index]
data_train = groups_model.filter(func=model_filter, models=models_train)
data_test = X_test = groups_model.filter(func=model_filter, models=models_test)
X_train = data_train[features]
X_test = data_test[features]
y_train = data_train[label]
y_test = data_test[label]
model_test.fit(X=X_train, y=y_train)
ax.plot(y_test, model_test.predict(X=X_test),'.')
xlim = ax.get_xlim()
ylim = ax.get_ylim()
lim = np.max([xlim[1], ylim[1]])
ax.set_xlim(0, lim)
ax.set_ylim(0, lim)
ax.set_title('Test fold %i' % fold)
fold+=1
ax.plot([0, lim], [0, lim], 'r-')
ax.set_aspect('equal', 'box')
ax.grid(True)
#axes[0].legend()
axess[0].set_ylabel('$\hat{B_e}$ (prediction)')
axess[3].set_ylabel('$\hat{B_e}$ (model test)')
axess[3].set_xlabel('$\hat{B_e}$ (model test)')
axess[4].set_xlabel('$\hat{B_e}$ (model test)')
return fig | /rolldecay_estimators-0.6.0-py3-none-any.whl/rolldecayestimators/cross_validation.py | 0.537041 | 0.589303 | cross_validation.py | pypi |
import numpy as np
import pandas as pd
from rolldecayestimators.ikeda import Ikeda
import rolldecayestimators.ikeda
import rolldecayestimators.simplified_ikeda as si
from rolldecayestimators import ikeda_speed
class SimplifiedIkeda(Ikeda):
"""
Class that helps with running various calculations known as the "Simplified Ikedas method" to predict ship roll damping.
The idea with this class is that it can be subclassed so that varius roll damping contributions can be replaced or
removed in order to study different "flavors" of the Ikeda methods (I have seen a lot of different implementations).
"""
def __init__(self, V: np.ndarray, w: np.ndarray, fi_a: float, beam: float, lpp: float,
kg: float, volume: float, draught: float, A0: float, BKL:float, BKB:float,
g=9.81, rho=1000.0, visc=1.15 * 10 ** -6, **kwargs):
"""
Manually specify the inputs to the calculations.
Note: Some inputs need to be specified here, but others can be defined in other ways
(by importing a ship geometry etc.)
Parameters
----------
V
ship speed [m/s]
w
roll frequency [rad/s]
fi_a
roll amplitude [rad]
beam
ship b [m]
lpp
ship perpedicular length [m]
kg
vertical centre of gravity [m] (positive upward)
volume
ship displaced volume [m3]
draught
ship draught [m]
A0
mid section coefficient (A0 = A_mid/(B*T))
BKL
length bilge keel [m] (=0 --> no bilge keel)
BKB
height bilge keel [m] (=0 --> no bilge keel)
g
gravity [m/s2]
rho
water density [kg/m3]
Returns
-------
None
"""
self.V = V
self.g = g
self.w = w
self.fi_a = fi_a
self.beam = beam
self.lpp = lpp
self.kg = kg
self.volume = volume
self.lBK = BKL
self.bBK = BKB
self.rho = rho
self.visc = visc
self._A0=A0
self._draught=draught
#B_W0: pd.Series
@property
def A0(self):
return self._A0
@property
def draught(self):
return self._draught
def calculate_B44(self):
"""
Calculate total roll damping
Returns
-------
B_44_hat : ndarray
Nondimensioal total roll damping [-]
"""
B_44 = (self.calculate_B_W() +
self.calculate_B_F() +
self.calculate_B_E() +
self.calculate_B_L() +
self.calculate_B_BK()
)
return B_44
def calculate_B_W0(self):
"""
Calculate roll wave damping at zero speed
Returns
-------
B_W0_hat : ndarray
Nondimensional roll wave damping at zero speed [-]
"""
B_W0_hat = si.calculate_B_W0(BD=self.BD, CB=self.Cb, CMID=self.A0, OGD=self.OGD, OMEGAHAT=self.w_hat)
return B_W0_hat
def calculate_B_W(self, Bw_div_Bw0_max=np.inf):
"""
Calculate roll wave damping at speed
Returns
-------
B_W_hat : ndarray
Nondimensional roll wave damping at speed [-]
"""
B_W0_hat = self.calculate_B_W0()
Bw_div_Bw0 = self.calculate_Bw_div_Bw0()
B_W_hat = B_W0_hat*Bw_div_Bw0
return B_W_hat
def calculate_B_F(self):
"""
Calculate skin friction damping
Returns
-------
B_F_hat : ndarray
Nondimensional skin friction damping [-]
"""
B_F_hat = si.calculate_B_F(BD=self.BD, BRTH=self.beam, CB=self.Cb, DRAFT=self.draught, KVC=self.visc,
LPP=self.lpp, OGD=self.OGD, OMEGA=self.w, PHI=np.rad2deg(self.fi_a))
return B_F_hat
def calculate_B_E(self):
"""
Calculate bilge eddy damping
Returns
-------
B_E_hat : ndarray
Nondimensional eddy damping [-]
"""
B_E_hat = si.calculate_B_E(BD=self.BD, CB=self.Cb, CMID=self.A0, OGD=self.OGD, OMEGAHAT=self.w_hat,
PHI=np.rad2deg(self.fi_a))
return B_E_hat
def calculate_B_BK(self):
"""
Calculate bilge keel damping
Returns
-------
B_BK_hat : ndarray
Nondimensional bilge keel damping [-]
"""
B_BK_hat = si.calculate_B_BK(BBKB=self.bBK/self.beam, BD=self.BD, CB=self.Cb, CMID=self.A0, LBKL=self.lBK/self.lpp, OGD=self.OGD,
OMEGAHAT=self.w_hat, PHI=np.rad2deg(self.fi_a))
return B_BK_hat
class SimplifiedIkedaBK2(SimplifiedIkeda):
def calculate_B_BK(self):
"""
Calculate bilge keel damping
Returns
-------
B_BK_hat : ndarray
Bilge keel damping [-]
"""
if np.any(~(self.bBK == 0) & (self.lBK == 0)):
raise rolldecayestimators.ikeda.BilgeKeelError('BKB is 0 but BKL is not!')
return 0.0
if isinstance(self.R, np.ndarray):
index = int(len(self.R) / 2) # Somewhere in the middle of the ship
R = self.R[index]
else:
R = self.R
Bp44BK_N0, Bp44BK_H0, B44BK_L, B44BKW0 = ikeda_speed.bilge_keel(w=self.w, fi_a=self.fi_a, V=self.V, B=self.beam,
d=self.draught, A=self.A_mid,
bBK=self.bBK, R=R, g=self.g, OG=self.OG,
Ho=self.Ho, ra=self.rho)
B44BK_N0 = Bp44BK_N0 * self.lBK
B44BK_H0 = Bp44BK_H0 * self.lBK
B44_BK = B44BK_N0 + B44BK_H0 + B44BK_L
B44_BK=rolldecayestimators.ikeda.array(self.B_hat(B44_BK))
mask = ((self.lBK == 0) | (pd.isnull(self.lBK)))
B44_BK[mask] = 0
return B44_BK
class SimplifiedIkedaABS(SimplifiedIkeda):
def calculate_B_W0(self):
"""
Calculate roll wave damping at zero speed
Returns
-------
B_W0_hat : ndarray
Nondimensional roll wave damping at zero speed [-]
"""
return np.abs(super().calculate_B_W0())
def calculate_B_W(self, Bw_div_Bw0_max=np.inf):
"""
Calculate roll wave damping at speed
Returns
-------
B_W_hat : ndarray
Nondimensional roll wave damping at speed [-]
"""
return np.abs(super().calculate_B_W())
def calculate_B_F(self):
"""
Calculate skin friction damping
Returns
-------
B_F_hat : ndarray
Nondimensional skin friction damping [-]
"""
return np.abs(super().calculate_B_F())
def calculate_B_E(self):
"""
Calculate bilge eddy damping
Returns
-------
B_E_hat : ndarray
Nondimensional eddy damping [-]
"""
return np.abs(super().calculate_B_E())
def calculate_B_BK(self):
"""
Calculate bilge keel damping
Returns
-------
B_BK_hat : ndarray
Nondimensional bilge keel damping [-]
"""
return np.abs(super().calculate_B_BK()) | /rolldecay_estimators-0.6.0-py3-none-any.whl/rolldecayestimators/simplified_ikeda_class.py | 0.744563 | 0.577555 | simplified_ikeda_class.py | pypi |
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from rolldecayestimators.simplified_ikeda import calculate_roll_damping, SimplifiedIkedaInputError
def variate_ship(ship, key, changes):
N = len(changes)
data = np.tile(ship.values, (N, 1))
df = pd.DataFrame(data=data, columns=ship.index)
variations = changes * ship[key]
df[key] = variations
df.index = df[key].copy()
return df
def calculate_variation(df, catch_error=False, limit_inputs=False, verify_input=True):
result = df.apply(func=calculate, catch_error=catch_error, limit_inputs=limit_inputs, verify_input=verify_input, axis=1)
return result
def plot_variation(ship, key='lpp', changes=None, ax=None, catch_error=False, plot_change_factor=True):
if changes is None:
N = 30
changes = np.linspace(0.5, 1.5, N)
df = variate_ship(ship=ship, key=key, changes=changes)
result = calculate_variation(df=df, catch_error=catch_error)
result[key] = df[key].copy()
ax = _plot_result(ship=ship, result=result, key=key, changes=changes, ax=ax, plot_change_factor=plot_change_factor)
return ax
def _plot_result(ship, result, key, changes, plot_change_factor=True, ax=None):
if ax is None:
fig, ax = plt.subplots()
if plot_change_factor:
result['change factor'] = changes
result.plot(x='change factor', ax=ax)
else:
result.plot(x=key, ax=ax)
ax.set_title('Variation of %s: %0.3f' % (key, ship[key]))
return ax
def calculate(row, catch_error=False, limit_inputs=False, verify_input=True, **kwargs):
LPP = row.lpp
Beam = row.b
DRAFT = row.DRAFT
PHI = row.phi_max
lBK = row.BKL
bBK = row.BKB
OMEGA = row.omega0
OG = (-row.kg + DRAFT)
CB = row.CB
CMID = row.A0
V = row.V
s = pd.Series()
try:
B44HAT, BFHAT, BWHAT, BEHAT, BBKHAT, BLHAT = calculate_roll_damping(LPP, Beam, CB, CMID, OG, PHI, lBK, bBK,
OMEGA, DRAFT, V=V ,limit_inputs=limit_inputs,
verify_input=verify_input, **kwargs)
except SimplifiedIkedaInputError:
if catch_error:
return s
else:
raise
s['B_44_hat'] = B44HAT
s['B_F_hat'] = BFHAT
s['B_W_hat'] = BWHAT
s['B_E_hat'] = BEHAT
s['B_BK_hat'] = BBKHAT
s['B_L_hat'] = BLHAT
return s | /rolldecay_estimators-0.6.0-py3-none-any.whl/rolldecayestimators/sensitivity.py | 0.567577 | 0.373476 | sensitivity.py | pypi |
from numpy import sqrt,pi,tanh, exp, cos, sin, arccos
import numpy as np
def eddy_sections(bwl:np.ndarray, a_1:np.ndarray, a_3:np.ndarray, sigma:np.ndarray, H0:np.ndarray, Ts:np.ndarray,
OG:float, R:float, wE:float, fi_a:float, ra=1000.0):
"""
Calculation of eddy damping according to
Ikeda, Y.,
1978. On eddy making component of roll damping force on naked hull. University of Osaka Prefacture,
Department of Naval Architecture, Japan, Report No. 00403,
Published in: Journal of Society of Naval Architects of Japan, Volume 142.
Parameters
----------
bwl
sectional b water line [m]
a_1
sectional lewis coefficients
a_3
sectional lewis coefficients
sigma
sectional coefficient
H0
sectional coefficient
Ts
sectional draft [m]
OG
vertical distance water line to cg [m]
R
bilge radius [m]
ra
water density [kg/m3]
wE
roll requency [rad/s]
fi_a
roll amplitude [rad]
Returns
-------
B_e0s
Eddy damping per unit length for the sections at zero speed.
"""
d = Ts
C_r = calculate_C_r(bwl=bwl,a_1=a_1, a_3=a_3, sigma=sigma, H0=H0, d=Ts, OG=OG, R=R, ra=ra)
B_e0s = 4*ra/(3*pi)*d**4*wE*fi_a*C_r # (6) (Rewritten)
return np.array([B_e0s])
def calculate_C_r(bwl:np.ndarray, a_1:np.ndarray, a_3:np.ndarray, sigma:np.ndarray, H0:np.ndarray, d,
OG:float, R:float, ra=1000.0):
"""
Parameters
----------
bwl
sectional b water line [m]
a_1
sectional lewis coefficients
a_3
sectional lewis coefficients
sigma
sectional coefficient
H0
sectional coefficient
Ts
sectional draft [m]
OG
vertical distance water line to cg [m]
R
bilge radius [m]
ra
water density [kg/m3]
Returns
-------
B_e0s
Eddy damping per unit length for the sections at zero speed.
"""
## start to obtain y from the parameter H0 and a of the cylinder section:
gamma, r_max = calculate_gamma(sigma=sigma, OG=OG, d=d, a_1=a_1, a_3=a_3, H0=H0, bwl=bwl)
C_p = calculate_C_p(gamma=gamma) # (14)
f_1 = 1/2*(1 + tanh(20*(sigma - 0.7))) # (12)
f_2 = calculate_f2(sigma=sigma) # (15)
P_m_ = 3*1/2*ra*r_max**2*C_p # (13) (*|phi1d|*phi2d left out, but will cancel out in the next)
C_r = 2/(ra*d**2)*((1 - f_1*R/d)*(1 - OG/d - f_1*R/d) + f_2*(H0 - f_1*R/d)**2)*P_m_/3 # (10)
#M_re = 1/2*ra*r_max**2*d**2*C_p*((1-f_1*R/d)*(1 - OG/d - f_1*R/d) + f_2*(H0 - f_1*R/d)**2)
#C_r = M_re/(1/2*ra*d**4)
return C_r
def calculate_f2(sigma):
f_2 = 1 / 2 * (1 - cos(pi * sigma)) - 1.5 * (1 - exp(-5 * (1 - sigma))) * sin(pi * sigma) ** 2 # (15)
return f_2
def calculate_C_p(gamma):
C_p = 1 / 2 * (0.87 * exp(-gamma) - 4 * exp(-0.187 * gamma) + 3) # (14)
return C_p
def calculate_gamma(sigma, OG, d, a_1, a_3, H0, bwl):
# (A-1):
sigma_ = (sigma-OG/d)/(1-OG/d)
H0_ = H0/(1-OG/d)
# (A-4)
def calculate_A(psi):
return -2 * a_3 * cos(5 * psi) + a_1 * (1 - a_3) * cos(3 * psi) + (
(6 - 3 * a_1) * a_3 ** 2 + (a_1 ** 2 - 3 * a_1) * a_3 + a_1 ** 2) * cos(psi)
def calculate_B(psi):
return -2 * a_3 * sin(5 * psi) + a_1 * (1 - a_3) * sin(3 * psi) + (
(6 + 3 * a_1) * a_3 ** 2 + (3 * a_1 + a_1 ** 2) * a_3 + a_1 ** 2) * sin(psi)
def calculate_M():
return bwl / (2 * (1 + a_1 + a_3)) # Note B is bwl!!!
def calculate_H(psi):
return 1 + a_1 ** 2 + 9 * a_3 ** 2 + 2 * a_1 * (1 - 3 * a_3) * cos(2 * psi) - 6 * a_3 * cos(4 * psi)
def calculate_r_max(psi):
M = calculate_M()
return M * sqrt(
((1 + a_1) * sin(psi) - a_3 * sin(3 * psi)) ** 2 + (
(1 - a_1) * cos(psi) + a_3 * cos(3 * psi)) ** 2) # (A-5)
# (A-6)
psi_1 = 0
factor = a_1*(1+a_3)/(4*a_3)
# Limit factor to [-1,1]
mask = np.abs(factor) > 1
factor[mask] = 1*np.sign(factor[mask])
psi_2 = 1/2*arccos(factor)
# (A-7)
r_max_1 = calculate_r_max(psi_1)
r_max_2 = calculate_r_max(psi_2)
mask = r_max_1 >= r_max_2
psi = np.array(psi_2)
psi[mask] = psi_1
A = calculate_A(psi=psi)
B = calculate_B(psi=psi)
M = calculate_M()
H = calculate_H(psi=psi)
r_max = np.array(r_max_2)
r_max[mask] = r_max_1[mask]
f_3 = 1 + 4*exp(-1.65*10**5*(1-sigma)**2) # (A-9)
gamma = sqrt(pi)*f_3/((2*d)*(1-OG/d)*sqrt(H0_*sigma_))*(r_max+2*M/H)*sqrt(A**2+B**2) # (A-10)
return gamma, r_max | /rolldecay_estimators-0.6.0-py3-none-any.whl/rolldecayestimators/ikeda_naked.py | 0.867415 | 0.786459 | ikeda_naked.py | pypi |
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit
from sklearn.utils.validation import check_is_fitted
import inspect
from rolldecayestimators.substitute_dynamic_symbols import lambdify
from rolldecayestimators.symbols import *
from rolldecayestimators import equations
from rolldecayestimators.direct_estimator import DirectEstimator
class AnalyticalLinearEstimator(DirectEstimator):
""" A template estimator to be used as a reference implementation.
For more information regarding how to build your own estimator, read more
in the :ref:`User Guide <user_guide>`.
Parameters
----------
demo_param : str, default='demo_param'
A parameter used for demonstation of how to pass and store paramters.
"""
# Defining the diff equation for this estimator:
rhs = -phi_dot_dot / (omega0 ** 2) - 2 * zeta / omega0 * phi_dot
roll_diff_equation = sp.Eq(lhs=phi, rhs=rhs)
acceleration = sp.Eq(lhs=phi, rhs=sp.solve(roll_diff_equation, phi.diff().diff())[0])
functions = {
'phi':lambdify(sp.solve(equations.analytical_solution, phi)[0]),
'velocity':lambdify(sp.solve(equations.analytical_phi1d, phi_dot)[0]),
'acceleration':lambdify(sp.solve(equations.analytical_phi2d, phi_dot_dot)[0]),
}
@property
def parameter_names(self):
signature = inspect.signature(self.calculate_acceleration)
remove = ['phi_0','phi_01d', 't']
if not self.omega_regression:
remove.append('omega0')
return list(set(signature.parameters.keys()) - set(remove))
def estimator(self, x, xs):
parameters = {key: x for key, x in zip(self.parameter_names, x)}
if not self.omega_regression:
parameters['omega0'] = self.omega0
t = xs.index
phi_0 = xs.iloc[0][self.phi_key]
phi_01d = xs.iloc[0][self.phi1d_key]
return self.functions['phi'](t=t,phi_0=phi_0,phi_01d=phi_01d,**parameters)
def predict(self, X)->pd.DataFrame:
check_is_fitted(self, 'is_fitted_')
t = X.index
phi_0 = X.iloc[0][self.phi_key]
phi_01d = X.iloc[0][self.phi1d_key]
df = pd.DataFrame(index=t)
df['phi'] = self.functions['phi'](t=t, phi_0=phi_0, phi_01d=phi_01d, **self.parameters)
df['phi1d'] = self.functions['velocity'](t=t, phi_0=phi_0, phi_01d=phi_01d, **self.parameters)
df['phi2d'] = self.functions['acceleration'](t=t, phi_0=phi_0, phi_01d=phi_01d, **self.parameters)
return df | /rolldecay_estimators-0.6.0-py3-none-any.whl/rolldecayestimators/analytical_linear_estimator.py | 0.850469 | 0.554169 | analytical_linear_estimator.py | pypi |
import numpy as np
from numpy import pi, abs, tanh, cos, sin, exp, sqrt, arccos
def calculate_rmax(M, a_1, a_3, psi):
r_maxs = M * sqrt(
((1 + a_1) * sin(psi) - a_3 * sin(3 * psi)) ** 2 + ((1 - a_1) * cos(psi) + a_3 * cos(3 * psi)) ** 2)
return r_maxs
def eddy_sections(bwl:np.ndarray, a_1:np.ndarray, a_3:np.ndarray, sigma:np.ndarray, H0:np.ndarray, Ts:np.ndarray,
OG:float, R_b:float, wE:float, fi_a:float, rho=1000.0):
"""
Calculation of eddy damping according to Ikeda.
This implementation is a translation from:
Ikeda, Y., 1978. On eddy making component of roll damping force on naked hull. University of Osaka Prefacture,
Department of Naval Architecture, Japan, Report No. 00403,
Published in: Journal of Society of Naval Architects of Japan, Volume 142.
Parameters
----------
bwl
sectional b water line [m]
a_1
sectional lewis coefficients
a_3
sectional lewis coefficients
sigma
sectional coefficient
H0
sectional coefficient
Ts
sectional draft [m]
OG
vertical distance water line to cg [m]
R_b
bilge radius [m]
rho
water density [kg/m3]
wE
roll requency [rad/s]
fi_a
roll amplitude [rad]
Returns
-------
BE0s_hat
Eddy damping per unit length for the sections at zero speed.
"""
d = Ts
B = bwl # Probably
M = B / (1 + a_1 + a_3)
f_1 = 1/2*(1+tanh(20*(sigma-0.7)))
f_2 = 1/2*(1-cos(pi*sigma))-1.5*(1-exp(-5*(1-sigma)))*sin(pi*sigma)**2
H0_prim = H0*d/(d-OG)
sigma_prim = (sigma*d-OG)/(d-OG)
f_3 = 1 + 4*exp(-1.65*10**5*(1-sigma)**2)
psi_1 = 0.0
psi_2 = 1 / 2 * arccos(a_1 * (1 + a_3) / (4 * a_3))
r_max_1 = calculate_rmax(M=M, a_1=a_1, a_3=a_3, psi=psi_1)
r_max_2 = calculate_rmax(M=M, a_1=a_1, a_3=a_3, psi=psi_2)
mask = r_max_1 >= r_max_2
psi = psi_2
psi[mask] = psi_1
## A
cs = [-2 * a_3, a_1*(1 - a_3), ((6 - 3*a_1)*a_3**2 + (a_1**2 - 3*a_1)*a_3 + a_1**2)]
ps = [5*psi, 3*psi, psi]
A_1=0
for c,p in zip(cs,ps):
A_1+=c*cos(p)
## B
B_1 = 0
cs = [-2 * a_3, a_1 * (1 - a_3), ((6 + 3 * a_1)*a_3**2 + (3*a_1 + a_1**2)*a_3 + a_1**2)]
for c, p in zip(cs, ps):
B_1 += c * sin(p)
r_max = calculate_rmax(M=M, a_1=a_1, a_3=a_3, psi=psi)
H = 1 + a_1**2 + 9*a_3**2 + 2*a_1*(1-3*a_3)*cos(2*psi) - 6*a_3*cos(4*psi)
V_max_div_phi1d = f_3*(r_max + 2*M/H*sqrt(A_1**2+B_1**2))
gamma = sqrt(pi)/(2*d*(1-OG/d)*sqrt(H0_prim*sigma_prim))*V_max_div_phi1d
C_p = 1/2*(0.87*exp(-gamma)-4*exp(-0.187*gamma)+3)
R_b = calculate_R_b(beam=bwl, draught=Ts, H0=H0, sigma=sigma)
M_re = 1/2*rho*r_max**2*d**2*C_p*((1-f_1*R_b/d)*(1-OG/d-f_1*R_b/d) + f_2*(H0-f_1*R_b/d)**2)
C_r = M_re / (1/2*rho*d**4)
BE0s = 4/(3*pi)*d**4*wE*fi_a*C_r # Rewritten without hat
return BE0s
def calculate_R_b(beam, draught, H0, sigma):
"""
Calculate bilge radius with Ikedas empirical formula:
Returns
-------
R_b : ndarray
Bilge radius [m]
"""
mask=sigma>1
sigma[mask]=0.99 # Do avoid negative value in sqrt
mask=H0<0
R_b = 2*draught*np.sqrt(H0*(sigma-1)/(pi-4))
mask = (H0>=1) & (R_b/draught>1)
R_b[mask]=draught
mask = (H0 < 1) & (R_b / draught > H0)
R_b[mask] = beam/2
return R_b | /rolldecay_estimators-0.6.0-py3-none-any.whl/rolldecayestimators/ikeda_eddy.py | 0.75274 | 0.751762 | ikeda_eddy.py | pypi |
import numpy as np
import pandas as pd
from scipy.integrate import odeint
from rolldecayestimators import DirectEstimator
from rolldecayestimators.symbols import *
from rolldecayestimators import equations, symbols
from rolldecayestimators.substitute_dynamic_symbols import lambdify, run
from sklearn.utils.validation import check_is_fitted
from rolldecayestimators.estimator import RollDecay
class EstimatorCubic(DirectEstimator):
""" A template estimator to be used as a reference implementation.
For more information regarding how to build your own estimator, read more
in the :ref:`User Guide <user_guide>`.
Parameters
----------
demo_param : str, default='demo_param'
A parameter used for demonstation of how to pass and store paramters.
"""
## Cubic model:
b44_cubic_equation = sp.Eq(B_44, B_1 * phi_dot + B_2 * phi_dot * sp.Abs(phi_dot) + B_3 * phi_dot ** 3)
restoring_equation_cubic = sp.Eq(C_44, C_1 * phi + C_3 * phi ** 3 + C_5 * phi ** 5)
subs = [
(B_44, sp.solve(b44_cubic_equation, B_44)[0]),
(C_44, sp.solve(restoring_equation_cubic, C_44)[0])
]
roll_decay_equation = equations.roll_decay_equation_general_himeno.subs(subs)
# Normalizing with A_44:
lhs = (roll_decay_equation.lhs / A_44).subs(equations.subs_normalize).simplify()
roll_decay_equation_A = sp.Eq(lhs=lhs, rhs=0)
acceleration = sp.solve(roll_decay_equation_A, phi_dot_dot)[0]
functions = {
'acceleration':lambdify(acceleration)
}
C_1_equation = equations.C_equation_linear.subs(symbols.C, symbols.C_1) # C_1 = GM*gm
eqs = [
C_1_equation,
equations.normalize_equations[symbols.C_1]
]
A44_equation = sp.Eq(symbols.A_44, sp.solve(eqs, symbols.C_1, symbols.A_44)[symbols.A_44])
functions['A44'] = lambdify(sp.solve(A44_equation, symbols.A_44)[0])
eqs = [equations.C_equation_linear,
equations.omega0_equation,
A44_equation,
]
omgea0_equation = sp.Eq(symbols.omega0, sp.solve(eqs, symbols.A_44, symbols.C, symbols.omega0)[0][2])
functions['omega0'] = lambdify(sp.solve(omgea0_equation,symbols.omega0)[0])
def __init__(self, maxfev=1000, bounds={}, ftol=10 ** -15, p0={}, fit_method='integration'):
new_bounds={
'B_1A':(0, np.inf), # Assuming only positive coefficients
# 'B_2A': (0, np.inf), # Assuming only positive coefficients
# 'B_3A': (0, np.inf), # Assuming only positive coefficients
}
new_bounds.update(bounds)
bounds=new_bounds
super().__init__(maxfev=maxfev, bounds=bounds, ftol=ftol, p0=p0, fit_method=fit_method, omega_regression=True)
@classmethod
def load(cls, B_1A:float, B_2A:float, B_3A:float, C_1A:float, C_3A:float, C_5A:float, X=None, **kwargs):
"""
Load data and parameters from an existing fitted estimator
A_44 is total roll intertia [kg*m**2] (including added mass)
Parameters
----------
B_1A
B_1/A_44 : linear damping
B_2A
B_2/A_44 : quadratic damping
B_3A
B_3/A_44 : cubic damping
C_1A
C_1/A_44 : linear stiffness
C_3A
C_3/A_44 : cubic stiffness
C_5A
C_5/A_44 : pentatonic stiffness
X : pd.DataFrame
DataFrame containing the measurement that this estimator fits (optional).
Returns
-------
estimator
Loaded with parameters from data and maybe also a loaded measurement X
"""
data={
'B_1A':B_1A,
'B_2A':B_2A,
'B_3A':B_3A,
'C_1A':C_1A,
'C_3A':C_3A,
'C_5A':C_5A,
}
return super(cls, cls)._load(data=data, X=X)
def calculate_additional_parameters(self, A44):
check_is_fitted(self, 'is_fitted_')
parameters_additional = {}
for key, value in self.parameters.items():
symbol_key = sp.Symbol(key)
new_key = key[0:-1]
symbol_new_key = ss.Symbol(new_key)
if symbol_new_key in equations.normalize_equations:
normalize_equation = equations.normalize_equations[symbol_new_key]
solution = sp.solve(normalize_equation,symbol_new_key)[0]
new_value = solution.subs([(symbol_key,value),
(symbols.A_44,A44),
])
parameters_additional[new_key]=new_value
return parameters_additional
def result_for_database(self, meta_data={}):
s = super().result_for_database(meta_data=meta_data)
inputs=pd.Series(meta_data)
inputs['m'] = inputs['Volume']*inputs['rho']
parameters = pd.Series(self.parameters)
inputs = parameters.combine_first(inputs)
s['A_44'] = run(self.functions['A44'], inputs=inputs)
parameters_additional = self.calculate_additional_parameters(A44=s['A_44'])
s.update(parameters_additional)
inputs['A_44'] = s['A_44']
s['omega0'] = run(function=self.functions['omega0'], inputs=inputs)
self.results = s # Store it also
return s
class EstimatorQuadraticB(EstimatorCubic):
""" A template estimator to be used as a reference implementation.
For more information regarding how to build your own estimator, read more
in the :ref:`User Guide <user_guide>`.
Parameters
----------
demo_param : str, default='demo_param'
A parameter used for demonstation of how to pass and store paramters.
"""
## Cubic model:
b44_quadratic_equation = sp.Eq(B_44, B_1 * phi_dot + B_2 * phi_dot * sp.Abs(phi_dot))
restoring_equation_quadratic = sp.Eq(C_44, C_1 * phi)
subs = [
(B_44, sp.solve(b44_quadratic_equation, B_44)[0]),
(C_44, sp.solve(restoring_equation_quadratic, C_44)[0])
]
roll_decay_equation = equations.roll_decay_equation_general_himeno.subs(subs)
# Normalizing with A_44:
lhs = (roll_decay_equation.lhs / A_44).subs(equations.subs_normalize).simplify()
roll_decay_equation_A = sp.Eq(lhs=lhs, rhs=0)
acceleration = sp.solve(roll_decay_equation_A, phi_dot_dot)[0]
functions = dict(EstimatorCubic.functions)
functions['acceleration'] = lambdify(acceleration)
@classmethod
def load(cls, B_1A:float, B_2A:float, C_1A:float, X=None, **kwargs):
"""
Load data and parameters from an existing fitted estimator
A_44 is total roll intertia [kg*m**2] (including added mass)
Parameters
----------
B_1A
B_1/A_44 : linear damping
B_2A
B_2/A_44 : quadratic damping
C_1A
C_1/A_44 : linear stiffness
X : pd.DataFrame
DataFrame containing the measurement that this estimator fits (optional).
Returns
-------
estimator
Loaded with parameters from data and maybe also a loaded measurement X
"""
data={
'B_1A':B_1A,
'B_2A':B_2A,
'C_1A':C_1A,
}
return super(cls, cls)._load(data=data, X=X)
class EstimatorQuadraticBandC(EstimatorCubic):
""" A template estimator to be used as a reference implementation.
For more information regarding how to build your own estimator, read more
in the :ref:`User Guide <user_guide>`.
Parameters
----------
demo_param : str, default='demo_param'
A parameter used for demonstation of how to pass and store paramters.
"""
## Quadratic model:
b44_quadratic_equation = sp.Eq(B_44, B_1 * phi_dot + B_2 * phi_dot * sp.Abs(phi_dot))
restoring_equation_quadratic = sp.Eq(C_44, C_1 * phi + C_3 * phi ** 3)
subs = [
(B_44, sp.solve(b44_quadratic_equation, B_44)[0]),
(C_44, sp.solve(restoring_equation_quadratic, C_44)[0])
]
roll_decay_equation = equations.roll_decay_equation_general_himeno.subs(subs)
# Normalizing with A_44:
lhs = (roll_decay_equation.lhs / A_44).subs(equations.subs_normalize).simplify()
roll_decay_equation_A = sp.Eq(lhs=lhs, rhs=0)
acceleration = sp.solve(roll_decay_equation_A, phi_dot_dot)[0]
functions = dict(EstimatorCubic.functions)
functions['acceleration'] = lambdify(acceleration)
class EstimatorQuadratic(EstimatorCubic):
""" A template estimator to be used as a reference implementation.
For more information regarding how to build your own estimator, read more
in the :ref:`User Guide <user_guide>`.
Parameters
----------
demo_param : str, default='demo_param'
A parameter used for demonstation of how to pass and store paramters.
"""
## Quadratic model with Cubic restoring force:
b44_quadratic_equation = sp.Eq(B_44, B_1 * phi_dot + B_2 * phi_dot * sp.Abs(phi_dot))
restoring_equation_cubic = sp.Eq(C_44, C_1 * phi + C_3 * phi ** 3 + C_5 * phi ** 5)
subs = [
(B_44, sp.solve(b44_quadratic_equation, B_44)[0]),
(C_44, sp.solve(restoring_equation_cubic, C_44)[0])
]
roll_decay_equation = equations.roll_decay_equation_general_himeno.subs(subs)
# Normalizing with A_44:
lhs = (roll_decay_equation.lhs / A_44).subs(equations.subs_normalize).simplify()
roll_decay_equation_A = sp.Eq(lhs=lhs, rhs=0)
acceleration = sp.solve(roll_decay_equation_A, phi_dot_dot)[0]
functions = dict(EstimatorCubic.functions)
functions['acceleration'] = lambdify(acceleration)
class EstimatorLinear(EstimatorCubic):
""" A template estimator to be used as a reference implementation.
For more information regarding how to build your own estimator, read more
in the :ref:`User Guide <user_guide>`.
Parameters
----------
demo_param : str, default='demo_param'
A parameter used for demonstation of how to pass and store paramters.
"""
## Linear model:
b44_linear_equation = sp.Eq(B_44, B_1 * phi_dot)
restoring_linear_quadratic = sp.Eq(C_44, C_1 * phi)
subs = [
(B_44, sp.solve(b44_linear_equation, B_44)[0]),
(C_44, sp.solve(restoring_linear_quadratic, C_44)[0])
]
roll_decay_equation = equations.roll_decay_equation_general_himeno.subs(subs)
# Normalizing with A_44:
lhs = (roll_decay_equation.lhs / A_44).subs(equations.subs_normalize).simplify()
roll_decay_equation_A = sp.Eq(lhs=lhs, rhs=0)
acceleration = sp.solve(roll_decay_equation_A, phi_dot_dot)[0]
functions = dict(EstimatorCubic.functions)
functions['acceleration'] = lambdify(acceleration)
@classmethod
def load(cls, B_1A:float, C_1A:float, X=None, **kwargs):
"""
Load data and parameters from an existing fitted estimator
A_44 is total roll intertia [kg*m**2] (including added mass)
Parameters
----------
B_1A
B_1/A_44 : linear damping
C_1A
C_1/A_44 : linear stiffness
X : pd.DataFrame
DataFrame containing the measurement that this estimator fits (optional).
Returns
-------
estimator
Loaded with parameters from data and maybe also a loaded measurement X
"""
data={
'B_1A':B_1A,
'C_1A':C_1A,
}
return super(cls, cls)._load(data=data, X=X) | /rolldecay_estimators-0.6.0-py3-none-any.whl/rolldecayestimators/direct_estimator_cubic.py | 0.822153 | 0.575469 | direct_estimator_cubic.py | pypi |
import numpy as np
import pandas as pd
from scipy.integrate import odeint
from rolldecayestimators.substitute_dynamic_symbols import lambdify
from rolldecayestimators.symbols import *
import inspect
from scipy.optimize import curve_fit
from rolldecayestimators.direct_estimator import DirectEstimator
lhs = phi_dot_dot + 2*zeta*omega0*phi_dot + omega0**2*phi
roll_diff_equation = sp.Eq(lhs=lhs,rhs=0)
acceleration = sp.Eq(lhs=phi, rhs=sp.solve(roll_diff_equation, phi.diff().diff())[0])
calculate_acceleration = lambdify(acceleration.rhs)
class DirectLinearEstimator(DirectEstimator):
""" A template estimator to be used as a reference implementation.
For more information regarding how to build your own estimator, read more
in the :ref:`User Guide <user_guide>`.
Parameters
----------
demo_param : str, default='demo_param'
A parameter used for demonstation of how to pass and store paramters.
"""
# Defining the diff equation for this estimator:
rhs = -phi_dot_dot/(omega0**2) - 2*zeta/omega0*phi_dot
roll_diff_equation = sp.Eq(lhs=phi, rhs=rhs)
acceleration = sp.Eq(lhs=phi, rhs=sp.solve(roll_diff_equation, phi.diff().diff())[0])
functions = {'acceleration':lambdify(acceleration.rhs)}
@classmethod
def load(cls, omega0:float, zeta:float, X=None):
"""
Load data and parameters from an existing fitted estimator
A_44 is total roll intertia [kg*m**2] (including added mass)
Parameters
----------
omega0:
roll natural frequency[rad/s]
zeta:
linear roll damping [-]
X : pd.DataFrame
DataFrame containing the measurement that this estimator fits (optional).
Returns
-------
estimator
Loaded with parameters from data and maybe also a loaded measurement X
"""
data = {
'omega0': omega0,
'zeta': zeta,
}
return super(cls, cls)._load(data=data, X=X) | /rolldecay_estimators-0.6.0-py3-none-any.whl/rolldecayestimators/direct_linear_estimator.py | 0.897437 | 0.432003 | direct_linear_estimator.py | pypi |
from sklearn.metrics import r2_score
import pandas as pd
pd.options.display.max_rows = 999
pd.options.display.max_columns = 999
pd.set_option("display.max_columns", None)
import numpy as np
from sklearn.base import BaseEstimator
from sklearn.pipeline import Pipeline
import os.path
from rolldecayestimators.substitute_dynamic_symbols import lambdify,run
from sklearn.linear_model import LinearRegression
from sympy.parsing.sympy_parser import parse_expr
import sympy as sp
from rolldecayestimators import symbols
import dill
dill.settings['recurse'] = True
class Polynom(BaseEstimator):
""" Estimator that wrappes a model pipline and convert it to a SymPy polynomial expression
For more information regarding how to build your own estimator, read more
in the :ref:`User Guide <user_guide>`.
Parameters
----------
model : sklearn.pipeline.Pipeline,
The model should contain:
model['polynomial_feature'] : sklearn.feature_selection.SelectKBest
model['variance_treshold'] : sklearn.feature_selection.VarianceThreshold
model['select_k_best'] : sklearn.feature_selection.SelectKBest
columns : list
a list with the column names from the original dataframe.
y_symbol : sympy.Symbol
y_symbol = poly(....)
"""
def __init__(self, model:Pipeline, columns:list, y_symbol:sp.Symbol):
self.polynomial_features = model['polynomial_feature']
self.variance_treshold = model['variance_treshold']
self.select_k_best = model['select_k_best']
self.feature_names = np.array(self.polynomial_features.get_feature_names())
self.polynomial_regression = LinearRegression()
self.columns = columns
self.feature_eqs = self.define_feature_equations()
self.y_symbol = y_symbol
def fit(self, X, y):
self.X = X
result = self.polynomial_regression.fit(X=self.good_X(X), y=y)
self.equation = self.get_equation()
self.lamda = lambdify(self.equation.rhs)
return result
def score(self, X, y):
y_pred = self.predict(X=X)
return r2_score(y_true=y, y_pred=y_pred)
def predict(self, X):
if isinstance(X,dict):
return run(self.lamda, X)
if isinstance(X,pd.Series):
return run(self.lamda, X)
if not isinstance(X, pd.DataFrame):
assert X.shape[1] == len(self.columns)
X = pd.DataFrame(data=X, columns=self.columns)
return run(self.lamda, X)
@property
def good_index(self):
feature_names_index = pd.Series(self.feature_names)
mask_treshold = self.variance_treshold.get_support()
mask_select_k_best = self.select_k_best.get_support()
return feature_names_index[mask_treshold][mask_select_k_best]
def good_X(self, X):
X2 = self.polynomial_features.transform(X)
return X2[:, self.good_index.index] # Only the good stuff
def define_sympy_symbols(self):
self.sympy_symbols = {key: getattr(symbols, key) for key in self.columns}
def define_parameters(self):
self.parameters = [self.sympy_symbols[key] for key in self.columns]
def define_feature_equations(self):
self.define_sympy_symbols()
self.define_parameters()
xs = {'x%i' % i: name for i, name in enumerate(self.columns)}
xs_sympy = {sp.Symbol(key): self.sympy_symbols[value] for key, value in xs.items()}
feature_eqs = [1.0, ]
for feature in self.feature_names[1:]:
s_eq = feature
s_eq = s_eq.replace(' ', '*')
s_eq = s_eq.replace('^', '**')
sympy_eq = parse_expr(s_eq)
sympy_eq = sympy_eq.subs(xs_sympy)
feature_eqs.append(sympy_eq)
return np.array(feature_eqs)
@property
def good_feature_equations(self):
return self.feature_eqs[self.good_index.index]
def get_equation(self):
rhs = 0
for good_feature_equation, coeff in zip(self.good_feature_equations,
self.polynomial_regression.coef_):
rhs += coeff * good_feature_equation
rhs += self.polynomial_regression.intercept_
return sp.Eq(self.y_symbol, rhs)
def save(self, file_path:str):
if not os.path.splitext(file_path)[-1]:
file_path+='.sym'
dill.dump(self, open(file_path, 'wb'))
@classmethod
def load(cls, file_path:str):
with open(file_path, 'rb') as file:
polynom = dill.load(file)
polynom.equation = polynom.get_equation()
polynom.lamda = lambdify(polynom.equation.rhs)
return polynom | /rolldecay_estimators-0.6.0-py3-none-any.whl/rolldecayestimators/polynom_estimator.py | 0.899784 | 0.579936 | polynom_estimator.py | pypi |
import sympy as sp
from rolldecayestimators.symbols import *
# Some of the symbols can be nicer displayed using this:
nicer_LaTeX = [
(B_E0_hat, sp.symbols('\hat{B}_{E0}')),
(B_E_star_hat, sp.symbols('\hat{B^*}_{E}')),
(B_W_star_hat, sp.symbols('\hat{B^*}_{W}')),
(B_F_star_hat, sp.symbols('\hat{B^*}_{F}')),
(B_star_hat, sp.symbols('\hat{B^*}')),
(omega_hat, sp.symbols('\hat{\omega}')),
]
# General roll motion equation according to Himeno:
lhs = A_44*phi_dot_dot + B_44 + C_44
rhs = M_44
roll_equation_himeno = sp.Eq(lhs=lhs, rhs=rhs)
# No external forces (during roll decay)
roll_decay_equation_general_himeno = roll_equation_himeno.subs(M_44,0)
restoring_equation = sp.Eq(C_44,m*g*GZ)
restoring_equation_linear = sp.Eq(C_44,m*g*GM*phi)
restoring_equation_quadratic = sp.Eq(C_44, C_1 * phi + C_3 * phi ** 3)
restoring_equation_cubic = sp.Eq(C_44,C_1*phi + C_3*phi*sp.Abs(phi) + C_5*phi**3)
## Cubic model:
b44_cubic_equation = sp.Eq(B_44, B_1 * phi_dot + B_2 * phi_dot * sp.Abs(phi_dot) + B_3 * phi_dot ** 3)
restoring_equation_cubic = sp.Eq(C_44, C_1 * phi + C_3 * phi ** 3 + C_5 * phi ** 5)
subs = [
(B_44, sp.solve(b44_cubic_equation, B_44)[0]),
(C_44, sp.solve(restoring_equation_cubic, C_44)[0])
]
roll_decay_equation_himeno_quadratic = roll_decay_equation_general_himeno.subs(subs)
## Quadratic model:
b44_quadratic_equation = sp.Eq(B_44, B_1 * phi_dot + B_2 * phi_dot * sp.Abs(phi_dot))
restoring_equation_quadratic = sp.Eq(C_44, C_1 * phi + C_3 * phi ** 3)
subs = [
(B_44, sp.solve(b44_quadratic_equation, B_44)[0]),
(C_44, sp.solve(restoring_equation_quadratic, C_44)[0])
]
roll_decay_equation_himeno_quadratic = roll_decay_equation_general_himeno.subs(subs)
## Linear model:
b44_linear_equation = sp.Eq(B_44, B_1 * phi_dot)
restoring_linear_quadratic = sp.Eq(C_44, C_1 * phi)
subs = [
(B_44, sp.solve(b44_linear_equation, B_44)[0]),
(C_44, sp.solve(restoring_linear_quadratic, C_44)[0])
]
roll_decay_equation_himeno_linear = roll_decay_equation_general_himeno.subs(subs)
C_equation = sp.Eq(C,C_44/phi)
C_equation_linear = C_equation.subs(C_44,sp.solve(restoring_equation_linear,C_44)[0])
C_equation_cubic = C_equation.subs(C_44,sp.solve(restoring_equation_cubic,C_44)[0])
C_equation_quadratic = C_equation.subs(C_44,sp.solve(restoring_equation_quadratic,C_44)[0])
roll_decay_equation_himeno_quadratic = roll_decay_equation_general_himeno.subs(B_44,
sp.solve(b44_quadratic_equation,B_44)[0]).subs(C,
sp.solve(C_equation_quadratic,C)[0])
subs = [
(B_44, sp.solve(b44_quadratic_equation, B_44)[0]),
(C_44, sp.solve(restoring_linear_quadratic, C_44)[0])
]
roll_decay_equation_himeno_quadratic_b = roll_decay_equation_general_himeno.subs(subs)
roll_decay_equation_himeno_quadratic_c = roll_decay_equation_himeno_quadratic.subs(C_44,sp.solve(C_equation, C_44)[0])
zeta_equation = sp.Eq(2*zeta*omega0,B_1/A_44)
d_equation = sp.Eq(d,B_2/A_44)
omega0_equation = sp.Eq(omega0,sp.sqrt(C/A_44))
eq = sp.Eq(roll_decay_equation_himeno_quadratic_c.lhs/A_44,0) # helper equation
subs = [
(B_1, sp.solve(zeta_equation, B_1)[0]),
(B_2, sp.solve(d_equation, B_2)[0]),
(C / A_44, sp.solve(omega0_equation, C / A_44)[0])
]
roll_decay_equation_quadratic = sp.Eq(sp.expand(eq.lhs).subs(subs), 0)
roll_decay_equation_quadratic = sp.factor(roll_decay_equation_quadratic, phi_dot)
roll_decay_equation_linear = roll_decay_equation_quadratic.subs(d,0)
omega0_equation_linear = omega0_equation.subs(C,sp.solve(C_equation_linear,C)[0])
A44 = sp.solve(omega0_equation_linear, A_44)[0]
zeta_B1_equation = zeta_equation.subs(A_44,A44)
d_B2_equation = d_equation.subs(A_44,A44)
## Cubic model:
subs = [
(B_44,sp.solve(b44_cubic_equation,B_44)[0]),
(C_44,sp.solve(restoring_equation_cubic,C_44)[0])
]
roll_decay_equation_cubic = roll_decay_equation_general_himeno.subs(subs)
## Quadratic model:
subs = [
(B_44,sp.solve(b44_quadratic_equation,B_44)[0]),
(C_44,sp.solve(restoring_equation_cubic,C_44)[0])
]
roll_decay_equation_quadratic_ = roll_decay_equation_general_himeno.subs(subs)
# But this equation does not have a unique solution, so we devide all witht the interia A_44:
normalize_symbols = [B_1, B_2, B_3, C_1, C_3, C_5]
normalize_equations = {}
new_symbols = {}
subs_normalize = []
for symbol in normalize_symbols:
if 'C' in symbol.name:
description = 'Stiffness help coefficients'
elif 'B' in symbol.name:
description = 'Damping help coefficients'
else:
description = ' '
new_symbol = ss.Symbol(name='%sA' % symbol.name, description=description, unit=' ')
new_symbol = sp.Symbol('%sA' % symbol.name)
new_symbols[symbol] = new_symbol
eq = sp.Eq(new_symbol,symbol/A_44)
normalize_equations[symbol]=eq
subs_normalize.append((symbol, sp.solve(eq, symbol)[0]))
lhs = (roll_decay_equation_cubic.lhs/A_44).subs(subs_normalize).simplify()
roll_decay_equation_cubic_A = sp.Eq(lhs=lhs,rhs=0)
lhs = (roll_decay_equation_quadratic_.lhs/A_44).subs(subs_normalize).simplify()
roll_decay_equation_quadratic_A = sp.Eq(lhs=lhs,rhs=0)
## Equivalt linearized damping:
B_e_equation = sp.Eq(B_e,B_1+8/(3*sp.pi)*omega0*phi_a*B_2)
B_e_equation_cubic = sp.Eq(B_e,B_1+8/(3*sp.pi)*omega0*phi_a*B_2 + 3/4*omega0**2*phi_a**2*B_3)
A_44_eq = sp.Eq(A_44, A44)
eqs = [
A_44_eq,
C_equation_linear,
]
omega0_eq = sp.Eq(omega0,sp.solve(eqs, omega0, GM)[1][0])
omega0_eq = omega0_eq.subs(C,C_1)
## Nondimensional damping Himeno:
lhs = B_44_hat
rhs = B_44 / (rho * Disp * b ** 2) * sp.sqrt(b / (2 * g))
B44_equation = sp.Eq(lhs, rhs)
omega0_equation_linear = omega0_equation.subs(C,sp.solve(C_equation_linear,C)[0])
omega_hat_equation = sp.Eq(omega_hat, omega * sp.sqrt(b / (2 * g)))
B44_hat_equation = sp.Eq(B_44_hat, B_44 / (rho * Disp * b ** 2) * sp.sqrt(b / (2 * g)))
B_1_hat_equation = sp.Eq(B_1_hat, B_1 / (rho * Disp * b ** 2) * sp.sqrt(b / (2 * g)))
B_e_hat_equation = sp.Eq(B_e_hat, B_e / (rho * Disp * b ** 2) * sp.sqrt(b / (2 * g)))
B_2_hat_equation = sp.Eq(B_2_hat, B_2 / (rho * Disp * b ** 2) * sp.sqrt(b / (2 * g)) ** (0))
B44_hat_equation_quadratic = B44_hat_equation.subs(B_44,sp.solve(b44_quadratic_equation,B_44)[0])
omega0_hat_equation = omega_hat_equation.subs(omega,omega0)
## Analytical
diff_eq = sp.Eq(y.diff().diff() + 2 * zeta * omega0 * y.diff() + omega0 ** 2 * y, 0)
equation_D = sp.Eq(D, sp.sqrt(1 - zeta ** 2))
lhs = y
rhs = sp.exp(-zeta * omega0 * t) * (y0 * sp.cos(omega0 * D * t) + (y0_dot / (omega0 * D) + zeta * y0 / D) * sp.sin(omega0 * D * t))
analytical_solution_general = sp.Eq(lhs,rhs)
subs = [
(y,phi),
(y0, phi_0),
(y0_dot, phi_0_dot),
(y0_dotdot, phi_0_dotdot),
(D,sp.solve(equation_D,D)[0]),
]
analytical_solution = analytical_solution_general.subs(subs)
analytical_phi1d = sp.Eq(phi_dot,sp.simplify(analytical_solution.rhs.diff(t)))
analytical_phi2d = sp.Eq(phi_dot_dot,sp.simplify(analytical_phi1d.rhs.diff(t)))
rhs = analytical_solution.rhs.args[1]*phi_0
lhs = phi_a
extinction_equation = sp.Eq(lhs,rhs)
xeta_equation = sp.Eq(zeta,
sp.solve(extinction_equation,zeta)[0])
B_1_zeta_eq = sp.Eq(B_1, 2*zeta*omega0*A_44)
B_1_zeta_eq
### Simplified Ikeda
simplified_ikeda_equation = sp.Eq((B_F,
B_W,
B_E,
B_BK,
B_L,
),ikeda_simplified)
### Regression
regression_factor_equation = sp.Eq(B_e_hat,B_e_hat_0*B_e_factor) | /rolldecay_estimators-0.6.0-py3-none-any.whl/rolldecayestimators/equations.py | 0.592077 | 0.556098 | equations.py | pypi |
import sympy as sp
from collections import OrderedDict
# my custom class with description attribute
class Symbol(sp.Symbol):
def __new__(self, name, description='', unit=''):
obj = sp.Symbol.__new__(self, name)
obj.description = description
obj.unit = unit
return obj
class Coefficient(Symbol):
def __new__(self, name, description='coefficient', unit=''):
obj = super().__new__(self, name, description=description, unit=unit)
return obj
class BisSymbol(Symbol):
def __new__(self, name,parent_SI_symbol, description='', unit=''):
obj = super().__new__(self, name, description=description, unit=unit)
obj.parent_SI_symbol = parent_SI_symbol
return obj
class Bis(Symbol):
def __new__(self, name, denominator, description='', unit=''):
obj = super().__new__(self, name, description=description, unit=unit)
obj.denominator = denominator
bis_name = "%s''%s" % (name[0], name[1:])
obj.bis = BisSymbol(bis_name,parent_SI_symbol = obj, description=description, unit=unit)
obj.bis_eq = sp.Eq(lhs=obj.bis, rhs=obj / denominator)
return obj
def expand_bis(equation:sp.Eq):
"""
Remove all bis symbols from an expression by substituting with the corresponding bis equation
:param equation: sympy equation
:return: new equation WITHOUT bis symbols.
"""
assert isinstance(equation,sp.Eq)
symbols = equation.lhs.free_symbols | equation.rhs.free_symbols
subs = []
for symbol in symbols:
if isinstance(symbol,BisSymbol):
subs.append((symbol,symbol.parent_SI_symbol.bis_eq.rhs))
expanded_equation = equation.subs(subs)
return expanded_equation
def reduce_bis(equation:sp.Eq):
"""
Replace symbols with corresponding bis symbols from an expression by substituting with the corresponding bis equation
:param equation: sympy equation
:return: new equation WITH bis symbols.
"""
assert isinstance(equation,sp.Eq)
symbols = equation.lhs.free_symbols | equation.rhs.free_symbols
subs = []
for symbol in symbols:
if isinstance(symbol,Bis):
subs.append((symbol,sp.solve(symbol.bis_eq,symbol)[0]))
reduced = equation.subs(subs)
return reduced
def create_html_table(symbols:list):
html = """
<tr>
<th>Variable</th>
<th>Description</th>
<th>SI Unit</th>
</tr>
"""
names = [symbol.name for symbol in symbols if isinstance(symbol, sp.Basic)]
symbols_dict = {symbol.name:symbol for symbol in symbols if isinstance(symbol, sp.Basic)}
for name in sorted(names):
symbol = symbols_dict[name]
if isinstance(symbol, Symbol):
html_row = """
<tr>
<td>$%s$</td>
<td>%s</td>
<td>%s</td>
</tr>
""" % (sp.latex(symbol), symbol.description, symbol.unit)
html += html_row
html_table = """
<table>
%s
</table>
""" % html
return html_table | /rolldecay_estimators-0.6.0-py3-none-any.whl/rolldecayestimators/special_symbol.py | 0.765593 | 0.285646 | special_symbol.py | pypi |
import enum
import string
from typing import Optional, List
import kivalu
import requests
import requests_unixsocket
import yaml
KEY_DEFAULT = "lxd-stack"
URL_DEFAULT = "http+unix://%2Fvar%2Fsnap%2Flxd%2Fcommon%2Flxd%2Funix.socket"
class ResourceType(enum.Enum):
"""
Enumerates the various LXD resource types managed by the library.
"""
STORAGE_POOLS = "storage-pools", "/1.0/storage-pools"
VOLUMES = "volumes", "/1.0/storage-pools/${parent}/volumes/custom"
NETWORKS = "networks", "/1.0/networks"
PROFILES = "profiles", "/1.0/profiles"
INSTANCES = "instances", "/1.0/instances"
def name(self) -> str:
return self.value[0]
def path(self, config) -> str:
"""
:param config: the resource's configuration
:return: the corresponding path relative to the LXD API base URL
"""
return string.Template(self.value[1]).substitute(config)
class Client:
f"""
A simple wrapper around the LXD REST API to manage resources either directly or via "stacks".
This Client connects to the LXD API through the Unix socket (for now).
Apart from how asynchronous operations are handled, it's mainly a convenient, idempotent passthrough.
Therefore, the official documentation is where you'll find all the configuration details you'll need to create LXD resources:
* storage-pools and volumes: https://linuxcontainers.org/lxd/docs/master/api/#/storage and https://linuxcontainers.org/lxd/docs/master/storage
* networks: https://linuxcontainers.org/lxd/docs/master/api/#/networks and https://linuxcontainers.org/lxd/docs/master/networks
* profiles: https://linuxcontainers.org/lxd/docs/master/api/#/profiles and https://linuxcontainers.org/lxd/docs/master/profiles
* instances: https://linuxcontainers.org/lxd/docs/master/api/#/instances and https://linuxcontainers.org/lxd/docs/master/instances
A "stack" is very a convenient way to manage a group of resources linked together.
Heavily inspired by the LXD "preseed" format (see https://linuxcontainers.org/lxd/docs/master/preseed), the structure is almost identical, except:
* "storage_pools" has been renamed "storage-pools" to match the API
* the root "config" element is ignored (use a real preseed file if you want to configure LXD that way)
* instances and volumes are managed through new root elements, "instances" and "volumes"
A typical stack example can be found in tests/test_cli.py.
Check the various functions to see what you can do with stacks and resources.
:param url: URL of the LXD API (scheme is "http+unix", socket path is percent-encoded into the host field), defaults to "{KEY_DEFAULT}"
"""
def __init__(self, url: str = URL_DEFAULT):
self.url = url
self.session = requests_unixsocket.Session()
# this "hook" will be executed after each request (see http://docs.python-requests.org/en/master/user/advanced/#event-hooks)
def hook(response, **_):
response_json = response.json()
if not response.ok:
raise requests.HTTPError(response_json.get("error"))
# some lxd operations are asynchronous, we have to wait for them to finish before continuing
# see https://linuxcontainers.org/lxd/docs/master/rest-api/#background-operation
if response_json.get("type") == "async":
operation = self.session.get(self.url + response_json.get("operation") + "/wait").json().get("metadata")
if operation.get("status_code") != 200:
raise requests.HTTPError(operation.get("err"))
self.session.hooks["response"].append(hook)
def exists(self, config: dict, resource_type: ResourceType) -> bool:
"""
:param config: the resource's configuration
:param resource_type: the resource's type
:return: whether the resource exists or not
"""
resource_path = resource_type.path(config) + "/" + config.get("name")
print("checking existence", resource_path)
try:
self.session.get(self.url + resource_path)
return True
except requests.HTTPError:
return False
def create(self, config: dict, resource_type: ResourceType) -> None:
"""
Creates a resource if it doesn't exist.
The required configuration depends on the resource's type (see rollin_lxd.Client).
:param config: the resource's desired configuration
:param resource_type: the resource's type
"""
type_path = resource_type.path(config)
resource_path = type_path + "/" + config.get("name")
if not self.exists(config, resource_type):
print("creating", resource_path)
self.session.post(self.url + type_path, json=config)
def delete(self, config: dict, resource_type: ResourceType) -> None:
"""
Deletes a resource if it exists.
:param config: the resource's configuration
:param resource_type: the resource's type
"""
resource_path = resource_type.path(config) + "/" + config.get("name")
if self.exists(config, resource_type):
print(f"deleting", resource_path)
self.session.delete(self.url + resource_path)
def is_running(self, config: dict, resource_type: ResourceType = ResourceType.INSTANCES) -> bool:
"""
:param config: the resource's configuration
:param resource_type: the resource's type, defaults to INSTANCES
:return: whether the resource is running or not
"""
resource_path = resource_type.path(config) + "/" + config.get("name")
print("checking status", resource_path)
return self.session.get(self.url + resource_path).json().get("metadata").get("status") == "Running"
def start(self, config: dict, resource_type: ResourceType) -> None:
"""
Starts a resource if it's not running.
:param config: the resource's configuration
:param resource_type: the resource's type
"""
resource_path = resource_type.path(config) + "/" + config.get("name")
if not self.is_running(config, resource_type):
print("starting", resource_path)
self.session.put(self.url + resource_path + "/state", json={"action": "start"})
def stop(self, config: dict, resource_type: ResourceType) -> None:
"""
Stops a resource if it's running.
:param config: the resource's configuration
:param resource_type: the resource's type
"""
resource_path = resource_type.path(config) + "/" + config.get("name")
if self.exists(config, resource_type) and self.is_running(config, resource_type):
print("stopping", resource_path)
self.session.put(self.url + resource_path + "/state", json={"action": "stop"})
def create_stack(self, stack: dict) -> None:
"""
Creates the resources in the given stack if they don't exist.
The required configurations depend on the resource's type (see rollin_lxd.Client).
:param stack: the stack as a dictionary
"""
for resource_type in ResourceType:
for config in stack.get(resource_type.name()) or []:
self.create(config, resource_type)
def delete_stack(self, stack: dict) -> None:
"""
Deletes the resources in the given stack if they exist.
:param stack: the stack as a dictionary
"""
for resource_type in reversed(ResourceType):
for config in stack.get(resource_type.name()) or []:
self.delete(config, resource_type)
def start_stack(self, stack: dict) -> None:
"""
Starts the resources in the given stack if they're not running.
:param stack: the stack as a dictionary
"""
for resource_type in [ResourceType.INSTANCES]:
for config in stack.get(resource_type.name()) or []:
self.start(config, resource_type)
def stop_stack(self, stack: dict) -> None:
"""
Stops the resources in the given stack if they're running.
:param stack: the stack as a dictionary
"""
for resource_type in [ResourceType.INSTANCES]:
for config in stack.get(resource_type.name()) or []:
self.stop(config, resource_type)
def main(args: Optional[List[str]] = None) -> None:
argparser = kivalu.build_argument_parser(description="LXD dynamic management based on kivalu.")
argparser.add_argument("command", choices=["create", "delete", "start", "stop"], help="operation to execute on the stack")
argparser.add_argument("key", nargs="?", default=KEY_DEFAULT, help=f"the stack's key on the kivalu server, defaults to '{KEY_DEFAULT}'")
argparser.add_argument("--lxd-url", default=URL_DEFAULT, help=f'URL of the LXD API (scheme is "http+unix", socket path is percent-encoded into the host field), defaults to "{URL_DEFAULT}"', metavar="<url>")
args = argparser.parse_args(args)
value = kivalu.Client(**vars(args)).get(args.key)
if not value:
print(f"key '{args.key}' not found on server {args.url}")
exit(1)
stack = yaml.load(value, Loader=yaml.BaseLoader)
if not isinstance(stack, dict):
print(f"key '{args.key}' on server {args.url} is not a proper yaml or json dictionary")
exit(1)
# creates a Client and calls the function corresponding to the given command
getattr(Client(args.lxd_url), args.command + "_stack")(stack) | /rollin-lxd-0.10.0.tar.gz/rollin-lxd-0.10.0/rollin_lxd/__init__.py | 0.735452 | 0.228802 | __init__.py | pypi |
from openpyxl import Workbook, load_workbook
from openpyxl.worksheet.worksheet import Worksheet
from openpyxl.styles import *
import logging
logging.basicConfig(level=logging.DEBUG,
format='%(asctime)s - %(filename)s[line:%(lineno)d] - %(levelname)s: %(message)s') # logging.basicConfig函数对日志的输出格式及方式做相关配置
logger = logging.getLogger("excel_util")
class ExcelUtil(object):
def __init__(self, excel_path=None, excel_sheet=None):
if excel_path is None:
self.wb: Workbook = Workbook(write_only=False)
logger.info("默认创建一个空workbook。")
self.ws: Worksheet = self.wb.active
logger.info("默认worksheet={0}。".format(self.ws))
else:
self.wb: Workbook = load_workbook(filename=excel_path)
if excel_sheet is not None:
self.ws: Worksheet = self.wb[excel_sheet]
logger.info("加载{0}文件的{1}表单。".format(excel_path, excel_sheet))
else:
logger.info("加载{0}文件。".format(excel_path))
@property
def rows(self):
return self.ws.max_row
@property
def cols(self):
return self.ws.max_column
@property
def cell(self, cell_name):
self.cell = self.ws[cell_name]
return self.cell
@property
def cell(self, row, col):
self.cell = self.ws.cell(row, col)
return self.cell
def set_cell_value(self, content):
self.cell.value = content
def set_cell_value_by_cell_name(self, cell_name, content):
self.ws[cell_name] = content
def set_cell_value(self, row, col, content):
self.ws.cell(row, col).value = content
def get_cell_value_by_cell_name(self, cell_name):
return self.ws[cell_name].value
def get_cell_value(self, row, col):
return self.ws.cell(row, col).value
def change_active_sheet(self, index):
self.wb._active_sheet_index = index
def save(self, save_path):
self.wb.save(save_path)
def get_sheet_list(self) -> list:
return self.wb.get_sheet_names()
def get_sheet(self, sheet_name: str):
self.ws: Worksheet = self.wb.get_sheet_by_name(sheet_name)
if __name__ == '__main__':
excelOperator = ExcelUtil(excel_path="../crawler/Temp.xlsx", excel_sheet="Records")
logger.info(excelOperator.rows) | /rolling_in_the_deep-0.1.3-py3-none-any.whl/rolling_king/jason/openpyxl/excel_util.py | 0.474144 | 0.165357 | excel_util.py | pypi |
from typing import Any, Dict, Generator, Iterable, List, Optional, Union # noqa: F401
import pydot # noqa: F401
from collections import OrderedDict
from pathlib import Path
import logging
import os
import re
import shutil
from IPython.display import HTML, Image
import pandas as pd
Filepath = Union[str, Path]
LOG_LEVEL = os.environ.get('LOG_LEVEL', 'WARNING').upper()
logging.basicConfig(level=LOG_LEVEL)
LOGGER = logging.getLogger(__name__)
# ------------------------------------------------------------------------------
'''
Contains basic functions for more complex ETL functions and classes.
'''
# COLOR-SCHEME------------------------------------------------------------------
COLOR_SCHEME = dict(
background='#242424',
node='#343434',
node_font='#B6ECF3',
node_value='#343434',
node_value_font='#DE958E',
edge='#B6ECF3',
edge_value='#DE958E',
node_library_font='#DE958E',
node_subpackage_font='#A0D17B',
node_module_font='#B6ECF3',
edge_library='#DE958E',
edge_subpackage='#A0D17B',
edge_module='#B6ECF3',
) # type: Dict[str, str]
COLOR_SCALE = [
'#B6ECF3',
'#DE958E',
'#EBB483',
'#A0D17B',
'#93B6E6',
'#AC92DE',
'#E9EABE',
'#7EC4CF',
'#F77E70',
'#EB9E58',
] # type: List[str]
# PREDICATE-FUNCTIONS-----------------------------------------------------------
def is_iterable(item):
# type: (Any) -> bool
'''
Determines if given item is iterable.
Args:
item (object): Object to be tested.
Returns:
bool: Whether given item is iterable.
'''
if is_listlike(item) or is_dictlike(item):
return True
return False
def is_dictlike(item):
# type: (Any) -> bool
'''
Determines if given item is dict-like.
Args:
item (object): Object to be tested.
Returns:
bool: Whether given item is dict-like.
'''
for type_ in [dict, OrderedDict]:
if isinstance(item, type_):
if item.__class__.__name__ == 'Counter':
return False
return True
return False
def is_listlike(item):
# type: (Any) -> bool
'''
Determines if given item is list-like.
Args:
item (object): Object to be tested.
Returns:
bool: Whether given item is list-like.
'''
for type_ in [list, tuple, set]:
if isinstance(item, type_):
return True
return False
# CORE-FUNCTIONS----------------------------------------------------------------
def flatten(item, separator='/', embed_types=True):
# type: (Iterable, str, bool) -> Dict[str, Any]
'''
Flattens a iterable object into a flat dictionary.
Args:
item (object): Iterable object.
separator (str, optional): Field separator in keys. Default: '/'.
Returns:
dict: Dictionary representation of given object.
'''
output = {} # type: Dict[str, Any]
def recurse(item, cursor):
# type (Iterable, Any) -> None
if is_listlike(item):
if embed_types:
name = item.__class__.__name__
item = [(f'<{name}_{i}>', val) for i, val in enumerate(item)]
item = dict(item)
else:
item = dict(enumerate(item))
if is_dictlike(item):
for key, val in item.items():
new_key = f'{cursor}{separator}{str(key)}'
if is_iterable(val) and len(val) > 0:
recurse(val, new_key)
else:
final_key = re.sub('^' + separator, '', new_key)
output[final_key] = val
recurse(item, '')
return output
def nest(flat_dict, separator='/'):
# type: (Dict[str, Any], str) -> Dict[str, Any]
'''
Converts a flat dictionary into a nested dictionary by splitting keys by a
given separator.
Args:
flat_dict (dict): Flat dictionary.
separator (str, optional): Field separator within given dictionary's
keys. Default: '/'.
Returns:
dict: Nested dictionary.
'''
output = {} # type: Dict[str, Any]
for keys, val in flat_dict.items():
split_keys = list(filter(
lambda x: x != '', keys.split(separator)
))
cursor = output
last = split_keys.pop()
for key in split_keys:
if key not in cursor:
cursor[key] = {}
if not isinstance(cursor[key], dict):
msg = f"Duplicate key conflict. Key: '{key}'."
raise KeyError(msg)
cursor = cursor[key]
cursor[last] = val
return output
def unembed(item):
# type: (Any) -> Any
'''
Convert embeded types in dictionary keys into python types.
Args:
item (object): Dictionary with embedded types.
Returns:
object: Converted object.
'''
lut = {'list': list, 'tuple': tuple, 'set': set}
embed_re = re.compile(r'^<([a-z]+)_(\d+)>$')
if is_dictlike(item) and item != {}:
output = {} # type: Any
keys = list(item.keys())
match = embed_re.match(keys[0])
if match:
indices = [embed_re.match(key).group(2) for key in keys] # type: ignore
indices = map(int, indices) # type: ignore
output = []
for i, key in sorted(zip(indices, keys)):
next_item = item[key]
if is_dictlike(next_item):
next_item = unembed(next_item)
output.append(next_item)
output = lut[match.group(1)](output)
return output
else:
for key, val in item.items():
output[key] = unembed(val)
return output
return item
# FILE-FUNCTIONS----------------------------------------------------------------
def list_all_files(
directory, # type: Filepath
include_regex=None, # type: Optional[str]
exclude_regex=None # type: Optional[str]
):
# type: (...) -> Generator[Path, None, None]
'''
Recusively list all files within a given directory.
Args:
directory (str or Path): Directory to walk.
include_regex (str, optional): Include filenames that match this regex.
Default: None.
exclude_regex (str, optional): Exclude filenames that match this regex.
Default: None.
Raises:
FileNotFoundError: If argument is not a directory or does not exist.
Yields:
Path: File.
'''
directory = Path(directory)
if not directory.is_dir():
msg = f'{directory} is not a directory or does not exist.'
raise FileNotFoundError(msg)
include_re = re.compile(include_regex or '') # type: Any
exclude_re = re.compile(exclude_regex or '') # type: Any
for root, _, files in os.walk(directory):
for file_ in files:
filepath = Path(root, file_)
output = True
temp = filepath.absolute().as_posix()
if include_regex is not None and not include_re.search(temp):
output = False
if exclude_regex is not None and exclude_re.search(temp):
output = False
if output:
yield Path(root, file_)
def directory_to_dataframe(directory, include_regex='', exclude_regex=r'\.DS_Store'):
# type: (Filepath, str, str) -> pd.DataFrame
r'''
Recursively list files with in a given directory as rows in a pd.DataFrame.
Args:
directory (str or Path): Directory to walk.
include_regex (str, optional): Include filenames that match this regex.
Default: None.
exclude_regex (str, optional): Exclude filenames that match this regex.
Default: '\.DS_Store'.
Returns:
pd.DataFrame: pd.DataFrame with one file per row.
'''
files = list_all_files(
directory,
include_regex=include_regex,
exclude_regex=exclude_regex
) # type: Any
files = sorted(list(files))
data = pd.DataFrame()
data['filepath'] = files
data['filename'] = data.filepath.apply(lambda x: x.name)
data['extension'] = data.filepath \
.apply(lambda x: Path(x).suffix.lstrip('.'))
data.filepath = data.filepath.apply(lambda x: x.absolute().as_posix())
return data
def get_parent_fields(key, separator='/'):
# type: (str, str) -> List[str]
'''
Get all the parent fields of a given key, split by given separator.
Args:
key (str): Key.
separator (str, optional): String that splits key into fields.
Default: '/'.
Returns:
list(str): List of absolute parent fields.
'''
fields = key.split(separator)
output = [] # type: List[str]
for i in range(len(fields) - 1):
output.append(separator.join(fields[:i + 1]))
return output
def filter_text(
text, # type: str
include_regex=None, # type: Optional[str]
exclude_regex=None, # type: Optional[str]
replace_regex=None, # type: Optional[str]
replace_value=None, # type: Optional[str]
):
# type: (...) -> str
'''
Filter given text by applying regular expressions to each line.
Args:
text (str): Newline separated lines.
include_regex (str, optional): Keep lines that match given regex.
Default: None.
exclude_regex (str, optional): Remove lines that match given regex.
Default: None.
replace_regex (str, optional): Substitutes regex matches in lines with
replace_value. Default: None.
replace_value (str, optional): Regex substitution value. Default: ''.
Raises:
AssertionError: If source is not a file.
Returns:
str: Filtered text.
'''
lines = text.split('\n')
if include_regex is not None:
lines = list(filter(lambda x: re.search(include_regex, x), lines)) # type: ignore
if exclude_regex is not None:
lines = list(filter(lambda x: not re.search(exclude_regex, x), lines)) # type: ignore
if replace_regex is not None:
rep_val = replace_value or ''
lines = [re.sub(replace_regex, rep_val, x) for x in lines]
output = '\n'.join(lines)
return output
def read_text(filepath):
# type: (Filepath) -> str
'''
Convenience function for reading text from given file.
Args:
filepath (str or Path): File to be read.
Raises:
AssertionError: If source is not a file.
Returns:
str: text.
'''
assert Path(filepath).is_file()
with open(filepath) as f:
return f.read()
def write_text(text, filepath):
# type: (str, Filepath) -> None
'''
Convenience function for writing text to given file.
Creates directories as needed.
Args:
text (str): Text to be written.
filepath (str or Path): File to be written.
'''
os.makedirs(Path(filepath).parent, exist_ok=True)
with open(filepath, 'w') as f:
f.write(text)
def copy_file(source, target):
# type: (Filepath, Filepath) -> None
'''
Copy a source file to a target file. Creating directories as needed.
Args:
source (str or Path): Source filepath.
target (str or Path): Target filepath.
Raises:
AssertionError: If source is not a file.
'''
assert Path(source).is_file()
os.makedirs(Path(target).parent, exist_ok=True)
shutil.copy2(source, target)
def move_file(source, target):
# type: (Filepath, Filepath) -> None
'''
Moves a source file to a target file. Creating directories as needed.
Args:
source (str or Path): Source filepath.
target (str or Path): Target filepath.
Raises:
AssertionError: If source is not a file.
'''
src = Path(source).as_posix()
assert Path(src).is_file()
os.makedirs(Path(target).parent, exist_ok=True)
shutil.move(src, target)
# EXPORT-FUNCTIONS--------------------------------------------------------------
def dot_to_html(dot, layout='dot', as_png=False):
# type: (pydot.Dot, str, bool) -> Union[HTML, Image]
'''
Converts a given pydot graph into a IPython.display.HTML object.
Used in jupyter lab inline display of graph data.
Args:
dot (pydot.Dot): Pydot Graph instance.
layout (str, optional): Graph layout style.
Options include: circo, dot, fdp, neato, sfdp, twopi.
Default: dot.
as_png (bool, optional): Display graph as a PNG image instead of SVG.
Useful for display on Github. Default: False.
Raises:
ValueError: If invalid layout given.
Returns:
IPython.display.HTML: HTML instance.
'''
layouts = ['circo', 'dot', 'fdp', 'neato', 'sfdp', 'twopi']
if layout not in layouts:
msg = f'Invalid layout value. {layout} not in {layouts}.'
raise ValueError(msg)
if as_png:
return Image(data=dot.create_png())
svg = dot.create_svg(prog=layout)
html = f'<object type="image/svg+xml" data="data:image/svg+xml;{svg}"></object>' # type: Any
html = HTML(html)
html.data = re.sub(r'\\n|\\', '', html.data)
html.data = re.sub('</svg>.*', '</svg>', html.data)
return html
def write_dot_graph(
dot,
fullpath,
layout='dot',
):
# type: (pydot.Dot, Union[str, Path], str) -> None
'''
Writes a pydot.Dot object to a given filepath.
Formats supported: svg, dot, png.
Args:
dot (pydot.Dot): Pydot Dot instance.
fulllpath (str or Path): File to be written to.
layout (str, optional): Graph layout style.
Options include: circo, dot, fdp, neato, sfdp, twopi. Default: dot.
Raises:
ValueError: If invalid file extension given.
'''
if isinstance(fullpath, Path):
fullpath = Path(fullpath).absolute().as_posix()
_, ext = os.path.splitext(fullpath)
ext = re.sub(r'^\.', '', ext)
if re.search('^svg$', ext, re.I):
dot.write_svg(fullpath, prog=layout)
elif re.search('^dot$', ext, re.I):
dot.write_dot(fullpath, prog=layout)
elif re.search('^png$', ext, re.I):
dot.write_png(fullpath, prog=layout)
else:
msg = f'Invalid extension found: {ext}. '
msg += 'Valid extensions include: svg, dot, png.'
raise ValueError(msg)
# MISC-FUNCTIONS----------------------------------------------------------------
def replace_and_format(regex, replace, string, flags=0):
# type: (str, str, str, Any) -> str
r'''
Perform a regex substitution on a given string and format any named group
found in the result with groupdict data from the pattern. Group beggining
with 'i' will be converted to integers. Groups beggining with 'f' will be
converted to floats.
----------------------------------------------------------------------------
Named group anatomy:
====================
* (?P<NAME>PATTERN)
* NAME becomes a key and whatever matches PATTERN becomes its value.
>>> re.search('(?P<i>\d+)', 'foobar123').groupdict()
{'i': '123'}
----------------------------------------------------------------------------
Examples:
=========
Special groups:
* (?P<i>\d) - string matched by '\d' will be converted to an integer
* (?P<f>\d) - string matched by '\d' will be converted to an float
* (?P<i_foo>\d) - string matched by '\d' will be converted to an integer
* (?P<f_bar>\d) - string matched by '\d' will be converted to an float
Named groups (long):
>>> proj = '(?P<p>[a-z0-9]+)'
>>> spec = '(?P<s>[a-z0-9]+)'
>>> desc = '(?P<d>[a-z0-9\-]+)'
>>> ver = '(?P<iv>\d+)\.'
>>> frame = '(?P<i_f>\d+)'
>>> regex = f'{proj}\.{spec}\.{desc}\.v{ver}\.{frame}.*'
>>> replace = 'p-{p}_s-{s}_d-{d}_v{iv:03d}_f{i_f:04d}.jpeg'
>>> string = 'proj.spec.desc.v1.25.png'
>>> replace_and_format(regex, replace, string, flags=re.IGNORECASE)
p-proj_s-spec_d-desc_v001_f0025.jpeg
Named groups (short):
>>> replace_and_format(
'(?P<p>[a-z0-9]+)\.(?P<s>[a-z0-9]+)\.(?P<d>[a-z0-9\-]+)\.v(?P<iv>\d+)\.(?P<i_f>\d+).*',
'p-{p}_s-{s}_d-{d}_v{iv:03d}_f{i_f:04d}.jpeg',
'proj.spec.desc.v1.25.png',
)
p-proj_s-spec_d-desc_v001_f0025.jpeg
No groups:
>>> replace_and_format('foo', 'bar', 'foobar')
barbar
----------------------------------------------------------------------------
Args:
regex (str): Regex pattern to search string with.
replace (str): Replacement string which may contain formart variables
ie '{variable}'.
string (str): String to be converted.
flags (object, optional): re.sub flags. Default: 0.
Returns:
str: Converted string.
'''
match = re.search(regex, string, flags=flags)
grp = {}
if match:
grp = match.groupdict()
for key, val in grp.items():
if key.startswith('f'):
grp[key] = float(val)
elif key.startswith('i'):
grp[key] = int(val)
output = re.sub(regex, replace, string, flags=flags)
# .format won't evaluate math expressions so do this
if grp != {}:
output = eval(f"f'{output}'", None, grp)
return output | /rolling-pin-0.9.3.tar.gz/rolling-pin-0.9.3/rolling_pin/tools.py | 0.772831 | 0.173218 | tools.py | pypi |
from typing import Any, Dict, List, Union # noqa: F401
from IPython.display import HTML, Image # noqa: F401
from copy import deepcopy
from itertools import chain
from pathlib import Path
import re
from lunchbox.enforce import Enforce
from pandas import DataFrame
import yaml
from rolling_pin.blob_etl import BlobETL
from rolling_pin.conform_config import ConformConfig
import rolling_pin.tools as rpt
Rules = List[Dict[str, str]]
# ------------------------------------------------------------------------------
CONFORM_COLOR_SCHEME = deepcopy(rpt.COLOR_SCHEME)
CONFORM_COLOR_SCHEME.update({
'node_font': '#DE958E',
'node_value_font': '#B6ECF3',
'edge': '#DE958E',
'edge_value': '#B6ECF3',
'node_library_font': '#B6ECF3',
'node_module_font': '#DE958E',
'edge_library': '#B6ECF3',
'edge_module': '#DE958E'
})
class ConformETL:
'''
ConformETL creates a DataFrame from a given directory of source files.
Then it generates target paths given a set of rules.
Finally, the conform method is called and the source files are copied to
their target filepaths.
'''
@staticmethod
def _get_data(
source_rules=[], rename_rules=[], group_rules=[], line_rules=[]
):
# type: (Rules, Rules, Rules, Rules) -> DataFrame
'''
Generates DataFrame from given source_rules and then generates target
paths for them given other rules.
Args:
source_rules (Rules): A list of rules for parsing directories.
Default: [].
rename_rules (Rules): A list of rules for renaming source filepath
to target filepaths. Default: [].
group_rules (Rules): A list of rules for grouping files.
Default: [].
line_rules (Rules): A list of rules for peforming line copies on
files belonging to a given group. Default: [].
Returns:
DataFrame: Conform DataFrame.
'''
# source
source = [] # type: List[Any]
for rule in source_rules:
files = rpt.list_all_files(
rule['path'],
include_regex=rule.get('include', None),
exclude_regex=rule.get('exclude', None),
)
source.extend(files)
source = sorted([x.as_posix() for x in source])
data = DataFrame()
data['source'] = source
data['target'] = source
# rename
for rule in rename_rules:
data.target = data.target.apply(
lambda x: rpt.replace_and_format(
rule['regex'], rule['replace'], x
)
)
# group
data['groups'] = data.source.apply(lambda x: [])
for rule in group_rules:
mask = data.source \
.apply(lambda x: re.search(rule['regex'], x)) \
.astype(bool)
data.loc[mask, 'groups'] = data.groups \
.apply(lambda x: x + [rule['name']])
mask = data.groups.apply(lambda x: x == [])
data.loc[mask, 'groups'] = data.loc[mask, 'groups'] \
.apply(lambda x: ['base'])
# line
groups = set([x['group'] for x in line_rules])
data['line_rule'] = data.groups \
.apply(lambda x: len(set(x).intersection(groups)) > 0)
return data
@classmethod
def from_yaml(cls, filepath):
# type: (Union[str, Path]) -> ConformETL
'''
Construct ConformETL instance from given yaml file.
Args:
filepath (str or Path): YAML file.
Raises:
EnforceError: If file does not end in yml or yaml.
Returns:
ConformETL: ConformETL instance.
'''
filepath = Path(filepath).as_posix()
ext = Path(filepath).suffix[1:].lower()
msg = f'{filepath} does not end in yml or yaml.'
Enforce(ext, 'in', ['yml', 'yaml'], message=msg)
# ----------------------------------------------------------------------
with open(filepath) as f:
config = yaml.safe_load(f)
return cls(**config)
def __init__(
self, source_rules=[], rename_rules=[], group_rules=[], line_rules=[]
):
# type: (Rules, Rules, Rules, Rules) -> None
'''
Generates DataFrame from given source_rules and then generates target
paths for them given other rules.
Args:
source_rules (Rules): A list of rules for parsing directories.
Default: [].
rename_rules (Rules): A list of rules for renaming source filepath
to target filepaths. Default: [].
group_rules (Rules): A list of rules for grouping files.
Default: [].
line_rules (Rules): A list of rules for peforming line copies on
files belonging to a given group. Default: [].
Raises:
DataError: If configuration is invalid.
'''
config = dict(
source_rules=source_rules,
rename_rules=rename_rules,
group_rules=group_rules,
line_rules=line_rules,
)
cfg = ConformConfig(config)
cfg.validate()
config = cfg.to_native()
self._data = self._get_data(
source_rules=source_rules,
rename_rules=rename_rules,
group_rules=group_rules,
line_rules=line_rules,
) # type: DataFrame
self._line_rules = line_rules # type: Rules
def __repr__(self):
# type: () -> str
'''
String representation of conform DataFrame.
Returns:
str: Table optimized for output to shell.
'''
data = self._data.copy()
data.line_rule = data.line_rule.apply(lambda x: 'X' if x else '')
data.rename(lambda x: x.upper(), axis=1, inplace=True)
output = data \
.to_string(index=False, max_colwidth=150, col_space=[50, 50, 20, 10])
return output
@property
def groups(self):
# type: () -> List[str]
'''
list[str]: List of groups found with self._data.
'''
output = self._data.groups.tolist()
output = sorted(list(set(chain(*output))))
output.remove('base')
output.insert(0, 'base')
return output
def to_dataframe(self):
# type: () -> DataFrame
'''
Returns:
DataFrame: Copy of internal data.
'''
return self._data.copy()
def to_blob(self):
# type: () -> BlobETL
'''
Converts self into a BlobETL object with target column as keys and
source columns as values.
Returns:
BlobETL: BlobETL of target and source filepaths.
'''
data = self._data
keys = data.target.tolist()
vals = data.source.tolist()
output = dict(zip(keys, vals))
return BlobETL(output)
def to_html(
self, orient='lr', color_scheme=CONFORM_COLOR_SCHEME, as_png=False
):
# type: (str, Dict[str, str], bool) -> Union[Image, HTML]
'''
For use in inline rendering of graph data in Jupyter Lab.
Graph from target to source filepath. Target is in red, source is in
cyan.
Args:
orient (str, optional): Graph layout orientation. Default: lr.
Options include:
* tb - top to bottom
* bt - bottom to top
* lr - left to right
* rl - right to left
color_scheme: (dict, optional): Color scheme to be applied to graph.
Default: rolling_pin.conform_etl.CONFORM_COLOR_SCHEME
as_png (bool, optional): Display graph as a PNG image instead of
SVG. Useful for display on Github. Default: False.
Returns:
IPython.display.HTML: HTML object for inline display.
'''
return self.to_blob() \
.to_html(orient=orient, color_scheme=color_scheme, as_png=as_png)
def conform(self, groups='all'):
# type: (Union[str, List[str]]) -> None
'''
Copies source files to target filepaths.
Args:
groups (str or list[str]): Groups of files which are to be conformed.
'all' means all groups. Default: 'all'.
'''
if isinstance(groups, str):
groups = [groups]
if groups == ['all']:
groups = self.groups
data = self.to_dataframe()
# copy files
grps = set(groups)
mask = data.groups \
.apply(lambda x: set(x).intersection(grps)) \
.apply(lambda x: len(x) > 0)
data = data[mask]
data.apply(lambda x: rpt.copy_file(x.source, x.target), axis=1)
# copy lines
data['text'] = data.source.apply(rpt.read_text)
rules = list(filter(lambda x: x['group'] in groups, self._line_rules))
for rule in rules:
mask = data.groups.apply(lambda x: rule['group'] in x)
data.loc[mask, 'text'] = data.loc[mask, 'text'].apply(
lambda x: rpt.filter_text(
x,
include_regex=rule.get('include', None),
exclude_regex=rule.get('exclude', None),
replace_regex=rule.get('regex', None),
replace_value=rule.get('replace', None),
)
)
data.apply(lambda x: rpt.write_text(x.text, x.target), axis=1) | /rolling-pin-0.9.3.tar.gz/rolling-pin-0.9.3/rolling_pin/conform_etl.py | 0.908252 | 0.337013 | conform_etl.py | pypi |
from typing import Dict, List # noqa: F401
from pathlib import Path
from schematics import Model
from schematics.exceptions import ValidationError
from schematics.types import ListType, ModelType, StringType
Rules = List[Dict[str, str]]
# ------------------------------------------------------------------------------
def is_dir(dirpath):
# type: (str) -> None
'''
Validates whether a given dirpath exists.
Args:
dirpath (str): Directory path.
Raises:
ValidationError: If dirpath is not a directory or does not exist.
'''
if not Path(dirpath).is_dir():
msg = f'{dirpath} is not a directory or does not exist.'
raise ValidationError(msg)
class ConformConfig(Model):
'''
A class for validating configurations supplied to ConformETL.
Attributes:
source_rules (Rules): A list of rules for parsing directories.
Default: [].
rename_rules (Rules): A list of rules for renaming source filepath
to target filepaths. Default: [].
group_rules (Rules): A list of rules for grouping files.
Default: [].
line_rules (Rules): A list of rules for peforming line copies and
substitutions on files belonging to a given group. Default: [].
'''
class SourceRule(Model):
path = StringType(required=True, validators=[is_dir]) # type: StringType
include = StringType(required=False, serialize_when_none=False) # type: StringType
exclude = StringType(required=False, serialize_when_none=False) # type: StringType
class RenameRule(Model):
regex = StringType(required=True) # type: StringType
replace = StringType(required=True) # type: StringType
class GroupRule(Model):
name = StringType(required=True) # type: StringType
regex = StringType(required=True) # type: StringType
class LineRule(Model):
group = StringType(required=True) # type: StringType
include = StringType(required=False, serialize_when_none=False) # type: StringType
exclude = StringType(required=False, serialize_when_none=False) # type: StringType
regex = StringType(required=False) # type: StringType
replace = StringType(required=False) # type: StringType
source_rules = ListType(ModelType(SourceRule), required=True) # type: ListType
rename_rules = ListType(ModelType(RenameRule), required=False) # type: ListType
group_rules = ListType(ModelType(GroupRule), required=False) # type: ListType
line_rules = ListType(ModelType(LineRule), required=False) # type: ListType | /rolling-pin-0.9.3.tar.gz/rolling-pin-0.9.3/rolling_pin/conform_config.py | 0.883851 | 0.295128 | conform_config.py | pypi |
from typing import Any, Dict, List, Union # noqa: F401
import json
import os
import re
from pathlib import Path
import cufflinks as cf
import numpy as np
import pandas as pd
from pandas import DataFrame
import radon.complexity
from radon.cli import Config
from radon.cli import CCHarvester, HCHarvester, MIHarvester, RawHarvester
from rolling_pin.blob_etl import BlobETL
import rolling_pin.tools as rpt
# ------------------------------------------------------------------------------
'''
Contain the RadonETL class, which is used for generating a radon report on the
code wthin a given directory.
'''
class RadonETL():
'''
Conforms all four radon reports (raw metrics, Halstead, maintainability and
cyclomatic complexity) into a single DataFrame that can then be plotted.
'''
def __init__(self, fullpath):
# type: (Union[str, Path]) -> None
'''
Constructs a RadonETL instance.
Args:
fullpath (str or Path): Python file or directory of python files.
'''
self._report = RadonETL._get_radon_report(fullpath)
# --------------------------------------------------------------------------
@property
def report(self):
# type: () -> Dict
'''
dict: Dictionary of all radon metrics.
'''
return self._report
@property
def data(self):
# type: () -> DataFrame
'''
DataFrame: DataFrame of all radon metrics.
'''
return self._get_radon_data()
@property
def raw_metrics(self):
# type: () -> DataFrame
'''
DataFrame: DataFrame of radon raw metrics.
'''
return self._get_raw_metrics_dataframe(self._report)
@property
def maintainability_index(self):
# type: () -> DataFrame
'''
DataFrame: DataFrame of radon maintainability index metrics.
'''
return self._get_maintainability_index_dataframe(self._report)
@property
def cyclomatic_complexity_metrics(self):
# type: () -> DataFrame
'''
DataFrame: DataFrame of radon cyclomatic complexity metrics.
'''
return self._get_cyclomatic_complexity_dataframe(self._report)
@property
def halstead_metrics(self):
# type: () -> DataFrame
'''
DataFrame: DataFrame of radon Halstead metrics.
'''
return self._get_halstead_dataframe(self._report)
# --------------------------------------------------------------------------
def _get_radon_data(self):
# type: () -> DataFrame
'''
Constructs a DataFrame representing all the radon reports generated for
a given python file or directory containing python files.
Returns:
DataFrame: Radon report DataFrame.
'''
hal = self.halstead_metrics
cc = self.cyclomatic_complexity_metrics
raw = self.raw_metrics
mi = self.maintainability_index
data = hal.merge(cc, how='outer', on=['fullpath', 'name'])
data['object_type'] = data.object_type_x
mask = data.object_type_x.apply(pd.isnull)
mask = data[mask].index
data.loc[mask, 'object_type'] = data.loc[mask, 'object_type_y']
del data['object_type_x']
del data['object_type_y']
module = raw.merge(mi, on='fullpath')
cols = set(module.columns.tolist()) # type: Any
cols = cols.difference(data.columns.tolist())
cols = list(cols)
for col in cols:
data[col] = np.nan
mask = data.object_type == 'module'
for i, row in data[mask].iterrows():
for col in cols:
val = module[module.fullpath == row.fullpath][col].item()
data.loc[i, col] = val
cols = [
'fullpath', 'name', 'class_name', 'object_type', 'blank', 'bugs',
'calculated_length', 'code', 'column_offset', 'comment',
'cyclomatic_complexity', 'cyclomatic_rank', 'difficulty', 'effort',
'h1', 'h2', 'length', 'logical_code', 'maintainability_index',
'maintainability_rank', 'multiline_comment', 'n1', 'n2',
'single_comment', 'source_code', 'start_line', 'stop_line', 'time',
'vocabulary', 'volume',
]
data = data[cols]
return data
# --------------------------------------------------------------------------
@staticmethod
def _get_radon_report(fullpath):
# type: (Union[str, Path]) -> Dict[str, Any]
'''
Gets all 4 report from radon and aggregates them into a single blob
object.
Args:
fullpath (str or Path): Python file or directory of python files.
Returns:
dict: Radon report blob.
'''
fullpath_ = [Path(fullpath).absolute().as_posix()] # type: List[str]
output = [] # type: Any
config = Config(
min='A',
max='F',
exclude=None,
ignore=None,
show_complexity=False,
average=False,
total_average=False,
order=getattr(
radon.complexity, 'SCORE', getattr(radon.complexity, 'SCORE')
),
no_assert=False,
show_closures=False,
)
output.append(CCHarvester(fullpath_, config).as_json())
config = Config(
exclude=None,
ignore=None,
summary=False,
)
output.append(RawHarvester(fullpath_, config).as_json())
config = Config(
min='A',
max='C',
exclude=None,
ignore=None,
multi=True,
show=False,
sort=False,
)
output.append(MIHarvester(fullpath_, config).as_json())
config = Config(
exclude=None,
ignore=None,
by_function=False,
)
output.append(HCHarvester(fullpath_, config).as_json())
output = list(map(json.loads, output))
keys = [
'cyclomatic_complexity', 'raw_metrics', 'maintainability_index',
'halstead_metrics',
]
output = dict(zip(keys, output))
return output
@staticmethod
def _get_raw_metrics_dataframe(report):
# type: (Dict) -> DataFrame
'''
Converts radon raw metrics report into a pandas DataFrame.
Args:
report (dict): Radon report blob.
Returns:
DataFrame: Raw metrics DataFrame.
'''
raw = report['raw_metrics']
fullpaths = list(raw.keys())
path_lut = {k: f'<list_{i}>' for i, k in enumerate(fullpaths)}
fullpath_fields = {x: {'fullpath': x} for x in fullpaths}
# loc = Lines of Code (total lines) - sloc + blanks + multi + single_comments
# lloc = Logical Lines of Code
# comments = Comments lines
# multi = Multi-line strings (assumed to be docstrings)
# blank = Blank lines (or whitespace-only lines)
# single_comments = Single-line comments or docstrings
name_lut = dict(
blank='blank',
comments='comment',
lloc='logical_code',
loc='code',
multi='multiline_comment',
single_comments='single_comment',
sloc='source_code',
fullpath='fullpath',
)
data = BlobETL(raw, '#')\
.update(fullpath_fields) \
.set_field(0, lambda x: path_lut[x])\
.set_field(1, lambda x: name_lut[x])\
.to_dict() # type: Union[Dict, DataFrame]
data = DataFrame(data)
data.sort_values('fullpath', inplace=True)
data.reset_index(drop=True, inplace=True)
cols = [
'fullpath', 'blank', 'code', 'comment', 'logical_code',
'multiline_comment', 'single_comment', 'source_code',
]
data = data[cols]
return data
@staticmethod
def _get_maintainability_index_dataframe(report):
# type: (Dict) -> DataFrame
'''
Converts radon maintainability index report into a pandas DataFrame.
Args:
report (dict): Radon report blob.
Returns:
DataFrame: Maintainability DataFrame.
'''
mi = report['maintainability_index']
fullpaths = list(mi.keys())
path_lut = {k: f'<list_{i}>' for i, k in enumerate(fullpaths)}
fullpath_fields = {x: {'fullpath': x} for x in fullpaths}
name_lut = dict(
mi='maintainability_index',
rank='maintainability_rank',
fullpath='fullpath',
)
data = None # type: Any
data = BlobETL(mi, '#')\
.update(fullpath_fields) \
.set_field(0, lambda x: path_lut[x])\
.set_field(1, lambda x: name_lut[x])\
.to_dict()
data = DataFrame(data)
data.sort_values('fullpath', inplace=True)
data.reset_index(drop=True, inplace=True)
cols = ['fullpath', 'maintainability_index', 'maintainability_rank']
data = data[cols]
# convert rank to integer
rank_lut = {k: i for i, k in enumerate('ABCDEF')}
data['maintainability_rank'] = data['maintainability_rank']\
.apply(lambda x: rank_lut[x])
return data
@staticmethod
def _get_cyclomatic_complexity_dataframe(report):
# type: (Dict) -> DataFrame
'''
Converts radon cyclomatic complexity report into a pandas DataFrame.
Args:
report (dict): Radon report blob.
Returns:
DataFrame: Cyclomatic complexity DataFrame.
'''
filters = [
[4, 6, 'method_closure',
'^[^#]+#<list_[0-9]+>#methods#<list_[0-9]+>#closures#<list_[0-9]+>#[^#]+$'],
[3, 4, 'closure', '^[^#]+#<list_[0-9]+>#closures#<list_[0-9]+>#[^#]+$'],
[3, 4, 'method', '^[^#]+#<list_[0-9]+>#methods#<list_[0-9]+>#[^#]+$'],
[2, 2, None, '^[^#]+#<list_[0-9]+>#[^#]+$'],
] # type: Any
cc = report['cyclomatic_complexity']
rows = []
for i, j, type_, regex in filters:
temp = BlobETL(cc, '#').query(regex) # type: DataFrame
if len(temp.to_flat_dict().keys()) > 0:
temp = temp.to_dataframe(i)
item = temp\
.apply(lambda x: dict(zip(x[j], x['value'])), axis=1)\
.tolist()
item = DataFrame(item)
item['fullpath'] = temp[0]
if type_ is not None:
item.type = type_
rows.append(item)
data = pd.concat(rows, ignore_index=True, sort=False)
cols = [
'fullpath', 'name', 'classname', 'type', 'complexity', 'rank',
'lineno', 'endline', 'col_offset'
]
data = data[cols]
lut = {
'fullpath': 'fullpath',
'name': 'name',
'classname': 'class_name',
'type': 'object_type',
'complexity': 'cyclomatic_complexity',
'rank': 'cyclomatic_rank',
'lineno': 'start_line',
'endline': 'stop_line',
'col_offset': 'column_offset',
}
data.drop_duplicates(inplace=True)
data.rename(mapper=lambda x: lut[x], axis=1, inplace=True)
data.reset_index(drop=True, inplace=True)
# convert rank to integer
rank_lut = {k: i for i, k in enumerate('ABCDEF')}
data['cyclomatic_rank'] = data['cyclomatic_rank']\
.apply(lambda x: rank_lut[x])
return data
@staticmethod
def _get_halstead_dataframe(report):
# type: (Dict) -> DataFrame
'''
Converts radon Halstead report into a pandas DataFrame.
Args:
report (dict): Radon report blob.
Returns:
DataFrame: Halstead DataFrame.
'''
hal = report['halstead_metrics']
keys = [
'h1', 'h2', 'n1', 'n2', 'vocabulary', 'length', 'calculated_length',
'volume', 'difficulty', 'effort', 'time', 'bugs',
]
data = BlobETL(hal, '#').query('function|closure').to_dataframe(3)
data['fullpath'] = data[0]
data['object_type'] = data[1].apply(lambda x: re.sub('s$', '', x))
data['name'] = data.value.apply(lambda x: x[0])
score = data.value.apply(lambda x: dict(zip(keys, x[1:]))).tolist()
score = DataFrame(score)
data = data.join(score)
total = BlobETL(hal, '#').query('total').to_dataframe()
total['fullpath'] = total[0]
total = total.groupby('fullpath', as_index=False)\
.agg(lambda x: dict(zip(keys, x)))
score = total.value.tolist()
score = DataFrame(score)
total = total.join(score)
total['object_type'] = 'module'
total['name'] = total.fullpath\
.apply(lambda x: os.path.splitext((Path(x).name))[0])
data = pd.concat([data, total], ignore_index=True, sort=False)
cols = ['fullpath', 'name', 'object_type']
cols.extend(keys)
data = data[cols]
return data
# EXPORT--------------------------------------------------------------------
def write_plots(self, fullpath):
# type: (Union[str, Path]) -> RadonETL
'''
Writes metrics plots to given file.
Args:
fullpath (Path or str): Target file.
Returns:
RadonETL: self.
'''
cf.go_offline()
def remove_test_modules(data):
# type: (DataFrame) -> DataFrame
mask = data.fullpath\
.apply(lambda x: not re.search(r'_test\.py$', x)).astype(bool)
return data[mask]
lut = dict(
h1='h1 - the number of distinct operators',
h2='h2 - the number of distinct operands',
n1='n1 - the total number of operators',
n2='n2 - the total number of operands',
vocabulary='vocabulary (h) - h1 + h2',
length='length (N) - n1 + n2',
calculated_length='calculated_length - h1 * log2(h1) + h2 * log2(h2)',
volume='volume (V) - N * log2(h)',
difficulty='difficulty (D) - h1 / 2 * n2 / h2',
effort='effort (E) - D * V',
time='time (T) - E / 18 seconds',
bugs='bugs (B) - V / 3000 - an estimate of the errors in the implementation',
)
params = dict(
theme='henanigans',
colors=rpt.COLOR_SCALE,
dimensions=(900, 900),
asFigure=True,
)
html = '<body style="background: #242424">\n'
raw = remove_test_modules(self.raw_metrics)
mi = remove_test_modules(self.maintainability_index)
cc = remove_test_modules(self.cyclomatic_complexity_metrics)
hal = remove_test_modules(self.halstead_metrics)
raw['docstring_ratio'] = raw.multiline_comment / raw.code
raw.sort_values('docstring_ratio', inplace=True)
html += raw.iplot(
x='fullpath',
kind='barh',
title='Line Count Metrics',
**params
).to_html()
html += mi.iplot(
x='fullpath',
kind='barh',
title='Maintainability Metrics',
**params
).to_html()
params['dimensions'] = (900, 500)
cols = ['cyclomatic_complexity', 'cyclomatic_rank']
html += cc[cols].iplot(
kind='hist',
bins=50,
title='Cyclomatic Metric Distributions',
**params
).to_html()
cols = [
'h1', 'h2', 'n1', 'n2', 'vocabulary', 'length', 'calculated_length',
'volume', 'difficulty', 'effort', 'time', 'bugs'
]
html += hal[cols]\
.rename(mapper=lambda x: lut[x], axis=1)\
.iplot(
kind='hist',
bins=50,
title='Halstead Metric Distributions',
**params)\
.to_html()
html += '\n</body>'
with open(fullpath, 'w') as f:
f.write(html)
return self
def write_tables(self, target_dir):
# type: (Union[str, Path]) -> RadonETL
'''
Writes metrics tables as HTML files to given directory.
Args:
target_dir (Path or str): Target directory.
Returns:
RadonETL: self.
'''
def write_table(data, target):
# type: (DataFrame, Path) -> None
html = data.to_html()
# make table sortable
script = '<script '
script += 'src="http://www.kryogenix.org/code/browser/sorttable/sorttable.js" '
script += 'type="text/javascript"></script>\n'
html = re.sub('class="dataframe"', 'class="sortable"', html)
html = script + html
with open(target, 'w') as f:
f.write(html)
data = self.data
raw = self.raw_metrics
mi = self.maintainability_index
cc = self.cyclomatic_complexity_metrics
hal = self.halstead_metrics
write_table(data, Path(target_dir, 'all_metrics.html'))
write_table(raw, Path(target_dir, 'raw_metrics.html'))
write_table(mi, Path(target_dir, 'maintainability_metrics.html'))
write_table(cc, Path(target_dir, 'cyclomatic_complexity_metrics.html'))
write_table(hal, Path(target_dir, 'halstead_metrics.html'))
return self | /rolling-pin-0.9.3.tar.gz/rolling-pin-0.9.3/rolling_pin/radon_etl.py | 0.790409 | 0.35145 | radon_etl.py | pypi |
from typing import Any, Type, TypeVar, Union # noqa: F401
from copy import deepcopy
from pathlib import Path
import os
from lunchbox.enforce import Enforce
import toml
from rolling_pin.blob_etl import BlobETL
from toml.decoder import TomlDecodeError
T = TypeVar('T', bound='TomlETL')
# ------------------------------------------------------------------------------
class TomlETL:
@classmethod
def from_string(cls, text):
# type: (Type[T], str) -> T
'''
Creates a TomlETL instance from a given TOML string.
Args:
text (str): TOML string.
Returns:
TomlETL: TomlETL instance.
'''
return cls(toml.loads(text))
@classmethod
def from_toml(cls, filepath):
# type: (Type[T], Union[str, Path]) -> T
'''
Creates a TomlETL instance from a given TOML file.
Args:
filepath (str or Path): TOML file.
Returns:
TomlETL: TomlETL instance.
'''
return cls(toml.load(filepath))
def __init__(self, data):
# type: (dict[str, Any]) -> None
'''
Creates a TomlETL instance from a given dictionary.
Args:
data (dict): Dictionary.
'''
self._data = data
def to_dict(self):
# type: () -> dict
'''
Converts instance to dictionary copy.
Returns:
dict: Dictionary copy of instance.
'''
return deepcopy(self._data)
def to_string(self):
# type: () -> str
'''
Converts instance to a TOML formatted string.
Returns:
str: TOML string.
'''
return toml.dumps(
self._data, encoder=toml.TomlArraySeparatorEncoder(separator=',')
)
def write(self, filepath):
# type: (Union[str, Path]) -> None
'''
Writes instance to given TOML file.
Args:
filepath (str or Path): Target filepath.
'''
filepath = Path(filepath)
os.makedirs(filepath.parent, exist_ok=True)
with open(filepath, 'w') as f:
toml.dump(
self._data,
f,
encoder=toml.TomlArraySeparatorEncoder(separator=',')
)
def edit(self, patch):
# type: (str) -> TomlETL
'''
Apply edit to internal data given TOML patch.
Patch is always of the form '[key]=[value]' and in TOML format.
Args:
patch (str): TOML patch to be applied.
Raises:
TOMLDecoderError: If patch cannot be decoded.
EnforceError: If '=' not found in patch.
Returns:
TomlETL: New TomlETL instance with edits.
'''
msg = 'Edit patch must be a TOML parsable key value snippet with a "=" '
msg += 'character.'
try:
toml.loads(patch)
except TomlDecodeError as e:
msg += ' ' + e.msg
raise TomlDecodeError(msg, e.doc, e.pos)
Enforce('=', 'in', patch, message=msg)
# ----------------------------------------------------------------------
key, val = patch.split('=', maxsplit=1)
val = toml.loads(f'x={val}')['x']
data = BlobETL(self._data, separator='.').to_flat_dict()
data[key] = val
data = BlobETL(data, separator='.').to_dict()
return TomlETL(data)
def delete(self, regex):
# type: (str) -> TomlETL
'''
Returns portion of data whose keys fo not match a given regular expression.
Args:
regex (str): Regular expression applied to keys.
Returns:
TomlETL: New TomlETL instance.
'''
data = BlobETL(self._data, separator='.') \
.query(regex, ignore_case=False, invert=True) \
.to_dict()
return TomlETL(data)
def search(self, regex):
# type: (str) -> TomlETL
'''
Returns portion of data whose keys match a given regular expression.
Args:
regex (str): Regular expression applied to keys.
Returns:
TomlETL: New TomlETL instance.
'''
data = BlobETL(self._data, separator='.') \
.query(regex, ignore_case=False) \
.to_dict()
return TomlETL(data) | /rolling-pin-0.9.3.tar.gz/rolling-pin-0.9.3/rolling_pin/toml_etl.py | 0.845911 | 0.317201 | toml_etl.py | pypi |
import logging
from src.models.strategy.blue_green.steps.check_active_step import CheckActiveStep
from src.models.strategy.blue_green.steps.check_autoscaling_size_step import (
CheckAutoscalingSizeStep,
)
from src.models.strategy.blue_green.steps.rereoute_traffic_step import ReRouteTrafficStep
from src.models.strategy.blue_green.steps.set_new_autoscaling_group_size import (
SetNewAutoscalingGroupSizeStep,
)
from src.models.strategy.blue_green.steps.set_old_autoscaling_group_size import (
SetOldAutoscalingGroupSizeStep,
)
from src.models.strategy.blue_green.steps.terminate_inactive_instances_step import (
TerminateInactiveInstancesStep,
)
from src.models.strategy.blue_green.steps.wait_until_new_instances_are_booted import (
WaitUntilNewInstancesAreBootedStep,
)
from src.models.strategy.blue_green.steps.wait_until_new_instances_are_healthy import (
WaitUntilNewInstancesAreHealthyStep,
)
from src.models.strategy.blue_green.steps.wait_until_new_instances_are_targeted import (
WaitUntilNewInstancesAreTargetedStep,
)
from src.models.strategy.stategy import Strategy
from src.utils import aws_load_balancer, aws_autoscaling_group, aws_target_group
from src.utils.aws_autoscaling_group import AutoscalingGroup
from src.utils.aws_load_balancer import LoadBalancer, ListenerRule
from src.utils.aws_target_group import TargetGroup
logger = logging.getLogger(__name__)
class BlueGreenStrategy(Strategy):
_load_balancer: LoadBalancer = None
_autoscaling_group_blue: AutoscalingGroup = None
_target_group_blue: TargetGroup = None
_listener_rule_blue: ListenerRule = None
_autoscaling_group_green: AutoscalingGroup = None
_target_group_green: TargetGroup = None
_listener_rule_green: ListenerRule = None
steps = [
CheckActiveStep,
CheckAutoscalingSizeStep,
TerminateInactiveInstancesStep,
SetNewAutoscalingGroupSizeStep,
WaitUntilNewInstancesAreBootedStep,
WaitUntilNewInstancesAreHealthyStep,
WaitUntilNewInstancesAreTargetedStep,
ReRouteTrafficStep,
SetOldAutoscalingGroupSizeStep,
]
def __init__(
self,
load_balancer_name: str,
autoscaling_group_blue_name: str,
target_group_blue_name: str,
listener_rule_blue_arn: str,
autoscaling_group_green_name: str,
target_group_green_name: str,
listener_rule_green_arn: str,
):
self.load_balancer_name = load_balancer_name
self.autoscaling_group_blue_name = autoscaling_group_blue_name
self.target_group_blue_name = target_group_blue_name
self.listener_rule_blue_arn = listener_rule_blue_arn
self.autoscaling_group_green_name = autoscaling_group_green_name
self.target_group_green_name = target_group_green_name
self.listener_rule_green_arn = listener_rule_green_arn
@property
def load_balancer(self) -> LoadBalancer:
if self._load_balancer is None:
self._load_balancer = aws_load_balancer.get(self.load_balancer_name)
return self._load_balancer
@property
def autoscaling_group_blue(self) -> AutoscalingGroup:
if self._autoscaling_group_blue is None:
self._autoscaling_group_blue = aws_autoscaling_group.get(
self.autoscaling_group_blue_name
)
return self._autoscaling_group_blue
@property
def autoscaling_group_green(self) -> AutoscalingGroup:
if self._autoscaling_group_green is None:
self._autoscaling_group_green = aws_autoscaling_group.get(
self.autoscaling_group_green_name
)
return self._autoscaling_group_green
@property
def target_group_blue(self) -> TargetGroup:
if self._target_group_blue is None:
self._target_group_blue = aws_target_group.get(self.target_group_blue_name)
return self._target_group_blue
@property
def target_group_green(self) -> TargetGroup:
if self._target_group_green is None:
self._target_group_green = aws_target_group.get(self.target_group_green_name)
return self._target_group_green
@property
def listener_rule_blue(self) -> ListenerRule:
if self._listener_rule_blue is None:
self._listener_rule_blue = aws_load_balancer.get_listener_rule(
self.listener_rule_blue_arn
)
return self._listener_rule_blue
@property
def listener_rule_green(self) -> ListenerRule:
if self._listener_rule_green is None:
self._listener_rule_green = aws_load_balancer.get_listener_rule(
self.listener_rule_green_arn
)
return self._listener_rule_green
def execute(self) -> bool:
for idx, step_class in enumerate(self.steps):
step = step_class(
idx + 1,
len(self.steps),
self.load_balancer,
self.autoscaling_group_blue,
self.target_group_blue,
self.listener_rule_blue,
self.autoscaling_group_green,
self.target_group_green,
self.listener_rule_green,
)
step.pre_check()
step.execute()
step.post_check()
return True | /rolling-replacer-1.2.0.tar.gz/rolling-replacer-1.2.0/src/models/strategy/blue_green/strategy.py | 0.754644 | 0.293848 | strategy.py | pypi |
def update_status_RN(propfile,targetnumber,targetname,status,visitdigit,visitN,filters,sheet_df,verbal=True):
"""
This function updates visits in the propfile given (targetnumber,targetname) with status R or N.
For R (repeat) status, the target is already existed in the propfile, but we would like to change the total number of its visits (visitN) by keeping the visit number's first digit (visitdigit) and filter info (filters) the same.
Arguments:
- propfile = path to .prop file
- targetnumber = str
- targetname = str
- status = str. It is in {'R','N'}
- visitdigit = str, 36-base one digit (i.e., 0...Z)
- visitN = int or str
- filters = str. It must be a correct keyword to search for a template library (e.g., 'visits_G280'). Use TemplateHandler to facilitate this.
- sheet_df = pandas.DataFrame of the observing sheet (i.e., Google sheet)
- verbal = bool
"""
if verbal: print('Start update_status_RN({0},{1},{2},{3},{4},{5},{6},{7},{8}) ...\n'.format(propfile,targetnumber,targetname,status,visitdigit,visitN,filters,'sheet_df',verbal))
from rolling_snapshot_proposal_editor.find_targetname_in_visits import find_targetname_in_visits
from rolling_snapshot_proposal_editor.remove_visit import remove_visit
from rolling_snapshot_proposal_editor.templatehandler import TemplateHandler
from rolling_snapshot_proposal_editor.add_visit import add_visit
from rolling_snapshot_proposal_editor.get_available_visitnumber import get_available_visitnumber
import numpy as np
lineindex_visitnumber_targetname_list = find_targetname_in_visits(propfile)
vn_list = []
for i in lineindex_visitnumber_targetname_list:
index,vn,tn = i
if targetname==tn: vn_list.append(vn)
if verbal: print('Target {0} {1} had {2} visits in the old proposal. It will be updated to {3} visits.\n'.format(targetnumber,targetname,len(vn_list),visitN))
niter = np.abs(len(vn_list) - int(visitN))
if len(vn_list) > int(visitN):
if verbal: print('Remove {0} visits from the old proposal.\n'.format(niter))
for i in np.arange(niter): remove_visit(propfile,propfile,vn_list[-1-i])
elif len(vn_list) < int(visitN):
if verbal: print('Add {0} visits from the old proposal.\n'.format(niter))
visitnumber_available = get_available_visitnumber(propfile,visitdigit,verbal)
for i in np.arange(niter):
visittemplate = TemplateHandler().templatedict[filters]
visitnumber = visitnumber_available[i]
add_visit(propfile=propfile,outname=propfile,visittemplate=visittemplate,visitnumber=visitnumber,targetname=targetname,line_index=None)
print('Visit {0} added ... \n'.format(visitnumber))
if verbal: print('Finish update_status_RN.\n') | /rolling_snapshot_proposal_editor-1.2.0a1-py3-none-any.whl/rolling_snapshot_proposal_editor/update_status_RN.py | 0.41739 | 0.430327 | update_status_RN.py | pypi |
```
import os,glob
t = glob.glob('*')
if 'tmp' in t:
os.system('rm -R tmp')
os.system('mkdir tmp')
##### Example: read Google sheet
from rolling_snapshot_proposal_editor.googlesheetreader import GoogleSheetReader
CREDENTIAL = '/Users/kbhirombhakdi/Downloads/client_secret_560341155382-s324iuntpd9djf1avojfh6u1e4shu012.apps.googleusercontent.com.json'
SCOPES = ['https://www.googleapis.com/auth/spreadsheets.readonly']
SHEETID = '1edhJyvh4fLMZKOc2wonOoCCh8UCDkuJV_Wy5D2idme0'
RANGENAME = 'test'
sheet = GoogleSheetReader(scopes=SCOPES,sheetid=SHEETID,rangename=RANGENAME,credential=CREDENTIAL)
sheet.df
##### Example: removing target + associating visits
##### Test note: removing the last target number and its associating visits to check that the code can perform correctly.
from rolling_snapshot_proposal_editor.remove_fixed_target import remove_fixed_target
propfile = '../ASSETS/PHASE2/Rolling_16682_Week1v1_withsub.prop'
outname = './tmp/test.prop'
target_number = '9' # target number to be removed
json_template = '../ASSETS/JSON_TEMPLATE/fixed_target.json'
remove_fixed_target(propfile,target_number,json_template,outname,True)
##### Example: removing individual visits without removing the associating target
from rolling_snapshot_proposal_editor.remove_visit import remove_visit
visits = ['A1','A2','A3'] # visit numbers to be removed
for i in visits:
remove_visit(propfile=outname,outname=outname,visitnumber=i)
##### Example: adding new fixed target from a spreadsheet
from rolling_snapshot_proposal_editor.googlesheetreader import GoogleSheetReader
from rolling_snapshot_proposal_editor.templatehandler import TemplateHandler
from rolling_snapshot_proposal_editor.prepare_targetinfo import prepare_targetinfo
from rolling_snapshot_proposal_editor.add_fixed_target import add_fixed_target
CREDENTIAL = '/Users/kbhirombhakdi/Downloads/client_secret_560341155382-s324iuntpd9djf1avojfh6u1e4shu012.apps.googleusercontent.com.json'
SCOPES = ['https://www.googleapis.com/auth/spreadsheets.readonly']
SHEETID = '1edhJyvh4fLMZKOc2wonOoCCh8UCDkuJV_Wy5D2idme0'
RANGENAME = 'test'
spreadsheet_df = GoogleSheetReader(scopes=SCOPES,sheetid=SHEETID,rangename=RANGENAME,credential=CREDENTIAL).df
read_spreadsheet_csv = False
targetnumber = '99' # target number to be added
json_template = TemplateHandler().templatedict['fixed_target']
outname = './tmp/test.prop'
targetinfo = prepare_targetinfo(spreadsheet=spreadsheet_df,read_spreadsheet_csv=read_spreadsheet_csv,targetnumber=targetnumber,json_template=json_template)
add_fixed_target(propfile=outname,targetinfo=targetinfo,outname=outname,line_index=None)
##### Example: adding new visits using a template
from rolling_snapshot_proposal_editor.templatehandler import TemplateHandler
from rolling_snapshot_proposal_editor.add_visit import add_visit
outname = './tmp/test.prop'
visittemplate = TemplateHandler().templatedict['visits_G280']
visitnumbers = ['Z1','Z2','Z3'] # visit numbers to be added
targetname = 'test' # target name to be added. If target name does not exist in the target list, the script will still run but the error will show up when openning it in APT software.
for visitnumber in visitnumbers:
add_visit(propfile=outname,outname=outname,visittemplate=visittemplate,visitnumber=visitnumber,targetname=targetname,line_index=None)
```
| /rolling_snapshot_proposal_editor-1.2.0a1-py3-none-any.whl/rolling_snapshot_proposal_editor/demo/demo_1.0.0.ipynb | 0.410874 | 0.153867 | demo_1.0.0.ipynb | pypi |
```
import os,glob
t = glob.glob('*')
if 'tmp' in t:
os.system('rm -R tmp')
os.system('mkdir tmp')
##### Example: read Google sheet
from rolling_snapshot_proposal_editor.googlesheetreader import GoogleSheetReader
CREDENTIAL = '/Users/kbhirombhakdi/Downloads/client_secret_560341155382-s324iuntpd9djf1avojfh6u1e4shu012.apps.googleusercontent.com.json'
SCOPES = ['https://www.googleapis.com/auth/spreadsheets.readonly']
SHEETID = '1edhJyvh4fLMZKOc2wonOoCCh8UCDkuJV_Wy5D2idme0'
RANGENAME = 'test'
sheet = GoogleSheetReader(scopes=SCOPES,sheetid=SHEETID,rangename=RANGENAME,credential=CREDENTIAL)
sheet.df
##### Example: removing target + associating visits
##### Test note: removing the last target number and its associating visits to check that the code can perform correctly.
from rolling_snapshot_proposal_editor.remove_fixed_target import remove_fixed_target
propfile = '../ASSETS/PHASE2/Rolling_16682_Week1v1_withsub.prop'
outname = './tmp/test.prop'
target_number = '9' # target number to be removed
json_template = '../ASSETS/JSON_TEMPLATE/fixed_target.json'
remove_fixed_target(propfile,target_number,json_template,outname,True)
##### Example: removing individual visits without removing the associating target
from rolling_snapshot_proposal_editor.remove_visit import remove_visit
visits = ['A1','A2','A3'] # visit numbers to be removed
for i in visits:
remove_visit(propfile=outname,outname=outname,visitnumber=i)
##### Example: adding new fixed target from a spreadsheet
from rolling_snapshot_proposal_editor.googlesheetreader import GoogleSheetReader
from rolling_snapshot_proposal_editor.templatehandler import TemplateHandler
from rolling_snapshot_proposal_editor.prepare_targetinfo import prepare_targetinfo
from rolling_snapshot_proposal_editor.add_fixed_target import add_fixed_target
CREDENTIAL = '/Users/kbhirombhakdi/Downloads/client_secret_560341155382-s324iuntpd9djf1avojfh6u1e4shu012.apps.googleusercontent.com.json'
SCOPES = ['https://www.googleapis.com/auth/spreadsheets.readonly']
SHEETID = '1edhJyvh4fLMZKOc2wonOoCCh8UCDkuJV_Wy5D2idme0'
RANGENAME = 'test'
spreadsheet_df = GoogleSheetReader(scopes=SCOPES,sheetid=SHEETID,rangename=RANGENAME,credential=CREDENTIAL).df
read_spreadsheet_csv = False
targetnumber = '99' # target number to be added
json_template = TemplateHandler().templatedict['fixed_target']
outname = './tmp/test.prop'
targetinfo = prepare_targetinfo(spreadsheet=spreadsheet_df,read_spreadsheet_csv=read_spreadsheet_csv,targetnumber=targetnumber,json_template=json_template)
add_fixed_target(propfile=outname,targetinfo=targetinfo,outname=outname,line_index=None)
##### Example: adding new visits using a template
from rolling_snapshot_proposal_editor.templatehandler import TemplateHandler
from rolling_snapshot_proposal_editor.add_visit import add_visit
outname = './tmp/test.prop'
visittemplate = TemplateHandler().templatedict['visits_G280']
visitnumbers = ['Z1','Z2','Z3'] # visit numbers to be added
targetname = 'test' # target name to be added. If target name does not exist in the target list, the script will still run but the error will show up when openning it in APT software.
for visitnumber in visitnumbers:
add_visit(propfile=outname,outname=outname,visittemplate=visittemplate,visitnumber=visitnumber,targetname=targetname,line_index=None)
```
| /rolling_snapshot_proposal_editor-1.2.0a1-py3-none-any.whl/rolling_snapshot_proposal_editor/demo/.ipynb_checkpoints/demo_1.0.0-checkpoint.ipynb | 0.410874 | 0.153867 | demo_1.0.0-checkpoint.ipynb | pypi |
def update_status_RN(propfile,targetnumber,targetname,status,visitdigit,visitN,filters,sheet_df,verbal=True):
"""
This function updates visits in the propfile given (targetnumber,targetname) with status R or N.
For R (repeat) status, the target is already existed in the propfile, but we would like to change the total number of its visits (visitN) by keeping the visit number's first digit (visitdigit) and filter info (filters) the same.
Arguments:
- propfile = path to .prop file
- targetnumber = str
- targetname = str
- status = str. It is in {'R','N'}
- visitdigit = str, 36-base one digit (i.e., 0...Z)
- visitN = int or str
- filters = str. It must be a correct keyword to search for a template library (e.g., 'visits_G280'). Use TemplateHandler to facilitate this.
- sheet_df = pandas.DataFrame of the observing sheet (i.e., Google sheet)
- verbal = bool
"""
if verbal: print('Start update_status_RN({0},{1},{2},{3},{4},{5},{6},{7},{8}) ...\n'.format(propfile,targetnumber,targetname,status,visitdigit,visitN,filters,'sheet_df',verbal))
from rolling_snapshot_proposal_editor.find_targetname_in_visits import find_targetname_in_visits
from rolling_snapshot_proposal_editor.remove_visit import remove_visit
from rolling_snapshot_proposal_editor.templatehandler import TemplateHandler
from rolling_snapshot_proposal_editor.add_visit import add_visit
from rolling_snapshot_proposal_editor.get_available_visitnumber import get_available_visitnumber
import numpy as np
lineindex_visitnumber_targetname_list = find_targetname_in_visits(propfile)
vn_list = []
for i in lineindex_visitnumber_targetname_list:
index,vn,tn = i
if targetname==tn: vn_list.append(vn)
if verbal: print('Target {0} {1} had {2} visits in the old proposal. It will be updated to {3} visits.\n'.format(targetnumber,targetname,len(vn_list),visitN))
niter = np.abs(len(vn_list) - int(visitN))
if len(vn_list) > int(visitN):
if verbal: print('Remove {0} visits from the old proposal.\n'.format(niter))
for i in np.arange(niter): remove_visit(propfile,propfile,vn_list[-1-i])
elif len(vn_list) < int(visitN):
if verbal: print('Add {0} visits from the old proposal.\n'.format(niter))
visitnumber_available = get_available_visitnumber(propfile,visitdigit,verbal)
for i in np.arange(niter):
visittemplate = TemplateHandler().templatedict[filters]
visitnumber = visitnumber_available[i]
add_visit(propfile=propfile,outname=propfile,visittemplate=visittemplate,visitnumber=visitnumber,targetname=targetname,line_index=None)
print('Visit {0} added ... \n'.format(visitnumber))
if verbal: print('Finish update_status_RN.\n') | /rolling_snapshot_proposal_editor-1.2.0a1-py3-none-any.whl/rolling_snapshot_proposal_editor/.ipynb_checkpoints/update_status_RN-checkpoint.py | 0.41739 | 0.430327 | update_status_RN-checkpoint.py | pypi |
from queue import Queue
from rolling_technical_indicators.model import Node
class ExponentialMovingAverage(Node):
def __init__(self, period):
self.value = None
self.alpha = 2/(period+1)
def put(self, data):
if self.value == None:
self.value = data
else:
self.value = self.alpha*data + (1-self.alpha)*self.value
def isFull(self):
return self.value != None
class MovingSum(Node):
def __init__(self, period):
self.queue = Queue(maxsize = period)
self.value = 0
def put(self, data):
if self.isFull():
oldestDataPoint = self.queue.get()
self.value -= oldestDataPoint
self.queue.put(data)
self.value += data
def isFull(self):
return self.queue.full()
class SimpleMovingAverage(Node):
def __init__(self, period):
self.movingSum = MovingSum(period)
self.period = period
self.value = None
def isFull(self):
return self.movingSum.isFull()
def calculate(self):
self.value = self.movingSum.get()/self.period
def add(self, data):
self.movingSum.put(data)
class StandardDeviation(Node):
def __init__(self, period):
self.movingAvg = SimpleMovingAverage(period)
self.squaredMovingSum = MovingSum(period)
self.movingSum = MovingSum(period)
self.period = period
self.value = None
def isFull(self):
return self.movingSum.isFull()
def add(self, data):
self.movingAvg.put(data)
self.squaredMovingSum.put(data**2)
self.movingSum.put(data)
def calculate(self):
a = self.squaredMovingSum.get()
b = self.movingAvg.get()
c = self.movingSum.get()
summation = a -2*b*c + self.period*b**2
self.value = (summation/(self.period-1))**.5
class SmoothedMovingAverage(ExponentialMovingAverage):
def __init__(self, period):
self.value = None
self.alpha = 1/period
class TrueRange(Node):
previousClose = None
currentHigh = None
currentLow = None
value = None
def add(self, record):
self.currentHigh = record.high
self.currentLow = record.low
def isFull(self):
return self.previousClose != None
def calculate(self):
self.value = max(self.currentHigh, self.previousClose) - min(self.currentLow, self.previousClose)
def store(self, record):
self.previousClose = record.close
class AverageTrueRange(Node):
def __init__(self, period):
self.trueRange = TrueRange()
self.smoothedMovingAvg = SmoothedMovingAverage(period)
def add(self, record):
self.trueRange.put(record)
def calculate(self):
tr = max(self.currentHigh, self.previousClose) - min(self.currentLow, self.previousClose)
self.smoothedMovingAvg.put(tr)
self.value = self.smoothedMovingAvg.get() | /rolling_technical_indicators-0.0.7.tar.gz/rolling_technical_indicators-0.0.7/rolling_technical_indicators/calc.py | 0.85223 | 0.515742 | calc.py | pypi |
from typing import NewType, Union, Dict
import numpy as np
from dataclasses import dataclass
from operator import itemgetter
import os
import datetime
import glob
from rolling_window.peristance import Peristance
@dataclass
class WindowOptions:
"""
Parameters
----------
period: this is the total size in rolling window in seconds - used in all window options
step: the jump between windows used in the hopping window option
"""
period: datetime.timedelta
step: datetime.timedelta
class RollingWindow(Peristance):
def __init__(
self,
*,
window_type: str,
datastructure: any,
sort: bool = False,
persistance: bool = False,
path: str = os.getcwd() + "/store",
window_options: WindowOptions
):
super().__init__()
self._window_types = ["sliding", "hopping", "batch"] # use enum - note here to remind me to change this to use enums
self.window_type = window_type
self.window_options = window_options
if not self._check_window_option():
raise AttributeError(
"The window type must be either 'sliding', 'hopping' or 'batch "
)
self.persistance = persistance
self.sort = sort
self.path = path
self.options = window_options
self.type_window = window_type
self.annotations = datastructure.__annotations__
if "time" not in self.annotations.keys():
raise AttributeError(
"datastructure must contain a field called time that accepts the unix-time of the data of type float"
)
self.datastructure_map = {
key: ii for ii, key in enumerate(sorted(self.annotations.keys()), 1)
}
self.datastructure_map["time"] = 0
self.dtype = [
(key, self.annotations[key])
for key, index in sorted(self.datastructure_map.items(), key=itemgetter(1))
]
self.data_dict = {}
async def add_data(
self, id: str, data: any
) -> None:
if id not in set(self.data_dict.keys()): # check the in memory database first
if (
len(glob.glob(id + ".hkl")) > 0
):
try:
self.data_dict = self.load(
id=id, data_dict=self.data_dict, path=self.path
)
except OSError:
pass
new_data = data.__dict__
new_data_array = [0] * (len(new_data))
for key, value in new_data.items():
new_data_array[self.datastructure_map[key]] = value
try:
self.data_dict[id] = np.vstack(
[
self.data_dict[id],
np.array([tuple(new_data_array)], dtype=self.dtype),
]
)
except:
self.data_dict[id] = np.array([tuple(new_data_array)], dtype=self.dtype)
if self.persistance is True:
await self.save(id)
async def save(self, id):
await super(RollingWindow, self).save(
**{"data_dict": self.data_dict, "path": self.path, "id": id}
)
def send(self, id: str) -> Union[np.ndarray, None]:
if self.sort is True:
self._sort_data(id) # make sure everything is in time order
if self.window_type is "sliding":
return self._sliding_filter(id)
elif self.window_type is "hopping":
return self._hopping_filter(id)
elif self.window_type is "batch":
return self._batch_filter(id)
def _sliding_filter(self, id):
"""Send the data for every time step when the amount of data in the window spans the specified time period"""
if (
self.data_dict[id]["time"][-1] - self.data_dict[id]["time"][0]
) >= self.window_options.period.seconds:
to_return = self.data_dict[id]
self.data_dict[id] = self.data_dict[id][1:]
return to_return
def _hopping_filter(self, id):
""""The hopping window hops by a given time frame every couple of seconds"""
if (
self.data_dict[id]["time"][-1] - self.data_dict[id]["time"][0]
) >= self.window_options.period.seconds:
boundary = (
self.data_dict[id]["time"][0][0] + self.window_options.step.seconds
)
to_return = self.data_dict[id]
sliced_array = self.data_dict[id][self.data_dict[id]["time"] >= boundary]
self.data_dict[id] = sliced_array.reshape(np.shape(sliced_array)[0], 1)
return to_return
def _batch_filter(self, id):
if (
self.data_dict[id]["time"][-1] - self.data_dict[id]["time"][0]
) >= self.window_options.period.seconds:
to_return = self.data_dict[id]
self.data_dict[id] = self.data_dict[id][
-1
] # include the last value in the batch to provide continuity
return to_return
def get_ids(self):
return self.data_dict.keys()
def _map_dataclass(self, dataclass_object):
pass
def _check_window_option(self):
return self.window_type in self._window_types
def _sort_data(self, id):
self.data_dict[id] = np.sort(self.data_dict[id], axis=0, order="time") | /rolling_window-0.1.2.tar.gz/rolling_window-0.1.2/rolling_window/rolling_window.py | 0.791821 | 0.236065 | rolling_window.py | pypi |
# RollTheLore
An *unseen servant* providing an *advantage* to DMs/GMs while creating their worlds.
[](https://travis-ci.com/geckon/rollthelore)
As a DM, have you ever needed to lower the *DC* for the world building skill
check? Did you need to create the right NPC for your players to interact with
and thought you could use a *divine intervention*? Were you out of ideas and
felt like an *inspiration die* or a bit of *luck* was all that you needed? This
tool will probably not fulfill all your *wish*es but it can at least provide
*guidance*.
RollTheLore is a tool from a DM to other DMs out there but it can also help
players as it's supposed to inspire you while not only creating your world,
a shop or a random NPC encounter but it can also be used to help with your
backstories.
As of now it can only randomly create NPCs but I have some ideas for generating
towns as well and who knows? Maybe there will be even more. RollTheLore is not
meant to be a perfect tool but rather a simple one that still can provide very
valuable ideas for your campaigns and stories. At least for me personaly it
works very well as I often need a few simple points I can use as inspiration
and build more fluff around it with both imagination and improvisation.
Sometimes the generated character doesn't make much sense but often that is
exactly the fun part. When I need an NPC for a story I like to *roll* a few of
them and pick one as a basis, then I build on that. It can also be used to
pre-roll a few NPCs in advance and then use them e.g. when your players decide
to enter a shop or address a person you hadn't planned beforehand.
Primarily RollTheLore is intended to be used with DnD 5e but it can very well
serve for other game systems as well. The imagination is what matters the most.
Please note that the tool is under development. Ideas, comments and bug reports are
welcome!
## Installation
```
pip install rollthelore
```
## Usage
```
$ rollnpc --help
Usage: rollnpc.py [OPTIONS]
Generate 'number' of NPCs and print them.
Options:
--adventurers / --no-adventurers
Generate adventurers or civilians?
-a, --age-allowed TEXT Allowed age(s).
-A, --age-disallowed TEXT Disallowed age(s).
-c, --class-allowed TEXT Allowed class(es).
-C, --class-disallowed TEXT Disallowed class(es).
--names-only Generate only NPC names
-n, --number INTEGER Number of NPCs to generate.
-r, --race-allowed TEXT Allowed race(s).
-R, --race-disallowed TEXT Disallowed race(s).
-s, --seed TEXT Seed number used to generate NPCs. The same
seed will produce the same results.
-t, --traits INTEGER RANGE Number of traits generated. [0<=x<=9]
--help Show this message and exit.
```
## Examples
```
$ rollnpc
Seed used: '3625060903250429453'. Run with '-s 3625060903250429453' to get the same result.
Name: Anfar
Age: older
Race: tabaxi
Class: barbarian
Appearance: artificial ear, subtle ring(s)
Personality: materialistic, dishonest
```
```
$ rollnpc -n3
Seed used: '3098691926526726649'. Run with '-s 3098691926526726649' to get the same result.
Name: Towerlock
Age: middle aged
Race: human
Class: cleric
Appearance: artificial finger(s), bruise(s)
Personality: wretched, bitter
Name: Leska
Age: young
Race: half-elf
Class: sorcerer
Appearance: visible Adam's apple, different leg length
Personality: scary, unreliable
Name: Marius
Age: old
Race: kobold
Class: warlock
Appearance: sexy, distinctive jewelry
Personality: tireless, decadent
```
```
$ rollnpc -n2 -r elf
Seed used: '8069506022788287187'. Run with '-s 8069506022788287187' to get the same result.
Name: Zerma
Age: older
Race: elf (dark - drow)
Class: rogue
Appearance: ugly, dreadlocks
Personality: gruesome, emotional
Name: Ryfar
Age: adult
Race: elf (wood)
Class: cleric
Appearance: light, horn(s)
Personality: childish, determined
```
```
$ rollnpc --traits 1
Seed used: '291255857363596163'. Run with '-s 291255857363596163' to get the same result.
Name: Enidin
Age: adult
Race: aasimar
Class: cleric
Appearance: receding hair
Personality: hardened
```
```
$ rollnpc -t 3
Seed used: '5732868273964053039'. Run with '-s 5732868273964053039' to get the same result.
Name: Letor
Age: older
Race: dragonborn
Class: sorcerer
Appearance: plump, abnormally short, short hair
Personality: bitter, scornful, sloppy
```
```
$ rollnpc --no-adventurers -n 2 -t 1
Seed used: '5305197205526584553'. Run with '-s 5305197205526584553' to get the same result.
Name: Yorjan
Age: adult
Race: tiefling
Appearance: big eyes
Personality: foolish
Name: Nalfar
Age: adult
Race: dragonborn
Appearance: dreadlocks
Personality: perverse
```
### Seeding
Let's say you generated this lovely duo and you want to keep it for the future.
```
$ rollnpc -n2
Seed used: '6095344300345411392'. Run with '-s 6095344300345411392' to get the same result.
Name: Macon
Age: older
Race: half-elf
Class: bard
Appearance: big eyes, muttonchops
Personality: intellectual, secretive
Name: Sirius
Age: very old
Race: human
Class: cleric
Appearance: different hand size, dimple in chin
Personality: speaks silently, hypochondriac
```
You can either save the whole text or just the seed and generate the same
data again like this:
```
$ rollnpc -n2 -s 6095344300345411392
Seed used: '6095344300345411392'. Run with '-s 6095344300345411392' to get the same result.
Name: Macon
Age: older
Race: half-elf
Class: bard
Appearance: big eyes, muttonchops
Personality: intellectual, secretive
Name: Sirius
Age: very old
Race: human
Class: cleric
Appearance: different hand size, dimple in chin
Personality: speaks silently, hypochondriac
```
| /rollthelore-0.3.2.tar.gz/rollthelore-0.3.2/README.md | 0.585101 | 0.80456 | README.md | pypi |
[](https://robertoprevato.visualstudio.com/rolog/_build/latest?definitionId=12) [](https://pypi.org/project/rolog/) [](https://robertoprevato.visualstudio.com/rolog/_build?definitionId=12)
# Async friendly logging classes for Python 3
**Features:**
* logging classes using `async/await` for logs
* handling of six logging levels, like in built-in logging module
* built-in support for flushing of log records (e.g. making a web request, or writing to a database, every __n__ records)
* flushing supports max retries, configurable delays, number of attempts, and fallback target in case of failure
* support for several targets per logger
* can be used to asynchronously log to different destinations (for example, web api integration, DBMS, etc.)
* logged records support any kind of desired arguments and data structures
* completely abstracted from __destination__ of log entries
* can be used with built-in `logging` module, for sync logging and to [use built-in logging classes](https://docs.python.org/3/library/logging.handlers.html#module-logging.handlers)
* integrated with [rodi dependency injection library](https://pypi.org/project/rodi/), to support injection of loggers by activated class name
## Installation
```bash
pip install rolog
```
## Classes and log levels

| Class | Description |
| ---------------------- | -------------------------------------------------------------------------------------------------------- |
| **LogLevel** | Int enum: _NONE, DEBUG, INFORMATION, WARNING, ERROR, CRITICAL_ |
| **LogTarget** | base for classes that are able to send log records to a certain destination |
| **Logger** | class responsible for creating log records and sending them to appropriate targets, by level |
| **LoggerFactory** | configuration class, responsible for holding configuration of targets and providing instances of loggers |
| **LogRecord** | log record created by loggers, sent to configured targets by a logger |
| **ExceptionLogRecord** | log record created by loggers, including exception information |
| **FlushLogTarget** | abstract class, derived of `LogTarget`, handling records in groups, storing them in memory |
### Basic use
As with the built-in `logging` module, `Logger` class is not meant to be instantiated directly, but rather obtained using a configured `LoggerFactory`.
Example:
```python
import asyncio
from rolog import LoggerFactory, Logger, LogTarget
class PrintTarget(LogTarget):
async def log(self, record):
await asyncio.sleep(.1)
print(record.message, record.args, record.data)
factory = LoggerFactory()
factory.add_target(PrintTarget())
logger = factory.get_logger(__name__)
loop = asyncio.get_event_loop()
async def example():
await logger.info('Lorem ipsum')
# log methods support any argument and keyword argument:
# these are stored in the instances of LogRecord, it is responsibility of LogTarget(s)
# to handle these extra parameters as desired
await logger.info('Hello, World!', 1, 2, 3, cool=True)
loop.run_until_complete(example())
```
## Flushing targets
`rolog` has built-in support for log targets that flush messages in groups, this is necessary to optimize for example
reducing the number of web requests when sending log records to a web api, or enabling bulk-insert inside a database.
Below is an example of flush target class that sends log records to some web api, in groups of `500`:
```python
from typing import List
from rolog import FlushLogTarget, LogRecord
class SomeLogApiFlushLogTarget(FlushLogTarget):
def __init__(self, http_client):
super().__init__()
self.http_client = http_client
async def log_records(self, records: List[LogRecord]):
# NB: implement here your own logic to make web requests to send log records
# to a web api, such as Azure Application Insights
# (see for example https://pypi.org/project/asynapplicationinsights/)
pass
```
Flush targets handle retries with configurable and progressive delays, when logging a group of records fails.
By default, in case of failure a flush target tries to log records __3 times__, using a progressive delay of __0.6 seconds * attempt number__,
finally falling back to a configurable fallback target if logging always failed. Warning messages are issued, using built-in
[`Warnings`](https://docs.python.org/3.1/library/warnings.html) module to notify of these failures.
These parameters are configurable using constructor parameters `fallback_target`, `max_size`, `retry_delay`, `progressive_delay`.
```python
class FlushLogTarget(LogTarget, ABC):
"""Base class for flushing log targets: targets that send the log records
(created by loggers) to the appropriate destination in groups."""
def __init__(self,
queue: Optional[Queue]=None,
max_length: int=500,
fallback_target: Optional[LogTarget]=None,
max_retries: int=3,
retry_delay: float=0.6,
progressive_delay: bool=True):
```
### Flushing when application stops
Since flushing targets hold log records in memory before flushing them, it's necessary to flush when an application stops.
Assuming that a single `LoggerFactory` is configured in the configuration root of an application, this
can be done conveniently, by calling the `dispose` method of the logger factory.
```python
# on application shutdown:
await logger_factory.dispose()
```
## Dependency injection
`rolog` is integrated with [rodi dependency injection library](https://pypi.org/project/rodi/), to support injection of loggers per activated class name.
When a class that expects a parameter of `rolog.Logger` type is activated, it receives a logger for the category of the class name itself.
For more information, please refer to the [dedicated page in project wiki](https://github.com/RobertoPrevato/rolog/wiki/Dependency-injection-with-rodi).
## Documentation
Please refer to documentation in the project wiki: [https://github.com/RobertoPrevato/rolog/wiki](https://github.com/RobertoPrevato/rolog/wiki).
## Develop and run tests locally
```bash
pip install -r dev_requirements.txt
# run tests using automatic discovery:
pytest
```
| /rolog-1.0.2.tar.gz/rolog-1.0.2/README.md | 0.682574 | 0.823896 | README.md | pypi |
# pip install msparser, path.py
from __future__ import print_function, division
try:
import msparser
except ImportError:
print('Need to install msparser.\npip install msparser\n')
raise
try:
from path import Path
except ImportError:
print('Need to install path.py.\npip install path.py\n')
raise
import matplotlib.pyplot as plt
import re
import matplotlib.patches as patches
import numpy as np
import argparse
def formatMem(bytes, memUnit):
if memUnit == 'MB':
exp = 2
elif memUnit == 'KB':
exp = 1
else:
raise NotImplementedError('Unknown memory unit')
return bytes/1024**exp
def tex_escape(text):
"""
:param text: a plain text message
:return: the message escaped to appear correctly in LaTeX
"""
conv = {
'&': r'\&',
'%': r'\%',
'$': r'\$',
'#': r'\#',
'_': r'\_',
'{': r'\{',
'}': r'\}',
'~': r'\textasciitilde{}',
'^': r'\^{}',
'\\': r'\textbackslash{}',
'<': r'\textless ',
'>': r'\textgreater ',
}
regex = re.compile('|'.join(re.escape(key) for key in sorted(conv.keys(), key = lambda item: - len(item))))
textNew = regex.sub(lambda match: conv[match.group()], text)
return textNew
def plotMem(massifFile,
filter, # filter by timer name
minTimeDiff, # filter by difference in beginning and end
minMemDiff, # filter by change in memory usage
shortTimers, # exclude short timers
memUnit='MB', # unit for memory
displayTimers=True
):
# first parse the log file valgrind created for us
data = msparser.parse_file(massifFile)
massifFile = Path(massifFile)
cmd = data['cmd']
timeUnit = data['time_unit']
snapshots = data['snapshots']
times = []
memHeap = []
for s in snapshots:
try:
times.append(s['time'])
memHeap.append(formatMem(s['mem_heap'], memUnit))
except:
pass
# now parse all the snapshot pairs we took in the timers
# (We compile MueLu with MueLu_TIMEMONITOR_MASSIF_SNAPSHOTS=1 )
snapshotPairs = []
for f in Path('.').glob(massifFile+"*start.out"):
fEnd = f.replace('start.out', 'stop.out')
label = Path(f).basename().stripext().replace('.start', '')
label = label.replace(massifFile.basename()+'.', '')
try:
label, counter = label.rsplit('.', 1)
except:
pass
try:
data = msparser.parse_file(f)
dataEnd = msparser.parse_file(fEnd)
assert data['time_unit'] == timeUnit
assert dataEnd['time_unit'] == timeUnit
data = data['snapshots']
dataEnd = dataEnd['snapshots']
assert(len(data)) == 1
assert(len(dataEnd)) == 1
assert data[0]['time'] <= dataEnd[0]['time'], f
data[0]['label'] = label
data[0]['counter'] = counter
data[0]['mem_heap'] = formatMem(data[0]['mem_heap'], memUnit)
dataEnd[0]['mem_heap'] = formatMem(dataEnd[0]['mem_heap'], memUnit)
times.append(data[0]['time'])
times.append(dataEnd[0]['time'])
memHeap.append(data[0]['mem_heap'])
memHeap.append(dataEnd[0]['mem_heap'])
snapshotPairs += [(data[0], dataEnd[0])]
except FileNotFoundError:
print(f)
# sort the snapshots
times = np.array(times)
memHeap = np.array(memHeap)
idx = np.argsort(times)
print('maximum heap memory usage: {}'.format(memHeap.max()))
times = times[idx]
memHeap = memHeap[idx]
times = times[memHeap > minMemDiff]
memHeap = memHeap[memHeap > minMemDiff]
assert(len(times) > 0)
# plot the curve of memory usage
plt.plot(times, memHeap, '-x')
if displayTimers:
# now, filter through the snapshot pairs
# otherwise, the plot becomes very messy
filter = re.compile(filter)
told = (-2*minTimeDiff, -2*minTimeDiff)
snapshotPairsNew = []
for i, pair in enumerate(sorted(snapshotPairs, key=lambda x: x[0]['time'])):
if (filter.search(pair[0]['label']) and
abs(pair[0]['mem_heap']-pair[1]['mem_heap']) > minMemDiff):
t = [pair[0]['time'], pair[1]['time']]
if (abs(t[0]-told[0]) < minTimeDiff and abs(t[1]-told[1]) < minTimeDiff):
print('Timers "{}" and "{}" seems to coincide'.format(nameold, pair[0]['label']))
continue
if (t[1]-t[0] < shortTimers):
continue
told = t
nameold = pair[0]['label']
snapshotPairsNew.append(pair)
snapshotPairs = snapshotPairsNew
# stack the snapshot pairs
height = max(memHeap)/len(snapshotPairs)
for i, pair in enumerate(sorted(snapshotPairs, key=lambda x: x[0]['time'])):
plt.gca().add_patch(patches.Rectangle((pair[0]['time'], i*height),
pair[1]['time']-pair[0]['time'],
height, alpha=0.5, facecolor='red'))
plt.text(pair[0]['time'], (i+0.5)*height, "%r"%pair[0]['label'])
# add vertical lines at start and end for each timer
plt.plot([pair[0]['time'], pair[0]['time']], [0, max(memHeap)], '-', c='grey', alpha=0.5)
plt.plot([pair[1]['time'], pair[1]['time']], [0, max(memHeap)], '-', c='grey', alpha=0.5)
# add circles on these lines for memory usage at beginning and end
plt.scatter([pair[0]['time'], pair[1]['time']],
[pair[0]['mem_heap'], pair[1]['mem_heap']], c='r')
plt.xlabel(timeUnit)
plt.ylabel(memUnit)
plt.title("%r"%cmd)
if __name__ == '__main__':
parser = argparse.ArgumentParser(description="""Plot memory profile from massif.
Massif spits out a log file in the form "massif.out.PID".
If MueLu is compiler with MueLu_TIMEMONITOR_MASSIF_SNAPSHOTS=1, the
corresponding snapshots of the form "massif.out.PID.Timer" for the timers
are also included.""",
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('massifFile', nargs=1, help='massif log file, something like "massif.out.PID"')
parser.add_argument('--minMemDiff',
help='filter out timers that have small change in memory usage',
type=float,
default=0.05)
parser.add_argument('--minTimeDiff',
help='filter out timers that coincide up to this many instructions',
type=int,
default=-1)
parser.add_argument('--shortTimers',
help='filter out timers that have fewer instructions than this',
type=int,
default=-1)
parser.add_argument('--filter',
help='regexp to filter for timers to include',
type=str,
default='')
parser.add_argument('--memUnit',
help='memory unit (KB, MB)',
type=str,
default='MB')
args = parser.parse_args()
massifFile = args.massifFile[0]
data = msparser.parse_file(massifFile)
timeUnit = data['time_unit']
if timeUnit == 'i':
if args.shortTimers is -1:
args.shortTimers = 5e7
if args.minTimeDiff is -1:
args.minTimeDiff = 1e7
elif timeUnit == 'ms':
if args.shortTimers is -1:
args.shortTimers = 90
if args.minTimeDiff is -1:
args.minTimeDiff = 20
else:
raise NotImplementedError()
plotMem(massifFile,
args.filter, args.minTimeDiff, args.minMemDiff,
args.shortTimers, args.memUnit)
plt.show() | /roltrilinos-0.0.9.tar.gz/roltrilinos-0.0.9/packages/teuchos/core/utils/plotMassifMemoryUsage.py | 0.596081 | 0.428712 | plotMassifMemoryUsage.py | pypi |
# Required Software
The following list of software is required by the Windows build scripts. The build scripts
assume this software will be installed in specific locations, so before installing it would
be worthwhile to browse the scripts for these locations. Alternatively, you could download
and install the software wherever you want - just be sure to copy and update the build scripts
with the correct locations.
**[CMake][1]** - required to setup the build configuration. (Tested using version 3.8.1)
**[Ninja][2]** - required to parallelize the build on Windows. (Tested using version 1.7.2)
**[Visual Studio 2015][3]** - required to build packages.
**[Microsoft MPI][4]** - required to create MPI builds of packages. (Tested using version 8.1.12438.1084)
**Perl** - required by some packages for testing. (Tested using [Strawberry Perl][5] version 5.24.1.1)
**[Git][6]** - required for the scripts to update source code.
**CLAPACK** - required TPL dependency for some packages (Tested using version 3.2.1). Note that if a
pre-built version is not available, you must [build it from source code][7].
[1]: https://cmake.org/download/
[2]: https://ninja-build.org/
[3]: https://www.visualstudio.com/
[4]: https://msdn.microsoft.com/en-us/library/bb524831(v=vs.85).aspx
[5]: http://strawberryperl.com/
[6]: https://git-scm.com/
[7]: http://icl.cs.utk.edu/lapack-for-windows/clapack/
# Script Usage
There are two scripts intended for the typical user - *task_driver_windows.bat* and
*create_windows_package.bat*. For details about additional scripts, please refer to the
[Script File Summary](#Script-File-Summary) section below.
##### task_driver_windows.bat
This script builds and tests Trilinos packages for both Debug and Release configurations,
following the Trilinos package-by-package testing paradigm. The packages are built as static
libraries, and the results are submitted to [CDash][8]. This script may be launched from
the command-line as is, without any additional arguments.
##### create_windows_package.bat
This script creates a ZIP package of static libraries and header files from the set of Trilinos
packages specified in the script. The expected syntax to use the script is:
`create_windows_package.bat <Build Type> <Base Directory>`
where `<Build Type>` specifies the build configuration (Release or Debug) and
`<Base Directory>` specifies a root working directory where the script will clone/update
the Trilinos repository as a sub-directory, create a build sub-directory to do the actual build,
and place resulting output files and ZIP package. For example,
```
create_windows_package.bat Debug C:\path\to\DebugPackageDir
```
would create a Debug build of static libraries, and at the end of the script the base directory
would have the following contents:
```
C:\path\to\DebugPackageDir
- build (Build directory)
- Trilinos (Source code directory)
- update_output.txt (Output from the Git update step)
- configure_output.txt (Output from the Configure step)
- build_output.txt (Output from the build step)
- package_output.txt (Output from the package step)
- trilinos-setup-12.3.zip (Resulting ZIP package)
```
[8]: https://testing.sandia.gov/cdash
# Script File Summary
##### TrilinosCTestDriverCore.windows.msvc.cmake
CMake script that sets the options common to both the Debug and Release configurations of a
Trilinos build on Windows using Visual Studio, including the specific Trilinos packages to be
built. This script follows the Trilinos package-by-package testing paradigm. This script
assumes specific build tools exist (e.g., Ninja, MSVC14, MPI, etc.) and assumes where
they will be located on Windows.
##### create_windows_package.bat
Windows batch script that creates a ZIP package of static libraries and header files. It
updates or clones the latest version of Trilinos as necessary, sets up a build environment
for MSVC14, configures and runs CMake, then builds the specified Trilinos packages using Ninja
and MSVC. After the build is complete, it creates the ZIP package using CPack. The output
for each step of the process is recorded in individual files (e.g., configure_output.txt,
build_output.txt, etc.) within the specified working directory.
##### ctest_windows_mpi_debug.cmake
CMake file that sets the build configuration for a Debug build before calling
*TrilinosCTestDriverCore.windows.msvc.cmake*.
##### ctest_windows_mpi_release.cmake
CMake file that sets the build configuration for a Release build before calling
*TrilinosCTestDriverCore.windows.msvc.cmake*
##### task_driver_windows.bat
Windows batch script that sets important environment variables before running
*ctest_windows_mpi_debug.cmake* and *ctest_windows_mpi_release.cmake* using CTest.
# Notes and Issues
- The scripts assume they are starting from a clean environment. All necessary environment
variables are set by the scripts.
- When installing Perl on Windows, it likes to insert itself into the system or user PATH
environment variable. This can have unintended consequences when running CMake, the most
notable of which is that CMake will (wrongly) assume certain executables in Perl are part
of the compiler. As a result, CMake won't find the correct MSVC compiler and linker, and
the configure step will fail. If possible, try to keep Perl out of the PATH when using
these scripts, or else modify the PATH before running the scripts to remove Perl.
- As noted in Trilinos Pull Request [#1197][i1], the /bigobj compiler flag is necessary for
debug builds, and is included in the build scripts.
- At the time of writing, the Zoltan tests fail on Windows as noted in Trilinos Issue [#1440][i2].
- The packages are currently built as static libraries. If shared libraries are desired, an
effort would need to be made to resolve certain build issues. Also, care would need to be
taken to ensure PATHs are setup correctly for the libraries to find each other during the
build and testing.
[i1]: https://github.com/trilinos/Trilinos/pull/1197
[i2]: https://github.com/trilinos/Trilinos/issues/1440
| /roltrilinos-0.0.9.tar.gz/roltrilinos-0.0.9/cmake/ctest/drivers/windows/README.md | 0.427038 | 0.74644 | README.md | pypi |
"""Tools for accuracy and error evaluation."""
__all__ = [
"projection_error",
"frobenius_error",
"lp_error",
"Lp_error",
]
import numpy as np
from scipy import linalg as la
from ..pre import projection_error
def _absolute_and_relative_error(X, Y, norm):
"""Compute the absolute and relative errors between X and Y, where Y is an
approximation to X:
absolute_error = ||X - Y||,
relative_error = ||X - Y|| / ||X|| = absolute_error / ||X||,
with ||X|| defined by norm(X).
"""
norm_of_data = norm(X)
absolute_error = norm(X - Y)
return absolute_error, absolute_error / norm_of_data
def frobenius_error(X, Y):
"""Compute the absolute and relative Frobenius-norm errors between two
snapshot sets X and Y, where Y is an approximation to X:
absolute_error = ||X - Y||_F,
relative_error = ||X - Y||_F / ||X||_F.
Parameters
----------
X : (n, k)
"True" data. Each column is one snapshot, i.e., X[:, j] is the data
at some time t[j].
Y : (n, k)
An approximation to X, i.e., Y[:, j] approximates X[:, j] and
corresponds to some time t[j].
Returns
-------
abs_err : float
Absolute error ||X - Y||_F.
rel_err : float
Relative error ||X - Y||_F / ||X||_F.
"""
# Check dimensions.
if X.shape != Y.shape:
raise ValueError("truth X and approximation Y not aligned")
if X.ndim != 2:
raise ValueError("X and Y must be two-dimensional")
# Compute the errors.
return _absolute_and_relative_error(X, Y, lambda Z: la.norm(Z, ord="fro"))
def lp_error(X, Y, p=2, normalize=False):
"""Compute the absolute and relative lp-norm errors between two snapshot
sets X and Y, where Y is an approximation to X:
absolute_error_j = ||X_j - Y_j||_p,
relative_error_j = ||X_j - Y_j||_p / ||X_j||_p.
Parameters
----------
X : (n, k) or (n,) ndarray
"True" data. Each column is one snapshot, i.e., X[:, j] is the data
at some time t[j]. If one-dimensional, all of X is a single snapshot.
Y : (n, k) or (n,) ndarray
An approximation to X, i.e., Y[:, j] approximates X[:, j] and
corresponds to some time t[j]. If one-dimensional, all of Y is a
single snapshot approximation.
p : float
Order of the lp norm (default p=2 is the Euclidean norm). Used as
the `ord` argument for scipy.linalg.norm(); see options at
docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.norm.html.
normalize : bool
If true, compute the normalized absolute error instead of the relative
error, defined by
Normalized_absolute_error_j = ||X_j - Y_j||_2 / max_j{||X_j||_2}.
Returns
-------
abs_err : (k,) ndarray or float
Absolute error of each pair of snapshots X[:, j] and Y[:, j]. If X
and Y are one-dimensional, X and Y are treated as single snapshots, so
the error is a float.
rel_err : (k,) ndarray or float
Relative or normed absolute error of each pair of snapshots X[:, j]
and Y[:, j]. If X and Y are one-dimensional, X and Y are treated as
single snapshots, so the error is a float.
"""
# Check p.
if not np.isscalar(p) or p <= 0:
raise ValueError("norm order p must be positive (np.inf ok)")
# Check dimensions.
if X.shape != Y.shape:
raise ValueError("truth X and approximation Y not aligned")
if X.ndim not in (1, 2):
raise ValueError("X and Y must be one- or two-dimensional")
# Compute the error.
norm_of_data = la.norm(X, ord=p, axis=0)
if normalize:
norm_of_data = norm_of_data.max()
absolute_error = la.norm(X - Y, ord=p, axis=0)
return absolute_error, absolute_error / norm_of_data
def Lp_error(X, Y, t=None, p=2):
"""Compute the absolute and relative Lp-norm error (with respect to time)
between two snapshot sets X and Y where Y is an approximation to X:
absolute_error = ||X - Y||_{L^p},
relative_error = ||X - Y||_{L^p} / ||X||_{L^p},
where
||Z||_{L^p} = (int_{t} ||z(t)||_{p} dt)^{1/p}, p < infinity,
||Z||_{L^p} = sup_{t}||z(t)||_{p}, p = infinity.
The trapezoidal rule is used to approximate the integrals (for finite p).
This error measure is only consistent for data sets where each snapshot
represents function values, i.e., X[:, j] = [q(t1), q(t2), ..., q(tk)]^T.
Parameters
----------
X : (n, k) or (k,) ndarray
"True" data corresponding to time t. Each column is one snapshot,
i.e., X[:, j] is the data at time t[j]. If one-dimensional, each entry
is one snapshot.
Y : (n, k) or (k,) ndarray
An approximation to X, i.e., Y[:, j] approximates X[:, j] and
corresponds to time t[j]. If one-dimensional, each entry is one
snapshot.
t : (k,) ndarray
Time domain of the data X and the approximation Y.
Required unless p == np.inf.
p : float > 0
Order of the Lp norm. May be infinite (np.inf).
Returns
-------
abs_err : float
Absolute error ||X - Y||_{L^p}.
rel_err : float
Relative error ||X - Y||_{L^p} / ||X||_{L^p}.
"""
# Check p.
if not np.isscalar(p) or p <= 0:
raise ValueError("norm order p must be positive (np.inf ok)")
# Check dimensions.
if X.shape != Y.shape:
raise ValueError("truth X and approximation Y not aligned")
if X.ndim == 1:
X = np.atleast_2d(X)
Y = np.atleast_2d(Y)
elif X.ndim > 2:
raise ValueError("X and Y must be one- or two-dimensional")
# Pick the norm based on p.
if 0 < p < np.inf:
if t is None:
raise ValueError("time t required for p < infinty")
if t.ndim != 1:
raise ValueError("time t must be one-dimensional")
if X.shape[-1] != t.shape[0]:
raise ValueError("truth X not aligned with time t")
def pnorm(Z):
return (np.trapz(np.sum(np.abs(Z)**p, axis=0), t))**(1/p)
else: # p == np.inf
def pnorm(Z):
return np.max(np.abs(Z), axis=0).max()
# Compute the error.
return _absolute_and_relative_error(X, Y, pnorm) | /rom_operator_inference-1.4.1-py3-none-any.whl/rom_operator_inference/post/_errors.py | 0.954393 | 0.698921 | _errors.py | pypi |
__all__ = []
import numpy as np
import scipy.interpolate
from ... import pre
from ...utils import hdf5_savehandle, hdf5_loadhandle
from .._base import _BaseParametricROM
from ..nonparametric._base import _NonparametricOpInfROM
from .. import operators
class _InterpolatedOpInfROM(_BaseParametricROM):
"""Base class for parametric reduced-order models where the parametric
dependence of operators are handled with elementwise interpolation, i.e,
A(µ)[i,j] = Interpolator([µ1, µ2, ...], [A1[i,j], A2[i,j], ...])(µ).
where µ1, µ2, ... are parameter values and A1, A2, ... are the
corresponding operator matrices, e.g., A1 = A(µ1). That is, individual
operators is learned for each training parameter, and those operators are
interpolated elementwise to construct operators for new parameter values.
"""
# Must be specified by child classes.
_ModelFitClass = NotImplemented
def __init__(self, modelform, InterpolatorClass="auto"):
"""Set the model form (ROM structure) and interpolator type.
Parameters
----------
modelform : str
Structure of the reduced-order model. See the class docstring.
InterpolatorClass : type or str
Class for elementwise operator interpolation. Must obey the syntax
>>> interpolator = InterpolatorClass(data_points, data_values)
>>> interpolator_evaluation = interpolator(new_data_point)
Convenience options:
* "cubicspline": scipy.interpolate.CubicSpline (p = 1)
* "linear": scipy.interpolate.LinearNDInterpolator (p > 1)
* "auto" (default): choose based on the parameter dimension.
"""
_BaseParametricROM.__init__(self, modelform)
# Validate the _ModelFitClass.
if not issubclass(self._ModelFitClass, _NonparametricOpInfROM):
raise RuntimeError("invalid _ModelFitClass "
f"'{self._ModelFitClass.__name__}'")
# Save the interpolator class.
self.__autoIC = (InterpolatorClass == "auto")
if InterpolatorClass == "cubicspline":
self.InterpolatorClass = scipy.interpolate.CubicSpline
elif InterpolatorClass == "linear":
self.InterpolatorClass = scipy.interpolate.LinearNDInterpolator
elif not isinstance(InterpolatorClass, str):
self.InterpolatorClass = InterpolatorClass
elif not self.__autoIC:
raise ValueError("invalid InterpolatorClass "
f"'{InterpolatorClass}'")
# Properties --------------------------------------------------------------
@property
def s(self):
"""Number of training parameter samples, i.e., the number of data
points in the interpolation scheme.
"""
return self.__s
def __len__(self):
"""Number of training parameter samples, i.e., the number of ROMS to
interpolate between.
"""
return self.s
# Fitting -----------------------------------------------------------------
def _check_parameters(self, parameters):
"""Extract the parameter dimension and ensure it is consistent
across parameter samples.
"""
shape = np.shape(parameters[0])
if any(np.shape(param) != shape for param in parameters):
raise ValueError("parameter dimension inconsistent across samples")
self.__s = len(parameters)
self._set_parameter_dimension(parameters)
# If required, select the interpolator based on parameter dimension.
if self.__autoIC:
if self.p == 1:
self.InterpolatorClass = scipy.interpolate.CubicSpline
else:
self.InterpolatorClass = scipy.interpolate.LinearNDInterpolator
def _split_operator_dict(self, known_operators):
"""Unzip the known operators dictionary into separate dictionaries,
one for each parameter sample. For example:
{ [
"A": [A1, A2, A3], {"A": A1, "H": H1, "B": B},
"H": [H1, None, H3], --> {"A": A2, "B": B},
"B": B {"A": A3, "H": H3, "B": B}
} ]
Also check that the right number of operators is specified.
Parameters
----------
known_operators : dict or None
Maps modelform keys to list of s operators.
Returns
-------
known_operators_list : list(dict) or None
List of s dictionarys mapping modelform keys to single operators.
"""
if known_operators is None:
return [None] * self.s
if not isinstance(known_operators, dict):
raise TypeError("known_operators must be a dictionary")
# Check that each dictionary value is a list.
for key in known_operators.keys():
val = known_operators[key]
if isinstance(val, np.ndarray) and val.shape[0] != self.s:
# Special case: single operator matrix given, not a list.
# TODO: if r == s this could be misinterpreted.
known_operators[key] = [val] * self.s
elif not isinstance(val, list):
raise TypeError("known_operators must be a dictionary mapping "
"a string to a list of ndarrays")
# Check length of each list in the dictionary.
if not all(len(val) == self.s for val in known_operators.values()):
raise ValueError("known_operators dictionary must map a modelform "
f"key to a list of s = {self.s} ndarrays")
# "Unzip" the dictionary of lists to a list of dictionaries.
return [
{key: known_operators[key][i]
for key in known_operators.keys()
if known_operators[key][i] is not None}
for i in range(self.s)
]
def _check_number_of_training_datasets(self, datasets):
"""Ensure that each data set has the same number of entries as
the number of parameter samples.
Parameters
----------
datasets: list of (ndarray, str) tuples
Datasets paired with labels, e.g., [(Q, "states"), (dQ, "ddts")].
"""
for data, label in datasets:
if len(data) != self.s:
raise ValueError(f"len({label}) = {len(data)} "
f"!= {self.s} = len(parameters)")
def _process_fit_arguments(self, basis, parameters, states, lhss, inputs,
regularizers, known_operators):
"""Do sanity checks, extract dimensions, and check data sizes."""
# Intialize reset.
self._clear() # Clear all data (basis and operators).
self.basis = basis # Store basis and (hence) reduced dimension.
# Validate parameters and set parameter dimension / num training sets.
self._check_parameters(parameters)
# Replace any None arguments with [None, None, ..., None] (s times).
if states is None:
states = [None] * self.s
if lhss is None:
lhss = [None] * self.s
if inputs is None:
inputs = [None] * self.s
# Interpret regularizers argument.
_reg = regularizers
if _reg is None or np.isscalar(_reg) or len(_reg) != self.s:
regularizers = [regularizers] * self.s
# Separate known operators into one dictionary per parameter sample.
if isinstance(known_operators, list):
knownops_list = known_operators
else:
knownops_list = self._split_operator_dict(known_operators)
# Check that the number of training sets is consistent.
self._check_number_of_training_datasets([
(parameters, "parameters"),
(states, "states"),
(lhss, self._LHS_ARGNAME),
(inputs, "inputs"),
(regularizers, "regularizers"),
(knownops_list, "known_operators"),
])
return states, lhss, inputs, regularizers, knownops_list
def _interpolate_roms(self, parameters, roms):
"""Interpolate operators from a collection of non-parametric ROMs.
Parameters
----------
parameters : (s, p) ndarray or (s,) ndarray
Parameter values corresponding to the training data, either
s p-dimensional vectors or s scalars (parameter dimension p = 1).
roms : list of s ROM objects (of a class derived from _BaseROM)
Trained non-parametric reduced-order models.
"""
# Ensure that all ROMs are trained.
for rom in roms:
if not isinstance(rom, self._ModelFitClass):
raise TypeError("expected roms of type "
f"{self._ModelFitClass.__name__}")
rom._check_is_trained()
# Extract dimensions from the ROMs and check for consistency.
if self.basis is None:
self.r = roms[0].r
if 'B' in self.modelform:
self.m = roms[0].m
for rom in roms:
if rom.modelform != self.modelform:
raise ValueError("ROMs to interpolate must have "
f"modelform='{self.modelform}'")
if rom.r != self.r:
raise ValueError("ROMs to interpolate must have equal "
"dimensions (inconsistent r)")
if rom.m != self.m:
raise ValueError("ROMs to interpolate must have equal "
"dimensions (inconsistent m)")
# Extract the operators from the individual ROMs.
for key in self.modelform:
attr = f"{key}_"
ops = [getattr(rom, attr).entries for rom in roms]
if all(np.all(ops[0] == op) for op in ops):
# This operator does not depend on the parameters.
OperatorClass = operators.nonparametric_operators[key]
setattr(self, attr, OperatorClass(ops[0]))
else:
# This operator varies with the parameters (so interpolate).
OperatorClass = operators.interpolated_operators[key]
setattr(self, attr, OperatorClass(parameters, ops,
self.InterpolatorClass))
def fit(self, basis, parameters, states, lhss, inputs=None,
regularizers=0, known_operators=None):
"""Learn the reduced-order model operators from data.
Parameters
----------
basis : (n, r) ndarray or None
Basis for the linear reduced space (e.g., POD basis matrix).
If None, statess and lhss are assumed to already be projected.
parameters : (s, p) ndarray or (s,) ndarray
Parameter values corresponding to the training data, either
s p-dimensional vectors or s scalars (parameter dimension p = 1).
states : list of s (n, k) or (r, k) ndarrays
State snapshots for each parameter value: `states[i]` corresponds
to `parameters[i]` and contains column-wise state data, i.e.,
`states[i][:, j]` is a single snapshot.
Data may be either full order (n rows) or reduced order (r rows).
lhss : list of s (n, k) or (r, k) ndarrays
Left-hand side data for ROM training corresponding to each
parameter value: `lhss[i]` corresponds to `parameters[i]` and
contains column-wise left-hand side data, i.e., `lhss[i][:, j]`
corresponds to the state snapshot `states[i][:, j]`.
Data may be either full order (n rows) or reduced order (r rows).
* Steady: forcing function.
* Discrete: column-wise next iteration
* Continuous: time derivative of the state
inputs : list of s (m, k) or (k,) ndarrays or None
Inputs for ROM training corresponding each parameter value:
`inputs[i]` corresponds to `parameters[i]` and contains
column-wise input data, i.e., `inputs[i][:, j]` corresponds to the
state snapshot `states[i][:, j]`.
If m = 1 (scalar input), then each `inputs[i]` may be a one-
dimensional array.
This argument is required if 'B' is in `modelform` but must be
None if 'B' is not in `modelform`.
regularizers : list of s (float >= 0, (d, d) ndarray, or r of these)
Tikhonov regularization factor(s) for each parameter value:
`regularizers[i]` is the regularization factor for the regression
using data corresponding to `parameters[i]`. See lstsq.solve().
Here, d is the number of unknowns in each decoupled least-squares
problem, e.g., d = r + m when `modelform`="AB".
known_operators : dict or None
Dictionary of known full-order operators at each parameter value.
Corresponding reduced-order operators are computed directly
through projection; remaining operators are inferred from data.
Keys must match the modelform; values are a list of s ndarrays:
* 'c': (n,) constant term c.
* 'A': (n, n) linear state matrix A.
* 'H': (n, n**2) quadratic state matrix H.
* 'G': (n, n**3) cubic state matrix G.
* 'B': (n, m) input matrix B.
If operators are known for some parameter values but not others,
use None whenever the operator must be inferred, e.g., for
parameters = [µ1, µ2, µ3, µ4, µ5], if A1, A3, and A4 are known
linear state operators at µ1, µ3, and µ4, respectively, set
known_operators = {'A': [A1, None, A3, A4, None]}.
For known operators (e.g., A) that do not depend on the parameters,
known_operators = {'A': [A, A, A, A, A]} and
known_operators = {'A': A} are equivalent.
Returns
-------
self
"""
args = self._process_fit_arguments(basis, parameters,
states, lhss, inputs,
regularizers, known_operators)
states, lhss, inputs, regularizers, knownops_list = args
# Distribute training data to individual OpInf problems.
nonparametric_roms = [
self._ModelFitClass(self.modelform).fit(
self.basis,
states[i], lhss[i], inputs[i],
regularizers[i], knownops_list[i]
) for i in range(self.s)
]
# TODO: split into _[construct/evaluate]_solver() paradigm?
# If so, move dimension extraction to construct_solver().
# Construct interpolated operators.
self._interpolate_roms(parameters, nonparametric_roms)
return self
def set_interpolator(self, InterpolatorClass):
"""Construct the interpolators for the operator entries.
Use this method to change the interpolator after calling fit().
Parameters
----------
InterpolatorClass : type
Class for the elementwise interpolation. Must obey the syntax
>>> interpolator = InterpolatorClass(data_points, data_values)
>>> interpolator_evaluation = interpolator(new_data_point)
This is usually a class from scipy.interpolate.
"""
for key in self.modelform:
op = getattr(self, f"{key}_")
if operators.is_parametric_operator(op):
op.set_interpolator(InterpolatorClass)
# Model persistence -------------------------------------------------------
def save(self, savefile, save_basis=True, overwrite=False):
"""Serialize the ROM, saving it in HDF5 format.
The model can then be loaded with the load() class method.
Parameters
----------
savefile : str
File to save to, with extension '.h5' (HDF5).
savebasis : bool
If True, save the basis as well as the reduced operators.
If False, only save reduced operators.
overwrite : bool
If True and the specified file already exists, overwrite the file.
If False and the specified file already exists, raise an error.
"""
self._check_is_trained()
with hdf5_savehandle(savefile, overwrite=overwrite) as hf:
# Store ROM modelform.
meta = hf.create_dataset("meta", shape=(0,))
meta.attrs["modelform"] = self.modelform
# Store basis (optionally) if it exists.
if (self.basis is not None) and save_basis:
meta.attrs["BasisClass"] = self.basis.__class__.__name__
self.basis.save(hf.create_group("basis"))
# Store reduced operators.
for key, op in zip(self.modelform, self):
if "parameters" not in hf:
hf.create_dataset("parameters", data=op.parameters)
hf.create_dataset(f"operators/{key}_", data=op.matrices)
@classmethod
def load(cls, loadfile, InterpolatorClass):
"""Load a serialized ROM from an HDF5 file, created previously from
a ROM object's save() method.
Parameters
----------
loadfile : str
File to load from, which should end in '.h5'.
InterpolatorClass : type or str
Class for elementwise operator interpolation. Must obey the syntax
>>> interpolator = InterpolatorClass(data_points, data_values)
>>> interpolator_evaluation = interpolator(new_data_point)
Convenience options:
* "cubicspline": scipy.interpolate.CubicSpline (p = 1)
* "linear": scipy.interpolate.LinearNDInterpolator (p > 1)
* "auto" (default): choose based on the parameter dimension.
Returns
-------
model : _NonparametricOpInfROM
Trained reduced-order model.
"""
with hdf5_loadhandle(loadfile) as hf:
if "meta" not in hf:
raise ValueError("invalid save format (meta/ not found)")
if "operators" not in hf:
raise ValueError("invalid save format (operators/ not found)")
if "parameters" not in hf:
raise ValueError("invalid save format (parameters/ not found)")
# Load metadata.
modelform = hf["meta"].attrs["modelform"]
basis = None
# Load basis if present.
if "basis" in hf:
BasisClassName = hf["meta"].attrs["BasisClass"]
basis = getattr(pre, BasisClassName).load(hf["basis"])
# Load operators.
parameters = hf["parameters"][:]
ops = {}
for key in modelform:
attr = f"{key}_"
op = hf[f"operators/{attr}"][:]
if op.ndim == (1 if key == "c" else 2):
# This is a nonparametric operator.
OpClass = operators.nonparametric_operators[key]
ops[attr] = OpClass(op)
else:
# This is a parametric operator.
OpClass = operators.interpolated_operators[key]
ops[attr] = OpClass(parameters, op, InterpolatorClass)
# Construct the model.
return cls(modelform, InterpolatorClass)._set_operators(basis, **ops) | /rom_operator_inference-1.4.1-py3-none-any.whl/rom_operator_inference/core/interpolate/_base.py | 0.711531 | 0.364014 | _base.py | pypi |
__all__ = [
"is_operator",
"is_parametric_operator",
]
import abc
import numpy as np
def is_operator(op):
"""Return True if `op` is a valid Operator object."""
return isinstance(op, (
_BaseNonparametricOperator,
_BaseParametricOperator)
)
def is_parametric_operator(op):
"""Return True if `op` is a valid ParametricOperator object."""
return isinstance(op, _BaseParametricOperator)
class _BaseNonparametricOperator(abc.ABC):
"""Base class for reduced-order model operators that do not depend on
external parameters. Call the instantiated object to evaluate the operator
on an input.
Attributes
----------
entries : ndarray
Actual NumPy array representing the operator.
shape : tuple
Shape of the operator entries array.
"""
# Properties --------------------------------------------------------------
@property
def entries(self):
"""Discrete representation of the operator."""
return self.__entries
@property
def shape(self):
"""Shape of the operator."""
return self.entries.shape
def __getitem__(self, key):
"""Slice into the entries of the operator."""
return self.entries[key]
def __eq__(self, other):
"""Return True if two Operator objects are numerically equal."""
if not isinstance(other, self.__class__):
return False
if self.shape != other.shape:
return False
return np.all(self.entries == other.entries)
# Abstract methods --------------------------------------------------------
@abc.abstractmethod
def __init__(self, entries):
"""Set operator entries."""
self.__entries = entries
@abc.abstractmethod # pragma: no cover
def evaluate(self, *args, **kwargs):
"""Apply the operator mapping to the given states / inputs."""
raise NotImplementedError
def __call__(self, *args, **kwargs):
"""Apply the operator mapping to the given states / inputs."""
return self.evaluate(*args, **kwargs)
# Validation --------------------------------------------------------------
@staticmethod
def _validate_entries(entries):
"""Ensure argument is a NumPy array and screen for NaN, Inf entries."""
if not isinstance(entries, np.ndarray):
raise TypeError("operator entries must be NumPy array")
if np.any(np.isnan(entries)):
raise ValueError("operator entries must not be NaN")
elif np.any(np.isinf(entries)):
raise ValueError("operator entries must not be Inf")
class _BaseParametricOperator(abc.ABC):
"""Base class for reduced-order model operators that depend on external
parameters. Calling the instantiated object with an external parameter
results in a non-parametric operator:
>>> parametric_operator = MyParametricOperator(init_args)
>>> nonparametric_operator = parametric_operator(parameter_value)
>>> isinstance(nonparametric_operator, _BaseNonparametricOperator)
True
"""
# Properties --------------------------------------------------------------
# Must be specified by child classes.
_OperatorClass = NotImplemented
@property
def OperatorClass(self):
"""Class of nonparametric operator to represent this parametric
operator at a particular parameter, a subclass of
core.operators._BaseNonparametricOperator:
>>> type(MyParametricOperator(init_args)(parameter_value)).
"""
return self._OperatorClass
@property
def p(self):
"""Dimension of the parameter space, i.e., individual parameters are
ndarrays of shape (p,) (or scalars if p = 1)
"""
return self.__p
# Abstract methods --------------------------------------------------------
@abc.abstractmethod
def __init__(self):
"""Validate the OperatorClass.
Child classes must implement this method, which should set and
validate attributes needed to construct the parametric operator.
"""
self.__p = None
# Validate the OperatorClass.
if not issubclass(self.OperatorClass, _BaseNonparametricOperator):
raise RuntimeError("invalid OperatorClass "
f"'{self._OperatorClass.__name__}'")
@abc.abstractmethod
def __call__(self, parameter): # pragma: no cover
"""Return the nonparametric operator corresponding to the parameter,
of type self.OperatorClass.
"""
raise NotImplementedError
# Input validation (shape checking) ---------------------------------------
@staticmethod
def _check_shape_consistency(iterable, prefix="operator matrix"):
"""Ensure that each array in `iterable` has the same shape."""
shape = np.shape(iterable[0])
if any(np.shape(A) != shape for A in iterable):
raise ValueError(f"{prefix} shapes inconsistent")
def _set_parameter_dimension(self, parameters):
"""Extract and save the dimension of the parameter space."""
shape = np.shape(parameters)
if len(shape) == 1:
self.__p = 1
elif len(shape) == 2:
self.__p = shape[1]
else:
raise ValueError("parameter values must be scalars or 1D arrays")
def _check_parameter_dimension(self, parameter):
"""Ensure a new parameter has the expected shape."""
if self.p is None:
raise RuntimeError("parameter dimension p not set")
if np.atleast_1d(parameter).shape[0] != self.p:
raise ValueError(f"expected parameter of shape ({self.p:d},)") | /rom_operator_inference-1.4.1-py3-none-any.whl/rom_operator_inference/core/operators/_base.py | 0.934694 | 0.636494 | _base.py | pypi |
__all__ = [
"InterpolatedConstantOperator",
"InterpolatedLinearOperator",
"InterpolatedQuadraticOperator",
# "InterpolatedCrossQuadraticOperator",
"InterpolatedCubicOperator",
"interpolated_operators",
]
import numpy as np
from ._base import _BaseParametricOperator
from ._nonparametric import (ConstantOperator,
LinearOperator,
QuadraticOperator,
# CrossQuadraticOperator,
CubicOperator)
# Base class ==================================================================
class _InterpolatedOperator(_BaseParametricOperator):
"""Base class for parametric operators where the parameter dependence
is handled with element-wise interpolation, i.e.,
A(µ)[i,j] = Interpolator([µ1, µ2, ...], [A1[i,j], A2[i,j], ...])(µ).
Here A1 is the operator matrix corresponding to the parameter µ1, etc.
The matrix A(µ) for a given µ is constructed by calling the object.
Attributes
----------
parameters : list of `nterms` scalars or (p,) ndarrays
Parameter values at which the operators matrices are known.
matrices : list of `nterms` ndarray, all of the same shape.
Operator matrices corresponding to the `parameters`, i.e.,
`matrices[i]` are the operator entries at the value `parameters[i]`.
p : int
Dimension of the parameter space.
shape : tuple
Shape of the operator entries.
OperatorClass : class
Class of operator to construct, a subclass of
core.operators._BaseNonparametricOperator.
interpolator : scipy.interpolate class (or similar)
Object that constructs the operator at new parameter values, i.e.,
>>> new_operator_entries = interpolator(new_parameter)
"""
# Abstract method implementation ------------------------------------------
def __init__(self, parameters, matrices, InterpolatorClass):
"""Construct the elementwise operator interpolator.
Parameters
----------
parameters : list of `nterms` scalars or 1D ndarrays
Parameter values at which the operators matrices are known.
matrices : list of `nterms` ndarray, all of the same shape.
Operator entries corresponding to the `parameters`.
InterpolatorClass : type
Class for the elementwise interpolation. Must obey the syntax
>>> interpolator = InterpolatorClass(data_points, data_values)
>>> interpolator_evaluation = interpolator(new_data_point)
This is usually a class from scipy.interpolate.
"""
_BaseParametricOperator.__init__(self)
# Ensure there are the same number of parameter samples and matrices.
n_params, n_matrices = len(parameters), len(matrices)
if n_params != n_matrices:
raise ValueError(f"{n_params} = len(parameters) "
f"!= len(matrices) = {n_matrices}")
# Preprocess matrices (e.g., nan/inf checking, compression as needed)
matrices = [self.OperatorClass(A).entries for A in matrices]
# Ensure parameter / matrix shapes are consistent.
self._check_shape_consistency(parameters, "parameter sample")
self._check_shape_consistency(matrices, "operator matrix")
self._set_parameter_dimension(parameters)
# Construct the spline.
self.__parameters = parameters
self.__matrices = matrices
self.set_interpolator(InterpolatorClass)
def __call__(self, parameter):
"""Return the nonparametric operator corresponding to the parameter."""
self._check_parameter_dimension(parameter)
return self.OperatorClass(self.interpolator(parameter))
# Properties --------------------------------------------------------------
@property
def parameters(self):
"""Parameter values at which the operators matrices are known."""
return self.__parameters
@property
def matrices(self):
"""Operator matrices corresponding to the parameters."""
return self.__matrices
@property
def shape(self):
"""Shape: shape of the operator matrices to interpolate."""
return self.matrices[0].shape
def set_interpolator(self, InterpolatorClass):
"""Construct the interpolator for the operator entries.
Parameters
----------
InterpolatorClass : type
Class for the elementwise interpolation. Must obey the syntax
>>> interpolator = InterpolatorClass(data_points, data_values)
>>> interpolator_evaluation = interpolator(new_data_point)
This is usually a class from scipy.interpolate.
"""
self.__interpolator = InterpolatorClass(self.parameters, self.matrices)
@property
def interpolator(self):
"""Interpolator object for evaluating the operator at a parameter."""
return self.__interpolator
def __len__(self):
"""Length: number of data points for the interpolation."""
return len(self.matrices)
def __eq__(self, other):
"""Test whether the parameter samples and operator matrices of two
InterpolatedOperator objects are numerically equal.
"""
if not isinstance(other, self.__class__):
return False
if len(self) != len(other):
return False
if self.shape != other.shape:
return False
paramshape = np.shape(self.parameters[0])
if paramshape != np.shape(other.parameters[1]):
return False
if any(not np.all(left == right)
for left, right in zip(self.parameters,
other.parameters)):
return False
return all(np.all(left == right)
for left, right in zip(self.matrices, other.matrices))
# Public interpolated operator classes ========================================
class InterpolatedConstantOperator(_InterpolatedOperator):
"""Constant operator with elementwise interpolation, i.e.,
c(µ) = Interpolator([µ1, µ2, ...], [c1[i,j], c2[i,j], ...])(µ),
where c1 is the operator vector corresponding to the parameter µ1, etc.
The vector c(µ) for a given µ is constructed by calling the object.
Attributes
----------
parameters : list of `nterms` scalars or (p,) ndarrays
Parameter values at which the operators matrices are known.
matrices : list of `nterms` ndarray, all of the same shape.
Operator matrices corresponding to the `parameters`, i.e.,
`matrices[i]` are the operator entries at the value `parameters[i]`.
p : int
Dimension of the parameter space.
shape : tuple
Shape of the operator entries.
OperatorClass : class
Class of operator to construct, a subclass of
core.operators._BaseNonparametricOperator.
interpolator : scipy.interpolate class (or similar)
Object that constructs the operator at new parameter values, i.e.,
>>> new_operator_entries = interpolator(new_parameter)
"""
_OperatorClass = ConstantOperator
class InterpolatedLinearOperator(_InterpolatedOperator):
"""Linear operator with elementwise interpolation, i.e.,
A(µ) = Interpolator([µ1, µ2, ...], [A1[i,j], A2[i,j], ...])(µ),
where A1 is the operator matrix corresponding to the parameter µ1, etc.
The matrix A(µ) for a given µ is constructed by calling the object.
Attributes
----------
parameters : list of `nterms` scalars or (p,) ndarrays
Parameter values at which the operators matrices are known.
matrices : list of `nterms` ndarray, all of the same shape.
Operator matrices corresponding to the `parameters`, i.e.,
`matrices[i]` are the operator entries at the value `parameters[i]`.
p : int
Dimension of the parameter space.
shape : tuple
Shape of the operator entries.
OperatorClass : class
Class of operator to construct, a subclass of
core.operators._BaseNonparametricOperator.
interpolator : scipy.interpolate class (or similar)
Object that constructs the operator at new parameter values, i.e.,
>>> new_operator_entries = interpolator(new_parameter)
"""
_OperatorClass = LinearOperator
class InterpolatedQuadraticOperator(_InterpolatedOperator):
"""Quadratic operator with elementwise interpolation, i.e.,
H(µ) = Interpolator([µ1, µ2, ...], [H1[i,j], H2[i,j], ...])(µ),
where H1 is the operator matrix corresponding to the parameter µ1, etc.
The matrix H(µ) for a given µ is constructed by calling the object.
Attributes
----------
parameters : list of `nterms` scalars or (p,) ndarrays
Parameter values at which the operators matrices are known.
matrices : list of `nterms` ndarray, all of the same shape.
Operator matrices corresponding to the `parameters`, i.e.,
`matrices[i]` are the operator entries at the value `parameters[i]`.
p : int
Dimension of the parameter space.
shape : tuple
Shape of the operator entries.
OperatorClass : class
Class of operator to construct, a subclass of
core.operators._BaseNonparametricOperator.
interpolator : scipy.interpolate class (or similar)
Object that constructs the operator at new parameter values, i.e.,
>>> new_operator_entries = interpolator(new_parameter)
"""
_OperatorClass = QuadraticOperator
class InterpolatedCubicOperator(_InterpolatedOperator):
"""Cubic operator with elementwise interpolation, i.e.,
G(µ) = Interpolator([µ1, µ2, ...], [G1[i,j], G2[i,j], ...])(µ),
where G1 is the operator matrix corresponding to the parameter µ1, etc.
The matrix G(µ) for a given µ is constructed by calling the object.
Attributes
----------
parameters : list of `nterms` scalars or (p,) ndarrays
Parameter values at which the operators matrices are known.
matrices : list of `nterms` ndarray, all of the same shape.
Operator matrices corresponding to the `parameters`, i.e.,
`matrices[i]` are the operator entries at the value `parameters[i]`.
p : int
Dimension of the parameter space.
shape : tuple
Shape of the operator entries.
OperatorClass : class
Class of operator to construct, a subclass of
core.operators._BaseNonparametricOperator.
interpolator : scipy.interpolate class (or similar)
Object that constructs the operator at new parameter values, i.e.,
>>> new_operator_entries = interpolator(new_parameter)
"""
_OperatorClass = CubicOperator
# Dictionary relating modelform keys to operator classes.
interpolated_operators = {
"c": InterpolatedConstantOperator,
"A": InterpolatedLinearOperator,
"H": InterpolatedQuadraticOperator,
"G": InterpolatedCubicOperator,
"B": InterpolatedLinearOperator,
} | /rom_operator_inference-1.4.1-py3-none-any.whl/rom_operator_inference/core/operators/_interpolate.py | 0.916901 | 0.740948 | _interpolate.py | pypi |
"""Base class for nonparametric Operator Inference reduced-order models."""
__all__ = []
import numpy as np
from .._base import _BaseROM
from ... import lstsq, pre
from ...utils import kron2c, kron3c, hdf5_savehandle, hdf5_loadhandle
class _NonparametricOpInfROM(_BaseROM):
"""Base class for nonparametric Operator Inference reduced-order models."""
# Properties --------------------------------------------------------------
@property
def operator_matrix_(self):
"""r x d(r, m) Operator matrix O_ = [ c_ | A_ | H_ | G_ | B_ ]."""
self._check_is_trained()
return np.column_stack([op.entries for op in self])
@property
def data_matrix_(self):
"""k x d(r, m) Data matrix D = [ 1 | Q^T | (Q ⊗ Q)^T | ... ]."""
if hasattr(self, "solver_"):
return self.solver_.A if (self.solver_ is not None) else None
raise AttributeError("data matrix not constructed (call fit())")
# Fitting -----------------------------------------------------------------
def _check_training_data_shapes(self, datasets):
"""Ensure that each data set has the same number of columns and a
valid number of rows (as determined by the basis).
Parameters
----------
datasets: list of (ndarray, str) tuples
Datasets paired with labels, e.g., [(Q, "states"), (dQ, "ddts")].
"""
data0, label0 = datasets[0]
for data, label in datasets:
if label == "inputs":
if self.m != 1: # inputs.shape = (m, k)
if data.ndim != 2:
raise ValueError("inputs must be two-dimensional "
"(m > 1)")
if data.shape[0] != self.m:
raise ValueError(f"inputs.shape[0] = {data.shape[0]} "
f"!= {self.m} = m")
else: # inputs.shape = (1, k) or (k,)
if data.ndim not in (1, 2):
raise ValueError("inputs must be one- or "
"two-dimensional (m = 1)")
if data.ndim == 2 and data.shape[0] != 1:
raise ValueError("inputs.shape != (1, k) (m = 1)")
else:
if data.ndim != 2:
raise ValueError(f"{label} must be two-dimensional")
if data.shape[0] not in (self.n, self.r):
raise ValueError(f"{label}.shape[0] != n or r "
f"(n={self.n}, r={self.r})")
if data.shape[-1] != data0.shape[-1]:
raise ValueError(f"{label}.shape[-1] = {data.shape[-1]} "
f"!= {data0.shape[-1]} = {label0}.shape[-1]")
def _process_fit_arguments(self, basis, states, lhs, inputs,
known_operators):
"""Prepare training data for Operator Inference by clearing old data,
storing the basis, extracting dimensions, projecting known operators,
validating data sizes, and projecting training data.
Parameters
----------
basis : (n, r) ndarray or None
Basis for the linear reduced space (e.g., POD basis matrix).
If None, states and lhs are assumed to already be projected.
states : (n, k) or (r, k) ndarray
Column-wise snapshot training data. Each column is one snapshot,
either full order (n rows) or projected to reduced order (r rows).
lhs : (n, k) or (r, k) ndarray
Left-hand side data for ROM training. Each column corresponds to
one snapshot, either full order (n rows) or reduced order (r rows).
* Steady: forcing function.
* Discrete: column-wise next iteration
* Continuous: time derivative of the state
inputs : (m, k) or (k,) ndarray or None
Column-wise inputs corresponding to the snapshots. May be a
one-dimensional array if m=1 (scalar input). Required if 'B' is
in `modelform`; must be None if 'B' is not in `modelform`.
Returns
-------
states_ : (r, k) ndarray
Projected state snapshots.
lhs_ : (r, k) ndarray
Projected left-hand-side data.
"""
# Clear all data (basis and operators).
self._clear()
# Store basis and (hence) reduced dimension.
self.basis = basis
# Validate / project any known operators.
self._project_operators(known_operators)
if len(self._projected_operators_) == len(self.modelform):
# Fully intrusive case, nothing to learn with OpInf.
return None, None
# Get state and input dimensions if needed.
if self.basis is None:
self.r = states.shape[0]
self._check_inputargs(inputs, "inputs")
to_check = [(states, "states"), (lhs, self._LHS_ARGNAME)]
if 'B' in self.modelform:
if self.m is None:
self.m = 1 if inputs.ndim == 1 else inputs.shape[0]
to_check.append((inputs, "inputs"))
# Ensure training datasets have consistent sizes.
self._check_training_data_shapes(to_check)
# Encode states and lhs in the reduced subspace (if needed).
states_ = self.encode(states, "states")
lhs_ = self.encode(lhs, self._LHS_ARGNAME)
# Subtract known data from the lhs data.
for key in self._projected_operators_:
if key == 'c': # Known constant term.
lhs_ = lhs_ - np.outer(self.c_(), np.ones(states_.shape[1]))
elif key == 'B': # Known input term.
lhs_ = lhs_ - self.B_(np.atleast_2d(inputs))
else: # Known linear/quadratic/cubic term.
lhs_ = lhs_ - getattr(self, f"{key}_")(states_)
return states_, lhs_
def _assemble_data_matrix(self, states_, inputs):
"""Construct the Operator Inference data matrix D from projected data.
If modelform="cAHB", this is D = [1 | Q_.T | (Q_ ⊗ Q_).T | U.T],
where Q_ = states_ and U = inputs.
Parameters
----------
states_ : (r, k) ndarray
Column-wise projected snapshot training data.
inputs : (m, k) or (k,) ndarray or None
Column-wise inputs corresponding to the snapshots. May be a
one-dimensional array if m=1 (scalar input).
Returns
-------
D : (k, d(r, m)) ndarray
Operator Inference data matrix (no regularization).
"""
to_infer = {key for key in self.modelform
if key not in self._projected_operators_}
D = []
if 'c' in to_infer: # Constant term.
D.append(np.ones((states_.shape[1], 1)))
if 'A' in to_infer: # Linear state term.
D.append(states_.T)
if 'H' in to_infer: # (compact) Quadratic state term.
D.append(kron2c(states_).T)
if 'G' in to_infer: # (compact) Cubic state term.
D.append(kron3c(states_).T)
if 'B' in to_infer: # Linear input term.
D.append(np.atleast_2d(inputs).T)
return np.hstack(D)
def _extract_operators(self, Ohat):
"""Extract and save the inferred operators from the block-matrix
solution to the least-squares problem.
Parameters
----------
Ohat : (r, d(r, m)) ndarray
Block matrix of ROM operator coefficients, the transpose of the
solution to the Operator Inference linear least-squares problem.
"""
to_infer = {key for key in self.modelform
if key not in self._projected_operators_}
i = 0
if 'c' in to_infer: # Constant term (one-dimensional).
self.c_ = Ohat[:, i:i+1][:, 0]
i += 1
if 'A' in to_infer: # Linear state matrix.
self.A_ = Ohat[:, i:i+self.r]
i += self.r
if 'H' in to_infer: # (compact) Qudadratic state matrix.
_r2 = self._r2
self.H_ = Ohat[:, i:i+_r2]
i += _r2
if 'G' in to_infer: # (compact) Cubic state matrix.
_r3 = self._r3
self.G_ = Ohat[:, i:i+_r3]
i += _r3
if 'B' in to_infer: # Linear input matrix.
self.B_ = Ohat[:, i:i+self.m]
i += self.m
return
def _construct_solver(self, basis, states, lhs,
inputs=None, regularizer=0, known_operators=None):
"""Construct a solver object mapping the regularizer to solutions
of the Operator Inference least-squares problem.
Parameters
----------
basis : (n, r) ndarray or None
Basis for the linear reduced space (e.g., POD basis matrix).
If None, states and lhs are assumed to already be projected.
states : (n, k) or (r, k) ndarray
Column-wise snapshot training data. Each column is one snapshot,
either full order (n rows) or projected to reduced order (r rows).
lhs : (n, k) or (r, k) ndarray
Left-hand side data for ROM training. Each column corresponds to
one snapshot, either full order (n rows) or reduced order (r rows).
* Steady: forcing function.
* Discrete: column-wise next iteration
* Continuous: time derivative of the state
inputs : (m, k) or (k,) ndarray or None
Column-wise inputs corresponding to the snapshots. May be a
one-dimensional array if m=1 (scalar input). Required if 'B' is
in `modelform`; must be None if 'B' is not in `modelform`.
regularizer : float >= 0, (d, d) ndarray or list of r of these
Tikhonov regularization factor(s); see lstsq.solve(). Here, d
is the number of unknowns in each decoupled least-squares problem,
e.g., d = r + m when `modelform`="AB". This parameter is used here
only to determine the correct type of solver.
known_operators : dict
Dictionary of known full-order or reduced-order operators.
Corresponding reduced-order operators are computed directly
through projection; remaining operators are inferred.
Keys must match the modelform, values are ndarrays:
* 'c': (n,) constant term c.
* 'A': (n, n) linear state matrix A.
* 'H': (n, n**2) quadratic state matrix H.
* 'G': (n, n**3) cubic state matrix G.
* 'B': (n, m) input matrix B.
"""
states_, lhs_ = self._process_fit_arguments(basis, states, lhs, inputs,
known_operators)
# Fully intrusive case (nothing to learn).
if states_ is lhs_ is None:
self.solver_ = None
return
D = self._assemble_data_matrix(states_, inputs)
self.solver_ = lstsq.solver(D, lhs_.T, regularizer)
def _evaluate_solver(self, regularizer):
"""Evaluate the least-squares solver with the given regularizer.
Parameters
----------
regularizer : float >= 0, (d, d) ndarray or list of r of these
Tikhonov regularization factor(s); see lstsq.solve(). Here, d
is the number of unknowns in each decoupled least-squares problem,
e.g., d = r + m when `modelform`="AB".
"""
# Fully intrusive case (nothing to learn).
if self.solver_ is None:
return
OhatT = self.solver_.predict(regularizer)
self._extract_operators(np.atleast_2d(OhatT.T))
def fit(self, basis, states, lhs, inputs=None,
regularizer=0, known_operators=None):
"""Learn the reduced-order model operators from data.
Parameters
----------
basis : (n, r) ndarray or None
Basis for the linear reduced space (e.g., POD basis matrix).
If None, states and lhs are assumed to already be projected.
states : (n, k) or (r, k) ndarray
Column-wise snapshot training data. Each column is one snapshot,
either full order (n rows) or projected to reduced order (r rows).
lhs : (n, k) or (r, k) ndarray
Left-hand side data for ROM training. Each column corresponds to
one snapshot, either full order (n rows) or reduced order (r rows).
* Steady: forcing function.
* Discrete: column-wise next iteration
* Continuous: time derivative of the state
inputs : (m, k) or (k,) ndarray or None
Column-wise inputs corresponding to the snapshots. May be a
one-dimensional array if m=1 (scalar input). Required if 'B' is
in `modelform`; must be None if 'B' is not in `modelform`.
regularizer : float >= 0, (d, d) ndarray or list of r of these
Tikhonov regularization factor(s); see lstsq.solve(). Here, d
is the number of unknowns in each decoupled least-squares problem,
e.g., d = r + m when `modelform`="AB".
known_operators : dict or None
Dictionary of known full-order operators.
Corresponding reduced-order operators are computed directly
through projection; remaining operators are inferred from data.
Keys must match the modelform; values are ndarrays:
* 'c': (n,) constant term c.
* 'A': (n, n) linear state matrix A.
* 'H': (n, n**2) quadratic state matrix H.
* 'G': (n, n**3) cubic state matrix G.
* 'B': (n, m) input matrix B.
Returns
-------
self
"""
self._construct_solver(basis, states, lhs, inputs, regularizer,
known_operators)
self._evaluate_solver(regularizer)
return self
# Model persistence -------------------------------------------------------
def save(self, savefile, save_basis=True, overwrite=False):
"""Serialize the ROM, saving it in HDF5 format.
The model can then be loaded with the load() class method.
Parameters
----------
savefile : str
File to save to, with extension '.h5' (HDF5).
savebasis : bool
If True, save the basis as well as the reduced operators.
If False, only save reduced operators.
overwrite : bool
If True and the specified file already exists, overwrite the file.
If False and the specified file already exists, raise an error.
"""
self._check_is_trained()
with hdf5_savehandle(savefile, overwrite=overwrite) as hf:
# Store ROM modelform.
meta = hf.create_dataset("meta", shape=(0,))
meta.attrs["modelform"] = self.modelform
# Store basis (optionally) if it exists.
if (self.basis is not None) and save_basis:
meta.attrs["BasisClass"] = self.basis.__class__.__name__
self.basis.save(hf.create_group("basis"))
# Store reduced operators.
for key, op in zip(self.modelform, self):
hf.create_dataset(f"operators/{key}_", data=op.entries)
@classmethod
def load(cls, loadfile):
"""Load a serialized ROM from an HDF5 file, created previously from
a ROM object's save() method.
Parameters
----------
loadfile : str
File to load from, which should end in '.h5'.
Returns
-------
model : _NonparametricOpInfROM
Trained reduced-order model.
"""
with hdf5_loadhandle(loadfile) as hf:
if "meta" not in hf:
raise ValueError("invalid save format (meta/ not found)")
if "operators" not in hf:
raise ValueError("invalid save format (operators/ not found)")
# Load metadata.
modelform = hf["meta"].attrs["modelform"]
basis = None
# Load basis if present.
if "basis" in hf:
BasisClassName = hf["meta"].attrs["BasisClass"]
basis = getattr(pre, BasisClassName).load(hf["basis"])
# Load operators.
operators = {f"{key}_": hf[f"operators/{key}_"][:]
for key in modelform}
# Construct the model.
return cls(modelform)._set_operators(basis, **operators) | /rom_operator_inference-1.4.1-py3-none-any.whl/rom_operator_inference/core/nonparametric/_base.py | 0.879328 | 0.457621 | _base.py | pypi |
"""Operator Inference least-squares solvers with Tikhonov regularization."""
__all__ = [
"SolverL2",
"SolverL2Decoupled",
"SolverTikhonov",
"SolverTikhonovDecoupled",
"solver",
"solve",
]
import warnings
import numpy as np
import scipy.linalg as la
from ._base import _BaseSolver
# Solver classes ==============================================================
class _BaseTikhonovSolver(_BaseSolver):
"""Base solver for regularized linear least-squares problems of the form
sum_{i} min_{x_i} ||Ax_i - b_i||^2 + ||P_i x_i||^2.
"""
def __init__(self):
"""Initialize attributes."""
self.__A, self.__B = None, None
# Properties: matrices ----------------------------------------------------
@property
def A(self):
"""(k, d) ndarray: "left-hand side" data matrix."""
return self.__A
@A.setter
def A(self, A):
raise AttributeError("can't set attribute (call fit())")
@property
def B(self):
"""(k, r) ndarray: "right-hand side" matrix B = [ b_1 | ... | b_r ]."""
return self.__B
@B.setter
def B(self, B):
raise AttributeError("can't set attribute (call fit())")
# Properties: matrix dimensions -------------------------------------------
@property
def k(self):
"""int > 0 : number of equations in the least-squares problem
(number of rows of A).
"""
return self.A.shape[0] if self.A is not None else None
@property
def d(self):
"""int > 0 : number of unknowns to learn in each problem
(number of columns of A).
"""
return self.A.shape[1] if self.A is not None else None
@property
def r(self):
"""int > 0: number of independent least-squares problems
(number of columns of B).
"""
return self.B.shape[1] if self.B is not None else None
# Validation --------------------------------------------------------------
def _process_fit_arguments(self, A, B):
"""Verify the dimensions of A and B are consistent with the expression
AX - B, then save A and B.
Parameters
----------
A : (k, d) ndarray
The "left-hand side" matrix.
B : (k, r) ndarray
The "right-hand side" matrix B = [ b_1 | b_2 | ... | b_r ].
"""
# Extract the dimensions of A (and ensure A is two-dimensional).
k, d = A.shape
if k < d:
warnings.warn("original least-squares system is underdetermined!",
la.LinAlgWarning, stacklevel=2)
self.__A = A
# Check dimensions of b.
if B.ndim == 1:
B = B.reshape((-1, 1))
if B.ndim != 2:
raise ValueError("`B` must be one- or two-dimensional")
if B.shape[0] != k:
raise ValueError("inputs not aligned: A.shape[0] != B.shape[0]")
self.__B = B
def _check_is_trained(self, attr=None):
"""Raise an AttributeError if fit() has not been called."""
trained = (self.A is not None) and (self.B is not None)
if attr is not None:
trained *= hasattr(self, attr)
if not trained:
raise AttributeError("lstsq solver not trained (call fit())")
# Post-processing ---------------------------------------------------------
def cond(self):
"""Calculate the 2-norm condition number of the data matrix A."""
self._check_is_trained()
return np.linalg.cond(self.A)
def misfit(self, X):
"""Calculate the data misfit (residual) of the non-regularized problem
for each column of B = [ b_1 | ... | b_r ].
Parameters
----------
X : (d, r) ndarray
Least-squares solution X = [ x_1 | ... | x_r ]; each column is the
solution to the subproblem with the corresponding column of B.
Returns
-------
resids : (r,) ndarray or float (r = 1)
Data misfits ||Ax_i - b_i||_2^2, i = 1, ..., r.
"""
self._check_is_trained()
if self.r == 1 and X.ndim == 1:
X = X.reshape((-1, 1))
if X.shape != (self.d, self.r):
raise ValueError(f"X.shape = {X.shape} != "
f"{(self.d, self.r)} = (d, r)")
resids = np.sum((self.A @ X - self.B)**2, axis=0)
return resids[0] if self.r == 1 else resids
class SolverL2(_BaseTikhonovSolver):
"""Solve the l2-norm ordinary least-squares problem with L2 regularization:
sum_{i} min_{x_i} ||Ax_i - b_i||_2^2 + ||λx_i||_2^2, λ ≥ 0,
or, written in the Frobenius norm,
min_{X} ||AX - B||_F^2 + ||λX||_F^2, λ ≥ 0.
"""
# Validation --------------------------------------------------------------
def _process_regularizer(self, regularizer):
"""Validate the regularization hyperparameter and return
regularizer^2."""
if not np.isscalar(regularizer):
raise TypeError("regularization hyperparameter must be a scalar")
if regularizer < 0:
raise ValueError("regularization hyperparameter must be "
"non-negative")
return regularizer**2
# Helper methods ----------------------------------------------------------
def _inv_svals(self, regularizer):
"""Compute the regularized inverse singular value matrix,
Σ^* = Σ (Σ^2 + (λ^2)I)^{-1}. Note Σ^* = Σ^{-1} for λ = 0.
"""
regularizer2 = self._process_regularizer(regularizer)
svals = self._svals
return 1/svals if regularizer2 == 0 else svals/(svals**2+regularizer2)
# Main methods ------------------------------------------------------------
def fit(self, A, B):
"""Take the SVD of A in preparation to solve the least-squares problem.
Parameters
----------
A : (k, d) ndarray
The "left-hand side" matrix.
B : (k, r) ndarray
The "right-hand side" matrix B = [ b_1 | b_2 | ... | b_r ].
"""
self._process_fit_arguments(A, B)
# Compute the SVD of A and save what is needed to solve the problem.
U, svals, Vt = la.svd(self.A, full_matrices=False)
self._V = Vt.T
self._svals = svals
self._UtB = U.T @ self.B
return self
def predict(self, regularizer):
"""Solve the least-squares problem with the non-negative scalar
regularization hyperparameter λ.
Parameters
----------
regularizer : float ≥ 0
Scalar regularization hyperparameter.
Returns
-------
X : (d, r) or (d,) ndarray
Least-squares solution X = [ x_1 | ... | x_r ]; each column is the
solution to the subproblem with the corresponding column of B.
The result is flattened to a one-dimensional array if r = 1.
"""
self._check_is_trained("_V")
svals_inv = self._inv_svals(regularizer).reshape((-1, 1))
X = self._V @ (svals_inv * self._UtB) # X = V svals_inv U.T B
return np.ravel(X) if self.r == 1 else X
# Post-processing ---------------------------------------------------------
def cond(self):
"""Calculate the 2-norm condition number of the data matrix A."""
self._check_is_trained("_svals")
return abs(self._svals.max() / self._svals.min())
def regcond(self, regularizer):
"""Compute the 2-norm condition number of the regularized data matrix.
Parameters
----------
regularizer : float ≥ 0
Scalar regularization hyperparameter.
Returns
-------
rc : float ≥ 0
cond([A.T | λI.T].T), computed from filtered singular values of A.
"""
self._check_is_trained("_svals")
svals2 = self._svals**2 + self._process_regularizer(regularizer)
return np.sqrt(svals2.max() / svals2.min())
def residual(self, X, regularizer):
"""Calculate the residual of the regularized problem for each column of
B = [ b_1 | ... | b_r ], i.e., ||Ax_i - b_i||_2^2 + ||λx_i||_2^2.
Parameters
----------
X : (d, r) ndarray
Least-squares solution X = [ x_1 | ... | x_r ]; each column is the
solution to the subproblem with the corresponding column of B.
regularizer : float ≥ 0
Scalar regularization hyperparameter.
Returns
-------
resids : (r,) ndarray or float (r = 1)
Residuals ||Ax_i - b_i||_2^2 + ||λx_i||_2^2, i = 1, ..., r.
"""
self._check_is_trained()
regularizer2 = self._process_regularizer(regularizer)
return self.misfit(X) + regularizer2*np.sum(X**2, axis=0)
class SolverL2Decoupled(SolverL2):
"""Solve r independent l2-norm ordinary least-squares problems, each with
the same data matrix but different L2 regularizations,
min_{x_i} ||Ax_i - b_i||_2^2 + ||λ_i x_i||_2^2, λ_i > 0.
"""
# Validation --------------------------------------------------------------
def _check_regularizers(self, regularizers):
if len(regularizers) != self.r:
raise ValueError("len(regularizers) != number of columns of B")
# Main methods ------------------------------------------------------------
def predict(self, regularizers):
"""Solve the least-squares problem with regularization hyperparameters
regularizers.
Parameters
----------
regularizers : sequence of r floats or (r,) ndarray
Scalar regularization hyperparameters, one for each column of B.
Returns
-------
X : (d, r) or (d,) ndarray
Least-squares solution X = [ x_1 | ... | x_r ]; each column is the
solution to the subproblem with the corresponding column of B.
The result is flattened to a one-dimensional array if r = 1.
"""
self._check_is_trained("_V")
self._check_regularizers(regularizers)
# Allocate space for the solution.
X = np.empty((self.d, self.r))
# Solve each independent regularized lstsq problem (iteratively).
for j, regularizer in enumerate(regularizers):
svals_inv = self._inv_svals(regularizer)
# X = V svals_inv U.T B
X[:, j] = self._V @ (svals_inv * self._UtB[:, j])
return np.ravel(X) if self.r == 1 else X
# Post-processing ---------------------------------------------------------
def regcond(self, regularizers):
"""Compute the 2-norm condition number of each regularized data matrix.
Parameters
----------
regularizers : sequence of r floats or (r,) ndarray
Scalar regularization hyperparameters, one for each column of B.
Returns
-------
rcs : (r,) ndarray
cond([A.T | (λ_i I).T].T), i = 1, ..., r, computed from filtered
singular values of the data matrix A.
"""
self._check_is_trained("_svals")
self._check_regularizers(regularizers)
regularizer_2s = np.array([
self._process_regularizer(lm) for lm in regularizers])
svals2 = self._svals**2 + regularizer_2s.reshape((-1, 1))
return np.sqrt(svals2.max(axis=1) / svals2.min(axis=1))
def residual(self, X, regularizers):
"""Calculate the residual of the regularized problem for each column of
B = [ b_1 | ... | b_r ], i.e., ||Ax_i - b_i||_2^2 + ||λ_i x_i||_2^2.
Parameters
----------
X : (d, r) ndarray
Least-squares solution X = [ x_1 | ... | x_r ]; each column is the
solution to the subproblem with the corresponding column of B.
regularizers : sequence of r floats or (r,) ndarray
Scalar regularization hyperparameters, one for each column of B.
Returns
-------
resids : (r,) ndarray
Residuals ||Ax_i - b_i||_2^2 + ||λ_i x_i||_2^2, i = 1, ..., r.
"""
self._check_is_trained()
self._check_regularizers(regularizers)
regularizer_2s = np.array([
self._process_regularizer(lm) for lm in regularizers])
return self.misfit(X) + regularizer_2s*np.sum(X**2, axis=0)
class SolverTikhonov(_BaseTikhonovSolver):
"""Solve the l2-norm ordinary least-squares problem with Tikhonov
regularization:
sum_{i} min_{x_i} ||Ax_i - b_i||_2^2 + ||Px_i||_2^2, P > 0 (SPD).
or, written in the Frobenius norm,
min_{X} ||AX - B||_F^2 + ||PX||_F^2, P > 0 (SPD).
"""
# Validation --------------------------------------------------------------
def _process_regularizer(self, P):
"""Validate the type and shape of the regularizer."""
# TODO: allow sparse P.
if not isinstance(P, np.ndarray):
raise TypeError("regularization matrix must be a NumPy array")
# One-dimensional input (diagonals of the regularization matrix).
if P.shape == (self.d,):
if np.any(P < 0):
raise ValueError("diagonal P must be positive semi-definite")
return np.diag(P)
# Two-dimensional input (the regularization matrix).
elif P.shape != (self.d, self.d):
raise ValueError("P.shape != (d, d) or (d,) where d = A.shape[1]")
return P
# Helper methods ----------------------------------------------------------
def _lhs(self, P):
"""Expand P if needed and compute A.T A + P.T P, the left-hand side of
the modified Normal equations for Tikhonov-regularized least squares.
"""
P = self._process_regularizer(P)
return P, self._AtA + (P.T @ P)
# Main methods ------------------------------------------------------------
def fit(self, A, B):
"""Prepare to solve the least-squares problem via the normal equations.
Parameters
----------
A : (k, d) ndarray
The "left-hand side" matrix.
B : (k, r) ndarray
The "right-hand side" matrix B = [ b_1 | b_2 | ... | b_r ].
"""
self._process_fit_arguments(A, B)
# Compute both sides of the Normal equations.
self._rhs = self.A.T @ self.B
self._AtA = self.A.T @ self.A
return self
def predict(self, P, trynormal=True):
"""Solve the least-squares problem with regularization matrix P.
Parameters
----------
P : (d, d) or (d,) ndarray
Regularization matrix (or the diagonals of the regularization
matrix if one-dimensional).
trynormal : bool
If True, attempt to solve the problem via the normal equations,
falling back on a full least-squares solver if the problem is
too ill-conditioned. If False, skip the normal equations attempt.
Returns
-------
X : (d, r) or (d,) ndarray
Least-squares solution X = [ x_1 | ... | x_r ]; each column is the
solution to the subproblem with the corresponding column of B.
The result is flattened to a one-dimensional array if r = 1.
"""
self._check_is_trained("_AtA")
P, lhs = self._lhs(P)
if trynormal:
try:
with warnings.catch_warnings():
warnings.filterwarnings("error", category=la.LinAlgWarning)
# Attempt to solve the problem via the normal equations.
X = la.solve(lhs, self._rhs, assume_a="pos")
except (la.LinAlgError, la.LinAlgWarning):
# For ill-conditioned normal equations, use la.lstsq().
print("normal equations solve failed, switching lstsq solver")
trynormal = False
if not trynormal:
Bpad = np.vstack((self.B, np.zeros((self.d, self.r))))
X = la.lstsq(np.vstack((self.A, P)), Bpad)[0]
return np.ravel(X) if self.r == 1 else X
# Post-processing ---------------------------------------------------------
def regcond(self, P):
"""Compute the 2-norm condition number of the regularized data matrix.
Parameters
----------
P : (d, d) or (d,) ndarray
Regularization matrix (or the diagonals of the regularization
matrix if one-dimensional).
Returns
-------
rc : float ≥ 0
cond([A.T | P.T].T), computed as sqrt(cond(A.T A + P.T P)).
"""
self._check_is_trained("_AtA")
return np.sqrt(np.linalg.cond(self._lhs(P)[1]))
def residual(self, X, P):
"""Calculate the residual of the regularized problem for each column of
B = [ b_1 | ... | b_r ], i.e., ||Ax_i - b_i||_2^2 + ||Px_i||_2^2.
Parameters
----------
X : (d, r) ndarray
Least-squares solution X = [ x_1 | ... | x_r ]; each column is the
solution to the subproblem with the corresponding column of B.
P : (d, d) or (d,) ndarray
Regularization matrix (or the diagonals of the regularization
matrix if one-dimensional).
Returns
-------
resids : (r,) ndarray or float (r = 1)
Residuals ||Ax_i - b_i||_2^2 + ||Px_i||_2^2, i = 1, ..., r.
"""
self._check_is_trained()
P = self._process_regularizer(P)
return self.misfit(X) + np.sum((P @ X)**2, axis=0)
class SolverTikhonovDecoupled(SolverTikhonov):
"""Solve r independent l2-norm ordinary least-squares problems, each with
the same data matrix but a different Tikhonov regularizer,
min_{x_i} ||Ax_i - b_i||_2^2 + ||P_i x_i||_2^2.
"""
# Validation --------------------------------------------------------------
def _check_Ps(self, Ps):
"""Validate Ps."""
if len(Ps) != self.r:
raise ValueError("len(Ps) != number of columns of B")
# Main methods ------------------------------------------------------------
def predict(self, Ps):
"""Solve the least-squares problems with regularization matrices Ps.
Parameters
----------
Ps : sequence of r (d, d) or (d,) ndarrays
Regularization matrices (or the diagonals of the regularization
matrices if one-dimensional), one for each column of B.
Returns
-------
X : (d, r) ndarray
Least-squares solution X = [ x_1 | ... | x_r ]; each column is the
solution to the subproblem with the corresponding column of B.
"""
self._check_is_trained("_AtA")
self._check_Ps(Ps)
# Allocate space for the solution.
X = np.empty((self.d, self.r))
# Solve each independent problem (iteratively for now).
Bpad = None
for j, [P, rhs] in enumerate(zip(Ps, self._rhs.T)):
P, lhs = self._lhs(P)
with warnings.catch_warnings():
warnings.filterwarnings("error", category=la.LinAlgWarning)
try:
# Attempt to solve the problem via the normal equations.
X[:, j] = la.solve(lhs, self._rhs[:, j], assume_a="pos")
except (la.LinAlgError, la.LinAlgWarning):
# For ill-conditioned normal equations, use la.lstsq().
if Bpad is None:
Bpad = np.vstack((self.B, np.zeros((self.d, self.r))))
X[:, j] = la.lstsq(np.vstack((self.A, P)), Bpad[:, j])[0]
return X
# Post-processing ---------------------------------------------------------
def regcond(self, Ps):
"""Compute the 2-norm condition number of each regularized data matrix.
Parameters
----------
Ps : sequence of r (d, d) or (d,) ndarrays
Regularization matrices (or the diagonals of the regularization
matrices if one-dimensional), one for each column of B.
Returns
-------
rcs : float ≥ 0
cond([A.T | P_i.T].T), i = 1, ..., r, computed as
sqrt(cond(A.T A + P_i.T P_i)).
"""
self._check_is_trained("_AtA")
self._check_Ps(Ps)
return np.array([np.sqrt(np.linalg.cond(self._lhs(P)[1])) for P in Ps])
def residual(self, X, Ps):
"""Calculate the residual of the regularized problem for each column of
B = [ b_1 | ... | b_r ], i.e., ||Ax_i - b_i||_2^2 + ||P_i x_i||_2^2.
Parameters
----------
X : (d, r) ndarray
Least-squares solution X = [ x_1 | ... | x_r ]; each column is the
solution to the subproblem with the corresponding column of B.
Ps : sequence of r (d, d) or (d,) ndarrays
Regularization matrices (or the diagonals of the regularization
matrices if one-dimensional), one for each column of B.
Returns
-------
resids : (r,) ndarray
Residuals ||Ax_i - b_i||_2^2 + ||P_i x_i||_2^2, i = 1, ..., r.
"""
self._check_is_trained()
self._check_Ps(Ps)
misfit = self.misfit(X)
Pxs = np.array([np.sum((P @ X[:, j])**2) for j, P in enumerate(Ps)])
return misfit + Pxs
# Convenience functions =======================================================
def solver(A, B, P):
"""Select and initialize an appropriate solver for the ordinary least-
squares problem with Tikhonov regularization,
sum_{i} min_{x_i} ||Ax_i - b_i||^2 + ||P_i x_i||^2.
Parameters
----------
A : (k, d) ndarray
The "left-hand side" matrix.
B : (k, r) ndarray
The "right-hand side" matrix B = [ b_1 | b_2 | ... | b_r ].
P : float >= 0 or ndarray of shapes (r,), (d,), (d, d), (r, d), (r, d, d)
Tikhonov regularization hyperparameter(s). The regularization matrix
in the least-squares problem depends on the format of the argument:
* float >= 0: `P`*I, a scaled identity matrix.
* (d,) ndarray: diag(P), a diagonal matrix.
* (d, d) ndarray: the matrix `P`.
* sequence of length r : the jth entry in the sequence is the
regularization hyperparameter for the jth column of `b`. Only
valid if `b` is two-dimensional and has exactly r columns.
Returns
-------
solver
Least-squares solver object, with a predict() method mapping the
regularization factor to the least-squares solution.
"""
d = A.shape[1]
if B.ndim == 1:
B = B.reshape((-1, 1))
# P is a scalar: single L2-regularized problem.
if np.isscalar(P):
solver = SolverL2()
# P is a sequence of r scalars: decoupled L2-regularized problems.
elif np.shape(P) == (B.shape[1],):
solver = SolverL2Decoupled()
# P is a dxd matrix (or a 1D array of length d for diagonal P):
# single Tikhonov-regularized problem.
elif isinstance(P, np.ndarray) and (P.shape in [(d,), (d, d)]):
solver = SolverTikhonov()
# P is a sequence of r matrices: decoupled Tikhonov-regularized problems.
elif np.shape(P) in [(B.shape[1], d), (B.shape[1], d, d)]:
solver = SolverTikhonovDecoupled()
else:
raise ValueError("invalid or misaligned input P")
return solver.fit(A, B)
def solve(A, B, P=0):
"""Solve the l2-norm Tikhonov-regularized ordinary least-squares problem
sum_{i} min_{x_i} ||Ax_i - b_i||^2 + ||P_i x_i||^2.
Parameters
----------
A : (k, d) ndarray
The "left-hand side" matrix.
B : (k, r) ndarray
The "right-hand side" matrix B = [ b_1 | b_2 | ... | b_r ].
P : float >= 0 or ndarray of shapes (r,), (d,), (d, d), (r, d), (r, d, d)
Tikhonov regularization hyperparameter(s). The regularization matrix
in the least-squares problem depends on the format of the argument:
* float >= 0: `P`*I, a scaled identity matrix.
* (d,) ndarray: diag(P), a diagonal matrix.
* (d, d) ndarray: the matrix `P`.
* sequence of length r : the jth entry in the sequence is the
regularization hyperparameter for the jth column of `b`. Only
valid if `b` is two-dimensional and has exactly r columns.
Returns
-------
x : (d,) or (d, r) ndarray
Least-squares solution. If `b` is a two-dimensional array, then
each column is a solution to the regularized least-squares problem
with the corresponding column of b.
"""
return solver(A, B, P).predict(P) | /rom_operator_inference-1.4.1-py3-none-any.whl/rom_operator_inference/lstsq/_tikhonov.py | 0.954605 | 0.629461 | _tikhonov.py | pypi |
"""Least-squares solvers for the Operator Inference problem."""
from ._tikhonov import *
# TODO: rename lstsq_size() -> size()
def lstsq_size(modelform, r, m=0, affines=None):
"""Calculate the number of columns in the operator matrix O in the Operator
Inference least-squares problem. This is also the number of columns in the
data matrix D.
Parameters
---------
modelform : str containing 'c', 'A', 'H', 'G', and/or 'B'
The structure of the desired reduced-order model. Each character
indicates the presence of a different term in the model:
'c' : Constant term c
'A' : Linear state term Ax.
'H' : Quadratic state term H(x⊗x).
'G' : Cubic state term G(x⊗x⊗x).
'B' : Input term Bu.
For example, modelform=="AB" means f(x,u) = Ax + Bu.
r : int
The dimension of the reduced order model.
m : int
The dimension of the inputs of the model.
Must be zero unless 'B' is in `modelform`.
affines : dict(str -> list(callables))
Functions that define the structures of the affine operators.
Keys must match the modelform:
* 'c': Constant term c(µ).
* 'A': Linear state matrix A(µ).
* 'H': Quadratic state matrix H(µ).
* 'G': Cubic state matrix G(µ).
* 'B': linear Input matrix B(µ).
For example, if the constant term has the affine structure
c(µ) = θ1(µ)c1 + θ2(µ)c2 + θ3(µ)c3, then 'c' -> [θ1, θ2, θ3].
Returns
-------
ncols : int
The number of columns in the Operator Inference least-squares problem.
"""
if 'B' in modelform and m == 0:
raise ValueError("argument m > 0 required since 'B' in modelform")
if 'B' not in modelform and m != 0:
raise ValueError(f"argument m={m} invalid since 'B' in modelform")
if affines is None:
affines = {}
qs = [(len(affines[op]) if (op in affines and op in modelform)
else 1 if op in modelform else 0) for op in "cAHGB"]
rs = [1, r, r*(r+1)//2, r*(r+1)*(r+2)//6, m]
return sum(qq*rr for qq, rr in zip(qs, rs)) | /rom_operator_inference-1.4.1-py3-none-any.whl/rom_operator_inference/lstsq/__init__.py | 0.773388 | 0.769427 | __init__.py | pypi |
"""Utilities for HDF5 file interaction."""
__all__ = [
"hdf5_savehandle",
"hdf5_loadhandle",
]
import os
import h5py
class _hdf5_filehandle:
"""Get a handle to an open HDF5 file to read or write to.
Parameters
----------
filename : str of h5py File/Group handle
* str : Name of the file to interact with.
* h5py File/Group handle : handle to part of an already open HDF5 file.
mode : str
Type of interaction for the HDF5 file.
* "save" : Open the file for writing only.
* "load" : Open the file for reading only.
overwrite : bool
If True, overwrite the file if it already exists. If False,
raise a FileExistsError if the file already exists.
Only applies when mode = "save".
"""
def __init__(self, filename, mode, overwrite=False):
"""Open the file handle."""
if isinstance(filename, h5py.HLObject):
# `filename` is already an open HDF5 file.
self.file_handle = filename
self.close_when_done = False
elif mode == "save":
# `filename` is the name of a file to create for writing.
if not filename.endswith(".h5"):
filename += ".h5"
if os.path.isfile(filename) and not overwrite:
raise FileExistsError(f"{filename} (overwrite=True to ignore)")
self.file_handle = h5py.File(filename, 'w')
self.close_when_done = True
elif mode == "load":
# `filename` is the name of an existing file to read from.
if not os.path.isfile(filename):
raise FileNotFoundError(filename)
self.file_handle = h5py.File(filename, 'r')
self.close_when_done = True
else:
raise ValueError(f"invalid mode '{mode}'")
def __enter__(self):
"""Return the handle to the open HDF5 file."""
return self.file_handle
def __exit__(self, exc_type, exc_value, exc_traceback):
"""CLose the file if needed."""
if self.close_when_done:
self.file_handle.close()
if exc_type:
raise
class hdf5_savehandle(_hdf5_filehandle):
"""Get a handle to an open HDF5 file to write to.
Parameters
----------
savefile : str of h5py File/Group handle
* str : Name of the file to save to. Extension ".h5" is appended.
* h5py File/Group handle : handle to part of an already open HDF5 file
to save data to.
overwrite : bool
If True, overwrite the file if it already exists. If False,
raise a FileExistsError if the file already exists.
>>> with hdf5_savehandle("file_to_save_to.h5") as hf:
... hf.create_dataset(...)
"""
def __init__(self, savefile, overwrite):
return _hdf5_filehandle.__init__(self, savefile, "save", overwrite)
class hdf5_loadhandle(_hdf5_filehandle):
"""Get a handle to an open HDF5 file to read from.
Parameters
----------
loadfile : str of h5py File/Group handle
* str : Name of the file to read from. Extension ".h5" is appended.
* h5py File/Group handle : handle to part of an already open HDF5 file
to read data from.
>>> with hdf5_loadhandle("file_to_read_from.h5") as hf:
... data = hf[...]
"""
def __init__(self, loadfile):
return _hdf5_filehandle.__init__(self, loadfile, "load") | /rom_operator_inference-1.4.1-py3-none-any.whl/rom_operator_inference/utils/_hdf5.py | 0.864081 | 0.301401 | _hdf5.py | pypi |
"""Private mixin class for transfomers and basis with multivariate states."""
import numpy as np
class _MultivarMixin:
"""Private mixin class for transfomers and basis with multivariate states.
Parameters
----------
num_variables : int
Number of variables represented in a single snapshot (number of
individual transformations to learn). The dimension `n` of the
snapshots must be evenly divisible by num_variables; for example,
num_variables=3 means the first n entries of a snapshot correspond to
the first variable, and the next n entries correspond to the second
variable, and the last n entries correspond to the third variable.
variable_names : list of num_variables strings, optional
Names for each of the `num_variables` variables.
Defaults to "variable 1", "variable 2", ....
Attributes
----------
n : int
Total dimension of the snapshots (all variables).
ni : int
Dimension of individual variables, i.e., ni = n / num_variables.
Notes
-----
Child classes must set `n` in their fit() methods.
"""
def __init__(self, num_variables, variable_names=None):
"""Store variable information."""
if not np.isscalar(num_variables) or num_variables < 1:
raise ValueError("num_variables must be a positive integer")
self.__num_variables = num_variables
self.variable_names = variable_names
self.__n = None
# Properties --------------------------------------------------------------
@property
def num_variables(self):
"""Number of variables represented in a single snapshot."""
return self.__num_variables
@property
def variable_names(self):
"""Names for each of the `num_variables` variables."""
return self.__variable_names
@variable_names.setter
def variable_names(self, names):
if names is None:
names = [f"variable {i+1}" for i in range(self.num_variables)]
if not isinstance(names, list) or len(names) != self.num_variables:
raise TypeError("variable_names must be a list of"
f" length {self.num_variables}")
self.__variable_names = names
@property
def n(self):
"""Total dimension of the snapshots (all variables)."""
return self.__n
@n.setter
def n(self, nn):
"""Set the total and individual variable dimensions."""
if nn % self.num_variables != 0:
raise ValueError("n must be evenly divisible by num_variables")
self.__n = nn
@property
def ni(self):
"""Dimension of individual variables, i.e., ni = n / num_variables."""
return None if self.n is None else self.n // self.num_variables
# Convenience methods -----------------------------------------------------
def get_varslice(self, var):
"""Get the indices (as a slice) where the specified variable resides.
Parameters
----------
var : int or str
Index or name of the variable to extract.
Returns
-------
s : slice
Slice object for accessing the specified variable, i.e.,
variable = state[s] for a single snapshot or
variable = states[:, s] for a collection of snapshots.
"""
if var in self.variable_names:
var = self.variable_names.index(var)
return slice(var*self.ni, (var + 1)*self.ni)
def get_var(self, var, states):
"""Extract the ith variable from the states.
Parameters
----------
var : int or str
Index or name of the variable to extract.
states : (n, ...) ndarray
Returns
-------
states_var : ndarray, shape (n, num_states)
"""
self._check_shape(states)
return states[..., self.get_varslice(var)]
def _check_shape(self, Q):
"""Verify the shape of the snapshot set Q."""
if Q.shape[0] != self.n:
raise ValueError(f"states.shape[0] = {Q.shape[0]:d} "
f"!= {self.num_variables} * {self.ni} "
"= num_variables * n_i") | /rom_operator_inference-1.4.1-py3-none-any.whl/rom_operator_inference/pre/_multivar.py | 0.951785 | 0.795499 | _multivar.py | pypi |
__all__ = [
"reproject_discrete",
"reproject_continuous",
]
import numpy as np
# Reprojection schemes ========================================================
def reproject_discrete(f, basis, init, niters, inputs=None):
"""Sample re-projected trajectories of the discrete dynamical system
q_{j+1} = f(q_{j}, u_{j}), q_{0} = q0.
Parameters
----------
f : callable mapping (n,) ndarray (and (m,) ndarray) to (n,) ndarray
Function defining the (full-order) discrete dynamical system. Accepts
a full-order state vector and (optionally) an input vector and returns
another full-order state vector.
basis : (n, r) ndarray
Basis for the low-dimensional linear subspace (e.g., POD basis).
init : (n,) ndarray
Initial condition for the iteration in the high-dimensional space.
niters : int
The number of iterations to do.
inputs : (m, niters-1) or (niters-1) ndarray
Control inputs, one for each iteration beyond the initial condition.
Returns
-------
states_reprojected : (r, niters) ndarray
Re-projected state trajectories in the projected low-dimensional space.
"""
# Validate and extract dimensions.
n, r = basis.shape
if init.shape != (n,):
raise ValueError("basis and initial condition not aligned")
# Create the solution array and fill in the initial condition.
states_ = np.empty((r, niters))
states_[:, 0] = basis.T @ init
# Run the re-projection iteration.
if inputs is None:
for j in range(niters-1):
states_[:, j+1] = basis.T @ f(basis @ states_[:, j])
elif inputs.ndim == 1:
for j in range(niters-1):
states_[:, j+1] = basis.T @ f(basis @ states_[:, j], inputs[j])
else:
for j in range(niters-1):
states_[:, j+1] = basis.T @ f(basis @ states_[:, j], inputs[:, j])
return states_
def reproject_continuous(f, basis, states, inputs=None):
"""Sample re-projected trajectories of the continuous system of ODEs
dq / dt = f(t, q(t), u(t)), q(0) = q0.
Parameters
----------
f : callable mapping (n,) ndarray (and (m,) ndarray) to (n,) ndarray
Function defining the (full-order) differential equation. Accepts a
full-order state vector and (optionally) an input vector and returns
another full-order state vector.
basis : (n, r) ndarray
Basis for the low-dimensional linear subspace.
states : (n, k) ndarray
State trajectories (training data).
inputs : (m, k) or (k,) ndarray
Control inputs corresponding to the state trajectories.
Returns
-------
states_reprojected : (r, k) ndarray
Re-projected state trajectories in the projected low-dimensional space.
ddts_reprojected : (r, k) ndarray
Re-projected velocities in the projected low-dimensional space.
"""
# Validate and extract dimensions.
if states.shape[0] != basis.shape[0]:
raise ValueError("states and basis not aligned, first dimension "
f"{states.shape[0]} != {basis.shape[0]}")
n, r = basis.shape
k = states.shape[1]
# Create the solution arrays.
states_ = basis.T @ states
ddts_ = np.empty((r, k))
# Run the re-projection iteration.
if inputs is None:
for j in range(k):
ddts_[:, j] = basis.T @ f(basis @ states_[:, j])
elif inputs.ndim == 1:
for j in range(k):
ddts_[:, j] = basis.T @ f(basis @ states_[:, j], inputs[j])
else:
for j in range(k):
ddts_[:, j] = basis.T @ f(basis @ states_[:, j], inputs[:, j])
return states_, ddts_ | /rom_operator_inference-1.4.1-py3-none-any.whl/rom_operator_inference/pre/_reprojection.py | 0.948965 | 0.777912 | _reprojection.py | pypi |
"""Finite-difference schemes for estimating snapshot time derivatives."""
__all__ = [
"ddt_uniform",
"ddt_nonuniform",
"ddt",
]
import numpy as np
# Finite difference stencils ==================================================
def _fwd4(y, dt):
"""Compute the first derivative of a uniformly-spaced-in-time array with a
fourth-order forward difference scheme.
Parameters
----------
y : (5, ...) ndarray
Data to differentiate. The derivative is taken along the first axis.
dt : float
Time step (the uniform spacing).
Returns
-------
dy0 : float or (...) ndarray
Approximate derivative of y at the first entry, i.e., dy[0] / dt.
"""
return (-25*y[0] + 48*y[1] - 36*y[2] + 16*y[3] - 3*y[4]) / (12*dt)
def _fwd6(y, dt):
"""Compute the first derivative of a uniformly-spaced-in-time array with a
sixth-order forward difference scheme.
Parameters
----------
y : (7, ...) ndarray
Data to differentiate. The derivative is taken along the first axis.
dt : float
Time step (the uniform spacing).
Returns
-------
dy0 : float or (...) ndarray
Approximate derivative of y at the first entry, i.e., dy[0] / dt.
"""
return (- 147*y[0] + 360*y[1] - 450*y[2]
+ 400*y[3] - 225*y[4] + 72*y[5] - 10*y[6]) / (60*dt)
# Main routines ===============================================================
def ddt_uniform(states, dt, order=2):
"""Approximate the time derivatives for a chunk of snapshots that are
uniformly spaced in time.
Parameters
----------
states : (n, k) ndarray
States to estimate the derivative of. The jth column is a snapshot
that corresponds to the jth time step, i.e., states[:, j] = x(t[j]).
dt : float
The time step between the snapshots, i.e., t[j+1] - t[j] = dt.
order : int {2, 4, 6}
The order of the derivative approximation.
See https://en.wikipedia.org/wiki/Finite_difference_coefficient.
Returns
-------
ddts : (n, k) ndarray
Approximate time derivative of the snapshot data. The jth column is
the derivative dx / dt corresponding to the jth snapshot, states[:, j].
"""
# Check dimensions and input types.
if states.ndim != 2:
raise ValueError("states must be two-dimensional")
if not np.isscalar(dt):
raise TypeError("time step dt must be a scalar (e.g., float)")
if order == 2:
return np.gradient(states, dt, edge_order=2, axis=1)
Q = states
ddts = np.empty_like(states)
n, k = states.shape
if order == 4:
# Central difference on interior.
ddts[:, 2:-2] = (Q[:, :-4]
- 8*Q[:, 1:-3] + 8*Q[:, 3:-1]
- Q[:, 4:])/(12*dt)
# Forward / backward differences on the front / end.
for j in range(2):
ddts[:, j] = _fwd4(Q[:, j:j+5].T, dt) # Forward
ddts[:, -j-1] = -_fwd4(Q[:, -j-5:k-j].T[::-1], dt) # Backward
elif order == 6:
# Central difference on interior.
ddts[:, 3:-3] = (- Q[:, :-6] + 9*Q[:, 1:-5]
- 45*Q[:, 2:-4] + 45*Q[:, 4:-2]
- 9*Q[:, 5:-1] + Q[:, 6:]) / (60*dt)
# Forward / backward differences on the front / end.
for j in range(3):
ddts[:, j] = _fwd6(Q[:, j:j+7].T, dt) # Forward
ddts[:, -j-1] = -_fwd6(Q[:, -j-7:k-j].T[::-1], dt) # Backward
else:
raise NotImplementedError(f"invalid order '{order}'; "
"valid options: {2, 4, 6}")
return ddts
def ddt_nonuniform(states, t):
"""Approximate the time derivatives for a chunk of snapshots with a
second-order finite difference scheme.
Parameters
----------
states : (n, k) ndarray
States to estimate the derivative of. The jth column is a snapshot
that corresponds to the jth time step, i.e., states[:, j] = x(t[j]).
t : (k,) ndarray
The times corresponding to the snapshots. May not be uniformly spaced.
See ddt_uniform() for higher-order computation in the case of
evenly-spaced-in-time snapshots.
Returns
-------
ddts : (n, k) ndarray
Approximate time derivative of the snapshot data. The jth column is
the derivative dx / dt corresponding to the jth snapshot, states[:, j].
"""
# Check dimensions.
if states.ndim != 2:
raise ValueError("states must be two-dimensional")
if t.ndim != 1:
raise ValueError("time t must be one-dimensional")
if states.shape[-1] != t.shape[0]:
raise ValueError("states not aligned with time t")
# Compute the derivative with a second-order difference scheme.
return np.gradient(states, t, edge_order=2, axis=-1)
def ddt(states, *args, **kwargs):
"""Approximate the time derivatives for a chunk of snapshots with a finite
difference scheme. Calls ddt_uniform() or ddt_nonuniform(), depending on
the arguments.
Parameters
----------
states : (n, k) ndarray
States to estimate the derivative of. The jth column is a snapshot
that corresponds to the jth time step, i.e., states[:, j] = x(t[j]).
Additional parameters
---------------------
dt : float
The time step between the snapshots, i.e., t[j+1] - t[j] = dt.
order : int {2, 4, 6} (optional)
The order of the derivative approximation.
See https://en.wikipedia.org/wiki/Finite_difference_coefficient.
OR
t : (k,) ndarray
The times corresponding to the snapshots. May or may not be uniformly
spaced.
Returns
-------
ddts : (n, k) ndarray
Approximate time derivative of the snapshot data. The jth column is
the derivative dx / dt corresponding to the jth snapshot, states[:, j].
"""
n_args = len(args) # Number of other positional args.
n_kwargs = len(kwargs) # Number of keyword args.
n_total = n_args + n_kwargs # Total number of other args.
if n_total == 0:
raise TypeError("at least one other argument required (dt or t)")
elif n_total == 1: # There is only one other argument.
if n_kwargs == 1: # It is a keyword argument.
arg_name = list(kwargs.keys())[0]
if arg_name == "dt":
func = ddt_uniform
elif arg_name == "t":
func = ddt_nonuniform
elif arg_name == "order":
raise TypeError("keyword argument 'order' requires float "
"argument dt")
else:
raise TypeError("ddt() got unexpected keyword argument "
f"'{arg_name}'")
elif n_args == 1: # It is a positional argument.
arg = args[0]
if isinstance(arg, float): # arg = dt.
func = ddt_uniform
elif isinstance(arg, np.ndarray): # arg = t; do uniformity test.
func = ddt_nonuniform
else:
raise TypeError(f"invalid argument type '{type(arg)}'")
elif n_total == 2: # There are two other argumetns: dt, order.
func = ddt_uniform
else:
raise TypeError("ddt() takes 2 or 3 positional arguments "
f"but {n_total+1} were given")
return func(states, *args, **kwargs) | /rom_operator_inference-1.4.1-py3-none-any.whl/rom_operator_inference/pre/_finite_difference.py | 0.949506 | 0.738952 | _finite_difference.py | pypi |
"""Base transformer class."""
import abc
class _BaseTransformer(abc.ABC):
"""Abstract base class for all transformer classes."""
# Main routines -----------------------------------------------------------
def fit(self, states):
"""Learn (but do not apply) the transformation.
Parameters
----------
states : (n, k) ndarray
Matrix of k snapshots. Each column is a snapshot of dimension n.
Returns
-------
self
"""
self.fit_transform(states)
return self
@abc.abstractmethod
def transform(self, states, inplace=False):
"""Apply the learned transformation.
Parameters
----------
states : (n, k) ndarray
Matrix of k snapshots. Each column is a snapshot of dimension n.
inplace : bool
If True, overwrite the input data during the transformation.
If False, create a copy of the data to transform.
Returns
-------
states_transformed: (n, k) ndarray
Matrix of k transformed snapshots of dimension n.
"""
raise NotImplementedError # pragma: no cover
@abc.abstractmethod
def fit_transform(self, states, inplace=False):
"""Learn and apply the transformation.
Parameters
----------
states : (n, k) ndarray
Matrix of k snapshots. Each column is a snapshot of dimension n.
inplace : bool
If True, overwrite the input data during the transformation.
If False, create a copy of the data to transform.
Returns
-------
states_transformed: (n, k) ndarray
Matrix of k transformed snapshots of dimension n.
"""
raise NotImplementedError # pragma: no cover
@abc.abstractmethod
def inverse_transform(self, states_transformed, inplace=False):
"""Apply the inverse of the learned transformation.
Parameters
----------
states_transformed : (n, k) ndarray
Matrix of k transformed snapshots of dimension n.
inplace : bool
If True, overwrite the input data during inverse transformation.
If False, create a copy of the data to untransform.
Returns
-------
states: (n, k) ndarray
Matrix of k untransformed snapshots of dimension n.
"""
raise NotImplementedError # pragma: no cover
# Model persistence -------------------------------------------------------
def save(self, *args, **kwargs):
"""Save the transformer to an HDF5 file."""
raise NotImplementedError("use pickle/joblib")
@classmethod
def load(cls, *args, **kwargs):
"""Load a transformer from an HDF5 file."""
raise NotImplementedError("use pickle/joblib")
def _check_is_transformer(obj):
"""Raise a RuntimeError if `obj` cannot be used as a transformer."""
for mtd in _BaseTransformer.__abstractmethods__:
if not hasattr(obj, mtd):
raise TypeError(f"transformer missing required method {mtd}()") | /rom_operator_inference-1.4.1-py3-none-any.whl/rom_operator_inference/pre/transform/_base.py | 0.969656 | 0.672883 | _base.py | pypi |
"""Tools for preprocessing state snapshot data."""
__all__ = [
"shift",
"scale",
"SnapshotTransformer",
"SnapshotTransformerMulti",
]
import numpy as np
from ...errors import LoadfileFormatError
from ...utils import hdf5_savehandle, hdf5_loadhandle
from .._multivar import _MultivarMixin
from ._base import _BaseTransformer
# Functional paradigm =========================================================
def shift(states, shift_by=None):
"""Shift the columns of `states` by a vector.
Parameters
----------
states : (n, k) ndarray
Matrix of k snapshots. Each column is a single snapshot.
shift_by : (n,) or (n, 1) ndarray
Vector that is the same size as a single snapshot. If None,
set to the mean of the columns of `states`.
Returns
-------
states_shifted : (n, k) ndarray
Shifted state matrix, i.e.,
states_shifted[:, j] = states[:, j] - shift_by for j = 0, ..., k-1.
shift_by : (n,) ndarray
Shift factor, returned only if shift_by=None.
Since this is a one-dimensional array, it must be reshaped to be
applied to a matrix (e.g., states_shifted + shift_by.reshape(-1, 1)).
Examples
--------
# Shift Q by its mean, then shift Y by the same mean.
>>> Q_shifted, qbar = pre.shift(Q)
>>> Y_shifted = pre.shift(Y, qbar)
# Shift Q by its mean, then undo the transformation by an inverse shift.
>>> Q_shifted, qbar = pre.shift(Q)
>>> Q_again = pre.shift(Q_shifted, -qbar)
"""
# Check dimensions.
if states.ndim != 2:
raise ValueError("argument `states` must be two-dimensional")
# If not shift_by factor is provided, compute the mean column.
learning = (shift_by is None)
if learning:
shift_by = np.mean(states, axis=1)
elif shift_by.ndim != 1:
raise ValueError("argument `shift_by` must be one-dimensional")
# Shift the columns by the mean.
states_shifted = states - shift_by.reshape((-1, 1))
return (states_shifted, shift_by) if learning else states_shifted
def scale(states, scale_to, scale_from=None):
"""Scale the entries of the snapshot matrix `states` from the interval
[scale_from[0], scale_from[1]] to [scale_to[0], scale_to[1]].
Scaling algorithm follows sklearn.preprocessing.MinMaxScaler.
Parameters
----------
states : (n, k) ndarray
Matrix of k snapshots to be scaled. Each column is a single snapshot.
scale_to : (2,) tuple
Desired minimum and maximum of the scaled data.
scale_from : (2,) tuple
Minimum and maximum of the snapshot data. If None, learn the scaling:
scale_from[0] = min(states); scale_from[1] = max(states).
Returns
-------
states_scaled : (n, k) ndarray
Scaled snapshot matrix.
scaled_to : (2,) tuple
Bounds that the snapshot matrix was scaled to, i.e.,
scaled_to[0] = min(states_scaled); scaled_to[1] = max(states_scaled).
Only returned if scale_from = None.
scaled_from : (2,) tuple
Minimum and maximum of the snapshot data, i.e., the bounds that
the data was scaled from. Only returned if scale_from = None.
Examples
--------
# Scale Q to [-1, 1] and then scale Y with the same transformation.
>>> Qscaled, scaled_to, scaled_from = pre.scale(Q, (-1, 1))
>>> Yscaled = pre.scale(Y, scaled_to, scaled_from)
# Scale Q to [0, 1], then undo the transformation by an inverse scaling.
>>> Qscaled, scaled_to, scaled_from = pre.scale(Q, (0, 1))
>>> Q_again = pre.scale(Qscaled, scaled_from, scaled_to)
"""
# If no scale_from bounds are provided, learn them.
learning = (scale_from is None)
if learning:
scale_from = np.min(states), np.max(states)
# Check scales.
if len(scale_to) != 2:
raise ValueError("scale_to must have exactly 2 elements")
if len(scale_from) != 2:
raise ValueError("scale_from must have exactly 2 elements")
# Do the scaling.
mini, maxi = scale_to
xmin, xmax = scale_from
scl = (maxi - mini)/(xmax - xmin)
states_scaled = states*scl + (mini - xmin*scl)
return (states_scaled, scale_to, scale_from) if learning else states_scaled
# Object-oriented paradigm ====================================================
class SnapshotTransformer(_BaseTransformer):
"""Process snapshots by centering and/or scaling (in that order).
Parameters
----------
center : bool
If True, shift the snapshots by the mean training snapshot.
scaling : str or None
If given, scale (non-dimensionalize) the centered snapshot entries.
* 'standard': standardize to zero mean and unit standard deviation.
* 'minmax': minmax scaling to [0, 1].
* 'minmaxsym': minmax scaling to [-1, 1].
* 'maxabs': maximum absolute scaling to [-1, 1] (no shift).
* 'maxabssym': maximum absolute scaling to [-1, 1] (with mean shift).
byrow : bool
If True, scale each row of the snapshot matrix separately when a
scaling is specified. Otherwise, scale the entire matrix at once.
verbose : bool
If True, print information upon learning a transformation.
Attributes
----------
n : int
Dimension of the snapshots.
mean_ : (n,) ndarray
Mean training snapshot. Only recorded if center = True.
scale_ : float or (n,) ndarray
Multiplicative factor of scaling (the a of q -> aq + b).
Only recorded if scaling != None.
If byrow = True, a different factor is applied to each row.
shift_ : float or (n,) ndarray
Additive factor of scaling (the b of q -> aq + b).
Only recorded if scaling != None.
If byrow = True, a different factor is applied to each row.
Notes
-----
Snapshot centering (center=True):
Q' = Q - mean(Q, axis=1);
Guarantees mean(Q', axis=1) = [0, ..., 0].
Standard scaling (scaling='standard'):
Q' = (Q - mean(Q)) / std(Q);
Guarantees mean(Q') = 0, std(Q') = 1.
Min-max scaling (scaling='minmax'):
Q' = (Q - min(Q))/(max(Q) - min(Q));
Guarantees min(Q') = 0, max(Q') = 1.
Symmetric min-max scaling (scaling='minmaxsym'):
Q' = (Q - min(Q))*2/(max(Q) - min(Q)) - 1
Guarantees min(Q') = -1, max(Q') = 1.
Maximum absolute scaling (scaling='maxabs'):
Q' = Q / max(abs(Q));
Guarantees mean(Q') = mean(Q) / max(abs(Q)), max(abs(Q')) = 1.
Min-max absolute scaling (scaling='maxabssym'):
Q' = (Q - mean(Q)) / max(abs(Q - mean(Q)));
Guarantees mean(Q') = 0, max(abs(Q')) = 1.
"""
_VALID_SCALINGS = {
"standard",
"minmax",
"minmaxsym",
"maxabs",
"maxabssym",
}
_table_header = " | min | mean | max | std\n" \
"----|------------|------------|------------|------------"
def __init__(self, center=False, scaling=None, byrow=False, verbose=False):
"""Set transformation hyperparameters."""
# Initialize properties to default values.
self.__center = False
self.__scaling = None
self.__byrow = False
self.__verbose = False
# Set properties to specified values.
self.center = center
self.scaling = scaling
self.byrow = byrow
self.verbose = verbose
def _clear(self):
"""Delete all learned attributes."""
for attr in ("mean_", "scale_", "shift_"):
if hasattr(self, attr):
delattr(self, attr)
# Properties --------------------------------------------------------------
@property
def center(self):
"""Snapshot mean-centering directive (bool)."""
return self.__center
@center.setter
def center(self, ctr):
"""Set the centering directive, resetting the transformation."""
if ctr not in (True, False):
raise TypeError("'center' must be True or False")
if ctr != self.__center:
self._clear()
self.__center = ctr
@property
def scaling(self):
"""Entrywise scaling (non-dimensionalization) directive.
* None: no scaling.
* 'standard': standardize to zero mean and unit standard deviation.
* 'minmax': minmax scaling to [0, 1].
* 'minmaxsym': minmax scaling to [-1, 1].
* 'maxabs': maximum absolute scaling to [-1, 1] (no shift).
* 'maxabssym': maximum absolute scaling to [-1, 1] (mean shift).
"""
return self.__scaling
@scaling.setter
def scaling(self, scl):
"""Set the scaling strategy, resetting the transformation."""
if scl is None:
self._clear()
self.__scaling = scl
return
if not isinstance(scl, str):
raise TypeError("'scaling' must be of type 'str'")
if scl not in self._VALID_SCALINGS:
opts = ", ".join([f"'{v}'" for v in self._VALID_SCALINGS])
raise ValueError(f"invalid scaling '{scl}'; "
f"valid options are {opts}")
if scl != self.__scaling:
self._clear()
self.__scaling = scl
@property
def byrow(self):
"""If True, scale snapshots by row, not as a whole unit."""
return self.__byrow
@byrow.setter
def byrow(self, by):
"""Set the row-wise scaling directive, resetting the transformation."""
if by is not self.byrow:
self._clear()
self.__byrow = bool(by)
@property
def verbose(self):
"""If True, print information upon learning a transformation."""
return self.__verbose
@verbose.setter
def verbose(self, vbs):
self.__verbose = bool(vbs)
def __eq__(self, other):
"""Test two SnapshotTransformers for equality."""
if not isinstance(other, self.__class__):
return False
for attr in ("center", "scaling", "byrow"):
if getattr(self, attr) != getattr(other, attr):
return False
if hasattr(self, "n") and hasattr(other, "n") and self.n != other.n:
return False
if self.center and hasattr(self, "mean_"):
if not hasattr(other, "mean_"):
return False
if not np.all(self.mean_ == other.mean_):
return False
if self.scaling and hasattr(self, "scale_"):
for attr in ("scale_", "shift_"):
if not hasattr(other, attr):
return False
if not np.all(getattr(self, attr) == getattr(other, attr)):
return False
return True
# Printing ----------------------------------------------------------------
@staticmethod
def _statistics_report(Q):
"""Return a string of basis statistics about a data set."""
return " | ".join([f"{f(Q):>10.3e}"
for f in (np.min, np.mean, np.max, np.std)])
def __str__(self):
"""String representation: scaling type + centering bool."""
out = ["Snapshot transformer"]
trained = self._is_trained()
if trained:
out.append(f"(n = {self.n:d})")
if self.center:
out.append("with mean-snapshot centering")
if self.scaling:
out.append(f"and '{self.scaling}' scaling")
elif self.scaling:
out.append(f"with '{self.scaling}' scaling")
if not trained:
out.append("(call fit_transform() to train)")
return ' '.join(out)
def __repr__(self):
"""Unique ID + string representation."""
uniqueID = f"<{self.__class__.__name__} object at {hex(id(self))}>"
return f"{uniqueID}\n{str(self)}"
# Main routines -----------------------------------------------------------
def _check_shape(self, Q):
"""Verify the shape of the snapshot set Q."""
if Q.shape[0] != self.n:
raise ValueError(f"states.shape[0] = {Q.shape[0]:d} "
f"!= {self.n} = n")
def _is_trained(self):
"""Return True if transform() and inverse_transform() are ready."""
if not hasattr(self, "n"):
return False
if self.center and not hasattr(self, "mean_"):
return False
if self.scaling and any(not hasattr(self, attr)
for attr in ("scale_", "shift_")):
return False
return True
def fit_transform(self, states, inplace=False):
"""Learn and apply the transformation.
Parameters
----------
states : (n, k) ndarray
Matrix of k snapshots. Each column is a snapshot of dimension n.
inplace : bool
If True, overwrite the input data during transformation.
If False, create a copy of the data to transform.
Returns
-------
states_transformed: (n, k) ndarray
Matrix of k transformed snapshots of dimension n.
"""
if states.ndim != 2:
raise ValueError("2D array required to fit transformer")
self.n = states.shape[0]
Y = states if inplace else states.copy()
axis = (1 if self.byrow else None)
# Record statistics of the training data.
if self.verbose:
report = ["No transformation learned"]
report.append(self._table_header)
report.append(f"Q | {self._statistics_report(Y)}")
# Center the snapshots by the mean training snapshot.
if self.center:
self.mean_ = np.mean(Y, axis=1)
Y -= self.mean_.reshape((-1, 1))
if self.verbose:
report[0] = "Learned mean centering Q -> Q'"
report.append(f"Q' | {self._statistics_report(Y)}")
# Scale (non-dimensionalize) the centered snapshot entries.
if self.scaling:
# Standard: Q' = (Q - mu)/sigma
if self.scaling == "standard":
mu = np.mean(Y, axis=axis)
sigma = np.std(Y, axis=axis)
self.scale_ = 1/sigma
self.shift_ = -mu*self.scale_
# Min-max: Q' = (Q - min(Q))/(max(Q) - min(Q))
elif self.scaling == "minmax":
Ymin = np.min(Y, axis=axis)
Ymax = np.max(Y, axis=axis)
self.scale_ = 1/(Ymax - Ymin)
self.shift_ = -Ymin*self.scale_
# Symmetric min-max: Q' = (Q - min(Q))*2/(max(Q) - min(Q)) - 1
elif self.scaling == "minmaxsym":
Ymin = np.min(Y, axis=axis)
Ymax = np.max(Y, axis=axis)
self.scale_ = 2/(Ymax - Ymin)
self.shift_ = -Ymin*self.scale_ - 1
# MaxAbs: Q' = Q / max(abs(Q))
elif self.scaling == "maxabs":
self.scale_ = 1/np.max(np.abs(Y), axis=axis)
self.shift_ = 0 if axis is None else np.zeros(self.n)
# maxabssym: Q' = (Q - mean(Q)) / max(abs(Q - mean(Q)))
elif self.scaling == "maxabssym":
mu = np.mean(Y, axis=axis)
Y -= (mu if axis is None else mu.reshape((-1, 1)))
self.scale_ = 1/np.max(np.abs(Y), axis=axis)
self.shift_ = -mu*self.scale_
Y += (mu if axis is None else mu.reshape((-1, 1)))
else: # pragma nocover
raise RuntimeError(f"invalid scaling '{self.scaling}'")
# Apply the scaling.
Y *= (self.scale_ if axis is None else self.scale_.reshape(-1, 1))
Y += (self.shift_ if axis is None else self.shift_.reshape(-1, 1))
if self.verbose:
if self.center:
report[0] += f" and {self.scaling} scaling Q' -> Q''"
else:
report[0] = f"Learned {self.scaling} scaling Q -> Q''"
report.append(f"Q'' | {self._statistics_report(Y)}")
if self.verbose:
print('\n'.join(report) + '\n')
return Y
def transform(self, states, inplace=False):
"""Apply the learned transformation.
Parameters
----------
states : (n, k) or (n,) ndarray
Matrix of k snapshots where each column is a snapshot of dimension
n, or a single snapshot of dimension n.
inplace : bool
If True, overwrite the input data during transformation.
If False, create a copy of the data to transform.
Returns
-------
states_transformed: (n, k) ndarray
Matrix of k transformed snapshots of dimension n.
"""
if not self._is_trained():
raise AttributeError("transformer not trained "
"(call fit_transform())")
self._check_shape(states)
Y = states if inplace else states.copy()
# Center the snapshots by the mean training snapshot.
if self.center is True:
Y -= (self.mean_.reshape((-1, 1)) if Y.ndim > 1 else self.mean_)
# Scale (non-dimensionalize) the centered snapshot entries.
if self.scaling is not None:
_flip = self.byrow and Y.ndim > 1
Y *= (self.scale_.reshape((-1, 1)) if _flip else self.scale_)
Y += (self.shift_.reshape((-1, 1)) if _flip else self.shift_)
return Y
def inverse_transform(self, states_transformed, inplace=False):
"""Apply the inverse of the learned transformation.
Parameters
----------
states_transformed : (n, k) ndarray
Matrix of k transformed snapshots of dimension n.
inplace : bool
If True, overwrite the input data during inverse transformation.
If False, create a copy of the data to untransform.
Returns
-------
states: (n, k) ndarray
Matrix of k untransformed snapshots of dimension n.
"""
if not self._is_trained():
raise AttributeError("transformer not trained "
"(call fit_transform())")
self._check_shape(states_transformed)
Y = states_transformed if inplace else states_transformed.copy()
# Unscale (re-dimensionalize) the data.
if self.scaling:
Y -= self.shift_
Y /= self.scale_
# Uncenter the unscaled snapshots.
if self.center:
Y += (self.mean_.reshape((-1, 1)) if Y.ndim > 1 else self.mean_)
return Y
# Model persistence -------------------------------------------------------
def save(self, savefile, overwrite=False):
"""Save the current transformer to an HDF5 file.
Parameters
----------
savefile : str
Path of the file to save the transformer in.
overwrite : bool
If True, overwrite the file if it already exists. If False
(default), raise a FileExistsError if the file already exists.
"""
with hdf5_savehandle(savefile, overwrite) as hf:
# Store transformation hyperparameter metadata.
meta = hf.create_dataset("meta", shape=(0,))
meta.attrs["center"] = self.center
meta.attrs["scaling"] = self.scaling if self.scaling else False
meta.attrs["byrow"] = self.byrow
meta.attrs["verbose"] = self.verbose
# Store learned transformation parameters.
if hasattr(self, "n"):
hf.create_dataset("dimension/n", data=[self.n])
if self.center and hasattr(self, "mean_"):
hf.create_dataset("transformation/mean_", data=self.mean_)
if self.scaling and hasattr(self, "scale_"):
scale = self.scale_ if self.byrow else [self.scale_]
shift = self.shift_ if self.byrow else [self.shift_]
hf.create_dataset("transformation/scale_", data=scale)
hf.create_dataset("transformation/shift_", data=shift)
@classmethod
def load(cls, loadfile):
"""Load a SnapshotTransformer from an HDF5 file.
Parameters
----------
loadfile : str
Path to the file where the transformer was stored (via save()).
Returns
-------
SnapshotTransformer
"""
with hdf5_loadhandle(loadfile) as hf:
# Load transformation hyperparameters.
if "meta" not in hf:
raise LoadfileFormatError("invalid save format "
"(meta/ not found)")
scl = hf["meta"].attrs["scaling"]
transformer = cls(center=hf["meta"].attrs["center"],
scaling=scl if scl else None,
byrow=hf["meta"].attrs["byrow"],
verbose=hf["meta"].attrs["verbose"])
# Load learned transformation parameters.
if "dimension" in hf:
transformer.n = hf["dimension/n"][0]
if transformer.center and "transformation/mean_" in hf:
transformer.mean_ = hf["transformation/mean_"][:]
if transformer.scaling and "transformation/scale_" in hf:
ind = slice(None) if transformer.byrow else 0
transformer.scale_ = hf["transformation/scale_"][ind]
transformer.shift_ = hf["transformation/shift_"][ind]
return transformer
class SnapshotTransformerMulti(_BaseTransformer, _MultivarMixin):
"""Transformer for multivariate snapshots.
Groups multiple SnapshotTransformers for the centering and/or scaling
(in that order) of individual variables.
Parameters
----------
num_variables : int
Number of variables represented in a single snapshot (number of
individual transformations to learn). The dimension `n` of the
snapshots must be evenly divisible by num_variables; for example,
num_variables=3 means the first n entries of a snapshot correspond to
the first variable, and the next n entries correspond to the second
variable, and the last n entries correspond to the third variable.
center : bool OR list of num_variables bools
If True, shift the snapshots by the mean training snapshot.
If a list, center[i] is the centering directive for the ith variable.
scaling : str, None, OR list of length num_variables
If given, scale (non-dimensionalize) the centered snapshot entries.
If a list, scaling[i] is the scaling directive for the ith variable.
* 'standard': standardize to zero mean and unit standard deviation.
* 'minmax': minmax scaling to [0, 1].
* 'minmaxsym': minmax scaling to [-1, 1].
* 'maxabs': maximum absolute scaling to [-1, 1] (no shift).
* 'maxabssym': maximum absolute scaling to [-1, 1] (mean shift).
variable_names : list of num_variables strings
Names for each of the `num_variables` variables.
Defaults to 'x1', 'x2', ....
verbose : bool
If True, print information upon learning a transformation.
Attributes
----------
transformers : list of num_variables SnapshotTransformers
Transformers for each snapshot variable.
n : int
Total dimension of the snapshots (all variables).
ni : int
Dimension of individual variables, i.e., ni = n / num_variables.
Notes
-----
See SnapshotTransformer for details on available transformations.
Examples
--------
# Center first and third variables and minmax scale the second variable.
>>> stm = SnapshotTransformerMulti(3, center=(True, False, True),
... scaling=(None, "minmax", None))
# Center 6 variables and scale the final variable with a standard scaling.
>>> stm = SnapshotTransformerMulti(6, center=True,
... scaling=(None, None, None,
... None, None, "standard"))
# OR
>>> stm = SnapshotTransformerMulti(6, center=True, scaling=None)
>>> stm[-1].scaling = "standard"
"""
def __init__(self, num_variables, center=False, scaling=None,
variable_names=None, verbose=False):
"""Interpret hyperparameters and initialize transformers."""
_MultivarMixin.__init__(self, num_variables, variable_names)
def _process_arg(attr, name, dtype):
"""Validation for centering and scaling directives."""
if isinstance(attr, dtype):
attr = (attr,) * num_variables
if len(attr) != num_variables:
raise ValueError(f"len({name}) = {len(attr)} "
f"!= {num_variables} = num_variables")
return attr
# Process and store transformation directives.
centers = _process_arg(center, "center", bool)
scalings = _process_arg(scaling, "scaling", (type(None), str))
# Initialize transformers.
self.transformers = [SnapshotTransformer(center=ctr, scaling=scl,
byrow=False, verbose=False)
for ctr, scl in zip(centers, scalings)]
self.verbose = verbose
# Properties --------------------------------------------------------------
@property
def center(self):
"""Snapshot mean-centering directive."""
return tuple(st.center for st in self.transformers)
@property
def scaling(self):
"""Entrywise scaling (non-dimensionalization) directive.
* None: no scaling.
* 'standard': standardize to zero mean and unit standard deviation.
* 'minmax': minmax scaling to [0, 1].
* 'minmaxsym': minmax scaling to [-1, 1].
* 'maxabs': maximum absolute scaling to [-1, 1] (no shift).
* 'maxabssym': maximum absolute scaling to [-1, 1] (mean shift).
"""
return tuple(st.scaling for st in self.transformers)
@property
def variable_names(self):
"""Names for each of the `num_variables` variables."""
return self.__variable_names
@variable_names.setter
def variable_names(self, names):
if names is None:
names = [f"variable {i+1}" for i in range(self.num_variables)]
if not isinstance(names, list) or len(names) != self.num_variables:
raise TypeError("variable_names must be list of"
f" length {self.num_variables}")
self.__variable_names = names
@property
def verbose(self):
"""If True, print information about upon learning a transformation."""
return self.__verbose
@verbose.setter
def verbose(self, vbs):
"""Set verbosity of all transformers (uniformly)."""
self.__verbose = bool(vbs)
for st in self.transformers:
st.verbose = self.__verbose
@property
def mean_(self):
"""Mean training snapshot across all transforms ((n,) ndarray)."""
if not self._is_trained():
return None
zeros = np.zeros(self.ni)
return np.concatenate([(st.mean_ if st.center else zeros)
for st in self.transformers])
def __getitem__(self, key):
"""Get the transformer for variable i."""
return self.transformers[key]
def __setitem__(self, key, obj):
"""Set the transformer for variable i."""
if not isinstance(obj, SnapshotTransformer):
raise TypeError("assignment object must be SnapshotTransformer")
self.transformers[key] = obj
def __len__(self):
"""Length = number of variables."""
return len(self.transformers)
def __eq__(self, other):
"""Test two SnapshotTransformerMulti objects for equality."""
if not isinstance(other, self.__class__):
return False
if self.num_variables != other.num_variables:
return False
return all(t1 == t2 for t1, t2 in zip(self.transformers,
other.transformers))
# Printing ----------------------------------------------------------------
def __str__(self):
"""String representation: centering and scaling directives."""
out = [f"{self.num_variables}-variable snapshot transformer"]
namelength = max(len(name) for name in self.variable_names)
for name, st in zip(self.variable_names, self.transformers):
out.append(f"* {{:>{namelength}}} | {st}".format(name))
return '\n'.join(out)
def __repr__(self):
"""Unique ID + string representation."""
uniqueID = f"<{self.__class__.__name__} object at {hex(id(self))}>"
return f"{uniqueID}\n{str(self)}"
# Main routines -----------------------------------------------------------
def _is_trained(self):
"""Return True if transform() and inverse_transform() are ready."""
return all(st._is_trained() for st in self.transformers)
def _apply(self, method, Q, inplace):
"""Apply a method of each transformer to the corresponding chunk of Q.
"""
Ys = []
for st, var, name in zip(self.transformers,
np.split(Q, self.num_variables, axis=0),
self.variable_names):
if method is SnapshotTransformer.fit_transform and self.verbose:
print(f"{name}:")
Ys.append(method(st, var, inplace=inplace))
return Q if inplace else np.concatenate(Ys, axis=0)
def fit_transform(self, states, inplace=False):
"""Learn and apply the transformation.
Parameters
----------
states : (n, k) ndarray
Matrix of k snapshots. Each column is a snapshot of dimension n;
this dimension must be evenly divisible by `num_variables`.
inplace : bool
If True, overwrite the input data during transformation.
If False, create a copy of the data to transform.
Returns
-------
states_transformed: (n, k) ndarray
Matrix of k transformed n-dimensional snapshots.
"""
if states.ndim != 2:
raise ValueError("2D array required to fit transformer")
self.n = states.shape[0]
Y = self._apply(SnapshotTransformer.fit_transform, states, inplace)
return Y
def transform(self, states, inplace=False):
"""Apply the learned transformation.
Parameters
----------
states : (n, k) ndarray
Matrix of k snapshots. Each column is a snapshot of dimension n;
this dimension must be evenly divisible by `num_variables`.
inplace : bool
If True, overwrite the input data during transformation.
If False, create a copy of the data to transform.
Returns
-------
states_transformed: (n, k) ndarray
Matrix of k transformed n-dimensional snapshots.
"""
if not self._is_trained():
raise AttributeError("transformer not trained "
"(call fit_transform())")
self._check_shape(states)
return self._apply(SnapshotTransformer.transform, states, inplace)
def inverse_transform(self, states_transformed, inplace=False):
"""Apply the inverse of the learned transformation.
Parameters
----------
states_transformed : (n, k) ndarray
Matrix of k transformed n-dimensional snapshots.
inplace : bool
If True, overwrite the input data during inverse transformation.
If False, create a copy of the data to untransform.
Returns
-------
states: (n, k) ndarray
Matrix of k untransformed n-dimensional snapshots.
"""
if not self._is_trained():
raise AttributeError("transformer not trained "
"(call fit_transform())")
self._check_shape(states_transformed)
return self._apply(SnapshotTransformer.inverse_transform,
states_transformed, inplace)
# Model persistence -------------------------------------------------------
def save(self, savefile, overwrite=False):
"""Save the current transformers to an HDF5 file.
Parameters
----------
savefile : str
Path of the file to save the transformer in.
overwrite : bool
If True, overwrite the file if it already exists. If False
(default), raise a FileExistsError if the file already exists.
"""
with hdf5_savehandle(savefile, overwrite) as hf:
# Metadata
meta = hf.create_dataset("meta", shape=(0,))
meta.attrs["num_variables"] = self.num_variables
meta.attrs["verbose"] = self.verbose
meta.attrs["variable_names"] = self.variable_names
for i in range(self.num_variables):
self.transformers[i].save(hf.create_group(f"variable{i+1}"))
@classmethod
def load(cls, loadfile):
"""Load a SnapshotTransformerMulti object from an HDF5 file.
Parameters
----------
loadfile : str
Path to the file where the transformer was stored (via save()).
Returns
-------
SnapshotTransformerMulti
"""
with hdf5_loadhandle(loadfile) as hf:
# Load transformation hyperparameters.
if "meta" not in hf:
raise LoadfileFormatError("invalid save format "
"(meta/ not found)")
num_variables = hf["meta"].attrs["num_variables"]
verbose = hf["meta"].attrs["verbose"]
names = hf["meta"].attrs["variable_names"].tolist()
stm = cls(num_variables, variable_names=names, verbose=verbose)
# Initialize individual transformers.
for i in range(num_variables):
group = f"variable{i+1}"
if group not in hf:
raise LoadfileFormatError("invalid save format "
f"({group}/ not found)")
stm[i] = SnapshotTransformer.load(hf[group])
return stm | /rom_operator_inference-1.4.1-py3-none-any.whl/rom_operator_inference/pre/transform/_shiftscale.py | 0.972125 | 0.58519 | _shiftscale.py | pypi |
"""Tools for basis computation and reduced-dimension selection."""
__all__ = [
"PODBasis",
"PODBasisMulti",
"pod_basis",
"svdval_decay",
"cumulative_energy",
"residual_energy",
"projection_error",
]
import h5py
import numpy as np
import scipy.linalg as la
import scipy.sparse.linalg as spla
import sklearn.utils.extmath as sklmath
import matplotlib.pyplot as plt
from ...errors import LoadfileFormatError
from ...utils import hdf5_savehandle, hdf5_loadhandle
from .. import transform
from ._linear import LinearBasis, LinearBasisMulti
class PODBasis(LinearBasis):
"""Proper othogonal decomposition basis, derived from the principal left
singular vectors of a collection of states, Q:
svd(Q) = V S W^T --> POD basis = V[:, :r].
The low-dimensional approximation is linear:
q = Vr @ q_ := sum([Vr[:, j]*q_[j] for j in range(Vr.shape[1])])
(full_state = basis * reduced_state).
Parameters
----------
transformer : Transformer or None
Transformer for pre-processing states before dimensionality reduction.
economize : bool
If True, throw away basis vectors beyond the first `r` whenever
the `r` attribute is changed.
Attributes
----------
n : int
Dimension of the state space (size of each basis vector).
r : int
Dimension of the basis (number of basis vectors in the representation).
shape : tulpe
Dimensions (n, r).
entries : (n, r) ndarray
Entries of the basis matrix Vr.
svdvals : (k,) or (r,) ndarray
Singular values of the training data.
dual : (n, r) ndarray
Right singular vectors of the data.
"""
def __init__(self, transformer=None, economize=False):
"""Initialize an empty basis and set the transformer."""
self.__r = None
self.__entries = None
self.__svdvals = None
self.__dual = None
self.economize = bool(economize)
LinearBasis.__init__(self, transformer)
# TODO: inner product weight matrix.
# Dimension selection -----------------------------------------------------
def __shrink_stored_entries_to(self, r):
if self.entries is not None and r is not None:
self.__entries = self.__entries[:, :r].copy()
self.__dual = self.__dual[:, :r].copy()
@property
def r(self):
"""Dimension of the basis, i.e., the number of basis vectors."""
return self.__r
@r.setter
def r(self, r):
"""Set the reduced dimension."""
if r is None:
self.__r = None
return
if self.entries is None:
raise AttributeError("empty basis (call fit() first)")
if self.__entries.shape[1] < r:
raise ValueError(f"only {self.__entries.shape[1]:d} "
"basis vectors stored")
self.__r = r
# Forget higher-order basis vectors.
if self.economize:
self.__shrink_stored_entries_to(r)
@property
def economize(self):
"""If True, throw away basis vectors beyond the first `r` whenever
the `r` attribute is changed."""
return self.__economize
@economize.setter
def economize(self, econ):
"""Set the economize flag."""
self.__economize = bool(econ)
if self.__economize:
self.__shrink_stored_entries_to(self.r)
def set_dimension(self, r=None,
cumulative_energy=None, residual_energy=None):
"""Set the basis dimension, i.e., the number of basis vectors.
Parameters
----------
r : int or None
Number of basis vectors to include in the basis.
cumulative_energy : float or None
Cumulative energy threshold. If provided and r=None, choose the
smallest number of basis vectors so that the cumulative singular
value energy exceeds the given threshold.
residual_energy : float or None
Residual energy threshold. If provided, r=None, and
cumulative_energy=None, choose the smallest number of basis vectors
so that the residual singular value energy is less than the given
threshold.
Returns
-------
r : int
Selected basis dimension.
"""
if r is None:
self._check_svdvals_exist()
svdvals2 = self.svdvals**2
cum_energy = np.cumsum(svdvals2) / np.sum(svdvals2)
if cumulative_energy is not None:
r = int(np.searchsorted(cum_energy, cumulative_energy)) + 1
elif residual_energy is not None:
r = np.count_nonzero(1 - cum_energy > residual_energy) + 1
else:
r = self.entries.shape[1]
self.r = r
@property
def rmax(self):
"""Total number of stored basis vectors, i.e., the maximum value of r.
Always the same as the dimension r if economize=True.
"""
return None if self.__entries is None else self.__entries.shape[1]
# Properties --------------------------------------------------------------
@property
def entries(self):
"""Entries of the basis."""
return None if self.__entries is None else self.__entries[:, :self.r]
@property
def shape(self):
"""Dimensions of the basis (state_dimension, reduced_dimension)."""
return None if self.entries is None else (self.n, self.r)
@property
def svdvals(self):
"""Singular values of the training data."""
return self.__svdvals
@property
def dual(self):
"""Leading *right* singular vectors."""
return None if self.__dual is None else self.__dual[:, :self.r]
# Fitting -----------------------------------------------------------------
@staticmethod
def _validate_rank(states, r):
"""Validate the rank `r` (if given)."""
rmax = min(states.shape)
if r is not None and (r > rmax or r < 1):
raise ValueError(f"invalid POD rank r = {r} (need 1 ≤ r ≤ {rmax})")
def _store_svd(self, V, svals, Wt):
"""Store SVD components as private attributes."""
self.__entries = V
self.__svdvals = np.sort(svals)[::-1] if svals is not None else None
self.__dual = Wt.T if Wt is not None else None
def fit(self, states,
r=None, cumulative_energy=None, residual_energy=None, **options):
"""Compute the POD basis of rank r corresponding to the states
via the compact/thin singular value decomposition (scipy.linalg.svd()).
Parameters
----------
states : (n, k) ndarray
Matrix of k snapshots. Each column is a single snapshot of
dimension n. If the basis has a transformer, the states are
transformed (and the transformer is updated) before computing
the basis entries.
r : int or None
Number of vectors to include in the basis.
If None, use the largest possible basis (r = min{n, k}).
cumulative_energy : float or None
Cumulative energy threshold. If provided and r=None, choose the
smallest number of basis vectors so that the cumulative singular
value energy exceeds the given threshold.
residual_energy : float or None
Residual energy threshold. If provided, r=None, and
cumulative_energy=None, choose the smallest number of basis vectors
so that the residual singular value energy is less than the given
threshold.
options
Additional parameters for scipy.linalg.svd().
Notes
-----
This method computes the full singular value decomposition of `states`.
The method fit_randomized() uses a randomized SVD.
"""
self._validate_rank(states, r)
# Transform states.
if self.transformer is not None:
states = self.transformer.fit_transform(states)
# Compute the complete compact SVD and store the results.
V, svdvals, Wt = la.svd(states, full_matrices=False, **options)
self._store_svd(V, svdvals, Wt)
self.set_dimension(r, cumulative_energy, residual_energy)
return self
def fit_randomized(self, states, r, **options):
"""Compute the POD basis of rank r corresponding to the states
via the randomized singular value decomposition
(sklearn.utils.extmath.randomized_svd()).
Parameters
----------
states : (n, k) ndarray
Matrix of k snapshots. Each column is a single snapshot of
dimension n. If the basis has a transformer, the states are
transformed (and the transformer is updated) before computing
the basis entries.
r : int
Number of vectors to include in the basis.
options
Additional parameters for sklearn.utils.extmath.randomized_svd().
Notes
-----
This method uses an iterative method to approximate a partial singular
value decomposition, which can be useful for very large n.
The method fit() computes the full singular value decomposition.
"""
self._validate_rank(states, r)
# Transform the states.
if self.transformer is not None:
states = self.transformer.fit_transform(states)
# Compute the randomized SVD and store the results.
if "random_state" not in options:
options["random_state"] = None
V, svdvals, Wt = sklmath.randomized_svd(states, r, **options)
self._store_svd(V, svdvals, Wt)
self.set_dimension(r)
return self
# Visualization -----------------------------------------------------------
def _check_svdvals_exist(self):
"""Raise an AttributeError if there are no singular values stored."""
if self.svdvals is None:
raise AttributeError("no singular value data (call fit() first)")
def plot_svdval_decay(self, threshold=None, normalize=True, ax=None):
"""Plot the normalized singular value decay.
Parameters
----------
threshold : float or None
Cutoff value to mark on the plot.
normalize : bool
If True, normalize so that the maximum singular value is 1.
ax : plt.Axes or None
Matplotlib Axes to plot on. If None, a new figure is created.
Returns
-------
ax : plt.Axes
Matplotlib Axes for the plot.
"""
self._check_svdvals_exist()
if ax is None:
ax = plt.figure().add_subplot(111)
singular_values = self.svdvals
if normalize:
singular_values = singular_values / singular_values[0]
j = np.arange(1, singular_values.size + 1)
ax.semilogy(j, singular_values, "k*", ms=10, mew=0, zorder=3)
ax.set_xlim((0, j.size))
if threshold is not None:
rank = np.count_nonzero(singular_values > threshold)
ax.axhline(threshold, color="gray", linewidth=.5)
ax.axvline(rank, color="gray", linewidth=.5)
# TODO: label lines with text.
ax.set_xlabel("Singular value index")
ax.set_ylabel(("Normalized s" if normalize else '') + "ingular values")
return ax
def plot_residual_energy(self, threshold=None, ax=None):
"""Plot the residual singular value energy decay, defined by
residual_j = sum(svdvals[j+1:]**2) / sum(svdvals**2).
Parameters
----------
threshold : 0 ≤ float ≤ 1 or None
Cutoff value to mark on the plot.
ax : plt.Axes or None
Matplotlib Axes to plot on. If None, a new figure is created.
Returns
-------
ax : plt.Axes
Matplotlib Axes for the plot.
"""
self._check_svdvals_exist()
if ax is None:
ax = plt.figure().add_subplot(111)
svdvals2 = self.svdvals**2
res_energy = 1 - (np.cumsum(svdvals2) / np.sum(svdvals2))
j = np.arange(1, svdvals2.size + 1)
ax.semilogy(j, res_energy, "C0.-", ms=10, lw=1, zorder=3)
ax.set_xlim(0, j.size)
if threshold is not None:
rank = np.count_nonzero(res_energy > threshold) + 1
ax.axhline(threshold, color="gray", linewidth=.5)
ax.axvline(rank, color="gray", linewidth=.5)
# TODO: label lines with text.
ax.set_xlabel("Singular value index")
ax.set_ylabel("Residual energy")
return ax
def plot_cumulative_energy(self, threshold=None, ax=None):
"""Plot the cumulative singular value energy, defined by
energy_j = sum(svdvals[:j]**2) / sum(svdvals**2).
Parameters
----------
threshold : 0 ≤ float ≤ 1 or None
Cutoff value to mark on the plot.
ax : plt.Axes or None
Matplotlib Axes to plot on. If None, a new figure is created.
Returns
-------
ax : plt.Axes
Matplotlib Axes for the plot.
"""
self._check_svdvals_exist()
if ax is None:
ax = plt.figure().add_subplot(111)
svdvals2 = self.svdvals**2
cum_energy = np.cumsum(svdvals2) / np.sum(svdvals2)
j = np.arange(1, svdvals2.size + 1)
ax.plot(j, cum_energy, "C0.-", ms=10, lw=1, zorder=3)
ax.set_xlim(0, j.size)
if threshold is not None:
rank = int(np.searchsorted(cum_energy, threshold)) + 1
ax.axhline(threshold, color="gray", linewidth=.5)
ax.axvline(rank, color="gray", linewidth=.5)
# TODO: label lines with text.
ax.set_xlabel(r"Singular value index")
ax.set_ylabel(r"Residual energy")
return ax
def plot_energy(self):
"""Plot the normalized singular value and residual energy decay."""
self._check_svdvals_exist()
fig, axes = plt.subplots(1, 2, figsize=(8, 3))
self.plot_svdval_decay(ax=axes[0])
self.plot_residual_energy(ax=axes[1])
fig.tight_layout()
return fig, axes
# Persistence -------------------------------------------------------------
def save(self, savefile, overwrite=False):
"""Save the basis to an HDF5 file.
Parameters
----------
savefile : str
Path of the file to save the basis in.
overwrite : bool
If True, overwrite the file if it already exists. If False
(default), raise a FileExistsError if the file already exists.
"""
with hdf5_savehandle(savefile, overwrite) as hf:
meta = hf.create_dataset("meta", shape=(0,))
meta.attrs["economize"] = int(self.economize)
if self.r is not None:
meta.attrs["r"] = self.r
if self.transformer is not None:
TransformerClass = self.transformer.__class__.__name__
meta.attrs["TransformerClass"] = TransformerClass
self.transformer.save(hf.create_group("transformer"))
if self.entries is not None:
hf.create_dataset("entries", data=self.__entries)
hf.create_dataset("svdvals", data=self.__svdvals)
hf.create_dataset("dual", data=self.__dual)
@classmethod
def load(cls, loadfile):
"""Load a basis from an HDF5 file.
Parameters
----------
loadfile : str
Path to the file where the basis was stored (via save()).
Returns
-------
PODBasis object
"""
entries, svdvals, dualT, transformer, r = None, None, None, None, None
with hdf5_loadhandle(loadfile) as hf:
if "meta" not in hf:
raise LoadfileFormatError("invalid save format "
"(meta/ not found)")
economize = bool(hf["meta"].attrs["economize"])
if "r" in hf["meta"].attrs:
r = int(hf["meta"].attrs["r"])
if "transformer" in hf:
TransformerClassName = hf["meta"].attrs["TransformerClass"]
TransformerClass = getattr(transform, TransformerClassName)
transformer = TransformerClass.load(hf["transformer"])
if "entries" in hf:
entries = hf["entries"][:]
svdvals = hf["svdvals"][:]
dualT = hf["dual"][:].T
out = cls(transformer=transformer, economize=economize)
out._store_svd(entries, svdvals, dualT)
out.r = r
return out
class PODBasisMulti(LinearBasisMulti):
r"""Block-diagonal proper othogonal decomposition basis, derived from the
principal left singular vectors of a collection of states grouped into
blocks:
[ Q1 ] [ Vr1 ]
Q1 = [ Q2 ] --> POD basis = [ Vr2 ], where
[ | ] [ \ ]
svd(Qi) = Vi Si Wi^T --> Vri = Vi[:, :ri].
The low-dimensional approximation is linear (see PODBasis).
Parameters
----------
num_variables : int
Number of variables represented in a single snapshot (number of
individual bases to learn). The dimension `n` of the snapshots
must be evenly divisible by num_variables; for example,
num_variables=3 means the first n entries of a snapshot correspond to
the first variable, and the next n entries correspond to the second
variable, and the last n entries correspond to the third variable.
transformer : Transformer or None
Transformer for pre-processing states before dimensionality reduction.
See SnapshotTransformerMulti for a transformer that scales state
variables individually.
economize : bool
If True, throw away basis vectors beyond the first `r` whenever
the `r` attribute is changed.
variable_names : list of num_variables strings, optional
Names for each of the `num_variables` variables.
Defaults to "variable 1", "variable 2", ....
Attributes
----------
n : int
Total dimension of the state space.
ni : int
Dimension of individual variables, i.e., ni = n / num_variables.
r : int
Total dimension of the basis (number of basis vectors).
rs : list(int)
Dimensions for each diagonal basis block, i.e., `r[i]` is the number
of basis vectors in the representation for state variable `i`.
entries : (n, r) ndarray or scipy.sparse.csc_matrix.
Entries of the basis matrix.
svdvals : (k,) or (r,) ndarray
Singular values of the training data.
dual : (n, r) ndarray
Right singular vectors of the data.
"""
_BasisClass = PODBasis
def __init__(self, num_variables,
transformer=None, economize=False, variable_names=None):
"""Initialize an empty basis and set the transformer."""
# Store dimensions and transformer.
LinearBasisMulti.__init__(self, num_variables,
transformer=transformer,
variable_names=variable_names)
self.economize = bool(economize)
# Properties -------------------------------------------------------------
@property
def rs(self):
"""Dimensions for each diagonal basis block, i.e., `rs[i]` is the
number of basis vectors in the representation for state variable `i`.
"""
rs = [basis.r for basis in self.bases]
return rs if any(rs) else None
@rs.setter
def rs(self, rs):
"""Reset the basis dimensions."""
if len(rs) != self.num_variables:
raise ValueError(f"rs must have length {self.num_variables}")
# This will raise an AttributeError if the entries are not set.
for basis, r in zip(self.bases, rs):
basis.r = r # Economization is also taken care of here.
self._set_entries()
@property
def economize(self):
"""If True, throw away basis vectors beyond the first `r` whenever
the `r` attribute is changed."""
return self.__economize
@economize.setter
def economize(self, econ):
"""Set the economize flag."""
econ = bool(econ)
for basis in self.bases:
basis.economize = econ
self.__economize = econ
# Main routines -----------------------------------------------------------
def fit(self, states,
rs=None, cumulative_energy=None, residual_energy=None,
**options):
"""Fit the basis to the data.
Parameters
----------
states : (n, k) ndarray
Matrix of k snapshots. Each column is a single snapshot of
dimension n. If the basis has a transformer, the states are
transformed (and the transformer is updated) before computing
the basis entries.
rs : list(int) or None
Number of basis vectors for each state variable.
If None, use the largest possible bases (ri = min{ni, k}).
cumulative_energy : float or None
Cumulative energy threshold. If provided and rs=None, choose the
smallest number of basis vectors so that the cumulative singular
value energy exceeds the given threshold.
residual_energy : float or None
Residual energy threshold. If provided, rs=None, and
cumulative_energy=None, choose the smallest number of basis vectors
so that the residual singular value energy is less than the given
threshold.
options
Additional parameters for scipy.linalg.svd().
"""
# Transform the states.
if self.transformer is not None:
states = self.transformer.fit_transform(states)
# Split the state and compute the basis for each variable.
if rs is None:
rs = [None] * self.num_variables
for basis, r, var in zip(self.bases, rs,
np.split(states, self.num_variables, axis=0)):
basis.fit(var, r, cumulative_energy, residual_energy, **options)
self._set_entries()
return self
def fit_randomized(self, states, rs, **options):
"""Compute the POD basis of rank r corresponding to the states
via the randomized singular value decomposition
(sklearn.utils.extmath.randomized_svd()).
Parameters
----------
states : (n, k) ndarray
Matrix of k snapshots. Each column is a single snapshot of
dimension n. If the basis has a transformer, the states are
transformed (and the transformer is updated) before computing
the basis entries.
rs : list(int) or None
Number of basis vectors for each state variable.
options
Additional parameters for sklearn.utils.extmath.randomized_svd().
Notes
-----
This method uses an iterative method to approximate a partial singular
value decomposition, which can be useful for very large n.
The method fit() computes the full singular value decomposition.
"""
# Transform the states.
if self.transformer is not None:
states = self.transformer.fit_transform(states)
# Fit the individual bases.
if not isinstance(rs, list) or len(rs) != self.num_variables:
raise TypeError(f"rs must be list of length {self.num_variables}")
for basis, r, var in zip(self.bases, rs,
np.split(states, self.num_variables, axis=0)):
basis.fit_randomized(var, r, **options)
self._set_entries()
return self
# Persistence -------------------------------------------------------------
def save(self, savefile, save_transformer=True, overwrite=False):
"""Save the basis to an HDF5 file.
Parameters
----------
savefile : str
Path of the file to save the basis in.
save_transformer : bool
If True, save the transformer as well as the basis entries.
If False, only save the basis entries.
overwrite : bool
If True, overwrite the file if it already exists. If False
(default), raise a FileExistsError if the file already exists.
"""
LinearBasisMulti.save(self, savefile, save_transformer, overwrite)
with h5py.File(savefile, 'a') as hf:
hf["meta"].attrs["economize"] = self.economize
@classmethod
def load(cls, loadfile):
"""Load a basis from an HDF5 file.
Parameters
----------
loadfile : str
Path to the file where the basis was stored (via save()).
Returns
-------
PODBasis object
"""
# basis = LinearBasisMulti.load(cls, loadfile)
basis = super(cls, cls).load(loadfile)
with h5py.File(loadfile, 'r') as hf:
basis.economize = hf["meta"].attrs["economize"]
return basis
# Functional API ==============================================================
def pod_basis(states, r=None, mode="dense", return_W=False, **options):
"""Compute the POD basis of rank r corresponding to the states.
Parameters
----------
states : (n, k) ndarray
Matrix of k snapshots. Each column is a single snapshot of dimension n.
r : int or None
Number of POD basis vectors and singular values to compute.
If None (default), compute the full SVD.
mode : str
Strategy to use for computing the truncated SVD of the states. Options:
* "dense" (default): Use scipy.linalg.svd() to compute the SVD.
May be inefficient for very large matrices.
* "sparse": Use scipy.sparse.linalg.svds() to compute the SVD.
This uses ARPACK for the eigensolver. Inefficient for non-sparse
matrices; requires separate computations for full SVD.
* "randomized": Compute an approximate SVD with a randomized approach
using sklearn.utils.extmath.randomized_svd(). This gives faster
results at the cost of some accuracy.
return_W : bool
If True, also return the first r *right* singular vectors.
options
Additional parameters for the SVD solver, which depends on `mode`:
* "dense": scipy.linalg.svd()
* "sparse": scipy.sparse.linalg.svds()
* "randomized": sklearn.utils.extmath.randomized_svd()
Returns
-------
basis : (n, r) ndarray
First r POD basis vectors (left singular vectors).
Each column is a single basis vector of dimension n.
svdvals : (n,), (k,), or (r,) ndarray
Singular values in descending order. Always returns as many as are
calculated: r for mode="randomize" or "sparse", min(n, k) for "dense".
W : (k, r) ndarray
First r **right** singular vectors, as columns.
**Only returned if return_W=True.**
"""
# Validate the rank.
rmax = min(states.shape)
if r is None:
r = rmax
if r > rmax or r < 1:
raise ValueError(f"invalid POD rank r = {r} (need 1 ≤ r ≤ {rmax})")
if mode == "dense" or mode == "simple":
V, svdvals, Wt = la.svd(states, full_matrices=False, **options)
W = Wt.T
elif mode == "sparse" or mode == "arpack":
get_smallest = False
if r == rmax:
r -= 1
get_smallest = True
# Compute all but the last svd vectors / values (maximum allowed).
V, svdvals, Wt = spla.svds(states, r, which="LM",
return_singular_vectors=True, **options)
V = V[:, ::-1]
svdvals = svdvals[::-1]
W = Wt[::-1, :].T
# Get the smallest vector / value separately.
if get_smallest:
V1, smallest, W1 = spla.svds(states, 1, which="SM",
return_singular_vectors='u',
**options)
print(f"W1.shape: {W1.shape}")
V = np.concatenate((V, V1), axis=1)
svdvals = np.concatenate((svdvals, smallest))
W = np.concatenate((W, W1.T), axis=1)
r += 1
elif mode == "randomized":
if "random_state" not in options:
options["random_state"] = None
V, svdvals, Wt = sklmath.randomized_svd(states, r, **options)
W = Wt.T
else:
raise NotImplementedError(f"invalid mode '{mode}'")
if return_W:
return V[:, :r], svdvals, W[:, :r]
return V[:, :r], svdvals
def svdval_decay(singular_values, tol=1e-8, normalize=True,
plot=True, ax=None):
"""Count the number of normalized singular values that are greater than
the specified tolerance.
Parameters
----------
singular_values : (n,) ndarray
Singular values of a snapshot set, e.g., scipy.linalg.svdvals(states).
tol : float or list(float)
Cutoff value(s) for the singular values.
normalize : bool
If True, normalize so that the maximum singular value is 1.
plot : bool
If True, plot the singular values and the cutoff value(s) against the
singular value index.
ax : plt.Axes or None
Matplotlib Axes to plot the results on if plot = True.
If not given, a new single-axes figure is created.
Returns
-------
ranks : int or list(int)
The number of singular values greater than the cutoff value(s).
"""
# Calculate the number of singular values above the cutoff value(s).
one_tol = np.isscalar(tol)
if one_tol:
tol = [tol]
singular_values = np.sort(singular_values)[::-1]
if normalize:
singular_values /= singular_values[0]
ranks = [np.count_nonzero(singular_values > epsilon) for epsilon in tol]
if plot:
# Visualize singular values and cutoff value(s).
if ax is None:
ax = plt.figure().add_subplot(111)
j = np.arange(1, singular_values.size + 1)
ax.semilogy(j, singular_values, 'C0*', ms=10, mew=0, zorder=3)
ax.set_xlim((0, j.size))
ylim = ax.get_ylim()
for epsilon, r in zip(tol, ranks):
ax.axhline(epsilon, color="black", linewidth=.5, alpha=.75)
ax.axvline(r, color="black", linewidth=.5, alpha=.75)
ax.set_ylim(ylim)
ax.set_xlabel(r"Singular value index $j$")
ax.set_ylabel(r"Singular value $\sigma_j$")
return ranks[0] if one_tol else ranks
def cumulative_energy(singular_values, thresh=.9999, plot=True, ax=None):
"""Compute the number of singular values needed to surpass a given
energy threshold. The energy of j singular values is defined by
energy_j = sum(singular_values[:j]**2) / sum(singular_values**2).
Parameters
----------
singular_values : (n,) ndarray
Singular values of a snapshot set, e.g., scipy.linalg.svdvals(states).
thresh : float or list(float)
Energy capture threshold(s). Default is 99.99%.
plot : bool
If True, plot the singular values and the cumulative energy against
the singular value index (linear scale).
ax : plt.Axes or None
Matplotlib Axes to plot the results on if plot = True.
If not given, a new single-axes figure is created.
Returns
-------
ranks : int or list(int)
The number of singular values required to capture more than each
energy capture threshold.
"""
# Calculate the cumulative energy.
svdvals2 = np.sort(singular_values)[::-1]**2
cum_energy = np.cumsum(svdvals2) / np.sum(svdvals2)
# Determine the points at which the cumulative energy passes the threshold.
one_thresh = np.isscalar(thresh)
if one_thresh:
thresh = [thresh]
ranks = [int(np.searchsorted(cum_energy, xi)) + 1 for xi in thresh]
if plot:
# Visualize cumulative energy and threshold value(s).
if ax is None:
ax = plt.figure().add_subplot(111)
j = np.arange(1, singular_values.size + 1)
ax.plot(j, cum_energy, 'C2.-', ms=10, lw=1, zorder=3)
ax.set_xlim(0, j.size)
for xi, r in zip(thresh, ranks):
ax.axhline(xi, color="black", linewidth=.5, alpha=.5)
ax.axvline(r, color="black", linewidth=.5, alpha=.5)
ax.set_xlabel(r"Singular value index")
ax.set_ylabel(r"Cumulative energy")
return ranks[0] if one_thresh else ranks
def residual_energy(singular_values, tol=1e-6, plot=True, ax=None):
"""Compute the number of singular values needed such that the residual
energy drops beneath the given tolerance. The residual energy of j
singular values is defined by
residual_j = 1 - sum(singular_values[:j]**2) / sum(singular_values**2).
Parameters
----------
singular_values : (n,) ndarray
Singular values of a snapshot set, e.g., scipy.linalg.svdvals(states).
tol : float or list(float)
Energy residual tolerance(s). Default is 10^-6.
plot : bool
If True, plot the singular values and the residual energy against
the singular value index (log scale).
ax : plt.Axes or None
Matplotlib Axes to plot the results on if plot = True.
If not given, a new single-axes figure is created.
Returns
-------
ranks : int or list(int)
Number of singular values required to for the residual energy to drop
beneath each tolerance.
"""
# Calculate the residual energy.
svdvals2 = np.sort(singular_values)[::-1]**2
res_energy = 1 - (np.cumsum(svdvals2) / np.sum(svdvals2))
# Determine the points when the residual energy dips under the tolerance.
one_tol = np.isscalar(tol)
if one_tol:
tol = [tol]
ranks = [np.count_nonzero(res_energy > epsilon) + 1 for epsilon in tol]
if plot:
# Visualize residual energy and tolerance value(s).
if ax is None:
ax = plt.figure().add_subplot(111)
j = np.arange(1, singular_values.size + 1)
ax.semilogy(j, res_energy, 'C1.-', ms=10, lw=1, zorder=3)
ax.set_xlim(0, j.size)
for epsilon, r in zip(tol, ranks):
ax.axhline(epsilon, color="black", linewidth=.5, alpha=.5)
ax.axvline(r, color="black", linewidth=.5, alpha=.5)
ax.set_xlabel(r"Singular value index")
ax.set_ylabel(r"Residual energy")
return ranks[0] if one_tol else ranks
def projection_error(states, basis):
"""Calculate the absolute and relative projection errors induced by
projecting states to a low dimensional basis, i.e.,
absolute_error = ||Q - Vr Vr^T Q||_F,
relative_error = ||Q - Vr Vr^T Q||_F / ||Q||_F
where Q = states and Vr = basis. Note that Vr Vr^T is the orthogonal
projector onto subspace of R^n defined by the basis.
Parameters
----------
states : (n, k) or (k,) ndarray
Matrix of k snapshots where each column is a single snapshot, or a
single 1D snapshot. If 2D, use the Frobenius norm; if 1D, the l2 norm.
Vr : (n, r) ndarray
Low-dimensional basis of rank r. Each column is one basis vector.
Returns
-------
absolute_error : float
Absolute projection error ||Q - Vr Vr^T Q||_F.
relative_error : float
Relative projection error ||Q - Vr Vr^T Q||_F / ||Q||_F.
"""
norm_of_states = la.norm(states)
absolute_error = la.norm(states - basis @ (basis.T @ states))
return absolute_error, absolute_error / norm_of_states | /rom_operator_inference-1.4.1-py3-none-any.whl/rom_operator_inference/pre/basis/_pod.py | 0.917128 | 0.711306 | _pod.py | pypi |
"""Base basis class."""
import abc
import scipy.linalg as la
from ..transform._base import _check_is_transformer
class _BaseBasis(abc.ABC):
"""Abstract base class for all basis classes."""
def __init__(self, transformer=None):
"""Set the transformer."""
self.transformer = transformer
# Transformer -------------------------------------------------------------
@property
def transformer(self):
"""Transformer for pre-processing states before dimension reduction."""
return self.__transformer
@transformer.setter
def transformer(self, tf):
"""Set the transformer, ensuring it has the necessary attributes."""
if tf is not None:
_check_is_transformer(tf)
self.__transformer = tf
# Fitting -----------------------------------------------------------------
@abc.abstractmethod
def fit(self, *args, **kwargs): # pragma: no cover
"""Construct the basis."""
raise NotImplementedError
# Encoding / Decoding -----------------------------------------------------
@abc.abstractmethod
def encode(self, state): # pragma: no cover
"""Map high-dimensional states to low-dimensional latent coordinates.
Parameters
----------
state : (n,) or (n, k) ndarray
High-dimensional state vector, or a collection of k such vectors
organized as the columns of a matrix.
Returns
-------
state_ : (r,) or (r, k) ndarray
Low-dimensional latent coordinate vector, or a collection of k
such vectors organized as the columns of a matrix.
"""
raise NotImplementedError
@abc.abstractmethod
def decode(self, state_): # pragma: no cover
"""Map low-dimensional latent coordinates to high-dimensional states.
Parameters
----------
state_ : (r,) or (r, k) ndarray
Low-dimensional latent coordinate vector, or a collection of k
such vectors organized as the columns of a matrix.
Returns
-------
state : (n,) or (n, k) ndarray
High-dimensional state vector, or a collection of k such vectors
organized as the columns of a matrix.
"""
raise NotImplementedError
# Projection --------------------------------------------------------------
def project(self, state):
"""Project a high-dimensional state vector to the subset of the high-
dimensional space that can be represented by the basis by encoding the
state in low-dimensional latent coordinates, then decoding those
coordinates: project(Q) = decode(encode(Q)).
Parameters
----------
state : (n,) or (n, k) ndarray
High-dimensional state vector, or a collection of k such vectors
organized as the columns of a matrix.
Returns
-------
state_projected : (n,) or (n, k) ndarray
High-dimensional state vector, or a collection of k such vectors
organized as the columns of a matrix, projected to the basis range.
"""
return self.decode(self.encode(state))
def projection_error(self, state, relative=True):
"""Compute the error of the basis representation of a state or states:
err_absolute = || state - project(state) ||
err_relative = || state - project(state) || / || state ||
If `state` is one-dimensional then || . || is the vector 2-norm.
If `state` is two-dimensional then || . || is the Frobenius norm.
See scipy.linalg.norm().
Parameters
----------
state : (n,) or (n, k) ndarray
High-dimensional state vector, or a collection of k such vectors
organized as the columns of a matrix.
relative : bool
If True, normalize the error by the norm of the original state.
Returns
-------
Relative error of the projection (relative=True) or
absolute error of the projection (relative=False).
"""
diff = la.norm(state - self.project(state))
if relative:
diff /= la.norm(state)
return diff
# Persistence -------------------------------------------------------------
def save(self, *args, **kwargs):
"""Save the basis to an HDF5 file."""
raise NotImplementedError("use pickle/joblib") # pragma: no cover
def load(self, *args, **kwargs):
"""Load a basis from an HDF5 file."""
raise NotImplementedError("use pickle/joblib") # pragma: no cover | /rom_operator_inference-1.4.1-py3-none-any.whl/rom_operator_inference/pre/basis/_base.py | 0.957328 | 0.597843 | _base.py | pypi |
"""Linear basis class."""
__all__ = [
"LinearBasis",
"LinearBasisMulti",
]
import numpy as np
import scipy.sparse as sparse
from ...errors import LoadfileFormatError
from ...utils import hdf5_savehandle, hdf5_loadhandle
from .._multivar import _MultivarMixin
from .. import transform
from ._base import _BaseBasis
class LinearBasis(_BaseBasis):
"""Linear basis for representing the low-dimensional approximation
q = Vr @ q_ := sum([Vr[:, j]*q_[j] for j in range(Vr.shape[1])])
(full_state = basis * reduced_state).
Parameters
----------
transformer : Transformer or None
Transformer for pre-processing states before dimensionality reduction.
Attributes
----------
n : int
Dimension of the state space (size of each basis vector).
r : int
Dimension of the basis (number of basis vectors in the representation).
shape : tulpe
Dimensions (n, r).
entries : (n, r) ndarray
Entries of the basis matrix Vr.
"""
def __init__(self, transformer=None):
"""Initialize an empty basis and set the transformer."""
self.__entries = None
_BaseBasis.__init__(self, transformer)
# Properties --------------------------------------------------------------
@property
def entries(self):
"""Entries of the basis."""
return self.__entries
@property
def n(self):
"""Dimension of the state, i.e., the size of each basis vector."""
return None if self.entries is None else self.entries.shape[0]
@property
def r(self):
"""Dimension of the basis, i.e., the number of basis vectors."""
return None if self.entries is None else self.entries.shape[1]
@property
def shape(self):
"""Dimensions of the basis (state_dimension, reduced_dimension)."""
return None if self.entries is None else self.entries.shape
def __getitem__(self, key):
"""self[:] --> self.entries."""
return self.entries[key]
def fit(self, basis):
"""Store the basis entries (without any filtering by the transformer).
Paramters
---------
basis : (n, r) ndarray
Basis entries. These entries are NOT filtered by the transformer.
Returns
-------
self
"""
if basis is not None and (
not hasattr(basis, "T") or not hasattr(basis, "__matmul__")
):
raise TypeError("invalid basis")
self.__entries = basis
return self
def __str__(self):
"""String representation: class and dimensions."""
out = [self.__class__.__name__]
if self.transformer is not None:
out[0] = f"{out[0]} with {self.transformer.__class__.__name__}"
if self.n is None:
out[0] = f"Empty {out[0]}"
else:
out.append(f"Full-order dimension n = {self.n:d}")
out.append(f"Reduced-order dimension r = {self.r:d}")
return "\n".join(out)
def __repr__(self):
"""Unique ID + string representation."""
uniqueID = f"<{self.__class__.__name__} object at {hex(id(self))}>"
return f"{uniqueID}\n{str(self)}"
# Encoder / decoder -------------------------------------------------------
def encode(self, state):
"""Map high-dimensional states to low-dimensional latent coordinates.
Parameters
----------
state : (n,) or (n, k) ndarray
High-dimensional state vector, or a collection of k such vectors
organized as the columns of a matrix.
Returns
-------
state_ : (r,) or (r, k) ndarray
Low-dimensional latent coordinate vector, or a collection of k
such vectors organized as the columns of a matrix.
"""
if self.transformer is not None:
state = self.transformer.transform(state)
return self.entries.T @ state
def decode(self, state_):
"""Map low-dimensional latent coordinates to high-dimensional states.
Parameters
----------
state_ : (r,) or (r, k) ndarray
Low-dimensional latent coordinate vector, or a collection of k
such vectors organized as the columns of a matrix.
Returns
-------
state : (n,) or (n, k) ndarray
High-dimensional state vector, or a collection of k such vectors
organized as the columns of a matrix.
"""
state = self.entries @ state_
if self.transformer is not None:
state = self.transformer.inverse_transform(state)
return state
# Persistence -------------------------------------------------------------
def __eq__(self, other):
"""Two LinearBasis objects are equal if their type, dimensions, and
basis entries are the same.
"""
if not isinstance(other, self.__class__):
return False
if self.shape != other.shape:
return False
if self.transformer != other.transformer:
return False
return np.all(self.entries == other.entries)
def save(self, savefile, save_transformer=True, overwrite=False):
"""Save the basis to an HDF5 file.
Parameters
----------
savefile : str
Path of the file to save the basis in.
save_transformer : bool
If True, save the transformer as well as the basis entries.
If False, only save the basis entries.
overwrite : bool
If True, overwrite the file if it already exists. If False
(default), raise a FileExistsError if the file already exists.
"""
with hdf5_savehandle(savefile, overwrite) as hf:
if save_transformer and self.transformer is not None:
meta = hf.create_dataset("meta", shape=(0,))
TransformerClass = self.transformer.__class__.__name__
meta.attrs["TransformerClass"] = TransformerClass
self.transformer.save(hf.create_group("transformer"))
if self.entries is not None:
hf.create_dataset("entries", data=self.entries)
@classmethod
def load(cls, loadfile):
"""Load a basis from an HDF5 file.
Parameters
----------
loadfile : str
Path to the file where the basis was stored (via save()).
Returns
-------
_BaseTransformer
"""
entries, transformer = None, None
with hdf5_loadhandle(loadfile) as hf:
if "transformer" in hf:
if "meta" not in hf:
raise LoadfileFormatError("invalid save format "
"(meta/ not found)")
TransformerClassName = hf["meta"].attrs["TransformerClass"]
TransformerClass = getattr(transform, TransformerClassName)
transformer = TransformerClass.load(hf["transformer"])
if "entries" in hf:
entries = hf["entries"][:]
return cls(transformer).fit(entries)
class LinearBasisMulti(LinearBasis, _MultivarMixin):
r"""Block-diagonal basis grouping individual bases for each state variable.
[ Vr1 ]
qi = Vri @ qi_ --> LinearBasis = [ Vr2 ].
i = 1, ..., num_variables [ \ ]
The low-dimensional approximation is linear (see LinearBasis).
Parameters
----------
num_variables : int
Number of variables represented in a single snapshot (number of
individual bases to learn). The dimension `n` of the snapshots
must be evenly divisible by num_variables; for example,
num_variables=3 means the first n entries of a snapshot correspond to
the first variable, and the next n entries correspond to the second
variable, and the last n entries correspond to the third variable.
transformer : Transformer or None
Transformer for pre-processing states before dimensionality reduction.
See SnapshotTransformerMulti for a transformer that scales state
variables individually.
variable_names : list of num_variables strings, optional
Names for each of the `num_variables` variables.
Defaults to "variable 1", "variable 2", ....
Attributes
----------
n : int
Total dimension of the state space.
ni : int
Dimension of individual variables, i.e., ni = n / num_variables.
r : int
Total dimension of the basis (number of basis vectors).
rs : list(int)
Dimensions for each diagonal basis block, i.e., `r[i]` is the number
of basis vectors in the representation for state variable `i`.
entries : (n, r) ndarray or scipy.sparse.csc_matrix.
Entries of the basis matrix.
bases : list(LinearBasis)
Individual bases for each state variable.
"""
_BasisClass = LinearBasis
def __init__(self, num_variables, transformer=None, variable_names=None):
"""Initialize an empty basis and set the transformer."""
# Store dimensions and transformer.
_MultivarMixin.__init__(self, num_variables, variable_names)
LinearBasis.__init__(self, transformer)
self.bases = [self._BasisClass(transformer=None)
for _ in range(self.num_variables)]
# Properties -------------------------------------------------------------
@property
def r(self):
"""Total dimension of the basis (number of basis vectors)."""
rs = self.rs
return None if rs is None else sum(rs)
@property
def rs(self):
"""Dimensions for each diagonal basis block, i.e., `rs[i]` is the
number of basis vectors in the representation for state variable `i`.
"""
rs = [basis.r for basis in self.bases]
return rs if any(rs) else None
def _set_entries(self):
"""Stack individual basis entries as a block diagonal sparse matrix."""
blocks = []
for basis in self.bases:
if basis.n is None: # Quit if any basis is empty.
return
if basis.n != self.bases[0].n:
raise ValueError("all bases must have the same row dimension")
blocks.append(basis.entries)
self._LinearBasis__entries = sparse.block_diag(blocks, format="csc")
def __eq__(self, other):
"""Test two LinearBasisMulti objects for equality."""
if not isinstance(other, self.__class__):
return False
if self.num_variables != other.num_variables:
return False
return all(b1 == b2 for b1, b2 in zip(self.bases, other.bases))
def __str__(self):
"""String representation: centering and scaling directives."""
out = [f"{self.num_variables}-variable {self._BasisClass.__name__}"]
if self.transformer is not None:
out[0] = f"{out[0]} with {self.transformer.__class__.__name__}"
namelength = max(len(name) for name in self.variable_names)
sep = " " * (namelength + 5)
for i, (name, st) in enumerate(zip(self.variable_names, self.bases)):
ststr = str(st).replace('\n', f"\n{sep}")
ststr = ststr.replace("n =", f"n{i+1:d} =")
ststr = ststr.replace("r =", f"r{i+1:d} =")
out.append(f"* {{:>{namelength}}} : {ststr}".format(name))
if self.n is None:
out[0] = f"Empty {out[0]}"
else:
out.append(f"Total full-order dimension n = {self.n:d}")
out.append(f"Total reduced-order dimension r = {self.r:d}")
return '\n'.join(out)
# Main routines -----------------------------------------------------------
def fit(self, bases):
"""Store the basis entries (without any filtering by the transformer).
Paramters
---------
bases : list of num_entries (n, ri) ndarrays
Basis entries. These entries are NOT filtered by the transformer.
Returns
-------
self
"""
for basis, Vi in zip(self.bases, bases):
basis.fit(Vi)
self._set_entries()
return self
# Persistence -------------------------------------------------------------
def save(self, savefile, save_transformer=True, overwrite=False):
"""Save the basis to an HDF5 file.
Parameters
----------
savefile : str
Path of the file to save the basis in.
save_transformer : bool
If True, save the transformer as well as the basis entries.
If False, only save the basis entries.
overwrite : bool
If True, overwrite the file if it already exists. If False
(default), raise a FileExistsError if the file already exists.
"""
with hdf5_savehandle(savefile, overwrite) as hf:
# metadata
meta = hf.create_dataset("meta", shape=(0,))
meta.attrs["num_variables"] = self.num_variables
meta.attrs["variable_names"] = self.variable_names
if save_transformer and self.transformer is not None:
TransformerClass = self.transformer.__class__.__name__
meta.attrs["TransformerClass"] = TransformerClass
self.transformer.save(hf.create_group("transformer"))
for i in range(self.num_variables):
self.bases[i].save(hf.create_group(f"variable{i+1:d}"))
@classmethod
def load(cls, loadfile):
"""Load a basis from an HDF5 file.
Parameters
----------
loadfile : str
Path to the file where the basis was stored (via save()).
Returns
-------
PODBasis object
"""
transformer = None
with hdf5_loadhandle(loadfile) as hf:
# Load metadata.
if "meta" not in hf:
raise LoadfileFormatError("invalid save format "
"(meta/ not found)")
num_variables = hf["meta"].attrs["num_variables"]
variable_names = hf["meta"].attrs["variable_names"].tolist()
# Load transformer if present.
if "transformer" in hf:
TransformerClassName = hf["meta"].attrs["TransformerClass"]
TransformerClass = getattr(transform, TransformerClassName)
transformer = TransformerClass.load(hf["transformer"])
# Load individual bases.
bases = []
for i in range(num_variables):
group = f"variable{i+1}"
if group not in hf:
raise LoadfileFormatError("invalid save format "
f"({group}/ not found)")
bases.append(cls._BasisClass.load(hf[group]))
# Initialize and return the basis object.
basis = cls(num_variables,
transformer=transformer, variable_names=variable_names)
basis.bases = bases
basis._set_entries()
return basis | /rom_operator_inference-1.4.1-py3-none-any.whl/rom_operator_inference/pre/basis/_linear.py | 0.96556 | 0.609205 | _linear.py | pypi |
"""A class that provides utilities to show information in terminal.
"""
import logging
from . import printer
from functools import wraps
def auto_clear(func):
@wraps(func)
def wrapper(*args, **kwargs):
# Get the instance of Console class from args
self = args[0]
assert isinstance(self, Console)
# Clear line if last called function is print_progress
# Using 'is' instead of '==' does not work
if self._last_func_called == self.print_progress:
self.clear_line()
self._last_func_called = func
# Print
return func(*args, **kwargs)
return wrapper
class Console(object):
__VERSION__ = (1, 0, 0)
DEFAULT_TITLE = 'main'
DEFAULT_PROMPT = '>>'
INFO_PROMPT = '::'
SUPPLEMENT_PROMPT = '..'
WARNING_PROMPT = '!!'
TEXT_WIDTH = 79
def __init__(self, buffer_size=0, fancy_text=True):
"""Initiate a console.
:param buffer_size: buffer size
:param fancy_text: whether to allow fancy text. Notice that if this value
is set to False, all consoles will not be able to produce
fancy text.
"""
# Public variables
self.buffer_size = buffer_size
assert isinstance(buffer_size, int) and buffer_size >= 0
# Turn off fancy text forever if fancy_text is set to False
if not fancy_text: self.disable_fancy_text()
# Private variables
self._buffer = []
self._title = None
self._last_func_called = None
# region: Properties
@property
def buffer(self):
return self._buffer
@property
def buffer_string(self):
return '\n'.join(self._buffer)
# endregion: Properties
# region: Private Methods
def _add_to_buffer(self, line):
assert isinstance(line, str)
if self.buffer_size == 0: return
assert self.buffer_size > 0
self._buffer.append(line)
if len(self._buffer) > self.buffer_size: self._buffer.pop(0)
# endregion: Private Methods
# region: Public Methods
@staticmethod
def disable_fancy_text():
"""Disable fancy text. Notice that this will disable fancy text for all
instances of Console.
"""
printer.fancy_text = False
@auto_clear
def write_line(self, text, color=None, highlight=None, attributes=None,
buffer=True, **kwargs):
"""Write a line with fancy style.
During parsing, changing the leading character is not supported currently.
:param text: text to be written
:param color: should be in
{red, green, yellow, blue, magenta, cyan, white}
:param highlight: should be in
{on_red, on_green, on_yellow, on_blue, on_magenta, on_cyan, on_white}
:param attributes: should be a subset of
{bold, dark, underline, blink, reverse, concealed}
:param buffer: whether to add text to buffer
:param kwargs: additional keyword arguments for python print function
:return: raw text
"""
text = printer.write_line(text, color, highlight, attributes, **kwargs)
# Add to buffer if necessary
if buffer: self._add_to_buffer(text)
return text
def write(self, text, color=None, highlight=None, attributes=None):
"""Write the given text with fancy style, as in write_line method.
During parsing, changing the leading character is not supported currently.
This method will neither return anything nor add content into buffer.
:param text: text to be written
"""
printer.write(text, color, highlight, attributes)
def split(self, splitter='-', text_width=None, color=None, highlight=None,
attributes=None):
"""Split terminal with fancy (if style is specified) splitter.
Example:
console.split('#{-}{red}#{-}{yellow}#{-}{blue}')
"""
text_width = text_width or self.TEXT_WIDTH
# Repeat number should be calculated using raw text
ren, raw = printer.parse(splitter)
num = int(text_width / len(raw))
self.write_line(num * ren, color, highlight, attributes)
def start(self, title=None, text_width=None, color='grey'):
"""Indicate the start of a program."""
title = title or self.DEFAULT_TITLE
text_width = text_width or self.TEXT_WIDTH
self.write_line('-> Start of {}'.format(title), color)
self.split(text_width=text_width, color=color)
self._title = title
def end(self, text_width=None, color='grey'):
"""Indicate the end of a program."""
text_width = text_width or self.TEXT_WIDTH
self.split(text_width=text_width, color=color)
self.write_line('|> End of {}'.format(self._title), color)
def section(self, section_title):
"""Indicate the begin of a section."""
self.split()
self.write_line(':: {}'.format(section_title))
self.split()
@auto_clear
def show_status(self, text, color=None, highlight=None, attributes=None,
prompt=None, buffer=True, **kwargs):
"""Show text following the specified prompt.
The text will be displayed in a fancy style if corresponding configurations
are provided, as in self.write_line method.
:param text: text to be written.
:param prompt: The leading symbol, usually '>>' by default.
:param buffer: Whether to add text to buffer
"""
# Use default prompt symbol if not provided
if prompt is None: prompt = self.DEFAULT_PROMPT
# The type of 'text' should not be restricted
assert isinstance(prompt, str)
prompt_text = '{} {}'.format(prompt, text)
# Print using write_line
return self.write_line(
prompt_text, color, highlight, attributes, buffer=buffer, **kwargs)
def show_info(self, text, color=None, highlight=None, attributes=None):
"""Show information using self.show_status"""
return self.show_status(
text, color, highlight, attributes, self.INFO_PROMPT)
def supplement(self, text, color=None, highlight=None, attributes=None,
level=1):
"""Show supplement using self.show_status"""
assert isinstance(level, int) and level > 0
return self.show_status(
text, color, highlight, attributes, self.SUPPLEMENT_PROMPT * level)
def warning(self, text, color='red', highlight=None, attributes=None):
"""Show supplement using self.show_status"""
return self.show_status(
text, color, highlight, attributes, self.WARNING_PROMPT)
def print_progress(self, index=None, total=None, start_time=None,
progress=None):
"""Show progress bar using printer.print_progress.
:param index: positive scalar, indicating current working progress
:param total: positive scalar, indicating the scale of total work
:param start_time: if provided, ETA will be displayed to the right of
the progress bar
:param progress: if provided, 'index' and 'total' will be ignored.
"""
printer.print_progress(index, total, start_time, progress)
# This method does not need to be decorated due to the line below
self._last_func_called = self.print_progress
@staticmethod
def clear_line():
"""Clear a line in which current cursor is positioned."""
printer.clear_line(Console.TEXT_WIDTH)
# endregion: Public Methods
# region: Static Methods
@staticmethod
def disable_logging(pkg_name):
"""Suppress the annoying logging information in terminal.
:param pkg_name: Name of the package producing the unwanted log.
e.g., 'tensorflow'
"""
assert isinstance(pkg_name, str)
logging.getLogger(pkg_name).disabled = True
# endregion: Static Methods | /roma_console-1.0.3-py3-none-any.whl/console/console.py | 0.640074 | 0.370453 | console.py | pypi |
import os
import re
import time
from sys import stdout
__all__ = ['parse', 'write_line', 'write', 'clear_line']
fancy_text = True
# region: Code from termcolor.py
ATTRIBUTES = dict(
list(zip([
'bold',
'dark',
'',
'underline',
'blink',
'',
'reverse',
'concealed'
],
list(range(1, 9))
))
)
del ATTRIBUTES['']
HIGHLIGHTS = dict(
list(zip([
'on_grey',
'on_red',
'on_green',
'on_yellow',
'on_blue',
'on_magenta',
'on_cyan',
'on_white'
],
list(range(40, 48))
))
)
COLORS = dict(
list(zip([
'grey',
'red',
'green',
'yellow',
'blue',
'magenta',
'cyan',
'white',
],
list(range(30, 38))
))
)
RESET = '\033[0m'
def colored(text, color=None, on_color=None, attrs=None):
"""Colorize text.
Available text colors:
red, green, yellow, blue, magenta, cyan, white.
Available text highlights:
on_red, on_green, on_yellow, on_blue, on_magenta, on_cyan, on_white.
Available attributes:
bold, dark, underline, blink, reverse, concealed.
Example:
colored('Hello, World!', 'red', 'on_grey', ['blue', 'blink'])
colored('Hello, World!', 'green')
"""
if all([color is None, on_color is None, attrs is None]): return text
if os.getenv('ANSI_COLORS_DISABLED') is None and fancy_text:
fmt_str = '\033[%dm%s'
if color is not None:
text = fmt_str % (COLORS[color], text)
if on_color is not None:
text = fmt_str % (HIGHLIGHTS[on_color], text)
if attrs is not None:
for attr in attrs:
text = fmt_str % (ATTRIBUTES[attr], text)
text += RESET
return text
# endregion: Code from termcolor.py
# region: Enumerates
class TextColors:
GREY = 'grey'
RED = 'red'
GREEN = 'green'
YELLOW = 'yellow'
BLUE = 'blue'
MAGENTA = 'magenta'
CYAN = 'cyan'
WHITE = 'white'
class TextHighlights:
ON_GREY = 'on_grey'
ON_RED = 'on_red'
ON_GREEN = 'on_green'
ON_YELLOW = 'on_yellow'
ON_BLUE = 'on_blue'
ON_MAGENTA = 'on_magenta'
ON_CYAN = 'on_cyan'
ON_WHITE = 'on_white'
class Attributes:
BOLD = 'bold'
DARK = 'dark'
UNDERLINE = 'underline'
BLINK = 'blink'
REVERSE = 'reverse'
CONCEALED = 'concealed'
# endregion: Enumerates
# region: Private Methods
def _render(text, *args):
"""Renders text using colored.
Arguments in the back, if exist, will overwrite the previous ones,
except for ATTRIBUTE arguments, which will be gathered to a list.
:param text: text to be rendered
:param args: arguments for rendering
:return: rendered text and raw text
"""
color, highlight, attributes = None, None, []
for arg in args:
if arg in COLORS.keys():
color = arg
elif arg in HIGHLIGHTS.keys():
highlight = arg
elif arg in ATTRIBUTES.keys() and arg not in attributes:
attributes.append(arg)
return colored(text, color, highlight, attributes), text
# endregion: Private Methods
# region: Public Methods
def parse(text, lead='#'):
"""Parse the given text according to the syntax.
Pattern: lead + r'(\{.+?\})+'
Here 'lead' is the leading character that can be specified.
e.g., if 'lead' is set to '#', the pattern will be r'#(\{.+?\})+'
Example:
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
from termcolor import colored
rendered, raw = parse('#{Hello}{red}, #{World}{green}!')
assert raw == 'Hello, World!'
# The lines below will yield exactly the same outputs
print(rendered)
print(colored('Hello', 'red') + ', ' + colored('World', 'green') + '!')
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
:param text: content to be parsed
:param lead: the leading character for parsing, '#' by default.
BETTER NOT CHANGE !!
:return: rendered line for and raw line, see example above
"""
assert isinstance(text, str) and isinstance(lead, str)
# Find units to be colored
pattern = lead + r'(?:\{.+?\})+'
units = re.findall(pattern, text)
# Render each unit
rendered_text, raw_text = text, text
for unit in units:
args = re.findall(r'\{(.+?)\}', unit)
ren, raw = _render(*args)
# Update rendered and raw text
rendered_text = rendered_text.replace(unit, ren)
raw_text = raw_text.replace(unit, raw)
# Force to display plain text if fancy text is disabled
if not fancy_text: rendered_text = raw_text
return rendered_text, raw_text
def write_line(text, color=None, highlight=None, attributes=None, **kwargs):
"""Write a line with fancy style.
During parsing, changing the leading character is not supported currently.
:param text: text to be written
:param color: should be in
{red, green, yellow, blue, magenta, cyan, white}
:param highlight: should be in
{on_red, on_green, on_yellow, on_blue, on_magenta, on_cyan, on_white}
:param attributes: should be a subset of
{bold, dark, underline, blink, reverse, concealed}
:param kwargs: additional keyword arguments for python print function
:return: raw text
"""
ren, raw = parse(text)
print(colored(ren, color, highlight, attributes), **kwargs)
return raw
def write(text, color=None, highlight=None, attributes=None):
"""Write the given text with fancy style.
During parsing, changing the leading character is not supported currently.
:param text: text to be written
:param color: should be in
{red, green, yellow, blue, magenta, cyan, white}
:param highlight: should be in
{on_red, on_green, on_yellow, on_blue, on_magenta, on_cyan, on_white}
:param attributes: should be a subset of
{bold, dark, underline, blink, reverse, concealed}
"""
ren, _ = parse(text)
stdout.write(colored(ren, color, highlight, attributes))
stdout.flush()
def print_progress(index=None, total=None, start_time=None, progress=None,
bar_width=65):
"""Show progress bar.
This method is inherited from tframe.utils.console.print_progress.
The line which the cursor in terminal is positioned will be overwritten.
:param index: positive scalar, indicating current working progress
:param total: positive scalar, indicating the scale of total work
:param start_time: if provided, ETA will be displayed to the right of
the progress bar
:param progress: if provided, 'index' and 'total' will be ignored.
:param bar_width: width of progress bar, 65 by default
"""
# Calculate progress if not provided
if progress is None:
if index is None or total is None:
raise ValueError(
'index and total must be given if progress is not provided')
progress = 1.0 * index / total
progress = min(progress, 1.0)
# Generate tail
if start_time is not None:
duration = time.time() - start_time
eta = duration / max(progress, 1e-7) * (1 - progress)
tail = "ETA: {:.0f}s".format(eta)
else:
tail = "{:.0f}%".format(100 * progress)
# Generate progress bar
left = int(progress * bar_width)
right = bar_width - left
mid = '=' if progress == 1 else '>'
clear_line()
stdout.write('[%s%s%s] %s' %
('=' * left, mid, ' ' * right, tail))
stdout.flush()
def clear_line(text_width=79):
"""Clear a line in console."""
stdout.write("\r{}\r".format(" " * text_width))
stdout.flush()
# endregion: Public Methods
if __name__ == '__main__':
# The lines below will yield exactly the same outputs
print(colored('Hello, World!', 'red', 'on_cyan'))
write_line('Hello, World!', 'red', 'on_cyan')
write_line('#{Hello, World!}{red}{on_cyan}')
# The lines below will yield exactly the same outputs
print(colored('Hello', 'red') + ', ' + colored('World', 'green') + '!')
write_line('#{Hello}{red}, #{World}{green}!')
write('#{Hello}{red}, #{World}{green}!') | /roma_console-1.0.3-py3-none-any.whl/console/printer.py | 0.543348 | 0.163579 | printer.py | pypi |
import numpy as np
from roman_datamodels import stnode
from ._core import DataModel
__all__ = []
class _DataModel(DataModel):
"""
Exists only to populate the __all__ for this file automatically
This is something which is easily missed, but is important for the automatic
documentation generation to work properly.
"""
def __init_subclass__(cls, **kwargs):
"""Register each subclass in the __all__ for this module"""
super().__init_subclass__(**kwargs)
if cls.__name__ in __all__:
raise ValueError(f"Duplicate model type {cls.__name__}")
__all__.append(cls.__name__)
class MosaicModel(_DataModel):
_node_type = stnode.WfiMosaic
class ImageModel(_DataModel):
_node_type = stnode.WfiImage
class ScienceRawModel(_DataModel):
_node_type = stnode.WfiScienceRaw
class MsosStackModel(_DataModel):
_node_type = stnode.MsosStack
class RampModel(_DataModel):
_node_type = stnode.Ramp
@classmethod
def from_science_raw(cls, model):
"""
Construct a RampModel from a ScienceRawModel
Parameters
----------
model : ScienceRawModel or RampModel
The input science raw model (a RampModel will also work)
"""
if isinstance(model, cls):
return model
if isinstance(model, ScienceRawModel):
from roman_datamodels.maker_utils import mk_ramp
instance = mk_ramp(shape=model.shape)
# Copy input_model contents into RampModel
for key in model:
# If a dictionary (like meta), overwrite entries (but keep
# required dummy entries that may not be in input_model)
if isinstance(instance[key], dict):
instance[key].update(getattr(model, key))
elif isinstance(instance[key], np.ndarray):
# Cast input ndarray as RampModel dtype
instance[key] = getattr(model, key).astype(instance[key].dtype)
else:
instance[key] = getattr(model, key)
return cls(instance)
raise ValueError("Input model must be a ScienceRawModel or RampModel")
class RampFitOutputModel(_DataModel):
_node_type = stnode.RampFitOutput
class AssociationsModel(_DataModel):
# Need an init to allow instantiation from a JSON file
_node_type = stnode.Associations
@classmethod
def is_association(cls, asn_data):
"""
Test if an object is an association by checking for required fields
Parameters
----------
asn_data :
The data to be tested.
"""
return isinstance(asn_data, dict) and "asn_id" in asn_data and "asn_pool" in asn_data
class GuidewindowModel(_DataModel):
_node_type = stnode.Guidewindow
class FlatRefModel(_DataModel):
_node_type = stnode.FlatRef
class DarkRefModel(_DataModel):
_node_type = stnode.DarkRef
class DistortionRefModel(_DataModel):
_node_type = stnode.DistortionRef
class GainRefModel(_DataModel):
_node_type = stnode.GainRef
class IpcRefModel(_DataModel):
_node_type = stnode.IpcRef
class LinearityRefModel(_DataModel):
_node_type = stnode.LinearityRef
def get_primary_array_name(self):
"""
Returns the name "primary" array for this model, which
controls the size of other arrays that are implicitly created.
This is intended to be overridden in the subclasses if the
primary array's name is not "data".
"""
return "coeffs"
class InverselinearityRefModel(_DataModel):
_node_type = stnode.InverselinearityRef
def get_primary_array_name(self):
"""
Returns the name "primary" array for this model, which
controls the size of other arrays that are implicitly created.
This is intended to be overridden in the subclasses if the
primary array's name is not "data".
"""
return "coeffs"
class MaskRefModel(_DataModel):
_node_type = stnode.MaskRef
def get_primary_array_name(self):
"""
Returns the name "primary" array for this model, which
controls the size of other arrays that are implicitly created.
This is intended to be overridden in the subclasses if the
primary array's name is not "data".
"""
return "dq"
class PixelareaRefModel(_DataModel):
_node_type = stnode.PixelareaRef
class ReadnoiseRefModel(_DataModel):
_node_type = stnode.ReadnoiseRef
class SuperbiasRefModel(_DataModel):
_node_type = stnode.SuperbiasRef
class SaturationRefModel(_DataModel):
_node_type = stnode.SaturationRef
class WfiImgPhotomRefModel(_DataModel):
_node_type = stnode.WfiImgPhotomRef
class RefpixRefModel(_DataModel):
_node_type = stnode.RefpixRef | /roman_datamodels-0.17.1-py3-none-any.whl/roman_datamodels/datamodels/_datamodels.py | 0.69451 | 0.217005 | _datamodels.py | pypi |
import warnings
from pathlib import Path
import asdf
from astropy.utils import minversion
from roman_datamodels import validate
from ._core import MODEL_REGISTRY, DataModel
# .dev is included in the version comparison to allow for correct version
# comparisons with development versions of asdf 3.0
if minversion(asdf, "3.dev"):
AsdfInFits = None
else:
with warnings.catch_warnings():
warnings.filterwarnings(
"ignore",
category=asdf.exceptions.AsdfDeprecationWarning,
message=r"AsdfInFits has been deprecated.*",
)
from asdf.fits_embed import AsdfInFits
__all__ = ["rdm_open"]
def _open_path_like(init, memmap=False, **kwargs):
"""
Attempt to open init as if it was a path-like object.
Parameters
----------
init : str
Any path-like object that can be opened by asdf such as a valid string
memmap : bool
If we should open the file with memmap
**kwargs:
Any additional arguments to pass to asdf.open
Returns
-------
`asdf.AsdfFile`
"""
kwargs["copy_arrays"] = not memmap
try:
asdf_file = asdf.open(init, **kwargs)
except ValueError as err:
raise TypeError("Open requires a filepath, file-like object, or Roman datamodel") from err
# This is only needed until we move min asdf version to 3.0
if AsdfInFits is not None and isinstance(asdf_file, AsdfInFits):
asdf_file.close()
raise TypeError("Roman datamodels does not accept FITS files or objects")
return asdf_file
def rdm_open(init, memmap=False, **kwargs):
"""
Datamodel open/create function.
This function opens a Roman datamodel from an asdf file or generates
the datamodel from an existing one.
Parameters
----------
init : str, `DataModel`, `asdf.AsdfFile`
May be any one of the following types:
- `asdf.AsdfFile` instance
- string indicating the path to an ASDF file
- `DataModel` Roman data model instance
memmap : bool
Open ASDF file binary data using memmap (default: False)
Returns
-------
`DataModel`
"""
with validate.nuke_validation():
if isinstance(init, DataModel):
# Copy the object so it knows not to close here
return init.copy(deepcopy=False)
# Temp fix to catch JWST args before being passed to asdf open
if "asn_n_members" in kwargs:
del kwargs["asn_n_members"]
asdf_file = init if isinstance(init, asdf.AsdfFile) else _open_path_like(init, memmap=memmap, **kwargs)
if (model_type := type(asdf_file.tree["roman"])) in MODEL_REGISTRY:
return MODEL_REGISTRY[model_type](asdf_file, **kwargs)
if isinstance(init, str):
exts = Path(init).suffixes
if not exts:
raise ValueError(f"Input file path does not have an extension: {init}")
# Assume json files are asn and return them
if exts[0] == "json":
return init
asdf_file.close()
raise TypeError(f"Unknown datamodel type: {model_type}") | /roman_datamodels-0.17.1-py3-none-any.whl/roman_datamodels/datamodels/_utils.py | 0.749821 | 0.26365 | _utils.py | pypi |
import warnings
import numpy as np
from astropy import units as u
from roman_datamodels import stnode
from ._base import MESSAGE, save_node
from ._common_meta import mk_common_meta, mk_guidewindow_meta, mk_msos_stack_meta, mk_photometry_meta, mk_resample_meta
from ._tagged_nodes import mk_cal_logs
def mk_level1_science_raw(*, shape=(8, 4096, 4096), filepath=None, **kwargs):
"""
Create a dummy level 1 ScienceRaw instance (or file) with arrays and valid
values for attributes required by the schema.
Parameters
----------
shape : tuple, int
(optional, keyword-only) (z, y, x) Shape of data array. This includes a
four-pixel border representing the reference pixels. Default is
(8, 4096, 4096)
(8 integrations, 4088 x 4088 represent the science pixels, with the
additional being the border reference pixels).
filepath : str
(optional, keyword-only) File name and path to write model to.
Returns
-------
roman_datamodels.stnode.WfiScienceRaw
"""
if len(shape) != 3:
shape = (8, 4096, 4096)
warnings.warn("Input shape must be 3D. Defaulting to (8, 4096, 4096)")
wfi_science_raw = stnode.WfiScienceRaw()
wfi_science_raw["meta"] = mk_common_meta(**kwargs.get("meta", {}))
n_groups = shape[0]
wfi_science_raw["data"] = kwargs.get("data", u.Quantity(np.zeros(shape, dtype=np.uint16), u.DN, dtype=np.uint16))
# add amp 33 ref pix
wfi_science_raw["amp33"] = kwargs.get(
"amp33", u.Quantity(np.zeros((n_groups, 4096, 128), dtype=np.uint16), u.DN, dtype=np.uint16)
)
return save_node(wfi_science_raw, filepath=filepath)
def mk_level2_image(*, shape=(4088, 4088), n_groups=8, filepath=None, **kwargs):
"""
Create a dummy level 2 Image instance (or file) with arrays and valid values
for attributes required by the schema.
Parameters
----------
shape : tuple, int
(optional, keyword-only) Shape (y, x) of data array in the model (and
its corresponding dq/err arrays). This specified size does NOT include
the four-pixel border of reference pixels - those are trimmed at level
2. This size, however, is used to construct the additional arrays that
contain the original border reference pixels (i.e if shape = (10, 10),
the border reference pixel arrays will have (y, x) dimensions (14, 4)
and (4, 14)). Default is 4088 x 4088.
If shape is a tuple of length 3, the first element is assumed to be the
n_groups and will override any settings there.
n_groups : int
(optional, keyword-only) The level 2 file is flattened, but it contains
arrays for the original reference pixels which remain 3D. n_groups
specifies what the z dimension of these arrays should be. Defaults to 8.
filepath : str
(optional, keyword-only) File name and path to write model to.
Returns
-------
roman_datamodels.stnode.WfiImage
"""
if len(shape) > 2:
shape = shape[1:3]
n_groups = shape[0]
warnings.warn(
f"{MESSAGE} assuming the first entry is n_groups followed by y, x. The remaining is thrown out!", UserWarning
)
wfi_image = stnode.WfiImage()
wfi_image["meta"] = mk_photometry_meta(**kwargs.get("meta", {}))
# add border reference pixel arrays
wfi_image["border_ref_pix_left"] = kwargs.get(
"border_ref_pix_left", u.Quantity(np.zeros((n_groups, shape[0] + 8, 4), dtype=np.float32), u.DN, dtype=np.float32)
)
wfi_image["border_ref_pix_right"] = kwargs.get(
"border_ref_pix_right", u.Quantity(np.zeros((n_groups, shape[0] + 8, 4), dtype=np.float32), u.DN, dtype=np.float32)
)
wfi_image["border_ref_pix_top"] = kwargs.get(
"border_ref_pix_top", u.Quantity(np.zeros((n_groups, shape[0] + 8, 4), dtype=np.float32), u.DN, dtype=np.float32)
)
wfi_image["border_ref_pix_bottom"] = kwargs.get(
"border_ref_pix_bottom", u.Quantity(np.zeros((n_groups, shape[0] + 8, 4), dtype=np.float32), u.DN, dtype=np.float32)
)
# and their dq arrays
wfi_image["dq_border_ref_pix_left"] = kwargs.get("dq_border_ref_pix_left", np.zeros((shape[0] + 8, 4), dtype=np.uint32))
wfi_image["dq_border_ref_pix_right"] = kwargs.get("dq_border_ref_pix_right", np.zeros((shape[0] + 8, 4), dtype=np.uint32))
wfi_image["dq_border_ref_pix_top"] = kwargs.get("dq_border_ref_pix_top", np.zeros((4, shape[1] + 8), dtype=np.uint32))
wfi_image["dq_border_ref_pix_bottom"] = kwargs.get("dq_border_ref_pix_bottom", np.zeros((4, shape[1] + 8), dtype=np.uint32))
# add amp 33 ref pixel array
amp33_size = (n_groups, 4096, 128)
wfi_image["amp33"] = kwargs.get("amp33", u.Quantity(np.zeros(amp33_size, dtype=np.uint16), u.DN, dtype=np.uint16))
wfi_image["data"] = kwargs.get("data", u.Quantity(np.zeros(shape, dtype=np.float32), u.electron / u.s, dtype=np.float32))
wfi_image["dq"] = kwargs.get("dq", np.zeros(shape, dtype=np.uint32))
wfi_image["err"] = kwargs.get("err", u.Quantity(np.zeros(shape, dtype=np.float32), u.electron / u.s, dtype=np.float32))
wfi_image["var_poisson"] = kwargs.get(
"var_poisson", u.Quantity(np.zeros(shape, dtype=np.float32), u.electron**2 / u.s**2, dtype=np.float32)
)
wfi_image["var_rnoise"] = kwargs.get(
"var_rnoise", u.Quantity(np.zeros(shape, dtype=np.float32), u.electron**2 / u.s**2, dtype=np.float32)
)
wfi_image["var_flat"] = kwargs.get(
"var_flat", u.Quantity(np.zeros(shape, dtype=np.float32), u.electron**2 / u.s**2, dtype=np.float32)
)
wfi_image["cal_logs"] = mk_cal_logs(**kwargs)
return save_node(wfi_image, filepath=filepath)
def mk_level3_mosaic(*, shape=(4088, 4088), n_images=2, filepath=None, **kwargs):
"""
Create a dummy level 3 Mosaic instance (or file) with arrays and valid
values for attributes required by the schema.
Parameters
----------
shape : tuple, int
(optional, keyword-only) Shape (y, x) of data array in the model (and
its corresponding dq/err arrays). Default is 4088 x 4088.
If shape is a tuple of length 3, the first element is assumed to be
n_images and will override the n_images parameter.
n_images : int
Number of images used to create the level 3 image. Defaults to 2.
filepath : str
(optional) File name and path to write model to.
Returns
-------
roman_datamodels.stnode.WfiMosaic
"""
if len(shape) > 2:
shape = shape[1:3]
n_images = shape[0]
warnings.warn(
f"{MESSAGE} assuming the first entry is n_images followed by y, x. The remaining is thrown out!", UserWarning
)
wfi_mosaic = stnode.WfiMosaic()
wfi_mosaic["meta"] = mk_resample_meta(**kwargs.get("meta", {}))
wfi_mosaic["data"] = kwargs.get("data", u.Quantity(np.zeros(shape, dtype=np.float32), u.electron / u.s, dtype=np.float32))
wfi_mosaic["err"] = kwargs.get("err", u.Quantity(np.zeros(shape, dtype=np.float32), u.electron / u.s, dtype=np.float32))
wfi_mosaic["context"] = kwargs.get("context", np.zeros((n_images,) + shape, dtype=np.uint32))
wfi_mosaic["weight"] = kwargs.get("weight", np.zeros(shape, dtype=np.float32))
wfi_mosaic["var_poisson"] = kwargs.get(
"var_poisson", u.Quantity(np.zeros(shape, dtype=np.float32), u.electron**2 / u.s**2, dtype=np.float32)
)
wfi_mosaic["var_rnoise"] = kwargs.get(
"var_rnoise", u.Quantity(np.zeros(shape, dtype=np.float32), u.electron**2 / u.s**2, dtype=np.float32)
)
wfi_mosaic["var_flat"] = kwargs.get(
"var_flat", u.Quantity(np.zeros(shape, dtype=np.float32), u.electron**2 / u.s**2, dtype=np.float32)
)
wfi_mosaic["cal_logs"] = mk_cal_logs(**kwargs)
return save_node(wfi_mosaic, filepath=filepath)
def mk_msos_stack(*, shape=(4096, 4096), filepath=None, **kwargs):
"""
Create a dummy Level 3 MSOS stack instance (or file) with arrays and valid values
Parameters
----------
shape : tuple, int
(optional) Shape (z, y, x) of data array in the model (and its
corresponding dq/err arrays). This specified size includes the
four-pixel border of reference pixels. Default is 8 x 4096 x 4096.
filepath : str
(optional) File name and path to write model to.
Returns
-------
roman_datamodels.stnode.MsosStack
"""
if len(shape) > 2:
shape = shape[1:3]
warnings.warn(f"{MESSAGE} assuming the the first two entries are y, x. The remaining is thrown out!", UserWarning)
msos_stack = stnode.MsosStack()
msos_stack["meta"] = mk_msos_stack_meta(**kwargs.get("meta", {}))
msos_stack["data"] = kwargs.get("data", np.zeros(shape, dtype=np.float64))
msos_stack["uncertainty"] = kwargs.get("uncertainty", np.zeros(shape, dtype=np.float64))
msos_stack["mask"] = kwargs.get("mask", np.zeros(shape, dtype=np.uint8))
msos_stack["coverage"] = kwargs.get("coverage", np.zeros(shape, dtype=np.uint8))
return save_node(msos_stack, filepath=filepath)
def mk_ramp(*, shape=(8, 4096, 4096), filepath=None, **kwargs):
"""
Create a dummy Ramp instance (or file) with arrays and valid values for attributes
required by the schema.
Parameters
----------
shape : tuple, int
(optional, keyword-only) Shape (z, y, x) of data array in the model (and
its corresponding dq/err arrays). This specified size includes the
four-pixel border of reference pixels. Default is 8 x 4096 x 4096.
filepath : str
(optional, keyword-only) File name and path to write model to.
Returns
-------
roman_datamodels.stnode.Ramp
"""
if len(shape) != 3:
shape = (8, 4096, 4096)
warnings.warn("Input shape must be 3D. Defaulting to (8, 4096, 4096)")
ramp = stnode.Ramp()
ramp["meta"] = mk_common_meta(**kwargs.get("meta", {}))
# add border reference pixel arrays
ramp["border_ref_pix_left"] = kwargs.get(
"border_ref_pix_left", u.Quantity(np.zeros((shape[0], shape[1], 4), dtype=np.float32), u.DN, dtype=np.float32)
)
ramp["border_ref_pix_right"] = kwargs.get(
"border_ref_pix_right", u.Quantity(np.zeros((shape[0], shape[1], 4), dtype=np.float32), u.DN, dtype=np.float32)
)
ramp["border_ref_pix_top"] = kwargs.get(
"border_ref_pix_top", u.Quantity(np.zeros((shape[0], 4, shape[2]), dtype=np.float32), u.DN, dtype=np.float32)
)
ramp["border_ref_pix_bottom"] = kwargs.get(
"border_ref_pix_bottom", u.Quantity(np.zeros((shape[0], 4, shape[2]), dtype=np.float32), u.DN, dtype=np.float32)
)
# and their dq arrays
ramp["dq_border_ref_pix_left"] = kwargs.get("dq_border_ref_pix_left", np.zeros((shape[1], 4), dtype=np.uint32))
ramp["dq_border_ref_pix_right"] = kwargs.get("dq_border_ref_pix_right", np.zeros((shape[1], 4), dtype=np.uint32))
ramp["dq_border_ref_pix_top"] = kwargs.get("dq_border_ref_pix_top", np.zeros((4, shape[2]), dtype=np.uint32))
ramp["dq_border_ref_pix_bottom"] = kwargs.get("dq_border_ref_pix_bottom", np.zeros((4, shape[2]), dtype=np.uint32))
# add amp 33 ref pixel array
ramp["amp33"] = kwargs.get("amp33", u.Quantity(np.zeros((shape[0], shape[1], 128), dtype=np.uint16), u.DN, dtype=np.uint16))
ramp["data"] = kwargs.get("data", u.Quantity(np.full(shape, 1.0, dtype=np.float32), u.DN, dtype=np.float32))
ramp["pixeldq"] = kwargs.get("pixeldq", np.zeros(shape[1:], dtype=np.uint32))
ramp["groupdq"] = kwargs.get("groupdq", np.zeros(shape, dtype=np.uint8))
ramp["err"] = kwargs.get("err", u.Quantity(np.zeros(shape, dtype=np.float32), u.DN, dtype=np.float32))
return save_node(ramp, filepath=filepath)
def mk_ramp_fit_output(*, shape=(8, 4096, 4096), filepath=None, **kwargs):
"""
Create a dummy Rampfit Output instance (or file) with arrays and valid
values for attributes required by the schema.
Parameters
----------
shape
(optional, keyword-only) Shape of arrays in the model.
filepath
(optional, keyword-only) File name and path to write model to.
Returns
-------
roman_datamodels.stnode.RampFitOutput
"""
if len(shape) != 3:
shape = (8, 4096, 4096)
warnings.warn("Input shape must be 3D. Defaulting to (8, 4096, 4096)")
rampfitoutput = stnode.RampFitOutput()
rampfitoutput["meta"] = mk_common_meta(**kwargs.get("meta", {}))
rampfitoutput["slope"] = kwargs.get(
"slope", u.Quantity(np.zeros(shape, dtype=np.float32), u.electron / u.s, dtype=np.float32)
)
rampfitoutput["sigslope"] = kwargs.get(
"sigslope", u.Quantity(np.zeros(shape, dtype=np.float32), u.electron / u.s, dtype=np.float32)
)
rampfitoutput["yint"] = kwargs.get("yint", u.Quantity(np.zeros(shape, dtype=np.float32), u.electron, dtype=np.float32))
rampfitoutput["sigyint"] = kwargs.get("sigyint", u.Quantity(np.zeros(shape, dtype=np.float32), u.electron, dtype=np.float32))
rampfitoutput["pedestal"] = kwargs.get(
"pedestal", u.Quantity(np.zeros(shape[1:], dtype=np.float32), u.electron, dtype=np.float32)
)
rampfitoutput["weights"] = kwargs.get("weights", np.zeros(shape, dtype=np.float32))
rampfitoutput["crmag"] = kwargs.get("crmag", u.Quantity(np.zeros(shape, dtype=np.float32), u.electron, dtype=np.float32))
rampfitoutput["var_poisson"] = kwargs.get(
"var_poisson", u.Quantity(np.zeros(shape, dtype=np.float32), u.electron**2 / u.s**2, dtype=np.float32)
)
rampfitoutput["var_rnoise"] = kwargs.get(
"var_rnoise", u.Quantity(np.zeros(shape, dtype=np.float32), u.electron**2 / u.s**2, dtype=np.float32)
)
return save_node(rampfitoutput, filepath=filepath)
def mk_rampfitoutput(**kwargs):
warnings.warn("mk_rampfitoutput is deprecated. Use mk_rampfit_output instead.", DeprecationWarning)
return mk_ramp_fit_output(**kwargs)
def mk_associations(*, shape=(2, 3, 1), filepath=None, **kwargs):
"""
Create a dummy Association table instance (or file) with table and valid
values for attributes required by the schema.
Parameters
----------
shape : tuple
(optional, keyword-only) The shape of the member elements of products.
filepath : string
(optional, keyword-only) File name and path to write model to.
Returns
-------
roman_datamodels.stnode.AssociationsModel
"""
associations = stnode.Associations()
associations["asn_type"] = kwargs.get("asn_type", "image")
associations["asn_rule"] = kwargs.get("asn_rule", "candidate_Asn_Lv2Image_i2d")
associations["version_id"] = kwargs.get("version_id", "null")
associations["code_version"] = kwargs.get("code_version", "0.16.2.dev16+g640b0b79")
associations["degraded_status"] = kwargs.get("degraded_status", "No known degraded exposures in association.")
associations["program"] = kwargs.get("program", 1)
associations["constraints"] = kwargs.get(
"constraints",
(
"DMSAttrConstraint({'name': 'program', 'sources': ['program'], "
"'value': '001'})\nConstraint_TargetAcq({'name': 'target_acq', 'value': "
"'target_acquisition'})\nDMSAttrConstraint({'name': 'science', "
"'DMSAttrConstraint({'name': 'asn_candidate','sources': "
"['asn_candidate'], 'value': \"\\\\('o036',\\\\ 'observation'\\\\)\"})"
),
)
associations["asn_id"] = kwargs.get("asn_id", "o036")
associations["asn_pool"] = kwargs.get("asn_pool", "r00001_20200530t023154_pool")
associations["target"] = kwargs.get("target", 16)
file_idx = 0
if "products" in kwargs:
associations["products"] = kwargs["products"]
else:
associations["products"] = []
CHOICES = ["SCIENCE", "CALIBRATION", "ENGINEERING"]
for product_idx, members in enumerate(shape):
members_lst = []
for member_idx in range(members):
members_lst.append(
{"expname": "file_" + str(file_idx) + ".asdf", "exposerr": "null", "exptype": CHOICES[member_idx % 3]}
)
file_idx += 1
associations["products"].append({"name": f"product{product_idx}", "members": members_lst})
return save_node(associations, filepath=filepath)
def mk_guidewindow(*, shape=(2, 8, 16, 32, 32), filepath=None, **kwargs):
"""
Create a dummy Guidewindow instance (or file) with arrays and valid values
for attributes required by the schema.
Parameters
----------
shape
(optional, keyword-only) Shape of arrays in the model.
filepath
(optional, keyword-only) File name and path to write model to.
Returns
-------
roman_datamodels.stnode.Guidewindow
"""
if len(shape) != 5:
shape = (2, 8, 16, 32, 32)
warnings.warn("Input shape must be 5D. Defaulting to (2, 8, 16, 32, 32)")
guidewindow = stnode.Guidewindow()
guidewindow["meta"] = mk_guidewindow_meta(**kwargs.get("meta", {}))
guidewindow["pedestal_frames"] = kwargs.get(
"pedestal_frames", u.Quantity(np.zeros(shape, dtype=np.uint16), u.DN, dtype=np.uint16)
)
guidewindow["signal_frames"] = kwargs.get(
"signal_frames", u.Quantity(np.zeros(shape, dtype=np.uint16), u.DN, dtype=np.uint16)
)
guidewindow["amp33"] = kwargs.get("amp33", u.Quantity(np.zeros(shape, dtype=np.uint16), u.DN, dtype=np.uint16))
return save_node(guidewindow, filepath=filepath) | /roman_datamodels-0.17.1-py3-none-any.whl/roman_datamodels/maker_utils/_datamodels.py | 0.722429 | 0.289397 | _datamodels.py | pypi |
from astropy import time
from roman_datamodels import stnode
from ._base import NOSTR
def mk_calibration_software_version(**kwargs):
"""
Create a dummy CalibrationSoftwareVersion object with valid values
Returns
-------
roman_datamodels.stnode.CalibrationSoftwareVersion
"""
return stnode.CalibrationSoftwareVersion(kwargs.get("calibration_software_version", "9.9.0"))
def mk_sdf_software_version(**kwargs):
"""
Create a dummy SdfSoftwareVersion object with valid values
Returns
-------
roman_datamodels.stnode.SdfSoftwareVersion
"""
return stnode.SdfSoftwareVersion(kwargs.get("sdf_software_version", "7.7.7"))
def mk_filename(**kwargs):
"""
Create a dummy Filename object with valid values
Returns
-------
roman_datamodels.stnode.Filename
"""
return stnode.Filename(kwargs.get("filename", NOSTR))
def mk_file_date(**kwargs):
"""
Create a dummy FileDate object with valid values
Returns
-------
roman_datamodels.stnode.FileDate
"""
return stnode.FileDate(kwargs.get("file_date", time.Time("2020-01-01T00:00:00.0", format="isot", scale="utc")))
def mk_model_type(**kwargs):
"""
Create a dummy ModelType object with valid values
Returns
-------
roman_datamodels.stnode.ModelType
"""
return stnode.ModelType(kwargs.get("model_type", NOSTR))
def mk_origin(**kwargs):
"""
Create a dummy Origin object with valid values
Returns
-------
roman_datamodels.stnode.Origin
"""
return stnode.Origin(kwargs.get("origin", "STSCI"))
def mk_prd_software_version(**kwargs):
"""
Create a dummy PrdSoftwareVersion object with valid values
Returns
-------
roman_datamodels.stnode.PrdSoftwareVersion
"""
return stnode.PrdSoftwareVersion(kwargs.get("prd_software_version", "8.8.8"))
def mk_telescope(**kwargs):
"""
Create a dummy Telescope object with valid values
Returns
-------
roman_datamodels.stnode.Telescope
"""
return stnode.Telescope(kwargs.get("telescope", "ROMAN"))
def mk_basic_meta(**kwargs):
"""
Create a dummy basic metadata dictionary with valid values for attributes
Returns
-------
dict (defined by the basic-1.0.0 schema)
"""
meta = {}
meta["calibration_software_version"] = mk_calibration_software_version(**kwargs)
meta["sdf_software_version"] = mk_sdf_software_version(**kwargs)
meta["filename"] = mk_filename(**kwargs)
meta["file_date"] = mk_file_date(**kwargs)
meta["model_type"] = mk_model_type(**kwargs)
meta["origin"] = mk_origin(**kwargs)
meta["prd_software_version"] = mk_prd_software_version(**kwargs)
meta["telescope"] = mk_telescope(**kwargs)
return meta | /roman_datamodels-0.17.1-py3-none-any.whl/roman_datamodels/maker_utils/_basic_meta.py | 0.76145 | 0.354657 | _basic_meta.py | pypi |
from astropy import units as u
from roman_datamodels import stnode
from ._base import NONUM
def mk_photometry(**kwargs):
"""
Create a dummy Photometry instance with valid values for attributes
required by the schema. Utilized by the model maker utilities below.
Returns
-------
roman_datamodels.stnode.Photometry
"""
phot = stnode.Photometry()
phot["conversion_microjanskys"] = kwargs.get("conversion_microjanskys", NONUM * u.uJy / u.arcsec**2)
phot["conversion_megajanskys"] = kwargs.get("conversion_megajanskys", NONUM * u.MJy / u.sr)
phot["pixelarea_steradians"] = kwargs.get("pixelarea_steradians", NONUM * u.sr)
phot["pixelarea_arcsecsq"] = kwargs.get("pixelarea_arcsecsq", NONUM * u.arcsec**2)
phot["conversion_microjanskys_uncertainty"] = kwargs.get("conversion_microjanskys_uncertainty", NONUM * u.uJy / u.arcsec**2)
phot["conversion_megajanskys_uncertainty"] = kwargs.get("conversion_megajanskys_uncertainty", NONUM * u.MJy / u.sr)
return phot
def mk_resample(**kwargs):
"""
Create a dummy Resample instance with valid values for attributes
required by the schema. Utilized by the model maker utilities below.
Returns
-------
roman_datamodels.stnode.Resample
"""
res = stnode.Resample()
res["pixel_scale_ratio"] = kwargs.get("pixel_scale_ratio", NONUM)
res["pixfrac"] = kwargs.get("pixfrac", NONUM)
res["pointings"] = kwargs.get("pointings", -1 * NONUM)
res["product_exposure_time"] = kwargs.get("product_exposure_time", -1 * NONUM)
res["weight_type"] = kwargs.get("weight_type", "exptime")
return res
def mk_cal_logs(**kwargs):
"""
Create a dummy CalLogs instance with valid values for attributes
required by the schema.
Returns
-------
roman_datamodels.stnode.CalLogs
"""
return stnode.CalLogs(
kwargs.get(
"cal_logs",
[
"2021-11-15T09:15:07.12Z :: FlatFieldStep :: INFO :: Completed",
"2021-11-15T10:22.55.55Z :: RampFittingStep :: WARNING :: Wow, lots of Cosmic Rays detected",
],
)
) | /roman_datamodels-0.17.1-py3-none-any.whl/roman_datamodels/maker_utils/_tagged_nodes.py | 0.826642 | 0.218107 | _tagged_nodes.py | pypi |
from astropy import time
from astropy import units as u
from roman_datamodels import stnode
from ._base import NONUM, NOSTR
from ._basic_meta import mk_basic_meta
from ._tagged_nodes import mk_photometry, mk_resample
def mk_exposure(**kwargs):
"""
Create a dummy Exposure instance with valid values for attributes
required by the schema. Utilized by the model maker utilities below.
Returns
-------
roman_datamodels.stnode.Exposure
"""
exp = stnode.Exposure()
exp["id"] = kwargs.get("id", NONUM)
exp["type"] = kwargs.get("type", "WFI_IMAGE")
exp["start_time"] = kwargs.get("start_time", time.Time("2020-01-01T00:00:00.0", format="isot", scale="utc"))
exp["mid_time"] = kwargs.get("mid_time", time.Time("2020-01-01T00:00:00.0", format="isot", scale="utc"))
exp["end_time"] = kwargs.get("end_time", time.Time("2020-01-01T00:00:00.0", format="isot", scale="utc"))
exp["start_time_mjd"] = kwargs.get("start_time_mjd", NONUM)
exp["mid_time_mjd"] = kwargs.get("mid_time_mjd", NONUM)
exp["end_time_mjd"] = kwargs.get("end_time_mjd", NONUM)
exp["start_time_tdb"] = kwargs.get("start_time_tdb", NONUM)
exp["mid_time_tdb"] = kwargs.get("mid_time_tdb", NONUM)
exp["end_time_tdb"] = kwargs.get("end_time_tdb", NONUM)
exp["ngroups"] = kwargs.get("ngroups", 6)
exp["nframes"] = kwargs.get("nframes", 8)
exp["data_problem"] = kwargs.get("data_problem", False)
exp["sca_number"] = kwargs.get("sca_number", NONUM)
exp["gain_factor"] = kwargs.get("gain_factor", NONUM)
exp["integration_time"] = kwargs.get("integration_time", NONUM)
exp["elapsed_exposure_time"] = kwargs.get("elapsed_exposure_time", NONUM)
exp["frame_divisor"] = kwargs.get("frame_divisor", NONUM)
exp["groupgap"] = kwargs.get("groupgap", 0)
exp["frame_time"] = kwargs.get("frame_time", NONUM)
exp["group_time"] = kwargs.get("group_time", NONUM)
exp["exposure_time"] = kwargs.get("exposure_time", NONUM)
exp["effective_exposure_time"] = kwargs.get("effective_exposure_time", NONUM)
exp["duration"] = kwargs.get("duration", NONUM)
exp["ma_table_name"] = kwargs.get("ma_table_name", NOSTR)
exp["ma_table_number"] = kwargs.get("ma_table_number", NONUM)
exp["level0_compressed"] = kwargs.get("level0_compressed", True)
exp["read_pattern"] = kwargs.get("read_pattern", [[1], [2, 3], [4], [5, 6, 7, 8], [9, 10], [11]])
return exp
def mk_wfi_mode(**kwargs):
"""
Create a dummy WFI mode instance with valid values for attributes
required by the schema. Utilized by the model maker utilities below.
Returns
-------
roman_datamodels.stnode.WfiMode
"""
mode = stnode.WfiMode()
mode["name"] = kwargs.get("name", "WFI")
mode["detector"] = kwargs.get("detector", "WFI01")
mode["optical_element"] = kwargs.get("optical_element", "F158")
return mode
def mk_program(**kwargs):
"""
Create a dummy Program instance with valid values for attributes
required by the schema. Utilized by the model maker utilities below.
Returns
-------
roman_datamodels.stnode.Program
"""
prog = stnode.Program()
prog["title"] = kwargs.get("title", NOSTR)
prog["pi_name"] = kwargs.get("pi_name", NOSTR)
prog["category"] = kwargs.get("category", NOSTR)
prog["subcategory"] = kwargs.get("subcategory", NOSTR)
prog["science_category"] = kwargs.get("science_category", NOSTR)
prog["continuation_id"] = kwargs.get("continuation_id", NONUM)
return prog
def mk_observation(**kwargs):
"""
Create a dummy Observation instance with valid values for attributes
required by the schema. Utilized by the model maker utilities below.
Returns
-------
roman_datamodels.stnode.Observation
"""
obs = stnode.Observation()
obs["obs_id"] = kwargs.get("obs_id", NOSTR)
obs["visit_id"] = kwargs.get("visit_id", NOSTR)
obs["program"] = kwargs.get("program", str(NONUM))
obs["execution_plan"] = kwargs.get("execution_plan", NONUM)
obs["pass"] = kwargs.get("pass", NONUM)
obs["segment"] = kwargs.get("segment", NONUM)
obs["observation"] = kwargs.get("observation", NONUM)
obs["visit"] = kwargs.get("visit", NONUM)
obs["visit_file_group"] = kwargs.get("visit_file_group", NONUM)
obs["visit_file_sequence"] = kwargs.get("visit_file_sequence", NONUM)
obs["visit_file_activity"] = kwargs.get("visit_file_activity", NOSTR)
obs["exposure"] = kwargs.get("exposure", NONUM)
obs["template"] = kwargs.get("template", NOSTR)
obs["observation_label"] = kwargs.get("observation_label", NOSTR)
obs["survey"] = kwargs.get("survey", "N/A")
return obs
def mk_ephemeris(**kwargs):
"""
Create a dummy Ephemeris instance with valid values for attributes
required by the schema. Utilized by the model maker utilities below.
Returns
-------
roman_datamodels.stnode.Ephemeris
"""
ephem = stnode.Ephemeris()
ephem["earth_angle"] = kwargs.get("earth_angle", NONUM)
ephem["moon_angle"] = kwargs.get("moon_angle", NONUM)
ephem["ephemeris_reference_frame"] = kwargs.get("ephemeris_reference_frame", NOSTR)
ephem["sun_angle"] = kwargs.get("sun_angle", NONUM)
ephem["type"] = kwargs.get("type", "DEFINITIVE")
ephem["time"] = kwargs.get("time", NONUM)
ephem["spatial_x"] = kwargs.get("spatial_x", NONUM)
ephem["spatial_y"] = kwargs.get("spatial_y", NONUM)
ephem["spatial_z"] = kwargs.get("spatial_z", NONUM)
ephem["velocity_x"] = kwargs.get("velocity_x", NONUM)
ephem["velocity_y"] = kwargs.get("velocity_y", NONUM)
ephem["velocity_z"] = kwargs.get("velocity_z", NONUM)
return ephem
def mk_visit(**kwargs):
"""
Create a dummy Visit instance with valid values for attributes
required by the schema. Utilized by the model maker utilities below.
Returns
-------
roman_datamodels.stnode.Visit
"""
visit = stnode.Visit()
visit["engineering_quality"] = kwargs.get("engineering_quality", "OK")
visit["pointing_engdb_quality"] = kwargs.get("pointing_engdb_quality", "CALCULATED")
visit["type"] = kwargs.get("type", NOSTR)
visit["start_time"] = kwargs.get("start_time", time.Time("2020-01-01T00:00:00.0", format="isot", scale="utc"))
visit["end_time"] = kwargs.get("end_time", time.Time("2020-01-01T00:00:00.0", format="isot", scale="utc"))
visit["status"] = kwargs.get("status", NOSTR)
visit["total_exposures"] = kwargs.get("total_exposures", NONUM)
visit["internal_target"] = kwargs.get("internal_target", False)
visit["target_of_opportunity"] = kwargs.get("target_of_opportunity", False)
return visit
def mk_source_detection(**kwargs):
"""
Create a dummy Source Detection instance with valid values for attributes
required by the schema. Utilized by the model maker utilities below
Returns
-------
roman_datamodels.stnode.SourceDetection
"""
sd = stnode.SourceDetection()
sd["tweakreg_catalog_name"] = kwargs.get("tweakreg_catalog_name", "filename_tweakreg_catalog.asdf")
return sd
def mk_coordinates(**kwargs):
"""
Create a dummy Coordinates instance with valid values for attributes
required by the schema. Utilized by the model maker utilities below.
Returns
-------
roman_datamodels.stnode.Coordinates
"""
coord = stnode.Coordinates()
coord["reference_frame"] = kwargs.get("reference_frame", "ICRS")
return coord
def mk_aperture(**kwargs):
"""
Create a dummy Aperture instance with valid values for attributes
required by the schema. Utilized by the model maker utilities below.
Returns
-------
roman_datamodels.stnode.Aperture
"""
aper = stnode.Aperture()
aper["name"] = kwargs.get("name", f"WFI_{5 + 1:02d}_FULL")
aper["position_angle"] = kwargs.get("position_angle", 30.0)
return aper
def mk_pointing(**kwargs):
"""
Create a dummy Pointing instance with valid values for attributes
required by the schema. Utilized by the model maker utilities below.
Returns
-------
roman_datamodels.stnode.Pointing
"""
point = stnode.Pointing()
point["ra_v1"] = kwargs.get("ra_v1", NONUM)
point["dec_v1"] = kwargs.get("dec_v1", NONUM)
point["pa_v3"] = kwargs.get("pa_v3", NONUM)
return point
def mk_target(**kwargs):
"""
Create a dummy Target instance with valid values for attributes
required by the schema. Utilized by the model maker utilities below.
Returns
-------
roman_datamodels.stnode.Target
"""
targ = stnode.Target()
targ["proposer_name"] = kwargs.get("proposer_name", NOSTR)
targ["catalog_name"] = kwargs.get("catalog_name", NOSTR)
targ["type"] = kwargs.get("type", "FIXED")
targ["ra"] = kwargs.get("ra", NONUM)
targ["dec"] = kwargs.get("dec", NONUM)
targ["ra_uncertainty"] = kwargs.get("ra_uncertainty", NONUM)
targ["dec_uncertainty"] = kwargs.get("dec_uncertainty", NONUM)
targ["proper_motion_ra"] = kwargs.get("proper_motion_ra", NONUM)
targ["proper_motion_dec"] = kwargs.get("proper_motion_dec", NONUM)
targ["proper_motion_epoch"] = kwargs.get("proper_motion_epoch", NOSTR)
targ["proposer_ra"] = kwargs.get("proposer_ra", NONUM)
targ["proposer_dec"] = kwargs.get("proposer_dec", NONUM)
targ["source_type"] = kwargs.get("source_type", "POINT")
return targ
def mk_velocity_aberration(**kwargs):
"""
Create a dummy Velocity Aberration instance with valid values for attributes
required by the schema. Utilized by the model maker utilities below.
Returns
-------
roman_datamodels.stnode.VelocityAberration
"""
vab = stnode.VelocityAberration()
vab["ra_offset"] = kwargs.get("ra_offset", NONUM)
vab["dec_offset"] = kwargs.get("dec_offset", NONUM)
vab["scale_factor"] = kwargs.get("scale_factor", NONUM)
return vab
def mk_wcsinfo(**kwargs):
"""
Create a dummy WCS Info instance with valid values for attributes
required by the schema. Utilized by the model maker utilities below.
Returns
-------
roman_datamodels.stnode.Wcsinfo
"""
wcsi = stnode.Wcsinfo()
wcsi["v2_ref"] = kwargs.get("v2_ref", NONUM)
wcsi["v3_ref"] = kwargs.get("v3_ref", NONUM)
wcsi["vparity"] = kwargs.get("vparity", NONUM)
wcsi["v3yangle"] = kwargs.get("v3yangle", NONUM)
wcsi["ra_ref"] = kwargs.get("ra_ref", NONUM)
wcsi["dec_ref"] = kwargs.get("dec_ref", NONUM)
wcsi["roll_ref"] = kwargs.get("roll_ref", NONUM)
wcsi["s_region"] = kwargs.get("s_region", NOSTR)
return wcsi
def mk_cal_step(**kwargs):
"""
Create a dummy Cal Step instance with valid values for attributes
required by the schema. Utilized by the model maker utilities below.
Returns
-------
roman_datamodels.stnode.CalStep
"""
calstep = stnode.CalStep()
calstep["assign_wcs"] = kwargs.get("assign_wcs", "INCOMPLETE")
calstep["dark"] = kwargs.get("dark", "INCOMPLETE")
calstep["dq_init"] = kwargs.get("dq_init", "INCOMPLETE")
calstep["flat_field"] = kwargs.get("flat_field", "INCOMPLETE")
calstep["jump"] = kwargs.get("jump", "INCOMPLETE")
calstep["linearity"] = kwargs.get("linearity", "INCOMPLETE")
calstep["photom"] = kwargs.get("photom", "INCOMPLETE")
calstep["source_detection"] = kwargs.get("source_detection", "INCOMPLETE")
calstep["outlier_detection"] = kwargs.get("outlier_detection", "INCOMPLETE")
calstep["ramp_fit"] = kwargs.get("ramp_fit", "INCOMPLETE")
calstep["refpix"] = kwargs.get("refpix", "INCOMPLETE")
calstep["saturation"] = kwargs.get("saturation", "INCOMPLETE")
calstep["skymatch"] = kwargs.get("skymatch", "INCOMPLETE")
calstep["tweakreg"] = kwargs.get("tweakreg", "INCOMPLETE")
calstep["resample"] = kwargs.get("resample", "INCOMPLETE")
return calstep
def mk_guidestar(**kwargs):
"""
Create a dummy Guide Star instance with valid values for attributes
required by the schema. Utilized by the model maker utilities below.
Returns
-------
roman_datamodels.stnode.Guidestar
"""
guide = stnode.Guidestar()
guide["gw_id"] = kwargs.get("gw_id", NOSTR)
guide["gs_ra"] = kwargs.get("gs_ra", NONUM)
guide["gs_dec"] = kwargs.get("gs_dec", NONUM)
guide["gs_ura"] = kwargs.get("gs_ura", NONUM)
guide["gs_udec"] = kwargs.get("gs_udec", NONUM)
guide["gs_mag"] = kwargs.get("gs_mag", NONUM)
guide["gs_umag"] = kwargs.get("gs_umag", NONUM)
guide["gw_fgs_mode"] = kwargs.get("gw_fgs_mode", "WSM-ACQ-2")
guide["gs_id"] = kwargs.get("gs_id", NOSTR)
guide["gs_catalog_version"] = kwargs.get("gs_catalog_version", NOSTR)
guide["data_start"] = kwargs.get("data_start", NONUM)
guide["data_end"] = kwargs.get("data_end", NONUM)
guide["gs_ctd_x"] = kwargs.get("gs_ctd_x", NONUM)
guide["gs_ctd_y"] = kwargs.get("gs_ctd_y", NONUM)
guide["gs_ctd_ux"] = kwargs.get("gs_ctd_ux", NONUM)
guide["gs_ctd_uy"] = kwargs.get("gs_ctd_uy", NONUM)
guide["gs_epoch"] = kwargs.get("gs_epoch", NOSTR)
guide["gs_mura"] = kwargs.get("gs_mura", NONUM)
guide["gs_mudec"] = kwargs.get("gs_mudec", NONUM)
guide["gs_para"] = kwargs.get("gs_para", NONUM)
guide["gs_pattern_error"] = kwargs.get("gs_pattern_error", NONUM)
guide["gw_window_xstart"] = kwargs.get("gw_window_xstart", NONUM)
guide["gw_window_ystart"] = kwargs.get("gw_window_ystart", NONUM)
guide["gw_window_xstop"] = kwargs.get("gw_window_xstop", guide["gw_window_xstart"] + 170)
guide["gw_window_ystop"] = kwargs.get("gw_window_ystop", guide["gw_window_ystart"] + 24)
guide["gw_window_xsize"] = kwargs.get("gw_window_xsize", 170)
guide["gw_window_ysize"] = kwargs.get("gw_window_ysize", 24)
return guide
def mk_ref_file(**kwargs):
"""
Create a dummy RefFile instance with valid values for attributes
required by the schema. Utilized by the model maker utilities below.
Returns
-------
roman_datamodels.stnode.RefFile
"""
ref_file = stnode.RefFile()
ref_file["dark"] = kwargs.get("dark", "N/A")
ref_file["distortion"] = kwargs.get("distortion", "N/A")
ref_file["flat"] = kwargs.get("flat", "N/A")
ref_file["gain"] = kwargs.get("gain", "N/A")
ref_file["linearity"] = kwargs.get("linearity", "N/A")
ref_file["mask"] = kwargs.get("mask", "N/A")
ref_file["readnoise"] = kwargs.get("readnoise", "N/A")
ref_file["saturation"] = kwargs.get("saturation", "N/A")
ref_file["photom"] = kwargs.get("photom", "N/A")
ref_file["crds"] = kwargs.get("crds", {"sw_version": "12.3.1", "context_used": "roman_0815.pmap"})
return ref_file
def mk_common_meta(**kwargs):
"""
Create a dummy common metadata dictionary with valid values for attributes
Returns
-------
dict (defined by the common-1.0.0 schema)
"""
meta = mk_basic_meta(**kwargs)
meta["aperture"] = mk_aperture(**kwargs.get("aperture", {}))
meta["cal_step"] = mk_cal_step(**kwargs.get("cal_step", {}))
meta["coordinates"] = mk_coordinates(**kwargs.get("coordinates", {}))
meta["ephemeris"] = mk_ephemeris(**kwargs.get("ephemeris", {}))
meta["exposure"] = mk_exposure(**kwargs.get("exposure", {}))
meta["guidestar"] = mk_guidestar(**kwargs.get("guidestar", {}))
meta["instrument"] = mk_wfi_mode(**kwargs.get("instrument", {}))
meta["observation"] = mk_observation(**kwargs.get("observation", {}))
meta["pointing"] = mk_pointing(**kwargs.get("pointing", {}))
meta["program"] = mk_program(**kwargs.get("program", {}))
meta["ref_file"] = mk_ref_file(**kwargs.get("ref_file", {}))
meta["target"] = mk_target(**kwargs.get("target", {}))
meta["velocity_aberration"] = mk_velocity_aberration(**kwargs.get("velocity_aberration", {}))
meta["visit"] = mk_visit(**kwargs.get("visit", {}))
meta["wcsinfo"] = mk_wcsinfo(**kwargs.get("wcsinfo", {}))
return meta
def mk_photometry_meta(**kwargs):
"""
Create a dummy common metadata dictionary with valid values for attributes and add
the additional photometry metadata
Returns
-------
dict (defined by the common-1.0.0 schema with additional photometry metadata)
"""
meta = mk_common_meta(**kwargs)
meta["photometry"] = mk_photometry(**kwargs.get("photometry", {}))
return meta
def mk_resample_meta(**kwargs):
"""
Create a dummy common metadata dictionary with valid values for attributes and add
the additional photometry AND resample metadata
Returns
-------
dict (defined by the common-1.0.0 schema with additional photometry and resample metadata)
"""
meta = mk_photometry_meta(**kwargs)
meta["resample"] = mk_resample(**kwargs.get("resample", {}))
return meta
def mk_guidewindow_meta(**kwargs):
"""
Create a dummy common metadata dictionary with valid values for attributes and add
the additional guidewindow metadata
Returns
-------
dict (defined by the common-1.0.0 schema with additional guidewindow metadata)
"""
meta = mk_common_meta(**kwargs)
meta["file_creation_time"] = kwargs.get("file_creation_time", time.Time("2020-01-01T20:00:00.0", format="isot", scale="utc"))
meta["gw_start_time"] = kwargs.get("gw_start_time", time.Time("2020-01-01T00:00:00.0", format="isot", scale="utc"))
meta["gw_end_time"] = kwargs.get("gw_end_time", time.Time("2020-01-01T10:00:00.0", format="isot", scale="utc"))
meta["gw_function_start_time"] = kwargs.get(
"gw_function_start_time", time.Time("2020-01-01T00:00:00.0", format="isot", scale="utc")
)
meta["gw_function_end_time"] = kwargs.get(
"gw_function_end_time", time.Time("2020-01-01T00:00:00.0", format="isot", scale="utc")
)
meta["gw_frame_readout_time"] = kwargs.get("gw_frame_readout_time", NONUM)
meta["pedestal_resultant_exp_time"] = kwargs.get("pedestal_resultant_exp_time", NONUM)
meta["signal_resultant_exp_time"] = kwargs.get("signal_resultant_exp_time", NONUM)
meta["gw_acq_number"] = kwargs.get("gw_acq_number", NONUM)
meta["gw_science_file_source"] = kwargs.get("gw_science_file_source", NOSTR)
meta["gw_mode"] = kwargs.get("gw_mode", "WIM-ACQ")
meta["gw_window_xstart"] = kwargs.get("gw_window_xstart", NONUM)
meta["gw_window_ystart"] = kwargs.get("gw_window_ystart", NONUM)
meta["gw_window_xstop"] = kwargs.get("gw_window_xstop", meta["gw_window_xstart"] + 170)
meta["gw_window_ystop"] = kwargs.get("gw_window_ystop", meta["gw_window_ystart"] + 24)
meta["gw_window_xsize"] = kwargs.get("gw_window_xsize", 170)
meta["gw_window_ysize"] = kwargs.get("gw_window_ysize", 24)
meta["gw_function_start_time"] = kwargs.get(
"gw_function_start_time", time.Time("2020-01-01T00:00:00.0", format="isot", scale="utc")
)
meta["gw_function_end_time"] = kwargs.get(
"gw_function_end_time", time.Time("2020-01-01T00:00:00.0", format="isot", scale="utc")
)
meta["data_start"] = kwargs.get("data_start", NONUM)
meta["data_end"] = kwargs.get("data_end", NONUM)
meta["gw_acq_exec_stat"] = kwargs.get("gw_acq_exec_stat", "StatusRMTest619")
return meta
def mk_msos_stack_meta(**kwargs):
"""
Create a dummy common metadata dictionary with valid values for attributes and add
the additional msos_stack metadata
Returns
-------
dict (defined by the common-1.0.0 schema with additional guidewindow metadata)
"""
meta = mk_common_meta(**kwargs)
meta["image_list"] = kwargs.get("image_list", NOSTR)
return meta
def mk_ref_common(reftype_, **kwargs):
"""
Create dummy metadata for reference file instances.
Returns
-------
dict (follows reference_file/ref_common-1.0.0 schema)
"""
meta = {}
meta["telescope"] = kwargs.get("telescope", "ROMAN")
meta["instrument"] = kwargs.get("instrument", {"name": "WFI", "detector": "WFI01", "optical_element": "F158"})
meta["origin"] = kwargs.get("origin", "STSCI")
meta["pedigree"] = kwargs.get("pedigree", "GROUND")
meta["author"] = kwargs.get("author", "test system")
meta["description"] = kwargs.get("description", "blah blah blah")
meta["useafter"] = kwargs.get("useafter", time.Time("2020-01-01T00:00:00.0", format="isot", scale="utc"))
meta["reftype"] = kwargs.get("reftype", reftype_)
return meta
def _mk_ref_exposure(**kwargs):
"""
Create the general exposure meta data
"""
exposure = {}
exposure["type"] = kwargs.get("type", "WFI_IMAGE")
exposure["p_exptype"] = kwargs.get("p_exptype", "WFI_IMAGE|WFI_GRISM|WFI_PRISM|")
return exposure
def _mk_ref_dark_exposure(**kwargs):
"""
Create the dark exposure meta data
"""
exposure = _mk_ref_exposure(**kwargs)
exposure["ngroups"] = kwargs.get("ngroups", 6)
exposure["nframes"] = kwargs.get("nframes", 8)
exposure["groupgap"] = kwargs.get("groupgap", 0)
exposure["ma_table_name"] = kwargs.get("ma_table_name", NOSTR)
exposure["ma_table_number"] = kwargs.get("ma_table_number", NONUM)
return exposure
def mk_ref_dark_meta(**kwargs):
"""
Create dummy metadata for dark reference file instances.
Returns
-------
dict (follows reference_file/ref_common-1.0.0 schema + dark reference file metadata)
"""
meta = mk_ref_common("DARK", **kwargs)
meta["exposure"] = _mk_ref_dark_exposure(**kwargs.get("exposure", {}))
return meta
def mk_ref_distoriton_meta(**kwargs):
"""
Create dummy metadata for distortion reference file instances.
Returns
-------
dict (follows reference_file/ref_common-1.0.0 schema + distortion reference file metadata)
"""
meta = mk_ref_common("DISTORTION", **kwargs)
meta["input_units"] = kwargs.get("input_units", u.pixel)
meta["output_units"] = kwargs.get("output_units", u.arcsec)
return meta
def _mk_ref_photometry_meta(**kwargs):
"""
Create the photometry meta data for pixelarea reference files
"""
meta = {}
meta["pixelarea_steradians"] = kwargs.get("pixelarea_steradians", float(NONUM) * u.sr)
meta["pixelarea_arcsecsq"] = kwargs.get("pixelarea_arcsecsq", float(NONUM) * u.arcsec**2)
return meta
def mk_ref_pixelarea_meta(**kwargs):
"""
Create dummy metadata for pixelarea reference file instances.
Returns
-------
dict (follows reference_file/ref_common-1.0.0 schema + pixelarea reference file metadata)
"""
meta = mk_ref_common("AREA", **kwargs)
meta["photometry"] = _mk_ref_photometry_meta(**kwargs.get("photometry", {}))
return meta
def mk_ref_units_dn_meta(reftype_, **kwargs):
"""
Create dummy metadata for reference file instances which specify DN as input/output units.
Returns
-------
dict (follows reference_file/ref_common-1.0.0 schema + DN input/output metadata)
"""
meta = mk_ref_common(reftype_, **kwargs)
meta["input_units"] = kwargs.get("input_units", u.DN)
meta["output_units"] = kwargs.get("output_units", u.DN)
return meta
def mk_ref_readnoise_meta(**kwargs):
"""
Create dummy metadata for readnoise reference file instances.
Returns
-------
dict (follows reference_file/ref_common-1.0.0 schema + readnoise reference file metadata)
"""
meta = mk_ref_common("READNOISE", **kwargs)
meta["exposure"] = _mk_ref_exposure(**kwargs.get("exposure", {}))
return meta | /roman_datamodels-0.17.1-py3-none-any.whl/roman_datamodels/maker_utils/_common_meta.py | 0.702632 | 0.245469 | _common_meta.py | pypi |
from roman_datamodels.datamodels import MODEL_REGISTRY as _MODEL_REGISTRY # Hide from public API
from ._basic_meta import * # noqa: F403
from ._common_meta import * # noqa: F403
from ._datamodels import * # noqa: F403
from ._ref_files import * # noqa: F403
from ._tagged_nodes import * # noqa: F403
# These makers have special names to reflect the nature of their use in the pipeline
SPECIAL_MAKERS = {
"WfiScienceRaw": "mk_level1_science_raw",
"WfiImage": "mk_level2_image",
"WfiMosaic": "mk_level3_mosaic",
}
# This is static at runtime, so we might as well compute it once
NODE_REGISTRY = {mdl: node for node, mdl in _MODEL_REGISTRY.items()}
def _camel_case_to_snake_case(value):
"""
Courtesy of https://stackoverflow.com/a/1176023
"""
import re
return re.sub(r"(?<!^)(?=[A-Z])", "_", value).lower()
def _get_node_maker(node_class):
"""
Create a dummy node of the specified class with valid values
for attributes required by the schema.
Parameters
----------
node_class : type
Node class (from stnode).
Returns
-------
maker function for node class
"""
if node_class.__name__ in SPECIAL_MAKERS:
method_name = SPECIAL_MAKERS[node_class.__name__]
else:
method_name = f"mk_{_camel_case_to_snake_case(node_class.__name__)}"
# Reference files are in their own module so the '_ref` monicker is left off
if method_name.endswith("_ref"):
method_name = method_name[:-4]
if method_name not in globals():
raise ValueError(f"Maker utility: {method_name} not implemented for class {node_class.__name__}")
return globals()[method_name]
def mk_node(node_class, **kwargs):
"""
Create a dummy node of the specified class with valid values
for attributes required by the schema.
Parameters
----------
node_class : type
Node class (from stnode).
**kwargs
Additional or overridden attributes.
Returns
-------
`roman_datamodels.stnode.TaggedObjectNode`
"""
return _get_node_maker(node_class)(**kwargs)
def mk_datamodel(model_class, **kwargs):
"""
Create a dummy datamodel of the specified class with valid values
for all attributes required by the schema.
Parameters
----------
model_class : type
One of the datamodel subclasses from datamodel
**kwargs
Additional or overridden attributes.
Returns
-------
`roman_datamodels.datamodels.Datamodel`
"""
return model_class(mk_node(NODE_REGISTRY[model_class], **kwargs)) | /roman_datamodels-0.17.1-py3-none-any.whl/roman_datamodels/maker_utils/__init__.py | 0.748995 | 0.26192 | __init__.py | pypi |
import warnings
import numpy as np
from astropy import units as u
from astropy.modeling import models
from roman_datamodels import stnode
from ._base import MESSAGE, save_node
from ._common_meta import (
mk_ref_common,
mk_ref_dark_meta,
mk_ref_distoriton_meta,
mk_ref_pixelarea_meta,
mk_ref_readnoise_meta,
mk_ref_units_dn_meta,
)
__all__ = [
"mk_flat",
"mk_dark",
"mk_distortion",
"mk_gain",
"mk_ipc",
"mk_linearity",
"mk_inverselinearity",
"mk_mask",
"mk_pixelarea",
"mk_wfi_img_photom",
"mk_readnoise",
"mk_saturation",
"mk_superbias",
"mk_refpix",
]
def mk_flat(*, shape=(4096, 4096), filepath=None, **kwargs):
"""
Create a dummy Flat instance (or file) with arrays and valid values for
attributes required by the schema.
Parameters
----------
shape
(optional, keyword-only) Shape of arrays in the model.
If shape is greater than 2D, the first two dimensions are used.
filepath
(optional, keyword-only) File name and path to write model to.
Returns
-------
roman_datamodels.stnode.FlatRef
"""
if len(shape) > 2:
shape = shape[:2]
warnings.warn(f"{MESSAGE} assuming the first two entries. The remaining is thrown out!", UserWarning)
flatref = stnode.FlatRef()
flatref["meta"] = mk_ref_common("FLAT", **kwargs.get("meta", {}))
flatref["data"] = kwargs.get("data", np.zeros(shape, dtype=np.float32))
flatref["dq"] = kwargs.get("dq", np.zeros(shape, dtype=np.uint32))
flatref["err"] = kwargs.get("err", np.zeros(shape, dtype=np.float32))
return save_node(flatref, filepath=filepath)
def mk_dark(*, shape=(2, 4096, 4096), filepath=None, **kwargs):
"""
Create a dummy Dark Current instance (or file) with arrays and valid values
for attributes required by the schema.
Parameters
----------
shape
(optional, keyword-only) Shape of arrays in the model.
filepath
(optional, keyword-only) File name and path to write model to.
Returns
-------
roman_datamodels.stnode.DarkRef
"""
if len(shape) != 3:
shape = (2, 4096, 4096)
warnings.warn("Input shape must be 3D. Defaulting to (2, 4096, 4096)")
darkref = stnode.DarkRef()
darkref["meta"] = mk_ref_dark_meta(**kwargs.get("meta", {}))
darkref["data"] = kwargs.get("data", u.Quantity(np.zeros(shape, dtype=np.float32), u.DN, dtype=np.float32))
darkref["dq"] = kwargs.get("dq", np.zeros(shape[1:], dtype=np.uint32))
darkref["err"] = kwargs.get("err", u.Quantity(np.zeros(shape, dtype=np.float32), u.DN, dtype=np.float32))
return save_node(darkref, filepath=filepath)
def mk_distortion(*, filepath=None, **kwargs):
"""
Create a dummy Distortion instance (or file) with arrays and valid values
for attributes required by the schema.
Parameters
----------
filepath
(optional, keyword-only) File name and path to write model to.
Returns
-------
roman_datamodels.stnode.DistortionRef
"""
distortionref = stnode.DistortionRef()
distortionref["meta"] = mk_ref_distoriton_meta(**kwargs.get("meta", {}))
distortionref["coordinate_distortion_transform"] = kwargs.get(
"coordinate_distortion_transform", models.Shift(1) & models.Shift(2)
)
return save_node(distortionref, filepath=filepath)
def mk_gain(*, shape=(4096, 4096), filepath=None, **kwargs):
"""
Create a dummy Gain instance (or file) with arrays and valid values for
attributes required by the schema.
Parameters
----------
shape
(optional, keyword-only) Shape of arrays in the model.
If shape is greater than 2D, the first two dimensions are used.
filepath
(optional, keyword-only) File name and path to write model to.
Returns
-------
roman_datamodels.stnode.GainRef
"""
if len(shape) > 2:
shape = shape[:2]
warnings.warn(f"{MESSAGE} assuming the first two entries. The remaining is thrown out!", UserWarning)
gainref = stnode.GainRef()
gainref["meta"] = mk_ref_common("GAIN", **kwargs.get("meta", {}))
gainref["data"] = kwargs.get("data", u.Quantity(np.zeros(shape, dtype=np.float32), u.electron / u.DN, dtype=np.float32))
return save_node(gainref, filepath=filepath)
def mk_ipc(*, shape=(3, 3), filepath=None, **kwargs):
"""
Create a dummy IPC instance (or file) with arrays and valid values for
attributes required by the schema.
Parameters
----------
shape
(optional, keyword-only) Shape of array in the model.
If shape is greater than 2D, the first two dimensions are used.
filepath
(optional, keyword-only) File name and path to write model to.
Returns
-------
roman_datamodels.stnode.IpcRef
"""
if len(shape) > 2:
shape = shape[:2]
warnings.warn(f"{MESSAGE} assuming the first two entries. The remaining is thrown out!", UserWarning)
ipcref = stnode.IpcRef()
ipcref["meta"] = mk_ref_common("IPC", **kwargs.get("meta", {}))
if "data" in kwargs:
ipcref["data"] = kwargs["data"]
else:
ipcref["data"] = np.zeros(shape, dtype=np.float32)
ipcref["data"][int(np.floor(shape[0] / 2))][int(np.floor(shape[1] / 2))] = 1.0
return save_node(ipcref, filepath=filepath)
def mk_linearity(*, shape=(2, 4096, 4096), filepath=None, **kwargs):
"""
Create a dummy Linearity instance (or file) with arrays and valid values for
attributes required by the schema.
Parameters
----------
shape
(optional, keyword-only) Shape of arrays in the model.
filepath
(optional, keyword-only) File name and path to write model to.
Returns
-------
roman_datamodels.stnode.LinearityRef
"""
if len(shape) != 3:
shape = (2, 4096, 4096)
warnings.warn("Input shape must be 3D. Defaulting to (2, 4096, 4096)")
linearityref = stnode.LinearityRef()
linearityref["meta"] = mk_ref_units_dn_meta("LINEARITY", **kwargs.get("meta", {}))
linearityref["dq"] = kwargs.get("dq", np.zeros(shape[1:], dtype=np.uint32))
linearityref["coeffs"] = kwargs.get("coeffs", np.zeros(shape, dtype=np.float32))
return save_node(linearityref, filepath=filepath)
def mk_inverselinearity(*, shape=(2, 4096, 4096), filepath=None, **kwargs):
"""
Create a dummy InverseLinearity instance (or file) with arrays and valid
values for attributes required by the schema.
Parameters
----------
shape
(optional, keyword-only) Shape of arrays in the model.
filepath
(optional, keyword-only) File name and path to write model to.
Returns
-------
roman_datamodels.stnode.InverseLinearityRef
"""
if len(shape) != 3:
shape = (2, 4096, 4096)
warnings.warn("Input shape must be 3D. Defaulting to (2, 4096, 4096)")
inverselinearityref = stnode.InverselinearityRef()
inverselinearityref["meta"] = mk_ref_units_dn_meta("INVERSELINEARITY", **kwargs.get("meta", {}))
inverselinearityref["dq"] = kwargs.get("dq", np.zeros(shape[1:], dtype=np.uint32))
inverselinearityref["coeffs"] = kwargs.get("coeffs", np.zeros(shape, dtype=np.float32))
return save_node(inverselinearityref, filepath=filepath)
def mk_mask(*, shape=(4096, 4096), filepath=None, **kwargs):
"""
Create a dummy Mask instance (or file) with arrays and valid values for
attributes required by the schema.
Parameters
----------
shape
(optional, keyword-only) Shape of arrays in the model.
If shape is greater than 2D, the first two dimensions are used.
filepath
(optional, keyword-only) File name and path to write model to.
Returns
-------
roman_datamodels.stnode.MaskRef
"""
if len(shape) > 2:
shape = shape[:2]
warnings.warn(f"{MESSAGE} assuming the first two entries. The remaining is thrown out!", UserWarning)
maskref = stnode.MaskRef()
maskref["meta"] = mk_ref_common("MASK", **kwargs.get("meta", {}))
maskref["dq"] = kwargs.get("dq", np.zeros(shape, dtype=np.uint32))
return save_node(maskref, filepath=filepath)
def mk_pixelarea(*, shape=(4096, 4096), filepath=None, **kwargs):
"""
Create a dummy Pixelarea instance (or file) with arrays and valid values for
attributes required by the schema.
Parameters
----------
shape
(optional, keyword-only) Shape of arrays in the model.
If shape is greater than 2D, the first two dimensions are used.
filepath
(optional, keyword-only) File name and path to write model to.
Returns
-------
roman_datamodels.stnode.PixelareaRef
"""
if len(shape) > 2:
shape = shape[:2]
warnings.warn(f"{MESSAGE} assuming the first two entries. The remaining is thrown out!", UserWarning)
pixelarearef = stnode.PixelareaRef()
pixelarearef["meta"] = mk_ref_pixelarea_meta(**kwargs.get("meta", {}))
pixelarearef["data"] = kwargs.get("data", np.zeros(shape, dtype=np.float32))
return save_node(pixelarearef, filepath=filepath)
def _mk_phot_table_entry(key, **kwargs):
"""
Create single phot_table entry for a given key.
"""
if key in ("GRISM", "PRISM", "DARK"):
entry = {
"photmjsr": kwargs.get("photmjsr"),
"uncertainty": kwargs.get("uncertainty"),
}
else:
entry = {
"photmjsr": kwargs.get("photmjsr", 1.0e-15 * u.megajansky / u.steradian),
"uncertainty": kwargs.get("uncertainty", 1.0e-16 * u.megajansky / u.steradian),
}
entry["pixelareasr"] = kwargs.get("pixelareasr", 1.0e-13 * u.steradian)
return entry
def _mk_phot_table(**kwargs):
"""
Create the phot_table for the photom reference file.
"""
entries = ("F062", "F087", "F106", "F129", "F146", "F158", "F184", "F213", "GRISM", "PRISM", "DARK")
return {entry: _mk_phot_table_entry(entry, **kwargs.get(entry, {})) for entry in entries}
def mk_wfi_img_photom(*, filepath=None, **kwargs):
"""
Create a dummy WFI Img Photom instance (or file) with dictionary and valid
values for attributes required by the schema.
Parameters
----------
filepath
(optional, keyword-only) File name and path to write model to.
Returns
-------
roman_datamodels.stnode.WfiImgPhotomRef
"""
wfi_img_photomref = stnode.WfiImgPhotomRef()
wfi_img_photomref["meta"] = mk_ref_common("PHOTOM", **kwargs.get("meta", {}))
wfi_img_photomref["phot_table"] = _mk_phot_table(**kwargs.get("phot_table", {}))
return save_node(wfi_img_photomref, filepath=filepath)
def mk_readnoise(*, shape=(4096, 4096), filepath=None, **kwargs):
"""
Create a dummy Readnoise instance (or file) with arrays and valid values for
attributes required by the schema.
Parameters
----------
shape
(optional, keyword-only) Shape of arrays in the model.
If shape is greater than 2D, the first two dimensions are used.
filepath
(optional, keyword-only) File name and path to write model to.
Returns
-------
roman_datamodels.stnode.ReadnoiseRef
"""
if len(shape) > 2:
shape = shape[:2]
warnings.warn(f"{MESSAGE} assuming the first two entries. The remaining is thrown out!", UserWarning)
readnoiseref = stnode.ReadnoiseRef()
readnoiseref["meta"] = mk_ref_readnoise_meta(**kwargs.get("meta", {}))
readnoiseref["data"] = kwargs.get("data", u.Quantity(np.zeros(shape, dtype=np.float32), u.DN, dtype=np.float32))
return save_node(readnoiseref, filepath=filepath)
def mk_saturation(*, shape=(4096, 4096), filepath=None, **kwargs):
"""
Create a dummy Saturation instance (or file) with arrays and valid values
for attributes required by the schema.
Parameters
----------
shape
(optional, keyword-only) Shape of arrays in the model.
If shape is greater than 2D, the first two dimensions are used.
filepath
(optional, keyword-only) File name and path to write model to.
Returns
-------
roman_datamodels.stnode.SaturationRef
"""
if len(shape) > 2:
shape = shape[:2]
warnings.warn(f"{MESSAGE} assuming the first two entries. The remaining is thrown out!", UserWarning)
saturationref = stnode.SaturationRef()
saturationref["meta"] = mk_ref_common("SATURATION", **kwargs.get("meta", {}))
saturationref["dq"] = kwargs.get("dq", np.zeros(shape, dtype=np.uint32))
saturationref["data"] = kwargs.get("data", u.Quantity(np.zeros(shape, dtype=np.float32), u.DN, dtype=np.float32))
return save_node(saturationref, filepath=filepath)
def mk_superbias(*, shape=(4096, 4096), filepath=None, **kwargs):
"""
Create a dummy Superbias instance (or file) with arrays and valid values for
attributes required by the schema.
Parameters
----------
shape
(optional, keyword-only) Shape of arrays in the model.
If shape is greater than 2D, the first two dimensions are used.
filepath
(optional, keyword-only) File name and path to write model to.
Returns
-------
roman_datamodels.stnode.SuperbiasRef
"""
if len(shape) > 2:
shape = shape[:2]
warnings.warn(f"{MESSAGE} assuming the first two entries. The remaining is thrown out!", UserWarning)
superbiasref = stnode.SuperbiasRef()
superbiasref["meta"] = mk_ref_common("BIAS", **kwargs.get("meta", {}))
superbiasref["data"] = kwargs.get("data", np.zeros(shape, dtype=np.float32))
superbiasref["dq"] = kwargs.get("dq", np.zeros(shape, dtype=np.uint32))
superbiasref["err"] = kwargs.get("err", np.zeros(shape, dtype=np.float32))
return save_node(superbiasref, filepath=filepath)
def mk_refpix(*, shape=(32, 286721), filepath=None, **kwargs):
"""
Create a dummy Refpix instance (or file) with arrays and valid values for
attributes required by the schema.
Note the default shape is intrinically connected to the FFT combined with
specifics of the detector:
- 32: is the number of detector channels (amp33 is a non-observation
channel).
- 286721 is more complex:
There are 128 columns of the detector per channel, and for time read
alignment purposes, these columns are padded by 12 additional
columns. That is 140 columns per row. There are 4096 rows per
channel. Each channel is then flattened into a 1D array of
140 * 4096 = 573440 elements. Since the length is even the FFT of
this array will be of length (573440 / 2) + 1 = 286721.
Also, note the FFT gives a complex value and we are carrying full numerical
precision which means it is a complex128.
Parameters
----------
shape
(optional, keyword-only) Shape of arrays in the model.
If shape is greater than 2D, the first two dimensions are used.
filepath
(optional, keyword-only) File name and path to write model to.
Returns
-------
roman_datamodels.stnode.RefPixRef
"""
if len(shape) > 2:
shape = shape[:2]
warnings.warn(f"{MESSAGE} assuming the first two entries. The remaining is thrown out!", UserWarning)
refpix = stnode.RefpixRef()
refpix["meta"] = mk_ref_units_dn_meta("REFPIX", **kwargs.get("meta", {}))
refpix["gamma"] = kwargs.get("gamma", np.zeros(shape, dtype=np.complex128))
refpix["zeta"] = kwargs.get("zeta", np.zeros(shape, dtype=np.complex128))
refpix["alpha"] = kwargs.get("alpha", np.zeros(shape, dtype=np.complex128))
return save_node(refpix, filepath=filepath) | /roman_datamodels-0.17.1-py3-none-any.whl/roman_datamodels/maker_utils/_ref_files.py | 0.789721 | 0.348368 | _ref_files.py | pypi |
from asdf.extension import Converter, ManifestExtension
from astropy.time import Time
from ._registry import LIST_NODE_CLASSES_BY_TAG, NODE_CONVERTERS, OBJECT_NODE_CLASSES_BY_TAG, SCALAR_NODE_CLASSES_BY_TAG
__all__ = [
"TaggedObjectNodeConverter",
"TaggedListNodeConverter",
"TaggedScalarNodeConverter",
"NODE_EXTENSIONS",
]
class _RomanConverter(Converter):
"""
Base class for the roman_datamodels converters.
"""
def __init_subclass__(cls, **kwargs) -> None:
"""
Automatically create the converter objects.
"""
super().__init_subclass__(**kwargs)
if not cls.__name__.startswith("_"):
if cls.__name__ in NODE_CONVERTERS:
raise ValueError(f"Duplicate converter for {cls.__name__}")
NODE_CONVERTERS[cls.__name__] = cls()
class TaggedObjectNodeConverter(_RomanConverter):
"""
Converter for all subclasses of TaggedObjectNode.
"""
@property
def tags(self):
return list(OBJECT_NODE_CLASSES_BY_TAG.keys())
@property
def types(self):
return list(OBJECT_NODE_CLASSES_BY_TAG.values())
def select_tag(self, obj, tags, ctx):
return obj.tag
def to_yaml_tree(self, obj, tag, ctx):
return obj._data
def from_yaml_tree(self, node, tag, ctx):
return OBJECT_NODE_CLASSES_BY_TAG[tag](node)
class TaggedListNodeConverter(_RomanConverter):
"""
Converter for all subclasses of TaggedListNode.
"""
@property
def tags(self):
return list(LIST_NODE_CLASSES_BY_TAG.keys())
@property
def types(self):
return list(LIST_NODE_CLASSES_BY_TAG.values())
def select_tag(self, obj, tags, ctx):
return obj.tag
def to_yaml_tree(self, obj, tag, ctx):
return list(obj)
def from_yaml_tree(self, node, tag, ctx):
return LIST_NODE_CLASSES_BY_TAG[tag](node)
class TaggedScalarNodeConverter(_RomanConverter):
"""
Converter for all subclasses of TaggedScalarNode.
"""
@property
def tags(self):
return list(SCALAR_NODE_CLASSES_BY_TAG.keys())
@property
def types(self):
return list(SCALAR_NODE_CLASSES_BY_TAG.values())
def select_tag(self, obj, tags, ctx):
return obj.tag
def to_yaml_tree(self, obj, tag, ctx):
from ._stnode import FileDate
node = obj.__class__.__bases__[0](obj)
if tag == FileDate._tag:
converter = ctx.extension_manager.get_converter_for_type(type(node))
node = converter.to_yaml_tree(node, tag, ctx)
return node
def from_yaml_tree(self, node, tag, ctx):
from ._stnode import FileDate
if tag == FileDate._tag:
converter = ctx.extension_manager.get_converter_for_type(Time)
node = converter.from_yaml_tree(node, tag, ctx)
return SCALAR_NODE_CLASSES_BY_TAG[tag](node)
# Create the ASDF extension for the STNode classes.
NODE_EXTENSIONS = [
ManifestExtension.from_uri("asdf://stsci.edu/datamodels/roman/manifests/datamodels-1.0", converters=NODE_CONVERTERS.values())
] | /roman_datamodels-0.17.1-py3-none-any.whl/roman_datamodels/stnode/_converters.py | 0.812719 | 0.231788 | _converters.py | pypi |
import datetime
import re
import warnings
from collections import UserList
from collections.abc import MutableMapping
import asdf
import asdf.schema as asdfschema
import asdf.yamlutil as yamlutil
import numpy as np
from asdf.exceptions import ValidationError
from asdf.tags.core import ndarray
from asdf.util import HashableDict
from astropy.time import Time
from roman_datamodels.validate import ValidationWarning, _check_type, _error_message, will_strict_validate, will_validate
from ._registry import SCALAR_NODE_CLASSES_BY_KEY
__all__ = ["DNode", "LNode"]
validator_callbacks = HashableDict(asdfschema.YAML_VALIDATORS)
validator_callbacks.update({"type": _check_type})
def _value_change(path, value, schema, pass_invalid_values, strict_validation, ctx):
"""
Validate a change in value against a schema.
Trap error and return a flag.
"""
try:
_check_value(value, schema, ctx)
update = True
except ValidationError as error:
update = False
errmsg = _error_message(path, error)
if pass_invalid_values:
update = True
if strict_validation:
raise ValidationError(errmsg)
else:
warnings.warn(errmsg, ValidationWarning)
return update
def _check_value(value, schema, validator_context):
"""
Perform the actual validation.
"""
temp_schema = {"$schema": "http://stsci.edu/schemas/asdf-schema/0.1.0/asdf-schema"}
temp_schema.update(schema)
validator = asdfschema.get_validator(temp_schema, validator_context, validator_callbacks)
validator.validate(value, _schema=temp_schema)
validator_context.close()
def _validate(attr, instance, schema, ctx):
tagged_tree = yamlutil.custom_tree_to_tagged_tree(instance, ctx)
return _value_change(attr, tagged_tree, schema, False, will_strict_validate(), ctx)
def _get_schema_for_property(schema, attr):
# Check if attr is a property
subschema = schema.get("properties", {}).get(attr, None)
# Check if attr is a pattern property
props = schema.get("patternProperties", {})
for key, value in props.items():
if re.match(key, attr):
subschema = value
break
if subschema is not None:
return subschema
for combiner in ["allOf", "anyOf"]:
for subschema in schema.get(combiner, []):
subsubschema = _get_schema_for_property(subschema, attr)
if subsubschema != {}:
return subsubschema
return {}
class DNode(MutableMapping):
"""
Base class describing all "object" (dict-like) data nodes for STNode classes.
"""
_tag = None
_ctx = None
def __init__(self, node=None, parent=None, name=None):
if node is None:
self.__dict__["_data"] = {}
elif isinstance(node, dict):
self.__dict__["_data"] = node
else:
raise ValueError("Initializer only accepts dicts")
self._x_schema = None
self._schema_uri = None
self._parent = parent
self._name = name
@property
def ctx(self):
if self._ctx is None:
DNode._ctx = asdf.AsdfFile()
return self._ctx
@staticmethod
def _convert_to_scalar(key, value):
if key in SCALAR_NODE_CLASSES_BY_KEY:
value = SCALAR_NODE_CLASSES_BY_KEY[key](value)
return value
def __getattr__(self, key):
"""
Permit accessing dict keys as attributes, assuming they are legal Python
variable names.
"""
if key.startswith("_"):
raise AttributeError(f"No attribute {key}")
if key in self._data:
value = self._convert_to_scalar(key, self._data[key])
if isinstance(value, dict):
return DNode(value, parent=self, name=key)
elif isinstance(value, list):
return LNode(value)
else:
return value
else:
raise AttributeError(f"No such attribute ({key}) found in node")
def __setattr__(self, key, value):
"""
Permit assigning dict keys as attributes.
"""
if key[0] != "_":
value = self._convert_to_scalar(key, value)
if key in self._data:
if will_validate():
schema = _get_schema_for_property(self._schema(), key)
if schema == {} or _validate(key, value, schema, self.ctx):
self._data[key] = value
self.__dict__["_data"][key] = value
else:
raise AttributeError(f"No such attribute ({key}) found in node")
else:
self.__dict__[key] = value
def to_flat_dict(self, include_arrays=True):
"""
Returns a dictionary of all of the schema items as a flat dictionary.
Each dictionary key is a dot-separated name. For example, the
schema element ``meta.observation.date`` will end up in the
dictionary as::
{ "meta.observation.date": "2012-04-22T03:22:05.432" }
"""
def convert_val(val):
if isinstance(val, datetime.datetime):
return val.isoformat()
elif isinstance(val, Time):
return str(val)
return val
if include_arrays:
return {key: convert_val(val) for (key, val) in self.items()}
else:
return {
key: convert_val(val) for (key, val) in self.items() if not isinstance(val, (np.ndarray, ndarray.NDArrayType))
}
def _schema(self):
"""
If not overridden by a subclass, it will search for a schema from
the parent class, recursing if necessary until one is found.
"""
if self._x_schema is None:
parent_schema = self._parent._schema()
# Extract the subschema corresponding to this node.
subschema = _get_schema_for_property(parent_schema, self._name)
self._x_schema = subschema
return self._x_schema
def __asdf_traverse__(self):
return dict(self)
def __len__(self):
return len(self._data)
def __getitem__(self, key):
if key in self._data:
return self._data[key]
raise KeyError(f"No such key ({key}) found in node")
def __setitem__(self, key, value):
value = self._convert_to_scalar(key, value)
if isinstance(value, dict):
for sub_key, sub_value in value.items():
value[sub_key] = self._convert_to_scalar(sub_key, sub_value)
self._data[key] = value
def __delitem__(self, key):
del self._data[key]
def __iter__(self):
return iter(self._data)
def __repr__(self):
return repr(self._data)
def copy(self):
instance = self.__class__.__new__(self.__class__)
instance.__dict__.update(self.__dict__.copy())
instance.__dict__["_data"] = self.__dict__["_data"].copy()
return instance
class LNode(UserList):
"""
Base class describing all "array" (list-like) data nodes for STNode classes.
"""
_tag = None
def __init__(self, node=None):
if node is None:
self.data = []
elif isinstance(node, list):
self.data = node
elif isinstance(node, self.__class__):
self.data = node.data
else:
raise ValueError("Initializer only accepts lists")
def __getitem__(self, index):
value = self.data[index]
if isinstance(value, dict):
return DNode(value)
elif isinstance(value, list):
return LNode(value)
else:
return value
def __asdf_traverse__(self):
return list(self) | /roman_datamodels-0.17.1-py3-none-any.whl/roman_datamodels/stnode/_node.py | 0.662251 | 0.183557 | _node.py | pypi |
import copy
import asdf
from ._node import DNode, LNode
from ._registry import (
LIST_NODE_CLASSES_BY_TAG,
OBJECT_NODE_CLASSES_BY_TAG,
SCALAR_NODE_CLASSES_BY_KEY,
SCALAR_NODE_CLASSES_BY_TAG,
)
__all__ = [
"TaggedObjectNode",
"TaggedListNode",
"TaggedScalarNode",
]
def get_schema_from_tag(ctx, tag):
"""
Look up and load ASDF's schema corresponding to the tag_uri.
Parameters
----------
ctx :
An ASDF file context.
tag : str
The tag_uri of the schema to load.
"""
schema_uri = ctx.extension_manager.get_tag_definition(tag).schema_uris[0]
return asdf.schema.load_schema(schema_uri, resolve_references=True)
def name_from_tag_uri(tag_uri):
"""
Compute the name of the schema from the tag_uri.
Parameters
----------
tag_uri : str
The tag_uri to find the name from
"""
return tag_uri.split("/")[-1].split("-")[0]
class TaggedObjectNode(DNode):
"""
Base class for all tagged objects defined by RAD
There will be one of these for any tagged object defined by RAD, which has
base type: object.
"""
def __init_subclass__(cls, **kwargs) -> None:
"""
Register any subclasses of this class in the OBJECT_NODE_CLASSES_BY_TAG
registry.
"""
super().__init_subclass__(**kwargs)
if cls.__name__ != "TaggedObjectNode":
if cls._tag in OBJECT_NODE_CLASSES_BY_TAG:
raise RuntimeError(f"TaggedObjectNode class for tag '{cls._tag}' has been defined twice")
OBJECT_NODE_CLASSES_BY_TAG[cls._tag] = cls
@property
def tag(self):
return self._tag
def _schema(self):
if self._x_schema is None:
self._x_schema = self.get_schema()
return self._x_schema
def get_schema(self):
"""Retrieve the schema associated with this tag"""
return get_schema_from_tag(self.ctx, self._tag)
class TaggedListNode(LNode):
"""
Base class for all tagged list defined by RAD
There will be one of these for any tagged object defined by RAD, which has
base type: array.
"""
def __init_subclass__(cls, **kwargs) -> None:
"""
Register any subclasses of this class in the LIST_NODE_CLASSES_BY_TAG
registry.
"""
super().__init_subclass__(**kwargs)
if cls.__name__ != "TaggedListNode":
if cls._tag in LIST_NODE_CLASSES_BY_TAG:
raise RuntimeError(f"TaggedListNode class for tag '{cls._tag}' has been defined twice")
LIST_NODE_CLASSES_BY_TAG[cls._tag] = cls
@property
def tag(self):
return self._tag
class TaggedScalarNode:
"""
Base class for all tagged scalars defined by RAD
There will be one of these for any tagged object defined by RAD, which has
a scalar base type, or wraps a scalar base type.
These will all be in the tagged_scalars directory.
"""
_tag = None
_ctx = None
def __init_subclass__(cls, **kwargs) -> None:
"""
Register any subclasses of this class in the SCALAR_NODE_CLASSES_BY_TAG
and SCALAR_NODE_CLASSES_BY_KEY registry.
"""
super().__init_subclass__(**kwargs)
if cls.__name__ != "TaggedScalarNode":
if cls._tag in SCALAR_NODE_CLASSES_BY_TAG:
raise RuntimeError(f"TaggedScalarNode class for tag '{cls._tag}' has been defined twice")
SCALAR_NODE_CLASSES_BY_TAG[cls._tag] = cls
SCALAR_NODE_CLASSES_BY_KEY[name_from_tag_uri(cls._tag)] = cls
@property
def ctx(self):
if self._ctx is None:
TaggedScalarNode._ctx = asdf.AsdfFile()
return self._ctx
def __asdf_traverse__(self):
return self
@property
def tag(self):
return self._tag
@property
def key(self):
return name_from_tag_uri(self._tag)
def get_schema(self):
return get_schema_from_tag(self.ctx, self._tag)
def copy(self):
return copy.copy(self) | /roman_datamodels-0.17.1-py3-none-any.whl/roman_datamodels/stnode/_tagged.py | 0.773216 | 0.223716 | _tagged.py | pypi |
import importlib.resources
import yaml
from astropy.time import Time
from rad import resources
from . import _mixins
from ._tagged import TaggedListNode, TaggedObjectNode, TaggedScalarNode, name_from_tag_uri
__all__ = ["stnode_factory"]
# Map of scalar types in the schemas to the python types
SCALAR_TYPE_MAP = {
"string": str,
"http://stsci.edu/schemas/asdf/time/time-1.1.0": Time,
}
BASE_SCHEMA_PATH = importlib.resources.files(resources) / "schemas"
def load_schema_from_uri(schema_uri):
"""
Load the actual schema from the rad resources directly (outside ASDF)
Outside ASDF because this has to occur before the ASDF extensions are
registered.
Parameters
----------
schema_uri : str
The schema_uri found in the RAD manifest
Returns
-------
yaml library dictionary from the schema
"""
filename = f"{schema_uri.split('/')[-1]}.yaml"
if "reference_files" in schema_uri:
schema_path = BASE_SCHEMA_PATH / "reference_files" / filename
elif "tagged_scalars" in schema_uri:
schema_path = BASE_SCHEMA_PATH / "tagged_scalars" / filename
else:
schema_path = BASE_SCHEMA_PATH / filename
return yaml.safe_load(schema_path.read_bytes())
def class_name_from_tag_uri(tag_uri):
"""
Construct the class name for the STNode class from the tag_uri
Parameters
----------
tag_uri : str
The tag_uri found in the RAD manifest
Returns
-------
string name for the class
"""
tag_name = name_from_tag_uri(tag_uri)
class_name = "".join([p.capitalize() for p in tag_name.split("_")])
if tag_uri.startswith("asdf://stsci.edu/datamodels/roman/tags/reference_files/"):
class_name += "Ref"
return class_name
def docstring_from_tag(tag):
"""
Read the docstring (if it exists) from the RAD manifest and generate a docstring
for the dynamically generated class.
Parameters
----------
tag: dict
A tag entry from the RAD manifest
Returns
-------
A docstring for the class based on the tag
"""
docstring = f"{tag['description']}\n\n" if "description" in tag else ""
return docstring + f"Class generated from tag '{tag['tag_uri']}'"
def scalar_factory(tag):
"""
Factory to create a TaggedScalarNode class from a tag
Parameters
----------
tag: dict
A tag entry from the RAD manifest
Returns
-------
A dynamically generated TaggedScalarNode subclass
"""
class_name = class_name_from_tag_uri(tag["tag_uri"])
schema = load_schema_from_uri(tag["schema_uri"])
# TaggedScalarNode subclasses are really subclasses of the type of the scalar,
# with the TaggedScalarNode as a mixin. This is because the TaggedScalarNode
# is supposed to be the scalar, but it needs to be serializable under a specific
# ASDF tag.
# SCALAR_TYPE_MAP will need to be updated as new wrappers of scalar types are added
# to the RAD manifest.
if "type" in schema:
type_ = schema["type"]
elif "allOf" in schema:
type_ = schema["allOf"][0]["$ref"]
else:
raise RuntimeError(f"Unknown schema type: {schema}")
return type(
class_name,
(SCALAR_TYPE_MAP[type_], TaggedScalarNode),
{"_tag": tag["tag_uri"], "__module__": "roman_datamodels.stnode", "__doc__": docstring_from_tag(tag)},
)
def node_factory(tag):
"""
Factory to create a TaggedObjectNode or TaggedListNode class from a tag
Parameters
----------
tag: dict
A tag entry from the RAD manifest
Returns
-------
A dynamically generated TaggedObjectNode or TaggedListNode subclass
"""
class_name = class_name_from_tag_uri(tag["tag_uri"])
schema = load_schema_from_uri(tag["schema_uri"])
# Determine if the class is a TaggedObjectNode or TaggedListNode based on the
# type defined in the schema:
# - TaggedObjectNode if type is "object"
# - TaggedListNode if type is "array" (array in jsonschema represents Python list)
if schema["type"] == "object":
class_type = TaggedObjectNode
elif schema["type"] == "array":
class_type = TaggedListNode
else:
raise RuntimeError(f"Unknown schema type: {schema['type']}")
# In special cases one may need to add additional features to a tagged node class.
# This is done by creating a mixin class with the name <ClassName>Mixin in _mixins.py
# Here we mixin the mixin class if it exists.
if hasattr(_mixins, mixin := f"{class_name}Mixin"):
class_type = (class_type, getattr(_mixins, mixin))
else:
class_type = (class_type,)
return type(
class_name,
class_type,
{"_tag": tag["tag_uri"], "__module__": "roman_datamodels.stnode", "__doc__": docstring_from_tag(tag)},
)
def stnode_factory(tag):
"""
Construct a tagged STNode class from a tag
Parameters
----------
tag: dict
A tag entry from the RAD manifest
Returns
-------
A dynamically generated TaggedScalarNode, TaggedObjectNode, or TaggedListNode subclass
"""
# TaggedScalarNodes are a special case because they are not a subclass of a
# _node class, but rather a subclass of the type of the scalar.
if "tagged_scalar" in tag["schema_uri"]:
return scalar_factory(tag)
else:
return node_factory(tag) | /roman_datamodels-0.17.1-py3-none-any.whl/roman_datamodels/stnode/_factories.py | 0.901683 | 0.253301 | _factories.py | pypi |
import abc
import inspect
from dataclasses import dataclass
from importlib import import_module
from logging import getLogger
from typing import Any, Callable, List
from roman_discovery.contrib import find_modules
ModuleMatches = Callable[[str], bool]
ModuleAction = Callable[[str], Any]
ObjectMatches = Callable[[Any], bool]
ObjectAction = Callable[[Any], Any]
logger = getLogger(__name__)
class IRule(abc.ABC):
"""Generic type for a rule."""
@abc.abstractmethod
def discover(self, module_name: str):
...
@dataclass
class ModuleRule(IRule):
"""Module rule.
Defines a rule 'Run <module_action> for all modules matching <module_matches>'.
"""
name: str
module_matches: ModuleMatches
module_action: ModuleAction
def discover(self, module_name: str):
if self.module_matches(module_name): # type: ignore
logger.debug(f"{self.name} found module {module_name}")
self.module_action(module_name) # type: ignore
@dataclass
class ObjectRule(IRule):
"""Object rule.
Defines a rule 'Run <object_action> for all objects matching <object_matches>
inside modules matching <module_matches>'.
"""
name: str
module_matches: ModuleMatches
object_matches: ObjectMatches
object_action: ObjectAction
def discover(self, module_name: str):
if not self.module_matches(module_name): # type: ignore
return
module_obj = import_module(module_name)
for object_name, obj in inspect.getmembers(
module_obj, predicate=self.object_matches # type: ignore
):
logger.debug(f"{self.name} found {object_name} in {module_name}")
self.object_action(obj) # type: ignore
def discover(import_path: str, rules: List[IRule]):
"""Discover all objects.
Scan the package, find all modules and objects, matching the given set of rules,
and apply actions defined in them.
Args:
import_path: top-level module name to start scanning. Usually, it's a name of
your application, e.g., "myapp". If your application doesn't have a single
top-level module, you will probably call it for all top-level modules.
rules: a list of module and objects rules. Each rule contains the
match specification and the action, if the object matches.
"""
for module_name in find_modules(import_path=import_path, recursive=True):
for rule in rules:
rule.discover(module_name) | /roman_discovery-0.3.2-py3-none-any.whl/roman_discovery/discovery.py | 0.785638 | 0.165863 | discovery.py | pypi |
import fnmatch
import inspect
from dataclasses import dataclass
from typing import Any, List, Type
@dataclass
class MatchByPattern:
"""Module matcher that selects module names by patterns.
Constructor accepts the list of Unix shell-style wildcards for module names. E.g.
the following instance will match all files "models.py" and "models/<something>.py"
in a flat list of packages inside your application.
matcher = MatchByPattern(["*.models", "*.models.*"])
"""
patterns: List[str]
def __call__(self, value: str) -> bool:
for pattern in self.patterns:
if fnmatch.fnmatch(value, pattern):
return True
return False
@dataclass
class MatchByType:
"""Object matcher that selects instances by type.
Constructor accepts a type or a tuple of types. E.g., the following instance will
find all Flask blueprints in a module.
from flask import Blueprint
matcher = MatchByType(Blueprint)
"""
object_type: Type
def __call__(self, obj: Any):
return isinstance(obj, self.object_type)
@dataclass
class MatchBySubclass:
"""Object matcher that select classes that are subclasses of a given type.
Constructor accepts a type or a tuple of types. E.g., the following instance will
find all Django models in a model.
from django.db import models
matcher = MatchBySubclass(models.Model)
"""
object_type: Type
def __call__(self, obj: Any):
return (
inspect.isclass(obj)
and issubclass(obj, self.object_type)
and obj != self.object_type
)
@dataclass
class MatchByAttribute:
"""Object matcher that selects having an attribute with the given name.
Constructor accepts an attribute name as a string. E.g., the following instance
will find all objects that have an attribute `init_app` (a common way for
initializing Flask plugins.)
MatchByAttribute("init_app")
"""
attribute_name: str
def __call__(self, obj: Any):
return hasattr(obj, self.attribute_name)
@dataclass
class MatchByCallableAttribute:
"""Object matcher that selects having a callable attribute with the given name.
Constructor accepts an attribute name as a string. E.g., the following instance
will find all objects having a method `init_app()` (a common way for
initializing Flask plugins.)
MatchByCallableAttribute("init_app")
"""
attribute_name: str
def __call__(self, obj: Any):
return (
not inspect.isclass(obj)
and hasattr(obj, self.attribute_name)
and callable(getattr(obj, self.attribute_name))
) | /roman_discovery-0.3.2-py3-none-any.whl/roman_discovery/matchers.py | 0.854718 | 0.265199 | matchers.py | pypi |
from importlib import import_module
from typing import List
from roman_discovery import IRule
from roman_discovery.discovery import ModuleRule, ObjectRule
from roman_discovery.matchers import (
MatchByCallableAttribute,
MatchByPattern,
MatchByType,
)
def get_flask_rules(import_path: str, flask_app) -> List[IRule]:
"""Return a list of rules useful for the Flask application.
The following rules will be returned:
- Load SQLAlchemy models (files models.py)
- Load Flask blueprints (files controllers.py)
- Load Flask CLI commands (files cli.py)
- Initialize services (top-level file services.py)
Args:
import_path: name of the top-level module of the project (like, "myproject")
flask_app: a Flask app instance.
Returns:
A list of rules, suitable to be passed to "roman_discovery.discover()"
"""
return [
models_loader(import_path),
blueprints_loader(import_path, flask_app),
commands_loader(import_path, flask_app),
service_initializer(import_path, flask_app),
]
def models_loader(import_path):
"""Load all models."""
return ModuleRule(
name="Flask models loader",
module_matches=MatchByPattern(generate_patterns(import_path, "models")),
module_action=import_module,
)
def blueprints_loader(import_path, flask_app):
"""Find and import all blueprints in the application."""
try:
from flask import Blueprint
except ImportError:
raise RuntimeError("Flask is not installed.")
return ObjectRule(
name="Flask blueprints loader",
module_matches=MatchByPattern(generate_patterns(import_path, "controllers")),
object_matches=MatchByType(Blueprint),
object_action=flask_app.register_blueprint,
)
def commands_loader(import_path, flask_app):
"""Find all commands and register them as Flask CLI commands."""
try:
from flask.cli import AppGroup
except ImportError:
raise RuntimeError("Flask is not installed.")
return ObjectRule(
name="Flask CLI commands loader",
module_matches=MatchByPattern(generate_patterns(import_path, "cli")),
object_matches=MatchByType(AppGroup),
object_action=flask_app.cli.add_command,
)
def service_initializer(import_path, flask_app):
"""Find and initialize all instances of Flask applications.
Notice that the initialize scans for top-level services files, and doesn't
walk over all your app's domain package.
"""
return ObjectRule(
name="Flask service initializer",
module_matches=MatchByPattern([f"{import_path}.services"]),
object_matches=MatchByCallableAttribute("init_app"),
object_action=lambda obj: obj.init_app(app=flask_app),
)
def generate_patterns(import_path: str, module_prefix: str) -> List[str]:
"""Generate a list of patterns to discover.
For example, gen_patterns("myapp", "models") generates patterns that make matchers
discover the content in the following files.
myapp/users/models.py
myapp/invoices/models.py
(etc. for all domain packages beyond "users" and "invoices")
...
myapp/users/models_roles.py
myapp/users/models_groups.py
(etc. for all modules started with "models_" in all domain packages)
...
myapp/users/models/roles.py
myapp/users/models/groups.py
(if you prefer nested structures)
"""
return [
f"{import_path}.*.{module_prefix}",
f"{import_path}.*.{module_prefix}_*",
f"{import_path}.*.{module_prefix}.*",
] | /roman_discovery-0.3.2-py3-none-any.whl/roman_discovery/flask.py | 0.831896 | 0.170025 | flask.py | pypi |
from dataclasses import dataclass
from typing import List, TypeVar, Tuple
from mappings import (
get_mappings,
consonant_kaars,
get_word_maps,
Mappings,
amkaar,
aNNkaar,
Ri,
)
@dataclass
class State:
remaining: str = ''
consumed: str = ''
processed: str = ''
as_is: bool = False
def copy(self, **kwargs):
return State(self.remaining, self.consumed, self.processed, **kwargs)
class Converter:
def __init__(self):
self.mappings = get_mappings()
self.word_maps = get_word_maps()
def consume(self, state: State) -> State:
current = state.remaining
consumed = state.consumed
processed = state.processed
if state.as_is:
if current[0] == '}':
return State(current[1:], consumed + current[:1], processed, False)
else:
return State(current[1:], consumed + current[0], processed + current[0], True)
# Handle escape sequences
if current.startswith('{{'):
return State(current[2:], consumed + current[:2], processed + '{')
if current.startswith('{'):
return State(current[1:], consumed + current[:1], processed, True)
# Check if word is in direct word mappings
direct_mapping = self.word_maps.get(current)
if direct_mapping:
return State(current[len(direct_mapping):], consumed + current[:len(direct_mapping)], processed + direct_mapping)
# Handle amkaar and aaNkar
if current.startswith('M'):
return State(current[1:], consumed + current[0], processed + amkaar)
if current.startswith('NN'):
return State(current[2:], consumed + current[:2], processed + aNNkaar)
# Special case for Ri and Ree
if current.startswith('RI'):
if consumed[-1] != 'a':
return State(current[2:], consumed + current[:2], processed[:-1] + Ri)
else:
return State(current[2:], consumed + current[:2], processed + Ri)
# Handle other mappings
for k, v in self.mappings.items():
if current.startswith(k):
return State(current[len(k):], consumed + current[:len(k)], processed + v)
# Default case
return State(current[1:], consumed + current[0], processed + current[0])
def convert(self, text: str) -> str:
if not text:
return text
state = State(text)
while state.remaining:
state = self.consume(state)
return state.processed | /roman_nepali_translator-0.1.1-py3-none-any.whl/nepali_translator/converter.py | 0.859118 | 0.304106 | converter.py | pypi |
from dataclasses import dataclass
from typing import List, TypeVar, Tuple
from mappings import (
get_mappings,
consonant_kaars,
get_word_maps,
Mappings,
amkaar,
aNNkaar,
Ri,
)
@dataclass
class State:
remaining: str = ''
consumed: str = ''
processed: str = ''
as_is: bool = False
def copy(self, **kwargs):
return State(self.remaining, self.consumed, self.processed, **kwargs)
class Converter:
def __init__(self):
self.mappings = get_mappings()
self.word_maps = get_word_maps()
def consume(self, state: State) -> State:
current = state.remaining
consumed = state.consumed
processed = state.processed
if state.as_is:
if current[0] == '}':
return State(current[1:], consumed + current[:1], processed, False)
else:
return State(current[1:], consumed + current[0], processed + current[0], True)
# Handle escape sequences
if current.startswith('{{'):
return State(current[2:], consumed + current[:2], processed + '{')
if current.startswith('{'):
return State(current[1:], consumed + current[:1], processed, True)
# Check if word is in direct word mappings
direct_mapping = self.word_maps.get(current)
if direct_mapping:
return State(current[len(direct_mapping):], consumed + current[:len(direct_mapping)], processed + direct_mapping)
# Handle amkaar and aaNkar
if current.startswith('M'):
return State(current[1:], consumed + current[0], processed + amkaar)
if current.startswith('NN'):
return State(current[2:], consumed + current[:2], processed + aNNkaar)
# Special case for Ri and Ree
if current.startswith('RI'):
if consumed[-1] != 'a':
return State(current[2:], consumed + current[:2], processed[:-1] + Ri)
else:
return State(current[2:], consumed + current[:2], processed + Ri)
# Handle other mappings
for k, v in self.mappings.items():
if current.startswith(k):
return State(current[len(k):], consumed + current[:len(k)], processed + v)
# Default case
return State(current[1:], consumed + current[0], processed + current[0])
def convert(self, text: str) -> str:
if not text:
return text
state = State(text)
while state.remaining:
state = self.consume(state)
return state.processed | /roman_nepali_translator-0.1.1-py3-none-any.whl/nepali_translator/translator.py | 0.859118 | 0.304106 | translator.py | pypi |
from .arabic_to_roman import arabic_to_roman
class RomanToArabic(object):
_char_2_number = {"M": 1000, "D": 500, "C": 100, "L": 50, "X": 10, "V": 5, "I": 1}
@staticmethod
def convert(roman: str) -> int:
""" Convert a Roman numeral to an Arabic Numeral.
To convert Arabic numerals we chose the algorithm from Paul M. Winkler
presented in :code:`"Python Cookbook" by David Ascher, Alex Martelli ISBN: 0596001673`.
since it is arguably the most readable algorithm.
Args:
roman (str): Roman numeral represented as string.
Raises:
TypeError: roman is not a string
ValueError: roman is not a valid Roman numeral
Returns:
int : int encoding the input as Arabic numeral
"""
if not isinstance(roman, str):
raise TypeError("expected string, got {}".format(type(roman)))
roman = roman.upper()
nums = RomanToArabic._char_2_number
sum = 0
for i in range(len(roman)):
try:
value = nums[roman[i]]
# If the next place holds a larger number, this value is negative
if i + 1 < len(roman) and nums[roman[i + 1]] > value:
sum -= value
else:
sum += value
except KeyError:
raise ValueError("roman is not a valid Roman numeral: {}".format(roman))
# only if the inverse gives input as result it was a vaid roman numeral
if arabic_to_roman(sum) == roman:
return sum
else:
raise ValueError("roman is not a valid Roman numeral: {}".format(roman))
def roman_to_arabic(arabic: int) -> str:
""" Convert a Roman numeral to an Arabic Numeral.
Shorthand for :py:meth:`RomanToArabic.convert`, see
:py:meth:`RomanToArabic.convert` for full documentation.
Args:
roman (str): Roman numeral represented as string.
Raises:
TypeError: roman is not a string
ValueError: roman is not a valid Roman numeral
Returns:
int : int encoding the input as Arabic numeral
"""
return RomanToArabic.convert(arabic) | /roman_numerals_webservice-0.4.1.tar.gz/roman_numerals_webservice-0.4.1/roman_numerals_webservice/roman_numerals/roman_to_arabic.py | 0.684791 | 0.426262 | roman_to_arabic.py | pypi |
import numbers
class ArabicToRoman(object):
_int_nums = (
(1000, "M"),
(900, "CM"),
(500, "D"),
(400, "CD"),
(100, "C"),
(90, "XC"),
(50, "L"),
(40, "XL"),
(10, "X"),
(9, "IX"),
(5, "V"),
(4, "IV"),
(1, "I"),
)
@staticmethod
def convert(arabic: int) -> str:
"""Convert an Arabic numeral to a Roman numeral
To convert Arabic numerals we chose the algorithm from Paul M. Winkler
presented in :code:`"Python Cookbook" by David Ascher, Alex Martelli ISBN: 0596001673`.
since it is arguably the most readable algorithm.
Args:
arabic (int): Arabic numeral represented as integer. The number must be be in :code:`[1,...,3999]`
Raises:
TypeError: arabic does not satisfy :code:`isinstance(arabic, numbers.Integral)` must be true.
ValueError: arabic does not satisfy :code:`1 <= v <= 3999`
Returns:
str : string encoding the input as Roman numeral
"""
if not isinstance(arabic, numbers.Integral):
raise TypeError("expected integer, got {}".format(type(arabic)))
if arabic < 1 or arabic > 3999:
raise ValueError("Argument must be between 1 and 3999")
result = []
for val, chars in ArabicToRoman._int_nums:
count = int(arabic // val)
result.append(chars * count)
arabic -= val * count
return "".join(result)
def arabic_to_roman(arabic: int) -> str:
"""Convert an Arabic numeral to a Roman numeral
Shorthand for :py:meth:`ArabicToRoman.convert`, see
:py:meth:`ArabicToRoman.convert` for full documentation.
Args:
arabic: Arabic numeral represented as integer.
Raises:
TypeError: arabic does not satisfy :code:`isinstance(arabic, numbers.Integral)` must be true.
ValueError: arabic does not satisfy :code:`1 <= v <= 3999`
Returns:
str : string encoding the input as Roman numeral
"""
return ArabicToRoman.convert(arabic) | /roman_numerals_webservice-0.4.1.tar.gz/roman_numerals_webservice-0.4.1/roman_numerals_webservice/roman_numerals/arabic_to_roman.py | 0.867528 | 0.442456 | arabic_to_roman.py | pypi |
from re import (
VERBOSE, compile as re_compile, fullmatch, match, sub as substitute,)
# Operation Modes
STANDARD = 1
LOWERCASE = 2
ROMAN_NUMERAL_TABLE = [
(1000, 'Ⅿ'),
(900, 'ⅭⅯ'),
(500, 'Ⅾ'),
(400, 'ⅭⅮ'),
(100, 'Ⅽ'),
(90, 'ⅩⅭ'),
(50, 'Ⅼ'),
(40, 'ⅩⅬ'),
(10, 'Ⅹ'),
(9, 'Ⅸ'),
(5, 'Ⅴ'),
(4, 'Ⅳ'),
(1, 'Ⅰ'),
]
SHORTENINGS = [
('ⅩⅠⅠ', 'Ⅻ'),
('ⅩⅠ', 'Ⅺ'),
('ⅤⅠⅠⅠ', 'Ⅷ'),
('ⅤⅠⅠ', 'Ⅶ'),
('ⅤⅠ', 'Ⅵ'),
('ⅠⅠⅠ', 'Ⅲ'),
('ⅠⅠ', 'Ⅱ'),
]
STANDARD_TRANS = 'ⅯⅮⅭⅬⅫⅪⅩⅨⅧⅦⅥⅤⅣⅢⅡⅠ'
LOWERCASE_TRANS = 'ⅿⅾⅽⅼⅻⅺⅹⅸⅷⅶⅵⅴⅳⅲⅱⅰ'
def convert_to_numeral(decimal_integer: int, mode: int = STANDARD) -> str:
"""Convert a decimal integer to a Roman numeral"""
if (not isinstance(decimal_integer, int)
or isinstance(decimal_integer, bool)):
raise TypeError("decimal_integer must be of type int")
if (not isinstance(mode, int)
or isinstance(mode, bool)
or mode not in [LOWERCASE, STANDARD]):
raise ValueError(
"mode must be "
"roman_numerals.STANDARD "
"or roman_numerals.LOWERCASE "
)
return_list = []
remainder = decimal_integer
for integer, numeral in ROMAN_NUMERAL_TABLE:
repetitions, remainder = divmod(remainder, integer)
return_list.append(numeral * repetitions)
numeral_string = ''.join(return_list)
for full_string, shortening in SHORTENINGS:
numeral_string = substitute(
r'%s$' % full_string,
shortening,
numeral_string,
)
if mode == LOWERCASE:
trans_to_lowercase = str.maketrans(STANDARD_TRANS, LOWERCASE_TRANS)
numeral_string = numeral_string.translate(trans_to_lowercase)
return numeral_string
NUMERAL_PATTERN = re_compile(
"""
Ⅿ* # thousands
(ⅭⅯ|ⅭⅮ|Ⅾ?Ⅽ{0,3}) # hundreds - ⅭⅯ (900), ⅭⅮ (400), ⅮⅭⅭⅭ (800), ⅭⅭⅭ (300)
(ⅩⅭ|ⅩⅬ|Ⅼ?Ⅹ{0,3}) # tens - ⅩⅭ (90), ⅩⅬ (40), ⅬⅩⅩⅩ (80), ⅩⅩⅩ (30)
(Ⅸ|Ⅳ|Ⅴ?Ⅰ{0,3}) # ones - Ⅸ (9), Ⅳ (4), ⅤⅠⅠⅠ (8), ⅠⅠⅠ (3)
""",
VERBOSE
)
def convert_to_integer(roman_numeral: str) -> int:
"""Convert a Roman numeral to a decimal integer"""
if not isinstance(roman_numeral, str):
raise TypeError("decimal_integer must be of type int")
if roman_numeral == '':
raise ValueError("roman_numeral cannot be an empty string")
# ensure all characters are in the standard/uppercase set
trans_to_uppercase = str.maketrans(LOWERCASE_TRANS, STANDARD_TRANS)
# named partial_numeral because it will be shortened in loop below
partial_numeral = roman_numeral.translate(trans_to_uppercase)
# remove Unicode shortenings in favor of chars in conversion table
for full_string, shortening in SHORTENINGS:
partial_numeral = substitute(
r'%s$' % shortening,
full_string,
partial_numeral,
)
if not fullmatch(NUMERAL_PATTERN, partial_numeral):
raise ValueError(
"the string %s is not a valid numeral" % roman_numeral
)
# convert uppercase roman numerals to integer
return_value = 0
for integer, numeral in ROMAN_NUMERAL_TABLE:
pattern_match = match(r'^(%s)+' % numeral, partial_numeral)
if pattern_match:
chars_matched = len(pattern_match.group())
numerals_matched = chars_matched // len(numeral)
return_value += numerals_matched * integer
partial_numeral = partial_numeral[chars_matched:]
return return_value | /roman-numerals-0.1.0.tar.gz/roman-numerals-0.1.0/src/roman_numerals/__init__.py | 0.434941 | 0.267627 | __init__.py | pypi |
import re
from .errors import RTextError, RomanValidationError
CASE_UPPER = 1
CASE_LOWER = -1
CASE_IGNORE = 0
def rn_validator(rn: str, case: int = CASE_UPPER):
"""
Check the roman number for validity.
:param rn: roman number
:param case: upper (1), lower (-1) or ignore (0) case. Default upper (1).
:return: True if roman number is valid or False if not.
:rtype: bool
"""
if not isinstance(rn, str):
raise RomanValidationError(f"Invalid data type {type(rn)}, must be str")
elif case not in (CASE_UPPER, CASE_LOWER, CASE_IGNORE):
raise RomanValidationError("Not found type of case. Possible values: -1, 0 or 1")
regex = r"\b(N)\b|\b(?=[MDCLXVI])(M*)(C[MD]|D?C{0,3})(X[CL]|L?X{0,3})(I[XV]|V?I{0,3})\b"
regex_l = r"\b(n)\b|\b(?=[mdclxvi])(m*)(c[md]|d?c{0,3})(x[cl]|l?x{0,3})(i[xv]|v?i{0,3})\b"
if case == CASE_UPPER:
return True if re.fullmatch(regex, rn) is not None else False
if case == CASE_LOWER:
return True if re.fullmatch(regex_l, rn) is not None else False
if case == CASE_IGNORE:
return True if re.fullmatch(regex, rn, re.IGNORECASE) is not None else False
class RText:
"""Class for the work with text: found/replace roman numbers and integers."""
def __init__(self, text: str):
if not isinstance(text, str):
raise RTextError("неверный формат входных данных")
self.text = text
self.__regex_int = r"(?<![.,])\b\d+\b(?![.,]\d+)"
self.__regex_roman = r"\bN\b|\b(?=[MDCLXVI])(M*)(C[MD]|D?C{0,3})(X[CL]|L?X{0,3})(I[XV]|V?I{0,3})\b"
self.__regex_roman_l = r"\bn\b|\b(?=[mdclxvi])(m*)(c[md]|d?c{0,3})(x[cl]|l?x{0,3})(i[xv]|v?i{0,3})\b"
def rnb(self, case: int = CASE_UPPER):
"""
Find all roman numbers in input text.
:param case: upper (1), lower (-1) or ignore(0). Default and recommended upper (1).
:return: list of all found roman numbers
:rtype: list
"""
if case not in (CASE_UPPER, CASE_LOWER, CASE_IGNORE):
raise RTextError("Not found type of case. Possible values: -1, 0 or 1")
if case == CASE_UPPER:
return [r.group() for r in re.finditer(self.__regex_roman, self.text) if len(r.group()) > 0]
elif case == CASE_LOWER:
return [r.group() for r in re.finditer(self.__regex_roman_l, self.text) if len(r.group()) > 0]
elif case == CASE_IGNORE:
return [r.group() for r in re.finditer(self.__regex_roman, self.text, re.IGNORECASE) if len(r.group()) > 0]
def nb(self):
"""
Find all positive integers in input text.
:return: list of all found integers.
:rtype: list
"""
return [int(x) for x in re.findall(self.__regex_int, self.text)]
def from_roman(self, case: int = CASE_UPPER):
"""
Replace all found roman numbers to integers.
:param case: upper (1), lower (-1) or ignore (0). Default and recommended upper (1).
:return: input text with replacements roman to integers.
:rtype: str
"""
if case not in (CASE_UPPER, CASE_LOWER, CASE_IGNORE):
raise RTextError("Not found type of case. Possible values: -1, 0 or 1")
if case == CASE_UPPER:
return re.sub(self.__regex_roman, lambda rn: self._repr_from_roman(rn), self.text)
elif case == CASE_LOWER:
return re.sub(self.__regex_roman_l, lambda rn: self._repr_from_roman(rn), self.text)
elif case == CASE_IGNORE:
return re.sub(self.__regex_roman, lambda rn: self._repr_from_roman(rn), self.text, flags=re.IGNORECASE)
@staticmethod
def _repr_from_roman(rn):
if len(rn.group()) == 0:
return ""
if str(rn.group()).upper() == "N":
return "0"
list_3 = ["", "C", "CC", "CCC", "CD", "D", "DC", "DCC", "DCCC", "CM"]
list_2 = ["", "X", "XX", "XXX", "XL", "L", "LX", "LXX", "LXXX", "XC"]
list_1 = ["", "I", "II", "III", "IV", "V", "VI", "VII", "VIII", "IX"]
gr = rn.groups()
result = sum([
list_1.index(gr[3].upper()), list_2.index(gr[2].upper()) * 10,
list_3.index(gr[1].upper()) * 100, len(gr[0]) * 1000
])
return str(result)
def to_roman(self, case: int = CASE_UPPER):
"""
Replace all found integers to roman numbers.
:param case: upper (1) or lower (-1). Default upper (1).
:return: input text with replacements integers to roman numbers.
:rtype: str
"""
if case not in (CASE_UPPER, CASE_LOWER):
raise RTextError("Not found type of case. Possible values: -1 or 1")
return re.sub(self.__regex_int, lambda n: self._repr_to_roman(int(n.group()), case), self.text)
@staticmethod
def _repr_to_roman(n, case):
if n == 0:
return "N" if case == CASE_UPPER else "n"
list_3 = ["", "C", "CC", "CCC", "CD", "D", "DC", "DCC", "DCCC", "CM"]
list_2 = ["", "X", "XX", "XXX", "XL", "L", "LX", "LXX", "LXXX", "XC"]
list_1 = ["", "I", "II", "III", "IV", "V", "VI", "VII", "VIII", "IX"]
result = "{}{}{}{}".format(
(n // 1000) * "M",
list_3[n // 100 % 10],
list_2[n // 10 % 10],
list_1[n % 10]
)
return result if case == CASE_UPPER else result.lower()
def __str__(self):
return self.text
def __repr__(self):
return f"{self.text[:50]}..." if len(self.text) > 53 else self.text | /roman_nums-0.0.1.tar.gz/roman_nums-0.0.1/roman_nums/utils.py | 0.750644 | 0.361221 | utils.py | pypi |
import re
from .errors import (RomanNumsError, RomanValidationError)
__version__ = "0.0.1"
__author__ = "Kovalenko Nikolay"
__email__ = "kovalenko.n.r-g@yandex.ru"
CASE_UPPER = 1
CASE_LOWER = -1
def from_roman(rn: str, ignore_case: bool = False):
"""
Make an integer from roman number.
:param rn: input roman number.
:param ignore_case: if True, you can input roman number in lower or upper case.
:return: positive integer from roman number.
:rtype: int
"""
pattern = r"^N|(?=[MDCLXVI])(M*)(C[MD]|D?C{0,3})(X[CL]|L?X{0,3})(I[XV]|V?I{0,3})$"
if not isinstance(rn, str):
raise RomanNumsError(f"Invalid data type {type(rn)}, must be str")
match = re.fullmatch(pattern, rn, re.IGNORECASE if ignore_case is True else False)
if match is None:
raise RomanNumsError("Number not found")
if match.group().upper() == "N":
return 0
list_3 = ["", "C", "CC", "CCC", "CD", "D", "DC", "DCC", "DCCC", "CM"]
list_2 = ["", "X", "XX", "XXX", "XL", "L", "LX", "LXX", "LXXX", "XC"]
list_1 = ["", "I", "II", "III", "IV", "V", "VI", "VII", "VIII", "IX"]
gr = match.groups()
result = sum([
list_1.index(gr[3].upper()), list_2.index(gr[2].upper()) * 10,
list_3.index(gr[1].upper()) * 100, len(gr[0].upper()) * 1000
])
return result
def to_roman(n: int, case: int = CASE_UPPER):
"""
Make roman number from positive integer.
:param n: positive integer
:param case: upper (1) or lower (-1). Default upper (1)
:return: roman number
:rtype: str
"""
if not isinstance(n, int):
raise RomanNumsError(f"Invalid data type {type(n)}, must be int")
elif case not in (CASE_UPPER, CASE_LOWER):
raise RomanNumsError("Not found type of case. Possible values: -1, 1")
elif n == 0:
return "N" if case == CASE_UPPER else "n"
elif n < 0:
raise RomanNumsError("Only positive integer")
list_3 = ["", "C", "CC", "CCC", "CD", "D", "DC", "DCC", "DCCC", "CM"]
list_2 = ["", "X", "XX", "XXX", "XL", "L", "LX", "LXX", "LXXX", "XC"]
list_1 = ["", "I", "II", "III", "IV", "V", "VI", "VII", "VIII", "IX"]
result = "{}{}{}{}".format(
(n // 1000) * "M",
list_3[n // 100 % 10],
list_2[n // 10 % 10],
list_1[n % 10]
)
return result if case == CASE_UPPER else result.lower() | /roman_nums-0.0.1.tar.gz/roman_nums-0.0.1/roman_nums/__init__.py | 0.549641 | 0.315987 | __init__.py | pypi |
"""Convert to and from Roman numerals"""
__author__ = "Mark Pilgrim (f8dy@diveintopython.org)"
__version__ = "1.4"
__date__ = "8 August 2001"
__copyright__ = """Copyright (c) 2001 Mark Pilgrim
This program is part of "Dive Into Python", a free Python tutorial for
experienced programmers. Visit http://diveintopython.org/ for the
latest version.
This program is free software; you can redistribute it and/or modify
it under the terms of the Python 2.1.1 license, available at
http://www.python.org/2.1.1/license.html
"""
import argparse
import re
import sys
# Define exceptions
class RomanError(Exception):
pass
class OutOfRangeError(RomanError):
pass
class NotIntegerError(RomanError):
pass
class InvalidRomanNumeralError(RomanError):
pass
# Define digit mapping
romanNumeralMap = (('M', 1000),
('CM', 900),
('D', 500),
('CD', 400),
('C', 100),
('XC', 90),
('L', 50),
('XL', 40),
('X', 10),
('IX', 9),
('V', 5),
('IV', 4),
('I', 1))
def toRoman(n):
"""convert integer to Roman numeral"""
if not isinstance(n, int):
raise NotIntegerError("decimals can not be converted")
if not (-1 < n < 5000):
raise OutOfRangeError("number out of range (must be 0..4999)")
# special case
if n == 0:
return 'N'
result = ""
for numeral, integer in romanNumeralMap:
while n >= integer:
result += numeral
n -= integer
return result
# Define pattern to detect valid Roman numerals
romanNumeralPattern = re.compile("""
^ # beginning of string
M{0,4} # thousands - 0 to 4 M's
(CM|CD|D?C{0,3}) # hundreds - 900 (CM), 400 (CD), 0-300 (0 to 3 C's),
# or 500-800 (D, followed by 0 to 3 C's)
(XC|XL|L?X{0,3}) # tens - 90 (XC), 40 (XL), 0-30 (0 to 3 X's),
# or 50-80 (L, followed by 0 to 3 X's)
(IX|IV|V?I{0,3}) # ones - 9 (IX), 4 (IV), 0-3 (0 to 3 I's),
# or 5-8 (V, followed by 0 to 3 I's)
$ # end of string
""", re.VERBOSE)
def fromRoman(s):
"""convert Roman numeral to integer"""
if not s:
raise InvalidRomanNumeralError('Input can not be blank')
# special case
if s == 'N':
return 0
if not romanNumeralPattern.search(s):
raise InvalidRomanNumeralError('Invalid Roman numeral: %s' % s)
result = 0
index = 0
for numeral, integer in romanNumeralMap:
while s[index:index + len(numeral)] == numeral:
result += integer
index += len(numeral)
return result
def parse_args():
parser = argparse.ArgumentParser(
prog='roman',
description='convert between roman and arabic numerals'
)
parser.add_argument('number', help='the value to convert')
parser.add_argument(
'-r', '--reverse',
action='store_true',
default=False,
help='convert roman to numeral (case insensitive) [default: False]')
args = parser.parse_args()
args.number = args.number
return args
def main():
args = parse_args()
if args.reverse:
u = args.number.upper()
r = fromRoman(u)
print(r)
else:
i = int(args.number)
n = toRoman(i)
print(n)
return 0
if __name__ == "__main__": # pragma: no cover
sys.exit(main()) # pragma: no cover | /roman-4.1.tar.gz/roman-4.1/src/roman.py | 0.666388 | 0.411436 | roman.py | pypi |
> # Roman Calibration Pipeline
[](https://roman-pipeline.readthedocs.io/en/latest/?badge=latest)
[](https://github.com/spacetelescope/romancal/actions/workflows/roman_ci.yml)
[](http://www.stsci.edu)
[](http://www.astropy.org/)

**Roman requires Python 3.9 or above and a C compiler for dependencies.**
**Linux and MacOS platforms are tested and supported. Windows is not currently supported.**
## Installation
The easiest way to install the latest `romancal` release into a fresh virtualenv or conda environment is
pip install romancal
### Detailed Installation
The `romancal` package can be installed into a virtualenv or conda environment via `pip`. We recommend that for each
installation you start by creating a fresh environment that only has Python installed and then install the `romancal`
package and its dependencies into that bare environment. If using conda environments, first make sure you have a recent
version of Anaconda or Miniconda installed. If desired, you can create multiple environments to allow for switching
between different versions of the `romancal` package (e.g. a released version versus the current development version).
In all cases, the installation is generally a 3-step process:
* Create a conda environment
* Activate that environment
* Install the desired version of the `romancal` package into that environment
Details are given below on how to do this for different types of installations, including tagged releases, DMS builds
used in operations, and development versions. Remember that all conda operations must be done from within a bash shell.
### Installing latest releases
You can install the latest released version via `pip`. From a bash shell:
conda create -n <env_name> python
conda activate <env_name>
pip install romancal
> **Note**\
> Alternatively, you can also use `virtualenv` to create an environment;
> however, this installation method is not supported by STScI if you encounter issues.
You can also install a specific version (from `romancal 0.1.0` onward):
conda create -n <env_name> python
conda activate <env_name>
pip install romancal==0.10.0
### Installing the development version from Github
You can install the latest development version (not as well tested) from the Github main branch:
conda create -n <env_name> python
conda activate <env_name>
pip install git+https://github.com/spacetelescope/romancal
### Installing for Developers
If you want to be able to work on and test the source code with the `romancal` package, the high-level procedure to do
this is to first create a conda environment using the same procedures outlined above, but then install your personal
copy of the code overtop of the original code in that environment. Again, this should be done in a separate conda
environment from any existing environments that you may have already installed with released versions of the `romancal`
package.
As usual, the first two steps are to create and activate an environment:
conda create -n <env_name> python
conda activate <env_name>
To install your own copy of the code into that environment, you first need to fork and clone the `romancal` repo:
cd <where you want to put the repo>
git clone https://github.com/spacetelescope/romancal
cd romancal
> **Note**\
> Installing via `setup.py` (`python setup.py install`, `python setup.py develop`, etc.) is deprecated and does not work.
Install from your local checked-out copy as an "editable" install:
pip install -e .
If you want to run the unit or regression tests and/or build the docs, you can make sure those dependencies are
installed too:
pip install -e ".[test]"
pip install -e ".[docs]"
pip install -e ".[test,docs]"
Need other useful packages in your development environment?
pip install ipython pytest-xdist
## Calibration References Data System (CRDS) Setup
CRDS is the system that manages the reference files needed to run the pipeline. Inside the STScI network, the pipeline
works with default CRDS setup with no modifications. To run the pipeline outside the STScI network, CRDS must be
configured by setting two environment variables:
export CRDS_PATH=$HOME/crds_cache
export CRDS_SERVER_URL=https://roman-crds.stsci.edu
## Documentation
Documentation (built daily from the Github `main` branch) is available at:
https://roman-pipeline.readthedocs.io/en/latest/
To build the docs yourself, clone this repository and build the documentation with:
pip install -e ".[docs]"
cd docs
make html
## Contributions and Feedback
We welcome contributions and feedback on the project. Please follow the
[contributing guidelines](CONTRIBUTING.md) to submit an issue or a pull request.
We strive to provide a welcoming community to all of our users by abiding with the [Code of Conduct](CODE_OF_CONDUCT.md)
.
If you have questions or concerns regarding the software, please open
an [issue](https://github.com/spacetelescope/romancal/issues).
## Software vs DMS build version map
| roman tag | DMS build | CRDS_CONTEXT | Date | Notes |
|-----------|-----------|--------------|-----------|---------------------------------------|
| 0.1.0 | 0.0 | 003 | Nov 2020 | Release for Build 0.0 |
| 0.2.0 | 0.1 | 004 | Mar 2021 | Release for Build 0.1 |
| 0.3.0 | 0.2 | 007 | May 2021 | Release for Build 0.2 |
| 0.3.1 | 0.2 | 007 | Jun 2021 | Release for Build 0.2 CRDS tests |
| 0.4.2 | 0.3 | 011 | Sep 2021 | Release for Build 0.3 |
| 0.5.0 | 0.4 | 023 | Dec 2021 | Release for Build 0.4 |
| 0.6.0 | 0.5 | 030 | Mar 2022 | Release for Build 0.5 |
| 0.7.0 | 22Q3_B6 | 032 | May 2022 | Release for Build 22Q3_B6 (Build 0.6) |
| 0.7.1 | 22Q3_B6 | 032 | May 2022 | Release for Build 22Q3_B6 (Build 0.6) |
| 0.8.0 | 22Q4_B7 | 038 | Aug 2022 | Release for Build 22Q4_B7 (Build 0.7) |
| 0.8.1 | 22Q4_B7 | 038 | Aug 2022 | Release for Build 22Q4_B7 (Build 0.7) |
| 0.9.0 | 23Q1_B8 | 039 | Nov 2022 | Release for Build 23Q1_B8 (Build 8) |
| 0.10.0 | 23Q2_B9 | 041 | Feb 2023 | Release for Build 23Q2_B9 (Build 9) |
| 0.11.0 | 23Q3_B10 | 047 | May 2023 | Release for Build 23Q3_B10 (Build 10) |
| 0.12.0 | 23Q4_B11 | 051 | Aug 2023 | Release for Build 23Q4_B11 (Build 11) |
Note: CRDS_CONTEXT values flagged with an asterisk in the above table are estimates
(formal CONTEXT deliveries are only provided with final builds).
## Unit Tests
### Setup
The test suite require access to a CRDS cache, but currently (2021-02-09) the shared /grp/crds cache does not include
Roman files. Developers inside the STScI network can sync a cache from roman-crds-test.stsci.edu (if working from home,
be sure to connect to the VPN first):
```bash
$ export CRDS_SERVER_URL=https://roman-crds-test.stsci.edu
$ export CRDS_PATH=$HOME/roman-crds-test-cache
$ crds sync --contexts roman-edit
```
The CRDS_READONLY_CACHE variable should not be set, since references will need to be downloaded to your local cache as
they are requested.
### Running tests
Unit tests can be run via `pytest`. Within the top level of your local `roman` repo checkout:
pip install -e ".[test]"
pytest
Need to parallelize your test runs over 8 cores?
pip install pytest-xdist
pytest -n 8
## Regression Tests
Latest regression test results can be found here (STScI staff only):
https://plwishmaster.stsci.edu:8081/job/RT/job/romancal/
To run the regression tests on your local machine, get the test dependencies and set the environment variable
TEST_BIGDATA to our Artifactory server
(STSci staff members only):
pip install -e ".[test]"
export TEST_BIGDATA=https://bytesalad.stsci.edu/artifactory
To run all the regression tests (except the very slow ones):
pytest --bigdata romancal/regtest
You can control where the test results are written with the
`--basetemp=<PATH>` arg to `pytest`. _NOTE that `pytest` will wipe this directory clean for each test session, so make
sure it is a scratch area._
If you would like to run a specific test, find its name or ID and use the `-k` option:
pytest --bigdata romancal/regtest -k test_flat
If developers need to update the truth files in our nightly regression tests, there are instructions in this wiki.
https://github.com/spacetelescope/jwst/wiki/Maintaining-Regression-Tests
| /romancal-0.12.0.tar.gz/romancal-0.12.0/README.md | 0.403449 | 0.790126 | README.md | pypi |
from docutils.nodes import SkipNode, Text, caption, figure, raw, reference
from sphinx.roles import XRefRole
# Element classes
class page_ref(reference):
pass
class num_ref(reference):
pass
# Visit/depart functions
def skip_page_ref(self, node):
raise SkipNode
def latex_visit_page_ref(self, node):
self.body.append(f"\\pageref{{{node['refdoc']}:{node['reftarget']}}}")
raise SkipNode
def latex_visit_num_ref(self, node):
fields = node["reftarget"].split("#")
if len(fields) > 1:
label, target = fields
ref_link = f"{node['refdoc']}:{target}"
latex = f"\\hyperref[{ref_link}]{{{label} \\ref*{{{ref_link}}}}}"
self.body.append(latex)
else:
self.body.append(f"\\ref{{{node['refdoc']}:{fields[0]}}}")
raise SkipNode
def doctree_read(app, doctree):
# first generate figure numbers for each figure
env = app.builder.env
figid_docname_map = getattr(env, "figid_docname_map", {})
for figure_info in doctree.traverse(figure):
for id in figure_info["ids"]:
figid_docname_map[id] = env.docname
env.figid_docname_map = figid_docname_map
def doctree_resolved(app, doctree, docname):
i = 1
figids = {}
for figure_info in doctree.traverse(figure):
if app.builder.name != "latex" and app.config.number_figures:
for cap in figure_info.traverse(caption):
cap[0] = Text(
"%s %d: %s" % (app.config.figure_caption_prefix, i, cap[0])
)
for id in figure_info["ids"]:
figids[id] = i
i += 1
# replace numfig nodes with links
if app.builder.name != "latex":
for ref_info in doctree.traverse(num_ref):
if "#" in ref_info["reftarget"]:
label, target = ref_info["reftarget"].split("#")
labelfmt = label + " %d"
else:
labelfmt = "%d"
target = ref_info["reftarget"]
if target not in figids:
continue
if app.builder.name == "html":
target_doc = app.builder.env.figid_docname_map[target]
link = f"{app.builder.get_relative_uri(docname, target_doc)}#{target}"
html = (
f'<a class="pageref" href="{link}">{labelfmt %(figids[target])}</a>'
)
ref_info.replace_self(raw(html, html, format="html"))
else:
ref_info.replace_self(Text(labelfmt % (figids[target])))
def clean_env(app):
app.builder.env.i = 1
app.builder.env.figid_docname_map = {}
def setup(app):
app.add_config_value("number_figures", True, True)
app.add_config_value("figure_caption_prefix", "Figure", True)
app.add_node(
page_ref,
text=(skip_page_ref, None),
html=(skip_page_ref, None),
latex=(latex_visit_page_ref, None),
)
app.add_role("page", XRefRole(nodeclass=page_ref))
app.add_node(num_ref, latex=(latex_visit_num_ref, None))
app.add_role("num", XRefRole(nodeclass=num_ref))
app.connect("builder-inited", clean_env)
app.connect("doctree-read", doctree_read)
app.connect("doctree-resolved", doctree_resolved) | /romancal-0.12.0.tar.gz/romancal-0.12.0/docs/exts/numfig.py | 0.418935 | 0.317955 | numfig.py | pypi |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.