Step 3: Run the simulation

In the following step we'll run the simulations for the 6 different cases. For each run, we need 3 input files: the scene, the simulation configuration, and the simulator setup file. The first and last of these remain the same for each run, and we loop through the list of 6 simulation config files. After each simulation has run, the code renames the output directory to include the simulation settings.
cfg_files = glob.glob('*_simconfig.ini')
print(cfg_files)

# Configure the simulator engine - this requires no editing from the default
simulator_config = SimulatorConfig.from_default()

for f in cfg_files[:1]:
    tmp = f.split('.')
    fcomps = tmp[0].split('_')
    sim = MiriSimulation.from_configfiles(f)
    sim.run()
    outdir = sorted(glob.glob('*_*_mirisim'), key=os.path.getmtime)[-1]
    new_outdir = 'wasp103_imtso_{0}_{1}_{2}'.format(fcomps[1], fcomps[2], outdir)
    os.rename(outdir, new_outdir)
    print(outdir, new_outdir)
2021-02-24 14:10:04,456 - INFO - Using simulation configuration: wasp103_FULL_5G1I1E_simconfig.ini
2021-02-24 14:10:04,458 - INFO - Using scene configuration: wasp103_scene.ini
2021-02-24 14:10:04,460 - INFO - MIRISim version: 2.3.0
2021-02-24 14:10:04,461 - INFO - MIRI Simulation started.
2021-02-24 14:10:04,463 - INFO - Output will be saved to: 20210224_141004_mirisim
2021-02-24 14:10:04,464 - INFO - Storing configs in output directory.
2021-02-24 14:10:04,467 - INFO - Storing dither pattern in output directory.
2021-02-24 14:10:04,468 - INFO - Using $CDP_DIR for location of CDP files: /Users/kendrew//CDP_2.3
2021-02-24 14:10:04,469 - INFO - Setting up simulated Observation, with following settings:
2021-02-24 14:10:04,470 - INFO - Configuration Path: IMA_FULL
2021-02-24 14:10:04,471 - INFO - Primary optical path: IMA
2021-02-24 14:10:04,472 - INFO - IMA Filter: F770W
2021-02-24 14:10:04,473 - INFO - IMA Subarray: FULL
2021-02-24 14:10:04,474 - INFO - IMA detector readout mode: FAST
2021-02-24 14:10:04,475 - INFO - IMA detector # exposures: 1
2021-02-24 14:10:04,476 - INFO - IMA detector # integrations: 1
2021-02-24 14:10:04,477 - INFO - IMA detector # frames: 5
2021-02-24 14:10:04,478 - INFO - Parsing: Background
2021-02-24 14:10:04,479 - INFO - Initializing Background
2021-02-24 14:10:04,480 - INFO - Parsing: point_1
2021-02-24 14:10:04,481 - INFO - Initializing Point
2021-02-24 14:10:04,481 - INFO - Simulating a single pointing.
2021-02-24 14:10:04,482 - WARNING - Matching against local CDP cache only.
2021-02-24 14:10:04,483 - ERROR - The criteria given (DISTORTION, detector=MIRIMAGE) did not match any CDP files.
2021-02-24 14:10:04,484 - ERROR - No data model could be retrieved.
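For reference, the directory renaming in the cell above works by splitting the config filename into components; a minimal self-contained sketch, using the config filename and output directory that appear in the log:

```python
# Parse the simulation config filename into components and build the new
# output directory name, exactly as the loop above does.
f = 'wasp103_FULL_5G1I1E_simconfig.ini'
fcomps = f.split('.')[0].split('_')   # ['wasp103', 'FULL', '5G1I1E', 'simconfig']

outdir = '20210224_141004_mirisim'    # default MIRISim output directory (from the log)
new_outdir = 'wasp103_imtso_{0}_{1}_{2}'.format(fcomps[1], fcomps[2], outdir)
print(new_outdir)  # wasp103_imtso_FULL_5G1I1E_20210224_141004_mirisim
```

The subarray and readout settings thus survive in the directory name, which makes the 6 runs easy to tell apart afterwards.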
BSD-3-Clause
TSO-imaging-sims/datalabs-sim/MIRI_im_tso_datalabs.ipynb
STScI-MIRI/TSO-MIRI-simulations
Melodia: A Python Library for Protein Structure and Dynamics Analysis Structure Similarity Analysis
import dill
import warnings

import melodia as mel
import seaborn as sns

from os import path
from Bio.PDB.PDBExceptions import PDBConstructionWarning

warnings.filterwarnings("ignore", category=PDBConstructionWarning)
_____no_output_____
Apache-2.0
examples/nb_py_melodia_pir_clustering.ipynb
rwmontalvao/Melodia
Parse an alignment in the PIR file format
# Dill can be used for storage
if path.exists('model.dill'):
    with open('model.dill', 'rb') as file:
        align = dill.load(file)
else:
    align = mel.parser_pir_file('model.ali')
    with open('model.dill', 'wb') as file:
        dill.dump(align, file)

palette = 'Dark2'
colors = 7
sns.color_palette(palette, colors)

mel.cluster_alignment(align=align, threshold=1.1, long=True)
mel.save_align_to_ps(align=align, ps_file='model', palette=palette, colors=colors)
mel.save_pymol_script(align=align, pml_file='clusters_model', palette=palette, colors=colors)
_____no_output_____
Apache-2.0
examples/nb_py_melodia_pir_clustering.ipynb
rwmontalvao/Melodia
Clustered Feature Importance

The goal of this notebook is to demonstrate Clustered Feature Importance (CFI), a feature importance method suggested by **Dr. Marcos Lopez de Prado** in the [paper](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3517595) and the book Machine Learning for Asset Managers. The aim of CFI is to cluster similar features and apply the feature importance analysis at the cluster level. Because the resulting clusters are mutually dissimilar, the method tends to tame the substitution effect, and by using information theory we can also reduce the multicollinearity of the dataset.
# General imports
import warnings
warnings.filterwarnings('ignore')

import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

from sklearn.metrics import accuracy_score, log_loss
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection._split import KFold

# Import MlFinLab tools
import mlfinlab as ml
from mlfinlab.util.generate_dataset import get_classification_data
from mlfinlab.clustering.feature_clusters import get_feature_clusters
from mlfinlab.cross_validation import ml_cross_val_score
from mlfinlab.feature_importance import (mean_decrease_impurity,
                                         mean_decrease_accuracy,
                                         plot_feature_importance)
from mlfinlab.clustering.onc import get_onc_clusters
_____no_output_____
MIT
Cluster_Feature_Importance.ipynb
HartmutD/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
**Clustered Feature Importance, or the CFI algorithm, can be implemented in a two-step process as mentioned in the book.**

Step 1: Feature Clustering

As a first step we need to generate the clusters or subsets of features we want to analyse with feature importance methods. This can be done using the feature clusters module of mlfinlab, which takes various parameters for generating feature clusters, as in the book.

* The algorithm projects the observed features into a metric space by applying a dependence metric function, either correlation-based or information-theory-based. Information-theoretic metrics have the advantage of recognizing redundant features that are the result of nonlinear combinations of informative features (i.e. multicollinearity).
* Next, we need to determine the optimal number of clusters. The user can either specify the number of clusters to use, which applies hierarchical clustering on the distance matrix derived from the dependence matrix for a given linkage method, or use the ONC algorithm, which uses K-Means clustering to automate the task of finding either just the optimal number of clusters or both the optimal number of clusters and the cluster compositions. The *caveat* of this process is that some silhouette scores may be low due to one feature being a combination of multiple features across clusters. This is a problem, because ONC cannot assign one feature to multiple clusters. Hence, the following transformation may help reduce the multicollinearity of the system:
# Generate a synthetic dataset for testing.
# We generate 40 features with 10000 rows of samples: 5 informative ('I_'),
# 30 redundant ('R_') and the remaining 5 noisy ('N_') features.
# Redundant features share a large amount of information with each other and
# with the informative features, i.e. they exhibit the substitution effect.
X, y = get_classification_data(n_features=40, n_informative=5, n_redundant=30,
                               n_samples=10000, sigma=0.1)
X.head(3)

# Now we get the feature clusters
dep_matrix = 'linear'  # linear-correlation-based dependence metric
# n_clusters is set to None to get the optimal number of clusters via the ONC algorithm
clusters = get_feature_clusters(X, dependence_metric=dep_matrix, distance_metric=None,
                                linkage_method=None, n_clusters=None)
clusters
_____no_output_____
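For the correlation-based dependence option, a common way to map correlations into a metric distance before clustering is $d = \sqrt{0.5\,(1-\rho)}$. Whether mlfinlab uses exactly this form internally is an assumption here; the sketch below only illustrates the idea:

```python
import numpy as np

# Illustrative sketch (not necessarily mlfinlab's exact formula): map a
# correlation matrix rho to the distance d = sqrt(0.5 * (1 - rho)), which is
# 0 for perfectly correlated features and 1 for perfectly anti-correlated ones.
rng = np.random.default_rng(0)
X_toy = rng.normal(size=(200, 4))

corr = np.corrcoef(X_toy, rowvar=False)
dist = np.sqrt(0.5 * (1.0 - corr))

# each feature is at (numerically) zero distance from itself
print(np.allclose(np.diag(dist), 0.0, atol=1e-6))  # True
```

Hierarchical clustering can then be run directly on this distance matrix for a chosen linkage method.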
MIT
Cluster_Feature_Importance.ipynb
HartmutD/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
As we can see, the algorithm has not detected any features with a low silhouette score, so there is no need to replace those features with their residuals*. Now that we have identified the number of clusters (six in this case) and the composition of features within each cluster, we can move to the next step. (*This will be discussed in the later part of this notebook.)

Step 2: Clustered Importance

Clustered Feature Importance can be implemented by simply passing the feature clusters obtained in Step 1 to the **clustered_subsets** argument of the MDI or MDA feature importance algorithm. We can apply MDI and MDA on groups of similar features, rather than on individual features, and obtain the importance of the cluster as a whole instead of individual features. This way we can analyse how mutually dissimilar clusters interact with the model and possibly isolate the noisy/non-informative clusters.
# Setup for the feature importance algorithm

# We define a classifier
clf_base = DecisionTreeClassifier(criterion='entropy', max_features=1,
                                  class_weight='balanced',
                                  min_weight_fraction_leaf=0)
clf = BaggingClassifier(base_estimator=clf_base, n_estimators=1000,
                        max_features=1., max_samples=1.,
                        oob_score=True, n_jobs=-1)

# Fit the classifier
fit = clf.fit(X, y)

# Set up the cross-validation generator.
# Use a Purged K-Fold generator on real financial datasets to avoid leakage.
cvGen = KFold(n_splits=10)
oos_score = ml_cross_val_score(clf, X, y, cv_gen=cvGen, sample_weight_train=None,
                               scoring=log_loss).mean()
_____no_output_____
MIT
Cluster_Feature_Importance.ipynb
HartmutD/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
Clustered MDI

We compute the clustered MDI as the sum of the MDI values of the features that constitute that cluster. If there is one feature per cluster, then MDI and clustered MDI are the same.
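The aggregation itself is a simple per-cluster sum; a hypothetical sketch, where every feature name and importance value is made up for illustration:

```python
# Hypothetical per-feature MDI values and cluster assignments (made up).
feature_mdi = {"I_1": 0.30, "R_1": 0.25, "N_1": 0.05, "N_2": 0.04}
clusters_toy = [["I_1", "R_1"], ["N_1", "N_2"]]

# Clustered MDI: sum the MDI values of each cluster's member features.
clustered_mdi = {f"cluster_{k}": round(sum(feature_mdi[f] for f in members), 2)
                 for k, members in enumerate(clusters_toy)}
print(clustered_mdi)  # {'cluster_0': 0.55, 'cluster_1': 0.09}
```
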
clustered_mdi_imp = mean_decrease_impurity(clf, X.columns, clustered_subsets=clusters)
plot_feature_importance(clustered_mdi_imp, oob_score=clf.oob_score_, oos_score=oos_score,
                        save_fig=True, output_path='images/clustered_mdi.png')
_____no_output_____
MIT
Cluster_Feature_Importance.ipynb
HartmutD/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
As expected, the clusters of non-informative features are given the least importance, and the clusters with redundant and informative features are placed above the noise cluster. This is very useful for detecting features that are non-informative without the presence of some other features within the same cluster.

Clustered MDA

Clustered MDA is an extension of normal MDA that tackles multicollinearity and the (linear or non-linear) substitution effect. Its implementation was also discussed by Dr. Marcos Lopez de Prado in the Clustered Feature Importance [presentation slides](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3517595).
clustered_mda_imp = mean_decrease_accuracy(clf, X, y, cv_gen=cvGen,
                                           clustered_subsets=clusters, scoring=log_loss)
plot_feature_importance(clustered_mda_imp, oob_score=clf.oob_score_, oos_score=oos_score,
                        save_fig=True, output_path='images/clustered_mda.png')
_____no_output_____
MIT
Cluster_Feature_Importance.ipynb
HartmutD/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
The clustered MDA has also correctly identified the noisy cluster and placed it at the bottom.

The Caveat

Now that we have seen how to implement CFI with MDI and MDA, we have to discuss the *caveat* of the normal ONC algorithm that was mentioned in Step 1 of this notebook. To understand it, we need an understanding of how ONC works. ONC finds the optimal number of clusters as well as the composition of those clusters, where each feature belongs to one and only one cluster. Features that belong to the same cluster share a large amount of information, and features that belong to different clusters share only a relatively small amount of information. The consistency of the cluster compositions is determined by the [silhouette score](https://en.wikipedia.org/wiki/Silhouette_(clustering)) of the features. The silhouette ranges from -1 to +1, where a high value indicates that the object is well matched to its own cluster and poorly matched to neighboring clusters. So there may be some features with a low silhouette score, and this is a problem because ONC cannot assign one feature to multiple clusters. In this case, the following transformation may help reduce the multicollinearity of the system:

For each cluster $k = 1, \ldots, K$, replace the features included in that cluster with residual features, so that they do not contain any information outside cluster $k$. Let $D_{k}$ be the subset of feature indices $D = \{1, \ldots, F\}$ included in cluster $k$, where $D_{k} \subset D$, $\|D_{k}\| > 0\ \forall k$; $D_{k} \bigcap D_{l} = \emptyset\ \forall k \ne l$; $\bigcup\limits_{k=1}^{K} D_{k} = D$. Then, for a given feature $X_{i}$ with $i \in D_{k}$, we compute the residual feature $\hat{\varepsilon}_{i}$ by fitting the following regression:

$$X_{n,i} = \alpha_{i} + \sum\limits_{j \in \{\bigcup_{l<k} D_{l}\}} \beta_{i,j} X_{n,j} + \varepsilon_{n,i}$$

where $n = 1, \ldots, N$ is the index of observations per feature.
But if the number of degrees of freedom in the above regression is too low, one option is to use as regressors linear combinations of the features within each cluster, following a minimum variance weighting scheme, so that only $K-1$ betas need to be estimated. This transformation is not necessary if the silhouette scores clearly indicate that features belong to their respective clusters.
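The residual transformation above can be sketched numerically. This is a minimal self-contained illustration with synthetic data and plain least squares, not mlfinlab's implementation:

```python
import numpy as np

# Synthetic illustration of the residual transformation: regress a feature
# X_i on the features of earlier clusters (plus an intercept alpha_i) and
# keep the residual, which then carries no linear information from them.
rng = np.random.default_rng(0)
prior_clusters = rng.normal(size=(500, 3))             # features from clusters l < k
x_i = prior_clusters @ np.array([0.5, -0.2, 0.1]) + rng.normal(size=500)

A = np.column_stack([np.ones(500), prior_clusters])    # intercept + regressors
coef, *_ = np.linalg.lstsq(A, x_i, rcond=None)
residual = x_i - A @ coef                              # the residual feature

# OLS residuals are orthogonal to the regressors (up to floating-point error)
print(all(abs(np.corrcoef(residual, prior_clusters[:, j])[0, 1]) < 1e-6
          for j in range(3)))  # True
```

In the actual algorithm, `residual` would replace `x_i` before re-running the clustering.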
corr0, clstrs, silh = get_onc_clusters(X.corr(), repeat=3)

plt.figure(figsize=(16, 9))
sns.heatmap(corr0, cmap='viridis');

silh
_____no_output_____
MIT
Cluster_Feature_Importance.ipynb
HartmutD/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
As we can see, there is very low correlation among clusters, hence we need not transform anything in this dataset. The silhouette scores also confirm this, as no features have a silhouette score below zero. Now let us artificially generate a dataset that introduces features with a low silhouette score. Here the sigma argument of get_classification_data will help us generate a dataset with a high substitution effect.
# We increase sigma to 5 to introduce a high substitution effect
X_, y_ = get_classification_data(n_features=40, n_informative=5, n_redundant=30,
                                 n_samples=1000, sigma=5)

# Now let's check whether we obtained the desired dataset
corr0, clstrs, silh = get_onc_clusters(X_.corr())
clstrs
_____no_output_____
MIT
Cluster_Feature_Importance.ipynb
HartmutD/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
Now, let's see if there are any features with a low silhouette score. If so, we can correct them with the transformation mentioned above (the transformation is applied automatically).
# This function has a built-in detection mechanism that finds features with a
# low silhouette score and corrects them with the transformation
clusters = get_feature_clusters(X_, dependence_metric=dep_matrix, distance_metric=None,
                                linkage_method=None, n_clusters=None)
3 feature/s found with low silhouette score Index(['N_0', 'N_4', 'R_0'], dtype='object'). Returning the transformed dataset
MIT
Cluster_Feature_Importance.ipynb
HartmutD/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
We have got a dataset with some features that have a negative silhouette score. Because of this, all of the noisy features are placed within the informative and redundant feature clusters. **This is the caveat of the ONC algorithm.**
clusters
_____no_output_____
MIT
Cluster_Feature_Importance.ipynb
HartmutD/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
As we can see, the composition after transformation has changed and we now have 3 clusters instead of 2. Though not perfect, this does a much better job of clustering than the normal ONC algorithm. The get_feature_clusters function can also detect the problem of a low number of degrees of freedom in the regression model used for generating the residual $\hat{\varepsilon}_{i}$ that replaces the original feature $X_{i}$, as mentioned above.

Using Hierarchical Clustering
dist_matrix = 'angular'  # angular distance metric
linkage = 'single'       # linkage method for hierarchical clustering
clusters_ = get_feature_clusters(X, dependence_metric=dep_matrix, distance_metric=dist_matrix,
                                 linkage_method=linkage, n_clusters=None)
clusters_
_____no_output_____
MIT
Cluster_Feature_Importance.ipynb
HartmutD/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
Unsplash Joint Query Search

Using this notebook you can search for images from the [Unsplash Dataset](https://unsplash.com/data) using natural language queries. The search is powered by OpenAI's [CLIP](https://github.com/openai/CLIP) neural network.

This notebook uses the precomputed feature vectors for almost 2 million images from the full version of the [Unsplash Dataset](https://unsplash.com/data). If you want to compute the features yourself, see [here](https://github.com/haltakov/natural-language-image-search#on-your-machine).

This project was mostly based on the [project](https://github.com/haltakov/natural-language-image-search) created by [Vladimir Haltakov](https://twitter.com/haltakov), and the full code is open-sourced on [GitHub](https://github.com/haofanwang/natural-language-joint-query-search).
!git clone https://github.com/haofanwang/natural-language-joint-query-search.git
cd natural-language-joint-query-search
/content/natural-language-joint-query-search
Apache-2.0
natural_language_joint_query_search/colab/unsplash_image_search.ipynb
g-luo/CLIP_Explainability
Setup Environment

In this section we will set up the environment. First we need to install CLIP and then upgrade torch to version 1.7.1 with CUDA support (by default CLIP installs torch 1.7.1 without CUDA). Google Colab currently has torch 1.7.0, which doesn't work well with CLIP.
!pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 -f https://download.pytorch.org/whl/torch_stable.html
!pip install ftfy regex tqdm
Looking in links: https://download.pytorch.org/whl/torch_stable.html
Collecting torch==1.7.1+cu101
  Downloading https://download.pytorch.org/whl/cu101/torch-1.7.1%2Bcu101-cp36-cp36m-linux_x86_64.whl (735.4MB)
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))': /simple/torchvision/
Collecting torchvision==0.8.2+cu101
  Downloading https://download.pytorch.org/whl/cu101/torchvision-0.8.2%2Bcu101-cp36-cp36m-linux_x86_64.whl (12.8MB)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torch==1.7.1+cu101) (1.19.5)
Requirement already satisfied: dataclasses; python_version < "3.7" in /usr/local/lib/python3.6/dist-packages (from torch==1.7.1+cu101) (0.8)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.6/dist-packages (from torch==1.7.1+cu101) (3.7.4.3)
Requirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.6/dist-packages (from torchvision==0.8.2+cu101) (7.0.0)
Installing collected packages: torch, torchvision
  Found existing installation: torch 1.7.0+cu101
    Uninstalling torch-1.7.0+cu101:
      Successfully uninstalled torch-1.7.0+cu101
  Found existing installation: torchvision 0.8.1+cu101
    Uninstalling torchvision-0.8.1+cu101:
      Successfully uninstalled torchvision-0.8.1+cu101
Successfully installed torch-1.7.1+cu101 torchvision-0.8.2+cu101
Collecting ftfy
  Downloading https://files.pythonhosted.org/packages/04/06/e5c80e2e0f979628d47345efba51f7ba386fe95963b11c594209085f5a9b/ftfy-5.9.tar.gz (66kB)
Requirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (4.41.1)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.6/dist-packages (from ftfy) (0.2.5)
Building wheels for collected packages: ftfy
  Building wheel for ftfy (setup.py) ... done
  Created wheel for ftfy: filename=ftfy-5.9-cp36-none-any.whl size=46451 sha256=9ebbd9cc943e4a7d486233233aef6bcea6db5cb3fd6f1061bf945e202d4052f6
  Stored in directory: /root/.cache/pip/wheels/5e/2e/f0/b07196e8c929114998f0316894a61c752b63bfa3fdd50d2fc3
Successfully built ftfy
Installing collected packages: ftfy
Successfully installed ftfy-5.9
Apache-2.0
natural_language_joint_query_search/colab/unsplash_image_search.ipynb
g-luo/CLIP_Explainability
Download the Precomputed Data

In this section the precomputed feature vectors for all photos are downloaded. In order to compare the photos from the Unsplash dataset to a text query, we need to compute the feature vector of each photo using CLIP. We need to download two files:

* `photo_ids.csv` - a list of the photo IDs for all images in the dataset. The photo ID can be used to get the actual photo from Unsplash.
* `features.npy` - a matrix containing the precomputed 512-element feature vector for each photo in the dataset.

The files are available on [Google Drive](https://drive.google.com/drive/folders/1WQmedVCDIQKA2R33dkS1f980YsJXRZ-q?usp=sharing).
from pathlib import Path

# Create a folder for the precomputed features
!mkdir unsplash-dataset

# Download the photo IDs and the feature vectors
!gdown --id 1FdmDEzBQCf3OxqY9SbU-jLfH_yZ6UPSj -O unsplash-dataset/photo_ids.csv
!gdown --id 1L7ulhn4VeN-2aOM-fYmljza_TQok-j9F -O unsplash-dataset/features.npy

# Download from an alternative source if the download doesn't work for some
# reason (for example, the download quota limit was exceeded)
if not Path('unsplash-dataset/photo_ids.csv').exists():
  !wget https://transfer.army/api/download/TuWWFTe2spg/EDm6KBjc -O unsplash-dataset/photo_ids.csv

if not Path('unsplash-dataset/features.npy').exists():
  !wget https://transfer.army/api/download/LGXAaiNnMLA/AamL9PpU -O unsplash-dataset/features.npy
Downloading...
From: https://drive.google.com/uc?id=1FdmDEzBQCf3OxqY9SbU-jLfH_yZ6UPSj
To: /content/natural-language-joint-query-search/unsplash-dataset/photo_ids.csv
23.8MB [00:00, 111MB/s]
Downloading...
From: https://drive.google.com/uc?id=1L7ulhn4VeN-2aOM-fYmljza_TQok-j9F
To: /content/natural-language-joint-query-search/unsplash-dataset/features.npy
2.03GB [00:40, 50.3MB/s]
Apache-2.0
natural_language_joint_query_search/colab/unsplash_image_search.ipynb
g-luo/CLIP_Explainability
Define Functions

Some important functions from CLIP for processing the data are defined here. The `encode_search_query` function takes a text description and encodes it into a feature vector using the CLIP model.
def encode_search_query(search_query):
    with torch.no_grad():
        # Encode and normalize the search query using CLIP
        text_encoded, weight = model.encode_text(clip.tokenize(search_query).to(device))
        text_encoded /= text_encoded.norm(dim=-1, keepdim=True)

    # Retrieve the feature vector from the GPU and convert it to a numpy array
    return text_encoded.cpu().numpy()
_____no_output_____
Apache-2.0
natural_language_joint_query_search/colab/unsplash_image_search.ipynb
g-luo/CLIP_Explainability
The `find_best_matches` function compares the text feature vector to the feature vectors of all images and finds the best matches. The function returns the IDs of the best matching photos.
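Because both the photo features and the query features are L2-normalized, the matrix product used for the comparison is exactly cosine similarity. A small self-contained check of that fact (the vector dimension here is illustrative, not CLIP's 512):

```python
import numpy as np

# For unit-length vectors, the plain dot product equals cosine similarity.
rng = np.random.default_rng(0)
a = rng.normal(size=8)
b = rng.normal(size=8)

cosine = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

a_unit = a / np.linalg.norm(a)
b_unit = b / np.linalg.norm(b)
dot = a_unit @ b_unit

print(np.isclose(dot, cosine))  # True
```

This is why the feature vectors are normalized once up front: ranking then reduces to a single matrix multiplication.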
def find_best_matches(text_features, photo_features, photo_ids, results_count=3):
    # Compute the similarity between the search query and each photo using cosine similarity
    similarities = (photo_features @ text_features.T).squeeze(1)

    # Sort the photos by their similarity score
    best_photo_idx = (-similarities).argsort()

    # Return the photo IDs of the best matches
    return [photo_ids[i] for i in best_photo_idx[:results_count]]
_____no_output_____
Apache-2.0
natural_language_joint_query_search/colab/unsplash_image_search.ipynb
g-luo/CLIP_Explainability
We can load the pretrained public CLIP model.
import torch
from CLIP.clip import clip

# Load the public CLIP model
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device, jit=False)
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 354M/354M [00:02<00:00, 138MiB/s]
Apache-2.0
natural_language_joint_query_search/colab/unsplash_image_search.ipynb
g-luo/CLIP_Explainability
We can now load the pre-extracted Unsplash image features.
import pandas as pd
import numpy as np

# Load the photo IDs
photo_ids = pd.read_csv("unsplash-dataset/photo_ids.csv")
photo_ids = list(photo_ids['photo_id'])

# Load the feature vectors
photo_features = np.load("unsplash-dataset/features.npy")

# Print some statistics
print(f"Photos loaded: {len(photo_ids)}")
Photos loaded: 1981161
Apache-2.0
natural_language_joint_query_search/colab/unsplash_image_search.ipynb
g-luo/CLIP_Explainability
Search Unsplash

Now we are ready to search the dataset using natural language. Check out the examples below and feel free to try out your own queries.

In this project, we support more types of searching than the [original project](https://github.com/haltakov/natural-language-image-search):

1. Text-to-Image Search
2. Image-to-Image Search
3. Text+Text-to-Image Search
4. Image+Text-to-Image Search

Notes:

1. As the Unsplash API limit is hit from time to time, we don't display the images but show links to download them.
2. As the pretrained CLIP model is mainly trained with English texts, if you want to try a different language, please use the Google Translate API or an NMT model to translate the query first.

Text-to-Image Search

"Tokyo Tower at night"
search_query = "Tokyo Tower at night."

text_features = encode_search_query(search_query)

# Find the best matches
best_photo_ids = find_best_matches(text_features, photo_features, photo_ids, 5)

for photo_id in best_photo_ids:
  print("https://unsplash.com/photos/{}/download".format(photo_id))
https://unsplash.com/photos/Hfjoa3qqytM/download
https://unsplash.com/photos/9tOyu48-P7M/download
https://unsplash.com/photos/OCgMGflYgVg/download
https://unsplash.com/photos/msYlh78QagI/download
https://unsplash.com/photos/UYmsWq6Cf1c/download
Apache-2.0
natural_language_joint_query_search/colab/unsplash_image_search.ipynb
g-luo/CLIP_Explainability
"Two children are playing in the amusement park."
search_query = "Two children are playing in the amusement park."

text_features = encode_search_query(search_query)

# Find the best matches
best_photo_ids = find_best_matches(text_features, photo_features, photo_ids, 5)

for photo_id in best_photo_ids:
  print("https://unsplash.com/photos/{}/download".format(photo_id))
https://unsplash.com/photos/VPq1DiHNShY/download
https://unsplash.com/photos/nQlKkqq6qEw/download
https://unsplash.com/photos/lgXRsUVWl88/download
https://unsplash.com/photos/b10qqhvwWg4/download
https://unsplash.com/photos/xUDUhI_qsKQ/download
Apache-2.0
natural_language_joint_query_search/colab/unsplash_image_search.ipynb
g-luo/CLIP_Explainability
Image-to-Image Search
from PIL import Image

source_image = "./images/borna-hrzina-8IPrifbjo-0-unsplash.jpg"

with torch.no_grad():
    image_feature = model.encode_image(preprocess(Image.open(source_image)).unsqueeze(0).to(device))
    image_feature = (image_feature / image_feature.norm(dim=-1, keepdim=True)).cpu().numpy()

# Find the best matches
best_photo_ids = find_best_matches(image_feature, photo_features, photo_ids, 5)

for photo_id in best_photo_ids:
  print("https://unsplash.com/photos/{}/download".format(photo_id))
https://unsplash.com/photos/8IPrifbjo-0/download
https://unsplash.com/photos/2Hzzw1qfVTQ/download
https://unsplash.com/photos/q1gXY48Ej78/download
https://unsplash.com/photos/OYaw40WnhSc/download
https://unsplash.com/photos/DpeXitxtix8/download
Apache-2.0
natural_language_joint_query_search/colab/unsplash_image_search.ipynb
g-luo/CLIP_Explainability
Text+Text-to-Image Search
search_query = "red flower"
search_query_extra = "blue sky"

text_features = encode_search_query(search_query)
text_features_extra = encode_search_query(search_query_extra)

mixed_features = text_features + text_features_extra

# Find the best matches
best_photo_ids = find_best_matches(mixed_features, photo_features, photo_ids, 5)

for photo_id in best_photo_ids:
  print("https://unsplash.com/photos/{}/download".format(photo_id))
https://unsplash.com/photos/NewdN4HJaWM/download
https://unsplash.com/photos/r6DXsecvS4w/download
https://unsplash.com/photos/Ye-PdCxCmEQ/download
https://unsplash.com/photos/AFT4cSrnVZk/download
https://unsplash.com/photos/qKBVUBtZJCU/download
Apache-2.0
natural_language_joint_query_search/colab/unsplash_image_search.ipynb
g-luo/CLIP_Explainability
Image+Text-to-Image Search
source_image = "./images/borna-hrzina-8IPrifbjo-0-unsplash.jpg"
search_text = "cars"

with torch.no_grad():
    image_feature = model.encode_image(preprocess(Image.open(source_image)).unsqueeze(0).to(device))
    image_feature = (image_feature / image_feature.norm(dim=-1, keepdim=True)).cpu().numpy()

text_feature = encode_search_query(search_text)

# image + text
modified_feature = image_feature + text_feature

best_photo_ids = find_best_matches(modified_feature, photo_features, photo_ids, 5)

for photo_id in best_photo_ids:
  print("https://unsplash.com/photos/{}/download".format(photo_id))
https://unsplash.com/photos/8IPrifbjo-0/download
https://unsplash.com/photos/2Hzzw1qfVTQ/download
https://unsplash.com/photos/6FpUtZtjFjM/download
https://unsplash.com/photos/Qm8pvpJ-uGs/download
https://unsplash.com/photos/c3ddbxzQtdM/download
Apache-2.0
natural_language_joint_query_search/colab/unsplash_image_search.ipynb
g-luo/CLIP_Explainability
RadarCOVID-Report Data Extraction
import datetime
import json
import logging
import os
import shutil
import tempfile
import textwrap
import uuid

import matplotlib.pyplot as plt
import matplotlib.ticker
import numpy as np
import pandas as pd
import pycountry
import retry
import seaborn as sns

%matplotlib inline

current_working_directory = os.environ.get("PWD")
if current_working_directory:
    os.chdir(current_working_directory)

sns.set()
matplotlib.rcParams["figure.figsize"] = (15, 6)

extraction_datetime = datetime.datetime.utcnow()
extraction_date = extraction_datetime.strftime("%Y-%m-%d")
extraction_previous_datetime = extraction_datetime - datetime.timedelta(days=1)
extraction_previous_date = extraction_previous_datetime.strftime("%Y-%m-%d")
extraction_date_with_hour = datetime.datetime.utcnow().strftime("%Y-%m-%d@%H")
current_hour = datetime.datetime.utcnow().hour
are_today_results_partial = current_hour != 23
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-03-11.ipynb
pvieito/Radar-STATS
Constants
from Modules.ExposureNotification import exposure_notification_io

spain_region_country_code = "ES"
germany_region_country_code = "DE"

default_backend_identifier = spain_region_country_code

backend_generation_days = 7 * 2
daily_summary_days = 7 * 4 * 3
daily_plot_days = 7 * 4
tek_dumps_load_limit = daily_summary_days + 1
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-03-11.ipynb
pvieito/Radar-STATS
Parameters
environment_backend_identifier = os.environ.get("RADARCOVID_REPORT__BACKEND_IDENTIFIER")
if environment_backend_identifier:
    report_backend_identifier = environment_backend_identifier
else:
    report_backend_identifier = default_backend_identifier
report_backend_identifier

environment_enable_multi_backend_download = \
    os.environ.get("RADARCOVID_REPORT__ENABLE_MULTI_BACKEND_DOWNLOAD")
if environment_enable_multi_backend_download:
    report_backend_identifiers = None
else:
    report_backend_identifiers = [report_backend_identifier]
report_backend_identifiers

environment_invalid_shared_diagnoses_dates = \
    os.environ.get("RADARCOVID_REPORT__INVALID_SHARED_DIAGNOSES_DATES")
if environment_invalid_shared_diagnoses_dates:
    invalid_shared_diagnoses_dates = environment_invalid_shared_diagnoses_dates.split(",")
else:
    invalid_shared_diagnoses_dates = []
invalid_shared_diagnoses_dates
COVID-19 Cases
report_backend_client = \ exposure_notification_io.get_backend_client_with_identifier( backend_identifier=report_backend_identifier) @retry.retry(tries=10, delay=10, backoff=1.1, jitter=(0, 10)) def download_cases_dataframe(): return pd.read_csv("https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/owid-covid-data.csv") confirmed_df_ = download_cases_dataframe() confirmed_df_.iloc[0] confirmed_df = confirmed_df_.copy() confirmed_df = confirmed_df[["date", "new_cases", "iso_code"]] confirmed_df.rename( columns={ "date": "sample_date", "iso_code": "country_code", }, inplace=True) def convert_iso_alpha_3_to_alpha_2(x): try: return pycountry.countries.get(alpha_3=x).alpha_2 except Exception as e: logging.info(f"Error converting country ISO Alpha 3 code '{x}': {repr(e)}") return None confirmed_df["country_code"] = confirmed_df.country_code.apply(convert_iso_alpha_3_to_alpha_2) confirmed_df.dropna(inplace=True) confirmed_df["sample_date"] = pd.to_datetime(confirmed_df.sample_date, dayfirst=True) confirmed_df["sample_date"] = confirmed_df.sample_date.dt.strftime("%Y-%m-%d") confirmed_df.sort_values("sample_date", inplace=True) confirmed_df.tail() confirmed_days = pd.date_range( start=confirmed_df.iloc[0].sample_date, end=extraction_datetime) confirmed_days_df = pd.DataFrame(data=confirmed_days, columns=["sample_date"]) confirmed_days_df["sample_date_string"] = \ confirmed_days_df.sample_date.dt.strftime("%Y-%m-%d") confirmed_days_df.tail() def sort_source_regions_for_display(source_regions: list) -> list: if report_backend_identifier in source_regions: source_regions = [report_backend_identifier] + \ list(sorted(set(source_regions).difference([report_backend_identifier]))) else: source_regions = list(sorted(source_regions)) return source_regions report_source_regions = report_backend_client.source_regions_for_date( date=extraction_datetime.date()) report_source_regions = sort_source_regions_for_display( source_regions=report_source_regions) 
report_source_regions def get_cases_dataframe(source_regions_for_date_function, columns_suffix=None): source_regions_at_date_df = confirmed_days_df.copy() source_regions_at_date_df["source_regions_at_date"] = \ source_regions_at_date_df.sample_date.apply( lambda x: source_regions_for_date_function(date=x)) source_regions_at_date_df.sort_values("sample_date", inplace=True) source_regions_at_date_df["_source_regions_group"] = source_regions_at_date_df. \ source_regions_at_date.apply(lambda x: ",".join(sort_source_regions_for_display(x))) source_regions_at_date_df.tail() #%% source_regions_for_summary_df_ = \ source_regions_at_date_df[["sample_date", "_source_regions_group"]].copy() source_regions_for_summary_df_.rename(columns={"_source_regions_group": "source_regions"}, inplace=True) source_regions_for_summary_df_.tail() #%% confirmed_output_columns = ["sample_date", "new_cases", "covid_cases"] confirmed_output_df = pd.DataFrame(columns=confirmed_output_columns) for source_regions_group, source_regions_group_series in \ source_regions_at_date_df.groupby("_source_regions_group"): source_regions_set = set(source_regions_group.split(",")) confirmed_source_regions_set_df = \ confirmed_df[confirmed_df.country_code.isin(source_regions_set)].copy() confirmed_source_regions_group_df = \ confirmed_source_regions_set_df.groupby("sample_date").new_cases.sum() \ .reset_index().sort_values("sample_date") confirmed_source_regions_group_df = \ confirmed_source_regions_group_df.merge( confirmed_days_df[["sample_date_string"]].rename( columns={"sample_date_string": "sample_date"}), how="right") confirmed_source_regions_group_df["new_cases"] = \ confirmed_source_regions_group_df["new_cases"].clip(lower=0) confirmed_source_regions_group_df["covid_cases"] = \ confirmed_source_regions_group_df.new_cases.rolling(7, min_periods=0).mean().round() confirmed_source_regions_group_df = \ confirmed_source_regions_group_df[confirmed_output_columns] confirmed_source_regions_group_df = 
confirmed_source_regions_group_df.replace(0, np.nan) confirmed_source_regions_group_df.fillna(method="ffill", inplace=True) confirmed_source_regions_group_df = \ confirmed_source_regions_group_df[ confirmed_source_regions_group_df.sample_date.isin( source_regions_group_series.sample_date_string)] confirmed_output_df = confirmed_output_df.append(confirmed_source_regions_group_df) result_df = confirmed_output_df.copy() result_df.tail() #%% result_df.rename(columns={"sample_date": "sample_date_string"}, inplace=True) result_df = confirmed_days_df[["sample_date_string"]].merge(result_df, how="left") result_df.sort_values("sample_date_string", inplace=True) result_df.fillna(method="ffill", inplace=True) result_df.tail() #%% result_df[["new_cases", "covid_cases"]].plot() if columns_suffix: result_df.rename( columns={ "new_cases": "new_cases_" + columns_suffix, "covid_cases": "covid_cases_" + columns_suffix}, inplace=True) return result_df, source_regions_for_summary_df_ confirmed_eu_df, source_regions_for_summary_df = get_cases_dataframe( report_backend_client.source_regions_for_date) confirmed_es_df, _ = get_cases_dataframe( lambda date: [spain_region_country_code], columns_suffix=spain_region_country_code.lower())
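The smoothing step above clips negative daily corrections to zero and then applies `rolling(7, min_periods=0).mean().round()`. A plain-Python sketch of the same computation (the function name is hypothetical):

```python
def clip_and_smooth(new_cases):
    """Clip negatives to zero, then take a trailing 7-day mean with a
    growing window at the start, mirroring rolling(7, min_periods=0)."""
    clipped = [max(0, x) for x in new_cases]
    smoothed = []
    for i in range(len(clipped)):
        window = clipped[max(0, i - 6): i + 1]
        smoothed.append(round(sum(window) / len(window)))
    return smoothed
```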
Extract API TEKs
raw_zip_path_prefix = "Data/TEKs/Raw/" base_backend_identifiers = [report_backend_identifier] multi_backend_exposure_keys_df = \ exposure_notification_io.download_exposure_keys_from_backends( backend_identifiers=report_backend_identifiers, generation_days=backend_generation_days, fail_on_error_backend_identifiers=base_backend_identifiers, save_raw_zip_path_prefix=raw_zip_path_prefix) multi_backend_exposure_keys_df["region"] = multi_backend_exposure_keys_df["backend_identifier"] multi_backend_exposure_keys_df.rename( columns={ "generation_datetime": "sample_datetime", "generation_date_string": "sample_date_string", }, inplace=True) multi_backend_exposure_keys_df.head() early_teks_df = multi_backend_exposure_keys_df[ multi_backend_exposure_keys_df.rolling_period < 144].copy() early_teks_df["rolling_period_in_hours"] = early_teks_df.rolling_period / 6 early_teks_df[early_teks_df.sample_date_string != extraction_date] \ .rolling_period_in_hours.hist(bins=list(range(24))) early_teks_df[early_teks_df.sample_date_string == extraction_date] \ .rolling_period_in_hours.hist(bins=list(range(24))) multi_backend_exposure_keys_df = multi_backend_exposure_keys_df[[ "sample_date_string", "region", "key_data"]] multi_backend_exposure_keys_df.head() active_regions = \ multi_backend_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist() active_regions multi_backend_summary_df = multi_backend_exposure_keys_df.groupby( ["sample_date_string", "region"]).key_data.nunique().reset_index() \ .pivot(index="sample_date_string", columns="region") \ .sort_index(ascending=False) multi_backend_summary_df.rename( columns={"key_data": "shared_teks_by_generation_date"}, inplace=True) multi_backend_summary_df.rename_axis("sample_date", inplace=True) multi_backend_summary_df = multi_backend_summary_df.fillna(0).astype(int) multi_backend_summary_df = multi_backend_summary_df.head(backend_generation_days) multi_backend_summary_df.head() def 
compute_keys_cross_sharing(x): teks_x = x.key_data_x.item() common_teks = set(teks_x).intersection(x.key_data_y.item()) common_teks_fraction = len(common_teks) / len(teks_x) return pd.Series(dict( common_teks=common_teks, common_teks_fraction=common_teks_fraction, )) multi_backend_exposure_keys_by_region_df = \ multi_backend_exposure_keys_df.groupby("region").key_data.unique().reset_index() multi_backend_exposure_keys_by_region_df["_merge"] = True multi_backend_exposure_keys_by_region_combination_df = \ multi_backend_exposure_keys_by_region_df.merge( multi_backend_exposure_keys_by_region_df, on="_merge") multi_backend_exposure_keys_by_region_combination_df.drop( columns=["_merge"], inplace=True) if multi_backend_exposure_keys_by_region_combination_df.region_x.nunique() > 1: multi_backend_exposure_keys_by_region_combination_df = \ multi_backend_exposure_keys_by_region_combination_df[ multi_backend_exposure_keys_by_region_combination_df.region_x != multi_backend_exposure_keys_by_region_combination_df.region_y] multi_backend_exposure_keys_cross_sharing_df = \ multi_backend_exposure_keys_by_region_combination_df \ .groupby(["region_x", "region_y"]) \ .apply(compute_keys_cross_sharing) \ .reset_index() multi_backend_cross_sharing_summary_df = \ multi_backend_exposure_keys_cross_sharing_df.pivot_table( values=["common_teks_fraction"], columns="region_x", index="region_y", aggfunc=lambda x: x.item()) multi_backend_cross_sharing_summary_df multi_backend_without_active_region_exposure_keys_df = \ multi_backend_exposure_keys_df[multi_backend_exposure_keys_df.region != report_backend_identifier] multi_backend_without_active_region = \ multi_backend_without_active_region_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist() multi_backend_without_active_region exposure_keys_summary_df = multi_backend_exposure_keys_df[ multi_backend_exposure_keys_df.region == report_backend_identifier] exposure_keys_summary_df.drop(columns=["region"], 
inplace=True) exposure_keys_summary_df = \ exposure_keys_summary_df.groupby(["sample_date_string"]).key_data.nunique().to_frame() exposure_keys_summary_df = \ exposure_keys_summary_df.reset_index().set_index("sample_date_string") exposure_keys_summary_df.sort_index(ascending=False, inplace=True) exposure_keys_summary_df.rename(columns={"key_data": "shared_teks_by_generation_date"}, inplace=True) exposure_keys_summary_df.head()
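The cross-sharing computation above boils down to a set intersection per backend pair. A minimal sketch (with an empty-set guard added here that the notebook's `compute_keys_cross_sharing` assumes away):

```python
def common_teks_fraction(teks_a, teks_b):
    """Fraction of backend A's TEKs that are also published by backend B."""
    teks_a, teks_b = set(teks_a), set(teks_b)
    if not teks_a:
        return 0.0
    return len(teks_a & teks_b) / len(teks_a)
```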
/opt/hostedtoolcache/Python/3.8.8/x64/lib/python3.8/site-packages/pandas/core/frame.py:4110: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy return super().drop(
Dump API TEKs
tek_list_df = multi_backend_exposure_keys_df[ ["sample_date_string", "region", "key_data"]].copy() tek_list_df["key_data"] = tek_list_df["key_data"].apply(str) tek_list_df.rename(columns={ "sample_date_string": "sample_date", "key_data": "tek_list"}, inplace=True) tek_list_df = tek_list_df.groupby( ["sample_date", "region"]).tek_list.unique().reset_index() tek_list_df["extraction_date"] = extraction_date tek_list_df["extraction_date_with_hour"] = extraction_date_with_hour tek_list_path_prefix = "Data/TEKs/" tek_list_current_path = tek_list_path_prefix + f"/Current/RadarCOVID-TEKs.json" tek_list_daily_path = tek_list_path_prefix + f"Daily/RadarCOVID-TEKs-{extraction_date}.json" tek_list_hourly_path = tek_list_path_prefix + f"Hourly/RadarCOVID-TEKs-{extraction_date_with_hour}.json" for path in [tek_list_current_path, tek_list_daily_path, tek_list_hourly_path]: os.makedirs(os.path.dirname(path), exist_ok=True) tek_list_base_df = tek_list_df[tek_list_df.region == report_backend_identifier] tek_list_base_df.drop(columns=["extraction_date", "extraction_date_with_hour"]).to_json( tek_list_current_path, lines=True, orient="records") tek_list_base_df.drop(columns=["extraction_date_with_hour"]).to_json( tek_list_daily_path, lines=True, orient="records") tek_list_base_df.to_json( tek_list_hourly_path, lines=True, orient="records") tek_list_base_df.head()
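The dump step above writes JSON-lines files (one record per line) after creating the target directories. A self-contained sketch of that pattern, using a temporary directory instead of `Data/TEKs/`:

```python
import json
import os
import tempfile

def dump_json_lines(records, path):
    """Write one JSON object per line, creating parent directories first."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        for record in records:
            f.write(json.dumps(record) + "\n")

records = [
    {"sample_date": "2021-03-10", "region": "ES", "tek_list": ["a", "b"]},
    {"sample_date": "2021-03-11", "region": "ES", "tek_list": ["c"]},
]
out_path = os.path.join(
    tempfile.mkdtemp(), "Daily", "RadarCOVID-TEKs-2021-03-11.json")
dump_json_lines(records, out_path)
loaded = [json.loads(line) for line in open(out_path)]
```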
Load TEK Dumps
import glob def load_extracted_teks(mode, region=None, limit=None) -> pd.DataFrame: extracted_teks_df = pd.DataFrame(columns=["region"]) file_paths = list(reversed(sorted(glob.glob(tek_list_path_prefix + mode + "/RadarCOVID-TEKs-*.json")))) if limit: file_paths = file_paths[:limit] for file_path in file_paths: logging.info(f"Loading TEKs from '{file_path}'...") iteration_extracted_teks_df = pd.read_json(file_path, lines=True) extracted_teks_df = extracted_teks_df.append( iteration_extracted_teks_df, sort=False) extracted_teks_df["region"] = \ extracted_teks_df.region.fillna(spain_region_country_code).copy() if region: extracted_teks_df = \ extracted_teks_df[extracted_teks_df.region == region] return extracted_teks_df daily_extracted_teks_df = load_extracted_teks( mode="Daily", region=report_backend_identifier, limit=tek_dumps_load_limit) daily_extracted_teks_df.head() exposure_keys_summary_df_ = daily_extracted_teks_df \ .sort_values("extraction_date", ascending=False) \ .groupby("sample_date").tek_list.first() \ .to_frame() exposure_keys_summary_df_.index.name = "sample_date_string" exposure_keys_summary_df_["tek_list"] = \ exposure_keys_summary_df_.tek_list.apply(len) exposure_keys_summary_df_ = exposure_keys_summary_df_ \ .rename(columns={"tek_list": "shared_teks_by_generation_date"}) \ .sort_index(ascending=False) exposure_keys_summary_df = exposure_keys_summary_df_ exposure_keys_summary_df.head()
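`load_extracted_teks` selects the most recent dumps by reversing a sorted glob and slicing to `limit`. That ordering works because the ISO date embedded in each file name sorts lexicographically; a sketch with a hypothetical helper name:

```python
def newest_dump_paths(paths, limit=None):
    """Order dump paths newest first (the ISO date in the file name sorts
    lexicographically) and optionally keep only the most recent ones."""
    ordered = list(reversed(sorted(paths)))
    return ordered[:limit] if limit else ordered

paths = [
    "Data/TEKs/Daily/RadarCOVID-TEKs-2021-03-09.json",
    "Data/TEKs/Daily/RadarCOVID-TEKs-2021-03-11.json",
    "Data/TEKs/Daily/RadarCOVID-TEKs-2021-03-10.json",
]
```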
Daily New TEKs
tek_list_df = daily_extracted_teks_df.groupby("extraction_date").tek_list.apply( lambda x: set(sum(x, []))).reset_index() tek_list_df = tek_list_df.set_index("extraction_date").sort_index(ascending=True) tek_list_df.head() def compute_teks_by_generation_and_upload_date(date): day_new_teks_set_df = tek_list_df.copy().diff() try: day_new_teks_set = day_new_teks_set_df[ day_new_teks_set_df.index == date].tek_list.item() except ValueError: day_new_teks_set = None if pd.isna(day_new_teks_set): day_new_teks_set = set() day_new_teks_df = daily_extracted_teks_df[ daily_extracted_teks_df.extraction_date == date].copy() day_new_teks_df["shared_teks"] = \ day_new_teks_df.tek_list.apply(lambda x: set(x).intersection(day_new_teks_set)) day_new_teks_df["shared_teks"] = \ day_new_teks_df.shared_teks.apply(len) day_new_teks_df["upload_date"] = date day_new_teks_df.rename(columns={"sample_date": "generation_date"}, inplace=True) day_new_teks_df = day_new_teks_df[ ["upload_date", "generation_date", "shared_teks"]] day_new_teks_df["generation_to_upload_days"] = \ (pd.to_datetime(day_new_teks_df.upload_date) - pd.to_datetime(day_new_teks_df.generation_date)).dt.days day_new_teks_df = day_new_teks_df[day_new_teks_df.shared_teks > 0] return day_new_teks_df shared_teks_generation_to_upload_df = pd.DataFrame() for upload_date in daily_extracted_teks_df.extraction_date.unique(): shared_teks_generation_to_upload_df = \ shared_teks_generation_to_upload_df.append( compute_teks_by_generation_and_upload_date(date=upload_date)) shared_teks_generation_to_upload_df \ .sort_values(["upload_date", "generation_date"], ascending=False, inplace=True) shared_teks_generation_to_upload_df.tail() today_new_teks_df = \ shared_teks_generation_to_upload_df[ shared_teks_generation_to_upload_df.upload_date == extraction_date].copy() today_new_teks_df.tail() if not today_new_teks_df.empty: today_new_teks_df.set_index("generation_to_upload_days") \ .sort_index().shared_teks.plot.bar() 
generation_to_upload_period_pivot_df = \ shared_teks_generation_to_upload_df[ ["upload_date", "generation_to_upload_days", "shared_teks"]] \ .pivot(index="upload_date", columns="generation_to_upload_days") \ .sort_index(ascending=False).fillna(0).astype(int) \ .droplevel(level=0, axis=1) generation_to_upload_period_pivot_df.head() new_tek_df = tek_list_df.diff().tek_list.apply( lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index() new_tek_df.rename(columns={ "tek_list": "shared_teks_by_upload_date", "extraction_date": "sample_date_string",}, inplace=True) new_tek_df.tail() shared_teks_uploaded_on_generation_date_df = shared_teks_generation_to_upload_df[ shared_teks_generation_to_upload_df.generation_to_upload_days == 0] \ [["upload_date", "shared_teks"]].rename( columns={ "upload_date": "sample_date_string", "shared_teks": "shared_teks_uploaded_on_generation_date", }) shared_teks_uploaded_on_generation_date_df.head() estimated_shared_diagnoses_df = shared_teks_generation_to_upload_df \ .groupby(["upload_date"]).shared_teks.max().reset_index() \ .sort_values(["upload_date"], ascending=False) \ .rename(columns={ "upload_date": "sample_date_string", "shared_teks": "shared_diagnoses", }) invalid_shared_diagnoses_dates_mask = \ estimated_shared_diagnoses_df.sample_date_string.isin(invalid_shared_diagnoses_dates) estimated_shared_diagnoses_df[invalid_shared_diagnoses_dates_mask] = 0 estimated_shared_diagnoses_df.head()
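The "new TEKs per upload date" logic above is a day-over-day set difference on the full TEK set seen at each extraction. A plain-Python sketch (function name hypothetical):

```python
def new_teks_per_extraction(teks_by_extraction_date):
    """Count TEKs first seen on each extraction date, given the full TEK
    set observed per date (the day-over-day set difference above)."""
    counts = {}
    previous = None
    for date in sorted(teks_by_extraction_date):
        current = set(teks_by_extraction_date[date])
        # The first date has no baseline, matching DataFrame.diff().
        counts[date] = None if previous is None else len(current - previous)
        previous = current
    return counts

counts = new_teks_per_extraction({
    "2021-03-10": ["a", "b"],
    "2021-03-11": ["a", "b", "c", "d"],
})
```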
Hourly New TEKs
hourly_extracted_teks_df = load_extracted_teks( mode="Hourly", region=report_backend_identifier, limit=25) hourly_extracted_teks_df.head() hourly_new_tek_count_df = hourly_extracted_teks_df \ .groupby("extraction_date_with_hour").tek_list. \ apply(lambda x: set(sum(x, []))).reset_index().copy() hourly_new_tek_count_df = hourly_new_tek_count_df.set_index("extraction_date_with_hour") \ .sort_index(ascending=True) hourly_new_tek_count_df["new_tek_list"] = hourly_new_tek_count_df.tek_list.diff() hourly_new_tek_count_df["new_tek_count"] = hourly_new_tek_count_df.new_tek_list.apply( lambda x: len(x) if not pd.isna(x) else 0) hourly_new_tek_count_df.rename(columns={ "new_tek_count": "shared_teks_by_upload_date"}, inplace=True) hourly_new_tek_count_df = hourly_new_tek_count_df.reset_index()[[ "extraction_date_with_hour", "shared_teks_by_upload_date"]] hourly_new_tek_count_df.head() hourly_summary_df = hourly_new_tek_count_df.copy() hourly_summary_df.set_index("extraction_date_with_hour", inplace=True) hourly_summary_df = hourly_summary_df.fillna(0).astype(int).reset_index() hourly_summary_df["datetime_utc"] = pd.to_datetime( hourly_summary_df.extraction_date_with_hour, format="%Y-%m-%d@%H") hourly_summary_df.set_index("datetime_utc", inplace=True) hourly_summary_df = hourly_summary_df.tail(-1) hourly_summary_df.head()
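The hourly variant applies the same set difference between consecutive snapshots, parsing the `%Y-%m-%d@%H` stamps for ordering and dropping the baseline-less first snapshot as `tail(-1)` does. A sketch under those assumptions:

```python
import datetime

def hourly_new_tek_counts(teks_by_hour):
    """Newly observed TEKs per hourly snapshot; the first snapshot has
    no baseline and is dropped."""
    hours = sorted(
        teks_by_hour,
        key=lambda s: datetime.datetime.strptime(s, "%Y-%m-%d@%H"))
    return {
        hour: len(set(teks_by_hour[hour]) - set(teks_by_hour[previous]))
        for previous, hour in zip(hours, hours[1:])
    }

counts = hourly_new_tek_counts({
    "2021-03-11@12": ["a"],
    "2021-03-11@13": ["a", "b"],
    "2021-03-11@14": ["a", "b", "c", "d"],
})
```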
Official Statistics
import requests import pandas.io.json official_stats_response = requests.get("https://radarcovid.covid19.gob.es/kpi/statistics/basics") official_stats_response.raise_for_status() official_stats_df_ = pandas.io.json.json_normalize(official_stats_response.json()) official_stats_df = official_stats_df_.copy() official_stats_df["date"] = pd.to_datetime(official_stats_df["date"], dayfirst=True) official_stats_df.head() official_stats_column_map = { "date": "sample_date", "applicationsDownloads.totalAcummulated": "app_downloads_es_accumulated", "communicatedContagions.totalAcummulated": "shared_diagnoses_es_accumulated", } accumulated_suffix = "_accumulated" accumulated_values_columns = \ list(filter(lambda x: x.endswith(accumulated_suffix), official_stats_column_map.values())) interpolated_values_columns = \ list(map(lambda x: x[:-len(accumulated_suffix)], accumulated_values_columns)) official_stats_df = \ official_stats_df[official_stats_column_map.keys()] \ .rename(columns=official_stats_column_map) official_stats_df["extraction_date"] = extraction_date official_stats_df.head() official_stats_path = "Data/Statistics/Current/RadarCOVID-Statistics.json" previous_official_stats_df = pd.read_json(official_stats_path, orient="records", lines=True) previous_official_stats_df["sample_date"] = pd.to_datetime(previous_official_stats_df["sample_date"], dayfirst=True) official_stats_df = official_stats_df.append(previous_official_stats_df) official_stats_df.head() official_stats_df = official_stats_df[~(official_stats_df.shared_diagnoses_es_accumulated == 0)] official_stats_df.sort_values("extraction_date", ascending=False, inplace=True) official_stats_df.drop_duplicates(subset=["sample_date"], keep="first", inplace=True) official_stats_df.head() official_stats_stored_df = official_stats_df.copy() official_stats_stored_df["sample_date"] = official_stats_stored_df.sample_date.dt.strftime("%Y-%m-%d") official_stats_stored_df.to_json(official_stats_path, orient="records", 
lines=True) official_stats_df.drop(columns=["extraction_date"], inplace=True) official_stats_df = confirmed_days_df.merge(official_stats_df, how="left") official_stats_df.sort_values("sample_date", ascending=False, inplace=True) official_stats_df.head() official_stats_df[accumulated_values_columns] = \ official_stats_df[accumulated_values_columns] \ .astype(float).interpolate(limit_area="inside") official_stats_df[interpolated_values_columns] = \ official_stats_df[accumulated_values_columns].diff(periods=-1) official_stats_df.drop(columns="sample_date", inplace=True) official_stats_df.head()
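The official statistics arrive as accumulated totals, which the cell converts to daily values by interpolating gaps (`limit_area="inside"`) and differencing. A plain-Python sketch over an ascending series (the notebook sorts descending and uses `diff(periods=-1)`, which yields the same daily values):

```python
def interpolate_inside(values):
    """Linearly fill None gaps strictly between known values,
    mirroring interpolate(limit_area='inside')."""
    result = list(values)
    known = [i for i, v in enumerate(result) if v is not None]
    for left, right in zip(known, known[1:]):
        step = (result[right] - result[left]) / (right - left)
        for i in range(left + 1, right):
            result[i] = result[left] + step * (i - left)
    return result

def daily_from_accumulated(accumulated):
    """Daily values as first differences of an accumulated series."""
    filled = interpolate_inside(accumulated)
    return [None if (a is None or b is None) else b - a
            for a, b in zip(filled, filled[1:])]
```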
Data Merge
result_summary_df = exposure_keys_summary_df.merge( new_tek_df, on=["sample_date_string"], how="outer") result_summary_df.head() result_summary_df = result_summary_df.merge( shared_teks_uploaded_on_generation_date_df, on=["sample_date_string"], how="outer") result_summary_df.head() result_summary_df = result_summary_df.merge( estimated_shared_diagnoses_df, on=["sample_date_string"], how="outer") result_summary_df.head() result_summary_df = result_summary_df.merge( official_stats_df, on=["sample_date_string"], how="outer") result_summary_df.head() result_summary_df = confirmed_eu_df.tail(daily_summary_days).merge( result_summary_df, on=["sample_date_string"], how="left") result_summary_df.head() result_summary_df = confirmed_es_df.tail(daily_summary_days).merge( result_summary_df, on=["sample_date_string"], how="left") result_summary_df.head() result_summary_df["sample_date"] = pd.to_datetime(result_summary_df.sample_date_string) result_summary_df = result_summary_df.merge(source_regions_for_summary_df, how="left") result_summary_df.set_index(["sample_date", "source_regions"], inplace=True) result_summary_df.drop(columns=["sample_date_string"], inplace=True) result_summary_df.sort_index(ascending=False, inplace=True) result_summary_df.head() with pd.option_context("mode.use_inf_as_na", True): result_summary_df = result_summary_df.fillna(0).astype(int) result_summary_df["teks_per_shared_diagnosis"] = \ (result_summary_df.shared_teks_by_upload_date / result_summary_df.shared_diagnoses).fillna(0) result_summary_df["shared_diagnoses_per_covid_case"] = \ (result_summary_df.shared_diagnoses / result_summary_df.covid_cases).fillna(0) result_summary_df["shared_diagnoses_per_covid_case_es"] = \ (result_summary_df.shared_diagnoses_es / result_summary_df.covid_cases_es).fillna(0) result_summary_df.head(daily_plot_days) def compute_aggregated_results_summary(days) -> pd.DataFrame: aggregated_result_summary_df = result_summary_df.copy() 
aggregated_result_summary_df["covid_cases_for_ratio"] = \ aggregated_result_summary_df.covid_cases.mask( aggregated_result_summary_df.shared_diagnoses == 0, 0) aggregated_result_summary_df["covid_cases_for_ratio_es"] = \ aggregated_result_summary_df.covid_cases_es.mask( aggregated_result_summary_df.shared_diagnoses_es == 0, 0) aggregated_result_summary_df = aggregated_result_summary_df \ .sort_index(ascending=True).fillna(0).rolling(days).agg({ "covid_cases": "sum", "covid_cases_es": "sum", "covid_cases_for_ratio": "sum", "covid_cases_for_ratio_es": "sum", "shared_teks_by_generation_date": "sum", "shared_teks_by_upload_date": "sum", "shared_diagnoses": "sum", "shared_diagnoses_es": "sum", }).sort_index(ascending=False) with pd.option_context("mode.use_inf_as_na", True): aggregated_result_summary_df = aggregated_result_summary_df.fillna(0).astype(int) aggregated_result_summary_df["teks_per_shared_diagnosis"] = \ (aggregated_result_summary_df.shared_teks_by_upload_date / aggregated_result_summary_df.covid_cases_for_ratio).fillna(0) aggregated_result_summary_df["shared_diagnoses_per_covid_case"] = \ (aggregated_result_summary_df.shared_diagnoses / aggregated_result_summary_df.covid_cases_for_ratio).fillna(0) aggregated_result_summary_df["shared_diagnoses_per_covid_case_es"] = \ (aggregated_result_summary_df.shared_diagnoses_es / aggregated_result_summary_df.covid_cases_for_ratio_es).fillna(0) return aggregated_result_summary_df aggregated_result_with_7_days_window_summary_df = compute_aggregated_results_summary(days=7) aggregated_result_with_7_days_window_summary_df.head() last_7_days_summary = aggregated_result_with_7_days_window_summary_df.to_dict(orient="records")[1] last_7_days_summary aggregated_result_with_14_days_window_summary_df = compute_aggregated_results_summary(days=13) last_14_days_summary = aggregated_result_with_14_days_window_summary_df.to_dict(orient="records")[1] last_14_days_summary
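The `covid_cases_for_ratio` masking above excludes days with zero shared diagnoses from the case denominator before computing the windowed usage ratio. A minimal sketch of that ratio (function name hypothetical):

```python
def windowed_usage_ratio(covid_cases, shared_diagnoses):
    """Window-level usage ratio: days with zero shared diagnoses are
    excluded from the case denominator, as in covid_cases_for_ratio."""
    cases_for_ratio = sum(
        c for c, d in zip(covid_cases, shared_diagnoses) if d > 0)
    if cases_for_ratio == 0:
        return 0.0
    return sum(shared_diagnoses) / cases_for_ratio
```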
Report Results
display_column_name_mapping = { "sample_date": "Sample\u00A0Date\u00A0(UTC)", "source_regions": "Source Countries", "datetime_utc": "Timestamp (UTC)", "upload_date": "Upload Date (UTC)", "generation_to_upload_days": "Generation to Upload Period in Days", "region": "Backend", "region_x": "Backend\u00A0(A)", "region_y": "Backend\u00A0(B)", "common_teks": "Common TEKs Shared Between Backends", "common_teks_fraction": "Fraction of TEKs in Backend (A) Available in Backend (B)", "covid_cases": "COVID-19 Cases (Source Countries)", "shared_teks_by_generation_date": "Shared TEKs by Generation Date (Source Countries)", "shared_teks_by_upload_date": "Shared TEKs by Upload Date (Source Countries)", "shared_teks_uploaded_on_generation_date": "Shared TEKs Uploaded on Generation Date (Source Countries)", "shared_diagnoses": "Shared Diagnoses (Source Countries – Estimation)", "teks_per_shared_diagnosis": "TEKs Uploaded per Shared Diagnosis (Source Countries)", "shared_diagnoses_per_covid_case": "Usage Ratio (Source Countries)", "covid_cases_es": "COVID-19 Cases (Spain)", "app_downloads_es": "App Downloads (Spain – Official)", "shared_diagnoses_es": "Shared Diagnoses (Spain – Official)", "shared_diagnoses_per_covid_case_es": "Usage Ratio (Spain)", } summary_columns = [ "covid_cases", "shared_teks_by_generation_date", "shared_teks_by_upload_date", "shared_teks_uploaded_on_generation_date", "shared_diagnoses", "teks_per_shared_diagnosis", "shared_diagnoses_per_covid_case", "covid_cases_es", "app_downloads_es", "shared_diagnoses_es", "shared_diagnoses_per_covid_case_es", ] summary_percentage_columns= [ "shared_diagnoses_per_covid_case_es", "shared_diagnoses_per_covid_case", ]
Daily Summary Table
result_summary_df_ = result_summary_df.copy() result_summary_df = result_summary_df[summary_columns] result_summary_with_display_names_df = result_summary_df \ .rename_axis(index=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) result_summary_with_display_names_df
Daily Summary Plots
result_plot_summary_df = result_summary_df.head(daily_plot_days)[summary_columns] \ .droplevel(level=["source_regions"]) \ .rename_axis(index=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) summary_ax_list = result_plot_summary_df.sort_index(ascending=True).plot.bar( title=f"Daily Summary", rot=45, subplots=True, figsize=(15, 30), legend=False) ax_ = summary_ax_list[0] ax_.get_figure().tight_layout() ax_.get_figure().subplots_adjust(top=0.95) _ = ax_.set_xticklabels(sorted(result_plot_summary_df.index.strftime("%Y-%m-%d").tolist())) for percentage_column in summary_percentage_columns: percentage_column_index = summary_columns.index(percentage_column) summary_ax_list[percentage_column_index].yaxis \ .set_major_formatter(matplotlib.ticker.PercentFormatter(1.0))
/opt/hostedtoolcache/Python/3.8.8/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:307: MatplotlibDeprecationWarning: The rowNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().rowspan.start instead. layout[ax.rowNum, ax.colNum] = ax.get_visible() /opt/hostedtoolcache/Python/3.8.8/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:307: MatplotlibDeprecationWarning: The colNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().colspan.start instead. layout[ax.rowNum, ax.colNum] = ax.get_visible() /opt/hostedtoolcache/Python/3.8.8/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:313: MatplotlibDeprecationWarning: The rowNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().rowspan.start instead. if not layout[ax.rowNum + 1, ax.colNum]: /opt/hostedtoolcache/Python/3.8.8/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:313: MatplotlibDeprecationWarning: The colNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().colspan.start instead. if not layout[ax.rowNum + 1, ax.colNum]:
Daily Generation to Upload Period Table
display_generation_to_upload_period_pivot_df = \ generation_to_upload_period_pivot_df \ .head(backend_generation_days) display_generation_to_upload_period_pivot_df \ .head(backend_generation_days) \ .rename_axis(columns=display_column_name_mapping) \ .rename_axis(index=display_column_name_mapping) fig, generation_to_upload_period_pivot_table_ax = plt.subplots( figsize=(12, 1 + 0.6 * len(display_generation_to_upload_period_pivot_df))) generation_to_upload_period_pivot_table_ax.set_title( "Shared TEKs Generation to Upload Period Table") sns.heatmap( data=display_generation_to_upload_period_pivot_df .rename_axis(columns=display_column_name_mapping) .rename_axis(index=display_column_name_mapping), fmt=".0f", annot=True, ax=generation_to_upload_period_pivot_table_ax) generation_to_upload_period_pivot_table_ax.get_figure().tight_layout()
Hourly Summary Plots
hourly_summary_ax_list = hourly_summary_df \ .rename_axis(index=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) \ .plot.bar( title=f"Last 24h Summary", rot=45, subplots=True, legend=False) ax_ = hourly_summary_ax_list[-1] ax_.get_figure().tight_layout() ax_.get_figure().subplots_adjust(top=0.9) _ = ax_.set_xticklabels(sorted(hourly_summary_df.index.strftime("%Y-%m-%d@%H").tolist()))
Publish Results
github_repository = os.environ.get("GITHUB_REPOSITORY") if github_repository is None: github_repository = "pvieito/Radar-STATS" github_project_base_url = "https://github.com/" + github_repository display_formatters = { display_column_name_mapping["teks_per_shared_diagnosis"]: lambda x: f"{x:.2f}" if x != 0 else "", display_column_name_mapping["shared_diagnoses_per_covid_case"]: lambda x: f"{x:.2%}" if x != 0 else "", display_column_name_mapping["shared_diagnoses_per_covid_case_es"]: lambda x: f"{x:.2%}" if x != 0 else "", } general_columns = \ list(filter(lambda x: x not in display_formatters, display_column_name_mapping.values())) general_formatter = lambda x: f"{x}" if x != 0 else "" display_formatters.update(dict(map(lambda x: (x, general_formatter), general_columns))) daily_summary_table_html = result_summary_with_display_names_df \ .head(daily_plot_days) \ .rename_axis(index=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) \ .to_html(formatters=display_formatters) multi_backend_summary_table_html = multi_backend_summary_df \ .head(daily_plot_days) \ .rename_axis(columns=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) \ .rename_axis(index=display_column_name_mapping) \ .to_html(formatters=display_formatters) def format_multi_backend_cross_sharing_fraction(x): if pd.isna(x): return "-" elif round(x * 100, 1) == 0: return "" else: return f"{x:.1%}" multi_backend_cross_sharing_summary_table_html = multi_backend_cross_sharing_summary_df \ .rename_axis(columns=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) \ .rename_axis(index=display_column_name_mapping) \ .to_html( classes="table-center", formatters=display_formatters, float_format=format_multi_backend_cross_sharing_fraction) multi_backend_cross_sharing_summary_table_html = \ multi_backend_cross_sharing_summary_table_html \ .replace("<tr>","<tr style=\"text-align: center;\">") extraction_date_result_summary_df = \ 
result_summary_df[result_summary_df.index.get_level_values("sample_date") == extraction_date] extraction_date_result_hourly_summary_df = \ hourly_summary_df[hourly_summary_df.extraction_date_with_hour == extraction_date_with_hour] covid_cases = \ extraction_date_result_summary_df.covid_cases.item() shared_teks_by_generation_date = \ extraction_date_result_summary_df.shared_teks_by_generation_date.item() shared_teks_by_upload_date = \ extraction_date_result_summary_df.shared_teks_by_upload_date.item() shared_diagnoses = \ extraction_date_result_summary_df.shared_diagnoses.item() teks_per_shared_diagnosis = \ extraction_date_result_summary_df.teks_per_shared_diagnosis.item() shared_diagnoses_per_covid_case = \ extraction_date_result_summary_df.shared_diagnoses_per_covid_case.item() shared_teks_by_upload_date_last_hour = \ extraction_date_result_hourly_summary_df.shared_teks_by_upload_date.sum().astype(int) display_source_regions = ", ".join(report_source_regions) if len(report_source_regions) == 1: display_brief_source_regions = report_source_regions[0] else: display_brief_source_regions = f"{len(report_source_regions)} 🇪🇺" def get_temporary_image_path() -> str: return os.path.join(tempfile.gettempdir(), str(uuid.uuid4()) + ".png") def save_temporary_plot_image(ax): if isinstance(ax, np.ndarray): ax = ax[0] media_path = get_temporary_image_path() ax.get_figure().savefig(media_path) return media_path def save_temporary_dataframe_image(df): import dataframe_image as dfi df = df.copy() df_styler = df.style.format(display_formatters) media_path = get_temporary_image_path() dfi.export(df_styler, media_path) return media_path summary_plots_image_path = save_temporary_plot_image( ax=summary_ax_list) summary_table_image_path = save_temporary_dataframe_image( df=result_summary_with_display_names_df) hourly_summary_plots_image_path = save_temporary_plot_image( ax=hourly_summary_ax_list) multi_backend_summary_table_image_path = save_temporary_dataframe_image( 
df=multi_backend_summary_df) generation_to_upload_period_pivot_table_image_path = save_temporary_plot_image( ax=generation_to_upload_period_pivot_table_ax)
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-03-11.ipynb
pvieito/Radar-STATS
Save Results
report_resources_path_prefix = "Data/Resources/Current/RadarCOVID-Report-" result_summary_df.to_csv( report_resources_path_prefix + "Summary-Table.csv") result_summary_df.to_html( report_resources_path_prefix + "Summary-Table.html") hourly_summary_df.to_csv( report_resources_path_prefix + "Hourly-Summary-Table.csv") multi_backend_summary_df.to_csv( report_resources_path_prefix + "Multi-Backend-Summary-Table.csv") multi_backend_cross_sharing_summary_df.to_csv( report_resources_path_prefix + "Multi-Backend-Cross-Sharing-Summary-Table.csv") generation_to_upload_period_pivot_df.to_csv( report_resources_path_prefix + "Generation-Upload-Period-Table.csv") _ = shutil.copyfile( summary_plots_image_path, report_resources_path_prefix + "Summary-Plots.png") _ = shutil.copyfile( summary_table_image_path, report_resources_path_prefix + "Summary-Table.png") _ = shutil.copyfile( hourly_summary_plots_image_path, report_resources_path_prefix + "Hourly-Summary-Plots.png") _ = shutil.copyfile( multi_backend_summary_table_image_path, report_resources_path_prefix + "Multi-Backend-Summary-Table.png") _ = shutil.copyfile( generation_to_upload_period_pivot_table_image_path, report_resources_path_prefix + "Generation-Upload-Period-Table.png")
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-03-11.ipynb
pvieito/Radar-STATS
Publish Results as JSON
def generate_summary_api_results(df: pd.DataFrame) -> list: api_df = df.reset_index().copy() api_df["sample_date_string"] = \ api_df["sample_date"].dt.strftime("%Y-%m-%d") api_df["source_regions"] = \ api_df["source_regions"].apply(lambda x: x.split(",")) return api_df.to_dict(orient="records") summary_api_results = \ generate_summary_api_results(df=result_summary_df) today_summary_api_results = \ generate_summary_api_results(df=extraction_date_result_summary_df)[0] summary_results = dict( backend_identifier=report_backend_identifier, source_regions=report_source_regions, extraction_datetime=extraction_datetime, extraction_date=extraction_date, extraction_date_with_hour=extraction_date_with_hour, last_hour=dict( shared_teks_by_upload_date=shared_teks_by_upload_date_last_hour, shared_diagnoses=0, ), today=today_summary_api_results, last_7_days=last_7_days_summary, last_14_days=last_14_days_summary, daily_results=summary_api_results) summary_results = \ json.loads(pd.Series([summary_results]).to_json(orient="records"))[0] with open(report_resources_path_prefix + "Summary-Results.json", "w") as f: json.dump(summary_results, f, indent=4)
_____no_output_____
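The `pd.Series([...]).to_json(...)` round-trip above is a trick to make datetimes JSON-serializable before dumping. A stdlib-only alternative is `json.dumps` with `default=str`; a minimal sketch with a hypothetical summary dict (the field names here are only illustrative):

```python
import json
from datetime import datetime

# Hypothetical summary dict standing in for summary_results.
summary = {
    "extraction_datetime": datetime(2021, 3, 11, 23, 0),
    "shared_diagnoses": 12,
}

# default=str stringifies anything json cannot serialize natively
# (here the datetime), so no pandas round-trip is needed.
text = json.dumps(summary, default=str, indent=4)
```

The trade-off is that `default=str` uses Python's `str(datetime)` format rather than pandas' ISO/epoch conventions, so pick whichever matches the consumers of the JSON.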
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-03-11.ipynb
pvieito/Radar-STATS
Publish on README
with open("Data/Templates/README.md", "r") as f: readme_contents = f.read() readme_contents = readme_contents.format( extraction_date_with_hour=extraction_date_with_hour, github_project_base_url=github_project_base_url, daily_summary_table_html=daily_summary_table_html, multi_backend_summary_table_html=multi_backend_summary_table_html, multi_backend_cross_sharing_summary_table_html=multi_backend_cross_sharing_summary_table_html, display_source_regions=display_source_regions) with open("README.md", "w") as f: f.write(readme_contents)
_____no_output_____
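The README publishing step relies on `str.format` substituting named placeholders in the template. A minimal sketch with a hypothetical two-placeholder template (the real `Data/Templates/README.md` defines many more):

```python
# Hypothetical template; placeholder names mirror the ones used above.
template = (
    "# RadarCOVID Report ({extraction_date_with_hour})\n"
    "\n"
    "Source regions: {display_source_regions}\n"
)

readme = template.format(
    extraction_date_with_hour="2021-03-11@23:00",
    display_source_regions="Spain",
)
```

One caveat with this approach: any literal `{` or `}` in the template (e.g. inside embedded HTML or JSON) must be doubled as `{{` and `}}`, or `str.format` will raise.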
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-03-11.ipynb
pvieito/Radar-STATS
Publish on Twitter
enable_share_to_twitter = os.environ.get("RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER") github_event_name = os.environ.get("GITHUB_EVENT_NAME") if enable_share_to_twitter and github_event_name == "schedule" and \ (shared_teks_by_upload_date_last_hour or not are_today_results_partial): import tweepy twitter_api_auth_keys = os.environ["RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS"] twitter_api_auth_keys = twitter_api_auth_keys.split(":") auth = tweepy.OAuthHandler(twitter_api_auth_keys[0], twitter_api_auth_keys[1]) auth.set_access_token(twitter_api_auth_keys[2], twitter_api_auth_keys[3]) api = tweepy.API(auth) summary_plots_media = api.media_upload(summary_plots_image_path) summary_table_media = api.media_upload(summary_table_image_path) generation_to_upload_period_pivot_table_image_media = api.media_upload(generation_to_upload_period_pivot_table_image_path) media_ids = [ summary_plots_media.media_id, summary_table_media.media_id, generation_to_upload_period_pivot_table_image_media.media_id, ] if are_today_results_partial: today_addendum = " (Partial)" else: today_addendum = "" def format_shared_diagnoses_per_covid_case(value) -> str: if value == 0: return "–" return f"≤{value:.2%}" display_shared_diagnoses_per_covid_case = \ format_shared_diagnoses_per_covid_case(value=shared_diagnoses_per_covid_case) display_last_14_days_shared_diagnoses_per_covid_case = \ format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case"]) display_last_14_days_shared_diagnoses_per_covid_case_es = \ format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case_es"]) status = textwrap.dedent(f""" #RadarCOVID – {extraction_date_with_hour} Today{today_addendum}: - Uploaded TEKs: {shared_teks_by_upload_date:.0f} ({shared_teks_by_upload_date_last_hour:+d} last hour) - Shared Diagnoses: ≤{shared_diagnoses:.0f} - Usage Ratio: {display_shared_diagnoses_per_covid_case} Last 14 Days: - Usage Ratio (Estimation): 
{display_last_14_days_shared_diagnoses_per_covid_case} - Usage Ratio (Official): {display_last_14_days_shared_diagnoses_per_covid_case_es} Info: {github_project_base_url}#documentation """) status = status.encode(encoding="utf-8") api.update_status(status=status, media_ids=media_ids)
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-03-11.ipynb
pvieito/Radar-STATS
1. Adding Student Details
import time import numpy as np from json import loads, dumps data = {} history = {} reg_no = str(input('Enter your registration no: ')) name = str(input('Name : ')) mail = str(input('Mail-ID : ')) phone = str(input('Phone No : ')) section = str(input('Section : ')) dct = {} dct['name'] = name dct['mail'] = mail dct['phone'] = phone dct['section'] = section data[reg_no] = dct data
_____no_output_____
Apache-2.0
Week - 6/UMS with JSON/UMS with JSON.ipynb
AshishJangra27/Data-Science-Specialization
Saving student details in JSON file
from json import loads, dumps type(data) txt = dumps(data) txt fd = open('data.json','w') fd.write(txt) fd.close()
_____no_output_____
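The open/write/close pattern above works, but a `with` block plus `json.dump` is the more idiomatic (and exception-safe) way to do the same thing; a sketch with an illustrative record and a temp-directory path:

```python
import json
import os
import tempfile

# Illustrative record; the notebook builds `data` from user input instead.
data = {"11602255": {"name": "Rohit", "section": "K1632"}}

# The context manager closes the file even if an exception occurs,
# and indent=4 keeps the saved JSON human-readable.
path = os.path.join(tempfile.gettempdir(), "data.json")
with open(path, "w") as fd:
    json.dump(data, fd, indent=4)

with open(path) as fd:
    loaded = json.load(fd)
```

`json.dump`/`json.load` write to and read from file objects directly, so the intermediate `txt` string is not needed.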
Apache-2.0
Week - 6/UMS with JSON/UMS with JSON.ipynb
AshishJangra27/Data-Science-Specialization
Loading the data from JSON
fd = open('data.json','r') txt = fd.read() fd.close() data = loads(txt)
_____no_output_____
Apache-2.0
Week - 6/UMS with JSON/UMS with JSON.ipynb
AshishJangra27/Data-Science-Specialization
Adding user details in JSON Directly
fd = open('data.json','r') txt = fd.read() fd.close() data = loads(txt) reg_no = str(input('Enter your registration no: ')) name = str(input('Name : ')) mail = str(input('Mail-ID : ')) phone = str(input('Phone No : ')) section = str(input('Section : ')) dct = {} dct['name'] = name dct['mail'] = mail dct['phone'] = phone dct['section'] = section data[reg_no] = dct txt = dumps(data) fd = open('data.json','w') fd.write(txt) fd.close()
Enter your registration no: 11602256 Name : Sahil Mail-ID : sahil@gmail.com Phone No : 857346957834 Section : K1632
Apache-2.0
Week - 6/UMS with JSON/UMS with JSON.ipynb
AshishJangra27/Data-Science-Specialization
Get User Details based on Reg No
fd = open('data.json','r') txt = fd.read() fd.close() data = loads(txt) user_reg = str(input('Enter the registration no: ')) print('-'*35) print('Name : ', data[user_reg]['name']) print('Mail : ', data[user_reg]['mail']) print('Phone : ', data[user_reg]['phone']) print('Section : ', data[user_reg]['section']) print('-'*35)
Enter the registration no: 11602258 ----------------------------------- Name : Shivam Mail : shivam@gmail.com Phone : 8735497534 Section : K1632 -----------------------------------
Apache-2.0
Week - 6/UMS with JSON/UMS with JSON.ipynb
AshishJangra27/Data-Science-Specialization
Get User Details based on Name
fd = open('data.json','r') txt = fd.read() fd.close() data = loads(txt) name = input('Enter the name: ') for key in data.keys(): if(name.lower() == data[key]['name'].lower()): print('-'*35) print("Registration No : ", key) print('Name : ', data[key]['name']) print('Mail : ', data[key]['mail']) print('Phone : ', data[key]['phone']) print('Section : ', data[key]['section']) print('-'*35)
Enter the name: rohit ----------------------------------- Registration No : 11602255 Name : Rohit Mail : rohit@gmail.com Phone : 85739465343 Section : K1632 -----------------------------------
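The linear scan above can be wrapped in a small helper that also handles duplicate names by returning every match; a sketch with illustrative records (`find_by_name` is a hypothetical helper, not part of the notebook):

```python
data = {
    "11602255": {"name": "Rohit", "mail": "rohit@gmail.com"},
    "11602256": {"name": "Sahil", "mail": "sahil@gmail.com"},
}

def find_by_name(data, name):
    # Compare lowercased names so 'rohit', 'Rohit' and 'ROHIT' all match,
    # and collect every matching registration number (names need not be unique).
    return [reg for reg, record in data.items()
            if record["name"].lower() == name.lower()]
```

Returning a list instead of printing inside the loop makes the lookup reusable by other parts of the program.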
Apache-2.0
Week - 6/UMS with JSON/UMS with JSON.ipynb
AshishJangra27/Data-Science-Specialization
Saving Search History in JSON
fd = open('data.json','r') txt = fd.read() fd.close() data = loads(txt) name = input('Enter the name: ') for key in data.keys(): if(name.lower() == data[key]['name'].lower()): print('-'*35) print("Registration No : ", key) print('Name : ', data[key]['name']) print('Mail : ', data[key]['mail']) print('Phone : ', data[key]['phone']) print('Section : ', data[key]['section']) print('-'*35) if (name in history.keys()): history[name]['frequency'] += 1 history[name]['time'] = time.ctime() else: log = {} log['time'] = time.ctime() log['frequency'] = 1 history[name] = log txt = dumps(history) fd = open('History.json','w') fd.write(txt) fd.close()
_____no_output_____
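The increment-or-initialize bookkeeping above can be written more compactly with `dict.setdefault`; a sketch using the same `history` layout (`log_search` is a hypothetical helper, not part of the notebook):

```python
import time

history = {}

def log_search(history, name):
    # setdefault creates the entry on the first search, so the
    # if/else frequency bookkeeping collapses into three lines.
    entry = history.setdefault(name, {"frequency": 0})
    entry["frequency"] += 1
    entry["time"] = time.ctime()

log_search(history, "rohit")
log_search(history, "rohit")
```

As in the notebook, `history` can then be serialized to `History.json` with `json.dump`.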
Apache-2.0
Week - 6/UMS with JSON/UMS with JSON.ipynb
AshishJangra27/Data-Science-Specialization
BCC and FCC
def average_quantities(E_list,V_list,S_list,Comp_list): average_E_list=np.empty(len(Comp_list)) average_S_list=np.empty(len(Comp_list)) average_V_list=np.empty(len(Comp_list)) average_b_list=np.empty(len(Comp_list)) average_nu_list=np.empty(len(Comp_list)) delta_Vn_list=np.empty([len(Comp_list),len(E_list)]) for i in range(len(Comp_list)): c = Comp_list[i] #print(c) avg_E = np.dot(E_list,c) avg_S = np.dot(S_list,c) avg_nu = avg_E/(2*avg_S)-1 avg_V = np.dot(V_list,c) delta_Vn = V_list-avg_V avg_b = (4*avg_V)**(1/3)/(2**0.5) average_E_list[i]=(avg_E) average_S_list[i]=(avg_S) average_V_list[i]=(avg_V) average_b_list[i]=(avg_b) average_nu_list[i]=(avg_nu) delta_Vn_list[i,:]=(delta_Vn) return average_E_list,average_S_list,average_V_list,average_b_list,average_nu_list,delta_Vn_list def curtin_BCC(average_S_list,average_V_list,average_b_list,average_nu_list,delta_Vn_list,Comp_list,T,ep): kc = 1.38064852*10**(-23) #J/K J2eV=6.2415093433*10**18 ep0 = 10**4 aver_S = average_S_list aver_b = average_b_list sum_cndVn_b6_list = np.empty(len(Comp_list)) dEb_list=np.empty(len(Comp_list)) Ty0_list=np.empty(len(Comp_list)) delta_ss_list=np.empty(len(Comp_list)) for i in range(len(Comp_list)): c = Comp_list[i] #print(delta_Vn_list[i,:]) #print(delta_Vn_list[i,:]**2) sum_cndVn_b6 = np.dot(c,delta_Vn_list[i,:]**2)/average_b_list[i]**6 #print(sum_cndVn_b6) sum_cndVn_b6_list[i]=sum_cndVn_b6 q_nu = ((1 + average_nu_list)/(1 - average_nu_list)) dEb = 2.00 * 0.123**(1/3) * aver_S * aver_b**3 * q_nu**(2/3) * sum_cndVn_b6**(1/3) Ty0 = 0.040 * 0.123**(-1/3) * aver_S * q_nu**(4/3) * sum_cndVn_b6**(2/3) Ty_T = Ty0 * (1 - ((kc*T)/(dEb) * np.log(ep0/ep))**(2/3) ) if Ty_T<=Ty0/2: Ty_T = Ty0 * np.exp(-1/0.55* kc*T/dEb*np.log(ep0/ep)) delta_ss = 3.06*Ty_T dEb_list[i]=dEb Ty0_list[i]=Ty0 delta_ss_list[i]=delta_ss return dEb_list, Ty0_list, delta_ss_list def curtin_BCC_old(average_S_list,average_V_list,average_b_list,average_nu_list,delta_Vn_list,Comp_list,T,ep): kc = 1.38064852*10**(-23) #J/K 
J2eV=6.2415093433*10**18 ep0 = 10**4 aver_S = average_S_list aver_b = average_b_list sum_cndVn_b6_list = np.empty(len(Comp_list)) dEb_list=np.empty(len(Comp_list)) Ty0_list=np.empty(len(Comp_list)) delta_ss_list=np.empty(len(Comp_list)) for i in range(len(Comp_list)): c = Comp_list[i] #print(delta_Vn_list[i,:]) #print(delta_Vn_list[i,:]**2) sum_cndVn_b6 = np.dot(c,delta_Vn_list[i,:]**2)/average_b_list[i]**6 #print(sum_cndVn_b6) sum_cndVn_b6_list[i]=sum_cndVn_b6 q_nu = ((1 + average_nu_list)/(1 - average_nu_list)) dEb = 2.00 * 0.123**(1/3) * aver_S * aver_b**3 * q_nu**(2/3) * sum_cndVn_b6**(1/3) Ty0 = 0.040 * 0.123**(-1/3) * aver_S * q_nu**(4/3) * sum_cndVn_b6**(2/3) Ty_T = Ty0 * (1 - ((kc*T)/(dEb) * np.log(ep0/ep))**(2/3) ) delta_ss = 3.06*Ty_T dEb_list[i]=dEb Ty0_list[i]=Ty0 delta_ss_list[i]=delta_ss return dEb_list, Ty0_list, delta_ss_list # Mo-Ta-Nb V_list=np.array([15.941,18.345,18.355])*1e-30 E_list=np.array([326.78,170.02,69.389])*1e9 S_list=np.array([126.4,62.8,24.2])*1e9 Comp_list = np.array([[0.75,0.,0.25]]) ep = 1e-3 T = 1573 average_E_list,average_S_list,average_V_list,average_b_list,average_nu_list,delta_Vn_list= average_quantities(E_list,V_list,S_list,Comp_list) dEb_list, Ty0_list, delta_ss_list=curtin_BCC(average_S_list,average_V_list,average_b_list,average_nu_list,delta_Vn_list,Comp_list,T,ep) dEb_list2, Ty0_list2, delta_ss_list2=curtin_BCC_old(average_S_list,average_V_list,average_b_list,average_nu_list,delta_Vn_list,Comp_list,T,ep) T_list = np.linspace(0,1600,170) dEb_list_comp0 = np.empty(len(T_list)) Ty0_list_comp0 = np.empty(len(T_list)) delta_ss_list_comp0 = np.empty(len(T_list)) dEb_list_comp0_old = np.empty(len(T_list)) Ty0_list_comp0_old = np.empty(len(T_list)) delta_ss_list_comp0_old = np.empty(len(T_list)) for i in range(len(T_list)): T = T_list[i] dEb_list, Ty0_list, delta_ss_list=curtin_BCC(average_S_list,average_V_list,average_b_list,average_nu_list,delta_Vn_list,Comp_list,T,ep) dEb_list_comp0[i]=(dEb_list[0]) 
Ty0_list_comp0[i]=(Ty0_list[0]) delta_ss_list_comp0[i]=(delta_ss_list[0]/1e6) dEb_list2, Ty0_list2, delta_ss_list2=curtin_BCC_old(average_S_list,average_V_list,average_b_list,average_nu_list,delta_Vn_list,Comp_list,T,ep) dEb_list_comp0_old[i]=(dEb_list2[0]) Ty0_list_comp0_old[i]=(Ty0_list2[0]) delta_ss_list_comp0_old[i]=(delta_ss_list2[0]/1e6) plt.plot(T_list,delta_ss_list_comp0) plt.plot(T_list,delta_ss_list_comp0_old) Comp_list = np.array([[0.1,0.00,0.9]]) average_E_list,average_S_list,average_V_list,average_b_list,average_nu_list,delta_Vn_list= average_quantities(E_list,V_list,S_list,Comp_list) dEb_list, Ty0_list, delta_ss_list=curtin_BCC(average_S_list,average_V_list,average_b_list,average_nu_list,delta_Vn_list,Comp_list,T,ep) T_list = np.linspace(0,1600,170) dEb_list_comp0 = np.empty(len(T_list)) Ty0_list_comp0 = np.empty(len(T_list)) delta_ss_list_comp0 = np.empty(len(T_list)) dEb_list_comp0_old = np.empty(len(T_list)) Ty0_list_comp0_old = np.empty(len(T_list)) delta_ss_list_comp0_old = np.empty(len(T_list)) for i in range(len(T_list)): T = T_list[i] dEb_list, Ty0_list, delta_ss_list=curtin_BCC(average_S_list,average_V_list,average_b_list,average_nu_list,delta_Vn_list,Comp_list,T,ep) dEb_list_comp0[i]=(dEb_list[0]) Ty0_list_comp0[i]=(Ty0_list[0]) delta_ss_list_comp0[i]=(delta_ss_list[0]/1e6) dEb_list2, Ty0_list2, delta_ss_list2=curtin_BCC_old(average_S_list,average_V_list,average_b_list,average_nu_list,delta_Vn_list,Comp_list,T,ep) dEb_list_comp0_old[i]=(dEb_list2[0]) Ty0_list_comp0_old[i]=(Ty0_list2[0]) delta_ss_list_comp0_old[i]=(delta_ss_list2[0]/1e6) plt.plot(T_list,delta_ss_list_comp0) plt.plot(T_list,delta_ss_list_comp0_old)
_____no_output_____
MIT
sspredict/test/test_edge.ipynb
DS-Wen/SSPredict
Implement an Accelerometer In this notebook you will define your own `get_derivative_from_data` function and use it to differentiate position data ONCE to get velocity information and then again to get acceleration information. In part 1 I will demonstrate what this process looks like and then in part 2 you'll implement the function yourself. ----- Part 1 - Reminder and Demonstration
# run this cell for required imports from helpers import process_data from helpers import get_derivative_from_data as solution_derivative from matplotlib import pyplot as plt # load the parallel park data PARALLEL_PARK_DATA = process_data("parallel_park.pickle") # get the relevant columns timestamps = [row[0] for row in PARALLEL_PARK_DATA] displacements = [row[1] for row in PARALLEL_PARK_DATA] # calculate first derivative speeds = solution_derivative(displacements, timestamps) # plot plt.title("Position and Velocity vs Time") plt.xlabel("Time (seconds)") plt.ylabel("Position (blue) and Speed (orange)") plt.scatter(timestamps, displacements) plt.scatter(timestamps[1:], speeds) plt.show()
_____no_output_____
MIT
4_8_Vehicle_Motion_and_Calculus/Implement an Accelerometer.ipynb
mustafa1adel/CVND_Localization_Exercises
But you just saw that acceleration is the derivative of velocity... which means we can use the same derivative function to calculate acceleration!
# calculate SECOND derivative accelerations = solution_derivative(speeds, timestamps[1:]) # plot (note the slicing of timestamps from 2 --> end) plt.scatter(timestamps[2:], accelerations) plt.show()
_____no_output_____
MIT
4_8_Vehicle_Motion_and_Calculus/Implement an Accelerometer.ipynb
mustafa1adel/CVND_Localization_Exercises
As you can see, this parallel park motion consisted of four segments with different (but constant) acceleration. We can plot all three quantities at once like this:
plt.title("x(t), v(t), a(t)") plt.xlabel("Time (seconds)") plt.ylabel("x (blue), v (orange), a (green)") plt.scatter(timestamps, displacements) plt.scatter(timestamps[1:], speeds) plt.scatter(timestamps[2:], accelerations) plt.show()
_____no_output_____
MIT
4_8_Vehicle_Motion_and_Calculus/Implement an Accelerometer.ipynb
mustafa1adel/CVND_Localization_Exercises
---- Part 2 - Implement it yourself!
def get_derivative_from_data(position_data, time_data): # TODO - try your best to implement this code yourself! # if you get really stuck feel free to go back # to the previous notebook for a hint. return # Testing part 1 - visual testing of first derivative # compare this output to the corresponding graph above. speeds = get_derivative_from_data(displacements, timestamps) plt.title("Position and Velocity vs Time") plt.xlabel("Time (seconds)") plt.ylabel("Position (blue) and Speed (orange)") plt.scatter(timestamps, displacements) plt.scatter(timestamps[1:], speeds) plt.show() # Testing part 2 - visual testing of second derivative # compare this output to the corresponding graph above. speeds = get_derivative_from_data(displacements, timestamps) accelerations = get_derivative_from_data(speeds, timestamps[1:]) plt.title("x(t), v(t), a(t)") plt.xlabel("Time (seconds)") plt.ylabel("x (blue), v (orange), a (green)") plt.scatter(timestamps, displacements) plt.scatter(timestamps[1:], speeds) plt.scatter(timestamps[2:], accelerations) plt.show()
_____no_output_____
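If you get stuck (or want to check your work), here is one possible implementation sketch using forward differences between consecutive samples. It is only one way to write the function, not necessarily the `helpers` module's solution:

```python
def get_derivative_from_data(position_data, time_data):
    # Forward differences: slope between each consecutive pair of samples.
    # The result has one fewer entry than the input, which is why the
    # plotting code above slices timestamps as timestamps[1:].
    derivatives = []
    for i in range(1, len(position_data)):
        dx = position_data[i] - position_data[i - 1]
        dt = time_data[i] - time_data[i - 1]
        derivatives.append(dx / dt)
    return derivatives

# A constant speed of 2 m/s should give a flat derivative.
speeds = get_derivative_from_data([0, 2, 4, 6], [0, 1, 2, 3])  # → [2.0, 2.0, 2.0]
```

Applying the same function to `speeds` (with `timestamps[1:]`) then yields the accelerations, exactly as in part 1.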
MIT
4_8_Vehicle_Motion_and_Calculus/Implement an Accelerometer.ipynb
mustafa1adel/CVND_Localization_Exercises
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); Weyl Scalars and Invariants: An Introduction to Einstein Toolkit Diagnostic Thorns Author: Patrick Nelson & Zach Etienne Formatting improvements courtesy Brandon Clark[comment]: (Abstract: TODO)**Notebook Status:** Validated **Validation Notes:** Numerical results from this module have been confirmed to agree with the trusted WeylScal4 Einstein Toolkit thorn to roundoff error. NRPy+ Source Code for this module: * [WeylScal4NRPD/WeylScalars_Cartesian.py](../edit/WeylScal4NRPD/WeylScalars_Cartesian.py)* [WeylScal4NRPD/WeylScalarInvariants_Cartesian.py](../edit/WeylScal4NRPD/WeylScalarInvariants_Cartesian.py)which are fully documented in the NRPy+ [Tutorial-WeylScalars-Cartesian](Tutorial-WeylScalars-Cartesian.ipynb) module on using NRPy+ to construct the Weyl scalars and invariants as SymPy expressions. Introduction:In the [previous tutorial notebook](Tutorial-WeylScalars-Cartesian.ipynb), we constructed within SymPy full expressions for the real and imaginary components of all five Weyl scalars $\psi_0$, $\psi_1$, $\psi_2$, $\psi_3$, and $\psi_4$ as well as the Weyl invariants. So that we can easily access these expressions, we have ported the Python code needed to generate the Weyl scalar SymPy expressions to [WeylScal4NRPD/WeylScalars_Cartesian.py](../edit/WeylScal4NRPD/WeylScalars_Cartesian.py), and the Weyl invariant SymPy expressions to [WeylScal4NRPD/WeylScalarInvariants_Cartesian.py](../edit/WeylScal4NRPD/WeylScalarInvariants_Cartesian.py).Here we will work through the steps necessary to construct an Einstein Toolkit diagnostic thorn (module), starting from these SymPy expressions, which computes these expressions using ADMBase gridfunctions as input. This tutorial is in two steps:1. Call on NRPy+ to convert the SymPy expressions for the Weyl Scalars and associated Invariants into one C-code kernel for each.1. 
Write the C code and build up the needed Einstein Toolkit infrastructure (i.e., the .ccl files). Table of Contents$$\label{toc}$$This notebook is organized as follows1. [Step 1](nrpy): Call on NRPy+ to convert the SymPy expressions for the Weyl scalars and associated invariants into one C-code kernel for each1. [Step 2](etk): Interfacing with the Einstein Toolkit 1. [Step 2.a](etkc): Constructing the Einstein Toolkit C-code calling functions that include the C code kernels 1. [Step 2.b](cclfiles): CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure 1. [Step 2.c](etk_list): Add the C file to Einstein Toolkit compilation list1. [Step 3](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Call on NRPy+ to convert the SymPy expressions for the Weyl scalars and associated invariants into one C-code kernel for each \[Back to [top](toc)\]$$\label{nrpy}$$WARNING: It takes some time to generate the CSE-optimized C code kernels for these quantities, especially the Weyl scalars... expect 5 minutes on a modern computer.
from outputC import * # NRPy+: Core C code output module import finite_difference as fin # NRPy+: Finite difference C code generation module import NRPy_param_funcs as par # NRPy+: Parameter interface import grid as gri # NRPy+: Functions having to do with numerical grids import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support import reference_metric as rfm # NRPy+: Reference metric support import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface import loop as lp # NRPy+: loop infrastructure import shutil, os, sys, time # Standard Python modules for multiplatform OS-level functions, benchmarking # Step 1: Set the coordinate system for the numerical grid to Cartesian. par.set_parval_from_str("reference_metric::CoordSystem","Cartesian") rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc. # Step 2: Set the finite differencing order FD_order to 4 par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", 4) # Step 3: Create output directories !mkdir WeylScal4NRPD 2>/dev/null # 2>/dev/null: Don't throw an error or warning if the directory already exists. !mkdir WeylScal4NRPD/src 2>/dev/null # 2>/dev/null: Don't throw an error or warning if the directory already exists. # Step 4: Generate symbolic expressions # Since we are writing an Einstein Toolkit thorn, we must set our memory access style to "ETK". 
par.set_parval_from_str("grid::GridFuncMemAccess","ETK") import BSSN.Psi4_tetrads as BP4t par.set_parval_from_str("BSSN.Psi4_tetrads::TetradChoice","QuasiKinnersley") #par.set_parval_from_str("BSSN.Psi4_tetrads::UseCorrectUnitNormal","True") import BSSN.Psi4 as BP4 print("Generating symbolic expressions for psi4...") start = time.time() BP4.Psi4() end = time.time() print("(BENCH) Finished psi4 symbolic expressions in "+str(end-start)+" seconds.") psi4r = gri.register_gridfunctions("AUX","psi4r") psi4r0pt = gri.register_gridfunctions("AUX","psi4r0pt") psi4r1pt = gri.register_gridfunctions("AUX","psi4r1pt") psi4r2pt = gri.register_gridfunctions("AUX","psi4r2pt") # Construct RHSs: psi4r_lhrh = [lhrh(lhs=gri.gfaccess("out_gfs","psi4r"),rhs=BP4.psi4_re_pt[0]+BP4.psi4_re_pt[1]+BP4.psi4_re_pt[2]), lhrh(lhs=gri.gfaccess("out_gfs","psi4r0pt"),rhs=BP4.psi4_re_pt[0]), lhrh(lhs=gri.gfaccess("out_gfs","psi4r1pt"),rhs=BP4.psi4_re_pt[1]), lhrh(lhs=gri.gfaccess("out_gfs","psi4r2pt"),rhs=BP4.psi4_re_pt[2])] # Generating the CSE is the slowest # operation in this notebook, and much of the CSE # time is spent sorting CSE expressions. Disabling # this sorting makes the C codegen 3-4x faster, # but the tradeoff is that every time this is # run, the CSE patterns will be different # (though they should result in mathematically # *identical* expressions). You can expect # roundoff-level differences as a result. 
start = time.time() print("Generating C code kernel for psi4r...") psi4r_CcodeKernel = fin.FD_outputC("returnstring",psi4r_lhrh,params="outCverbose=False,CSE_sorting=none") end = time.time() print("(BENCH) Finished psi4r C code kernel generation in "+str(end-start)+" seconds.") psi4r_looped = lp.loop(["i2","i1","i0"],["2","2","2"],["cctk_lsh[2]-2","cctk_lsh[1]-2","cctk_lsh[0]-2"],\ ["1","1","1"],["#pragma omp parallel for","",""],"",""" const CCTK_REAL xx0 = xGF[CCTK_GFINDEX3D(cctkGH, i0,i1,i2)]; const CCTK_REAL xx1 = yGF[CCTK_GFINDEX3D(cctkGH, i0,i1,i2)]; const CCTK_REAL xx2 = zGF[CCTK_GFINDEX3D(cctkGH, i0,i1,i2)]; """+psi4r_CcodeKernel) with open("WeylScal4NRPD/src/WeylScal4NRPD_psi4r.h", "w") as file: file.write(str(psi4r_looped))
Generating symbolic expressions for psi4... (BENCH) Finished psi4 symbolic expressions in 1.7516753673553467 seconds. Generating C code kernel for psi4r... (BENCH) Finished psi4r C code kernel generation in 40.00490069389343 seconds.
BSD-2-Clause
BSSN/Psi4Cartesianvalidation/Tutorial-ETK_thorn-WeylScal4NRPD.ipynb
philchang/nrpytutorial
Step 2: Interfacing with the Einstein Toolkit \[Back to [top](toc)\]$$\label{etk}$$ Step 2.a: Constructing the Einstein Toolkit calling functions that include the C code kernels \[Back to [top](toc)\]$$\label{etkc}$$Now that we have generated the C code kernels (`WeylScal4NRPD_psis.h` and `WeylScal4NRPD_invars.h`), which express the Weyl scalars and invariants as CSE-optimized finite-difference expressions, we next need to write the C code functions that incorporate these kernels and are called by the Einstein Toolkit scheduler.
%%writefile WeylScal4NRPD/src/WeylScal4NRPD.c #include <math.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include "cctk.h" #include "cctk_Arguments.h" #include "cctk_Parameters.h" void WeylScal4NRPD_calc_psi4r(const cGH* restrict const cctkGH,const int *cctk_lsh,const int *cctk_nghostzones, const CCTK_REAL invdx0,const CCTK_REAL invdx1,const CCTK_REAL invdx2, const CCTK_REAL *xGF,const CCTK_REAL *yGF,const CCTK_REAL *zGF, const CCTK_REAL *hDD00GF,const CCTK_REAL *hDD01GF,const CCTK_REAL *hDD02GF,const CCTK_REAL *hDD11GF,const CCTK_REAL *hDD12GF,const CCTK_REAL *hDD22GF, const CCTK_REAL *aDD00GF,const CCTK_REAL *aDD01GF,const CCTK_REAL *aDD02GF,const CCTK_REAL *aDD11GF,const CCTK_REAL *aDD12GF,const CCTK_REAL *aDD22GF, const CCTK_REAL *trKGF,const CCTK_REAL *cfGF, CCTK_REAL *psi4rGF, CCTK_REAL *psi4r0ptGF, CCTK_REAL *psi4r1ptGF, CCTK_REAL *psi4r2ptGF) { DECLARE_CCTK_PARAMETERS; #include "WeylScal4NRPD_psi4r.h" } extern void WeylScal4NRPD_mainfunction(CCTK_ARGUMENTS) { DECLARE_CCTK_PARAMETERS; DECLARE_CCTK_ARGUMENTS; if(cctk_iteration % WeylScal4NRPD_calc_every != 0) { return; } const CCTK_REAL invdx0 = 1.0 / (CCTK_DELTA_SPACE(0)); const CCTK_REAL invdx1 = 1.0 / (CCTK_DELTA_SPACE(1)); const CCTK_REAL invdx2 = 1.0 / (CCTK_DELTA_SPACE(2)); /* Now, to calculate psi4: */ WeylScal4NRPD_calc_psi4r(cctkGH,cctk_lsh,cctk_nghostzones, invdx0,invdx1,invdx2, x,y,z, hDD00GF,hDD01GF,hDD02GF,hDD11GF,hDD12GF,hDD22GF, aDD00GF,aDD01GF,aDD02GF,aDD11GF,aDD12GF,aDD22GF, trKGF,cfGF, psi4rGF, psi4r0ptGF,psi4r1ptGF,psi4r2ptGF); } # First we convert from ADM to BSSN, as is required to convert initial data # (given using) ADM quantities, to the BSSN evolved variables import BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear as atob IDhDD,IDaDD,IDtrK,IDvetU,IDbetU,IDalpha,IDcf,IDlambdaU = \ atob.Convert_Spherical_or_Cartesian_ADM_to_BSSN_curvilinear("Cartesian","DoNotOutputADMInputFunction",os.path.join("WeylScal4NRPD","src")) # Store the original list of registered 
# gridfunctions; we'll want to unregister
# all the *SphorCart* gridfunctions after we're finished with them below.
orig_glb_gridfcs_list = []
for gf in gri.glb_gridfcs_list:
    orig_glb_gridfcs_list.append(gf)

alphaSphorCart = gri.register_gridfunctions("AUXEVOL", "alphaSphorCart")
betaSphorCartU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL", "betaSphorCartU")
BSphorCartU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL", "BSphorCartU")
gammaSphorCartDD = ixp.register_gridfunctions_for_single_rank2("AUXEVOL", "gammaSphorCartDD", "sym01")
KSphorCartDD = ixp.register_gridfunctions_for_single_rank2("AUXEVOL", "KSphorCartDD", "sym01")

# ADM to BSSN conversion, used for converting ADM initial data into a form readable by this thorn.
# ADM to BSSN, Part 1: Set up function call and pointers to ADM gridfunctions
outstr = """
#include <math.h>

#include "cctk.h"
#include "cctk_Arguments.h"
#include "cctk_Parameters.h"

void WeylScal4NRPD_ADM_to_BSSN(CCTK_ARGUMENTS) {
    DECLARE_CCTK_ARGUMENTS;
    DECLARE_CCTK_PARAMETERS;

    CCTK_REAL *alphaSphorCartGF = alp;
"""
# It's ugly if we output code in the following ordering, so we'll first
# output to a string and then sort the string to beautify the code a bit.
outstrtmp = []
for i in range(3):
    outstrtmp.append("    CCTK_REAL *betaSphorCartU"+str(i)+"GF = beta"+chr(ord('x')+i)+";\n")
#     outstrtmp.append("    CCTK_REAL *BSphorCartU"+str(i)+"GF = dtbeta"+chr(ord('x')+i)+";\n")
    for j in range(i, 3):
        outstrtmp.append("    CCTK_REAL *gammaSphorCartDD"+str(i)+str(j)+"GF = g"+chr(ord('x')+i)+chr(ord('x')+j)+";\n")
        outstrtmp.append("    CCTK_REAL *KSphorCartDD"+str(i)+str(j)+"GF = k"+chr(ord('x')+i)+chr(ord('x')+j)+";\n")
outstrtmp.sort()
for line in outstrtmp:
    outstr += line

# ADM to BSSN, Part 2: Set up ADM to BSSN conversions for BSSN gridfunctions that do not require
#                      finite-difference derivatives (i.e., all gridfunctions except lambda^i (=Gamma^i
#                      in non-covariant BSSN)):
#                      h_{ij}, a_{ij}, trK, vet^i=beta^i, bet^i=B^i, cf (conformal factor), and alpha
all_but_lambdaU_expressions = [
    lhrh(lhs=gri.gfaccess("in_gfs","hDD00"), rhs=IDhDD[0][0]),
    lhrh(lhs=gri.gfaccess("in_gfs","hDD01"), rhs=IDhDD[0][1]),
    lhrh(lhs=gri.gfaccess("in_gfs","hDD02"), rhs=IDhDD[0][2]),
    lhrh(lhs=gri.gfaccess("in_gfs","hDD11"), rhs=IDhDD[1][1]),
    lhrh(lhs=gri.gfaccess("in_gfs","hDD12"), rhs=IDhDD[1][2]),
    lhrh(lhs=gri.gfaccess("in_gfs","hDD22"), rhs=IDhDD[2][2]),
    lhrh(lhs=gri.gfaccess("in_gfs","aDD00"), rhs=IDaDD[0][0]),
    lhrh(lhs=gri.gfaccess("in_gfs","aDD01"), rhs=IDaDD[0][1]),
    lhrh(lhs=gri.gfaccess("in_gfs","aDD02"), rhs=IDaDD[0][2]),
    lhrh(lhs=gri.gfaccess("in_gfs","aDD11"), rhs=IDaDD[1][1]),
    lhrh(lhs=gri.gfaccess("in_gfs","aDD12"), rhs=IDaDD[1][2]),
    lhrh(lhs=gri.gfaccess("in_gfs","aDD22"), rhs=IDaDD[2][2]),
    lhrh(lhs=gri.gfaccess("in_gfs","trK"),   rhs=IDtrK),
    lhrh(lhs=gri.gfaccess("in_gfs","vetU0"), rhs=IDvetU[0]),
    lhrh(lhs=gri.gfaccess("in_gfs","vetU1"), rhs=IDvetU[1]),
    lhrh(lhs=gri.gfaccess("in_gfs","vetU2"), rhs=IDvetU[2]),
    lhrh(lhs=gri.gfaccess("in_gfs","alpha"), rhs=IDalpha),
    lhrh(lhs=gri.gfaccess("in_gfs","cf"),    rhs=IDcf)]

outCparams = "preindent=1,outCfileaccess=a,outCverbose=False,includebraces=False"
all_but_lambdaU_outC = fin.FD_outputC("returnstring", all_but_lambdaU_expressions, outCparams)
outstr += lp.loop(["i2","i1","i0"], ["0","0","0"], ["cctk_lsh[2]","cctk_lsh[1]","cctk_lsh[0]"],
                  ["1","1","1"], ["#pragma omp parallel for","",""], "    ", all_but_lambdaU_outC)
outstr += "} // END void WeylScal4NRPD_ADM_to_BSSN(CCTK_ARGUMENTS)\n"

with open("WeylScal4NRPD/src/ADM_to_BSSN.c", "w") as file:
    file.write(str(outstr))
_____no_output_____
BSD-2-Clause
BSSN/Psi4Cartesianvalidation/Tutorial-ETK_thorn-WeylScal4NRPD.ipynb
philchang/nrpytutorial
Step 2.b: CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure \[Back to [top](#toc)\]$$\label{cclfiles}$$

Writing a module ("thorn") within the Einstein Toolkit requires that three "ccl" files be constructed, all in the root directory of the thorn:

1. `interface.ccl`: defines the gridfunction groups needed, and provides keywords denoting what this thorn provides and what it should inherit from other thorns.
1. `param.ccl`: specifies free parameters within the thorn.
1. `schedule.ccl`: allocates storage for gridfunctions, defines how the thorn's functions should be scheduled in a broader simulation, and specifies the regions of memory written to or read from gridfunctions.

Let's start with `interface.ccl`. The [official Einstein Toolkit (Cactus) documentation](http://einsteintoolkit.org/usersguide/UsersGuide.html) defines what must/should be included in an `interface.ccl` file [**here**](http://einsteintoolkit.org/usersguide/UsersGuidech12.html#x17-178000D2.2).
%%writefile WeylScal4NRPD/interface.ccl

# With "implements", we give our thorn its unique name.
implements: WeylScal4NRPD

# By "inheriting" other thorns, we tell the Toolkit that we
# will rely on variables/functions that exist within those
# thorns.
inherits: admbase Boundary Grid methodoflines

# Tell the Toolkit that we want the various Weyl scalars
# and invariants to be visible to other thorns by using
# the keyword "public". Note that declaring these
# gridfunctions *does not* allocate memory for them;
# that is done by the schedule.ccl file.
public:
CCTK_REAL NRPyPsi4_group type=GF timelevels=3 tags='tensortypealias="Scalar" tensorweight=0 tensorparity=1'
{
  psi4rGF, psi4r0ptGF, psi4r1ptGF, psi4r2ptGF, psi4iGF
} "Psi4_group"

CCTK_REAL evol_variables type = GF Timelevels=3
{
  aDD00GF, aDD01GF, aDD02GF, aDD11GF, aDD12GF, aDD22GF, alphaGF, cfGF,
  hDD00GF, hDD01GF, hDD02GF, hDD11GF, hDD12GF, hDD22GF, trKGF,
  vetU0GF, vetU1GF, vetU2GF
} "BSSN evolved gridfunctions, sans lambdaU and partial t beta"
Overwriting WeylScal4NRPD/interface.ccl
BSD-2-Clause
BSSN/Psi4Cartesianvalidation/Tutorial-ETK_thorn-WeylScal4NRPD.ipynb
philchang/nrpytutorial
We will now write the file `param.ccl`. This file allows the listed parameters to be set at runtime. We also give allowed ranges and default values for each parameter. More information on this file's syntax can be found in the [official Einstein Toolkit documentation](http://einsteintoolkit.org/usersguide/UsersGuidech12.html#x17-183000D2.3). The first parameter specifies how many time levels need to be stored. Generally when using the ETK's adaptive-mesh refinement (AMR) driver [Carpet](https://carpetcode.org/), three timelevels are needed so that the diagnostic quantities can be properly interpolated and defined across refinement boundaries. The second parameter determines how often we will calculate $\psi_4$, and the third parameter indicates whether just $\psi_4$, all Weyl scalars, or all Weyl scalars and invariants are going to be output. The third parameter is currently specified entirely within NRPy+, so by this point it is *not* a free parameter. Thus it is not quite correct to include it in this list of *free* parameters (FIXME).
%%writefile WeylScal4NRPD/param.ccl

restricted:
CCTK_INT timelevels "Number of active timelevels" STEERABLE=RECOVER
{
  0:3 :: ""
} 3

restricted:
CCTK_INT WeylScal4NRPD_calc_every "WeylScal4_psi4_calc_Nth_calc_every" STEERABLE=ALWAYS
{
  *:* :: ""
} 1
Overwriting WeylScal4NRPD/param.ccl
BSD-2-Clause
BSSN/Psi4Cartesianvalidation/Tutorial-ETK_thorn-WeylScal4NRPD.ipynb
philchang/nrpytutorial
Finally, we will write the file `schedule.ccl`; its official documentation is found [here](http://einsteintoolkit.org/usersguide/UsersGuidech12.html#x17-186000D2.4). This file dictates when the various parts of the thorn will be run. We first assign storage for both the real and imaginary components of $\psi_4$, and then specify that we want our code run in the `MoL_PseudoEvolution` schedule group (consistent with the original `WeylScal4` Einstein Toolkit thorn), after the ADM variables are set. At this step, we declare that we will be writing code in C. We also specify the gridfunctions that we wish to read in from memory--in our case, we need all the components of $K_{ij}$ (the spatial extrinsic curvature) and $\gamma_{ij}$ (the physical [as opposed to conformal] 3-metric), in addition to the coordinate values. Note that the ETK adopts the widely-used convention that components of $\gamma_{ij}$ are prefixed in the code with $\text{g}$ and not $\gamma$.
%%writefile WeylScal4NRPD/schedule.ccl

STORAGE: NRPyPsi4_group[3], evol_variables[3]
STORAGE: ADMBase::metric[3], ADMBase::curv[3], ADMBase::lapse[3], ADMBase::shift[3]

schedule group WeylScal4NRPD_group in MoL_PseudoEvolution after ADMBase_SetADMVars
{
} "Schedule WeylScal4NRPD group"

schedule WeylScal4NRPD_ADM_to_BSSN in WeylScal4NRPD_group before weylscal4_mainfunction
{
  LANG: C
} "Convert ADM into BSSN variables"

schedule WeylScal4NRPD_mainfunction in WeylScal4NRPD_group after WeylScal4NRPD_ADM_to_BSSN
{
  LANG: C
} "Call WeylScal4NRPD main function"
Overwriting WeylScal4NRPD/schedule.ccl
BSD-2-Clause
BSSN/Psi4Cartesianvalidation/Tutorial-ETK_thorn-WeylScal4NRPD.ipynb
philchang/nrpytutorial
Step 2.c: Tell the Einstein Toolkit to compile the C code \[Back to [top](#toc)\]$$\label{etk_list}$$The `make.code.defn` file lists the source files that need to be compiled. This thorn has just the two C files $-$ written above $-$ to compile:
%%writefile WeylScal4NRPD/src/make.code.defn
SRCS = WeylScal4NRPD.c ADM_to_BSSN.c
Overwriting WeylScal4NRPD/src/make.code.defn
BSD-2-Clause
BSSN/Psi4Cartesianvalidation/Tutorial-ETK_thorn-WeylScal4NRPD.ipynb
philchang/nrpytutorial
Step 3: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename [Tutorial-ETK_thorn-WeylScal4NRPD.pdf](Tutorial-ETK_thorn-WeylScal4NRPD.pdf). (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
import cmdline_helper as cmd  # NRPy+: Multi-platform Python command-line interface

cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-ETK_thorn-WeylScal4NRPD")
Created Tutorial-ETK_thorn-WeylScal4NRPD.tex, and compiled LaTeX file to PDF file Tutorial-ETK_thorn-WeylScal4NRPD.pdf
BSD-2-Clause
BSSN/Psi4Cartesianvalidation/Tutorial-ETK_thorn-WeylScal4NRPD.ipynb
philchang/nrpytutorial
Creating EEG Objects

Epoch Creation
from simpl_eeg import eeg_objects
_____no_output_____
MIT
docs/simpl_instructions/eeg_objects.ipynb
UBC-MDS/simpl_eeg_capstone
Module Overview

The `eeg_objects` module contains helper classes for storing and manipulating relevant information regarding epochs to pass to other package functions. It contains two classes. Typically you will only use `eeg_objects.Epochs` directly, which by default contains an `eeg_objects.EEG_File` object in its `eeg_file` attribute. Below are the docstrings for the two classes:
# Class for reading and importing EEG files
help(eeg_objects.EEG_File)

# Class for storing, generating, and adjusting epoch objects
help(eeg_objects.Epochs)
_____no_output_____
MIT
docs/simpl_instructions/eeg_objects.ipynb
UBC-MDS/simpl_eeg_capstone
Define parameters

The only required parameter to create an epoch object is the `folder_path` for the experiment of interest; however, additional parameters may be used to customize your epoch object.

- `file_name`
    - If you specify a `file_name`, and the file exists in the `folder_path` directory, then it will be used as the main data file for the epoch.
    - If you do not specify a `file_name` then the alphabetically first file with a supported main file type in `folder_path` will be automatically loaded.
- `events_file`
    - If you specify an `events_file`, and the file exists in the `folder_path` directory, then it will be used as the events data file for the epoch.
    - If you do not specify an `events_file` then the alphabetically first file with a supported events file type in `folder_path` will be automatically loaded.
    - If you try to load an `events_file` (automatically or manually) with over 5,000 events, or if the final column in the loaded dictionary does not contain a numerical value in its first index (both forms of error catching), then the file will be rejected and will not be loaded.
    - If you want to force no events data to be loaded you can pass an `events_file` of `None`.
- `montage`
    - If you specify a `montage`, it will load a standard montage with the specified name into the epoch data.
    - If montage data already exists in the main data file and a `montage` is provided, the original data is overwritten in the epoch object.
    - If you do not specify a `montage` and montage data already exists in the main data, then the existing data will be used instead.
    - If you do not specify a `montage` and montage data does not exist in the main data, then one attempt will be made to load an "easycap-M1" montage. If this fails then no montage information will be loaded.
    - If you want to force no montage data to be loaded you can pass a `montage` of `None`.
- `start_second`
    - If you specify a `start_second`, a single epoch will be generated with an impact event at the specified second.
    - If you do not specify a `start_second`, epochs will be automatically generated using the impact times found in the `impact locations.mat` file in the selected `experiment_folder`.
- `tmin` - specifies the number of seconds before the impact to use.
- `tmax` - specifies the number of seconds after the impact.
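The automatic file-selection rule described above (the alphabetically first file with a supported extension in `folder_path`) can be sketched in plain Python. This is only an illustration of the rule, not the package's implementation, and the extension list here is hypothetical:

```python
import os

# Hypothetical list of supported main-file extensions (illustration only)
SUPPORTED_MAIN = (".set", ".vhdr", ".edf", ".bdf", ".gdf", ".cnt", ".nxe")

def pick_main_file(folder_path):
    """Return the alphabetically first file with a supported extension, or None."""
    candidates = sorted(
        f for f in os.listdir(folder_path)
        if f.lower().endswith(SUPPORTED_MAIN)
    )
    return candidates[0] if candidates else None
```

Passing an explicit `file_name` simply bypasses this search.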
# path to the experiment folder
folder_path = "../../data/109"

# the name of the main data file to load (optional)
file_name = "fixica.set"

# the name of the events file to load (optional)
events_file = "impact locations.mat"

# the montage type to load (optional)
montage = None

# number of seconds before the impact; should be a negative number for before impact (optional)
tmin = -1

# number of seconds after the impact (optional)
tmax = 1

# if creating a custom epoch, select a starting second (optional)
start_second = None
_____no_output_____
MIT
docs/simpl_instructions/eeg_objects.ipynb
UBC-MDS/simpl_eeg_capstone
Create epoched data

The following data formats are currently supported. Note that due to the limited availability of test files, not all formats have been fully tested (see Notes).

| | Main File | Secondary File | Events File | Notes |
|-----------------------|-----------|----------------|-------------|---------------------------------------------------------|
| EEGLAB | .set | .fdt | .mat | |
| BrainVision | .vhdr | .eeg | .vmrk | |
| European data format | .edf | N/A | N/A | |
| BioSemi data format | .bdf | N/A | N/A | Montage has not been successfully loaded with test files. |
| General data format | .gdf | N/A | N/A | Events have not been successfully loaded with test files. |
| Neuroscan CNT | .cnt | N/A | N/A | Montage has not been successfully loaded with test files. |
| eXimia | .nxe | N/A | N/A | Events have not been successfully loaded with test files. |
| Nihon Kohden EEG data | .eeg | .pnt AND .21e | .log | Montage has not been successfully loaded with test files. |

- A **main file** represents the lead file used to load in your EEG data. This is the file that may be passed as your `file_name`.
- A **secondary file** contains some secondary information for some data types. It will be automatically loaded when the main file is loaded.
- An **events file** contains a list of the annotations associated with events in your EEG data. This is the file that may be passed as your `events_file`.
- A **montage** must exist in your epoch in order to visualize it. This contains information about your node locations in 3D space. A complete list of usable montages is available here: https://mne.tools/dev/generated/mne.channels.make_standard_montage.html.

You can create epoched data using the `Epochs` class.
epochs = eeg_objects.Epochs(
    folder_path=folder_path,
    file_name=file_name,
    events_file=events_file,
    montage=montage,
    tmin=tmin,
    tmax=tmax,
    start_second=start_second
)
_____no_output_____
MIT
docs/simpl_instructions/eeg_objects.ipynb
UBC-MDS/simpl_eeg_capstone
The generated epoch data is found within the `all_epochs` attribute. Here we are generating epochs with automatically detected impact times, so we can see that there are multiple events.
epochs.all_epochs
_____no_output_____
MIT
docs/simpl_instructions/eeg_objects.ipynb
UBC-MDS/simpl_eeg_capstone
If instead we create epochs with a custom start second, we will only create a single epoch with an impact at the given `start_second`.
start_second = 15  # record event at second 15

custom_epoch = eeg_objects.Epochs(folder_path, tmin=tmin, tmax=tmax, start_second=start_second)
custom_epoch.all_epochs
_____no_output_____
MIT
docs/simpl_instructions/eeg_objects.ipynb
UBC-MDS/simpl_eeg_capstone
Get information about epochs

In addition to the epochs contained in the `all_epochs` attribute, the `Epochs` object also contains information about the file used, and has a selected epoch for quick access.
eeg_file = epochs.eeg_file

print(eeg_file.folder_path)     # experiment folder path
print(eeg_file.experiment)      # experiment number
print(eeg_file.raw)             # raw data
print(eeg_file.file_source)     # primary data file the EEG data was loaded from
print(eeg_file.events_source)   # source file of events
print(eeg_file.montage_source)  # source of the montage (may be a pre-set montage name)
print(eeg_file.events)          # impact times
_____no_output_____
MIT
docs/simpl_instructions/eeg_objects.ipynb
UBC-MDS/simpl_eeg_capstone
Select specific epoch If you have a specific epoch of interest you can specify it with the `get_epoch` method. You can retrieve it later by accessing the `epoch` attribute.
nth_epoch = 5  # the epoch of interest to select, the 6th impact
single_epoch = epochs.get_epoch(nth_epoch)
single_epoch

epochs.epoch
_____no_output_____
MIT
docs/simpl_instructions/eeg_objects.ipynb
UBC-MDS/simpl_eeg_capstone
Getting an evoked object

You can also use the `get_epoch` method to retrieve an evoked object, which represents an average across the events in your epoch. Note that evoked data is its own type of object and is not guaranteed to work with every function in this package.
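Conceptually, the evoked object is just the mean over the epochs axis of the `(n_epochs, n_channels, n_times)` data array. The NumPy sketch below illustrates that reduction; it is not MNE's actual implementation, and the array sizes are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
epoch_data = rng.normal(size=(8, 19, 100))  # (n_epochs, n_channels, n_times)

# Averaging over the first axis collapses the events into one "evoked" trace
evoked_data = epoch_data.mean(axis=0)
print(evoked_data.shape)
```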
evoked = epochs.get_epoch("evoked")
type(evoked)

evoked.info
_____no_output_____
MIT
docs/simpl_instructions/eeg_objects.ipynb
UBC-MDS/simpl_eeg_capstone
Decimate the epoch (optional)

To reduce the size of the selected epoch you can choose to skip a selected number of time steps by calling the `skip_n_steps` method.

If `use_single=True` (the default), it will only be run on the currently selected epoch from the previous step, contained in the `epoch` attribute. Otherwise it will run on all the epochs contained within the `all_epochs` attribute.

Skipping steps will greatly reduce animation times for the other functions in the package. The greater the number of steps skipped, the fewer the frames to animate. In the example below we are reducing the selected epoch from 4097 time steps to 81 time steps.
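Skipping every `n`-th sample amounts to slicing the time axis with a stride. Here is a NumPy sketch of that idea; the package's `skip_n_steps` may differ in its exact bookkeeping, so the resulting lengths are illustrative:

```python
import numpy as np

data = np.arange(2 * 4097).reshape(2, 4097)  # (n_channels, n_times)
num_steps = 50
smaller = data[:, ::num_steps]  # keep every 50th sample along the time axis
print(data.shape[1], "->", smaller.shape[1])
```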
single_epoch.get_data().shape

num_steps = 50
smaller_epoch = epochs.skip_n_steps(num_steps)
smaller_epoch.get_data().shape
_____no_output_____
MIT
docs/simpl_instructions/eeg_objects.ipynb
UBC-MDS/simpl_eeg_capstone
Average the epoch (optional)

To reduce the size of the selected epoch you can choose to average over a selected number of time steps by calling the `average_n_steps` method. It will be run on the currently selected epoch from the previous step, contained in the `epoch` attribute.

Averaging works the same way as decimating above, but instead of simply ignoring records between steps it takes an average.
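The block-averaging idea can be sketched with a reshape-and-mean over groups of `n` consecutive samples. This illustrative NumPy version is not the package's implementation:

```python
import numpy as np

def average_n(data, n):
    """Average each run of n consecutive samples along the last axis."""
    n_keep = (data.shape[-1] // n) * n       # trim the ragged tail, if any
    trimmed = data[..., :n_keep]
    return trimmed.reshape(*data.shape[:-1], -1, n).mean(axis=-1)

x = np.arange(10, dtype=float)
print(average_n(x, 5))  # mean of 0..4 and of 5..9
```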
num_steps = 50
average_epoch = epochs.average_n_steps(num_steps)
average_epoch.get_data()
_____no_output_____
MIT
docs/simpl_instructions/eeg_objects.ipynb
UBC-MDS/simpl_eeg_capstone
MNE functions

Now that you have access to epoched data, you can use the `simpl_eeg` package functions as well as any [MNE functions](https://mne.tools/stable/generated/mne.Epochs.html) which act on `mne.Epochs` objects. Below are some useful examples for the MNE objects contained within the object we created.

Raw data

https://mne.tools/stable/generated/mne.io.Raw.html
raw = epochs.eeg_file.raw
raw.info

raw.plot_psd();
_____no_output_____
MIT
docs/simpl_instructions/eeg_objects.ipynb
UBC-MDS/simpl_eeg_capstone
Epoch data
# first 3 epochs
epochs.all_epochs.plot(n_epochs=3);

# specific epoch
epochs.epoch.plot();

# specific epoch with steps skipped
epochs.skip_n_steps(100).plot();
_____no_output_____
MIT
docs/simpl_instructions/eeg_objects.ipynb
UBC-MDS/simpl_eeg_capstone
Neural Networks with Momentum

Table of Contents

In this lab, you will see how different values of the momentum parameter affect the convergence rate of a neural network.

- Neural Network Module and Function for Training
- Train Different Neural Networks: Model different values for the Momentum Parameter
- Compare Results of Different Momentum Terms

Estimated Time Needed: 25 min

Preparation

We'll need the following libraries:
# Import the libraries for this lab
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from matplotlib.colors import ListedColormap
from torch.utils.data import Dataset, DataLoader

torch.manual_seed(1)
np.random.seed(1)
_____no_output_____
MIT
IBM_AI/4_Pytorch/8.4.2_NeuralNetworkswithMomentum_v2.ipynb
merula89/cousera_notebooks
Functions used to plot:
# Define a function to plot the decision regions
def plot_decision_regions_3class(model, data_set):
    cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#00AAFF'])
    cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#00AAFF'])
    X = data_set.x.numpy()
    y = data_set.y.numpy()
    h = .02
    x_min, x_max = X[:, 0].min() - 0.1, X[:, 0].max() + 0.1
    y_min, y_max = X[:, 1].min() - 0.1, X[:, 1].max() + 0.1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    XX = torch.Tensor(np.c_[xx.ravel(), yy.ravel()])
    _, yhat = torch.max(model(XX), 1)
    yhat = yhat.numpy().reshape(xx.shape)
    plt.pcolormesh(xx, yy, yhat, cmap=cmap_light)
    plt.plot(X[y[:] == 0, 0], X[y[:] == 0, 1], 'ro', label='y=0')
    plt.plot(X[y[:] == 1, 0], X[y[:] == 1, 1], 'go', label='y=1')
    plt.plot(X[y[:] == 2, 0], X[y[:] == 2, 1], 'o', label='y=2')
    plt.title("decision region")
    plt.legend()
_____no_output_____
MIT
IBM_AI/4_Pytorch/8.4.2_NeuralNetworkswithMomentum_v2.ipynb
merula89/cousera_notebooks
Create the dataset class
# Create the dataset class
class Data(Dataset):
    # modified from: http://cs231n.github.io/neural-networks-case-study/

    # Constructor
    def __init__(self, K=3, N=500):
        D = 2
        X = np.zeros((N * K, D))  # data matrix (each row = single example)
        y = np.zeros(N * K, dtype='uint8')  # class labels
        for j in range(K):
            ix = range(N * j, N * (j + 1))
            r = np.linspace(0.0, 1, N)  # radius
            t = np.linspace(j * 4, (j + 1) * 4, N) + np.random.randn(N) * 0.2  # theta
            X[ix] = np.c_[r * np.sin(t), r * np.cos(t)]
            y[ix] = j
        self.y = torch.from_numpy(y).type(torch.LongTensor)
        self.x = torch.from_numpy(X).type(torch.FloatTensor)
        self.len = y.shape[0]

    # Getter
    def __getitem__(self, index):
        return self.x[index], self.y[index]

    # Get Length
    def __len__(self):
        return self.len

    # Plot the diagram
    def plot_data(self):
        plt.plot(self.x[self.y[:] == 0, 0].numpy(), self.x[self.y[:] == 0, 1].numpy(), 'o', label="y=0")
        plt.plot(self.x[self.y[:] == 1, 0].numpy(), self.x[self.y[:] == 1, 1].numpy(), 'ro', label="y=1")
        plt.plot(self.x[self.y[:] == 2, 0].numpy(), self.x[self.y[:] == 2, 1].numpy(), 'go', label="y=2")
        plt.legend()
_____no_output_____
MIT
IBM_AI/4_Pytorch/8.4.2_NeuralNetworkswithMomentum_v2.ipynb
merula89/cousera_notebooks
Neural Network Module and Function for Training

Create Neural Network Module using `ModuleList()`
# Create the neural network module
class Net(nn.Module):

    # Constructor
    def __init__(self, Layers):
        super(Net, self).__init__()
        self.hidden = nn.ModuleList()
        for input_size, output_size in zip(Layers, Layers[1:]):
            self.hidden.append(nn.Linear(input_size, output_size))

    # Prediction
    def forward(self, activation):
        L = len(self.hidden)
        for (l, linear_transform) in zip(range(L), self.hidden):
            if l < L - 1:
                activation = F.relu(linear_transform(activation))
            else:
                activation = linear_transform(activation)
        return activation
_____no_output_____
MIT
IBM_AI/4_Pytorch/8.4.2_NeuralNetworkswithMomentum_v2.ipynb
merula89/cousera_notebooks
Create the function for training the model.
# Define the function for training the model
def train(data_set, model, criterion, train_loader, optimizer, epochs=100):
    LOSS = []
    ACC = []
    for epoch in range(epochs):
        for x, y in train_loader:
            optimizer.zero_grad()
            yhat = model(x)
            loss = criterion(yhat, y)
            loss.backward()
            optimizer.step()
            LOSS.append(loss.item())
            ACC.append(accuracy(model, data_set))
    results = {"Loss": LOSS, "Accuracy": ACC}

    fig, ax1 = plt.subplots()
    color = 'tab:red'
    ax1.plot(LOSS, color=color)
    ax1.set_xlabel('epoch', color=color)
    ax1.set_ylabel('total loss', color=color)
    ax1.tick_params(axis='y', color=color)
    ax2 = ax1.twinx()
    color = 'tab:blue'
    ax2.set_ylabel('accuracy', color=color)  # we already handled the x-label with ax1
    ax2.plot(ACC, color=color)
    ax2.tick_params(axis='y', color=color)
    fig.tight_layout()  # otherwise the right y-label is slightly clipped
    plt.show()
    return results
_____no_output_____
MIT
IBM_AI/4_Pytorch/8.4.2_NeuralNetworkswithMomentum_v2.ipynb
merula89/cousera_notebooks
Define a function used to calculate accuracy.
# Define a function for calculating accuracy
def accuracy(model, data_set):
    _, yhat = torch.max(model(data_set.x), 1)
    return (yhat == data_set.y).numpy().mean()
_____no_output_____
MIT
IBM_AI/4_Pytorch/8.4.2_NeuralNetworkswithMomentum_v2.ipynb
merula89/cousera_notebooks
Train Different Networks: Model different values for the Momentum Parameter

Create a dataset object using `Data`.
# Create the dataset and plot it
data_set = Data()
data_set.plot_data()
data_set.y = data_set.y.view(-1)
_____no_output_____
MIT
IBM_AI/4_Pytorch/8.4.2_NeuralNetworkswithMomentum_v2.ipynb
merula89/cousera_notebooks
Dictionary to contain different cost and accuracy values for each epoch for different values of the momentum parameter.
# Initialize a dictionary to contain the cost and accuracy
Results = {"momentum 0": {"Loss": 0, "Accuracy": 0},
           "momentum 0.1": {"Loss": 0, "Accuracy": 0}}
_____no_output_____
MIT
IBM_AI/4_Pytorch/8.4.2_NeuralNetworkswithMomentum_v2.ipynb
merula89/cousera_notebooks
Create a network to classify three classes with 1 hidden layer with 50 neurons and a momentum value of zero.
# Train a model with 1 hidden layer of 50 neurons
Layers = [2, 50, 3]
model = Net(Layers)
learning_rate = 0.10
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
train_loader = DataLoader(dataset=data_set, batch_size=20)
criterion = nn.CrossEntropyLoss()
Results["momentum 0"] = train(data_set, model, criterion, train_loader, optimizer, epochs=100)
plot_decision_regions_3class(model, data_set)
_____no_output_____
MIT
IBM_AI/4_Pytorch/8.4.2_NeuralNetworkswithMomentum_v2.ipynb
merula89/cousera_notebooks
Create a network to classify three classes with 1 hidden layer with 50 neurons and a momentum value of 0.1.
# Train a model with 1 hidden layer of 50 neurons with 0.1 momentum
Layers = [2, 50, 3]
model = Net(Layers)
learning_rate = 0.10
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, momentum=0.1)
train_loader = DataLoader(dataset=data_set, batch_size=20)
criterion = nn.CrossEntropyLoss()
Results["momentum 0.1"] = train(data_set, model, criterion, train_loader, optimizer, epochs=100)
plot_decision_regions_3class(model, data_set)
_____no_output_____
MIT
IBM_AI/4_Pytorch/8.4.2_NeuralNetworkswithMomentum_v2.ipynb
merula89/cousera_notebooks
Create a network to classify three classes with 1 hidden layer with 50 neurons and a momentum value of 0.2.
# Train a model with 1 hidden layer of 50 neurons with 0.2 momentum
Layers = [2, 50, 3]
model = Net(Layers)
learning_rate = 0.10
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, momentum=0.2)
train_loader = DataLoader(dataset=data_set, batch_size=20)
criterion = nn.CrossEntropyLoss()
Results["momentum 0.2"] = train(data_set, model, criterion, train_loader, optimizer, epochs=100)
plot_decision_regions_3class(model, data_set)
_____no_output_____
MIT
IBM_AI/4_Pytorch/8.4.2_NeuralNetworkswithMomentum_v2.ipynb
merula89/cousera_notebooks
Create a network to classify three classes with 1 hidden layer with 50 neurons and a momentum value of 0.4.
# Train a model with 1 hidden layer of 50 neurons with 0.4 momentum
Layers = [2, 50, 3]
model = Net(Layers)
learning_rate = 0.10
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, momentum=0.4)
train_loader = DataLoader(dataset=data_set, batch_size=20)
criterion = nn.CrossEntropyLoss()
Results["momentum 0.4"] = train(data_set, model, criterion, train_loader, optimizer, epochs=100)
plot_decision_regions_3class(model, data_set)
_____no_output_____
MIT
IBM_AI/4_Pytorch/8.4.2_NeuralNetworkswithMomentum_v2.ipynb
merula89/cousera_notebooks
Create a network to classify three classes with 1 hidden layer with 50 neurons and a momentum value of 0.5.
# Train a model with 1 hidden layer of 50 neurons with 0.5 momentum
Layers = [2, 50, 3]
model = Net(Layers)
learning_rate = 0.10
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, momentum=0.5)
train_loader = DataLoader(dataset=data_set, batch_size=20)
criterion = nn.CrossEntropyLoss()
Results["momentum 0.5"] = train(data_set, model, criterion, train_loader, optimizer, epochs=100)
plot_decision_regions_3class(model, data_set)
_____no_output_____
MIT
IBM_AI/4_Pytorch/8.4.2_NeuralNetworkswithMomentum_v2.ipynb
merula89/cousera_notebooks
Compare Results of Different Momentum Terms

The plot below compares the results for different momentum terms. In general, the cost decreases faster for larger momentum terms, but larger momentum terms also lead to larger oscillations. Although larger momentum terms decrease the cost more quickly at first, a momentum term of 0.2 seems to reach the smallest final value for the cost.
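This behaviour follows from the momentum update itself: PyTorch-style SGD with momentum keeps a velocity buffer $v \leftarrow \mu v + g$ and then steps $p \leftarrow p - \text{lr} \cdot v$, so past gradients keep contributing, which speeds descent but can overshoot. A minimal sketch on the 1-D quadratic loss $L(w) = w^2$ (the step counts and hyperparameters here are illustrative, unrelated to the network above):

```python
def sgd_momentum_steps(w0, lr=0.1, mu=0.5, steps=25):
    """Minimize L(w) = w**2 with a PyTorch-style SGD + momentum update."""
    w, v = w0, 0.0
    history = []
    for _ in range(steps):
        g = 2.0 * w        # dL/dw
        v = mu * v + g     # velocity accumulates past gradients
        w = w - lr * v
        history.append(w)
    return history

for mu in (0.0, 0.2, 0.5):
    print(mu, sgd_momentum_steps(1.0, mu=mu)[-1])
```

With `mu=0` the iterate decays monotonically toward the minimum; with larger `mu` it converges faster per step but overshoots zero and oscillates, mirroring the loss curves discussed here.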
# Plot the Loss result for each term
for key, value in Results.items():
    plt.plot(value['Loss'], label=key)
plt.legend()
plt.xlabel('epoch')
plt.ylabel('Total Loss or Cost')
_____no_output_____
MIT
IBM_AI/4_Pytorch/8.4.2_NeuralNetworkswithMomentum_v2.ipynb
merula89/cousera_notebooks
The accuracy seems to increase with the momentum term.
# Plot the Accuracy result for each term
for key, value in Results.items():
    plt.plot(value['Accuracy'], label=key)
plt.legend()
plt.xlabel('epoch')
plt.ylabel('Accuracy')
_____no_output_____
MIT
IBM_AI/4_Pytorch/8.4.2_NeuralNetworkswithMomentum_v2.ipynb
merula89/cousera_notebooks
Monitor Assignments and Update SQLite Table with Changes

In this example, the "Sidewalk Repair" assignments will be monitored. When a sidewalk has been repaired, the corresponding work order will be updated in the SQLite table to be marked as "Completed".
import sqlite3
import time
from datetime import datetime, timedelta

import pandas as pd
from arcgis.gis import GIS
from arcgis.apps import workforce
_____no_output_____
Apache-2.0
notebooks/UC_2018/integrating_workforce_demo_theatre/UC 2018 - 6 - Monitor Assignments And Update SQLite DB.ipynb
airyadriana/workforce-scripts
Connect to Organization and Get the Project

Connect to ArcGIS Online and get the Project with assignments.
gis = GIS("https://arcgis.com", "workforce_scripts")
item = gis.content.get("1f7b42024da544f6b1e557889e858ac6")
project = workforce.Project(item)
_____no_output_____
Apache-2.0
notebooks/UC_2018/integrating_workforce_demo_theatre/UC 2018 - 6 - Monitor Assignments And Update SQLite DB.ipynb
airyadriana/workforce-scripts
Connect to the SQLite Database and Review the Work Orders

Let's review what the work order table looks like.
connection = sqlite3.connect("work_orders")
df = pd.read_sql_query("select * from work_orders", connection)
df
_____no_output_____
Apache-2.0
notebooks/UC_2018/integrating_workforce_demo_theatre/UC 2018 - 6 - Monitor Assignments And Update SQLite DB.ipynb
airyadriana/workforce-scripts
Monitor the Project for Completed Assignments

Let's run a loop that will check for "Completed" "Sidewalk Repair" assignments. When an assignment is returned from ArcGIS Online, let's change the value of its status in the SQLite table from "Backlog" to "Completed". This is accomplished by leveraging the "work_order_id" field to lightweight-join the SQLite table to the workforce assignments feature service. When running the following section, complete a "Sidewalk Repair" Assignment on the mobile app.
processed_orders = ["-1"]

# Run in a loop (for demo only)
for i in range(0, 12):
    print("Waiting...")
    time.sleep(5)
    where_clause = f"status=3 AND assignmentType=2 AND workOrderId NOT IN ({','.join(processed_orders)})"
    print(f"Checking for updates... {where_clause}")
    assignments = project.assignments.search(where_clause)
    for assignment in assignments:
        cur = connection.cursor()
        values = ('Completed', assignment.notes, assignment.work_order_id,)
        cur.execute("update work_orders set status=?, notes=? where id=?", values)
        connection.commit()
        processed_orders.append(assignment.work_order_id)
        print("Completed Assignment Processed")
_____no_output_____
Apache-2.0
notebooks/UC_2018/integrating_workforce_demo_theatre/UC 2018 - 6 - Monitor Assignments And Update SQLite DB.ipynb
airyadriana/workforce-scripts
Verify the Changes

Let's verify that the changes were actually written to the SQLite table.
df = pd.read_sql_query("select * from work_orders", connection)
df
_____no_output_____
Apache-2.0
notebooks/UC_2018/integrating_workforce_demo_theatre/UC 2018 - 6 - Monitor Assignments And Update SQLite DB.ipynb
airyadriana/workforce-scripts
Multiscale Basics Tutorial

*By R. Bulanadi, 28/01/20*

While Project Multiscale is currently very powerful, it has a slight learning curve to understand the functions required for basic use. This notebook has been written to teach the basics of using Project Multiscale functions, by binarising the Phase channels of microscopy data obtained from a Cypher Asylum AFM.

To use Project Multiscale, the Multiscale package must be loaded. Load it as below, being sure to change the directory to lead to your Multiscale package.
import sys
sys.path.insert(0, '../../')  # Change to your Multiscale directory

from multiscale.processing import twodim
from multiscale.processing import core as pt
from multiscale.processing import plot as msplt
import multiscale.io
Matplotlib_scalebar was not found, please install the package.
CC-BY-4.0
examples/Basics/Multiscale_Basics_Tutorial.ipynb
Coilm/hystorian
We will now convert our raw data (`.ibw` format) into the `.hdf5` format used by Project Multiscale. First, we will set the names of both our raw `.ibw` file and the new `.hdf5` file.
original_filename = 'SD_P4_zB5_050mV_-2550mV_0002.ibw'
filename = original_filename.split('.')[0] + '.hdf5'
_____no_output_____
CC-BY-4.0
examples/Basics/Multiscale_Basics_Tutorial.ipynb
Coilm/hystorian