Multiple aggregations

We may also want to apply multiple aggregations, like the mean, max, and min. We can do this with the agg() method, passing a list of aggregation functions as the argument.
annual_summary = data_annual['wlev'].agg([np.mean, np.max, np.min])
print(annual_summary)
annual_summary.plot()
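Because `data_annual` is built in earlier cells, here is a minimal self-contained sketch of the same `agg()` pattern on toy data (the column and key names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    "Year": [2000, 2000, 2001, 2001],
    "wlev": [3.0, 3.2, 3.1, 3.5],
})
# One row per group, one column per aggregation
summary = df.groupby("Year")["wlev"].agg(["mean", "max", "min"])
print(summary)
```

Passing aggregation names as strings works the same as passing the NumPy functions.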
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
Iterating over groups

In some instances, we may want to iterate over each group. Each group is identified by a key. If we know the group's key, we can access that group with the get_group() method. For example, for each year, print the mean sea level.
for year in data_annual.groups.keys():
    data_year = data_annual.get_group(year)
    print(year, data_year['wlev'].mean())
2000 3.06743417303
2001 3.05765296804
2002 3.07811187215
2003 3.11298972603
2004 3.1040974832
2005 3.12703618873
2006 3.14205230699
2007 3.0956142955
2008 3.07075714448
2009 3.08053287593
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
We had calculated the annual mean sea level earlier, but this is another way to achieve a similar result.

Exercise

For each year, plot the monthly mean water level.

Solution
for year in data_annual.groups.keys():
    data_year = data_annual.get_group(year)
    month_mean = data_year.groupby('Month')['wlev'].apply(np.mean)
    month_mean.plot(label=year)
plt.legend()
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
Multiple groups

We can also group by multiple columns. For example, we might want to group by year and month. That is, a year/month combo defines the group.
data_yearmonth = data.groupby(['Year','Month'])
means = data_yearmonth['wlev'].apply(np.mean)
means.plot()
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
Time Series

The x-labels on the plot above are a little awkward. A different approach is to resample the data at a monthly frequency. This can be accomplished by setting the date column as an index; then we can resample the data at a desired frequency. The resampling method is flexible, but a common choice is the mean.
data['date_index'] = date_index
data.set_index('date_index', inplace=True)
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
Now we can resample at a monthly frequency and plot.
data_monthly = data['wlev'].resample('M').mean()
data_monthly.plot()
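A minimal self-contained sketch of monthly resampling on synthetic data (the series here is illustrative):

```python
import pandas as pd

# Sixty daily observations spanning January and February 2000
idx = pd.date_range("2000-01-01", periods=60, freq="D")
s = pd.Series(range(60), index=idx)

# One mean value per calendar month
monthly = s.resample("M").mean()
print(monthly)
```

Resampling requires a datetime-like index, which is why the date column is set as the index first.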
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
Docker Exercise 09: Getting started with Docker Swarms

Make sure that Swarm is enabled on your Docker Desktop by typing `docker system info` and looking for the message `Swarm: active` (you might have to scroll up a little). If Swarm isn't running, simply type `docker swarm init` in a shell prompt to set it up. Create the...
docker network create --driver overlay --subnet=172.10.1.0/24 ex09-frontend
docker network create --driver overlay --subnet=172.10.2.0/23 ex09-backend
_____no_output_____
Apache-2.0
doc0/Exercise09/Exercise09.ipynb
nsingh216/edu
Save the MySQL configuration

Save the following to your `development.env` file.
MYSQL_USER=sys_admin
MYSQL_PASSWORD=sys_password
MYSQL_ROOT_PASSWORD=root_password
_____no_output_____
Apache-2.0
doc0/Exercise09/Exercise09.ipynb
nsingh216/edu
Create your Docker Swarm configuration
version: "3"
networks:
  ex09-frontend:
    external: true
  ex09-backend:
    external: true
services:
  ex09-db:
    image: mysql:8.0
    command: --default-authentication-plugin=mysql_native_password
    ports:
      - "3306:3306"
    networks:
      - ex09-backend
    env_file:
      - ./development.env
  ex09-...
_____no_output_____
Apache-2.0
doc0/Exercise09/Exercise09.ipynb
nsingh216/edu
docker stack deploy -c php-mysqli-apache.yml php-mysqli-apache
### Verify the stack has been deployed
_____no_output_____
Apache-2.0
doc0/Exercise09/Exercise09.ipynb
nsingh216/edu
docker stack ls
### Verify all the containers have been deployed
_____no_output_____
Apache-2.0
doc0/Exercise09/Exercise09.ipynb
nsingh216/edu
docker stack ps php-mysqli-apache
### Verify the load balancers have all the replicas and mapped the ports
_____no_output_____
Apache-2.0
doc0/Exercise09/Exercise09.ipynb
nsingh216/edu
docker stack services php-mysqli-apache
### See what containers are on the nodemanager in the swarm
_____no_output_____
Apache-2.0
doc0/Exercise09/Exercise09.ipynb
nsingh216/edu
docker ps
### Verify that the stack is working correctly
_____no_output_____
Apache-2.0
doc0/Exercise09/Exercise09.ipynb
nsingh216/edu
# local node master
curl http://localhost:8080
### Destroy and remove the stack
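With the stack verified, it can be torn down with `docker stack rm` (a sketch, using the stack name deployed earlier in this exercise):

```shell
docker stack rm php-mysqli-apache
```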
_____no_output_____
Apache-2.0
doc0/Exercise09/Exercise09.ipynb
nsingh216/edu
specs
specs_df.head()
specs_df.shape
specs_df.describe()
specs_df['info'][0]
json.loads(specs_df['args'][0])
specs_df['info'][3]
json.loads(specs_df['args'][3])
_____no_output_____
MIT
notebooks/EDA.ipynb
wdy06/kaggle-data-science-bowl-2019
train
train_df.head()
train_df.shape
train_df.describe()
train_df.event_id.nunique()
train_df.game_session.nunique()
train_df.timestamp.min()
train_df.timestamp.max()
train_df.installation_id.nunique()
train_df.event_count.nunique()
sns.distplot(train_df.event_count)
sns.distplot(np.log(train_df.event_count))
sns.distplot(...
_____no_output_____
MIT
notebooks/EDA.ipynb
wdy06/kaggle-data-science-bowl-2019
train labels
train_labels_df.head()
train_labels_df.shape
train_labels_df.game_session.nunique()
train_labels_df.installation_id.nunique()
train_labels_df.query('game_session == "0848ef14a8dc6892"')
_____no_output_____
MIT
notebooks/EDA.ipynb
wdy06/kaggle-data-science-bowl-2019
test
test_df.head()
test_df.shape
test_df.event_id.nunique()
test_df.game_session.nunique()
test_df.installation_id.nunique()
test_df.title.unique()
len(test_df.query('~(title=="Bird Measurer (Assessment)") & event_code==4100'))
len(test_df.query('title=="Bird Measurer (Assessment)" & event_code==4110'))
test_df.query('inst...
_____no_output_____
MIT
notebooks/EDA.ipynb
wdy06/kaggle-data-science-bowl-2019
sample submission
sample_submission = pd.read_csv(DATA_DIR / 'sample_submission.csv')
sample_submission.head()
sample_submission.shape
_____no_output_____
MIT
notebooks/EDA.ipynb
wdy06/kaggle-data-science-bowl-2019
(Optional) Cancel existing runs
for run in exp.get_runs():
    print(run.id)
    if run.status == "Running":
        run.cancel()

from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException

cluster_name = "udacityAzureML"
try:
    compute_target = ComputeTarget(workspace=ws, name=cluster_n...
_____no_output_____
MIT
udacity-project.ipynb
abhiojha8/Optimizing_ML_Pipeline_Azure
Extending LSTMs: LSTMs with Peepholes and GRUs
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
%matplotlib inline
from __future__ import print_function
import collections
import math
import numpy as np
import os
import random
import tensorflow as tf
import zipfile
from matplotlib import pylab
from six.mov...
_____no_output_____
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
Downloading Stories

Stories are automatically downloaded from https://www.cs.cmu.edu/~spok/grimmtmp/ if not detected on disk. The total size of the stories is around ~500KB. The dataset consists of 100 stories.
url = 'https://www.cs.cmu.edu/~spok/grimmtmp/'

# Create a directory if needed
dir_name = 'stories'
if not os.path.exists(dir_name):
    os.mkdir(dir_name)

def maybe_download(filename):
    """Download a file if not present"""
    print('Downloading file: ', dir_name + os.sep + filename)
    if not os.path.exists(dir_...
100 files found.
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
Reading data

Data will be stored in a list of lists, where each inner list represents a document and a document is a list of words. We will then break the text into bigrams.
def read_data(filename):
    with open(filename) as f:
        data = tf.compat.as_str(f.read())
        # make all the text lowercase
        data = data.lower()
        data = list(data)
    return data

documents = []
global documents
for i in range(num_files):
    print('\nProcessing file %s' % os.path.join(dir_name, filenames[i])...
Processing file stories\001.txt Data size (Characters) (Document 0) 3667 Sample string (Document 0) ['in', ' o', 'ld', 'en', ' t', 'im', 'es', ' w', 'he', 'n ', 'wi', 'sh', 'in', 'g ', 'st', 'il', 'l ', 'he', 'lp', 'ed', ' o', 'ne', ', ', 'th', 'er', 'e ', 'li', 've', 'd ', 'a ', 'ki', 'ng', '\nw', 'ho', 'se', ' d', '...
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
Building the Dictionaries (Bigrams)

Builds the following. To understand each of these elements, let us also assume the text "I like to go to school"

* `dictionary`: maps a string word to an ID (e.g. {I:0, like:1, to:2, go:3, school:4})
* `reverse_dictionary`: maps an ID to a string word (e.g. {0:I, 1:like, 2:to, 3:go, 4:...
def build_dataset(documents):
    chars = []
    # This is going to be a list of lists
    # Where the outer list denote each document
    # and the inner lists denote words in a given document
    data_list = []

    for d in documents:
        chars.extend(d)
    print('%d Characters found.' % len(chars))
    count =...
449177 Characters found. Most common words (+UNK) [('e ', 15229), ('he', 15164), (' t', 13443), ('th', 13076), ('d ', 10687)] Least common words (+UNK) [('rz', 1), ('zi', 1), ('i?', 1), ('\ts', 1), ('".', 1), ('hc', 1), ('sd', 1), ('z ', 1), ('m?', 1), ('\tc', 1), ('oz', 1), ('iq', 1), ('pw', 1), ('tz', 1), ('yr', 1)] ...
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
Generating Batches of Data

The following object generates a batch of data which will be used to train the RNN. More specifically, the generator breaks a given sequence of words into `batch_size` segments. We also maintain a cursor for each segment. So whenever we create a batch of data, we sample one item from each segm...
class DataGeneratorOHE(object):

    def __init__(self, text, batch_size, num_unroll):
        # Text where a bigram is denoted by its ID
        self._text = text
        # Number of bigrams in the text
        self._text_size = len(self._text)
        # Number of datapoints in a batch of data
        self._batch_siz...
Unrolled index 0
Inputs: e (1), ki (131), d (48), w (11), be (70),
Output: li (98), ng (33), au (195), er (14), au (195),
Unrolled index 1
Inputs: li (98), ng (33), au (195), er (14), au (195),
Output: ve (41), w (169), gh (106), e (1), ti (112),
Unrolled index 2
Inputs: ve (41), ...
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
Defining the LSTM, LSTM with Peepholes and GRUs

* An LSTM has 5 main components
  * Cell state, Hidden state, Input gate, Forget gate, Output gate
* An LSTM with peephole connections
  * Introduces several new sets of weights that connect the cell state to the gates
* A GRU has 3 main components
  * Hidden state, Reset gate ...
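For reference, the standard LSTM cell computations, in common notation (the weight names here are generic and may differ slightly from the variables defined in the code cells of this notebook):

```latex
\begin{aligned}
i_t &= \sigma(x_t W_{ix} + h_{t-1} W_{im} + b_i) &\text{(input gate)}\\
f_t &= \sigma(x_t W_{fx} + h_{t-1} W_{fm} + b_f) &\text{(forget gate)}\\
\tilde{c}_t &= \tanh(x_t W_{cx} + h_{t-1} W_{cm} + b_c) &\text{(candidate state)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t &\text{(cell state)}\\
o_t &= \sigma(x_t W_{ox} + h_{t-1} W_{om} + b_o) &\text{(output gate)}\\
h_t &= o_t \odot \tanh(c_t) &\text{(hidden state)}
\end{aligned}
```

Peephole connections add terms of the form $c_{t-1} W_{ic}$ inside the gate activations; a GRU collapses the input and forget gates into a single update gate and drops the separate cell state.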
num_nodes = 128
batch_size = 64
num_unrollings = 50
dropout = 0.2

# Use this in the CSV filename when saving
# when using dropout
filename_extension = ''
if dropout > 0.0:
    filename_extension = '_dropout'
_____no_output_____
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
Defining Inputs and Outputs

In the code we define three different types of inputs.

* Training inputs (the stories we downloaded) (batch_size > 1, with unrolling)
* Validation inputs (an unseen validation dataset) (batch_size = 1, no unrolling)
* Test input (a new story we are going to generate) (batch_size = 1, no unrolling)
tf.reset_default_graph()

# Training Input data.
train_inputs, train_labels = [], []

# Defining unrolled training inputs
for ui in range(num_unrollings):
    train_inputs.append(tf.placeholder(tf.float32, shape=[batch_size, vocabulary_size], name='train_inputs_%d' % ui))
    train_labels.append(tf.placeholder(tf.float32, s...
_____no_output_____
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
Defining Model Parameters and Cell Computation

We define parameters and cell computation functions for all the different variants (LSTM, LSTM with peepholes, and GRUs). **Make sure you only run a single cell within this section (either the LSTM, LSTM with peepholes, or GRUs).**

Standard LSTM

Here we define the parameters a...
# Input gate (i_t) - How much memory to write to cell state
# Connects the current input to the input gate
ix = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], stddev=0.02))
# Connects the previous hidden state to the input gate
im = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], stddev=0.02))
# ...
_____no_output_____
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
LSTMs with Peephole Connections

We define the parameters and cell computation for an LSTM with peepholes. Note that we are using diagonal peephole connections (for more details, refer to the text).
# Parameters:
# Input gate: input, previous output, and bias.
ix = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], stddev=0.01))
im = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], stddev=0.01))
ic = tf.Variable(tf.truncated_normal([1, num_nodes], stddev=0.01))
ib = tf.Variable(tf.random_uniform([...
_____no_output_____
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
Gated Recurrent Units (GRUs)

Finally, we define the parameters and cell computations for the GRU cell.
# Parameters:
# Reset gate: input, previous output, and bias.
rx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], stddev=0.01))
rh = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], stddev=0.01))
rb = tf.Variable(tf.random_uniform([1, num_nodes], 0.0, 0.01))

# Hidden State: input, previous output,...
_____no_output_____
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
Defining LSTM/GRU/LSTM-Peephole Computations

Here we first define the LSTM cell computations as a concise function. Then we use this function to define training and test-time inference logic.
# =========================================================
# Training related inference logic

# Keeps the calculated state outputs in all the unrollings
# Used to calculate loss
outputs = list()

# These two python variables are iteratively updated
# at each step of unrolling
output = saved_output if algorithm=='lstm'...
_____no_output_____
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
Calculating LSTM Loss

We calculate the training loss of the LSTM here. It's a typical cross entropy loss calculated over all the scores we obtained for training data (`loss`).
# Before calculating the training loss,
# save the hidden state and the cell state to
# their respective TensorFlow variables
with tf.control_dependencies(train_state_update_ops):
    # Calculate the training loss by
    # concatenating the results from all the unrolled time steps
    loss = tf.reduce_mean(
        tf.n...
_____no_output_____
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
Operations for Resetting Hidden States

Sometimes the state variables need to be reset (e.g. when starting predictions at the beginning of a new epoch). But since a GRU doesn't have a cell state, the reset_state ops are conditioned on the algorithm.
if algorithm == 'lstm' or algorithm == 'lstm_peephole':
    # Reset train state
    reset_train_state = tf.group(tf.assign(saved_state, tf.zeros([batch_size, num_nodes])),
                                 tf.assign(saved_output, tf.zeros([batch_size, num_nodes])))
    reset_valid_state = tf.group(tf.assign(saved_valid_state, tf....
_____no_output_____
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
Defining Learning Rate and the Optimizer with Gradient Clipping

Here we define the learning rate and the optimizer we're going to use. We will be using the Adam optimizer, as it is one of the best optimizers out there. Furthermore, we use gradient clipping to prevent any gradient explosions.
# Used for decaying learning rate
gstep = tf.Variable(0, trainable=False)

# Running this operation will cause the value of gstep
# to increase, while in turn reducing the learning rate
inc_gstep = tf.assign(gstep, gstep + 1)

# Decays learning rate everytime the gstep increases
tf_learning_rate = tf.train.exponential_de...
_____no_output_____
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
Greedy Sampling to Break the Repetition

Here we write some simple logic to break repetition in the generated text. Specifically, instead of always taking the word with the highest prediction probability, we sample randomly, with each candidate's chance of being selected given by its prediction probability.
def sample(distribution):
    '''Greedy Sampling
    We pick the three best predictions given by the LSTM and sample
    one of them with very high probability of picking the best one'''
    best_inds = np.argsort(distribution)[-3:]
    best_probs = distribution[best_inds] / np.sum(distribution[best_inds])
    best_idx = np.random.c...
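Since the cell above is truncated, here is a self-contained NumPy version of the same sampling idea (`sample_top3` is an illustrative name; the `seed` parameter is added for reproducibility):

```python
import numpy as np

def sample_top3(distribution, seed=None):
    # Restrict sampling to the three most probable predictions,
    # then sample among them with renormalized probabilities.
    rng = np.random.default_rng(seed)
    distribution = np.asarray(distribution, dtype=float)
    best_inds = np.argsort(distribution)[-3:]
    best_probs = distribution[best_inds] / distribution[best_inds].sum()
    return int(rng.choice(best_inds, p=best_probs))

print(sample_top3([0.05, 0.1, 0.2, 0.6, 0.05], seed=0))
```

The returned index is always one of the three highest-probability entries, so the best word is still picked most of the time while occasional second and third choices break repetitive loops.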
_____no_output_____
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
Running the LSTM to Generate Text

Here we train the model on the available data and generate text using the trained model for several steps. From each document we extract text for `steps_per_document` steps to train the model on. We also report the train perplexity at the end of each step. Finally, we test the model by ...
# Learning rate decay related
# If valid perplexity does not decrease
# continuously for this many epochs
# decrease the learning rate
decay_threshold = 5
# Keep counting perplexity increases
decay_count = 0
min_perplexity = 1e10

# Learning rate decay logic
def decay_learning_rate(session, v_perplexity):
    global dec...
_____no_output_____
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
Running Training, Validation and Generation

We train the LSTM on existing training data, check the validation perplexity on an unseen chunk of text, and generate a fresh segment of text.
# Some hyperparameters needed for the training process
num_steps = 26
steps_per_document = 100
docs_per_step = 10
valid_summary = 1
train_doc_count = num_files

session = tf.InteractiveSession()

# Capture the behavior of train/valid perplexity over time
train_perplexity_ot = []
valid_perplexity_ot = []

# Initializin...
Initialized Global Variables (98).(25).(91).(5).(88).(49).(85).(96).(14).(73). Average loss at step 1: 4.500272 Perplexity at step 1: 90.041577 Valid Perplexity: 53.93 Generated Text after epoch 0 ... ======================== New text Segment ========================== her, it the spirit, "one his that to and sa...
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
Similarity Recommendation

* Collaborative Filtering
  * Similarity score is merchant similarity rank
  * Products list is most sold products in recent X weeks
    * Didn't choose the most valuable products from the `product_values` table because they largely overlap with the top products in each merchant.
    * Also excl...
import pandas as pd
import numpy as np
import datetime
import Levenshtein
import warnings
warnings.filterwarnings("ignore")

import ray
ray.shutdown()
ray.init()

target_merchant = '49th Parallel Grocery'

all_order_train = pd.read_pickle('../all_order_train.pkl')
all_order_test = pd.read_pickle('../all_order_test.pkl')...
(32355508, 12) (94436, 12)
MIT
Bank_Fantasy/Golden_Bridge/recommendation_experiments/similarity_recommendations.ipynb
hanhanwu/Hanhan_Break_the_Limits
Merchant Similarity Score

* Here, I converted the 3 similarity factors (top products, size, name) into 1 score; a higher score represents higher similarity.
* Compared with sorting by the 3 factors separately, a single similarity score brings slightly different results.
@ray.remote
def get_merchant_data(merchant_df, top=10):
    merchant_size = merchant_df[['merchant', 'product_id']].astype('str').drop_duplicates()\
                    .groupby(['merchant'], as_index=False)['product_id']\
                    .agg('count').res...
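The exact weighting used in this notebook is truncated above. Purely as an illustration, one hypothetical way to fold several normalized similarity factors into a single score is a weighted average:

```python
def combined_score(factors, weights):
    # Weighted average of similarity factors,
    # each assumed to be pre-scaled to [0, 1].
    return sum(f * w for f, w in zip(factors, weights)) / sum(weights)

# hypothetical factors: top-product overlap, size similarity, name similarity
print(combined_score([0.8, 0.5, 1.0], [3, 1, 1]))
```

The weights encode how much each factor should matter; both the factor values and the weights here are made up for the example.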
_____no_output_____
MIT
Bank_Fantasy/Golden_Bridge/recommendation_experiments/similarity_recommendations.ipynb
hanhanwu/Hanhan_Break_the_Limits
Recent Popular Products

Excluding top products of the target merchant.
all_order_train.head()

latest_period = 2  # in weeks
week_lst = sorted(all_order_train['week_number'].unique())[-latest_period:]
week_lst

prod_ct_df = all_order_train.loc[all_order_train['week_number'].isin(week_lst)][['product_id', 'product_name', 'order_id']].astype('str').drop_duplicates()\
    ...
['49683' '24964' '27966' '22935' '39275' '45007' '28204' '4605' '42265' '44632' '5876' '4920' '40706' '30391' '30489' '8518' '27104' '45066' '5077' '17794'] ['Cucumber Kirby' 'Organic Garlic' 'Organic Raspberries' 'Organic Yellow Onion' 'Organic Blueberries' 'Organic Zucchini' 'Organic Fuji Apple' 'Yellow Onions' ...
MIT
Bank_Fantasy/Golden_Bridge/recommendation_experiments/similarity_recommendations.ipynb
hanhanwu/Hanhan_Break_the_Limits
Collaborative Filtering
merchant_similarity_df.head()
all_order_train.head()

n_merchant = 10
similar_merchant_lst = merchant_similarity_df['merchant'].values[:n_merchant]
merchant_similarity_lst = merchant_similarity_df['similarity_score'].values[:n_merchant]

@ray.remote
def get_product_score(prod_df, product_id, product_name):
    total_wei...
_____no_output_____
MIT
Bank_Fantasy/Golden_Bridge/recommendation_experiments/similarity_recommendations.ipynb
hanhanwu/Hanhan_Break_the_Limits
Forecasting Recommendations
import pandas as pd
import numpy as np
from sklearn.metrics import mean_squared_error
from math import sqrt
import matplotlib.pyplot as plt

# the logger here is to remove the warnings about plotly
import logging
logger = logging.getLogger('fbprophet.plot')
logger.setLevel(logging.CRITICAL)

from fbprophet import Prophe...
Total sales increased: 2162.51
MIT
Bank_Fantasy/Golden_Bridge/recommendation_experiments/similarity_recommendations.ipynb
hanhanwu/Hanhan_Break_the_Limits
1️⃣ **Exercise 1.** Write a function that counts the frequency of each word in a text (txt file) and stores the counts in a dictionary, where the key is the vowel considered. **Correction:** "where the key is the WORD considered"
from collections import Counter

def count_palavras(nome_arquivo: str):
    file = open(f'{nome_arquivo}.txt', 'rt')
    texto = file.read()
    palavras = [palavra for palavra in texto.split(' ')]
    dicionario = dict(Counter(palavras))
    # dicionario2 = {i: palavras.count(i) for i in list(set(palavras))}
    ret...
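Since the cell above is truncated, here is a minimal file-free sketch of the counting step (it takes the text as a string rather than reading a file; `conta_palavras` is an illustrative name):

```python
from collections import Counter

def conta_palavras(texto: str) -> dict:
    # Map each word in the text to its number of occurrences.
    return dict(Counter(texto.split()))

print(conta_palavras("a hegemonia do ambiente do fluxo do"))
```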
Digite o nome do arquivo de texto: teste {'Gostaria': 1, 'de': 2, 'enfatizar': 1, 'que': 1, 'a': 2, 'hegemonia': 1, 'do': 3, 'ambiente': 2, 'político': 1, 'obstaculiza': 1, 'apreciação': 1, 'fluxo': 1, 'informações.': 1}
MIT
semana-02/lista-exercicio/lista-3/poo2-lista3-larissa_justen.ipynb
larissajusten/ufsc-object-oriented-programming
2️⃣ **Exercise 2.** Write a function that deletes from the previous dictionary all words that are 'stopwords'. See https://gist.github.com/alopes/5358189
stopwords = ['de', 'a', 'o', 'que', 'e', 'do', 'da', 'em', 'um', 'para', 'é', 'com', 'não', 'uma', 'os', 'no', 'se', 'na', 'por', 'mais', 'as', 'dos', 'como', 'mas', 'foi', 'ao', 'ele', 'das', 'tem', 'à', 'seu', 'sua', 'ou', 'ser', 'quando', 'muito', 'há', 'nos', 'já', 'está', 'eu', 'também', 'só', 'pelo', 'pela', 'até...
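The deletion step can be sketched as follows (a minimal version, assuming the word counts come from the previous exercise; names are illustrative):

```python
def apaga_stopwords(contagem: dict, stopwords) -> dict:
    # Return a copy of the word-count dictionary without stopword keys.
    return {palavra: n for palavra, n in contagem.items()
            if palavra.lower() not in stopwords}

contagem = {'Gostaria': 1, 'de': 2, 'a': 2, 'hegemonia': 1}
print(apaga_stopwords(contagem, {'de', 'a', 'o', 'que', 'e', 'do'}))
```

Using a set for the stopwords makes each membership test O(1).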
Digite o nome do arquivo de texto: teste Dicionario: {'Gostaria': 1, 'de': 2, 'enfatizar': 1, 'que': 1, 'a': 2, 'hegemonia': 1, 'do': 3, 'ambiente': 2, 'político': 1, 'obstaculiza': 1, 'apreciação': 1, 'fluxo': 1, 'informações.': 1} Apos apagar stopwords: {'Gostaria': 1, 'enfatizar': 1, 'hegemonia': 1, 'ambiente': 2,...
MIT
semana-02/lista-exercicio/lista-3/poo2-lista3-larissa_justen.ipynb
larissajusten/ufsc-object-oriented-programming
3️⃣ **Exercise 3.** Write a program that reads two grades for several students and stores the grades in a dictionary, where the key is the student's name. Data entry ends when an empty string is read as the name. Write a function that returns a student's average, given their name.
def le_notas(dicionario={}):
    nome_aluno = input('Digite o nome do aluno: ')
    if nome_aluno.isalpha() and nome_aluno not in dicionario.keys():
        nota1 = float(input('Digite a primeira nota: (somente numeros) '))
        nota2 = float(input('Digite a segunda nota: (somente numeros) '))
        dicionario[n...
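The averaging function on its own can be sketched as follows (assuming the grades are stored as pairs keyed by name, as in the exercise):

```python
def media_do_aluno(notas: dict, nome: str) -> float:
    # notas maps a student's name to their two grades.
    nota1, nota2 = notas[nome]
    return (nota1 + nota2) / 2

notas = {"Larissa": (1.0, 2.0), "Jesus": (0.0, 0.0)}
print(media_do_aluno(notas, "Larissa"))
```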
Digite o nome do aluno: Larissa Digite a primeira nota: (somente numeros) 1 Digite a segunda nota: (somente numeros) 2 Digite o nome do aluno: Jesus Digite a primeira nota: (somente numeros) 0 Digite a segunda nota: (somente numeros) 0 Digite o nome do aluno: Digite o nome do aluno que deseja saber a nota: Jesus Jesu...
MIT
semana-02/lista-exercicio/lista-3/poo2-lista3-larissa_justen.ipynb
larissajusten/ufsc-object-oriented-programming
4️⃣ **Exercise 4.** A kart track allows 10 laps for each of 6 drivers. Write a program that reads all the lap times in seconds and stores them in a dictionary, where the key is the driver's name. At the end, report who had the best lap of the race and on which lap; and also the final classification in order (1st ...
def le_tempos_corridas(array_tempos=[], numero_voltas=0):
    if numero_voltas < 10:
        tempo_volta = float(
            input(f'[{numero_voltas+1}] Digite o tempo: (numerico/seg) '))
        if tempo_volta > 0:
            array_tempos.append(tempo_volta)
            le_tempos_corridas(array_tempos, numero_voltas...
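The best-lap and final-classification logic can be sketched as follows (a minimal version assuming lap times are stored as lists keyed by driver name; function names are illustrative):

```python
def melhor_volta(tempos: dict):
    # tempos maps driver name -> list of lap times in seconds.
    # Returns (driver, lap number, time) of the fastest lap.
    return min(
        ((nome, i + 1, t)
         for nome, voltas in tempos.items()
         for i, t in enumerate(voltas)),
        key=lambda item: item[2],
    )

def classificacao_final(tempos: dict):
    # Drivers ordered by total race time, fastest first.
    return sorted(tempos, key=lambda nome: sum(tempos[nome]))

tempos = {"Larissa": [10.0, 15.0], "Jesus": [1.0, 1.0]}
print(melhor_volta(tempos))
print(classificacao_final(tempos))
```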
[1] Digite o nome do corredor: Larissa [1] Digite o tempo: (numerico/seg) 10 [2] Digite o tempo: (numerico/seg) 15 [2] Digite o nome do corredor: Jesus [1] Digite o tempo: (numerico/seg) 0 # Valor invalido no tempo da volta! [1] Digite o tempo: (numerico/seg) 1 [2] Digite o tempo: (numerico/seg) 1 # Jesus teve a melhor...
MIT
semana-02/lista-exercicio/lista-3/poo2-lista3-larissa_justen.ipynb
larissajusten/ufsc-object-oriented-programming
6️⃣ **Exercise 6.** Create 10 frozensets with 30 random numbers each, and build a dictionary containing the sum of each of them.
import random

def get_random_set(size):
    return frozenset(random.sample(range(1, 100), size))

def get_random_sets(size, num_sets):
    return [get_random_set(size) for _ in range(num_sets)]

def get_dict_from_sets_sum(sets):
    return {key: sum(value) for key, value in enumerate(sets)}

_sets = get_random_sets(30, 10...
{0: 1334, 1: 1552, 2: 1762, 3: 1387, 4: 1535, 5: 1672, 6: 1422, 7: 1572, 8: 1567, 9: 1562}
MIT
semana-02/lista-exercicio/lista-3/poo2-lista3-larissa_justen.ipynb
larissajusten/ufsc-object-oriented-programming
Creating synthetic samples

After training the synthesizer on top of the fraudulent events, we are able to generate as many synthetic samples as desired, always keeping in mind that there's a trade-off between the number of records used for the model training and privacy.
# Importing the required packages
import os
from ydata.synthesizers.regular import RegularSynthesizer

try:
    os.mkdir('outputs')
except FileExistsError as e:
    print('Directory already exists')
_____no_output_____
MIT
5 - synthetic-data-applications/regular-tabular/credit_card_fraud-balancing/pipeline/sample_synth.ipynb
ydataai/Blog
Init the synth & Samples generation
n_samples = os.environ['NSAMPLES']

model = RegularSynthesizer.load('outputs/synth_model.pkl')
synth_data = model.sample(int(n_samples))
INFO: 2022-02-20 23:44:25,790 [SYNTHESIZER] - Start generating model samples.
MIT
5 - synthetic-data-applications/regular-tabular/credit_card_fraud-balancing/pipeline/sample_synth.ipynb
ydataai/Blog
Sending the synthetic samples to the next pipeline stage
OUTPUT_PATH = os.environ['OUTPUT_PATH']

from ydata.connectors.filetype import FileType
from ydata.connectors import LocalConnector

conn = LocalConnector()

# Creating the output with the synthetic sample
conn.write_file(synth_data, path=OUTPUT_PATH, file_type=FileType.CSV)
_____no_output_____
MIT
5 - synthetic-data-applications/regular-tabular/credit_card_fraud-balancing/pipeline/sample_synth.ipynb
ydataai/Blog
module name here> API details.
#hide
from nbdev.showdoc import *
from fastcore.test import *
_____no_output_____
Apache-2.0
00_core.ipynb
akshaysynerzip/hello_nbdev
This is a function to say hello
#export
def say_hello(to):
    "Say hello to somebody"
    return f'Hello {to}!'

say_hello("Akshay")
test_eq(say_hello("akshay"), "Hello akshay!")
_____no_output_____
Apache-2.0
00_core.ipynb
akshaysynerzip/hello_nbdev
Demo: ShiftAmountActivity

The basic steps to set up an OpenCLSim simulation are:

* Import libraries
* Initialise simpy environment
* Define object classes
* Create objects
  * Create sites
  * Create vessels
  * Create activities
* Register processes and run simpy

----

This notebook shows the workings of the ShiftAmountActivity....
import datetime, time

import simpy
import shapely.geometry
import pandas as pd

import openclsim.core as core
import openclsim.model as model
import openclsim.plot as plot
_____no_output_____
MIT
notebooks/03_ShiftAmountActivity.ipynb
TUDelft-CITG/Hydraulic-Infrastructure-Realisation
1. Initialise simpy environment
# setup environment
simulation_start = 0
my_env = simpy.Environment(initial_time=simulation_start)
_____no_output_____
MIT
notebooks/03_ShiftAmountActivity.ipynb
TUDelft-CITG/Hydraulic-Infrastructure-Realisation
2. Define object classes
# create a Site object based on desired mixin classes
Site = type(
    "Site",
    (
        core.Identifiable,
        core.Log,
        core.Locatable,
        core.HasContainer,
        core.HasResource,
    ),
    {},
)

# create a TransportProcessingResource object based on desired mixin classes
TransportProcessin...
_____no_output_____
MIT
notebooks/03_ShiftAmountActivity.ipynb
TUDelft-CITG/Hydraulic-Infrastructure-Realisation
3. Create objects

3.1. Create site object(s)
# prepare input data for from_site
location_from_site = shapely.geometry.Point(4.18055556, 52.18664444)
data_from_site = {"env": my_env,
                  "name": "from_site",
                  "geometry": location_from_site,
                  "capacity": 100,
                  "level": 100
                 }

# instantiate to_si...
_____no_output_____
MIT
notebooks/03_ShiftAmountActivity.ipynb
TUDelft-CITG/Hydraulic-Infrastructure-Realisation
3.2. Create vessel object(s)
# prepare input data for vessel_01
data_vessel01 = {"env": my_env,
                 "name": "vessel01",
                 "geometry": location_from_site,
                 "capacity": 5,
                 "compute_v": lambda x: 10
                }

# instantiate vessel_01
vessel01 = TransportProcessingResource(**data_ves...
_____no_output_____
MIT
notebooks/03_ShiftAmountActivity.ipynb
TUDelft-CITG/Hydraulic-Infrastructure-Realisation
3.3 Create activity/activities
# initialise registry
registry = {}

shift_amount_activity_data = model.ShiftAmountActivity(
    env=my_env,
    name="Shift amount activity",
    registry=registry,
    processor=vessel01,
    origin=from_site,
    destination=vessel01,
    amount=100,
    duration=60,
)
_____no_output_____
MIT
notebooks/03_ShiftAmountActivity.ipynb
TUDelft-CITG/Hydraulic-Infrastructure-Realisation
4. Register processes and run simpy
model.register_processes([shift_amount_activity_data])
my_env.run()
_____no_output_____
MIT
notebooks/03_ShiftAmountActivity.ipynb
TUDelft-CITG/Hydraulic-Infrastructure-Realisation
5. Inspect results

5.1 Inspect logs

We can now inspect the logs. The model has shifted cargo from from_site onto vessel01.
display(plot.get_log_dataframe(shift_amount_activity_data, [shift_amount_activity_data]))
display(plot.get_log_dataframe(from_site, [shift_amount_activity_data]))
display(plot.get_log_dataframe(vessel01, [shift_amount_activity_data]))
_____no_output_____
MIT
notebooks/03_ShiftAmountActivity.ipynb
TUDelft-CITG/Hydraulic-Infrastructure-Realisation
Early Reinforcement Learning

With the advances of modern computing power, the study of Reinforcement Learning is having a heyday. Machines are now able to learn complex tasks once thought to be solely in the domain of humans, from controlling the [heating and cooling in massive data centers](https://www.technologyrevie...
# Ensure the right version of Tensorflow is installed.
!pip install tensorflow==2.5 --user
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
**NOTE**: You may ignore any WARNINGS or ERRORS related to the dependency resolver in the output of the above cell. If you run into such errors, rerun the cell.
!pip install gym==0.12.5 --user
There are [four methods from Gym](http://gym.openai.com/docs/) that are going to be useful to us in order to save the gumdrop.* `make` allows us to build the environment or game that we can pass actions to* `reset` will reset an environment to its starting configuration and return the state of the player* `render` dis...
import gym
import numpy as np
import random

env = gym.make('FrozenLake-v0', is_slippery=False)
state = env.reset()
env.render()
If we print the state we'll get `0`. This is telling us which square we're in. Each square is labeled from `0` to `15` from left to right, top to bottom, like this:| | | | ||-|-|-|-||0|1|2|3||4|5|6|7||8|9|10|11||12|13|14|15|
print(state)
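The numbering can be sanity-checked with a small conversion helper (`state_to_coords` is illustrative, not part of the notebook):

```python
# A tiny helper (not part of the notebook) to sanity-check the grid
# numbering: each state 0-15 maps to a (row, col) on the 4x4 lake.
GRID_WIDTH = 4

def state_to_coords(state):
    """Map a state number to (row, col), counting left-to-right, top-to-bottom."""
    return state // GRID_WIDTH, state % GRID_WIDTH

print(state_to_coords(0))   # (0, 0): top-left start
print(state_to_coords(15))  # (3, 3): bottom-right goal
```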
We can make a simple print function to let us know whether it's game won, game over, or game on.
def print_state(state, done):
    statement = "Still Alive!"
    if done:
        statement = "Cocoa Time!" if state == 15 else "Game Over!"
    print(state, "-", statement)
We can control the gumdrop ourselves with the `step` method. Run the below cell over and over again trying to move from the starting position to the goal. Good luck!
# 0 left
# 1 down
# 2 right
# 3 up

# Uncomment to reset the game
#env.reset()

action = 2  # Change me, please!
state, _, done, _ = env.step(action)
env.render()
print_state(state, done)
Were you able to reach the hot chocolate? If so, great job! There are multiple paths through the maze. One solution is `[1, 1, 2, 2, 1, 2]`. Let's loop through our actions in order to get used to interacting with the environment programmatically.
def play_game(actions): state = env.reset() step = 0 done = False while not done and step < len(actions): action = actions[step] state, _, done, _ = env.step(action) env.render() step += 1 print_state(state, done) actions = [1, 1, 2, 2, 1, 2] # Replace ...
Nice, so we know how to get through the maze, but how do we teach that to the gumdrop? It's just some bytes in an android phone. It doesn't have our human insight.We could give it our list of actions directly, but then it would be copying us and not really learning. This was a tricky one to the mathematicians and compu...
LAKE = np.array([[0, 0, 0, 0],
                 [0, -1, 0, -1],
                 [0, 0, 0, -1],
                 [-1, 0, 0, 1]])
LAKE_WIDTH = len(LAKE[0])
LAKE_HEIGHT = len(LAKE)

DISCOUNT = .9  # Change me to be a value between 0 and 1.
current_values = np.zeros_like(LAKE)
The Gym environment class has a handy property for finding the number of states in an environment called `observation_space`. In our case, there are 16 integer states, so it will label it as "Discrete". Similarly, `action_space` will tell us how many actions are available to the agent. Let's take advantage of these to mak...
print("env.observation_space -", env.observation_space) print("env.observation_space.n -", env.observation_space.n) print("env.action_space -", env.action_space) print("env.action_space.n -", env.action_space.n) STATE_SPACE = env.observation_space.n ACTION_SPACE = env.action_space.n STATE_RANGE = range(STATE_SPACE) AC...
We'll need some sort of function to figure out what the best neighboring cell is. The function below takes a cell of the lake, looks at the current value mapping (to be called with `current_values`), and sees what the value of the adjacent state corresponding to the given `action` is.
def get_neighbor_value(state_x, state_y, values, action): """Returns the value of a state's neighbor. Args: state_x (int): The state's horizontal position, 0 is the lake's left. state_y (int): The state's vertical position, 0 is the lake's top. values (float array): The current iter...
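The cell above is truncated in this extract. As a rough sketch of what `get_neighbor_value` likely does (an assumption based on the surrounding text, not the notebook's actual code), moves off the edge of the lake are clamped so the agent stays put, matching the "slips against the wall" behavior described later:

```python
# Hedged sketch of get_neighbor_value; LAKE and its dimensions are
# redefined here only so the example runs standalone.
import numpy as np

LAKE = np.array([[0, 0, 0, 0],
                 [0, -1, 0, -1],
                 [0, 0, 0, -1],
                 [-1, 0, 0, 1]])
LAKE_WIDTH = len(LAKE[0])
LAKE_HEIGHT = len(LAKE)

def get_neighbor_value(state_x, state_y, values, action):
    """Return the value of the neighbor reached by taking `action`.

    Actions: 0 left, 1 down, 2 right, 3 up. Moves off the lake edge
    are clamped so the agent stays in place.
    """
    if action == 0:    # left
        state_x = max(state_x - 1, 0)
    elif action == 1:  # down
        state_y = min(state_y + 1, LAKE_HEIGHT - 1)
    elif action == 2:  # right
        state_x = min(state_x + 1, LAKE_WIDTH - 1)
    elif action == 3:  # up
        state_y = max(state_y - 1, 0)
    return values[state_y][state_x]

# Moving right from (x=2, y=3) lands on the goal cell, value 1.
print(get_neighbor_value(2, 3, LAKE, 2))
```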
But this doesn't find the best action, and the gumdrop is going to need that if it wants to greedily get off the lake. The `get_max_neighbor` function we've defined below takes a number corresponding to a cell as `state_number` and the same value mapping as `get_neighbor_value`.
def get_state_coordinates(state_number): state_x = state_number % LAKE_WIDTH state_y = state_number // LAKE_HEIGHT return state_x, state_y def get_max_neighbor(state_number, values): """Finds the maximum valued neighbor for a given state. Args: state_number (int): the state to find the...
Now, let's write our value iteration code. We'll write a function that carries out one step of the iteration by checking each state and finding its maximum neighbor. The values will be reshaped so that they're in the form of the lake, but the policy will stay as a list of ints. This way, when Gym returns a state, all we nee...
def iterate_value(current_values): """Finds the future state values for an array of current states. Args: current_values (int array): the value of current states. Returns: next_values (int array): The value of states based on future states. next_policies (int array): The recomm...
This is what our values look like after one step. Right now, it just looks like the lake. That's because we started with an array of zeros for `current_values`, and the terminal states of the lake were loaded in.
next_values
And this is what our policy looks like reshaped into the form of the lake. The `-1`'s are terminal states. Right now, the agent will move left in any non-terminal state, because it sees all of those states as equal. Remember, if the gumdrop is along the leftmost side of the lake, and tries to move left, it will slip on...
np.array(next_policies).reshape((LAKE_HEIGHT, LAKE_WIDTH))
There's one last step to apply the Bellman Equation, the `discount`! We'll multiply our next states by the `discount` and set that to our `current_values`. One loop done!
current_values = DISCOUNT * next_values
current_values
Run the below cell over and over again to see how our values change with each iteration. It should be complete after six iterations when the values no longer change. The policy will also change as the values are updated.
next_values, next_policies = iterate_value(current_values)
print("Value")
print(next_values)
print("Policy")
print(np.array(next_policies).reshape((4,4)))
current_values = DISCOUNT * next_values
Have a completed policy? Let's see it in action! We'll update our `play_game` function to instead take our list of policies. That way, we can start in a random position and still get to the end.
def play_game(policy):
    state = env.reset()
    step = 0
    done = False
    while not done:
        action = policy[state]  # This line is new.
        state, _, done, _ = env.step(action)
        env.render()
        step += 1
    print_state(state, done)

play_game(next_policies)
Phew! Good job, team! The gumdrop made it out alive. So what became of our gumdrop hero? Well, the next day, it was making another snowman and fell onto an even more slippery and deadly lake. Doh! Turns out this story is part of a trilogy. Feel free to move onto the next section after your own sip of cocoa, coffee, tea...
env = gym.make('FrozenLake-v0', is_slippery=True)
state = env.reset()
env.render()
Hmm, looks the same as before. Let's try applying our old policy and see what happens.
play_game(next_policies)
Was there a game over? There's a small chance that the gumdrop made it to the end, but it's much more likely that it accidentally slipped and fell into a hole. Oh no! We can try running the above code cell over and over again, but it might take a while. In fact, this is a similar roadblock Bellman and his co...
def find_future_values(current_values, current_policies): """Finds the next set of future values based on the current policy.""" next_values = [] for state in STATE_RANGE: current_policy = current_policies[state] state_x, state_y = get_state_coordinates(state) # If the cell has som...
After we've calculated our new values, then we'll update the policy (and not the values) based on the maximum neighbor. If there's no change in the policy, then we're done. The below is very similar to our `get_max_neighbor` function. Can you see the differences?
def find_best_policy(next_values): """Finds the best policy given a value mapping.""" next_policies = [] for state in STATE_RANGE: state_x, state_y = get_state_coordinates(state) # No policy or best value yet max_value = -np.inf best_policy = -1 if not LAKE[state_y,...
To complete the Policy Iteration algorithm, we'll combine the two functions above. Conceptually, we'll be alternating between updating our value function and updating our policy function.
def iterate_policy(current_values, current_policies): """Finds the future state values for an array of current states. Args: current_values (int array): the value of current states. current_policies (int array): a list where each cell is the recommended action for the state matc...
Next, let's modify the `get_neighbor_value` function to now include the slippery ice. Remember the `P` in the Bellman Equation above? It stands for the probability of ending up in a new state given the current state and action taken. That is, we'll take a weighted sum of the values of all possible states based on our c...
def get_locations(state_x, state_y, policy): left = [state_y, state_x-1] down = [state_y+1, state_x] right = [state_y, state_x+1] up = [state_y-1, state_x] directions = [left, down, right, up] num_actions = len(directions) gumdrop_right = (policy - 1) % num_actions gumdrop_left = (polic...
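The `get_locations` cell above is truncated here. In Gym's slippery FrozenLake the agent moves in the intended direction or slips perpendicular to it, each with equal probability. This hypothetical helper (not from the notebook) enumerates the three candidate actions using the same modular trick as the `gumdrop_left`/`gumdrop_right` lines above:

```python
# Enumerate the intended action together with its two perpendicular
# slips (0 left, 1 down, 2 right, 3 up). With the directions ordered
# [left, down, right, up], the perpendicular neighbors of any action
# are (action - 1) and (action + 1), modulo the number of actions.
def get_slip_actions(action, num_actions=4):
    """Return the intended action and its two perpendicular slips."""
    return [(action - 1) % num_actions, action, (action + 1) % num_actions]

print(get_slip_actions(1))  # down can slip left or right: [0, 1, 2]
```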
Then, we can add it to `get_neighbor_value` to find the weighted value of all the possible states the gumdrop can end up in.
def get_neighbor_value(state_x, state_y, values, policy): """Returns the value of a state's neighbor. Args: state_x (int): The state's horizontal position, 0 is the lake's left. state_y (int): The state's vertical position, 0 is the lake's top. values (float array): The current iter...
For Policy Iteration, we'll start off with a random policy, if only because the Gumdrop doesn't know any better yet. We'll reset our current values while we're at it.
current_values = np.zeros_like(LAKE)
policies = np.random.choice(ACTION_RANGE, size=STATE_SPACE)
np.array(policies).reshape((4,4))
As before with Value Iteration, run the cell below multiple times until the policy no longer changes. It should only take 2-3 clicks compared to Value Iteration's 6.
next_values, policies = iterate_policy(current_values, policies)
print("Value")
print(next_values)
print("Policy")
print(np.array(policies).reshape((4,4)))
current_values = DISCOUNT * next_values
Hmm, does this work? Let's see! Run the cell below to watch the gumdrop slip its way to victory.
play_game(policies)
So what was the learned strategy here? The gumdrop learned to hug the left wall of boulders until it was down far enough to make a break for the exit. Instead of heading directly for it though, it took advantage of actions that did not have a hole of death in them. Patience is a virtue!We promised this story was a tril...
new_row = np.zeros((1, env.action_space.n))
q_table = np.copy(new_row)
q_map = {0: 0}

def print_q(q_table, q_map):
    print("mapping")
    print(q_map)
    print("q_table")
    print(q_table)

print_q(q_table, q_map)
Our new `get_action` function will help us read the `q_table` and find the best action.First, we'll give the agent the ability to act randomly as opposed to choosing the best known action. This gives it the ability to explore and find new situations. This is done with a random chance to act randomly. So random!When the...
def get_action(q_map, q_table, state_row, random_rate): """Find max-valued actions and randomly select from them.""" if random.random() < random_rate: return random.randint(0, ACTION_SPACE-1) action_values = q_table[state_row] max_indexes = np.argwhere(action_values == action_values.max()) ...
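The `get_action` cell above is cut off in this extract. A minimal epsilon-greedy sketch consistent with the description follows; note that the `q_map` parameter from the notebook's signature is dropped here for brevity, which is an assumption on my part, not the notebook's exact code:

```python
# Epsilon-greedy action selection: with probability random_rate act
# randomly (explore); otherwise pick uniformly among the max-valued
# actions in the state's Q-table row (exploit, with random tie-breaks).
import random
import numpy as np

ACTION_SPACE = 4  # left, down, right, up

def get_action(q_table, state_row, random_rate):
    """Find max-valued actions and randomly select from them."""
    if random.random() < random_rate:
        return random.randint(0, ACTION_SPACE - 1)
    action_values = q_table[state_row]
    max_indexes = np.argwhere(action_values == action_values.max()).flatten()
    return int(np.random.choice(max_indexes))
```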
Here, we'll define how the `q_table` gets updated. We'll apply the Bellman Equation as before, but since there is so much luck involved between slipping and random actions, we'll update our `q_table` as a weighted average between the `old_value` we're updating and the `future_value` based on the best action in the next...
def update_q(q_table, new_state_row, reward, old_value):
    """Returns an updated Q-value based on the Bellman Equation."""
    learning_rate = .1  # Change to be between 0 and 1.
    future_value = reward + DISCOUNT * np.max(q_table[new_state_row])
    return old_value + learning_rate * (future_value - old_value)
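To make the update rule concrete, here is a small worked example (reproducing the function with `learning_rate = 0.1` and `DISCOUNT = 0.9` so it runs standalone): reaching the goal with reward 1 from a fresh Q-value of 0 nudges that value up by one tenth of the error.

```python
import numpy as np

DISCOUNT = 0.9

def update_q(q_table, new_state_row, reward, old_value):
    """Returns an updated Q-value based on the Bellman Equation."""
    learning_rate = .1
    future_value = reward + DISCOUNT * np.max(q_table[new_state_row])
    return old_value + learning_rate * (future_value - old_value)

q_table = np.zeros((2, 4))  # no future values learned yet
# Goal transition: reward 1, best next-state Q is 0, old value 0.
print(update_q(q_table, 1, 1.0, 0.0))  # 0.0 + 0.1 * (1.0 - 0.0) = 0.1
```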
We'll update our `play_game` function to take our table and mapping, and at the end, we'll return any updates to them. Once we observe new states, we'll check our mapping and add them to the table if space isn't allocated for them already. Finally, for every `state` - `action` - `new-state` transition, we'll update the ...
def play_game(q_table, q_map, random_rate, render=False): state = env.reset() step = 0 done = False while not done: state_row = q_map[state] action = get_action(q_map, q_table, state_row, random_rate) new_state, _, done, _ = env.step(action) #Add new state to table and ...
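The `play_game` cell above is truncated; the table-growth step it describes can be sketched like this (`ensure_state` is a hypothetical helper name, not from the notebook):

```python
# When a state appears that the agent hasn't seen, q_map assigns it
# the next free row and a zero row is appended to q_table.
import numpy as np

ACTION_SPACE = 4

def ensure_state(q_table, q_map, state):
    """Allocate a Q-table row for `state` if it doesn't have one yet."""
    if state not in q_map:
        q_map[state] = len(q_table)
        q_table = np.vstack([q_table, np.zeros((1, ACTION_SPACE))])
    return q_table, q_map

q_table = np.zeros((1, ACTION_SPACE))
q_map = {0: 0}
q_table, q_map = ensure_state(q_table, q_map, 4)
print(q_map)  # {0: 0, 4: 1}
```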
Ok, time to shine, gumdrop emoji! Let's do one simulation and see what happens.
# Run to refresh the q_table.
random_rate = 1
q_table = np.copy(new_row)
q_map = {0: 0}

q_table, q_map = play_game(q_table, q_map, random_rate, render=True)
print_q(q_table, q_map)
Unless the gumdrop was incredibly lucky, chances were, it fell in some death water. Q-learning is markedly different from Value Iteration or Policy Iteration in that it attempts to simulate how an animal learns in unknown situations. Since the layout of the lake is unknown to the Gumdrop, it doesn't know which states a...
for _ in range(1000):
    q_table, q_map = play_game(q_table, q_map, random_rate)
    random_rate = random_rate * .99

print_q(q_table, q_map)
random_rate
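To see why 1000 games is enough to quiet the exploration, note what repeated decay does to the rate:

```python
# Each game multiplies random_rate by 0.99, so after 1000 games the
# exploration rate has decayed to 0.99**1000, and the agent has almost
# fully switched from exploring to exploiting its Q-table.
random_rate = 1.0
for _ in range(1000):
    random_rate *= 0.99
print(random_rate)  # roughly 4.3e-5
```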
Cats have nine lives, our Gumdrop lived a thousand! Moment of truth. Can it get out of the lake now that it matters?
q_table, q_map = play_game(q_table, q_map, 0, render=True)
Web interface `s4_design_sim_tool`> Interactive web-based user interface for the CMB-S4 reference simulation tool See the [Documentation](https://cmb-s4.github.io/s4_design_sim_tool/)If your browser doesn't visualize the widget input boxes, try reloading the page and **disable your adblocker**.For support requests, [o...
# default_exp ui import ipywidgets as widgets from IPython.display import display w = {} for emission in ["foreground_emission", "CMB_unlensed", "CMB_lensing_signal"]: w[emission] = widgets.BoundedFloatText( value=1, min=0, max=1, step=0.01, description='Weight:', disabled=False ) em...
Apache-2.0
06_ui.ipynb
CMB-S4/s4_design_sim_tool
Sky emission weighting
Each sky emission has a weighting factor between 0 and 1
Foreground emission
Synchrotron, Dust, Free-free, AME
Websky CIB, tSZ, kSZ
display(w["foreground_emission"])
Unlensed CMBPlanck cosmological parameters, no tensor modes
display(w["CMB_unlensed"])
CMB lensing signal
CMB lensed - CMB unlensed:* 1 for lensed CMB* 0 for unlensed CMB* `>0, <1` for residual after de-lensing
For the case of partial de-lensing, consider that lensing is a non-linear effect and this is a very rough approximation; still, it could be useful in some cases, for example low-ell BB modes.
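Not from the tool's source, but the weighting described here amounts to a per-pixel interpolation between the unlensed and lensed maps; a minimal sketch (`combine_cmb` is a hypothetical name):

```python
# The lensing-signal weight w interpolates between unlensed (w=0) and
# lensed (w=1) maps: unlensed + w * (lensed - unlensed) per pixel.
def combine_cmb(unlensed, lensed, w):
    """Return unlensed + w * (lensed - unlensed), element-wise."""
    return [u + w * (l - u) for u, l in zip(unlensed, lensed)]

print(combine_cmb([1.0, 2.0], [3.0, 6.0], 0.5))  # [2.0, 4.0]
```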
display(w["CMB_lensing_signal"])
CMB tensor to scalar ratio
Value of the `r` cosmological parameter
display(w["CMB_tensor_to_scalar_ratio"])