Dataset columns: `markdown` (strings, 0–1.02M chars) · `code` (strings, 0–832k chars) · `output` (strings, 0–1.02M chars) · `license` (strings, 3–36 chars) · `path` (strings, 6–265 chars) · `repo_name` (strings, 6–127 chars)
Preparing the dataset **Extract only the digits "0" and "1" from the MNIST dataset**
#Class instance that preprocesses the data transform = transforms.Compose( [transforms.Resize((16, 16)), transforms.ToTensor(), transforms.Normalize((0.5, ), (0.5, ))]) batch_size = 100 #train dataset to use trainset = torchvision.datasets.MNIST(root='./data', train=True, ...
10000 1000
MIT
text/Chapter9.ipynb
Selubi/tutorial_python
Model definition
import torch.nn.functional as F #Model definition class NeuralNet(torch.nn.Module): def __init__(self, n_input=256, n_hidden=16, n_output=8): super(NeuralNet, self).__init__() self.n_input = n_input #Weight matrices for the first and second layers self.l1 = torch.nn.Linear(n_input, n_hidden, bias = True) se...
_____no_output_____
MIT
text/Chapter9.ipynb
Selubi/tutorial_python
Let's try running a prediction. The model built last time can only classify the digits 2–9, so the results will be nonsense.
#Function that visualizes the model's predictions def prediction(model, num=10, c=2): with torch.no_grad(): img, t = next(iter(testloader)) t_pred = model(img) fig = plt.figure(figsize=(12,4)) ax = [] for i in range(num): print(f'true: {t[i]}, predict: {np.argmax(t_pred[i])+c}') ...
true: 1, predict: 8 true: 1, predict: 2 true: 0, predict: 5 true: 1, predict: 8 true: 0, predict: 5 true: 0, predict: 2 true: 0, predict: 5 true: 1, predict: 3 true: 0, predict: 5 true: 1, predict: 7
MIT
text/Chapter9.ipynb
Selubi/tutorial_python
Prepare for transfer learning.
for param in model.parameters(): param.requires_grad = False#freeze every parameter in the model model.l3 = torch.nn.Linear(model.l3.in_features, 2)#replace the final layer with a 2-class head and make it trainable model.l3.requires_grad = True#True by default, so not strictly needed, but written explicitly #loss function criterion = torch.nn.CrossEntropyLoss() #optimizer optimiz...
true: 1, predict: 0 true: 1, predict: 1 true: 0, predict: 0 true: 1, predict: 1 true: 0, predict: 0 true: 0, predict: 1 true: 0, predict: 0 true: 1, predict: 1 true: 0, predict: 0 true: 1, predict: 1
MIT
text/Chapter9.ipynb
Selubi/tutorial_python
You should find the model can now classify reasonably well. Once you have confirmed that it classifies correctly, comment out the parameter-loading line, e.g. ```python model.load_state_dict(torch.load("params/model_state_dict.pth"), strict=False)```, and train again. You will likely see the classification accuracy get worse.
torch.save(model.state_dict(), "params/01_model_state_dict.pth")
_____no_output_____
MIT
text/Chapter9.ipynb
Selubi/tutorial_python
k-Nearest Neighbor (kNN) exercise — *Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.* The k...
%matplotlib # Run some setup code for this notebook. import random import numpy as np from cs231n.data_utils import load_CIFAR10 import matplotlib.pyplot as plt # This is a bit of magic to make matplotlib figures appear inline in the notebook # rather than in a new window. %matplotlib inline plt.rcParams['figure.figs...
_____no_output_____
MIT
assign/assignment1/knn.ipynb
zhmz90/CS231N
We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps: 1. First we must compute the distances between all test examples and all train examples. 2. Given these distances, for each test example we find the k nearest examples and have them vote for t...
# Open cs231n/classifiers/k_nearest_neighbor.py and implement # compute_distances_two_loops. # Test your implementation: dists = classifier.compute_distances_two_loops(X_test) # We can visualize the distance matrix: each row is a single test example and # its distances to training examples plt.imshow(dists, interpolat...
_____no_output_____
MIT
assign/assignment1/knn.ipynb
zhmz90/CS231N
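The fully vectorized distance computation that the assignment builds toward can be sketched with the identity $\lVert a-b\rVert^2 = \lVert a\rVert^2 + \lVert b\rVert^2 - 2\,a\cdot b$ (a sketch under assumed shapes, not the assignment's own solution):

```python
import numpy as np

def compute_distances_vectorized(X_test, X_train):
    """L2 distance matrix with no explicit loops.

    Assumed shapes: X_test (num_test, D), X_train (num_train, D).
    Uses ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b.
    """
    test_sq = np.sum(X_test ** 2, axis=1, keepdims=True)   # (num_test, 1)
    train_sq = np.sum(X_train ** 2, axis=1)                # (num_train,)
    cross = X_test @ X_train.T                             # (num_test, num_train)
    sq = np.maximum(test_sq + train_sq - 2.0 * cross, 0.0) # clamp tiny negatives
    return np.sqrt(sq)
```

Each row of the result is one test example's distances to every training example, matching the matrix visualized with `plt.imshow(dists)`.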
**Inline Question 1:** Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)- What in the data is the cause behind the distinctly bright rows?- What causes the ...
# Now implement the function predict_labels and run the code below: # We use k = 1 (which is Nearest Neighbor). y_test_pred = classifier.predict_labels(dists, k=1) # Compute and print the fraction of correctly predicted examples num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print...
Got 137 / 500 correct => accuracy: 0.274000
MIT
assign/assignment1/knn.ipynb
zhmz90/CS231N
You should expect to see approximately `27%` accuracy. Now let's try out a larger `k`, say `k = 5`:
y_test_pred = classifier.predict_labels(dists, k=5) num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
Got 142 / 500 correct => accuracy: 0.284000
MIT
assign/assignment1/knn.ipynb
zhmz90/CS231N
You should expect to see a slightly better performance than with `k = 1`.
# Now let's speed up distance matrix computation by using partial vectorization # with one loop. Implement the function compute_distances_one_loop and run the # code below: dists_one = classifier.compute_distances_one_loop(X_test) # To ensure that our vectorized implementation is correct, we make sure that it # agrees ...
One loop version took 48.089830 seconds
MIT
assign/assignment1/knn.ipynb
zhmz90/CS231N
Cross-validation — We have implemented the k-Nearest Neighbor classifier, but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
num_folds = 5 k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100] X_train_folds = [] y_train_folds = [] ################################################################################ # TODO: # # Split up the training data into folds. After splittin...
Got 141 / 500 correct => accuracy: 0.282000
MIT
assign/assignment1/knn.ipynb
zhmz90/CS231N
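The fold-splitting loop the TODO describes can be sketched as follows (`knn_accuracy` is a hypothetical stand-in for training and scoring the classifier on one held-out fold):

```python
import numpy as np

def cross_validate(X, y, num_folds, k_choices, knn_accuracy):
    """For each k, train on all-but-one fold and score on the held-out fold."""
    X_folds = np.array_split(X, num_folds)
    y_folds = np.array_split(y, num_folds)
    accs = {k: [] for k in k_choices}
    for k in k_choices:
        for i in range(num_folds):
            # Concatenate every fold except fold i into the training set
            X_tr = np.concatenate(X_folds[:i] + X_folds[i + 1:])
            y_tr = np.concatenate(y_folds[:i] + y_folds[i + 1:])
            accs[k].append(knn_accuracy(X_tr, y_tr, X_folds[i], y_folds[i], k))
    return accs
```

The best `k` is then the one with the highest mean accuracy across its `num_folds` entries.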
BERT distillation — A fine-tuned BERT model achieves very good quality on many NLP tasks. However, it cannot always be applied in practice because the model is very large and runs rather slowly. Several ways to get around this limitation have been devised. One of them is `k...
pip install transformers catboost import os import random import numpy as np import pandas as pd import torch from transformers import AutoConfig, AutoModelForSequenceClassification from transformers import AutoTokenizer from torch.utils.data import TensorDataset, DataLoader, SequentialSampler from catboost import Po...
cuda Tesla P100-PCIE-16GB
MIT
BERT_distyll.ipynb
blanchefort/text_mining
Loading the tokenizer, model, and configuration
# config config = AutoConfig.from_pretrained('/content/drive/My Drive/colab_data/leroymerlin/model/BERT_model') # tokenizer tokenizer = AutoTokenizer.from_pretrained('/content/drive/My Drive/colab_data/leroymerlin/model/BERT_model', pad_to_max_length=True) # model model = AutoModelForSequenceClassification.from_pretrai...
_____no_output_____
MIT
BERT_distyll.ipynb
blanchefort/text_mining
Data preparation
category_index = {'Водоснабжение': 8, 'Декор': 12, 'Инструменты': 4, 'Краски': 11, 'Кухни': 15, 'Напольные покрытия': 5, 'Окна и двери': 2, 'Освещение': 13, 'Плитка': 6, 'Сад': 9, 'Сантехника': 7, 'Скобяные изделия': 10, 'Столярные изделия': 1, 'Стройматериалы': 0, 'Хранение': 14, 'Электротовары': 3} cat...
_____no_output_____
MIT
BERT_distyll.ipynb
blanchefort/text_mining
Obtaining the BERT logits
train_logits = [] with torch.no_grad(): model.to(device) for batch in tqdm(dataloader): batch = batch.to(device) outputs = model(batch) logits = outputs[0].detach().cpu().numpy() train_logits.extend(logits) #train_logits = np.vstack(train_logits)
_____no_output_____
MIT
BERT_distyll.ipynb
blanchefort/text_mining
Training the student — Now we take CatBoost's multi-output regression model and feed it all of the obtained logits.
data_pool = Pool(tokens, train_logits) distilled_model = CatBoostRegressor(iterations=2000, depth=4, learning_rate=.1, loss_function='MultiRMSE', verbose=200) distilled_model.fit(data_pool)
0: learn: 11.6947874 total: 275ms remaining: 9m 9s 200: learn: 9.0435970 total: 47s remaining: 7m 400: learn: 8.2920608 total: 1m 32s remaining: 6m 10s 600: learn: 7.7736947 total: 2m 18s remaining: 5m 22s 800: learn: 7.3674586 total: 3m 4s remaining: 4m 36s 1000: learn: 7.0166625 total: 3m 51s remaining: 3m 51s 1200: ...
MIT
BERT_distyll.ipynb
blanchefort/text_mining
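The distillation idea above can be sketched with a stand-in student: fit a model to reproduce the teacher's logits, then classify by argmax over the regressed logits (here a linear least-squares student replaces the CatBoost MultiRMSE regressor; all names are illustrative):

```python
import numpy as np

def fit_student(features, teacher_logits):
    # Solve W minimizing ||features @ W - teacher_logits||^2
    W, *_ = np.linalg.lstsq(features, teacher_logits, rcond=None)
    return W

def student_predict(features, W):
    logits = features @ W
    return logits.argmax(axis=1)   # predicted class = argmax of regressed logits
```

This mirrors the notebook's flow: the student is trained on the teacher's logits rather than on the hard labels, which is what makes it a distilled model.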
Comparing the quality of the models
category_index_inverted = dict(map(reversed, category_index.items()))
_____no_output_____
MIT
BERT_distyll.ipynb
blanchefort/text_mining
BERT's metrics:
print(classification_report(labels, np.argmax(train_logits, axis=1), target_names=category_index_inverted.values()))
precision recall f1-score support Водоснабжение 0.94 0.88 0.91 13377 Декор 1.00 0.40 0.57 2716 Инструменты 1.00 0.40 0.58 540 Краски 0.97 0.81 0.88 20397 Кухни ...
MIT
BERT_distyll.ipynb
blanchefort/text_mining
Metrics of the student model:
tokens_pool = Pool(tokens) distilled_predicted_logits = distilled_model.predict(tokens_pool, prediction_type='RawFormulaVal') # Probability print(classification_report(labels, np.argmax(distilled_predicted_logits, axis=1), target_names=category_index_inverted.values()))
precision recall f1-score support Водоснабжение 0.90 0.53 0.67 13377 Декор 0.99 0.30 0.46 2716 Инструменты 0.00 0.00 0.00 540 Краски 0.97 0.61 0.75 20397 Кухни ...
MIT
BERT_distyll.ipynb
blanchefort/text_mining
TensorBayes — an adaptation of `BayesC.cpp`. Imports
import tensorflow as tf import tensorflow_probability as tfp import numpy as np tfd = tfp.distributions
_____no_output_____
MIT
notebooks/TensorBayes.ipynb
jklopf/tensorbayes
File input — To do
# Get the number of columns in the csv: # File I/O here filenames = "" csv_in = open(filenames, "r") # open the csv ncol = len(csv_in.readline().split(",")) # read the first line and count the # of columns csv_in.close() # close the csv print("Nu...
_____no_output_____
MIT
notebooks/TensorBayes.ipynb
jklopf/tensorbayes
Reproducibility — Seed setting for reproducible research.
# To do: get a numpy seed or look at how TF implements rng. # each distributions.sample() seen below can be seeded. # ex. dist.sample(seed=32): return a sample of shape=() (scalar). # Set graph-level seed tf.set_random_seed(1234)
_____no_output_____
MIT
notebooks/TensorBayes.ipynb
jklopf/tensorbayes
Distribution functions — Random Uniform: returns a sample from a uniform distribution with limit parameters `lower` and `higher`. - Random Normal: returns a sample from a normal distribution with parameters `mean` and `standard deviation`. - Random Beta: returns a random quantile of a beta distribution with par...
# Note: written as a translation of BayesC.cpp # the function definitions might not be needed, # and the declarations of the distributions could be enough def runif(lower, higher): dist = tfd.Uniform(lower, higher) return dist.sample() def rnorm(mean, sd): dist = tfd.Normal(loc= mean, scale= sd) retu...
_____no_output_____
MIT
notebooks/TensorBayes.ipynb
jklopf/tensorbayes
Sampling functions — Sampling of the mean - Sampling of the variance of beta - Sampling of the error variance of Y - Sampling of the mixture weight
# sample mean def sample_mu(N, Esigma2, Y, X, beta): #as in BayesC, with the N parameter mean = tf.reduce_sum(tf.subtract(Y, tf.matmul(X, beta)))/N sd = tf.sqrt(Esigma2/N) mu = rnorm(mean, sd) return mu # sample variance of beta def sample_psi2_chisq( beta, NZ, v0B, s0B): df=v0B+NZ scale=(tf.nn...
_____no_output_____
MIT
notebooks/TensorBayes.ipynb
jklopf/tensorbayes
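The mean draw defined above can be sketched in plain NumPy (same conditional as `sample_mu`: mean $\sum(Y - X\beta)/N$, sd $\sqrt{\sigma^2/N}$; names are illustrative):

```python
import numpy as np

def sample_mu_np(rng, Y, X, beta, sigma2):
    """Draw mu from its full conditional: N(sum(Y - X beta)/N, sigma2/N)."""
    N = Y.shape[0]
    mean = np.sum(Y - X @ beta) / N
    sd = np.sqrt(sigma2 / N)
    return rng.normal(mean, sd)
```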
Simulate data
def build_toy_dataset(N, beta, sigmaY_true=1): features = len(beta) x = np.random.randn(N, features) y = np.dot(x, beta) + np.random.normal(0, sigmaY_true, size=N) return x, y N = 40 # number of data points M = 10 # number of features beta_true = np.random.randn(M) x, y = build_toy_dataset(N, b...
_____no_output_____
MIT
notebooks/TensorBayes.ipynb
jklopf/tensorbayes
Parameters setup
# Distinction between constant and variables # Variables: values might change between evaluation of the graph # (if something changes within the graph, it should be a variable) Emu = tf.Variable(0., trainable=False) vEmu = tf.ones([N,1]) Ebeta = tf.zeros([M,1]) ny = tf.zeros(M) Ew = tf.constant(0.) epsilon = Y - tf.ma...
_____no_output_____
MIT
notebooks/TensorBayes.ipynb
jklopf/tensorbayes
Tensorboard graph
writer = tf.summary.FileWriter('.') writer.add_graph(tf.get_default_graph())
_____no_output_____
MIT
notebooks/TensorBayes.ipynb
jklopf/tensorbayes
Gibbs sampling
# Open session sess = tf.Session() # Initialize variables init = tf.global_variables_initializer() sess.run(init) num_iter = 50 print(sess.run(tf.report_uninitialized_variables())) #debug for just 1 marker 0 epsilon = tf.add(epsilon, X[:,0]*Ebeta[0]) Cj=tf.nn.l2_loss(X[:,0])*2+Esigma2/Epsi2 #adjusted variance rj= tf.m...
_____no_output_____
MIT
notebooks/TensorBayes.ipynb
jklopf/tensorbayes
track features
del_water_tracks[:5] def tracks_toDF(tracks): records = [] for track in tracks: audio_features = sp.audio_features(track['id'])[0] audio_features['artists'] = track['artists'] audio_features['name'] = track['name'] records.append(audio_features) df = pd.DataFrame.from_record...
_____no_output_____
MIT
test.ipynb
schlinkertc/Spotify
Visualize
import matplotlib.pyplot as plt import seaborn as sns sns.set_style('whitegrid') %config InlineBackend.figure_format = 'retina' %matplotlib inline sns.set(color_codes=True) sns.set(rc={'figure.figsize':(20,18)}) nick_df = tracks_toDF(nick_tracks) nick_data = nick_df.groupby('name').mean() def track_scatterPlot(data,x...
_____no_output_____
MIT
test.ipynb
schlinkertc/Spotify
by playlist
sam_playlists = sp.user_playlists(1250134147) sam_playlist_ids = [ {'playlist_id':x['id'],'playlist_name':x['name']} for x in sam_playlists['items']] sam_playlist_ids records = [] for playlist in sam_playlist_ids: playlist_items = sp.playlist_tracks(playlist['playlist_id'])['items'] track_ids = [item['track...
_____no_output_____
MIT
test.ipynb
schlinkertc/Spotify
Songs I've played on
my_tracks=[ { 'name':item['track']['name'], 'id':item['track']['id'], 'artists':[x['name'] for x in item['track']['artists']] } for item in sp.playlist_tracks("spotify:playlist:4XeFzR948Yyk1X4SsXXogr")['items'] ] my_df = tracks_toDF(my_tracks) track_scatterPlot(my_df,'valence','ener...
_____no_output_____
MIT
test.ipynb
schlinkertc/Spotify
Tutorial 3 — In tutorials 1 and 2, we saw how to use Great Expectations as a project framework to validate data. If you don't want to use all the features it provides, you can just use the simple validation methods directly on a dataframe.
import great_expectations as ge import pandas as pd from matplotlib import pyplot as plt %matplotlib inline file_path="../data/adult_with_duplicates.csv" df = pd.read_csv(file_path) # convert pandas dataframe to ge dataframe df = ge.dataset.PandasDataset(df) print(df.columns) df.head()
_____no_output_____
Apache-2.0
02.great_expectations_validation/3.Valid_data_via_function.ipynb
pengfei99/DataQualityAndValidation
Apply validation methods directly on the dataframe. The method below checks whether the dataframe has the expected column names. It's equivalent to the yaml config```yaml"expectations": [ { "expectation_type": "expect_table_columns_to_match_ordered_list", "kwargs": { "column_list": [ "age", "...
column_list= [ "age", "workclass", "fnlwgt", "education", "education-num", "marital-status", "occupation", "relationship", "race", "sex", "capital-gain", "capital-loss", "hours-per-week", ...
_____no_output_____
Apache-2.0
02.great_expectations_validation/3.Valid_data_via_function.ipynb
pengfei99/DataQualityAndValidation
The method below checks that the age value is between 0 and 120. It's equivalent to the yaml config```yaml { "expectation_type": "expect_column_values_to_be_between", "kwargs": { "column": "age", "max_value": 120.0, "min_value": 0.0 }, "meta": {} }```
# ge dataframe provides access to all validation method df.expect_column_values_to_be_between(column='age', min_value=0, max_value=120) df.expect_column_values_to_not_be_null("age") values= ("Private", "Self-emp-not-inc", "Self-emp-inc", "Federal-gov", "Local-gov", "State-gov", "Without-pay", "Never-worked") df.expect...
_____no_output_____
Apache-2.0
02.great_expectations_validation/3.Valid_data_via_function.ipynb
pengfei99/DataQualityAndValidation
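What `expect_column_values_to_be_between` reports can be sketched in plain Python (a toy validator, not the great_expectations implementation):

```python
def check_between(values, min_value, max_value):
    """Report values outside [min_value, max_value]; count nulls separately,
    roughly mirroring the separate not-null expectation used above."""
    nulls = [v for v in values if v is None]
    bad = [v for v in values if v is not None and not (min_value <= v <= max_value)]
    return {
        "success": not bad and not nulls,
        "unexpected_count": len(bad),
        "null_count": len(nulls),
    }
```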
Expand Hillsborough data set with additional ACS variables — Header information. *DataDive goal targeted:* "Expand the list of ACS variables available for analysis by joining the processed dataset with the full list of data profile variables." *Contact info*: Josh Hirner, jhirner@gmail.com
import pandas as pd
_____no_output_____
MIT
scripts/_jhirner/hillsborough_expand_acs_variables.ipynb
ahopejasen/datakind-test
Import the required data sets: (1) the processed housing insecurity data as `hb_proc`, (2) the ACS demographic dataset as `hb_acs`, and (3) the ACS data dictionary for interpreting `DPxx_xxxx` codes as `acs_dict`.
hb_proc = pd.read_csv("../../data/processed/hillsborough_fl_processed_2017_to_2019_20210225.csv") hb_proc.head() hb_acs = pd.read_csv("../../data/acs/hillsborough_acs5-2018_census.csv") hb_acs.head() acs_dict = pd.read_csv("../../data/acs/data_dictionary.csv") acs_dict.head()
_____no_output_____
MIT
scripts/_jhirner/hillsborough_expand_acs_variables.ipynb
ahopejasen/datakind-test
Expand the processed data set.Join `hb_proc` (processed Hillsborough housing insecurity) and `hb_acs` (Hillsborough ACS demographics) datasets on the GEO ID columns to generate the expanded Hillsborough dataset, `hb_expand`.
hb_expand = pd.merge(hb_proc, hb_acs, left_on = "census_tract_GEOID", right_on = "GEOID", how = "inner") hb_expand = hb_expand.drop(["GEOID", "index"], axis = 1) hb_expand.head()
_____no_output_____
MIT
scripts/_jhirner/hillsborough_expand_acs_variables.ipynb
ahopejasen/datakind-test
Quick evaluation for new correlations — Let's see if anything interesting popped up in this merge. For illustrative purposes only, we'll restrict this correlation to the `avg-housing-loss-rate` column from the original processed Hillsborough data.
hb_corr = hb_expand.corr(method = "spearman") # Examine correlation coefficients only for avg-housing-loss-rate, # and only with newly merged columns (i.e.: not present in the original processed data set) hb_housing_loss_corr = hb_corr["avg-housing-loss-rate"].dropna().drop(hb_proc.columns, axis = 0, errors = "ignore"...
_____no_output_____
MIT
scripts/_jhirner/hillsborough_expand_acs_variables.ipynb
ahopejasen/datakind-test
At a very cursory glance, it appears as though the expanded ACS variables offer both strong positive and strong negative correlations to housing insecurity. Export the expanded data
hb_expand.to_csv("../../data/processed/hillsborough_fl_processed_expanded_ACS_2017_to_2019_20210225.csv")
_____no_output_____
MIT
scripts/_jhirner/hillsborough_expand_acs_variables.ipynb
ahopejasen/datakind-test
Table of Contents: 1 Clustering based; 1.1 modeling; 1.2 prediction; 1.3 evaluation
from sklearn.metrics import pairwise_distances from sklearn import metrics from sklearn import mixture from sklearn.cluster import KMeans from nltk.cluster import KMeansClusterer, cosine_distance import pandas as pd from sklearn.model_selection import GridSearchCV, train_test_split from sklearn.pipeline import Pipeline...
may use cols: ['global_index', 'doc_path', 'label', 'reply', 'reference_one', 'reference_two', 'Subject', 'From', 'Lines', 'Organization', 'contained_emails', 'long_string', 'text', 'error_message']
MIT
code/history_backup/clustering_based_models_v1-reply_reference-200_dim.ipynb
InscribeDeeper/Text-Classification
Clustering based — Steps: 1. Transform into a TF-IDF matrix 2. Reduce dimensions to 200 3. Cluster in cosine-similarity space (since these are words) 4. Assign labels by majority vote based on training-set labels 5. Prediction: 1. Transform the test set into a TF-IDF matrix 2. Dimension reductio...
train_text = train['reply'] + ' ' + train['reference_one'] train_label = train['label'] test_text = test['reply'] + ' ' + test['reference_one'] test_label = test['label'] from sklearn.decomposition import TruncatedSVD def tfidf_vectorizer(train_text, test_text, min_df=3): tfidf_vect = TfidfVectorizer(stop_words=...
C:\Users\Administrator\Anaconda3\envs\py810\lib\site-packages\nltk\cluster\util.py:131: RuntimeWarning: invalid value encountered in double_scalars return 1 - (numpy.dot(u, v) / (sqrt(numpy.dot(u, u)) * sqrt(numpy.dot(v, v))))
MIT
code/history_backup/clustering_based_models_v1-reply_reference-200_dim.ipynb
InscribeDeeper/Text-Classification
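Step 4 of the list above, majority-vote label assignment, can be sketched as follows (a hypothetical helper, not the notebook's own `clusters_to_labels`):

```python
from collections import Counter

def clusters_to_labels_map(cluster_ids, true_labels):
    """Map each cluster id to the majority training label among its members."""
    mapping = {}
    for c in set(cluster_ids):
        members = [lab for cid, lab in zip(cluster_ids, true_labels) if cid == c]
        mapping[c] = Counter(members).most_common(1)[0][0]
    return mapping
```

At prediction time, each test point's cluster assignment is looked up in this mapping to produce a class label.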
prediction
pred = pred_clustering_model(dtm_test, clusterer, clusters_to_labels)
_____no_output_____
MIT
code/history_backup/clustering_based_models_v1-reply_reference-200_dim.ipynb
InscribeDeeper/Text-Classification
evaluation
from sklearn import preprocessing # le = preprocessing.LabelEncoder() # encoded_test_label = le.fit_transform(test_label) # print(metrics.classification_report(y_true = encoded_test_label, y_pred=pred, target_names=le.classes_)) print(metrics.classification_report(y_true = test_label, y_pred=pred))
_____no_output_____
MIT
code/history_backup/clustering_based_models_v1-reply_reference-200_dim.ipynb
InscribeDeeper/Text-Classification
Artificial Neural Networks Geo-Demographic Segmentation
import tensorflow as tf import numpy as np import pandas as pd from tensorflow.compat.v1 import ConfigProto from tensorflow.compat.v1 import InteractiveSession config = ConfigProto() config.gpu_options.allow_growth = True session = InteractiveSession(config=config)
_____no_output_____
MIT
Machine Learning Projects/Useful_Code_Examples/Deep_Learning/ANN-GeoDemographicSegmentation/ANN-GridSearch.ipynb
samlaubscher/HacktoberFest2020-Contributions
Part 1 - Data Preprocessing Data Loading
PATH = "../../../../Deep_Learning/ANN/Python/Churn_Modelling.csv" dataset = pd.read_csv(PATH) dataset.head() X = dataset.iloc[:, 3:-1].values y = dataset.iloc[:, -1].values
_____no_output_____
MIT
Machine Learning Projects/Useful_Code_Examples/Deep_Learning/ANN-GeoDemographicSegmentation/ANN-GridSearch.ipynb
samlaubscher/HacktoberFest2020-Contributions
Encoding the Categorical Variables
from sklearn.preprocessing import LabelEncoder le = LabelEncoder() X[:, 2] = le.fit_transform(X[:, 2]) from sklearn.compose import ColumnTransformer from sklearn.preprocessing import OneHotEncoder ct = ColumnTransformer(transformers=[('encoder', OneHotEncoder(), [1])], ...
_____no_output_____
MIT
Machine Learning Projects/Useful_Code_Examples/Deep_Learning/ANN-GeoDemographicSegmentation/ANN-GridSearch.ipynb
samlaubscher/HacktoberFest2020-Contributions
Train Test Split
from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
_____no_output_____
MIT
Machine Learning Projects/Useful_Code_Examples/Deep_Learning/ANN-GeoDemographicSegmentation/ANN-GridSearch.ipynb
samlaubscher/HacktoberFest2020-Contributions
Feature Scaling
from sklearn.preprocessing import StandardScaler sc = StandardScaler() X_train = sc.fit_transform(X_train) X_test = sc.transform(X_test)
_____no_output_____
MIT
Machine Learning Projects/Useful_Code_Examples/Deep_Learning/ANN-GeoDemographicSegmentation/ANN-GridSearch.ipynb
samlaubscher/HacktoberFest2020-Contributions
Grid Search
from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Dropout from tensorflow.keras.wrappers.scikit_learn import KerasClassifier from sklearn.model_selection import GridSearchCV def build_classifer(optimizer='adam'): tf.random.set_seed(42) classifier = Sequential() classi...
_____no_output_____
MIT
Machine Learning Projects/Useful_Code_Examples/Deep_Learning/ANN-GeoDemographicSegmentation/ANN-GridSearch.ipynb
samlaubscher/HacktoberFest2020-Contributions
LSTM Model — Train an LSTM (long short-term memory) model to forecast a time-series sequence of stock closing prices. Use 5 time steps to forecast 1 forward time step. Imports
import pandas as pd import numpy as np import matplotlib.pyplot as plt import matplotlib.dates as mdates import tensorflow as tf
_____no_output_____
Apache-2.0
lstm_fb.ipynb
ctxj/Financial-Time-Series
Upload closing price data
df = pd.read_csv('fb_rsi.csv') df['date'] = pd.to_datetime(df['date'])
_____no_output_____
Apache-2.0
lstm_fb.ipynb
ctxj/Financial-Time-Series
Convert pandas object to numpy array
close = np.array(df['close']) date = np.array(df['date']) print(len(close))
160679
Apache-2.0
lstm_fb.ipynb
ctxj/Financial-Time-Series
There are 160,679 time steps: split 100,000 for training and keep the rest for testing
split = 100000 date_train = date[:split] #Training split x_train = close[:split] date_val = date[split:] #Testing split x_val = close[split:] #Variables for the windowing function below window_size = 5 #Number of time steps batch_size = 250 #Number of sequences loaded into the model at once; depends on GPU memory shuffle_b...
_____no_output_____
Apache-2.0
lstm_fb.ipynb
ctxj/Financial-Time-Series
Window function to split the sequence into features and labels. The sequence is split into windows of 5 time steps as the features, with the next time step as the label
def windowed_dataset(series, window_size, batch_size, shuffle_buffer): series = tf.expand_dims(series, axis=-1) ds = tf.data.Dataset.from_tensor_slices(series) ds = ds.window(window_size + 1, shift=1, drop_remainder=True) ds = ds.flat_map(lambda w: w.batch(window_size + 1)) ds = ds.shuffle(shuffle_b...
_____no_output_____
Apache-2.0
lstm_fb.ipynb
ctxj/Financial-Time-Series
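The same windowing can be sketched in plain NumPy to see exactly what `windowed_dataset` produces, minus the shuffling and batching:

```python
import numpy as np

def windowed_numpy(series, window_size):
    """Each row of X holds window_size consecutive values;
    y is the value that immediately follows each window."""
    X = np.stack([series[i:i + window_size]
                  for i in range(len(series) - window_size)])
    y = series[window_size:]
    return X, y
```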
LSTM Model — Combine convolutional layers with LSTM layers for the complete model
train_set = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size) model = tf.keras.models.Sequential([ tf.keras.layers.Conv1D(filters=60, kernel_size=5, strides=1, padding='causal', activation='relu', input_shape=[None, 1]), #1D convolution ...
_____no_output_____
Apache-2.0
lstm_fb.ipynb
ctxj/Financial-Time-Series
Train the model for 600 epochs. At an average of 7 seconds per epoch, training took 1 hour 10 minutes
history = model.fit(train_set, epochs=600) # Plot MAE and loss against epochs mae=history.history['mae'] loss=history.history['loss'] epochs=range(len(loss)) plt.plot(epochs, mae, 'r') plt.plot(epochs, loss, 'b') plt.title('MAE and Loss') plt.xlabel("Epochs") plt.ylabel("Value") plt.legend(["MAE", "Loss"]) plt.show...
_____no_output_____
Apache-2.0
lstm_fb.ipynb
ctxj/Financial-Time-Series
Zooming into the last 100 epochs: although there are fluctuations, MAE and loss are still decreasing. The model could be trained longer for higher accuracy
mae=history.history['mae'] loss=history.history['loss'] epochs=range(len(loss)) plt.plot(epochs[-100:], mae[-100:], 'r') plt.plot(epochs[-100:], loss[-100:], 'b') plt.title('MAE and Loss') plt.xlabel("Epochs") plt.ylabel("Value") plt.legend(["MAE", "Loss"]) plt.show()
_____no_output_____
Apache-2.0
lstm_fb.ipynb
ctxj/Financial-Time-Series
Saving the model
model.save('lstm.h5')
_____no_output_____
Apache-2.0
lstm_fb.ipynb
ctxj/Financial-Time-Series
Function to forecast the test data. As in training, forecasting uses windows of 5 time steps to forecast the next time step
def model_forecast(model, series, window_size): ds = tf.data.Dataset.from_tensor_slices(series) ds = ds.window(window_size, shift=1, drop_remainder=True) ds = ds.flat_map(lambda w: w.batch(window_size)) ds = ds.batch(250).prefetch(1) forecast = model.predict(ds) return forecast
_____no_output_____
Apache-2.0
lstm_fb.ipynb
ctxj/Financial-Time-Series
Forecast testing data
forecast = model_forecast(model, close[..., np.newaxis], window_size) forecast = forecast[split - window_size:-1, -1, 0]
_____no_output_____
Apache-2.0
lstm_fb.ipynb
ctxj/Financial-Time-Series
Visualising the forecasted time series — the forecasted series looks similar to the actual stock; however, the model was unable to match the peaks of the closing prices
locator = mdates.MonthLocator() fmt = mdates.DateFormatter('%b %y') plt.plot(date_val, x_val, alpha=0.5) plt.plot(date_val, forecast) x = plt.gca() x.xaxis.set_major_locator(locator) x.xaxis.set_major_formatter(fmt) plt.title('Actual vs Forecast') plt.legend(['Actual', 'Forecast']) plt.xticks(rotation=45) plt.show()
_____no_output_____
Apache-2.0
lstm_fb.ipynb
ctxj/Financial-Time-Series
Development of EDT residual
import itertools def edt(self, coords): arrivals = self.arrivals.set_index("handle") pairs = list(itertools.product(arrivals.index, arrivals.index)) r = [ (arrivals.loc[handle1, "time"] - arrivals.loc[handle2, "time"] ) - (self._tt[handle1].value(coords[:3], null=np.inf) - self._tt[handle2]...
_____no_output_____
MIT
pykonal_eq/quake_de.ipynb
malcolmw/PyKonalEQ
Precision-Recall — Example of the Precision-Recall metric to evaluate classifier output quality. Precision-Recall is a useful measure of success of prediction when the classes are very imbalanced. In information retrieval, precision is a measure of result relevancy, while recall is a measure of how many truly relevant results ar...
from __future__ import print_function
_____no_output_____
MIT
scikit-learn-official-examples/model_selection/plot_precision_recall.ipynb
gopala-kr/ds-notebooks
In binary classification settings — Create simple data: try to differentiate the first two classes of the iris data
from sklearn import svm, datasets from sklearn.model_selection import train_test_split import numpy as np iris = datasets.load_iris() X = iris.data y = iris.target # Add noisy features random_state = np.random.RandomState(0) n_samples, n_features = X.shape X = np.c_[X, random_state.randn(n_samples, 200 * n_features)]...
_____no_output_____
MIT
scikit-learn-official-examples/model_selection/plot_precision_recall.ipynb
gopala-kr/ds-notebooks
Compute the average precision score
from sklearn.metrics import average_precision_score average_precision = average_precision_score(y_test, y_score) print('Average precision-recall score: {0:0.2f}'.format( average_precision))
_____no_output_____
MIT
scikit-learn-official-examples/model_selection/plot_precision_recall.ipynb
gopala-kr/ds-notebooks
Plot the Precision-Recall curve
from sklearn.metrics import precision_recall_curve import matplotlib.pyplot as plt precision, recall, _ = precision_recall_curve(y_test, y_score) plt.step(recall, precision, color='b', alpha=0.2, where='post') plt.fill_between(recall, precision, step='post', alpha=0.2, color='b') plt.xlabel...
_____no_output_____
MIT
scikit-learn-official-examples/model_selection/plot_precision_recall.ipynb
gopala-kr/ds-notebooks
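How the plotted precision-recall points arise from the scores can be sketched in plain NumPy (a simplified version of what `precision_recall_curve` computes; it omits the trailing (precision=1, recall=0) endpoint that scikit-learn appends):

```python
import numpy as np

def pr_points(y_true, y_score):
    """Sort by score descending; at each threshold,
    precision = TP / predicted positives, recall = TP / actual positives."""
    order = np.argsort(-np.asarray(y_score))
    y = np.asarray(y_true)[order]
    tp = np.cumsum(y)
    precision = tp / np.arange(1, len(y) + 1)
    recall = tp / y.sum()
    return precision, recall
```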
In multi-label settings — Create multi-label data, fit, and predict. We create a multi-label dataset to illustrate precision-recall in multi-label settings
from sklearn.preprocessing import label_binarize # Use label_binarize to be multi-label like settings Y = label_binarize(y, classes=[0, 1, 2]) n_classes = Y.shape[1] # Split into training and test X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=.5, ...
_____no_output_____
MIT
scikit-learn-official-examples/model_selection/plot_precision_recall.ipynb
gopala-kr/ds-notebooks
The average precision score in multi-label settings
from sklearn.metrics import precision_recall_curve from sklearn.metrics import average_precision_score # For each class precision = dict() recall = dict() average_precision = dict() for i in range(n_classes): precision[i], recall[i], _ = precision_recall_curve(Y_test[:, i], ...
_____no_output_____
MIT
scikit-learn-official-examples/model_selection/plot_precision_recall.ipynb
gopala-kr/ds-notebooks
Plot the micro-averaged Precision-Recall curve
plt.figure() plt.step(recall['micro'], precision['micro'], color='b', alpha=0.2, where='post') plt.fill_between(recall["micro"], precision["micro"], step='post', alpha=0.2, color='b') plt.xlabel('Recall') plt.ylabel('Precision') plt.ylim([0.0, 1.05]) plt.xlim([0.0, 1.0]) plt.title( 'Avera...
_____no_output_____
MIT
scikit-learn-official-examples/model_selection/plot_precision_recall.ipynb
gopala-kr/ds-notebooks
Plot Precision-Recall curve for each class and iso-f1 curves
from itertools import cycle # setup plot details colors = cycle(['navy', 'turquoise', 'darkorange', 'cornflowerblue', 'teal']) plt.figure(figsize=(7, 8)) f_scores = np.linspace(0.2, 0.8, num=4) lines = [] labels = [] for f_score in f_scores: x = np.linspace(0.01, 1) y = f_score * x / (2 * x - f_score) l, =...
_____no_output_____
MIT
scikit-learn-official-examples/model_selection/plot_precision_recall.ipynb
gopala-kr/ds-notebooks
Regression test suite: Test of basic SSP GCE features

Test of an SSP with artificial yields: pure H1 yields, provided in NuGrid tables (no PopIII tests here). The focus is on basic GCE features. You can find the documentation here. Before starting the test, make sure that the standard yield input files are used.

Outline: $\odot$ Evolu...
#from imp import * #s=load_source('sygma','/home/nugrid/nugrid/SYGMA/SYGMA_online/SYGMA_dev/sygma.py') #%pylab nbagg import sys import sygma as s print s.__file__ reload(s) s.__file__ #import matplotlib #matplotlib.use('nbagg') import matplotlib.pyplot as plt #matplotlib.use('nbagg') import numpy as np from scipy.integ...
/Users/christian/Research/NuGRid/NuPyCEE/sygma.py
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
IMF notes: The IMF allows one to calculate the number of stars $N_{12}$ in the mass interval $[m_1,m_2]$ with

(I) $N_{12} = k_N \int_{m_1}^{m_2} m^{-2.35}\, dm$

where $k_N$ is the normalization constant. It can be derived from the total amount of mass of the system $M_{tot}$, since the total mass $M_{12}$ in the mass interval above...
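As a cross-check of the normalization in (I), the closed-form expressions used in the cells below can be reproduced numerically. This is a sketch using scipy, independent of SYGMA; the variable names are my own.

```python
from scipy.integrate import quad

alpha = 2.35        # Salpeter exponent
m1, m2 = 1.0, 30.0  # IMF mass boundaries [Msun]
M_tot = 1e11        # total SSP mass [Msun]

# k_N from M_tot = k_N * int_{m1}^{m2} m * m**-alpha dm (closed form, alpha != 2)
k_N = M_tot * (alpha - 2.0) / (m1**-(alpha - 2.0) - m2**-(alpha - 2.0))

# Number of stars: closed form from (I) vs. numerical integration
N_closed = k_N / (alpha - 1.0) * (m1**-(alpha - 1.0) - m2**-(alpha - 1.0))
N_quad = k_N * quad(lambda m: m**-alpha, m1, m2)[0]
print(N_closed, N_quad)  # both ~3.69e10, matching N_tot below
```

Both values should agree with the $N_{tot}$ computed analytically in the next cells.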
k_N=1e11*0.35/ (1**-0.35 - 30**-0.35) #(I)
_____no_output_____
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
The total number of stars $N_{tot}$ is then:
N_tot=k_N/1.35 * (1**-1.35 - 30**-1.35) #(II) print N_tot
36877281297.2
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
With a yield of $0.1\,M_{sun}$ ejected per star, the total amount ejected is:
Yield_tot=0.1*N_tot print Yield_tot/1e11
0.0368772812972
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
Compared to the simulation:
import sygma as s reload(s) s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,imf_type='salpeter',imf_bdys=[1,30],iniZ=0.02,hardsetZ=0.0001, table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yiel...
_____no_output_____
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
Compare both results:
print Yield_tot_sim print Yield_tot print 'ratio should be 1 : ',Yield_tot_sim/Yield_tot
3687728129.72 3687728129.72 ratio should be 1 : 1.0
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
Test of distinguishing between massive and AGB sources: The boundary between AGB and massive stars for Z=0 (1e-4) lies at 8 Msun (the transitionmass parameter).
Yield_agb= ( k_N/1.35 * (1**-1.35 - 8.**-1.35) ) * 0.1 Yield_massive= ( k_N/1.35 * (8.**-1.35 - 30**-1.35) ) * 0.1 print 'Should be 1:',Yield_agb/s1.history.ism_iso_yield_agb[-1][0] print 'Should be 1:',Yield_massive/s1.history.ism_iso_yield_massive[-1][0] print 'Test total number of SNII agree with massive star yield...
_____no_output_____
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
Calculating yield ejection over time

For plotting, take the lifetimes/masses from the yield grid:

| Initial mass [Msun] | Age [yr] |
|---|---|
| 1 | 5.67e9 |
| 1.65 | 1.211e9 |
| 2 | 6.972e8 |
| 3 | 2.471e8 |
| 4 | 1.347e8 |
| 5 | 8.123e7 |
| 6 | 5.642e7 |
| 7 | 4.217e7 |
| 12 | 1.892e7 |
| 15 | 1.381e7 |
| 20 | 9.895e6 |
| 25 | 7.902e6 |
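The grid above can be held in a small lookup structure for plotting; the log-log interpolation below is only an illustration of how an off-grid mass could be estimated, and is an assumption of mine — SYGMA's own fit over the Mass-Metallicity-lifetime plane may differ.

```python
import numpy as np

# Initial mass [Msun] -> stellar lifetime [yr], transcribed from the yield grid above
lifetimes = {1.0: 5.67e9, 1.65: 1.211e9, 2.0: 6.972e8, 3.0: 2.471e8,
             4.0: 1.347e8, 5.0: 8.123e7, 6.0: 5.642e7, 7.0: 4.217e7,
             12.0: 1.892e7, 15.0: 1.381e7, 20.0: 9.895e6, 25.0: 7.902e6}

masses = np.array(sorted(lifetimes))
ages = np.array([lifetimes[m] for m in masses])

# Illustrative log-log interpolation to an off-grid mass (here 8 Msun)
t8 = 10**np.interp(np.log10(8.0), np.log10(masses), np.log10(ages))
print(t8)  # falls between the 7 Msun and 12 Msun grid lifetimes
```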
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',alphaimf=2.35,\ imf_bdys=[1,30],iniZ=0,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, \ sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn') Yield_tot_sim=s1.h...
_____no_output_____
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
Simulation results in the plot above should agree with the semi-analytical calculations.

Test of parameter imf_bdys: Selection of different initial mass intervals

Select imf_bdys=[5,20]
k_N=1e11*0.35/ (5**-0.35 - 20**-0.35) N_tot=k_N/1.35 * (5**-1.35 - 20**-1.35) Yield_tot=0.1*N_tot s1=s.sygma(iolevel=0,mgal=1e11,dt=1e9,tend=1.3e10,imf_type='salpeter',\ imf_bdys=[5,20],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, \ sn1a_table='yield_tables/sn1a...
Should be 1: 1.0
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
Select imf_bdys=[1,5]
k_N=1e11*0.35/ (1**-0.35 - 5**-0.35) N_tot=k_N/1.35 * (1**-1.35 - 5**-1.35) Yield_tot=0.1*N_tot s1=s.sygma(iolevel=0,mgal=1e11,dt=1e9,tend=1.3e10,imf_type='salpeter',alphaimf=2.35,\ imf_bdys=[1,5],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',\ sn1a_on=False, sn1a_table='yield_...
SYGMA run in progress.. SYGMA run completed - Run time: 0.32s
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
Results:
print 'Should be 1: ',Yield_tot_sim/Yield_tot
Should be 1: 1.0
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
Test of parameter imf_type: Selection of different IMF types

Power-law exponent: alphaimf

The IMF allows one to calculate the number of stars $N_{12}$ in the mass interval $[m_1,m_2]$ with

$N_{12} = k_N \int_{m_1}^{m_2} m^{-\alpha_{imf}}\, dm$

where $k_N$ is the normalization constant. It can be derived from the total amount of mas...
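For an arbitrary exponent, the sign bookkeeping of the closed-form expressions can be avoided by doing both integrals numerically. This is a sketch; `n_stars` is a name of my own, not a SYGMA function.

```python
from scipy.integrate import quad

def n_stars(alphaimf, m1, m2, M_tot):
    """Total number of stars for IMF(m) = k_N * m**-alphaimf,
    with k_N fixed by requiring total mass M_tot in [m1, m2]."""
    mass_int = quad(lambda m: m * m**-alphaimf, m1, m2)[0]
    k_N = M_tot / mass_int
    return k_N * quad(lambda m: m**-alphaimf, m1, m2)[0]

print(n_stars(1.5, 1.0, 30.0, 1e11))   # the alphaimf=1.5 case tested below
print(n_stars(2.35, 1.0, 30.0, 1e11))  # reproduces the Salpeter N_tot above
```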
alphaimf = 1.5 #Set test alphaimf k_N=1e11*(alphaimf-2)/ (-1**-(alphaimf-2) + 30**-(alphaimf-2)) N_tot=k_N/(alphaimf-1) * (-1**-(alphaimf-1) + 30**-(alphaimf-1)) Yield_tot=0.1*N_tot s1=s.sygma(iolevel=0,mgal=1e11,dt=1e9,tend=1.3e10,imf_type='alphaimf',alphaimf=1.5,imf_bdys=[1,30],hardsetZ=0.0001, table='yiel...
Should be 1 : 1.0
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
Chabrier: Change the interval now to [0.01,30].

$M<1$: $IMF(m) = \frac{0.158}{m} \exp\left( \frac{-(\log(m) - \log(0.08))^2}{2 \cdot 0.69^2}\right)$

else: $IMF(m) = m^{-2.3}$
def imf_times_m(mass): if mass<=1: return 0.158 * np.exp( -np.log10(mass/0.079)**2 / (2.*0.69**2)) else: return mass*0.0443*mass**(-2.3) k_N= 1e11/ (quad(imf_times_m,0.01,30)[0] ) N_tot=k_N/1.3 * 0.0443* (1**-1.3 - 30**-1.3) Yield_tot=N_tot * 0.1 s1=s.sygma(iolevel=0,mgal=1e11,dt=1e9,tend=1.3e10...
_____no_output_____
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
Simulation should agree with the semi-analytical calculations for the Chabrier IMF.

Kroupa:

$M<0.08$: $IMF(m) = m^{-0.3}$

$M<0.5$: $IMF(m) = m^{-1.3}$

else: $IMF(m) = m^{-2.3}$
def imf_times_m(mass): p0=1. p1=0.08**(-0.3+1.3) p2=0.5**(-1.3+2.3) p3= 1**(-2.3+2.3) if mass<0.08: return mass*p0*mass**(-0.3) elif mass < 0.5: return mass*p1*mass**(-1.3) else: #mass>=0.5: return mass*p1*p2*mass**(-2.3) k_N= 1e11/ (quad(imf_times_m,0.01,30)[0] ) ...
_____no_output_____
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
Simulation results compared with semi-analytical calculations for the Kroupa IMF.

Test of parameter sn1a_on: on/off mechanism
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_on=False,sn1a_rate='maoz',imf_type='salpeter', imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt', sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn') s2=s.sygma(iolevel=0,mgal...
[0] [0.0] [100000000000.0] [3687728129.7190337] [0] [10000000.000000006] [100000000000.0] [3697728129.7190342] 1.0
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
Test of parameter sn1a_rate (DTD): Different SNIa rate implementations

Calculate with SNIa and look at the SNIa contribution only. Calculated for each implementation from $4 \times 10^7$ until $1.5 \times 10^{10}$ yrs.

DTD taken from Vogelsberger 2013 (sn1a_rate='vogelsberger'): $\frac{N_{1a}}{Msun} = \int _t^{t+\Delta t} 1.3*10^{-3} ...
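The $t^{-1}$-style DTD integral can be sketched in a few lines. The normalization `N_1a` below is an illustrative assumption, not SYGMA's default; only the delay-time range is taken from the text above.

```python
import numpy as np

# Power-law delay-time distribution DTD(t) = A / t (Maoz-style t^-1).
# N_1a is an illustrative assumption; the range matches the text above.
N_1a = 2e-3                   # SNe Ia per Msun of stars formed, over the full range
t_min, t_max = 4e7, 1.5e10    # delay-time range [yr]
A = N_1a / np.log(t_max / t_min)   # since int A/t dt = A * ln(t_max/t_min)

def n_ia(t1, t2):
    """SNe Ia per Msun exploding between delay times t1 and t2 [yr]."""
    return A * np.log(t2 / t1)

print(n_ia(t_min, t_max))  # recovers N_1a over the full range
```

Splitting the range at any intermediate time and summing the two pieces returns the same total, which is the property the timestep tests below rely on.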
#import read_yields as ry import sygma as s reload(s) plt.figure(99) #interpolate_lifetimes_grid=s22.__interpolate_lifetimes_grid #ytables=ry.read_nugrid_yields('yield_tables/isotope_yield_table_h1.txt') #zm_lifetime_grid=interpolate_lifetimes_grid(ytables,iolevel=0) 1e7 s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e1...
_____no_output_____
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
Small test: Initial mass vs. lifetime from the input yield grid compared to the fit in the Mass-Metallicity-lifetime plane (done by SYGMA) for Z=0.02. A double integration has to be performed in order to solve the complex integral from Wiersma:
#following inside function wiersma09_efolding #if timemin ==0: # timemin=1 from scipy.integrate import dblquad def spline1(x): #x=t minm_prog1a=3 #if minimum progenitor mass is larger than 3Msun due to IMF range: #if self.imf_bdys[0]>3: # minm_prog1a=self.imf_bdys[0] return max(minm_prog...
_____no_output_____
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
Simulation results compared with semi-analytical calculations for the SNIa sources with the Wiersma (exp) implementation.

Compare the number of WDs in range:
sum(s1.wd_sn1a_range1)/sum(s1.wd_sn1a_range) s1.plot_sn_distr(xaxis='time',fraction=False)
_____no_output_____
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
Wiersma (Gauss)
reload(s) s2=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_rate='gauss',imf_type='salpeter', imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn') Yield_tot_sim=...
_____no_output_____
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
Simulation results compared with semi-analytical calculations for the SNIa sources with the Wiersma (Gauss) implementation.

Compare the number of WDs in range:
sum(s2.wd_sn1a_range1)/sum(s2.wd_sn1a_range)
_____no_output_____
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
SNIa implementation: Maoz12 $t^{-1}$
import sygma as s reload(s) s2=s.sygma(iolevel=0,mgal=1e11,dt=1e8,tend=1.3e10,sn1a_rate='maoz',imf_type='salpeter', imf_bdys=[1,30],special_timesteps=-1,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt', sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_table...
10000000.0 10000000.0 Should be 1: 1.0
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
Check trend:
s2.plot_mass(fig=44,specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800) yields1=[] ages1=[] m=[1,1.65,2,3,4,5,6,7,12,15,20,25] ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6] for m1 in m: t=ages[m.index(m1)] #yields= a* dblquad(w...
_____no_output_____
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
Test of parameters tend, dt and special_timesteps

First, a constant timestep size of 1e7:
import sygma as s s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,special_timesteps=-1,imf_type='salpeter', imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn', ...
Should be 0: 0 Should be 1: 1.0 Should be 1: 1.0 Should be 1: 1.0 Should be 1: 1.0
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
First a timestep size of 1e7, then steps spaced in log space up to tend, with a total number of 200 steps. Note: tend was changed.
import sygma as s s2=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.5e9,special_timesteps=200,imf_type='salpeter', imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn') pr...
_____no_output_____
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
The choice of dt should not change the final composition. For special_timesteps:
s3=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30], hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn',stellar_param_on=Fals...
should be 1 1.0 should be 1 1.0
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
Test of parameter mgal - the total mass of the SSP

Test the total isotopic and elemental ISM matter at the first and last timestep.
s1=s.sygma(iolevel=0,mgal=1e7,dt=1e7,tend=1.3e10,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt', sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn') s2=s.sygma(iolevel=0,mgal=1e8,dt=1e8,tend=1.3e10,hardsetZ=0.0001,table='yield_tables/agb_a...
At last timestep, should be the same fraction: 0.0170583657213 0.0170583657213 0.0170583657213 At last timestep, should be the same fraction: 0.0170583657213 0.0170583657213 0.0170583657213
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
Test of the SN rate: it depends on the timestep size. The plot always shows the mean value over a timestep, so a larger timestep yields a different mean.
reload(s) s1=s.sygma(iolevel=0,mgal=1e11,dt=7e6,tend=1e8,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001, table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.tx...
SYGMA run in progress.. SYGMA run completed - Run time: 0.3s SYGMA run in progress.. SYGMA run completed - Run time: 11.46s
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
Rate does not depend on timestep type:
s3.plot_sn_distr(fig=66,rate=True,rate_only='sn1a',label1='SN1a, rate',label2='SNII, rate',marker1='o',marker2='s',markevery=1) s4.plot_sn_distr(fig=66,rate=True,rate_only='sn1a',label1='SN1a, number',label2='SNII number',marker1='d',marker2='p') plt.xlim(3e7,1e10) s1.plot_sn_distr(fig=77,rate=True,marker1='o',marker2=...
_____no_output_____
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE
Test of parameter transitionmass: Transition from AGB to massive stars

Check whether transitionmass is properly set:
import sygma as s; reload(s) s1=s.sygma(iolevel=0,imf_bdys=[1.65,30],transitionmass=8,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter', hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h...
1: 1.0 1: 1.0
BSD-3-Clause
regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb
katewomack/NuPyCEE