Columns: markdown · code · output · license · path · repo_name
Override these arguments as needed:
address = args.address
smoke_test = args.smoke_test
num_actors = args.num_actors
cpus_per_actor = args.cpus_per_actor
num_actors_inference = args.num_actors_inference
cpus_per_actor_inference = args.cpus_per_actor_inference
_____no_output_____
Apache-2.0
doc/source/ray-core/examples/modin_xgboost/modin_xgboost.ipynb
richardsliu/ray
Connecting to the Ray cluster
Now, let's connect our Python script to this newly deployed Ray cluster!
if not ray.is_initialized(): ray.init(address=address)
_____no_output_____
Apache-2.0
doc/source/ray-core/examples/modin_xgboost/modin_xgboost.ipynb
richardsliu/ray
Data Preparation
We will use the [HIGGS dataset from the UCI Machine Learning dataset repository](https://archive.ics.uci.edu/ml/datasets/HIGGS). The HIGGS dataset consists of 11,000,000 samples and 28 attributes, which is large enough to show the benefits of distributed computation.
LABEL_COLUMN = "label"
if smoke_test:
    # Test dataset with only 10,000 records.
    FILE_URL = "https://ray-ci-higgs.s3.us-west-2.amazonaws.com/simpleHIGGS.csv"
else:
    # Full dataset. This may take a couple of minutes to load.
    FILE_URL = (
        "https://archive.ics.uci.edu/ml/machine-learning-databases"...
_____no_output_____
Apache-2.0
doc/source/ray-core/examples/modin_xgboost/modin_xgboost.ipynb
richardsliu/ray
Split data into training and validation.
df_train, df_validation = train_test_split(df) print(df_train, df_validation)
_____no_output_____
Apache-2.0
doc/source/ray-core/examples/modin_xgboost/modin_xgboost.ipynb
richardsliu/ray
Distributed Training
The ``train_xgboost`` function contains all the logic necessary for training using XGBoost-Ray.
Distributed training can not only speed up the process, but also allow you to use datasets that are too large to fit in the memory of a single node. With distributed training, the dataset is sharded across diffe...
def train_xgboost(config, train_df, test_df, target_column, ray_params):
    train_set = RayDMatrix(train_df, target_column)
    test_set = RayDMatrix(test_df, target_column)
    evals_result = {}
    train_start_time = time.time()
    # Train the classifier
    bst = train(
        params=config,
        dtrain=tra...
_____no_output_____
Apache-2.0
doc/source/ray-core/examples/modin_xgboost/modin_xgboost.ipynb
richardsliu/ray
We can now pass our Modin dataframes and run the function. We will use ``RayParams`` to specify the number of actors and CPUs to train with.
# standard XGBoost config for classification
config = {
    "tree_method": "approx",
    "objective": "binary:logistic",
    "eval_metric": ["logloss", "error"],
}
bst, evals_result = train_xgboost(
    config,
    df_train,
    df_validation,
    LABEL_COLUMN,
    RayParams(cpus_per_actor=cpus_per_actor, num_actors=n...
_____no_output_____
Apache-2.0
doc/source/ray-core/examples/modin_xgboost/modin_xgboost.ipynb
richardsliu/ray
Prediction
With the model trained, we can now predict on unseen data. For the purposes of this example, we will use the same dataset for prediction as for training.
Since prediction is naively parallelizable, distributing it over multiple actors can measurably reduce the amount of time needed.
inference_df = RayDMatrix(df, ignore=[LABEL_COLUMN, "partition"])
results = predict(
    bst,
    inference_df,
    ray_params=RayParams(
        cpus_per_actor=cpus_per_actor_inference, num_actors=num_actors_inference
    ),
)
print(results)
_____no_output_____
Apache-2.0
doc/source/ray-core/examples/modin_xgboost/modin_xgboost.ipynb
richardsliu/ray
Task 1
Designing functions for building training models from data. In this task you need to develop function prototypes (function declarations without implementations) for a data-analysis task from machine learning. The following steps must be covered:
* Loading data from external sources
* Handling missing valu...
def loading_dataframe(path, source="file", type='csv'):
    """
    Loads a file from external sources.
    Parameters:
    path — the path the document is loaded from,
    source — source type (file (default), http, https, ftp),
    type — document extension (txt, csv, xls).
    Returns:
    load_dat...
_____no_output_____
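A minimal sketch of such prototypes in the same spirit; the names, parameters, and defaults beyond `loading_dataframe` are illustrative assumptions, not the assignment's required signatures:

```python
def loading_dataframe(path, source="file", type="csv"):
    """Load a dataset from an external source (file, http, https, ftp)."""
    pass

def handle_missing_values(df, strategy="drop"):
    """Handle missing values: drop rows, or impute (mean/median/mode)."""
    pass

def encode_categorical(df, columns, method="onehot"):
    """Encode categorical columns via one-hot or label encoding."""
    pass

def split_dataset(df, target, test_size=0.25):
    """Split a dataframe into train/test features and labels."""
    pass
```

Each stub documents its contract up front, so the analysis pipeline can be designed before any implementation exists.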
Apache-2.0
module_001_python/lesson_004_function/student_tasks/HomeWork.ipynb
VanyaTihonov/ML
Task 2
An advanced task. Implement printing of Pascal's triangle via a function. Example triangle:
![pascal triangle](pascal.png)
Default depth is 10.
def print_pascal(primary, deep=10):
    for i in range(1, deep + 1):
        print(pascal(primary, i))

def pascal(primary, deep):
    if deep == 1:
        new_list = [primary]
    elif deep == 2:
        new_list = []
        for i in range(deep):
            new_list.extend(pascal(primary, 1))
    else:
        new_list = ...
[1]
[1, 1]
[1, 2, 1]
[1, 3, 3, 1]
[1, 4, 6, 4, 1]
[1, 5, 10, 10, 5, 1]
[1, 6, 15, 20, 15, 6, 1]
[1, 7, 21, 35, 35, 21, 7, 1]
[1, 8, 28, 56, 70, 56, 28, 8, 1]
[1, 9, 36, 84, 126, 126, 84, 36, 9, 1]
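The same triangle can be produced more compactly: each row is the previous row's pairwise sums, flanked by 1s. A hypothetical alternative sketch, not the assignment's required solution:

```python
def pascal_row(prev):
    # next row: 1, pairwise sums of the previous row, 1
    return [1] + [a + b for a, b in zip(prev, prev[1:])] + [1]

def print_pascal(deep=10):
    row = [1]
    for _ in range(deep):
        print(row)
        row = pascal_row(row)

print_pascal(5)
```

`pascal_row([1, 3, 3, 1])` returns `[1, 4, 6, 4, 1]`, matching row five of the output above.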
Apache-2.0
module_001_python/lesson_004_function/student_tasks/HomeWork.ipynb
VanyaTihonov/ML
Introduction
This notebook describes how you can use VietOCR to train an OCR model.
! pip install --quiet vietocr
Apache-2.0
vietocr_gettingstart.ipynb
uMetalooper/vietocr
Inference
import matplotlib.pyplot as plt from PIL import Image from vietocr.tool.predictor import Predictor from vietocr.tool.config import Cfg config = Cfg.load_config_from_name('vgg_transformer')
_____no_output_____
Apache-2.0
vietocr_gettingstart.ipynb
uMetalooper/vietocr
Change `weights` to your own weights, or use the default weights from our pretrained model. The path can be a URL or a local file.
# config['weights'] = './weights/transformerocr.pth'
config['weights'] = 'https://drive.google.com/uc?id=13327Y1tz1ohsm5YZMyXVMPIOjoOA0OaA'
config['cnn']['pretrained'] = False
config['device'] = 'cuda:0'
config['predictor']['beamsearch'] = False
detector = Predictor(config)

! gdown --id 1uMVd6EBjY4Q0G2IkU5iMOQ34X0bysm0b
! ...
_____no_output_____
Apache-2.0
vietocr_gettingstart.ipynb
uMetalooper/vietocr
Download sample dataset
! gdown https://drive.google.com/uc?id=19QU4VnKtgm3gf0Uw_N2QKSquW1SQ5JiE ! unzip -qq -o ./data_line.zip
_____no_output_____
Apache-2.0
vietocr_gettingstart.ipynb
uMetalooper/vietocr
Train model
1. Load your config
2. Train the model using your dataset above

Load the default config; we adopt VGG for image feature extraction.
from vietocr.tool.config import Cfg from vietocr.model.trainer import Trainer
_____no_output_____
Apache-2.0
vietocr_gettingstart.ipynb
uMetalooper/vietocr
Change the config
* *data_root*: the folder that holds all your images
* *train_annotation*: path to train annotation
* *valid_annotation*: path to valid annotation
* *print_every*: show train loss at every n steps
* *valid_every*: show validation loss at every n steps
* *iters*: number of iterations to train your model
* *export*...
config = Cfg.load_config_from_name('vgg_transformer')
#config['vocab'] = 'aAàÀảẢãÃáÁạẠăĂằẰẳẲẵẴắẮặẶâÂầẦẩẨẫẪấẤậẬbBcCdDđĐeEèÈẻẺẽẼéÉẹẸêÊềỀểỂễỄếẾệỆfFgGhHiIìÌỉỈĩĨíÍịỊjJkKlLmMnNoOòÒỏỎõÕóÓọỌôÔồỒổỔỗỖốỐộỘơƠờỜởỞỡỠớỚợỢpPqQrRsStTuUùÙủỦũŨúÚụỤưƯừỪửỬữỮứỨựỰvVwWxXyYỳỲỷỶỹỸýÝỵỴzZ0123456789!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ '
dataset_para...
_____no_output_____
Apache-2.0
vietocr_gettingstart.ipynb
uMetalooper/vietocr
You can change any of these params in the full list below.
config
_____no_output_____
Apache-2.0
vietocr_gettingstart.ipynb
uMetalooper/vietocr
You should train the model from our pretrained weights.
trainer = Trainer(config, pretrained=True)
Downloading: "https://download.pytorch.org/models/vgg19_bn-c79401a0.pth" to /root/.cache/torch/hub/checkpoints/vgg19_bn-c79401a0.pth
Apache-2.0
vietocr_gettingstart.ipynb
uMetalooper/vietocr
Save the model configuration for inference; load it back with `load_config_from_file`.
trainer.config.save('config.yml')
_____no_output_____
Apache-2.0
vietocr_gettingstart.ipynb
uMetalooper/vietocr
Visualize your dataset to check that the data augmentation is appropriate.
trainer.visualize_dataset()
_____no_output_____
Apache-2.0
vietocr_gettingstart.ipynb
uMetalooper/vietocr
Train now
trainer.train()
iter: 000200 - train loss: 1.657 - lr: 1.91e-05 - load time: 1.08 - gpu time: 158.33
iter: 000400 - train loss: 1.429 - lr: 3.95e-05 - load time: 0.76 - gpu time: 158.76
iter: 000600 - train loss: 1.331 - lr: 7.14e-05 - load time: 0.73 - gpu time: 158.38
iter: 000800 - train loss: 1.252 - lr: 1.12e-04 - load time: 1.29...
Apache-2.0
vietocr_gettingstart.ipynb
uMetalooper/vietocr
Visualize predictions from our trained model.
trainer.visualize_prediction()
_____no_output_____
Apache-2.0
vietocr_gettingstart.ipynb
uMetalooper/vietocr
Compute full-sequence accuracy on the full validation dataset.
trainer.precision()
_____no_output_____
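Full-sequence accuracy counts a prediction as correct only when every character matches the label. A minimal sketch of that metric, independent of VietOCR's internal implementation:

```python
def full_seq_accuracy(predictions, labels):
    # a prediction scores only on an exact, character-for-character match
    correct = sum(pred == label for pred, label in zip(predictions, labels))
    return correct / len(labels)

# one near-miss (missing diacritic) still counts as wrong
print(full_seq_accuracy(["hà nội", "hue"], ["hà nội", "huế"]))  # 0.5
```

This is stricter than character-level accuracy, which is why it is the usual headline metric for OCR line recognition.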
Apache-2.0
vietocr_gettingstart.ipynb
uMetalooper/vietocr
Heroes Of Pymoli Data Analysis
* Of the 1163 active players, the vast majority are male (82%). There is also a smaller but notable proportion of female players (16%).
* Our peak age demographic falls between 20-24 (42%), with secondary groups falling between 15-19 (17.80%) and 25-29 (15.48%).
* Our players are putti...
import pandas as pd import numpy as np
_____no_output_____
ADSL
HeroesOfPymoli/.ipynb_checkpoints/HeroesOfPymoli_Example-checkpoint.ipynb
dimpalsuthar91/RePanda
Metadata preprocessing tutorial
Melusine's **prepare_data.metadata_engineering** subpackage provides classes to preprocess the metadata:
- **MetaExtension:** a transformer which creates an 'extension' feature extracted from regex in metadata. It extracts the extensions of email addresses.
- **MetaDate:** a transformer wh...
from melusine.data.data_loader import load_email_data

df_emails = load_email_data()
df_emails = df_emails[['from', 'date']]
df_emails['from']
df_emails['date']
_____no_output_____
Apache-2.0
tutorial/tutorial05_metadata_preprocessing.ipynb
milidris/melusine
MetaExtension transformer
A **MetaExtension transformer** creates an *extension* feature extracted from regex in metadata. It extracts the extensions of email addresses.
from melusine.prepare_email.metadata_engineering import MetaExtension

meta_extension = MetaExtension()
df_emails = meta_extension.fit_transform(df_emails)
df_emails.extension
_____no_output_____
Apache-2.0
tutorial/tutorial05_metadata_preprocessing.ipynb
milidris/melusine
MetaDate transformer
A **MetaDate transformer** creates new features from dates: **hour**, **minute** and **dayofweek**.
from melusine.prepare_email.metadata_engineering import MetaDate

meta_date = MetaDate()
df_emails = meta_date.fit_transform(df_emails)
df_emails.date[0]
df_emails.hour[0]
df_emails.loc[0, 'min']
df_emails.dayofweek[0]
_____no_output_____
Apache-2.0
tutorial/tutorial05_metadata_preprocessing.ipynb
milidris/melusine
Dummifier transformer
A **Dummifier transformer** dummifies categorical features. Its arguments are:
- **columns_to_dummify**: a list of the metadata columns to dummify.
from melusine.prepare_email.metadata_engineering import Dummifier

dummifier = Dummifier(columns_to_dummify=['extension', 'dayofweek', 'hour', 'min'])
df_meta = dummifier.fit_transform(df_emails)
df_meta.columns
df_meta.head()
_____no_output_____
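Under the hood, dummification is one-hot encoding: each categorical value becomes its own 0/1 column. A minimal sketch of the same idea with plain pandas (not Melusine's actual implementation; the toy frame is illustrative):

```python
import pandas as pd

# toy metadata frame standing in for the email features
df = pd.DataFrame({"extension": ["com", "fr", "com"], "dayofweek": [0, 4, 0]})

# one indicator column per (source column, category) pair
df_dummies = pd.get_dummies(df, columns=["extension", "dayofweek"])
print(sorted(df_dummies.columns))
# columns like extension_com, extension_fr, dayofweek_0, dayofweek_4
```

The resulting binary columns can be concatenated with other numeric features and fed directly to a model.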
Apache-2.0
tutorial/tutorial05_metadata_preprocessing.ipynb
milidris/melusine
Table of Contents
1 Seq2Seq With Attention
  1.1 Data Preparation
  1.2 Model Implementation
    1.2.1 Encoder
    1.2.2 Attention
    1.2.3 Decoder
    1.2.4 Seq2Seq
  1.3 Training Seq2Seq
  1.4 Evaluating Seq2Seq
  1.5 Summary
2 Refer...
# code for loading the format for the notebook
import os

# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))

from formats import load_style
load_style(css_style='custom2.css', plot_style=False)
os.chdir(path)

# 1. magic for inline plot ...
Ethen 2019-10-09 13:46:01
CPython 3.6.4
IPython 7.7.0
numpy 1.16.5
torch 1.1.0.post2
torchtext 0.3.1
spacy 2.1.6
MIT
deep_learning/seq2seq/2_torch_seq2seq_attention.ipynb
certara-ShengnanHuang/machine-learning
Seq2Seq With Attention
The Seq2Seq framework involves a family of encoders and decoders, where the encoder encodes a source sequence into a fixed-length vector from which the decoder picks up and aims to correctly generate the target sequence. The vanilla version of this type of architecture looks something along the lin...
# !python -m spacy download de
# !python -m spacy download en

SEED = 2222
random.seed(SEED)
torch.manual_seed(SEED)

# tokenize sentences into individual tokens
# https://spacy.io/usage/spacy-101#annotations-token
spacy_de = spacy.load('de_core_news_sm')
spacy_en = spacy.load('en_core_web_sm')

def tokenize_de(text):
    ...
_____no_output_____
MIT
deep_learning/seq2seq/2_torch_seq2seq_attention.ipynb
certara-ShengnanHuang/machine-learning
Model Implementation
# adjustable parameters
INPUT_DIM = len(source.vocab)
OUTPUT_DIM = len(target.vocab)
ENC_EMB_DIM = 256
DEC_EMB_DIM = 256
ENC_HID_DIM = 512
DEC_HID_DIM = 512
N_LAYERS = 1
ENC_DROPOUT = 0.5
DEC_DROPOUT = 0.5
_____no_output_____
MIT
deep_learning/seq2seq/2_torch_seq2seq_attention.ipynb
certara-ShengnanHuang/machine-learning
The following sections are heavily "borrowed" from the wonderful tutorial on this topic listed below.- [Jupyter Notebook: Neural Machine Translation by Jointly Learning to Align and Translate](https://nbviewer.jupyter.org/github/bentrevett/pytorch-seq2seq/blob/master/3%20-%20Neural%20Machine%20Translation%20by%20Jointl...
class Encoder(nn.Module):
    def __init__(self, input_dim, emb_dim, hid_dim, n_layers, dropout):
        super().__init__()
        self.emb_dim = emb_dim
        self.hid_dim = hid_dim
        self.input_dim = input_dim
        self.n_layers = n_layers
        self.dropout = dropout
        self.embedding = nn.Embed...
_____no_output_____
MIT
deep_learning/seq2seq/2_torch_seq2seq_attention.ipynb
certara-ShengnanHuang/machine-learning
Notice that the output's last dimension is 1024, which is the hidden dimension (512) multiplied by the number of directions (2), whereas the hidden state's first dimension is 2, representing the number of directions (2).
- The returned outputs of a bidirectional RNN at timestep $t$ are the outputs after feeding the input to both the reve...
# the outputs are concatenated at the last dimension
assert (outputs[-1, :, :ENC_HID_DIM] == hidden[0]).all()
assert (outputs[0, :, ENC_HID_DIM:] == hidden[1]).all()

# experiment with n_layers = 2
n_layers = 2
encoder = Encoder(INPUT_DIM, ENC_EMB_DIM, ENC_HID_DIM, n_layers, ENC_DROPOUT).to(device)
outputs, hidden = enc...
_____no_output_____
MIT
deep_learning/seq2seq/2_torch_seq2seq_attention.ipynb
certara-ShengnanHuang/machine-learning
Notice now the first dimension of the hidden state becomes 4, which represents the number of layers (2) multiplied by the number of directions (2). The hidden states are stacked as [forward_1, backward_1, forward_2, backward_2, ...].
assert (outputs[-1, :, :ENC_HID_DIM] == hidden[2]).all()
assert (outputs[0, :, ENC_HID_DIM:] == hidden[3]).all()
_____no_output_____
MIT
deep_learning/seq2seq/2_torch_seq2seq_attention.ipynb
certara-ShengnanHuang/machine-learning
We'll need some final touches for our actual encoder. As our encoder's hidden state will be used as the decoder's initial hidden state, we need to make sure they have the same shape. In our example, the decoder is not bidirectional, and only needs a single context vector, $z$, to use as its initial hidden state, $s_...
class Encoder(nn.Module):
    def __init__(self, input_dim, emb_dim, enc_hid_dim, dec_hid_dim, n_layers, dropout):
        super().__init__()
        self.emb_dim = emb_dim
        self.enc_hid_dim = enc_hid_dim
        self.dec_hid_dim = dec_hid_dim
        self.input_dim = input_dim
        self.n_layers = n_layers
        ...
_____no_output_____
MIT
deep_learning/seq2seq/2_torch_seq2seq_attention.ipynb
certara-ShengnanHuang/machine-learning
Attention
The next part is the highlight. The attention layer will take in the previous hidden state of the decoder, $s_{t-1}$, and all of the stacked forward and backward hidden states from the encoder, $H$. The output will be an attention vector $a_t$, the length of the source sentence; each element of this vec...
class Attention(nn.Module):
    def __init__(self, enc_hid_dim, dec_hid_dim):
        super().__init__()
        self.enc_hid_dim = enc_hid_dim
        self.dec_hid_dim = dec_hid_dim
        # enc_hid_dim multiplied by 2 due to bidirectional
        self.fc1 = nn.Linear(enc_hid_dim * 2 + dec_hid_dim, dec_hid_dim)
        ...
_____no_output_____
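The essence of the attention layer is scoring each encoder position against the decoder state and normalizing the scores with a softmax over source positions. A minimal numpy sketch of that idea, using a dot-product score for brevity (the notebook's layer uses a learned feed-forward score instead; shapes are illustrative):

```python
import numpy as np

def attention_weights(decoder_hidden, encoder_outputs):
    # score each source position against the decoder state (dot product)
    scores = encoder_outputs @ decoder_hidden          # shape: [src_len]
    scores = scores - scores.max()                     # numerical stability
    # softmax over source positions: non-negative weights summing to 1
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights

enc = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # 3 source positions
dec = np.array([1.0, 0.0])                             # decoder hidden state
w = attention_weights(dec, enc)
print(w)   # positions aligned with dec get larger weight
```

The decoder then uses these weights to form a weighted sum of the encoder outputs, the context vector for the current step.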
MIT
deep_learning/seq2seq/2_torch_seq2seq_attention.ipynb
certara-ShengnanHuang/machine-learning
Decoder
Now comes the decoder. Within the decoder, we first use the attention layer that we created in the previous section to compute the attention weights; these give the weight for each word in the source sentence that the model should pay attention to when generating the current target output in the sequence. Along with ...
class Decoder(nn.Module):
    def __init__(self, output_dim, emb_dim, enc_hid_dim, dec_hid_dim, n_layers, dropout, attention):
        super().__init__()
        self.emb_dim = emb_dim
        self.enc_hid_dim = enc_hid_dim
        self.dec_hid_dim = dec_hid_dim
        self.output_dim = output_dim
        ...
_____no_output_____
MIT
deep_learning/seq2seq/2_torch_seq2seq_attention.ipynb
certara-ShengnanHuang/machine-learning
Seq2Seq
This part is about putting the encoder and decoder together and is essentially identical to the vanilla seq2seq framework, hence the explanation is omitted.
class Seq2Seq(nn.Module):
    def __init__(self, encoder, decoder, device):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder
        self.device = device

    def forward(self, src_batch, trg_batch, teacher_forcing_ratio=0.5):
        max_len, batch_size = trg_batch.shape
        ...
The model has 12,975,877 trainable parameters
MIT
deep_learning/seq2seq/2_torch_seq2seq_attention.ipynb
certara-ShengnanHuang/machine-learning
Training Seq2Seq We've done the hard work of defining our seq2seq module. The final touch is to specify the training/evaluation loop.
optimizer = optim.Adam(seq2seq.parameters())

# ignore the padding index when calculating the loss
PAD_IDX = target.vocab.stoi['<pad>']
criterion = nn.CrossEntropyLoss(ignore_index=PAD_IDX)

def train(seq2seq, iterator, optimizer, criterion):
    seq2seq.train()
    epoch_loss = 0
    for batch in iterator:
        ...
Epoch: 01 | Time: 2m 30s
	Train Loss: 4.844 | Train PPL: 126.976
	Val. Loss: 4.691 | Val. PPL: 108.948
Epoch: 02 | Time: 2m 30s
	Train Loss: 3.948 | Train PPL: 51.808
	Val. Loss: 4.004 | Val. PPL: 54.793
Epoch: 03 | Time: 2m 31s
	Train Loss: 3.230 | Train PPL: 25.281
	Val. Loss: 3.498 | Val. PPL: 33.059
Epoch...
MIT
deep_learning/seq2seq/2_torch_seq2seq_attention.ipynb
certara-ShengnanHuang/machine-learning
Evaluating Seq2Seq
seq2seq.load_state_dict(torch.load('tut2-model.pt'))

test_loss = evaluate(seq2seq, test_iterator, criterion)
print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')
| Test Loss: 3.237 | Test PPL: 25.467 |
MIT
deep_learning/seq2seq/2_torch_seq2seq_attention.ipynb
certara-ShengnanHuang/machine-learning
Here, we pick a random example from our dataset and print out the original source and target sentence. Then we take a look at the "predicted" target sentence generated by the model.
example_idx = 0
example = train_data.examples[example_idx]
print('source sentence: ', ' '.join(example.src))
print('target sentence: ', ' '.join(example.trg))

src_tensor = source.process([example.src]).to(device)
trg_tensor = target.process([example.trg]).to(device)
print(trg_tensor.shape)

seq2seq.eval()
with torch.no...
_____no_output_____
MIT
deep_learning/seq2seq/2_torch_seq2seq_attention.ipynb
certara-ShengnanHuang/machine-learning
Categorical deduction (generic and all inferences)
1. Take a mix of generic and specific statements
2. Create the powerset of combinations of specific statements
3. Create an inference graph for each combination of specific statements
4. Make all possible inferences for each graph (chain)
5. Present the union of possible concl...
# Syllogism specific statements
# First statement A __ B.
# Second statement B __ C.
# Third statement A ___ C -> look up tables to check if true, possible, or false.

specific_statement_options = {'disjoint from', 'overlaps with', 'subset of', 'superset of', 'identical to'}

# make a dictionary. key is a tuple with firs...
_____no_output_____
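The powerset step (all combinations of specific statements) has a standard itertools formulation. A small sketch with an illustrative pair of statements:

```python
from itertools import chain, combinations

def powerset(items):
    """All subsets of items, from the empty set to the full set."""
    s = list(items)
    return list(chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

statements = ["A subset of B", "B subset of C"]
print(powerset(statements))
# [(), ('A subset of B',), ('B subset of C',), ('A subset of B', 'B subset of C')]
```

Each subset then gets its own inference graph, and the conclusions are unioned across subsets.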
MIT
reasoning_engine/categorical reasoning/Categorical_deduction_generic_all_inferences.ipynb
rts1988/IntelligentTutoringSystem_Experiments
Understanding ROS Nodes
This tutorial introduces ROS graph concepts and discusses the use of `roscore`, `rosnode`, and `rosrun` command-line tools.
Source: [ROS Wiki](http://wiki.ros.org/ROS/Tutorials/UnderstandingNodes)

Quick Overview of Graph Concepts
* Nodes: A node is an executable that uses ROS to communicate with o...
%%bash --bg roscore
Starting job # 0 in a separate thread.
MIT
notebooks/ROS_Tutorials/.ipynb_checkpoints/ROS Nodes-checkpoint.ipynb
GimpelZhang/git_test
Using `rosnode``rosnode` displays information about the ROS nodes that are currently running. The `rosnode list` command lists these active nodes:
%%bash
rosnode list

%%bash
rosnode info rosout
--------------------------------------------------------------------------------
Node [/rosout]
Publications:
 * /rosout_agg [rosgraph_msgs/Log]
Subscriptions:
 * /rosout [unknown type]
Services:
 * /rosout/get_loggers
 * /rosout/set_logger_level
contacting node http://localhost:43395/ ...
Pid: 18703
MIT
notebooks/ROS_Tutorials/.ipynb_checkpoints/ROS Nodes-checkpoint.ipynb
GimpelZhang/git_test
Using `rosrun``rosrun` allows you to use the package name to directly run a node within a package (without having to know the package path).
%%bash --bg rosrun turtlesim turtlesim_node
Starting job # 2 in a separate thread.
MIT
notebooks/ROS_Tutorials/.ipynb_checkpoints/ROS Nodes-checkpoint.ipynb
GimpelZhang/git_test
NOTE: The turtle may look different in your turtlesim window. Don't worry about it - there are [many types of turtle](http://wiki.ros.org/DistributionsCurrent_Distribution_Releases) and yours is a surprise!
%%bash
rosnode list
/rosout
/turtlesim
MIT
notebooks/ROS_Tutorials/.ipynb_checkpoints/ROS Nodes-checkpoint.ipynb
GimpelZhang/git_test
One powerful feature of ROS is that you can reassign names from the command line.
Close the turtlesim window to stop the node. Now let's re-run it, but this time use a [Remapping Argument](http://wiki.ros.org/Remapping%20Arguments) to change the node's name:
%%bash --bg rosrun turtlesim turtlesim_node __name:=my_turtle
Starting job # 3 in a separate thread.
MIT
notebooks/ROS_Tutorials/.ipynb_checkpoints/ROS Nodes-checkpoint.ipynb
GimpelZhang/git_test
Now, if we go back and use `rosnode list`:
%%bash
rosnode list
/my_turtle
/rosout
/turtlesim
MIT
notebooks/ROS_Tutorials/.ipynb_checkpoints/ROS Nodes-checkpoint.ipynb
GimpelZhang/git_test
print('Welcome to Techno Quiz: ')
ans = input('''Ready to begin (yes/no): ''')
score = 0
total_Q = 15
if ans.lower() == 'yes':
    ans = input('''
1. How to check your current python version ?
A. python version
B. python -V
Ans:''')
    ...
_____no_output_____
Apache-2.0
Quiz.ipynb
sandeepkumarpradhan71/sandeepkumar
Input x, truth y, predict (y-x) in bins.
Major changes:
- in Datagenerator(), add y = y - X[output_idxs]
- in create_predictions(): when unnormalizing, only multiply with std, don't add mean
- included adaptive bins

Observations:
- DOI takes much longer to train to the same loss than normal categorical.
- not much better performance wi...
%load_ext autoreload
%autoreload 2

import xarray as xr
import numpy as np
import matplotlib.pyplot as plt

from src.data_generator import *
from src.train import *
from src.utils import *
from src.networks import *

tf.__version__

import os
import tensorflow as tf
os.environ["CUDA_VISIBLE_DEVICES"] = str(0)
print("Num GPUs...
_____no_output_____
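The categorical targets here are the differences y − x mapped into fixed bins. A minimal numpy sketch of that binning step (the bin edges and sample differences are illustrative, not the notebook's actual configuration):

```python
import numpy as np

# stand-in for y - x differences
diff = np.array([-4.2, -0.3, 0.1, 3.7])

# 10 equal-width bins spanning [-5, 5]
bin_edges = np.linspace(-5, 5, 11)

# np.digitize returns 1-based edge indices; shift to 0-based bin indices
bin_idx = np.digitize(diff, bin_edges) - 1
print(bin_idx)  # [0 4 5 8]
```

Each difference then becomes a one-hot class label over the 10 bins, which is what the categorical loss trains against.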
MIT
nbs_probabilistic/07.2 - Difference of Input.ipynb
sagar-garg/WeatherBench
Training
# model = build_resnet_categorical(
#     **args, input_shape=dg_train.shape,
# )
# # model.summary()

# categorical_loss = create_lat_categorical_loss(dg_train.data.lat, 2)
# model.compile(keras.optimizers.Adam(1e-3), loss=categorical_loss)
# model_history=model.fit(dg_train, epochs=50)

#training is slower compared t...
_____no_output_____
MIT
nbs_probabilistic/07.2 - Difference of Input.ipynb
sagar-garg/WeatherBench
Predictions
exp_id = 'categorical_doi_v1'
model_save_dir = args['model_save_dir']

#args['ext_mean'] = xr.open_dataarray(f'{args["model_save_dir"]}/{args["exp_id"]}_mean.nc')
#args['ext_std'] = xr.open_dataarray(f'{args["model_save_dir"]}/{args["exp_id"]}_std.nc')
#dg_test = load_data(**args, only_test=True)

model = keras.models.load_m...
257.84134 258.57373 258.57373
MIT
nbs_probabilistic/07.2 - Difference of Input.ipynb
sagar-garg/WeatherBench
Most likely class
# Using bin_mid_points of prediction with highest probability
das = []
for var in ['z', 't']:
    idxs = np.argmax(preds[var], -1)
    most_likely = preds[var].mid_points[idxs]
    das.append(xr.DataArray(
        most_likely,
        dims=['time', 'lat', 'lon'],
        coords=[preds.time, preds.lat, preds.lon],
        nam...
_____no_output_____
MIT
nbs_probabilistic/07.2 - Difference of Input.ipynb
sagar-garg/WeatherBench
RMSE
compute_weighted_rmse(preds_ml_new, valid).load()
# still very bad. for comparison, training on the same data for the same epochs (loss=1.7)
# without the difference-of-input method had rmse of 685
_____no_output_____
MIT
nbs_probabilistic/07.2 - Difference of Input.ipynb
sagar-garg/WeatherBench
Binned CRPS
preds['t'].mid_points

# changing Observation directly instead of predictions for binned crps
obs = valid - valid.shift(time=-dg_test.lead_time)
obs = obs.sel(time=preds.time)  # reducing to preds size
obs = obs.sel(time=slice(None, '2018-12-28T22:00:00'))  # removing nan values

print(valid.t.isel(lat=0, lon=0).sel(time='2018-05-05T2...
_____no_output_____
MIT
nbs_probabilistic/07.2 - Difference of Input.ipynb
sagar-garg/WeatherBench
Compare to: adaptive binning
# Finding bin edges on full 1 year training data (Not possible for 40 years)
args['is_categorical'] = False
dg_train, dg_valid, dg_test = load_data(**args)
args['is_categorical'] = True

x, y = dg_train[0]
print(x.shape, y.shape)

diff = y - x[..., dg_train.output_idxs]
print(diff.min(), diff.max(), diff.mean())
plt.hist(diff.resha...
_____no_output_____
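One common way to choose adaptive bin edges is via quantiles of the training differences, so each bin holds roughly the same number of samples. A sketch of the idea, not the notebook's exact procedure (the synthetic differences are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
diff = rng.normal(size=10_000)   # stand-in for y - x training differences

num_bins = 10
# edges at the 0%, 10%, ..., 100% quantiles -> roughly equal-count bins
edges = np.quantile(diff, np.linspace(0, 1, num_bins + 1))
counts, _ = np.histogram(diff, bins=edges)
print(counts)  # roughly 1000 samples per bin
```

Compared to fixed-width bins, this avoids near-empty bins in the tails of the difference distribution.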
MIT
nbs_probabilistic/07.2 - Difference of Input.ipynb
sagar-garg/WeatherBench
Training for adaptive bins
# model2 = build_resnet_categorical(
#     **args, input_shape=dg_train.shape,
# )
# # model.summary()

# categorical_loss = create_lat_categorical_loss(dg_train.data.lat, 2)
# model2.compile(keras.optimizers.Adam(1e-3), loss=categorical_loss)
# model_history=model2.fit(dg_train, epochs=50)

# exp_id='categorical_doi_...
_____no_output_____
MIT
nbs_probabilistic/07.2 - Difference of Input.ipynb
sagar-garg/WeatherBench
Predictions for Adaptive bins
exp_id = 'categorical_doi_adaptive_bins_v1'
model_save_dir = args['model_save_dir']

model2 = keras.models.load_model(
    f'{model_save_dir}/{exp_id}.h5',
    custom_objects={'PeriodicConv2D': PeriodicConv2D, 'categorical_loss': keras.losses.mse}
)

#args

# full-data (apr-dec 2018)
preds = create_predictions(model2, dg_test,...
_____no_output_____
MIT
nbs_probabilistic/07.2 - Difference of Input.ipynb
sagar-garg/WeatherBench
Most likely class
# Using bin_mid_points of prediction with highest probability
das = []
for var in ['z', 't']:
    idxs = np.argmax(preds[var], -1)
    most_likely = preds[var].mid_points[idxs]
    das.append(xr.DataArray(
        most_likely,
        dims=['time', 'lat', 'lon'],
        coords=[preds.time, preds.lat, preds.lon],
        nam...
_____no_output_____
MIT
nbs_probabilistic/07.2 - Difference of Input.ipynb
sagar-garg/WeatherBench
RMSE
compute_weighted_rmse(preds_ml_new, valid).load()
# almost the same as non-adaptive. loss comparable (~2.9 for non-adaptive, ~2.3 for adaptive)
_____no_output_____
MIT
nbs_probabilistic/07.2 - Difference of Input.ipynb
sagar-garg/WeatherBench
Binned CRPS
preds['t'].mid_points

# changing Observation directly instead of predictions for binned crps
obs = valid - valid.shift(time=-dg_test.lead_time)
obs = obs.sel(time=preds.time)  # reducing to preds size
obs = obs.sel(time=slice(None, '2018-12-28T22:00:00'))  # removing nan values

print(valid.t.isel(lat=0, lon=0).sel(time='2018-05-05T2...
_____no_output_____
MIT
nbs_probabilistic/07.2 - Difference of Input.ipynb
sagar-garg/WeatherBench
comparing to - without input difference
from src.data_generator import *

args['bin_min'] = -5
args['bin_max'] = 5  # checked min, max of (x-y) in train.
args['num_bins'], args['bin_min'], args['bin_max']

dg_train, dg_valid, dg_test = load_data(**args)
x, y = dg_train[0]; print(x.shape, y.shape)
x, y = dg_valid[0]; print(x.shape, y.shape)
x, y = dg_test[0]; print(x.shape,...
_____no_output_____
MIT
nbs_probabilistic/07.2 - Difference of Input.ipynb
sagar-garg/WeatherBench
Binned CRPS
def compute_bin_crps(obs, preds, bin_edges):
    """
    Last axis must be bin axis
    obs: [...]
    preds: [..., n_bins]
    """
    obs = obs.values
    preds = preds.values

    # Convert observation
    a = np.minimum(bin_edges[1:], obs[..., None])
    b = bin_edges[:-1] * (bin_edges[0:-1] > obs[..., None])
    y ...
_____no_output_____
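For a binned forecast, CRPS integrates the squared difference between the forecast CDF and the observation's step-function CDF across the bins. A minimal numpy sketch under that definition (this is a discretized illustration of the score's meaning, not the notebook's compute_bin_crps implementation; edges and probabilities are illustrative):

```python
import numpy as np

def binned_crps(obs, probs, bin_edges):
    # forecast CDF evaluated at each bin's right edge
    cdf = np.cumsum(probs)
    # observation CDF: steps from 0 to 1 once an edge passes the observation
    step = (bin_edges[1:] >= obs).astype(float)
    widths = np.diff(bin_edges)
    return np.sum((cdf - step) ** 2 * widths)

edges = np.array([0.0, 1.0, 2.0, 3.0])
sharp = binned_crps(1.5, np.array([0.0, 1.0, 0.0]), edges)     # all mass in the right bin
flat = binned_crps(1.5, np.array([1/3, 1/3, 1/3]), edges)      # uniform forecast
print(sharp, flat)  # the sharp, well-placed forecast scores lower (better)
```

The score rewards forecasts that are both sharp and centered on the observed bin, which is why it is used alongside RMSE here.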
MIT
nbs_probabilistic/07.2 - Difference of Input.ipynb
sagar-garg/WeatherBench
Adaptive binning
args['is_categorical'] = False
dg_train, dg_valid, dg_test = load_data(**args)
args['is_categorical'] = True

x, y = dg_train[0]
print(x.shape, y.shape)

diff = y - x[..., dg_train.output_idxs]
print(diff.min(), diff.max(), diff.mean())
plt.hist(diff.reshape(-1))

diff = []
for x, y in dg_train:
    diff.append(y - x[..., dg_train.outp...
_____no_output_____
MIT
nbs_probabilistic/07.2 - Difference of Input.ipynb
sagar-garg/WeatherBench
Unnormalized Data
a1 = np.arange(100)
mean = np.mean(a1)
std = np.std(a1)
a1_norm = (a1 - mean) / std
a1_norm

a1[4] - a1[2]
diff = a1_norm[4] - a1_norm[2]
diff
diff * std
_____no_output_____
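The change noted at the top of this notebook (when unnormalizing differences, only multiply with std, don't add the mean) follows because subtraction cancels the mean term. A quick self-contained check:

```python
import numpy as np

a = np.arange(100.0)
mean, std = a.mean(), a.std()
a_norm = (a - mean) / std

# (a4 - mean)/std - (a2 - mean)/std = (a4 - a2)/std: the mean cancels,
# so multiplying by std alone recovers the original difference
diff = (a_norm[4] - a_norm[2]) * std
print(diff)  # equals a[4] - a[2] = 2.0 up to floating point
```

Adding the mean back here would bias every unnormalized difference by exactly `mean`.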
MIT
nbs_probabilistic/07.2 - Difference of Input.ipynb
sagar-garg/WeatherBench
$f(x)=\exp(\sin(\pi x))$, integrated from $-1$ to $1$.

---
import math
import numpy as np

def f(x):
    return math.exp(np.sin(np.pi * x))

n = 10
k = -1
result = 0
for i in range(n):
    result += f(k) / n
    result += f(k + 2 / n) / n
    k = k + 2 / n
print(result)

n = 20
k = -1
result = 0
for i in range(n):
    result += f(k) / n
    result += f(k + 2 / n) / n
    k = k + 2 / n
print(result)

n = 40
k = -1
result = 0
for i in range(n):
    re...
_____no_output_____
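The loops above are the composite trapezoid rule with step $h = 2/n$: each subinterval contributes $\tfrac{h}{2}(f(x_i) + f(x_{i+1})) = (f(x_i) + f(x_{i+1}))/n$. A compact equivalent using only the standard library:

```python
import math

def f(x):
    return math.exp(math.sin(math.pi * x))

def trapezoid(f, a, b, n):
    """Composite trapezoid rule on [a, b] with n subintervals."""
    h = (b - a) / n
    total = (f(a) + f(b)) / 2
    total += sum(f(a + i * h) for i in range(1, n))
    return h * total

for n in (10, 20, 40):
    print(n, trapezoid(f, -1.0, 1.0, n))
```

Because the integrand is smooth and periodic over $[-1, 1]$, the trapezoid rule converges very quickly here as $n$ doubles.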
MIT
sec5exercise02a.ipynb
teshenglin/computational_mathematics
Day and Night Image Classifier
---
The day/night image dataset consists of 200 RGB color images in two categories: day and night. There are equal numbers of each example: 100 day images and 100 night images.
We'd like to build a classifier that can accurately label these images as day or night, and that relies on finding...
import cv2  # computer vision library
import helpers

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
_____no_output_____
MIT
Intro-To-Computer-Vision-1/1_1_Image_Representation/6_1. Visualizing the Data.ipynb
prakhargurawa/PyTorch-ML
Training and Testing Data
The 200 day/night images are separated into training and testing datasets.
* 60% of these images are training images, for you to use as you create a classifier.
* 40% are test images, which will be used to test the accuracy of your classifier.
First, we set some variables to keep track of some w...
# Image data directories image_dir_training = "day_night_images/training/" image_dir_test = "day_night_images/test/"
_____no_output_____
MIT
Intro-To-Computer-Vision-1/1_1_Image_Representation/6_1. Visualizing the Data.ipynb
prakhargurawa/PyTorch-ML
Load the datasets
These first few lines of code will load the training day/night images and store all of them in a variable, `IMAGE_LIST`. This list contains the images and their associated label ("day" or "night").
For example, the first image-label pair in `IMAGE_LIST` can be accessed by index: ``` IMAGE_LIST[0][:]``...
# Using the load_dataset function in helpers.py
# Load training data
IMAGE_LIST = helpers.load_dataset(image_dir_training)
len(IMAGE_LIST)
_____no_output_____
MIT
Intro-To-Computer-Vision-1/1_1_Image_Representation/6_1. Visualizing the Data.ipynb
prakhargurawa/PyTorch-ML
---

1. Visualize the input images
# Select an image and its label by list index
image_index = 0
selected_image = IMAGE_LIST[image_index][0]
selected_label = IMAGE_LIST[image_index][1]

## TODO: Print out 1. The shape of the image and 2. The image's label `selected_label`
print(selected_image.shape)
print(selected_label)

## TODO: Display a night image
#...
(458, 800, 3) day (700, 1280, 3) night
MIT
Intro-To-Computer-Vision-1/1_1_Image_Representation/6_1. Visualizing the Data.ipynb
prakhargurawa/PyTorch-ML
# Installs
%%capture
!pip install --upgrade category_encoders plotly

# Imports
import os, sys
os.chdir('/content')
!git init .
!git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge.git
!git pull origin master
!pip install -r requirements.txt
os.chdir('module1')

# Disable warning
import wa...
_____no_output_____
MIT
Kaggle_Challenge_Assignment7.ipynb
JimKing100/DS-Unit-2-Kaggle-Challenge
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Healthcare/16.Adverse_Drug_Event_ADE_NER_and_Classifier...
import json
from google.colab import files

license_keys = files.upload()

with open(list(license_keys.keys())[0]) as f:
    license_keys = json.load(f)

%%capture
for k,v in license_keys.items():
    %set_env $k=$v

!wget https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jsl_colab_setup.sh
!bas...
3.0.1 3.0.0
Apache-2.0
tutorials/Certification_Trainings/Healthcare/16.Adverse_Drug_Event_ADE_NER_and_Classifier.ipynb
gkovaig/spark-nlp-workshop
ADE Classifier Pipeline (with a pretrained model)

`True` : The sentence is talking about a possible ADE.
`False` : The sentence doesn't have any information about an ADE.

ADE Classifier with BioBert

*[embedded image: ADE classifier pipeline diagram]*
# Annotator that transforms a text column from dataframe into an Annotation ready for NLP
documentAssembler = DocumentAssembler()\
    .setInputCol("text")\
    .setOutputCol("sentence")

# Tokenizer splits words in a relevant format for NLP
tokenizer = Tokenizer()\
    .setInputCols(["sentence"])\
    .setOutp...
_____no_output_____
Apache-2.0
tutorials/Certification_Trainings/Healthcare/16.Adverse_Drug_Event_ADE_NER_and_Classifier.ipynb
gkovaig/spark-nlp-workshop
As you can see, `gastric problems` is not detected as `ADE` because it appears in a negative context, so the classifier handled it correctly.
text = "I just took a Metformin and started to feel dizzy."

ade_lp_pipeline.annotate(text)['class'][0]

t = '''
Always tired, and possible blood clots. I was on Voltaren for about 4 years and all of the sudden had a minor stroke and had blood clots that traveled to my eye. I had every test in the book done at the hospital,...
True False True False
Apache-2.0
tutorials/Certification_Trainings/Healthcare/16.Adverse_Drug_Event_ADE_NER_and_Classifier.ipynb
gkovaig/spark-nlp-workshop
ADE Classifier trained with conversational (short) sentences

This model is trained on short, conversational sentences related to ADE and should do better on short text used in a daily context.

*[embedded image: conversational ADE classifier diagram]*
conv_classsifierdl = ClassifierDLModel.pretrained("classifierdl_ade_conversational_biobert", "en", "clinical/models")\
    .setInputCols(["sentence", "sentence_embeddings"]) \
    .setOutputCol("class")

conv_ade_clf_pipeline = Pipeline(
    stages=[documentAssembler,
            tokenizer,
            ...
_____no_output_____
Apache-2.0
tutorials/Certification_Trainings/Healthcare/16.Adverse_Drug_Event_ADE_NER_and_Classifier.ipynb
gkovaig/spark-nlp-workshop
ADE NER

Extracts `ADE` and `DRUG` entities from text.

*[embedded image: ADE NER pipeline diagram]*
documentAssembler = DocumentAssembler()\
    .setInputCol("text")\
    .setOutputCol("document")

sentenceDetector = SentenceDetector()\
    .setInputCols(["document"])\
    .setOutputCol("sentence")

tokenizer = Tokenizer()\
    .setInputCols(["sentence"])\
    .setOutputCol("token")

word_embeddings = WordEmbeddingsM...
_____no_output_____
Apache-2.0
tutorials/Certification_Trainings/Healthcare/16.Adverse_Drug_Event_ADE_NER_and_Classifier.ipynb
gkovaig/spark-nlp-workshop
As you can see, `gastric problems` is not detected as `ADE` because it appears in a negative context, so the NER did a good job ignoring it.

ADE NER with Bert embeddings

*[embedded image: BioBert ADE NER pipeline diagram]*
documentAssembler = DocumentAssembler()\
    .setInputCol("text")\
    .setOutputCol("document")

sentenceDetector = SentenceDetector()\
    .setInputCols(["document"])\
    .setOutputCol("sentence")

tokenizer = Tokenizer()\
    .setInputCols(["sentence"])\
    .setOutputCol("token")

bert_embeddings = BertEmbeddings....
_____no_output_____
Apache-2.0
tutorials/Certification_Trainings/Healthcare/16.Adverse_Drug_Event_ADE_NER_and_Classifier.ipynb
gkovaig/spark-nlp-workshop
It looks like the Bert version of the NER returns more entities than the clinical embeddings version.

NER and Classifier combined with AssertionDL Model
assertion_ner_converter = NerConverter() \
    .setInputCols(["sentence", "token", "ner"]) \
    .setOutputCol("ass_ner_chunk")\
    .setWhiteList(['ADE'])

biobert_assertion = AssertionDLModel.pretrained("assertion_dl_biobert", "en", "clinical/models") \
    .setInputCols(["sentence", "ass_ner_chunk", "embeddings"]) \...
I feel a bit drowsy & have a little blurred vision, so far no gastric problems. I have been on Arthrotec 50 for over 10 years on and off, only taking it when I needed it. Due to my arthritis getting progressively worse, to the point where I am in tears with the agony, gp's started me on 75 twice a day and I have to tak...
Apache-2.0
tutorials/Certification_Trainings/Healthcare/16.Adverse_Drug_Event_ADE_NER_and_Classifier.ipynb
gkovaig/spark-nlp-workshop
Looks great! `gastric problems` is detected as `ADE` and marked `absent`.

ADE models applied to Spark Dataframes
import pyspark.sql.functions as F

! wget -q https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/tutorials/Certification_Trainings/Healthcare/data/sample_ADE_dataset.csv

ade_DF = spark.read\
    .option("header", "true")\
    .csv("./sample_ADE_dataset.csv")\
    ...
+--------------------------------------------------+-----+ | text|label| +--------------------------------------------------+-----+ |Do U know what Meds are R for bipolar depressio...|False| |# hypercholesterol: Because of elevated CKs (pe...| True| |Her weight, respirtory s...
Apache-2.0
tutorials/Certification_Trainings/Healthcare/16.Adverse_Drug_Event_ADE_NER_and_Classifier.ipynb
gkovaig/spark-nlp-workshop
**With BioBert version of NER** (will be slower but more accurate)
import pyspark.sql.functions as F

ner_converter = NerConverter() \
    .setInputCols(["sentence", "token", "ner"]) \
    .setOutputCol("ner_chunk")\
    .setWhiteList(['ADE'])

ner_pipeline = Pipeline(stages=[
    documentAssembler,
    sentenceDetector,
    tokenizer,
    bert_embeddings,
    ade_ner_bert,
    ner_c...
_____no_output_____
Apache-2.0
tutorials/Certification_Trainings/Healthcare/16.Adverse_Drug_Event_ADE_NER_and_Classifier.ipynb
gkovaig/spark-nlp-workshop
**Doing the same with clinical embeddings version** (faster results)
import pyspark.sql.functions as F

ner_converter = NerConverter() \
    .setInputCols(["sentence", "token", "ner"]) \
    .setOutputCol("ner_chunk")\
    .setWhiteList(['ADE'])

ner_pipeline = Pipeline(stages=[
    documentAssembler,
    sentenceDetector,
    tokenizer,
    word_embeddings,
    ade_ner,
    ner_converter])
...
+----------------------------------------------------------------------+----------------------------------------------------------------------+ | text| ADE_phrases| +-------------------------------...
Apache-2.0
tutorials/Certification_Trainings/Healthcare/16.Adverse_Drug_Event_ADE_NER_and_Classifier.ipynb
gkovaig/spark-nlp-workshop
Creating a sentence dataframe (one sentence per row) and getting ADE entities and categories
documentAssembler = DocumentAssembler()\
    .setInputCol("text")\
    .setOutputCol("document")

sentenceDetector = SentenceDetector()\
    .setInputCols(["document"])\
    .setOutputCol("sentence")\
    .setExplodeSentences(True)

tokenizer = Tokenizer()\
    .setInputCols(["sentence"])\
    .setOutputC...
+------------------------------------------------------------+---------------------------------------------+-------+ | sentence| ADE_phrases| is_ADE| +------------------------------------------------------------+------------------------...
Apache-2.0
tutorials/Certification_Trainings/Healthcare/16.Adverse_Drug_Event_ADE_NER_and_Classifier.ipynb
gkovaig/spark-nlp-workshop
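Conceptually, `setExplodeSentences(True)` turns one document row into one row per detected sentence. A rough pure-Python sketch of that explode step (the naive period split is an assumption for illustration; Spark NLP's SentenceDetector is far more robust):

```python
# Naive stand-in for SentenceDetector with setExplodeSentences(True):
# one input row becomes one output row per detected sentence.
rows = ["I took Metformin. I felt dizzy afterwards."]

exploded = []
for text in rows:
    for sentence in text.split(". "):
        sentence = sentence.strip().rstrip(".")
        if sentence:
            exploded.append(sentence)

print(exploded)  # two rows, one sentence each
```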
Creating a pretrained pipeline with ADE NER, Assertion and Classifier
# Annotator that transforms a text column from dataframe into an Annotation ready for NLP
documentAssembler = DocumentAssembler()\
    .setInputCol("text")\
    .setOutputCol("sentence")

# Tokenizer splits words in a relevant format for NLP
tokenizer = Tokenizer()\
    .setInputCols(["sentence"])\
    .setOutputCol("t...
_____no_output_____
Apache-2.0
tutorials/Certification_Trainings/Healthcare/16.Adverse_Drug_Event_ADE_NER_and_Classifier.ipynb
gkovaig/spark-nlp-workshop
Slice 136 patient 0002
from sklearn.utils import class_weight

class_weights = class_weight.compute_class_weight('balanced',
                                                 np.unique(d[2]),
                                                 d[2])
class_weights

import keras
model = keras.models.load_model('trial_0001_MFCcas_dim2_128_acc.h5')
m_...
_____no_output_____
MIT
model.ipynb
abhi134/Brain_Tumor_Segmentation
eval on 128th slice 0002
model.evaluate([X1,X2],y,batch_size = 1024)

model_info = model.fit([X1,X2],y,epochs=30,batch_size=256,class_weight= class_weights)

import matplotlib.pyplot as plt
plt.plot(model_info.history['acc'])
#plt.plot(m_info.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['...
_____no_output_____
MIT
model.ipynb
abhi134/Brain_Tumor_Segmentation
eval on 100th slice 0001
model.evaluate([X1,X2],y,batch_size = 1024)

pred = model.predict([X1,X2],batch_size = 1024)
pred = np.around(pred)
pred1 = np.dot(pred.reshape(17589,5),np.array([0,1,2,3,4]))
y1 = np.dot(y.reshape(17589,5),np.array([0,1,2,3,4]))
y2 = np.argmax(y.reshape(17589,5),axis = 1)
y2.all() == 0
y1.all()==0

from sklearn import m...
_____no_output_____
MIT
model.ipynb
abhi134/Brain_Tumor_Segmentation
Slice 128 patient 0001
from sklearn.utils import class_weight

class_weights = class_weight.compute_class_weight('balanced',
                                                 np.unique(d[2]),
                                                 d[2])
class_weights

m1.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy'])
m1...
Epoch 1/20 14541/14541 [==============================] - 127s 9ms/step - loss: 1.3402 - acc: 0.8345 Epoch 2/20 14541/14541 [==============================] - 123s 8ms/step - loss: 1.1816 - acc: 0.9560 Epoch 3/20 14541/14541 [==============================] - 123s 8ms/step - loss: 1.0906 - acc: 0.9647 Epoch 4/20 14541/...
MIT
model.ipynb
abhi134/Brain_Tumor_Segmentation
Plot of input cascade
import matplotlib.pyplot as plt

plt.plot(m1_info.history['acc'])
#plt.plot(m_info.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()

m1.save('trial_0001_input_cascade_acc.h5')

plt.plot(m_info.history['acc'])
#plt.plot(m_i...
_____no_output_____
MIT
model.ipynb
abhi134/Brain_Tumor_Segmentation
Training on slice 128, evaluating on 136
m.evaluate([X1,X2],y,batch_size = 1024)
m.save('trial_0001_MFCcas_dim2_128_acc.h5')

pred = m.predict([X1,X2],batch_size = 1024)
print(((pred != 0.) & (pred != 1.)).any())
pred = np.around(pred)
type(y)
pred1 = np.dot(pred.reshape(17589,5),np.array([0,1,2,3,4]))
pred1.shape
y1 = np.dot(y.reshape(17589,5),np.array([0,1,2...
data.ipynb model.ipynb data_scan_0001.pickle training.ipynb data_trial_81.h5 trial_0001_81_accuracy.h5 data_trial_dim2_128.h5 trial_0001_81_f1.h5 data_trial.h5 trial_0001_accuracy.h5 data_trial_X.pickle trial_0001_f1.h5 data_trial_Y.pickle trial_0001_input_cascade_acc.h5 data_Y_0001.pickle trial_0001_input_cas...
MIT
model.ipynb
abhi134/Brain_Tumor_Segmentation
For training over the entire image, create a batch of patches for one image and a batch of labels in Y.
import h5py
import numpy as np

hf = h5py.File('data_trial_dim2_128.h5', 'r')
X = hf.get('dataset_1')
Y = hf.get('dataset_2')

y = np.zeros((26169,1,1,5))
for i in range(y.shape[0]):
    y[i,:,:,Y[i]] = 1

X = np.asarray(X)
X.shape

keras.__version__

import keras.backend as K

def f1_score(y_true, y_pred):
    # Count posit...
_____no_output_____
MIT
model.ipynb
abhi134/Brain_Tumor_Segmentation
Load 10 years of accident data, from 2007 to 2016
# load accidents data from 2007 to 2016
dbf07 = DBF('accident/accident2007.dbf')
dbf08 = DBF('accident/accident2008.dbf')
dbf09 = DBF('accident/accident2009.dbf')
dbf10 = DBF('accident/accident2010.dbf')
dbf11 = DBF('accident/accident2011.dbf')
dbf12 = DBF('accident/accident2012.dbf')
dbf13 = DBF('accident/accident2013.dbf...
_____no_output_____
MIT
.ipynb_checkpoints/Project 3-checkpoint.ipynb
junemore/traffic-accidents-analysis
First, we want to combine accidents07 ~ accidents16 into one dataframe. Since not all of the accident data downloaded from the U.S. Department of Transportation have the same features, using the `join='inner'` option of the `pd.concat` function gives us the intersection of features.
# rename columns in frame07 so that column names match the other frames
accidents07.rename(columns={'latitude': 'LATITUDE', 'longitud': 'LONGITUD'}, inplace=True)

# take a look inside how the accident data file looks like
# combine all accidents files
allaccidents = pd.concat([accidents07,accidents08,acciden...
_____no_output_____
MIT
.ipynb_checkpoints/Project 3-checkpoint.ipynb
junemore/traffic-accidents-analysis
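The `join='inner'` behavior described above amounts to keeping only the columns shared by every frame. A minimal sketch of that intersection logic with plain Python sets (the column names here are illustrative, not the real FARS schema):

```python
# Columns of three hypothetical yearly accident tables.
cols_2007 = {"ST_CASE", "STATE", "FATALS", "latitude"}
cols_2008 = {"ST_CASE", "STATE", "FATALS", "LATITUDE"}
cols_2009 = {"ST_CASE", "STATE", "FATALS", "LATITUDE", "WEATHER"}

# pd.concat(..., join='inner') keeps only the intersection of columns.
shared = cols_2007 & cols_2008 & cols_2009
print(sorted(shared))  # ['FATALS', 'STATE', 'ST_CASE']
```

This also shows why `accidents07`'s lowercase `latitude` column is renamed first: without the rename, an inner join would drop that column entirely.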
The allaccidents table recorded 320874 accidents from 2010-2016, and it has 42 features. Here are the meanings of some of the features according to the `FARS Analytical User's Manual`.

Explanation of variables
*VE_TOTAL*: Number of Vehicles in crash
*VE_FORMS*: Number of Motor Vehicles in Transport (MVIT)
*PED*: Number o...
import warnings
warnings.filterwarnings('ignore')

accidents = allaccidents[['YEAR','ST_CASE','STATE','VE_TOTAL','PERSONS','FATALS',
                           'MONTH','DAY_WEEK','HOUR','NHS','LATITUDE','LONGITUD',
                           'MAN_COLL','LGT_COND','WEATHER','ARR_HOUR','ARR_MIN',
                           'CF1','DRUNK_DR']]
accidents.rename(columns={'ST_CASE':'CASE_NUM','VE_TOTAL':'NUM_...
_____no_output_____
MIT
.ipynb_checkpoints/Project 3-checkpoint.ipynb
junemore/traffic-accidents-analysis
Combine "YEAR" and "CASE_NUM" to reindex the accidents dataframe.
accidents['STATE']=accidents['STATE'].astype(int)
accidents['CASE_NUM']=accidents['CASE_NUM'].astype(int)
accidents['YEAR']=accidents['YEAR'].astype(int)
accidents.index = list(accidents['YEAR'].astype(str) + accidents['CASE_NUM'].astype(str))

accidents.head()
accidents.shape
_____no_output_____
MIT
.ipynb_checkpoints/Project 3-checkpoint.ipynb
junemore/traffic-accidents-analysis
Load the vehicle data file, which contains the mortality rate

We also want to study the mortality rate of fatal accidents. The data element "Fatalities in Vehicle" in the Vehicle data file from the `U.S. Department of Transportation` website provides the number of deaths in a vehicle.
vdbf07 = DBF('vehicle_deaths/vehicle2007.dbf')
vdbf08 = DBF('vehicle_deaths/vehicle2008.dbf')
vdbf09 = DBF('vehicle_deaths/vehicle2009.dbf')
vdbf10 = DBF('vehicle_deaths/vehicle2010.dbf')
vdbf11 = DBF('vehicle_deaths/vehicle2011.dbf')
vdbf12 = DBF('vehicle_deaths/vehicle2012.dbf')
vdbf13 = DBF('vehicle_deaths/vehicle2013.dbf'...
_____no_output_____
MIT
.ipynb_checkpoints/Project 3-checkpoint.ipynb
junemore/traffic-accidents-analysis
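With the per-vehicle death counts loaded, a mortality rate is simply deaths divided by persons involved. A small illustrative calculation (the totals below are made up, not from the FARS data):

```python
# Hypothetical totals for one year of fatal accidents.
persons_involved = 80000
deaths = 20000  # "Fatalities in Vehicle" summed over all vehicles

mortality_rate = deaths / persons_involved
print(f"{mortality_rate:.1%}")  # 25.0%
```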
plot
# the total accidents number each year; analyze the difference between years
year_acci = all[['YEAR','CASE_NUM']].groupby('YEAR').count()
month_acci = all[['MONTH','CASE_NUM']].groupby('MONTH').count()
day_acci = all[['DAY_WEEK','CASE_NUM']].groupby('DAY_WEEK').count()
hour_acci = all[['HOUR','CASE_NUM']].groupby('HOUR')....
_____no_output_____
MIT
.ipynb_checkpoints/Project 3-checkpoint.ipynb
junemore/traffic-accidents-analysis
Part 9: Hither to Train, Thither to Test

OK, now we know a bit about perceptrons. We'll return to that subject again. But now let's do a couple of things with our 48 colors from lesson 7:

* We're going to wiggle some more - perturb the color data - in order to generate even more data.
* But now we're going to randomly sp...
from keras.layers import Activation, Dense, Dropout
from keras.models import Sequential
import keras.optimizers, keras.utils, numpy
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer

def train(rgbValues, colorNames, epochs = 3, perceptronsPerColorName = 4, batchSize =...
Using TensorFlow backend.
MIT
Part09_Hither_to_Train_Thither_to_Test.ipynb
erikma/ColorMatching
Here's our createMoreTrainingData() function, mostly the same but we've doubled the number of perturbValues by adding points in between the previous ones.
def createMoreTrainingData(colorNameToRGBMap):
    # The incoming color map is not typically going to be oversubscribed with e.g.
    # extra 'red' samples pointing to slightly different colors. We generate a
    # training dataset by perturbing each color by a small amount positive and
    # negative. We do this for e...
_____no_output_____
MIT
Part09_Hither_to_Train_Thither_to_Test.ipynb
erikma/ColorMatching
And here are our previous 48 crayon colors; let's try training:
def rgbToFloat(r, g, b):  # r, g, b in 0-255 range
    return (float(r) / 255.0, float(g) / 255.0, float(b) / 255.0)

# http://www.jennyscrayoncollection.com/2017/10/complete-list-of-current-crayola-crayon.html
colorMap = {
    # 8-crayon box colors
    'red': rgbToFloat(238, 32, 77),
    'yellow': rgbToFloat(252, 232,...
WARNING:tensorflow:From c:\users\erik\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Colocations handled automatically by pl...
MIT
Part09_Hither_to_Train_Thither_to_Test.ipynb
erikma/ColorMatching
Not bad: We quickly got our loss down to 0.17 in only 3 epochs, but the larger batch size kept it from taking a really long time.

But let's examine our new addition, the test data scoring result. From my machine:

`Score: loss=0.1681, accuracy=0.9464`

Note that we trained with 74,000 data points, but we kept aside an addi...
from ipywidgets import interact
from IPython.core.display import display, HTML

def displayColor(r, g, b):
    rInt = min(255, max(0, int(r * 255.0)))
    gInt = min(255, max(0, int(g * 255.0)))
    bInt = min(255, max(0, int(b * 255.0)))
    hexColor = "#%02X%02X%02X" % (rInt, gInt, bInt)
    display(HTML('<div style=...
_____no_output_____
MIT
Part09_Hither_to_Train_Thither_to_Test.ipynb
erikma/ColorMatching
In my opinion the extra perturbation data made quite a bit of difference. It guesses over 70% for gray at (0.5, 0.5, 0.5), better than before.

Here's the hyperparameter slider version so you can try out different epochs, batch sizes, and perceptrons:
@interact(epochs = (1, 10), perceptronsPerColorName = (1, 12), batchSize = (1, 50))
def trainModel(epochs=4, perceptronsPerColorName=3, batchSize=16):
    global colorModel
    global colorLabels
    (colorModel, colorLabels) = train(rgbValues, colorNames, epochs=epochs, perceptronsPerColorName=perceptronsPerColorName,...
_____no_output_____
MIT
Part09_Hither_to_Train_Thither_to_Test.ipynb
erikma/ColorMatching