Dataset columns and value lengths:

| column | type | min length | max length |
| --- | --- | --- | --- |
| markdown | string | 0 | 1.02M |
| code | string | 0 | 832k |
| output | string | 0 | 1.02M |
| license | string | 3 | 36 |
| path | string | 6 | 265 |
| repo_name | string | 6 | 127 |
3.1.1.1 Variables initialization
img_width, img_height = 224, 224  # Keras ImageDataGenerator loads and transforms the dataset images
BASE_DATA_DIR = 'data/'
IMG_DATA_DIR = 'MURA-v1.1/'
_____no_output_____
MIT
src/xr_finger_model.ipynb
rajkumargithub/denset.mura
3.1.2 XR_FINGER ImageDataGenerators I am going to build a model for every study type and ensemble them. Hence I am preparing the data per study type for the model to be trained on.
train_data_dir = BASE_DATA_DIR + IMG_DATA_DIR + 'train/XR_FINGER'
valid_data_dir = BASE_DATA_DIR + IMG_DATA_DIR + 'valid/XR_FINGER'

train_datagen = ImageDataGenerator(
    rotation_range=30,
    horizontal_flip=True
)
test_datagen = ImageDataGenerator(
    rotation_range=30,
    horizontal_flip=True
)
study_types =...
100%|██████████| 1865/1865 [00:13<00:00, 133.99it/s] 100%|██████████| 166/166 [00:01<00:00, 148.46it/s] 461it [00:01, 271.51it/s] 5106it [00:18, 269.46it/s]
MIT
src/xr_finger_model.ipynb
rajkumargithub/denset.mura
3.2 Building a model As per the MURA paper, I replaced the fully connected layer with one that has a single output, and applied a sigmoid nonlinearity after it. The paper optimizes a weighted binary cross-entropy loss. Please see the formula below: L(X, y) = -w_{T,1} * y * log p(Y = 1|X) - w_{T,0} * (1 - y) * log p(Y = 0|X) ...
# model parameters for training
# K.set_learning_phase(1)
nb_train_samples = len(train_dict['images'])
nb_validation_samples = len(valid_dict['images'])
epochs = 10
batch_size = 8
steps_per_epoch = nb_train_samples // batch_size
print(steps_per_epoch)
n_classes = 1

def build_model():
    base_model = DenseNet169(input_shap...
_____no_output_____
MIT
src/xr_finger_model.ipynb
rajkumargithub/denset.mura
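The weighted binary cross-entropy described in 3.2 can be sketched in plain NumPy (a minimal illustration only; the weight values and toy labels below are assumptions, not the notebook's actual MURA class weights):

```python
import numpy as np

def weighted_bce(y_true, p_pred, w1, w0, eps=1e-7):
    """Weighted binary cross-entropy following the MURA formula:
    L = -w1 * y * log p - w0 * (1 - y) * log(1 - p), averaged over samples."""
    p = np.clip(p_pred, eps, 1 - eps)  # avoid log(0)
    return np.mean(-w1 * y_true * np.log(p) - w0 * (1 - y_true) * np.log(1 - p))

# toy example: two abnormal and two normal studies (hypothetical values)
y = np.array([1.0, 1.0, 0.0, 0.0])
p = np.array([0.9, 0.8, 0.2, 0.1])
loss = weighted_bce(y, p, w1=0.6, w0=0.4)
```

With `w1 = w0 = 1` this reduces to ordinary binary cross-entropy, which is a quick sanity check for the implementation.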
3.2.2 Training the Model
# train the model
model_history = model.fit_generator(
    train_generator,
    epochs=epochs,
    workers=0,
    use_multiprocessing=False,
    steps_per_epoch=nb_train_samples // batch_size,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size,
    callbacks=callbacks_li...
_____no_output_____
MIT
src/xr_finger_model.ipynb
rajkumargithub/denset.mura
3.2.3 Visualizing the model
# There was a bug in Keras when using pydot in the vis_utils class. To work around it,
# I had to comment out line 55 in vis_utils.py and reload the module:
# ~/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/keras/utils
from keras.utils import plot_model
from keras.utils.vis_utils import *
import keras
im...
_____no_output_____
MIT
src/xr_finger_model.ipynb
rajkumargithub/denset.mura
3.3 Performance Evaluation
# Now that we have trained our model, we can look at the metrics from the training process
plt.figure(0)
plt.plot(model_history.history['acc'], 'r')
plt.plot(model_history.history['val_acc'], 'g')
plt.xticks(np.arange(0, 5, 1))
plt.rcParams['figure.figsize'] = (8, 6)
plt.xlabel("Num of Epochs")
plt.ylabel("Accuracy")
plt.title(...
_____no_output_____
MIT
src/xr_finger_model.ipynb
rajkumargithub/denset.mura
ROC Curve
from sklearn.metrics import roc_curve
fpr_keras, tpr_keras, thresholds_keras = roc_curve(valid_dict['labels'], pred_batch)

from sklearn.metrics import auc
auc_keras = auc(fpr_keras, tpr_keras)

plt.figure(1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_keras, tpr_keras, label='Keras (area = {:.3f})'.format(auc_keras))
p...
_____no_output_____
MIT
src/xr_finger_model.ipynb
rajkumargithub/denset.mura
Vulnerability analysis with respect to known CVEs. Question: Does the Java project at hand contain potential security vulnerabilities due to reported CVEs? How critical are those vulnerabilities rated? * relevant for service providers/developers and customers/users alike, in order to prevent third-party attacks via the...
# Import all used libraries
import py2neo
import pandas as pd
import numpy as np
import json
from pandas.io.json import json_normalize
import openpyxl
import urllib3
from urllib3 import request
import certifi
from IPython.display import display, HTML
import pygal

base_html = """
<!DOCTYPE html>
<html>
...
_____no_output_____
Apache-2.0
Software-Analysis.ipynb
softvis-research/security-analysis
Data import via the CVE API or a corresponding JSON file. Both options have a similar data structure. The request via cveapi only extends the structure by a few additional key-value pairs, which is why the DataFrame needs some extra adjustment.
# Load the static CVE data from a JSON file
# Download from https://nvd.nist.gov/vuln/data-feeds
api = 'false'
with open('CVE/nvdcve-1.1-2021.json', encoding='utf-8') as staticData:
    jsonData = json.load(staticData)
df_raw = pd.json_normalize(jsonData, record_path=['CVE_Items'])

# Data import via cveapi
#...
_____no_output_____
Apache-2.0
Software-Analysis.ipynb
softvis-research/security-analysis
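The un-nesting step can be illustrated with a tiny inline JSON in the same shape as the NVD 1.1 feed (a sketch; the real feed carries many more fields, and the CVE IDs and scores below are made up):

```python
import json
import pandas as pd

# Minimal stand-in for the NVD 1.1 feed structure (illustrative only)
feed = json.loads("""
{
  "CVE_Items": [
    {"cve": {"CVE_data_meta": {"ID": "CVE-2021-0001"}},
     "impact": {"baseMetricV3": {"cvssV3": {"baseScore": 9.8}}}},
    {"cve": {"CVE_data_meta": {"ID": "CVE-2021-0002"}},
     "impact": {"baseMetricV3": {"cvssV3": {"baseScore": 5.3}}}}
  ]
}
""")

# Each entry of CVE_Items becomes one row; nested keys become
# dotted column names such as 'cve.CVE_data_meta.ID'
df = pd.json_normalize(feed, record_path=["CVE_Items"])
```

This is exactly why the notebook can later filter on column names like `impact.baseMetricV3.cvssV3.baseScore`.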
Data preparation with DataFrames. Data from the request or the JSON file is un-nested and prepared in several DataFrames so that it is ready for the subsequent filtering on the artifacts in use.
# Prepare the data as a table with ID, description & severity of the vulnerability
# The DataFrame is reduced to 6 columns & columns are renamed (readability)
vulnerableList = df_raw.columns.isin(['cve.CVE_data_meta.ID',
                                      'impact.baseMetricV3.cvssV3.confidentialityImpact',
                                      'impact.baseMetricV3.cvssV3.int...
_____no_output_____
Apache-2.0
Software-Analysis.ipynb
softvis-research/security-analysis
Listing of the vulnerabilities. The following table shows all potential vulnerabilities regarding the artifacts used in the scanned project. Besides the CVE ID & the artifact used in the project plus its version number, it also shows a CVE description and the respective scores for assessing the relevance and ...
# New DataFrame with compact data (without version comparison)
df_compact = df_result[0:0]
df_compact = df_result.drop_duplicates(subset='CVE-ID', keep="first")
df_compact = df_compact.loc[:, df_compact.columns.isin(['CVE-ID', 'verwendetes Artefakt', 'verwendete Version'])]
df_compact = pd.merge(left=basicList, right=df...
_____no_output_____
Apache-2.0
Software-Analysis.ipynb
softvis-research/security-analysis
Legend
* *Base Score* = representation of the intrinsic character and severity of a vulnerability (constant over a longer period of time, across different environments) &rarr; comprises the Impact Score & the Exploitability Score
* *Exploitability Score* = reflection of the attackable component &rarr; compr...
df_count = df_compact['verwendetes Artefakt'].value_counts().to_frame()
df_count['Artefakt'] = df_count.index
df_count.columns = ['Anzahl auftretender CVE', 'Artefakt']
df_count.columns.name = None
df_count = df_count.reset_index()
df_count = df_count.loc[:, df_count.columns.isin(['Anzahl auftretender CVE', 'Artefakt'])]...
_____no_output_____
Apache-2.0
Software-Analysis.ipynb
softvis-research/security-analysis
Distribution of the high impact ratings across the ten most frequent artifacts
**Legend**
* *Confidentiality Impact* = impact on the confidentiality of information obtained through the system (includes e.g. restricted information access, access only with authorization)
* *Integrity Impact* = impact on the veracity...
from pygal.style import DefaultStyle

top5 = df_count.head(10)
labels = []
impactTypes = ['Confidentiality Impact', 'Integrity Impact', 'Availability Impact']
count_ci = []
count_ii = []
count_ai = []
line_chart = pygal.StackedBar(show_legend=True, human_readable=True, fill=True,
                              legend_at_bottom=True, print_values=True...
_____no_output_____
Apache-2.0
Software-Analysis.ipynb
softvis-research/security-analysis
Explore and create ML datasets In this notebook, we will explore data corresponding to taxi rides in New York City to build a Machine Learning model in support of a fare-estimation tool. The idea is to suggest a likely fare to taxi riders so that they are not surprised, and so that they can protest if the charge is mu...
import datalab.bigquery as bq
import seaborn as sns
import pandas as pd
import numpy as np
import shutil

%%javascript
$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')
_____no_output_____
Apache-2.0
courses/machine_learning/datasets/create_datasets.ipynb
AmirQureshi/code-to-run-
Extract sample data from BigQuery The dataset that we will use is a BigQuery public dataset. Click on the link, and look at the column names. Switch to the Details tab to verify that the number of records is one billion, and then switch to the Preview tab to look at a few rows.Let's write a SQL query to pick up intere...
%sql --module afewrecords
SELECT pickup_datetime, pickup_longitude, pickup_latitude, dropoff_longitude,
  dropoff_latitude, passenger_count, trip_distance, tolls_amount,
  fare_amount, total_amount
FROM [nyc-tlc:yellow.trips]
LIMIT 10

trips = bq.Query(afewrecords).to_dataframe()
trips
_____no_output_____
Apache-2.0
courses/machine_learning/datasets/create_datasets.ipynb
AmirQureshi/code-to-run-
Let's increase the number of records so that we can do some neat graphs. There is no guarantee about the order in which records are returned, and so no guarantee about which records get returned if we simply increase the LIMIT. To properly sample the dataset, let's use the HASH of the pickup time and return 1 in 100,0...
%sql --module afewrecords2
SELECT pickup_datetime, pickup_longitude, pickup_latitude, dropoff_longitude,
  dropoff_latitude, passenger_count, trip_distance, tolls_amount,
  fare_amount, total_amount
FROM [nyc-tlc:yellow.trips]
WHERE ABS(HASH(pickup_datetime)) % $EVERY_N == 1

trips = bq.Query(afewrecord...
_____no_output_____
Apache-2.0
courses/machine_learning/datasets/create_datasets.ipynb
AmirQureshi/code-to-run-
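The `ABS(HASH(pickup_datetime)) % EVERY_N == 1` sampling trick above can be mimicked in plain Python with a stable hash (a sketch using MD5; BigQuery's HASH is a different function, so the exact rows selected will differ, but the property of a deterministic, repeatable sample is the same):

```python
import hashlib

def in_sample(key: str, every_n: int) -> bool:
    """Deterministically keep roughly 1/every_n of the rows, based on a
    stable hash of the sampling key (here: the pickup timestamp string)."""
    h = int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)
    return h % every_n == 1

# hypothetical timestamps standing in for pickup_datetime values
rows = [f"2015-01-01 00:{m:02d}:00" for m in range(60)]
sampled = [r for r in rows if in_sample(r, 10)]
# the same subset survives on every run, unlike a plain LIMIT
```

Hashing the key, rather than relying on row order, is what makes the sample reproducible across queries.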
Exploring data Let's explore this dataset and clean it up as necessary. We'll use the Python Seaborn package to visualize graphs and Pandas to do the slicing and filtering.
ax = sns.regplot(x="trip_distance", y="fare_amount", ci=None, truncate=True, data=trips)
_____no_output_____
Apache-2.0
courses/machine_learning/datasets/create_datasets.ipynb
AmirQureshi/code-to-run-
Hmm ... do you see something wrong with the data that needs addressing? It appears that we have a lot of invalid data being coded as zero distance, and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. We can do this by modifying the BigQuery query to keep only trips longer ...
%sql --module afewrecords3
SELECT pickup_datetime, pickup_longitude, pickup_latitude, dropoff_longitude,
  dropoff_latitude, passenger_count, trip_distance, tolls_amount,
  fare_amount, total_amount
FROM [nyc-tlc:yellow.trips]
WHERE (ABS(HASH(pickup_datetime)) % $EVERY_N == 1
  AND trip_distance > 0 AN...
_____no_output_____
Apache-2.0
courses/machine_learning/datasets/create_datasets.ipynb
AmirQureshi/code-to-run-
What's up with the streaks at \$45 and \$50? Those are fixed-amount rides from JFK and La Guardia airports into anywhere in Manhattan, i.e. to be expected. Let's list the data to make sure the values look reasonable.Let's examine whether the toll amount is captured in the total amount.
tollrides = trips[trips['tolls_amount'] > 0]
tollrides[tollrides['pickup_datetime'] == '2012-09-05 15:45:00']
_____no_output_____
Apache-2.0
courses/machine_learning/datasets/create_datasets.ipynb
AmirQureshi/code-to-run-
Looking a few samples above, it should be clear that the total amount reflects fare amount, toll and tip somewhat arbitrarily -- this is because when customers pay cash, the tip is not known. So, we'll use the sum of fare_amount + tolls_amount as what needs to be predicted. Tips are discretionary and do not have to b...
trips.describe()
_____no_output_____
Apache-2.0
courses/machine_learning/datasets/create_datasets.ipynb
AmirQureshi/code-to-run-
Hmm ... the min and max of longitude look strange. Finally, let's actually look at the start and end of a few of the trips.
def showrides(df, numlines):
    import matplotlib.pyplot as plt
    lats = []
    lons = []
    for iter, row in df[:numlines].iterrows():
        lons.append(row['pickup_longitude'])
        lons.append(row['dropoff_longitude'])
        lons.append(None)
        lats.append(row['pickup_latitude'])
        lats.append(row['dropoff_latitude'])
        ...
_____no_output_____
Apache-2.0
courses/machine_learning/datasets/create_datasets.ipynb
AmirQureshi/code-to-run-
As you'd expect, rides that involve a toll are longer than the typical ride. Quality control and other preprocessing. We need to do some clean-up of the data:
* New York City longitudes are around -74 and latitudes are around 41.
* We shouldn't have zero passengers.
* Clean up the total_amount column to reflect only fare_amount an...
def preprocess(trips_in):
    trips = trips_in.copy(deep=True)
    trips.fare_amount = trips.fare_amount + trips.tolls_amount
    del trips['tolls_amount']
    del trips['total_amount']
    del trips['trip_distance']
    del trips['pickup_datetime']
    qc = np.all([
        trips['pickup_longitude'] > -78,
        trip...
_____no_output_____
Apache-2.0
courses/machine_learning/datasets/create_datasets.ipynb
AmirQureshi/code-to-run-
The quality control has removed about 300 rows (11400 - 11101), or about 3% of the data. This seems reasonable. Let's move on to creating the ML datasets. Create ML datasets: let's split the QC'ed data randomly into training, validation and test sets.
shuffled = tripsqc.sample(frac=1)
trainsize = int(len(shuffled['fare_amount']) * 0.70)
validsize = int(len(shuffled['fare_amount']) * 0.15)
df_train = shuffled.iloc[:trainsize, :]
df_valid = shuffled.iloc[trainsize:(trainsize + validsize), :]
df_test = shuffled.iloc[(trainsize + validsize):, :]
df_train.describe()
df_vali...
_____no_output_____
Apache-2.0
courses/machine_learning/datasets/create_datasets.ipynb
AmirQureshi/code-to-run-
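The 70/15/15 split above can be sketched on index level with plain NumPy (a minimal illustration on hypothetical data; the notebook itself shuffles the QC'ed `tripsqc` DataFrame with `.sample(frac=1)`):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000                               # hypothetical row count
indices = rng.permutation(n)           # shuffle all row indices once

trainsize = int(n * 0.70)
validsize = int(n * 0.15)

# slice the shuffled indices into three disjoint, exhaustive parts
train_idx = indices[:trainsize]
valid_idx = indices[trainsize:trainsize + validsize]
test_idx = indices[trainsize + validsize:]
```

Shuffling once and slicing guarantees that the three sets are disjoint and together cover every row, which a per-set random draw would not.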
Let's write out the three dataframes to appropriately named csv files. We can use these csv files for local training (recall that these files represent only 1/100,000 of the full dataset) until we get to point of using Dataflow and Cloud ML.
def to_csv(df, filename):
    outdf = df.copy(deep=False)
    outdf.loc[:, 'key'] = np.arange(0, len(outdf))  # row number as key
    # reorder columns so that target is first column
    cols = outdf.columns.tolist()
    cols.remove('fare_amount')
    cols.insert(0, 'fare_amount')
    print(cols)  # new order of columns
    outdf = out...
6.0,-74.013667,40.713935,-74.007627,40.702992,2,0 9.3,-74.007025,40.730305,-73.979111,40.752267,1,1 6.9,-73.9664,40.7598,-73.9864,40.7624,1,2 36.8,-73.961938,40.773337,-73.86582,40.769607,1,3 6.5,-73.989408,40.735895,-73.9806,40.745115,1,4 5.5,-73.983033,40.739107,-73.979105,40.74436,6,5 4.9,-73.983879,40.761266,...
Apache-2.0
courses/machine_learning/datasets/create_datasets.ipynb
AmirQureshi/code-to-run-
Verify that datasets exist
!ls -l *.csv
-rw-r--r-- 1 root root 88622 Feb 26 02:34 taxi-test.csv -rw-r--r-- 1 root root 417222 Feb 26 02:34 taxi-train.csv -rw-r--r-- 1 root root 88660 Feb 26 02:34 taxi-valid.csv
Apache-2.0
courses/machine_learning/datasets/create_datasets.ipynb
AmirQureshi/code-to-run-
We have 3 .csv files corresponding to train, valid, test. The ratio of file-sizes correspond to our split of the data.
%bash
head taxi-train.csv
12.0,-73.987625,40.750617,-73.971163,40.78518,1,0 4.5,-73.96362,40.774363,-73.953485,40.772665,1,1 4.5,-73.989649,40.756633,-73.985597,40.765662,1,2 10.0,-73.9939498901,40.7275238037,-74.0065841675,40.7442398071,1,3 2.5,-73.950223,40.66896,-73.948112,40.668872,6,4 7.3,-73.98511,40.742173,-73.96586,40.759668,4,5 8.1,-73...
Apache-2.0
courses/machine_learning/datasets/create_datasets.ipynb
AmirQureshi/code-to-run-
Looks good! We now have our ML datasets and are ready to train, validate and evaluate ML models. Benchmark: before we start building complex ML models, it is a good idea to come up with a very simple model and use it as a benchmark. My model will simply divide the mean fare_amount by the mean tr...
import datalab.bigquery as bq
import pandas as pd
import numpy as np
import shutil

def distance_between(lat1, lon1, lat2, lon2):
    # haversine formula to compute distance "as the crow flies"; taxis can't fly, of course
    dist = np.degrees(np.arccos(np.minimum(1, np.sin(np.radians(lat1)) * np.sin(np.radians(lat2)) + np...
Rate = $2.58056321263/km Train RMSE = 6.78227475714 Valid RMSE = 6.78227475714 Test RMSE = 5.56794896998
Apache-2.0
courses/machine_learning/datasets/create_datasets.ipynb
AmirQureshi/code-to-run-
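The benchmark's "as the crow flies" distance can be written out in full with the standard haversine formula (a sketch; the Earth radius constant and test coordinates below are assumptions, and the notebook's `distance_between` uses the equivalent spherical law of cosines form):

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0  # mean Earth radius, an assumption for this sketch

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return EARTH_RADIUS_KM * 2 * np.arcsin(np.sqrt(a))

# one degree of longitude at the equator is about 111 km
d = haversine_km(0.0, 0.0, 0.0, 1.0)
```

The haversine form is numerically better behaved than arccos for very short trips, which matters when many taxi rides cover only a few city blocks.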
Benchmark on the same dataset. The RMSE depends on the dataset, so for comparison we have to evaluate on the same dataset each time. We'll use this query in later labs:
def create_query(phase, EVERY_N):
    """
    phase: 1=train 2=valid
    """
    base_query = """
SELECT
  (tolls_amount + fare_amount) AS fare_amount,
  CONCAT(STRING(pickup_datetime), STRING(pickup_longitude), STRING(pickup_latitude),
         STRING(dropoff_latitude), STRING(dropoff_longitude)) AS key,
  DAYOFWEEK(pickup_datetime)*1...
Final Validation Set RMSE = 8.02608564676
Apache-2.0
courses/machine_learning/datasets/create_datasets.ipynb
AmirQureshi/code-to-run-
https://www.digitalocean.com/community/tutorials/how-to-use-the-python-debugger
https://docs.python.org/3/library/pdb.html
import pdb

def func():
    a = range(20)
    a[30]

func()
%debug

def func():
    a = range(20)
    pdb.set_trace()
    a[30]

func()

import xarray as xr
ds = xr.open_mfdataset('CAM*', decode_times=False)
a = 20
ds.sel(lat=a)
%debug
> /Users/stephanrasp/repositories/ESS-Python-Tutorial/materials/week4/pandas/_libs/hashtable_class_helper.pxi(339)pandas._libs.hashtable.Float64HashTable.get_item (pandas/_libs/hashtable.c:7415)() ipdb> u > /Users/stephanrasp/anaconda/envs/py36_keras/lib/python3.6/site-packages/pand...
MIT
materials/week4/debugging.ipynb
yuchiaol/ESS-Python-Tutorial
Examples of cases with zero related matches:
14173 - the related questions are barely relevant to the question; the ones we found make more sense given their similarity score
14204 - there are no related questions at all; the ones we found are somewhat similar
13849 - the question and its related questions only match on 1-2 words, with no semantic connection; the ones we found...
plt.bar(range(len(crossCheckDict)), list(crossCheckDict.values()), align='center')
plt.xticks(range(len(crossCheckDict)), list(crossCheckDict.keys()))
plt.show()

listOfVals = crossCheckDict.values()
len(listOfVals)
_____no_output_____
MIT
DOC2VEC.ipynb
utkueray/CQA
On average, how many similar related questions do Stack Exchange and our doc2vec model both suggest?
sum(listOfVals) / len(listOfVals)
sum(listOfVals)

for key in bothRelatedAndSim.keys():
    print("https://ai.stackexchange.com/questions/" + str(key))
    print([i for i, j in bothRelatedAndSim[key]])

f = open("DOC2VEC_RELATEDDICT.txt", "a")
for key in bothRelatedAndSim:
    f.write(str(key) + "\t" + str([i for i, j in bothR...
13544 [4376, 8518, 2817, 12671, 8962, 6139] ['neural-networks', 'recurrent-neural-networks', 'optimization', 'logic', 'function-approximation'] 4376 ['neural-networks'] 8518 ['neural-networks', 'machine-learning', 'backpropagation'] 2817 ['optimization', 'heuristics'] 12671 ['neural-networks', 'function-approximation']...
MIT
DOC2VEC.ipynb
utkueray/CQA
import requests
APIKEY = "unCQQDAhgl)qZ4GZRXVVGQ(("
query = "https://api.stackexchange.com/2.2/questions/" + str(54) + "?order=desc&sort=activity&site=ai&key=" + APIKEY
response = requests.get(query)
print(response.json()["items"][0]["tags"])

The API result matches the website, but does not match the entries in the post.xml file.

import tim...
tagVsQuestion = {}
for key in relatedId.keys():
    # print(key, relatedId[key])
    # print(tags[key])
    true = 0
    false = 0
    for item in relatedId[key]:
        if item in tags.keys():
            # print(item, tags[item])
            # print(np.in1d(tags[key], tags[item]).any())
            if np.in1d(tags[key], t...
_____no_output_____
MIT
DOC2VEC.ipynb
utkueray/CQA
Number of questions whose tags do not match those of their related questions
print([x for x in tagVsQuestion.keys() if tagVsQuestion[x] == 0])
[15415, 15451, 15515, 16017, 16490]
MIT
DOC2VEC.ipynb
utkueray/CQA
Only 5 do not match, and those are simply not present in our data; normally they work.
model.save("doc2vecmodel")
model2 = gensim.models.doc2vec.Doc2Vec.load("doc2vecmodel")

inferred_vector = model.infer_vector(["asda"])
sims = model.docvecs.most_similar([inferred_vector], topn=100)
sims
_____no_output_____
MIT
DOC2VEC.ipynb
utkueray/CQA
A Simple Autoencoder. We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally, the encoder an...
import torch
import numpy as np
from torchvision import datasets
import torchvision.transforms as transforms

# convert data to torch.FloatTensor
transform = transforms.ToTensor()

# load the training and test datasets
train_data = datasets.MNIST(root='data', train=True, download=True...
_____no_output_____
MIT
autoencoder/linear-autoencoder/Simple_Autoencoder_Exercise.ipynb
Kshitij09/deep-learning-v2-pytorch
Visualize the Data
import matplotlib.pyplot as plt
%matplotlib inline

# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy()

# get one image from the batch
img = np.squeeze(images[0])

fig = plt.figure(figsize=(5, 5))
ax = fig.add_subplot(111)
ax.imshow(img, cm...
_____no_output_____
MIT
autoencoder/linear-autoencoder/Simple_Autoencoder_Exercise.ipynb
Kshitij09/deep-learning-v2-pytorch
--- Linear Autoencoder. We'll train an autoencoder with these images by flattening them into 784-length vectors. The images from this dataset are already normalized so that the values lie between 0 and 1. Let's start by building a simple autoencoder. The encoder and decoder should each be made of **one linear layer**. The u...
import torch.nn as nn
import torch.nn.functional as F

# define the NN architecture
class Autoencoder(nn.Module):
    def __init__(self, encoding_dim):
        super(Autoencoder, self).__init__()
        ## encoder ##

        ## decoder ##

    def forward(self, x):
        # define feedforward behavi...
_____no_output_____
MIT
autoencoder/linear-autoencoder/Simple_Autoencoder_Exercise.ipynb
Kshitij09/deep-learning-v2-pytorch
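One way to fill in the exercise skeleton above is a single `nn.Linear` layer each for encoder and decoder (a sketch, not the official solution; `encoding_dim=32` is an assumption, and the final sigmoid keeps outputs in [0, 1] to match the normalized MNIST inputs):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Autoencoder(nn.Module):
    def __init__(self, encoding_dim):
        super().__init__()
        # encoder: 784-pixel flattened image -> compressed representation
        self.encoder = nn.Linear(28 * 28, encoding_dim)
        # decoder: compressed representation -> 784-pixel reconstruction
        self.decoder = nn.Linear(encoding_dim, 28 * 28)

    def forward(self, x):
        x = F.relu(self.encoder(x))          # compressed code
        return torch.sigmoid(self.decoder(x))  # reconstruction in [0, 1]

# hypothetical usage on a batch of 4 flattened images
model = Autoencoder(32)
out = model(torch.rand(4, 28 * 28))
```

The sigmoid on the output pairs naturally with the MSE (or BCE) reconstruction loss used later in the notebook.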
--- Training. Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards. We are not concerned with labels in this case, just images, which we can get from the `train_loader`. Because we're comparing pixel values in in...
# specify loss function
criterion = nn.MSELoss()

# specify optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

# number of epochs to train the model
n_epochs = 20

for epoch in range(1, n_epochs + 1):
    # monitor training loss
    train_loss = 0.0

    ###################
    # train the model...
_____no_output_____
MIT
autoencoder/linear-autoencoder/Simple_Autoencoder_Exercise.ipynb
Kshitij09/deep-learning-v2-pytorch
Checking out the results. Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good, except for some blurriness in places.
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()

images_flatten = images.view(images.size(0), -1)
# get sample outputs
output = model(images_flatten)
# prep images for display
images = images.numpy()

# output is resized into a batch of images
output = output.view(batch_s...
_____no_output_____
MIT
autoencoder/linear-autoencoder/Simple_Autoencoder_Exercise.ipynb
Kshitij09/deep-learning-v2-pytorch
Take one record from the check-in data:
- 1. strip the trailing newline
- 2. extract the person's name and email from the record
- 3. lightly encrypt the email
- 4. join name + gender + address + encrypted email, separated by '-'
file = 'C:\\Users\\lenovo\\Desktop\\python\\movies\\day01\\kaifangX.txt'
open_file = open(file, mode='r', encoding='gbk')
line = open_file.readline()

# strip the trailing newline
print(line)
strip_line = line.strip('\n')
print(strip_line)

# get the person's name
name = strip_line.find('陈萌')
print(name)
print(type(name))
print(strip_line[0:2])
chengmeng =...
陈萌,010-116321,M,19000101,北京市海淀区苏州街3号大恒科技大厦北座6层,100080,10116,010-82808028,010-82828028-208,chenmeng@dist.com.cn,0 陈萌,010-116321,M,19000101,北京市海淀区苏州街3号大恒科技大厦北座6层,100080,10116,010-82808028,010-82828028-208,chenmeng@dist.com.cn,0 ['陈萌', '010-116321', 'M', '19000101', '北京市海淀区苏州街3号大恒科技大厦北座6层', '100080', '10116', '010-828080...
Apache-2.0
day01.ipynb
1298646087/ghfhg
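The extract-then-mask-then-join steps described above can be sketched with plain string operations (the sample record and field positions below are hypothetical, not taken from the real file):

```python
# hypothetical CSV record in the same rough shape as the data file
line = "Jane Doe,010-116321,M,19000101,Some Address,100080,jane.doe@example.com,0\n"

# 1. strip the trailing newline, then split into fields
fields = line.strip("\n").split(",")
name, gender, address, email = fields[0], fields[2], fields[4], fields[6]

# 2. simple masking: keep first and last character of the local part
local, domain = email.split("@")
masked = local[0] + "*" * (len(local) - 2) + local[-1] + "@" + domain

# 3. join name + gender + address + masked email with '-'
record = "-".join([name, gender, address, masked])
```

Masking only the local part keeps the domain readable while hiding the identifying portion of the address.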
MINIROCKET
> A Very Fast (Almost) Deterministic Transform for Time Series Classification.
#export
from tsai.imports import *
from tsai.utils import *
from tsai.data.external import *
from tsai.models.layers import *

#export
from sktime.transformations.panel.rocket._minirocket import _fit as minirocket_fit
from sktime.transformations.panel.rocket._minirocket import _transform as minirocket_transform
from skt...
_____no_output_____
Apache-2.0
nbs/111b_models.MINIROCKET.ipynb
HafizAhmadHassan/tsai
Homework Template: Earth Analytics Python Course: Spring 2020. Before submitting this assignment, be sure to restart the kernel and run all cells. To do this, pull down the Kernel drop-down at the top of this notebook, then select **restart and run all**. Make sure you fill in any place that says `YOUR CODE HERE` or "YO...
NAME = "Sarah Jaffe"
COLLABORATORS = "Ruby Shaheen"
_____no_output_____
BSD-3-Clause
test_notebooks/test_notebooks_aq/ea-python-2020-09-cold-springs-workflows-h4_workingDRAFT.ipynb
shaheen19/wildfires_causes_consequences
![Colored Bar](colored-bar.png) Week 09 Homework - Multispectral Remote Sensing II. Include the plots, text and outputs below. For all plots:
* Add appropriate titles to your plot that clearly and concisely describe what the plot shows.
* Be sure to use the correct bands for each plot.
* Specify the source of the data used ...
# Autograding imports - do not modify this cell
import matplotcheck.notebook as nb
import matplotcheck.autograde as ag
import matplotcheck.raster as rs

# Import libraries (5 points)
# Only include imports required to run this notebook
import os
from glob import glob
import warnings
import numpy as np
import numpy.ma ...
_____no_output_____
BSD-3-Clause
test_notebooks/test_notebooks_aq/ea-python-2020-09-cold-springs-workflows-h4_workingDRAFT.ipynb
shaheen19/wildfires_causes_consequences
Assignment Week 1 - Complete by March 18. Define functions to process Landsat data. For next week (March 18), create the 3 functions below, used to process Landsat data. For all functions, add a docstring that uses numpy-style format (for examples, review the [Intro to Earth Data Science textbook chapter on Functions](http...
# Add your function here. Do NOT modify the function name
def crop_stack_data(files_to_crop, crop_dir_path, crop_bound):
    """Crops a set of tif files and saves them in a crop directory.

    Returns a stacked numpy array of bands.

    Parameters
    ----------
    files_to_crop : list
        List of paths...
_____no_output_____
BSD-3-Clause
test_notebooks/test_notebooks_aq/ea-python-2020-09-cold-springs-workflows-h4_workingDRAFT.ipynb
shaheen19/wildfires_causes_consequences
Function 2: mask_data (5 points). In the cell below, write a function called `mask_data` that:
1. Masks a numpy array using the Landsat Cloud QA layer.
   * **2 inputs:**
     * 1) numpy array to be masked
     * 2) Landsat QA layer in numpy array format
2. Returns a masked array.
   * **1 output:**
     * 1) m...
# Add your function here. Do NOT modify the function name
def mask_data(arr, path_to_qa):
    """Masks a numpy array using a cloud QA layer.

    Parameters
    ----------
    arr : numpy array
        Numpy array(s) of bands of aoi scene(s).
    path_to_qa : str
        Path to QA l...
_____no_output_____
BSD-3-Clause
test_notebooks/test_notebooks_aq/ea-python-2020-09-cold-springs-workflows-h4_workingDRAFT.ipynb
shaheen19/wildfires_causes_consequences
Function 3: classify_dnbr (5 points). In the cell below, write a function called `classify_dnbr` that:
1. Classifies a numpy array using classes/bins defined in the function.
   * **1 input:**
     * 1) numpy array containing dNBR data in numpy array format
2. Returns a classified numpy array.
   * **1 output:**
    ...
# Add your function here. Do NOT modify the function name
def classify_dnbr(arr):
    """Creates a new numpy array of classified values from a
    difference normalized burn ratio (dNBR) numpy array.

    Parameters
    ----------
    arr : numpy array
        Numpy array(s) containing dNBR dat...
_____no_output_____
BSD-3-Clause
test_notebooks/test_notebooks_aq/ea-python-2020-09-cold-springs-workflows-h4_workingDRAFT.ipynb
shaheen19/wildfires_causes_consequences
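Binning a dNBR array into severity classes, as `classify_dnbr` asks for, can be sketched with `np.digitize` (a sketch only; the class breaks below are the commonly cited USGS dNBR thresholds, used here as an assumption, since the assignment may specify different bins):

```python
import numpy as np

def classify_dnbr_sketch(arr, bins=(-0.1, 0.1, 0.27, 0.66)):
    """Return one integer class per pixel: 1=enhanced regrowth, 2=unburned,
    3=low severity, 4=moderate severity, 5=high severity."""
    # np.digitize returns the bin index 0..len(bins); shift to 1-based classes
    return np.digitize(arr, bins) + 1

# hypothetical 2x2 dNBR values spanning the class range
dnbr = np.array([[-0.5, 0.0], [0.2, 0.9]])
classes = classify_dnbr_sketch(dnbr)
```

`np.digitize` vectorizes the whole classification in one call, avoiding per-pixel loops over the raster.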
Assignment Week 2 - BEGINS March 18. You will write the function below next week, after learning more about MODIS h4 data in class on March 18, 2020. Be sure to add a docstring that uses numpy-style format (for examples, review the [Intro to Earth Data Science textbook chapter on Functions](https://www.earthdatascience.or...
# Add your function here. Do NOT modify the function name
def stack_modis_bands(h4_path, crop_bound):
    '''Accesses, crops, stacks and cleans (masking nodata) all bands within
    h4 files, producing the outputs needed to process and plot the data.

    Parameters
    ----------
    h4_path : str
    ...
_____no_output_____
BSD-3-Clause
test_notebooks/test_notebooks_aq/ea-python-2020-09-cold-springs-workflows-h4_workingDRAFT.ipynb
shaheen19/wildfires_causes_consequences
![Colored Bar](colored-bar.png) Figure 1 Overview - Grid of 3 Color InfraRed (CIR) Plots: NAIP, Landsat and MODIS. **You will be able to complete the MODIS subplot after the March 18 class!** Create a single figure that contains a grid of 3 plots of color infrared (also called false color) composite images using: * Post Fire ...
# Open fire boundary in this cell
# Import fire boundary .shp
fire_path = os.path.join("cold-springs-fire", "vector_layers", "fire-boundary-geomac",
                         "co_cold_springs_20160711_2200_dd83.shp")
fire_crop = gpd.read_file(fire_path)
print(fire_crop.t...
[-105.49580578 39.97552258 -105.45633769 39.98718058] (1, 22) {'init': 'epsg:4269'}
BSD-3-Clause
test_notebooks/test_notebooks_aq/ea-python-2020-09-cold-springs-workflows-h4_workingDRAFT.ipynb
shaheen19/wildfires_causes_consequences
Process Landsat Data. In the cells below, open and process your Landsat data using loops and string manipulation of paths. Use the functions `crop_stack_data()` and `mask_data()` that you wrote above to open, crop and stack each Landsat data scene (pre and post fire). Notes: if you were implementing this workflow for more...
# Create loop to process Landsat data in this cell
# Set paths to scene directories
base_path = os.path.join("earthpy-downloads", "landsat-coldsprings-hw")
all_dirs = glob(os.path.join(base_path, "*"))

# Lists for naming convention dictionaries
cleaned_landsat_data = {}
# S...
_____no_output_____
BSD-3-Clause
test_notebooks/test_notebooks_aq/ea-python-2020-09-cold-springs-workflows-h4_workingDRAFT.ipynb
shaheen19/wildfires_causes_consequences
Landsat Function Tests
# DO NOT MODIFY - test: crop_stack_data function

# DO NOT MODIFY - test: mask_data function

# DO NOT MODIFY - test: classify_dnbr function
_____no_output_____
BSD-3-Clause
test_notebooks/test_notebooks_aq/ea-python-2020-09-cold-springs-workflows-h4_workingDRAFT.ipynb
shaheen19/wildfires_causes_consequences
Process NAIP Post Fire DataIn the cell below, open and crop the post-fire NAIP data that you downloaded for homework 6.
# Process NAIP data
# Import NAIP 2017 image
naip_2017_path = os.path.join("cold-springs-fire", "naip", "m_3910505_nw_13_1_20170902",
                              "m_3910505_nw_13_1_20170902.tif")
####################### NODATA NOT ACCOUNTED FOR HERE = "None" ##############################...
_____no_output_____
BSD-3-Clause
test_notebooks/test_notebooks_aq/ea-python-2020-09-cold-springs-workflows-h4_workingDRAFT.ipynb
shaheen19/wildfires_causes_consequences
Process MODIS h4 Data - March 18th, 2020. In the cells below, open and process your MODIS hdf4 data using loops and string manipulation of paths. You will learn more about working with MODIS in class on March 18th. Use the function `stack_modis_bands()` that you previously wrote in this notebook to open and crop the MODI...
# Process MODIS Data # Set paths to scene directories modis_dirs = glob(os.path.join("cold-springs-modis-h5", "*")) # Modis dictionary of scene dates modis_bands_dict = {} for i in modis_dirs: # Paths to all modis scenes all_scenes = glob(os.path.join(i, "*.hdf")) ...
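The loop above leans on simple string manipulation of paths to identify each scene. A minimal sketch of that idea, with hypothetical scene directories and file names (invented for illustration, not the actual dataset layout):

```python
import os

# Hypothetical scene directories and file names (invented for illustration;
# real MODIS products follow the MOD09/MYD09 naming convention)
paths = [
    os.path.join("cold-springs-modis-h5", "07_july_2016", "MOD09GA.A2016189.h09v05.006.hdf"),
    os.path.join("cold-springs-modis-h5", "17_july_2016", "MOD09GA.A2016199.h09v05.006.hdf"),
]

scenes = {}
for path in paths:
    # The parent directory name identifies the scene date
    scene_date = os.path.basename(os.path.dirname(path))
    scenes[scene_date] = path
```

Keying the dictionary by the directory name makes it easy to look up the pre- and post-fire scenes by date later in the workflow.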
_____no_output_____
BSD-3-Clause
test_notebooks/test_notebooks_aq/ea-python-2020-09-cold-springs-workflows-h4_workingDRAFT.ipynb
shaheen19/wildfires_causes_consequences
Figure 1: Plot CIR for NAIP, Landsat and MODIS Using Post Fire Data (15 points each subplot)

In the cell below, create a figure with 3 subplots stacked vertically. In each subplot, plot a CIR composite image using the post-fire data for:

* NAIP (first figure axis)
* Landsat (second figure axis)
* MODIS (third figure axi...
# Plot CIR of Post Fire NAIP, Landsat & MODIS together in one figure fig, [ax1, ax2, ax3] = plt.subplots(3, 1, figsize=(12, 18)) # Plot NAIP CIR ep.plot_rgb(naip_2017_crop, extent = naip_extent, rgb = [3, 0, 1], ax = ax1, title="NAIP CIR Image\n Post Cold Springs" \ ...
_____no_output_____
BSD-3-Clause
test_notebooks/test_notebooks_aq/ea-python-2020-09-cold-springs-workflows-h4_workingDRAFT.ipynb
shaheen19/wildfires_causes_consequences
![Colored Bar](colored-bar.png)

Figure 2: Difference NDVI (dNDVI) Using Landsat & MODIS Data (20 points each subplot)

Plot the NDVI difference using before and after Landsat and MODIS data that cover the Cold Springs Fire study area. For each axis be sure to:

1. overlay the fire boundary (`vector_layers/fire-boundary-ge...
# Plot Difference NDVI for Landsat & MODIS together in one figure fig, [ax1, ax2] = plt.subplots(2, 1, figsize=(12, 12)) # Plot Landsat dNDVI ep.plot_bands(landsat_dndvi, cmap="RdYlGn", vmin=-0.6, vmax=0.6, ax = ax1, extent=extent_landsat, title="Landsat Derived dNDVI\n 21 June vs. 23 July,...
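The dNDVI being plotted reduces to two normalized differences and a subtraction. A toy numpy sketch, mirroring what earthpy's `es.normalized_diff` computes (the 2x2 reflectance values below are invented, not real scene data):

```python
import numpy as np

def normalized_diff(b1, b2):
    """(b1 - b2) / (b1 + b2), i.e. NDVI when b1 = NIR and b2 = red."""
    return (b1 - b2) / (b1 + b2)

# Toy 2x2 reflectance bands (values invented)
nir_pre = np.array([[0.5, 0.6], [0.4, 0.5]])
red_pre = np.array([[0.1, 0.1], [0.2, 0.1]])
nir_post = np.array([[0.3, 0.2], [0.3, 0.4]])
red_post = np.array([[0.2, 0.1], [0.2, 0.2]])

ndvi_pre = normalized_diff(nir_pre, red_pre)
ndvi_post = normalized_diff(nir_post, red_post)
# One common convention: pre minus post, so positive values suggest vegetation loss
dndvi = ndvi_pre - ndvi_post
```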
_____no_output_____
BSD-3-Clause
test_notebooks/test_notebooks_aq/ea-python-2020-09-cold-springs-workflows-h4_workingDRAFT.ipynb
shaheen19/wildfires_causes_consequences
![Colored Bar](colored-bar.png)

Figure 3 Overview: Difference NBR (dNBR) Using Landsat & MODIS Data (25 points each subplot)

Create a figure that has two subplots stacked vertically using the same MODIS and Landsat data that you processed above.

* Subplot one: classified dNBR using Landsat data
* Subplot two: classifie...
# Calculate dNBR for Landsat - be sure to use the correct bands! # Landsat NBR processing landsat_prefire_nbr = es.normalized_diff( cleaned_landsat_data["20160621"][4], cleaned_landsat_data["20160621"][6]) landsat_postfire_nbr = es.normalized_diff( cleaned_landsat_data["20160723"][4], cleaned_landsat_...
_____no_output_____
BSD-3-Clause
test_notebooks/test_notebooks_aq/ea-python-2020-09-cold-springs-workflows-h4_workingDRAFT.ipynb
shaheen19/wildfires_causes_consequences
![Colored Bar](colored-bar.png)

Landsat vs MODIS Burned Area (10 points)

In the cell below, print the total area burned in classes 4 and 5 (moderate to high severity) for both datasets (Landsat and MODIS).

HINT: Feel free to experiment with loops to complete this part of the homework.
# Total Burned Area in Classes 4 and 5 for Landsat and MODIS landsat_pixel_area = int(landsat_src.res[0]) * int(landsat_src.res[0]) modis_pixel_area = int(modis_meta['transform'][0]) * \ int(modis_meta['transform'][0]) burned_area_per_source = [landsat_dnbr_reclass, modis_dnbr_reclass] burned_area_per_pixel = [lan...
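Burned area from a classified raster is just a pixel count times the per-pixel area. A toy numpy sketch of that calculation (the class values and 30 m pixel size below are invented for illustration):

```python
import numpy as np

# Toy classified dNBR raster: values 1-5, where 4-5 = moderate-to-high severity
dnbr_class = np.array([
    [1, 2, 4],
    [5, 4, 3],
    [1, 1, 5],
])
pixel_area_m2 = 30 * 30  # hypothetical 30 m pixels, as for Landsat

# Count pixels in classes 4 and 5, then scale by pixel area
burned_pixels = int(np.isin(dnbr_class, [4, 5]).sum())
burned_area_m2 = burned_pixels * pixel_area_m2
```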
_____no_output_____
BSD-3-Clause
test_notebooks/test_notebooks_aq/ea-python-2020-09-cold-springs-workflows-h4_workingDRAFT.ipynb
shaheen19/wildfires_causes_consequences
Install Earth Engine API and geemap

Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://...
# Installs geemap package import subprocess try: import geemap except ImportError: print('geemap package not installed. Installing ...') subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap']) # Checks whether this notebook is running on Google Colab try: import google.colab import gee...
_____no_output_____
MIT
Image/edge_detection.ipynb
c11/earthengine-py-notebooks
Create an interactive map

The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
Map = emap.Map(center=[40,-100], zoom=4) Map.add_basemap('ROADMAP') # Add Google Map Map
_____no_output_____
MIT
Image/edge_detection.ipynb
c11/earthengine-py-notebooks
Add Earth Engine Python script
# Add Earth Engine dataset # Load a Landsat 8 image, select the panchromatic band. image = ee.Image('LANDSAT/LC08/C01/T1/LC08_044034_20140318').select('B8') # Perform Canny edge detection and display the result. canny = ee.Algorithms.CannyEdgeDetector(**{ 'image': image, 'threshold': 10, 'sigma': 1 }) Map.setCenter(...
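Outside Earth Engine, the gradient-threshold idea at the heart of edge detectors can be sketched in plain numpy. This is a simplified stand-in for illustration, not the full Canny algorithm (which adds Gaussian smoothing, non-maximum suppression, and hysteresis):

```python
import numpy as np

# Single-band toy image with a vertical brightness step
img = np.zeros((5, 6))
img[:, 3:] = 10.0

# Intensity gradients along rows (gy) and columns (gx)
gy, gx = np.gradient(img)
magnitude = np.hypot(gx, gy)

# Pixels whose gradient magnitude exceeds a threshold are marked as edges
edges = magnitude > 1.0
```

On this step image, only the columns straddling the brightness jump are flagged as edges.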
_____no_output_____
MIT
Image/edge_detection.ipynb
c11/earthengine-py-notebooks
Display Earth Engine data layers
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map. Map
_____no_output_____
MIT
Image/edge_detection.ipynb
c11/earthengine-py-notebooks
Input pipeline into Keras

In this notebook, we will look at how to use TensorFlow to read large datasets that may not fit into memory. We can use the tf.data pipeline to feed data to Keras models that use a TensorFlow backend.

Learning Objectives
1. Use tf.data to read CSV files
2. Load the training data into m...
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
pip install tensorflow==2.1.0 --user
_____no_output_____
Apache-2.0
quests/serverlessml/03_tfdata/labs/input_pipeline.ipynb
juancaob/training-data-analyst
Let's make sure we install the necessary version of tensorflow. After doing the pip install above, click __Restart the kernel__ on the notebook so that the Python environment picks up the new packages.
import os, json, math import numpy as np import shutil import logging # SET TF ERROR LOG VERBOSITY logging.getLogger("tensorflow").setLevel(logging.ERROR) import tensorflow as tf print("TensorFlow version: ",tf.version.VERSION) PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME REGION = "us-central1" ...
_____no_output_____
Apache-2.0
quests/serverlessml/03_tfdata/labs/input_pipeline.ipynb
juancaob/training-data-analyst
Locating the CSV files

We will start with the CSV files that we wrote out in the [first notebook](../01_explore/taxifare.ipynb) of this sequence. Just so you don't have to run the notebook, we saved a copy in ../data
!ls -l ../../data/*.csv
_____no_output_____
Apache-2.0
quests/serverlessml/03_tfdata/labs/input_pipeline.ipynb
juancaob/training-data-analyst
Use tf.data to read the CSV files

See the documentation for [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset).

If you have TFRecords (which is recommended), use [make_batched_features_dataset](https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_batched_...
CSV_COLUMNS = ['fare_amount', 'pickup_datetime', 'pickup_longitude', 'pickup_latitude', 'dropoff_longitude', 'dropoff_latitude', 'passenger_count', 'key'] LABEL_COLUMN = 'fare_amount' DEFAULTS = [[0.0],['na'],[0.0],[0.0],[0.0],[0.0],[0.0],['na']] # load the traini...
_____no_output_____
Apache-2.0
quests/serverlessml/03_tfdata/labs/input_pipeline.ipynb
juancaob/training-data-analyst
Note that this is a prefetched dataset. If you loop over the dataset, you'll get the rows one-by-one. Let's convert each row into a Python dictionary:
# print a few of the rows for n, data in enumerate(tempds): row_data = {k: v.numpy() for k,v in data.items()} print(n, row_data) if n > 2: break
_____no_output_____
Apache-2.0
quests/serverlessml/03_tfdata/labs/input_pipeline.ipynb
juancaob/training-data-analyst
What we really need is a dictionary of features + a label. So, we have to do two things to the above dictionary. (1) remove the unwanted column "key" and (2) keep the label separate from the features.
# get features, label def features_and_labels(row_data): # TODO 3: Prune the data by removing column named 'key' for unwanted_col in ['pickup_datetime', '']: # specify column to remove row_data.pop(unwanted_col) label = row_data.pop(LABEL_COLUMN) return row_data, label # features, label # pri...
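The pop-based pruning works on any plain dictionary, so the logic can be checked without TensorFlow. A self-contained sketch with toy values (the helper name `prune_row` and the sample row are invented for illustration):

```python
LABEL_COLUMN = 'fare_amount'

def prune_row(row_data):
    # Drop columns the model should not see, then split off the label
    for unwanted_col in ['pickup_datetime', 'key']:
        row_data.pop(unwanted_col)
    label = row_data.pop(LABEL_COLUMN)
    return row_data, label

# Toy row (invented values, same column names as the CSVs above)
row = {'fare_amount': 12.5, 'pickup_datetime': 'na', 'passenger_count': 2.0, 'key': 'na'}
features, label = prune_row(dict(row))
print(features, label)
```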
_____no_output_____
Apache-2.0
quests/serverlessml/03_tfdata/labs/input_pipeline.ipynb
juancaob/training-data-analyst
Batching

Let's do both (loading, features_and_labels) in our load_dataset function, and also add batching.
def load_dataset(pattern, batch_size): return ( # TODO 4: Use tf.data to map features and labels tf.data.experimental.make_csv_dataset() # complete parameters .map() # complete with name of features and labels ) # TODO 5: Experiment by adjusting batch size # try changing the batch s...
_____no_output_____
Apache-2.0
quests/serverlessml/03_tfdata/labs/input_pipeline.ipynb
juancaob/training-data-analyst
Shuffling

When training a deep learning model in batches over multiple workers, it is helpful if we [shuffle the data](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/data/Dataset#shuffle). That way, different workers will be working on different parts of the input file at the same time, and so averaging gra...
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL): dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS) .map(features_and_labels) # features, label .cache()) if mode == tf.estimator.ModeKeys.TRAIN: # TODO 6: Ad...
_____no_output_____
Apache-2.0
quests/serverlessml/03_tfdata/labs/input_pipeline.ipynb
juancaob/training-data-analyst
Name: Chetna Nihalani

Task 2: K-Means Clustering
# Importing the libraries import numpy as np import matplotlib.pyplot as plt import pandas as pd from sklearn import datasets # Load the iris dataset iris = datasets.load_iris() iris_df = pd.DataFrame(iris.data, columns = iris.feature_names) iris_df.head() # See the first 5 rows # Finding the optimum number of cluster...
_____no_output_____
MIT
Task_2_KMeans_Clustering.ipynb
chetna-source/TSF-Tasks
We clearly see from the above graph why it is called 'the elbow method': the optimum number of clusters is where the elbow occurs. This is where the within-cluster sum of squares (WCSS) stops decreasing significantly as more clusters are added.

From this we choose the number of clusters as **3**.
# Applying kmeans to the dataset / Creating the kmeans classifier kmeans = KMeans(n_clusters = 3, init = 'k-means++', max_iter = 300, n_init = 10, random_state = 0) y_kmeans = kmeans.fit_predict(x) # Visualising the clusters - On the first two columns plt.scatter(x[y_kmeans == 0, 0], x[y_kmeans == 0, 1]...
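The WCSS quantity driving the elbow plot is just the sum of squared distances from each point to its assigned centroid (sklearn's KMeans reports the same value as `inertia_`). A toy numpy sketch with invented 1-D data and fixed centroids:

```python
import numpy as np

# Two tight 1-D clusters (values invented) and hand-picked centroids
points = np.array([[1.0], [1.2], [0.8], [5.0], [5.2], [4.8]])
centroids = np.array([[1.0], [5.0]])

# Assign each point to its nearest centroid ...
dists = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
labels = dists.argmin(axis=1)

# ... then sum squared distances within each cluster
wcss = sum(((points[labels == k] - centroids[k]) ** 2).sum()
           for k in range(len(centroids)))
```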
_____no_output_____
MIT
Task_2_KMeans_Clustering.ipynb
chetna-source/TSF-Tasks
Correlations

We're now going to look at the relationships between two __numerical__ variables. This will allow us to make the final leap from ANOVA to linear regression. Correlations are a numerical representation of the strength of the relationship between two numerical variables - and in a way reflect the ability...
## loading some libraries! library(tidyverse) ## all of our normal functions for working with data library(ggcorrplot) ## make pretty corrplot library(GGally) ## scatterplot matrix function library(gtrendsR) ## Google Trends API options(repr.plot.width=4, repr.plot.height=3) ## set options for plot size within the not...
_____no_output_____
BSD-3-Clause
Correlations.ipynb
adriannebradford/INST_314_examples
First we're going to plot hwy mpg vs. cty mpg. What type of relationship would we expect between these variables?
mpg %>% ggplot (aes(x = hwy, y = cty)) ## the variables we want to plot + geom_point() ## the type of plot we want
_____no_output_____
BSD-3-Clause
Correlations.ipynb
adriannebradford/INST_314_examples
This graph shows an example of positive correlation - see how all of the dots "line up" and show a general trend that as the x variable increases, the y variable increases. Because they are both increasing together, the correlation is positive.Remember the correlation value is an estimation of the _linear_ relationshi...
mpg %>% ggplot(aes(x=hwy, y=cty)) + ## the variables we want to plot geom_point()+ ## the type of plot we want geom_smooth(method=lm, se=FALSE) ## adding that "best fit" line using a linear model
_____no_output_____
BSD-3-Clause
Correlations.ipynb
adriannebradford/INST_314_examples
This is our "best fit" line. These linear relationships are going to form the basis of Linear Regression that we will get into next week, but right now we're going to focus simply on the correlation between these two variables. Note that the distance of the dots from the line affects the strength of the relationship ...
mpg %>% ggplot(aes(x=hwy, y=displ)) + geom_point()+ geom_smooth(method=lm, se=FALSE)
_____no_output_____
BSD-3-Clause
Correlations.ipynb
adriannebradford/INST_314_examples
Now what we see is an example of a negative correlation, because the line starts at the top left corner and goes down toward the bottom right. So as hwy increases, displ decreases. These dots seem to be more spread out from the line, so I would guess that the strength of the relationship between hwy and displ (the co...
mpg %>% ggplot(aes(x=hwy, y=cyl)) + geom_point()+ geom_smooth(method=lm, se=FALSE)
_____no_output_____
BSD-3-Clause
Correlations.ipynb
adriannebradford/INST_314_examples
You need to make sure your variables are truly numeric, and not simply ordinal. Because cylinder can only be 4, 5, 6, or 8, with no values in between, we see that all of the values line up at those values of y. The line indicates that there may be a general trend of negative association between these variables...
corr <- round(cor(mtcars), 1) ## obtain all of the correlations within pairs of all the num vars ggcorrplot(corr) ## plot those correlations
_____no_output_____
BSD-3-Clause
Correlations.ipynb
adriannebradford/INST_314_examples
This (above) is a heatmap of the strength and direction of the correlations between these variables, where very blue is -1 and very red is +1.

We can also create a scatterplot matrix where a number of variables are compared in scatterplots (like our examples above) in a grid.
options(repr.plot.width=8, repr.plot.height=4) ## plot size options for Jupyter notebook ONLY pairs(iris[,1:4]) ## plot the correlations between pairs of variables in columns 1 through 4 of iris dataset
_____no_output_____
BSD-3-Clause
Correlations.ipynb
adriannebradford/INST_314_examples
We can also use a categorical variable to color the dots in the grid by groups, to see how the relationship between the two numerical variables might be associated with an additional categorical variable. More about this to come soon.
options(repr.plot.width=8, repr.plot.height=4) pairs(iris[,1:4], col=iris$Species) ## col= adds color based on the grouping variable specified
_____no_output_____
BSD-3-Clause
Correlations.ipynb
adriannebradford/INST_314_examples
We also have situations when there is no correlation between the two variables.

To summarize: There are also cases where variables have obvious associations, but they are not linear. They have a __*correlation*__ of 0, but they are __*associated*__. We have to be careful how we use the word correlation in writing ...
options(repr.plot.width=10, repr.plot.height=4) ## set options for plot size within the notebook - # this is only for jupyter notebooks, you can disregard this. ## call the google trends api and return hits info for keyword searches ## hits are scaled to values between 0 - 100 trends <- gtrends(c("statistics", "pugs")...
_____no_output_____
BSD-3-Clause
Correlations.ipynb
adriannebradford/INST_314_examples
Keep in mind that to be correlated the variables __*do not*__ have to have the same magnitude - they just have to trend together.
## extract the interest_over_time df from the gtrends object trend_time <- as_tibble(trends$interest_over_time) glimpse(trend_time) ## look at basic summary statistics trend_time %>% group_by(keyword) %>% summarise(mean(hits), median(hits), var(hits))
_____no_output_____
BSD-3-Clause
Correlations.ipynb
adriannebradford/INST_314_examples
To use this data we need to pivot it so that our "long" format is in "wide" format - so that we have hits for statistics and hits for pugs in their own columns.
trend_wide <- trend_time %>% spread(key = keyword, value = hits) glimpse(trend_wide)
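For readers working in Python rather than R, the same long-to-wide reshape that `spread()` performs can be sketched with plain dictionaries (the hits values below are made up for illustration):

```python
# Long format: one row per (date, keyword) pair (hits values invented)
long_rows = [
    {"date": "2019-01-06", "keyword": "statistics", "hits": 81},
    {"date": "2019-01-06", "keyword": "pugs", "hits": 28},
    {"date": "2019-01-13", "keyword": "statistics", "hits": 90},
    {"date": "2019-01-13", "keyword": "pugs", "hits": 31},
]

# Wide format: one row per date, one column per keyword
wide = {}
for row in long_rows:
    wide.setdefault(row["date"], {})[row["keyword"]] = row["hits"]
```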
_____no_output_____
BSD-3-Clause
Correlations.ipynb
adriannebradford/INST_314_examples
Let's look at a scatterplot.
trend_wide %>% ggplot (aes(x = statistics, y = pugs)) + geom_point() + geom_smooth(method=lm, se=FALSE)
_____no_output_____
BSD-3-Clause
Correlations.ipynb
adriannebradford/INST_314_examples
And finally, let's calculate the correlation between these two variables. For this we will use the base R function cor(). The arguments for cor() are simple: just specify x and y (your two variables to compare).
cor(trend_wide$statistics, trend_wide$pugs) # correlation between hits for statistics and hits for pugs
_____no_output_____
BSD-3-Clause
Correlations.ipynb
adriannebradford/INST_314_examples
This correlation is not as low as we may have expected. It is a "weak" correlation, but is it significantly different from a zero correlation? For that we need to use cor.test() which performs the hypothesis test as well.
cor.test(trend_wide$statistics, trend_wide$pugs) ## correlation with CI and t-test
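What cor.test reports can be reproduced by hand: the Pearson r, and the t statistic it tests against zero. A plain-Python sketch with toy paired data (values invented, not the Google Trends hits):

```python
import math

# Toy paired observations (values invented)
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 1.5, 3.5, 3.0, 5.0]
n = len(x)

# Pearson r = covariance / (sd_x * sd_y), here via sums of deviations
mx, my = sum(x) / n, sum(y) / n
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
r = cov / math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

# cor.test compares this t statistic against a t distribution with n - 2 df
t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)
```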
_____no_output_____
BSD-3-Clause
Correlations.ipynb
adriannebradford/INST_314_examples
So our p-value is below 0.05, so the correlation is statistically significantly different from zero, and therefore there is at least some correlation. However, the confidence interval for the correlation is 0.09 to 0.36, so the true population correlation may be anywhere between 0.09 (very very low) and 0.36 (small/mediumish).

__NO...
cor(trend_wide$statistics, trend_wide$pugs)^2 ## calculate the correlation, and square it - ^2
_____no_output_____
BSD-3-Clause
Correlations.ipynb
adriannebradford/INST_314_examples
INFO 3402 – Week 14: Assignment - Solutions

[Brian C. Keegan, Ph.D.](http://brianckeegan.com/)
[Assistant Professor, Department of Information Science](https://www.colorado.edu/cmci/people/information-science/brian-c-keegan)
University of Colorado Boulder

Copyright and distributed under an [MIT License](https://open...
import pandas as pd import geopandas as gpd import numpy as np %matplotlib inline import matplotlib.pyplot as plt
_____no_output_____
MIT
Week 14 - Choropleths/Week 14 - Assignment.ipynb
cuinfoscience/INFO3402-Spring2022
Dependencies and Data Loading
import os import sys import time import numpy as np import random import pandas as pd import scipy.io as sio from collections import Counter from itertools import product from scipy.io import loadmat import tensorflow as tf from keras.utils import np_utils from tensorflow.keras import optimizers,backend from keras.mode...
_____no_output_____
MIT
MLP_S_3_compart.ipynb
ryanayf/KidNeYronal
Preprocessing
#train-val split X_train, X_val, y_train, y_val = train_test_split(input_spectra, labels, test_size=0.15, random_state=13) X_train = np.array(X_train).astype('float32') X_train = X_train.reshape(X_train.shape + (1,)) X_val = np.array(X_val).astype('float32') X_val = X_val.reshape(X_val.shape + (1,)) X_test = np.array...
Total of 8500 training samples. Total of 1500 validation samples.
MIT
MLP_S_3_compart.ipynb
ryanayf/KidNeYronal
Model Architecture
def mapping_to_target_range( x, target_min=6.32, target_max=7.44) : x02 = backend.tanh(x) + 1 # x in range(0,2) scale = ( target_max-target_min )/2. return x02 * scale + target_min # default random initialization for weights backend.clear_session() model = Sequential() activation = 'relu' model.add(Dense(...
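The custom output activation above maps any real input into the target range (6.32, 7.44). Replacing `backend.tanh` with `math.tanh` gives a quick standalone check of the bounds (a sketch for verification only, not the Keras code):

```python
import math

def mapping_to_target_range(x, target_min=6.32, target_max=7.44):
    x02 = math.tanh(x) + 1            # tanh is in (-1, 1), so x02 is in (0, 2)
    scale = (target_max - target_min) / 2.0
    return x02 * scale + target_min

low = mapping_to_target_range(-20.0)   # approaches the lower bound 6.32
mid = mapping_to_target_range(0.0)     # exact midpoint of the range
high = mapping_to_target_range(20.0)   # approaches the upper bound 7.44
```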
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense (Dense) (None, 1024, 16) 32 ...
MIT
MLP_S_3_compart.ipynb
ryanayf/KidNeYronal
Training
#Params epochs = 400 batch_size = 200 best_model_file = '/content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5' start = time.time() best_model = ModelCheckpoint(best_model_file, monitor='loss', verbose = 1, save_best_only=True, save_weights_only=False) hist = model.fit(X_train, y_train, ...
_____no_output_____
MIT
MLP_S_3_compart.ipynb
ryanayf/KidNeYronal
Testing Model Accuracy
predict = model.predict(X_test) dis_index = 0 #index for the test data def plot_predict_gt(predict_aug,y_test_aug): predict_output = pd.DataFrame({'predict':predict_aug,'ground truth':y_test_aug}) predict_output = pd.DataFrame.transpose(predict_output) print('test: ',dis_index+1) print(predict_output) plt...
test: 1 0 1 2 predict 7.386837 7.048204 6.755214 ground truth 7.429000 7.146000 6.739000 test: 2 0 1 2 predict 7.340264 7.027956 6.835915 ground truth 7.363000 7.103000 6.414000 test: 3 0 ...
MIT
MLP_S_3_compart.ipynb
ryanayf/KidNeYronal
Testing Model Accuracy - Synthetic data
predict_aug = model.predict(X_test_aug) def print_predict_gt(predict_aug,y_test_aug): predict_output = pd.DataFrame({'predict':predict_aug,'ground truth':y_test_aug}) predict_output = pd.DataFrame.transpose(predict_output) print('test: ',dis_index+1) print(predict_output) dis_index = 0 for idx in range(len(pred...
test: 1 0 1 2 predict 7.384405 7.057750 6.500827 ground truth 7.366038 7.071987 6.522446 test: 2 0 1 2 predict 7.38638 7.069021 6.540631 ground truth 7.44000 7.006862 6.779162 test: 3 0 1 ...
MIT
MLP_S_3_compart.ipynb
ryanayf/KidNeYronal
I wrote a post in 2016 on text mining the bible with R. [That post](https://emelineliu.com/2016/01/10/bible1/) remains the most popular post on this site according to Google Analytics.I still use R occasionally at work, butttt I work more on Java and my heart will probably eternally belong to Python.(List comprehension...
import re import requests import seaborn as sb import matplotlib.pyplot as plt from collections import Counter from nltk.corpus import stopwords # These stopwords are commonly used words in the English language STOPWORDS = {re.sub(r'[^\w\s]', '', word) for word in stopwords.words('english')} # Adding custom words STO...
_____no_output_____
MIT
assets/python/top_words.ipynb
emelinemeline/emelinemeline.github.io
Project Gutenberg has quite a few books available with the option to access them as text files on [their website](http://www.gutenberg.org/wiki/Main_Page). The last time I did this text analysis, I downloaded a variety of text files. This time, I'm using the call `requests.get(url)` to retrieve the full text file from ...
def get_text(url): text = requests.get(url).text # trim everything before this line clean = text.split("*** START OF THIS PROJECT GUTENBERG EBOOK")[1] # trim everything after this line clean = clean.split("*** END OF THIS PROJECT GUTENBERG EBOOK")[0] # split the text into lines and remove extran...
_____no_output_____
MIT
assets/python/top_words.ipynb
emelinemeline/emelinemeline.github.io
Looks like "shall" is the most common word. I guess it makes sense that the bible would be quite prescriptive in its language...Well that's cool and all, but let's make a picture too. The snippet below uses the Seaborn plotting library to create a barplot from that counter. I see seaborn is most commonly imported as `s...
top_words = counter.most_common(top_n) word_label = [word for (word, count) in top_words] count_words = [count for (word, count) in top_words] ax = sb.barplot(x=word_label, y=count_words, palette=sb.color_palette("cubehelix", 8)) ax.set_title(title) ax.set(ylabel='Count of words') ax.set_xticklabels(ax.get_xticklabels(...
_____no_output_____
MIT
assets/python/top_words.ipynb
emelinemeline/emelinemeline.github.io
What about the least common words?

To retrieve the least common words, this call gets the entire ordered list of (item, occurrence) pairs and then retrieves the `n` last terms:
n = 20 bottom_words = counter.most_common()[:-n-1:-1] bottom_words
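The `[:-n-1:-1]` slice simply takes the last `n` pairs of the descending list in reversed (ascending) order. A toy Counter makes this concrete (word counts invented):

```python
from collections import Counter

# Toy word counts (invented)
counter = Counter({"shall": 5, "unto": 4, "lord": 3, "sardonyx": 2, "sard": 1})

n = 2
# most_common() with no argument returns ALL pairs, most frequent first;
# the [:-n-1:-1] slice walks backwards over the last n of them
bottom = counter.most_common()[:-n-1:-1]
print(bottom)
```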
_____no_output_____
MIT
assets/python/top_words.ipynb
emelinemeline/emelinemeline.github.io
Fascinating, apparently sardonyx is a type of onyx that has layers of sard. What a straightforward name.Then I figured I might look at some other text files on Project Gutenberg. I picked Ion, Frankenstein, and Little Women for a little further investigation. I shoved those into a dictionary so I could iterate to gener...
def create_barplot(title, counter): word_label = [word for (word, count) in counter] count_words = [count for (word, count) in counter] ax = sb.barplot(x=word_label, y=count_words, palette=sb.color_palette("cubehelix", 8)) ax.set_title(title) ax.set(ylabel='Count of words') ax.set_xticklabels(ax...
_____no_output_____
MIT
assets/python/top_words.ipynb
emelinemeline/emelinemeline.github.io
**Curso Intro Python** Module 7: Control structures

**Problem 1:** Using while loops

Create an application that asks the user to enter a list of planets.
# Step 1: Declare variables for the user input and the planets list.
new_planet = ""
planets = []  # Empty list

# Step 2: Create the while loop
while (new_planet != "done"):
    if new_planet:  # Checking that new_planet has a value ...
Los valores ingresados a la lista son: ['Marte', 'Tierra', 'Venus']
MIT
Module_7/Module_7_kata.ipynb
CristopherA96/LaunchX_IntroPython_w1
**Problem 2:** Create a for loop for the list

Now, display the list of values entered by the user.
# Step 1: Create a for loop to traverse each element of the list and print it
# to the screen. enumerate gives the position directly; list.index would
# misreport the position of duplicate planets.
for position, i_planet in enumerate(planets):
    print("El elemento ", i_planet, "de la lista está en la posición", position)
El elemento Marte de la lista está en la posición 0 El elemento Tierra de la lista está en la posición 1 El elemento Venus de la lista está en la posición 2
MIT
Module_7/Module_7_kata.ipynb
CristopherA96/LaunchX_IntroPython_w1
Quick Check for Setup
import tensorflow as tf
import os

print("If this cell fails please check your Open OnDemand Setup",
      "Make sure you requested enough resources and the correct environment")
assert(tf.__version__=="2.6.0")
assert(int(os.environ["SLURM_MEM_PER_NODE"]) > 8000 )
assert(int(os.environ["SLURM_CPUS_ON_NODE"]) >= 2 )
pr...
_____no_output_____
BSD-3-Clause
notebooks/4-ImageData.ipynb
jasonsydes/DNNWS_2022