| column | dtype | string lengths |
|---|---|---|
| markdown | string | 0 – 1.02M |
| code | string | 0 – 832k |
| output | string | 0 – 1.02M |
| license | string | 3 – 36 |
| path | string | 6 – 265 |
| repo_name | string | 6 – 127 |
Notice : new in version 2.6 FPR_Macro For more information visit [[3]](ref3). $$FPR_{Macro}=\frac{1}{|C|}\sum_{i=1}^{|C|}\frac{FP_i}{TN_i+FP_i}$$
cm.FPR_Macro
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
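The macro-averaged FPR can be sketched in plain Python directly from the formula, given per-class FP/TN counts (the counts and the helper `fpr_macro` below are hypothetical illustrations, not pycm internals):

```python
def fpr_macro(fp, tn):
    """Mean of the per-class rates FP_i / (TN_i + FP_i) over all classes."""
    return sum(fp[c] / (tn[c] + fp[c]) for c in fp) / len(fp)

# hypothetical per-class counts for a two-class problem
fp = {"L1": 1, "L2": 3}
tn = {"L1": 9, "L2": 7}
print(fpr_macro(fp, tn))  # (0.1 + 0.3) / 2 = 0.2
```

FNR_Macro below follows the same averaging pattern with $FN_i/(TP_i+FN_i)$ per class.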
Notice : new in version 2.6 FNR_Macro For more information visit [[3]](ref3). $$FNR_{Macro}=\frac{1}{|C|}\sum_{i=1}^{|C|}\frac{FN_i}{TP_i+FN_i}$$
cm.FNR_Macro
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Notice : new in version 2.6 F1_Macro For more information visit [[3]](ref3). $$F_{1_{Macro}}=\frac{2}{|C|}\sum_{i=1}^{|C|}\frac{TPR_i\times PPV_i}{TPR_i+PPV_i}$$
cm.F1_Macro
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
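F1_Macro above reduces to the unweighted mean of per-class F1 scores, since each term $2\cdot TPR_i\cdot PPV_i/(TPR_i+PPV_i)$ is the harmonic mean of recall and precision. A minimal sketch with hypothetical per-class TPR/PPV values:

```python
def f1_macro(tpr, ppv):
    """2/|C| times the sum of TPR_i * PPV_i / (TPR_i + PPV_i)."""
    return 2 / len(tpr) * sum(tpr[c] * ppv[c] / (tpr[c] + ppv[c]) for c in tpr)

# hypothetical per-class recall and precision
tpr = {"L1": 0.6, "L2": 0.5}
ppv = {"L1": 1.0, "L2": 0.5}
print(f1_macro(tpr, ppv))  # mean of per-class F1 = (0.75 + 0.5) / 2 = 0.625
```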
Notice : new in version 2.2 ACC_Macro For more information visit [[3]](ref3). $$ACC_{Macro}=\frac{1}{|C|}\sum_{i=1}^{|C|}{ACC_i}$$
cm.ACC_Macro
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Notice : new in version 2.2 Overall_J For more information visit [[29]](ref29). $$J_{Mean}=\frac{1}{|C|}\sum_{i=1}^{|C|}J_i$$ $$J_{Sum}=\sum_{i=1}^{|C|}J_i$$ $$J_{Overall}=(J_{Sum},J_{Mean})$$
cm.Overall_J
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Notice : new in version 0.9 Hamming loss The average Hamming loss or Hamming distance between two sets of samples [[31]](ref31). $$L_{Hamming}=\frac{1}{POP}\sum_{i=1}^{POP}1(y_i \neq \widehat{y}_i)$$
cm.HammingLoss
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Notice : new in version 1.0 Zero-one loss Zero-one loss is a common loss function used with classification learning. It assigns $ 0 $ to loss for a correct classification and $ 1 $ for an incorrect classification [[31]](ref31). $$L_{0-1}=\sum_{i=1}^{POP}1(y_i \neq \widehat{y}_i)$$
cm.ZeroOneLoss
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
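Both losses can be sketched directly from their definitions: zero-one loss is the raw count of disagreements, and Hamming loss is that count divided by the population size (the label vectors here are illustrative):

```python
def hamming_loss(y_actu, y_pred):
    """(1/POP) * sum of 1(y_i != yhat_i): average disagreement rate."""
    return sum(a != p for a, p in zip(y_actu, y_pred)) / len(y_actu)

def zero_one_loss(y_actu, y_pred):
    """sum of 1(y_i != yhat_i): raw count of misclassifications."""
    return sum(a != p for a, p in zip(y_actu, y_pred))

y_actu = [2, 0, 2, 2, 0, 1]
y_pred = [0, 0, 2, 1, 0, 2]
print(hamming_loss(y_actu, y_pred))  # 3/6 = 0.5
print(zero_one_loss(y_actu, y_pred))  # 3
```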
Notice : new in version 1.1 NIR (No information rate) Largest class percentage in the data [[57]](ref57). $$NIR=\frac{1}{POP}Max(P)$$
cm.NIR
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
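A minimal sketch of NIR, assuming a plain list of actual labels (the label list is hypothetical):

```python
from collections import Counter

def nir(y_actu):
    """No information rate: share of the largest class, Max(P) / POP."""
    return max(Counter(y_actu).values()) / len(y_actu)

print(nir(["L1", "L1", "L1", "L2", "L3", "L3"]))  # 3/6 = 0.5
```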
Notice : new in version 1.2 P-Value In statistical hypothesis testing, the p-value or probability value is, for a given statistical model, the probability that, when the null hypothesis is true, the statistical summary (such as the absolute value of the sample mean difference between two compared groups) would ...
cm.PValue
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Notice : new in version 1.2 Overall_CEN For more information visit [[17]](ref17). $$P_j=\frac{\sum_{k=1}^{|C|}\Big(Matrix(j,k)+Matrix(k,j)\Big)}{2\sum_{k,l=1}^{|C|}Matrix(k,l)}$$ $$CEN_{Overall}=\sum_{j=1}^{|C|}P_jCEN_j$$
cm.Overall_CEN
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Notice : new in version 1.3 Overall_MCEN For more information visit [[19]](ref19). $$\alpha=\begin{cases}1 & |C| > 2\\0 & |C| = 2\end{cases}$$ $$P_j=\frac{\sum_{k=1}^{|C|}\Big(Matrix(j,k)+Matrix(k,j)\Big)-Matrix(j,j)}{2\sum_{k,l=1}^{|C|}Matrix(k,l)-\alpha \sum_{k=1}^{|C|}Matrix(k,k)}$$ $$MCEN_{Overall}=\sum_{j=...
cm.Overall_MCEN
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Notice : new in version 1.3 Overall_MCC For more information visit [[20]](ref20) [[27]](ref27).Benchmark $$MCC_{Overall}=\frac{cov(X,Y)}{\sqrt{cov(X,X)\times cov(Y,Y)}}$$ $$cov(X,Y)=\sum_{i,j,k=1}^{|C|}\Big(Matrix(i,i)Matrix(k,j)-Matrix(j,i)Matrix(i,k)\Big)$$ $$cov(X,X) = \sum_{i=1}^{|C|}\Bigg[\Big(\sum_{j=1}^{...
cm.Overall_MCC
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Notice : new in version 1.4 RR (Global performance index) For more information visit [[21]](ref21). $$RR=\frac{1}{|C|}\sum_{i,j=1}^{|C|}Matrix(i,j)$$
cm.RR
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Notice : new in version 1.4 CBA (Class balance accuracy) As an evaluation tool, CBA creates an overall assessment of model predictive power by scrutinizing measures simultaneously across each class in a conservative manner that guarantees that a model's ability to recall observations from each class and its abili...
cm.CBA
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Notice : new in version 1.4 AUNU When dealing with multiclass problems, a global measure of classification performances based on the ROC approach (AUNU) has been proposed as the average of single-class measures [[23]](ref23). $$AUNU=\frac{\sum_{i=1}^{|C|}AUC_i}{|C|}$$
cm.AUNU
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Notice : new in version 1.4 AUNP Another option (AUNP) is that of averaging the $ AUC_i $ values with weights proportional to the number of samples experimentally belonging to each class, that is, the a priori class distribution [[23]](ref23). $$AUNP=\sum_{i=1}^{|C|}\frac{P_i}{POP}AUC_i$$
cm.AUNP
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
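The prior-weighted averaging can be sketched as follows, with hypothetical per-class AUCs and class sizes (weights are $P_i/POP$, the empirical class frequencies):

```python
def aunp(auc, counts):
    """AUC_i averaged with a-priori class weights P_i / POP."""
    pop = sum(counts.values())
    return sum(counts[c] / pop * auc[c] for c in auc)

# hypothetical per-class AUCs and class sizes
auc = {"L1": 0.9, "L2": 0.7}
counts = {"L1": 30, "L2": 10}
print(aunp(auc, counts))  # 0.75 * 0.9 + 0.25 * 0.7 = 0.85
```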
Notice : new in version 1.4 RCI (Relative classifier information) Performance of different classifiers on the same domain can be measured by comparing relative classifier information, while classifier information (mutual information) can be used for comparison across different decision problems [[32]](ref32) [[22...
cm.RCI
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Notice : new in version 1.5 Pearson's C The contingency coefficient is a coefficient of association that tells whether two variables or data sets are independent or dependent of/on each other. It is also known as Pearson’s coefficient (not to be confused with Pearson’s coefficient of skewness) [[43]](ref43) [[4...
cm.C
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Notice : new in version 2.0 CSI (Classification success index) The Classification Success Index (CSI) is an overall measure defined by averaging ICSI over all classes [[58]](ref58). $$CSI=\frac{1}{|C|}\sum_{i=1}^{|C|}{ICSI_i}$$
cm.CSI
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Notice : new in version 2.5 ARI (Adjusted Rand index) The Rand index or Rand measure (named after William M. Rand) in statistics, and in particular in data clustering, is a measure of the similarity between two data clusterings. A form of the Rand index may be defined that is adjusted for the chance grouping of...
cm.ARI
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Notice : $ C_{r}^{n} $ is the number of combinations of $ n $ objects taken $ r $ at a time Notice : new in version 2.6 Print Full
print(cm)
Predict   L1  L2  L3
Actual
L1        3   0   2
L2        0   1   1
L3        0   2   3

Overall Statistics :

95% CI    (0.30439,0.86228)
ACC Macro ...
MIT
Document/Document.ipynb
GeetDsa/pycm
Matrix
cm.print_matrix()
cm.matrix
cm.print_matrix(one_vs_all=True, class_name="L1")
sparse_cm = ConfusionMatrix(matrix={1: {1: 0, 2: 2}, 2: {1: 0, 2: 18}})
sparse_cm.print_matrix(sparse=True)
Predict  2
Actual
1        2
2        18
MIT
Document/Document.ipynb
GeetDsa/pycm
Parameters
1. `one_vs_all` : One-Vs-All mode flag (type : `bool`, default : `False`)
2. `class_name` : target class name for One-Vs-All mode (type : `any valid type`, default : `None`)
3. `sparse` : sparse mode printing flag (type : `bool`, default : `False`)

Notice : `one_vs_all` option, new in version 1.4 ...
cm.print_normalized_matrix()
cm.normalized_matrix
cm.print_normalized_matrix(one_vs_all=True, class_name="L1")
sparse_cm.print_normalized_matrix(sparse=True)
Predict  2
Actual
1        1.0
2        1.0
MIT
Document/Document.ipynb
GeetDsa/pycm
Parameters
1. `one_vs_all` : One-Vs-All mode flag (type : `bool`, default : `False`)
2. `class_name` : target class name for One-Vs-All mode (type : `any valid type`, default : `None`)
3. `sparse` : sparse mode printing flag (type : `bool`, default : `False`)

Notice : `one_vs_all` option, new in version 1.4 ...
cm.stat()
cm.stat(overall_param=["Kappa"], class_param=["ACC", "AUC", "TPR"])
cm.stat(overall_param=["Kappa"], class_param=["ACC", "AUC", "TPR"], class_name=["L1", "L3"])
cm.stat(summary=True)
Overall Statistics :

ACC Macro   0.72222
F1 Macro    0.56515
FPR Macro   0.20952
Kappa       0.35484
O...
MIT
Document/Document.ipynb
GeetDsa/pycm
Parameters
1. `overall_param` : overall statistics names for print (type : `list`, default : `None`)
2. `class_param` : class statistics names for print (type : `list`, default : `None`)
3. `class_name` : class names for print (subset of classes) (type : `list`, default : `None`)
4. `summary` : summary mode flag (type ...
cp.print_report()
print(cp)
Best : cm2

Rank  Name  Class-Score  Overall-Score
1     cm2   9.05         2.55
2     cm3   6.05         1.98333
MIT
Document/Document.ipynb
GeetDsa/pycm
Save
import os

if "Document_Files" not in os.listdir():
    os.mkdir("Document_Files")
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
.pycm file
cm.save_stat(os.path.join("Document_Files","cm1"))
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Open File
cm.save_stat(os.path.join("Document_Files","cm1_filtered"),overall_param=["Kappa"],class_param=["ACC","AUC","TPR"])
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Open File
cm.save_stat(os.path.join("Document_Files","cm1_filtered2"),overall_param=["Kappa"],class_param=["ACC","AUC","TPR"],class_name=["L1"])
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Open File
cm.save_stat(os.path.join("Document_Files","cm1_summary"),summary=True)
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Open File
sparse_cm.save_stat(os.path.join("Document_Files","sparse_cm"),summary=True,sparse=True)
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Open File
cm.save_stat("cm1asdasd/")
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Parameters
1. `name` : output file name (type : `str`)
2. `address` : flag for address return (type : `bool`, default : `True`)
3. `overall_param` : overall statistics names for save (type : `list`, default : `None`)
4. `class_param` : class statistics names for save (type : `list`, default : `None`)
5. `class_name` : cl...
cm.save_html(os.path.join("Document_Files","cm1"))
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Open File
cm.save_html(os.path.join("Document_Files","cm1_filtered"),overall_param=["Kappa"],class_param=["ACC","AUC","TPR"])
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Open File
cm.save_html(os.path.join("Document_Files","cm1_filtered2"),overall_param=["Kappa"],class_param=["ACC","AUC","TPR"],class_name=["L1"])
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Open File
cm.save_html(os.path.join("Document_Files","cm1_colored"),color=(255, 204, 255))
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Open File
cm.save_html(os.path.join("Document_Files","cm1_colored2"),color="Crimson")
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Open File
cm.save_html(os.path.join("Document_Files","cm1_normalized"),color="Crimson",normalize=True)
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Open File
cm.save_html(os.path.join("Document_Files","cm1_summary"),summary=True,normalize=True)
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Open File
cm.save_html("cm1asdasd/")
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Parameters
1. `name` : output file name (type : `str`)
2. `address` : flag for address return (type : `bool`, default : `True`)
3. `overall_param` : overall statistics names for save (type : `list`, default : `None`)
4. `class_param` : class statistics names for save (type : `list`, default : `None`)
5. `class_name` : cl...
cm.save_csv(os.path.join("Document_Files","cm1"))
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Open Stat File / Open Matrix File
cm.save_csv(os.path.join("Document_Files","cm1_filtered"),class_param=["ACC","AUC","TPR"])
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Open Stat File / Open Matrix File
cm.save_csv(os.path.join("Document_Files","cm1_filtered2"),class_param=["ACC","AUC","TPR"],normalize=True)
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Open Stat File / Open Matrix File
cm.save_csv(os.path.join("Document_Files","cm1_filtered3"),class_param=["ACC","AUC","TPR"],class_name=["L1"])
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Open Stat File / Open Matrix File
cm.save_csv(os.path.join("Document_Files","cm1_header"),header=True)
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Open Stat File / Open Matrix File
cm.save_csv(os.path.join("Document_Files","cm1_summary"),summary=True,matrix_save=False)
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Open Stat File
cm.save_csv("cm1asdasd/")
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Parameters
1. `name` : output file name (type : `str`)
2. `address` : flag for address return (type : `bool`, default : `True`)
3. `class_param` : class statistics names for save (type : `list`, default : `None`)
4. `class_name` : class names for print (subset of classes) (type : `list`, default : `None`)
5. `matrix_sav...
cm.save_obj(os.path.join("Document_Files","cm1"))
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Open File
cm.save_obj(os.path.join("Document_Files","cm1_stat"),save_stat=True)
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Open File
cm.save_obj(os.path.join("Document_Files","cm1_no_vectors"),save_vector=False)
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Open File
cm.save_obj("cm1asdasd/")
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Parameters
1. `name` : output file name (type : `str`)
2. `address` : flag for address return (type : `bool`, default : `True`)
3. `save_stat` : save statistics flag (type : `bool`, default : `False`)
4. `save_vector` : save vectors flag (type : `bool`, default : `True`)

Notice : new in version 0.9.5 Notice ...
cp.save_report(os.path.join("Document_Files","cp"))
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Open File
cp.save_report("cm1asdasd/")
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Parameters
1. `name` : output file name (type : `str`)
2. `address` : flag for address return (type : `bool`, default : `True`)

Notice : new in version 2.0

Input errors
try:
    cm2 = ConfusionMatrix(y_actu, 2)
except pycmVectorError as e:
    print(str(e))
try:
    cm3 = ConfusionMatrix(y_actu, [1, 2, 3])
except pycmVectorError as e:
    print(str(e))
try:
    cm_4 = ConfusionMatrix([], [])
except pycmVectorError as e:
    print(str(e))
try:
    cm_5 = ConfusionMatrix([1, 1, 1, ], [1, 1, 1, 1])
    ...
The input type is considered to be string but it's not!
MIT
Document/Document.ipynb
GeetDsa/pycm
delta M Vs delta Lambda
fig, ax = plt.subplots()
dts = [1, 3, 5, 10]
alldm = [[]] * len(dts)
alldl = [[]] * len(dts)
allmavg = [[]] * len(dts)
alllavg = [[]] * len(dts)
for i, dt in enumerate(dts):
    dm = []
    dl = []
    mavg = []
    lavg = []
    for gal in mpgs:
        ind = gal.data['lambda_r'] > 0
        mstar = gal.data['mstar'][ind]
        sba...
_____no_output_____
MIT
scripts/scripts_ipynb/0422_figs.ipynb
Hoseung/pyRamAn
Mstar, Lambda all
fig, ax = plt.subplots(2)
# * (gal.data['mstar'] > 1e11)
for gal in mpgs:
    ind = np.where(gal.data['lambda_r'] > 0)[0]
    mstar_ini = np.average(gal.data['mstar'][ind[-5:]])
    l_ini = np.average(gal.data['lambda_r'][ind[-5:]])
    ax[0].plot(gal.nouts[ind], np.log10(gal.data['mstar'][ind] / mstar_ini), c="grey", al...
241740 241535 241448 241363 241338 241306 241258 241228 241202 241176 241140 241565 241482 241403 241347 241312 241269 241240 241216 241183 241157 241581 241498 241411 241351 241318 241294 241243 241219 241191 241162 241685 241534 241446 241361 241337 241303 241255 241227 241200 241175 241646 241506 241418 241352 24132...
MIT
scripts/scripts_ipynb/0422_figs.ipynb
Hoseung/pyRamAn
Merger epoch with Mstar vs lambda
import numpy as np
import pickle

import tree.ctutils as ctu
from tree import treeutils

# Calculate merger event parameters
def find_merger(atree, idx=None, aexp_min=0.0):
    """
    find indices of merger event from a tree.
    (Full tree or main progenitor trunk)
    """
    if idx is None:
        idx = atr...
_____no_output_____
MIT
scripts/scripts_ipynb/0422_figs.ipynb
Hoseung/pyRamAn
merger epochs
# multi page PDF
from matplotlib.backends.backend_pdf import PdfPages

fig, ax = plt.subplots(2, sharex=True)
plt.subplots_adjust(hspace=0.001)
with PdfPages('multipage_pdf.pdf') as pdf:
    for i in inds[0:3]:
        gal = mpgs[i]
        ax[0].scatter(gal.nouts, np.log10(gal.data['mstar']))
        ax[0].set_xlim([5...
_____no_output_____
MIT
scripts/scripts_ipynb/0422_figs.ipynb
Hoseung/pyRamAn
1. Pick one galaxy, mark all of its mergers, and check whether minor or major mergers contribute more to the total delta lambda. 2. For all galaxies, check whether the epochs with many major mergers coincide with the epochs of large delta lambda. 3. Start adding the cluster calculation (also add the BCG fixed at 10 kpc, 15 kpc, etc.). 4. The tree may change later.
inds[0]
#%%
import matplotlib.pyplot as plt

# plot each galaxy.
# stellar mass growth and lambda_r as a function of time.
# The exponent (also called "offset") in the figure (1e11)
# overlaps with lookback time tick labels.
# And moving the offset around is not easy.
# So, manually divide the values.
# compile m...
_____no_output_____
MIT
scripts/scripts_ipynb/0422_figs.ipynb
Hoseung/pyRamAn
Character level language model - Dinosaurus IslandWelcome to Dinosaurus Island! 65 million years ago, dinosaurs existed, and in this assignment they are back. You are in charge of a special task. Leading biology researchers are creating new breeds of dinosaurs and bringing them to life on earth, and your job is to giv...
import numpy as np from utils import * import random import pprint
_____no_output_____
MIT
Course5/Week1/Dinosaurus_Island_Character_level_language_model_final_v3a.ipynb
pranavkantgaur/CourseraDLSpecialization
1 - Problem Statement 1.1 - Dataset and PreprocessingRun the following cell to read the dataset of dinosaur names, create a list of unique characters (such as a-z), and compute the dataset and vocabulary size.
data = open('dinos.txt', 'r').read()
data = data.lower()
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print('There are %d total characters and %d unique characters in your data.' % (data_size, vocab_size))
There are 19909 total characters and 27 unique characters in your data.
MIT
Course5/Week1/Dinosaurus_Island_Character_level_language_model_final_v3a.ipynb
pranavkantgaur/CourseraDLSpecialization
* The characters are a-z (26 characters) plus the "\n" (or newline character).* In this assignment, the newline character "\n" plays a role similar to the `<EOS>` (or "End of sentence") token we had discussed in lecture. - Here, "\n" indicates the end of the dinosaur name rather than the end of a sentence. * `char_to_i...
chars = sorted(chars)
print(chars)
char_to_ix = {ch: i for i, ch in enumerate(chars)}
ix_to_char = {i: ch for i, ch in enumerate(chars)}
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(ix_to_char)
{ 0: '\n', 1: 'a', 2: 'b', 3: 'c', 4: 'd', 5: 'e', 6: 'f', 7: 'g', 8: 'h', 9: 'i', 10: 'j', 11: 'k', 12: 'l', 13: 'm', 14: 'n', 15: 'o', 16: 'p', 17: 'q', 18: 'r', 19: 's', 20: 't', 21: 'u', 22: 'v', 23: 'w', 24: 'x', ...
MIT
Course5/Week1/Dinosaurus_Island_Character_level_language_model_final_v3a.ipynb
pranavkantgaur/CourseraDLSpecialization
1.2 - Overview of the modelYour model will have the following structure: - Initialize parameters - Run the optimization loop - Forward propagation to compute the loss function - Backward propagation to compute the gradients with respect to the loss function - Clip the gradients to avoid exploding gradients ...
### GRADED FUNCTION: clip

def clip(gradients, maxValue):
    '''
    Clips the gradients' values between minimum and maximum.

    Arguments:
    gradients -- a dictionary containing the gradients "dWaa", "dWax", "dWya", "db", "dby"
    maxValue -- everything above this number is set to this number, and everything...
gradients["dWaa"][1][2] = 10.0
gradients["dWax"][3][1] = -10.0
gradients["dWya"][1][2] = 0.29713815361
gradients["db"][4] = [ 10.]
gradients["dby"][1] = [ 8.45833407]
MIT
Course5/Week1/Dinosaurus_Island_Character_level_language_model_final_v3a.ipynb
pranavkantgaur/CourseraDLSpecialization
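The clipping step described above can be sketched with `np.clip` operating in place on each gradient array (the key names follow the notebook's gradient dictionary; this is an illustration of the technique, not the graded solution):

```python
import numpy as np

def clip(gradients, maxValue):
    """Clip each gradient array element-wise into [-maxValue, maxValue], in place."""
    for key in ("dWaa", "dWax", "dWya", "db", "dby"):
        np.clip(gradients[key], -maxValue, maxValue, out=gradients[key])
    return gradients

# hypothetical gradient values, some outside [-5, 5]
grads = {"dWaa": np.array([[10.0, -3.0]]), "dWax": np.array([[-10.0]]),
         "dWya": np.array([[0.3]]), "db": np.array([[12.0]]),
         "dby": np.array([[-7.0]])}
clipped = clip(grads, 5)
print(clipped["dWaa"])  # [[ 5. -3.]]
```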
**Expected output:**
```Python
gradients["dWaa"][1][2] = 10.0
gradients["dWax"][3][1] = -10.0
gradients["dWya"][1][2] = 0.29713815361
gradients["db"][4] = [ 10.]
gradients["dby"][1] = [ 8.45833407]
```
# Test with a maxValue of 5
maxValue = 5
np.random.seed(3)
dWax = np.random.randn(5, 3) * 10
dWaa = np.random.randn(5, 5) * 10
dWya = np.random.randn(2, 5) * 10
db = np.random.randn(5, 1) * 10
dby = np.random.randn(2, 1) * 10
gradients = {"dWax": dWax, "dWaa": dWaa, "dWya": dWya, "db": db, "dby": dby}
gradients = clip(gradients, maxV...
gradients["dWaa"][1][2] = 5.0
gradients["dWax"][3][1] = -5.0
gradients["dWya"][1][2] = 0.29713815361
gradients["db"][4] = [ 5.]
gradients["dby"][1] = [ 5.]
MIT
Course5/Week1/Dinosaurus_Island_Character_level_language_model_final_v3a.ipynb
pranavkantgaur/CourseraDLSpecialization
**Expected Output:**
```Python
gradients["dWaa"][1][2] = 5.0
gradients["dWax"][3][1] = -5.0
gradients["dWya"][1][2] = 0.29713815361
gradients["db"][4] = [ 5.]
gradients["dby"][1] = [ 5.]
```

2.2 - Sampling

Now assume that your model is trained. You would like to generate new text (characters). The process of generation is e...
import numpy as np

matrix1 = np.array([[1, 1], [2, 2], [3, 3]])  # (3,2)
matrix2 = np.array([[0], [0], [0]])  # (3,1)
vector1D = np.array([1, 1])  # (2,)
vector2D = np.array([[1], [1]])  # (2,1)
print("matrix1 \n", matrix1, "\n")
print("matrix2 \n", matrix2, "\n")
print("vector1D \n", vector1D, "\n")
print("vector2D \n", vector2D)
p...
Adding a (3,) vector to a (3 x 1) vector broadcasts the 1D array across the second dimension Not what we want here! [[2 4 6] [2 4 6] [2 4 6]]
MIT
Course5/Week1/Dinosaurus_Island_Character_level_language_model_final_v3a.ipynb
pranavkantgaur/CourseraDLSpecialization
- **Step 3**: Sampling: - Now that we have $y^{\langle t+1 \rangle}$, we want to select the next letter in the dinosaur name. If we select the most probable, the model will always generate the same result given a starting letter. - To make the results more interesting, we will use np.random.choice to select...
# GRADED FUNCTION: sample

def sample(parameters, char_to_ix, seed):
    """
    Sample a sequence of characters according to a sequence of probability distributions output of the RNN

    Arguments:
    parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b.
    char_to_ix -- python dictio...
Sampling: list of sampled indices: [12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 17, 24, 12, 13, 24, 0] list of sampled characters: ['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', '...
MIT
Course5/Week1/Dinosaurus_Island_Character_level_language_model_final_v3a.ipynb
pranavkantgaur/CourseraDLSpecialization
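The `np.random.choice` step in the sampling description can be sketched as follows (the uniform `probs` vector is a stand-in for the softmax output $y^{\langle t+1 \rangle}$ that a trained model would produce):

```python
import numpy as np

np.random.seed(0)
vocab_size = 27
# hypothetical distribution over the vocabulary; a real model would
# produce this with a softmax over the RNN output
probs = np.ones(vocab_size) / vocab_size
# draw the next character index according to the distribution
idx = np.random.choice(range(vocab_size), p=probs.ravel())
print(idx)
```

Sampling with `p=probs` rather than `np.argmax(probs)` is what keeps the generated names varied for a fixed starting letter.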
** Expected output:**```PythonSampling:list of sampled indices: [12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 17, 24, 12, 13, 24, 0]list of sampled characters: ['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', ...
# GRADED FUNCTION: optimize

def optimize(X, Y, a_prev, parameters, learning_rate=0.01):
    """
    Execute one step of the optimization to train the model.

    Arguments:
    X -- list of integers, where each integer is a number that maps to a character in the vocabulary.
    Y -- list of integers, exactly the...
Loss = 126.503975722
gradients["dWaa"][1][2] = 0.194709315347
np.argmax(gradients["dWax"]) = 93
gradients["dWya"][1][2] = -0.007773876032
gradients["db"][4] = [-0.06809825]
gradients["dby"][1] = [ 0.01538192]
a_last[4] = [-1.]
MIT
Course5/Week1/Dinosaurus_Island_Character_level_language_model_final_v3a.ipynb
pranavkantgaur/CourseraDLSpecialization
**Expected output:**
```Python
Loss = 126.503975722
gradients["dWaa"][1][2] = 0.194709315347
np.argmax(gradients["dWax"]) = 93
gradients["dWya"][1][2] = -0.007773876032
gradients["db"][4] = [-0.06809825]
gradients["dby"][1] = [ 0.01538192]
a_last[4] = [-1.]
```

3.2 - Training the model

* Given the dataset of dinosaur names, ...
# GRADED FUNCTION: model

def model(data, ix_to_char, char_to_ix, num_iterations=35000, n_a=50, dino_names=7, vocab_size=27):
    """
    Trains the model and generates dinosaur names.

    Arguments:
    data -- text corpus
    ix_to_char -- dictionary that maps the index to a character
    char_to_ix -- ...
_____no_output_____
MIT
Course5/Week1/Dinosaurus_Island_Character_level_language_model_final_v3a.ipynb
pranavkantgaur/CourseraDLSpecialization
Run the following cell. You should observe your model outputting random-looking characters at the first iteration. After a few thousand iterations, your model should learn to generate reasonable-looking names.
parameters = model(data, ix_to_char, char_to_ix)
Iteration: 0, Loss: 23.087336

Nkzxwtdmfqoeyhsqwasjkjvu
Kneb
Kzxwtdmfqoeyhsqwasjkjvu
Neb
Zxwtdmfqoeyhsqwasjkjvu
Eb
Xwtdmfqoeyhsqwasjkjvu

Iteration: 2000, Loss: 27.884160

Liusskeomnolxeros
Hmdaairus
Hytroligoraurus
Lecalosapaus
Xusicikoraurus
Abalpsamantisaurus
Tpraneronxeros

Iteration: 4000, Loss: 25.901815

Mivro...
MIT
Course5/Week1/Dinosaurus_Island_Character_level_language_model_final_v3a.ipynb
pranavkantgaur/CourseraDLSpecialization
**Expected Output**

The output of your model may look different, but it will look something like this:

```Python
Iteration: 34000, Loss: 22.447230

Onyxipaledisons
Kiabaeropa
Lussiamang
Pacaeptabalsaurus
Xosalong
Eiacoteg
Troia
```

Conclusion

You can see that your algorithm has started to generate plausible dinosaur names towards...
from __future__ import print_function

from keras.callbacks import LambdaCallback
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking
from keras.layers import LSTM
from keras.utils.data_utils import get_file
from keras.preprocessing.sequence import p...
Using TensorFlow backend.
MIT
Course5/Week1/Dinosaurus_Island_Character_level_language_model_final_v3a.ipynb
pranavkantgaur/CourseraDLSpecialization
To save you some time, we have already trained a model for ~1000 epochs on a collection of Shakespearian poems called [*"The Sonnets"*](shakespeare.txt). Let's train the model for one more epoch. When it finishes training for an epoch---this will also take a few minutes---you can run `generate_output`, which will prom...
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
model.fit(x, y, batch_size=128, epochs=1, callbacks=[print_callback])

# Run this cell to try with different inputs without having to re-train the model
generate_output()
_____no_output_____
MIT
Course5/Week1/Dinosaurus_Island_Character_level_language_model_final_v3a.ipynb
pranavkantgaur/CourseraDLSpecialization
Plot several randomly generated 2D classification datasets. This example illustrates the **datasets.make_classification**, **datasets.make_blobs**, and **datasets.make_gaussian_quantiles** functions. For `make_classification`, three binary and two multi-class classification datasets are generated, with different numbers of in...
import sklearn
sklearn.__version__
_____no_output_____
CC-BY-3.0
_posts/scikit/randomly-generated-classification-dataset/Plot-randomly-generated-classification-dataset.ipynb
bmb804/documentation
Imports This tutorial imports [make_classification](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_classification.html#sklearn.datasets.make_classification), [make_blobs](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_blobs.html#sklearn.datasets.make_blobs) and [make_gauss...
import plotly.plotly as py
import plotly.graph_objs as go
from plotly import tools

import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.datasets import make_blobs
from sklearn.datasets import make_gaussian_quantiles
_____no_output_____
CC-BY-3.0
_posts/scikit/randomly-generated-classification-dataset/Plot-randomly-generated-classification-dataset.ipynb
bmb804/documentation
Plot Dataset
fig = tools.make_subplots(rows=3, cols=2, print_grid=False,
                          subplot_titles=("One informative feature, one cluster per class",
                                          "Two informative features, one cluster per class",
                                          "Two in...
_____no_output_____
CC-BY-3.0
_posts/scikit/randomly-generated-classification-dataset/Plot-randomly-generated-classification-dataset.ipynb
bmb804/documentation
Deep Gravity
big_res_df1 = pd.DataFrame(columns=['cpc', 'nrmse', 'nmae', 'smape'])
OD = np.load('./data/3d_daily.npy').sum(axis=2)[:48, :48]
OD_max = OD.max(axis=1).reshape(-1, 1)
OD_max_pred = OD_max[-14:]
labels = np.load('./res/dg and g/labels.npy')[-14*48:]
labels = (labels.reshape(14, 48) * OD_max_pred).reshape(-1, 1)
for i ...
The mae loss is 32711.9884
The mape loss is 4.7522
The smape loss is 0.8279
The nrmse loss is 0.0875
The nmae loss is 0.0458
The CPC is 0.7160
MIT
Result_Analysis_dg_g.ipynb
HaTT2018/Deep_Gravity
Gravity
g_pred_df = pd.read_csv('./res/dg and g/g_pred.csv', index_col=0)
g_pred = g_pred_df['0'].values[-14*48:].reshape(-1, 1)
g_pred.shape
labels = np.load('./res/dg and g/labels.npy')[-14*48:]
labels = (labels.reshape(14, 48) * OD_max_pred).reshape(-1, 1)
labels_df = pd.DataFrame(labels).sort_values(by=0, ascending=False...
The mae loss is 55596.9690
The mape loss is 17.3799
The smape loss is 1.1917
The nrmse loss is 0.1396
The nmae loss is 0.0778
The CPC is 0.3505
MIT
Result_Analysis_dg_g.ipynb
HaTT2018/Deep_Gravity
Two models in one graph
run = 11
path = './runs/run%i/' % run
pred = np.load(path + 'pred.npy')[-14*48:]
labels = np.load('./res/dg and g/labels.npy')[-14*48:]
pred = (pred.reshape(14, 48) * OD_max_pred).reshape(-1, 1)
labels = (labels.reshape(14, 48) * OD_max_pred).reshape(-1, 1)
labels_df = pd.DataFrame(labels).sort_values(by=0, ascending=Fal...
_____no_output_____
MIT
Result_Analysis_dg_g.ipynb
HaTT2018/Deep_Gravity
Spatial Visualization
import geopandas as gpd

data_X_all = gpd.read_file('./data/data_X_all.shp')
stops = pd.read_csv('./data/stops_order.csv', index_col=0).iloc[:48, :]
data_X_all.head(2)
bart_coor = pd.read_csv('./data/station-coor.csv', index_col=0)
bart_coor.head(2)
fs = 12
fig = plt.figure(figsize=[20, 10], dpi=50)
ax0 = fig.add_subpl...
_____no_output_____
MIT
Result_Analysis_dg_g.ipynb
HaTT2018/Deep_Gravity
Observations and Insights
# Look across all previously generated figures and tables and write at least three observations or inferences that
# can be made from the data. Include these observations at the top of notebook.
# 1. Two of the drugs with more promising results also had the most mice tested.
# 2. There were very few outliers in the dat...
_____no_output_____
MIT
Pymaceuticals/pymaceuticals_starter.ipynb
MBoerenko/Matplotlib-Challenge
Summary Statistics
# A summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
mice_count = clean_df.groupby(['Drug Regimen']).count()['Mouse ID']
mean = clean_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].mean()
median = clean_df.groupby('Drug Regimen')['Tumor Volume (mm3)'...
_____no_output_____
MIT
Pymaceuticals/pymaceuticals_starter.ipynb
MBoerenko/Matplotlib-Challenge
Bar and Pie Charts
# A bar plot showing the total number of mice for each treatment throughout the course of the study using pandas.
summary_df.plot(y='Mice Count', kind='bar', color='blue')
plt.title("Mice Count per Drug Treatment")
plt.tight_layout()
plt.show()

# Generate a bar plot showing the total number of mice for each treatmen...
_____no_output_____
MIT
Pymaceuticals/pymaceuticals_starter.ipynb
MBoerenko/Matplotlib-Challenge
Quartiles, Outliers and Boxplots
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
max_df = clean_df.groupby('Mouse ID')['Timepoint'].max()

# Merge this group df with the original dataframe to get th...
_____no_output_____
MIT
Pymaceuticals/pymaceuticals_starter.ipynb
MBoerenko/Matplotlib-Challenge
Line and Scatter Plots
capomulin_data_df

# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
Capomulin_line_df = merged_df.loc[merged_df['Mouse ID'] == 's185', :].sort_values(by="Timepoint", ascending=False)
x_axis = Capomulin_line_df['Timepoint']
tumor_volume = Capomulin_line_df['Tumor Volume (mm3)']
...
_____no_output_____
MIT
Pymaceuticals/pymaceuticals_starter.ipynb
MBoerenko/Matplotlib-Challenge
Correlation and Regression
# Calculate the correlation coefficient and linear regression model # for mouse weight and average tumor volume for the Capomulin regimen correlation = st.pearsonr(avg_tumor['Weight (g)'],avg_tumor['Tumor Volume (mm3)']) print(f"The correlation between both factors is {round(correlation[0],2)}") # Generate a scatter p...
The r-squared is: 0.7532769048399547
MIT
Pymaceuticals/pymaceuticals_starter.ipynb
MBoerenko/Matplotlib-Challenge
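The Pearson coefficient that `st.pearsonr` returns can also be computed by hand, which makes the formula explicit. The weight/volume pairs below are toy values chosen to be perfectly linear, so r comes out as exactly 1 (unlike the study's r-squared of about 0.75):

```python
import math

# Illustrative (weight, avg tumor volume) pairs; not the study data.
weights = [15.0, 17.0, 19.0, 21.0, 23.0]
volumes = [36.0, 38.5, 41.0, 43.5, 46.0]

def pearson_r(xs, ys):
    """Sample Pearson correlation: covariance over the product of std devs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(weights, volumes)
print(round(r, 4))  # r**2 is the "r-squared" reported in the output above
```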
Two-Sided Test: Computing the Type II Error Parameter $p$ of $Bernoulli$ variables Author: Sergio García Prado - [garciparedes.me](https://garciparedes.me) Date: April 2018 Acknowledgments: I would like to thank professor [Pilar Rodríguez del Tío](http://www.eio.uva.es/~pilar/) for the review and correction...
CriticalValue <- function(n, p, alpha) {
    qnorm(alpha) * sqrt((p * (1 - p)) / n)
}
_____no_output_____
Apache-2.0
notebooks/beta-error-bernoulli-hypothesis-test.ipynb
garciparedes/r-examples
Computation of the probability $P(\bar{C})$:
PNegateC <- function(p, n, c, alpha) {
    pnorm(
        (c + CriticalValue(n, c, 1 - alpha / 2) - p) / sqrt((p * (1 - p)) / n)
    ) - pnorm(
        (c - CriticalValue(n, c, 1 - alpha / 2) - p) / sqrt((p * (1 - p)) / n)
    )
}
_____no_output_____
Apache-2.0
notebooks/beta-error-bernoulli-hypothesis-test.ipynb
garciparedes/r-examples
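For readers more comfortable in Python, the two R functions above can be mirrored with the standard library's `statistics.NormalDist`; a sketch under the same parameterisation (`n`, `p`, `c`, `alpha`):

```python
from statistics import NormalDist
from math import sqrt

_std_norm = NormalDist()  # standard normal, matching R's qnorm/pnorm

def critical_value(n, p, alpha):
    # qnorm(alpha) * sqrt(p * (1 - p) / n)
    return _std_norm.inv_cdf(alpha) * sqrt(p * (1 - p) / n)

def p_negate_c(p, n, c, alpha):
    """P(not reject) for a two-sided test centred at c, when the true value is p."""
    se = sqrt(p * (1 - p) / n)
    cv = critical_value(n, c, 1 - alpha / 2)
    return _std_norm.cdf((c + cv - p) / se) - _std_norm.cdf((c - cv - p) / se)

# When the true p equals c, the acceptance probability is 1 - alpha.
print(round(p_negate_c(0.5, 100, 0.5, 0.05), 3))
```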
Graphical representation of $\beta(p)$ for different values of $c$ and $n$ (keeping $\alpha$ fixed), to examine how it varies
n.vec <- 10 ^ (1:3)
c.vec <- c(0.25, 0.5, 0.75)
p <- seq(0, 1, length = 200)
_____no_output_____
Apache-2.0
notebooks/beta-error-bernoulli-hypothesis-test.ipynb
garciparedes/r-examples
Results:
par(mfrow = c(length(n.vec), length(c.vec)))
for (n in n.vec) {
    for (c in c.vec) {
        plot(p, 1 - PNegateC(p, n, c, 0.05), type = "l",
             main = paste("c =", c, "\nn =", n),
             ylab = "A(p) = 1 - B(p)")
    }
}
_____no_output_____
Apache-2.0
notebooks/beta-error-bernoulli-hypothesis-test.ipynb
garciparedes/r-examples
Split EOL user crops dataset into train and test---*Last Updated 11 February 2020* Instead of creating image annotations from scratch, EOL user-generated cropping coordinates are used to create training and testing data to teach object detection models and evaluate model accuracy for YOLO via darkflow, SSD and Faste...
# Mount google drive to import/export files from google.colab import drive drive.mount('/content/drive') import pandas as pd import numpy as np # Read in EOL user-generated cropping data crops = pd.read_csv('drive/My Drive/fall19_smithsonian_informatics/train/chiroptera_crops.tsv', sep="\t", header=0) print(crops.head...
_____no_output_____
MIT
object_detection_for_image_cropping/chiroptera/archive/chiroptera_split_train_test.ipynb
aubricot/computer_vision_with_eol_images
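The train/test split itself needs no ML library: shuffle the row indices with a fixed seed and slice. A stand-alone sketch (the notebook applies the same idea to the crops dataframe):

```python
import random

def train_test_split(rows, test_fraction=0.2, seed=42):
    """Shuffle a copy of the rows and slice off a test set."""
    shuffled = rows[:]
    random.Random(seed).shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]

rows = list(range(100))  # stand-in for dataframe row indices
train, test = train_test_split(rows, test_fraction=0.2)
print(len(train), len(test))  # 80 20
```

Fixing the seed makes the split reproducible across runs, which matters when the resulting files feed several object-detection frameworks.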
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipy...
# Installs geemap package import subprocess try: import geemap except ImportError: print('geemap package not installed. Installing ...') subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap']) # Checks whether this notebook is running on Google Colab try: import google.colab import gee...
_____no_output_____
MIT
JavaScripts/Image/LandcoverCleanup.ipynb
YuePanEdward/earthengine-py-notebooks
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
_____no_output_____
MIT
JavaScripts/Image/LandcoverCleanup.ipynb
YuePanEdward/earthengine-py-notebooks
Add Earth Engine Python script
# Add Earth Engine dataset
_____no_output_____
MIT
JavaScripts/Image/LandcoverCleanup.ipynb
YuePanEdward/earthengine-py-notebooks
Display Earth Engine data layers
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
_____no_output_____
MIT
JavaScripts/Image/LandcoverCleanup.ipynb
YuePanEdward/earthengine-py-notebooks
All the IPython Notebooks in this lecture series by Dr. Milan Parmar are available @ **[GitHub](https://github.com/milaan9/04_Python_Functions/tree/main/002_Python_Functions_Built_in)** Python `open()`The **`open()`** function opens the file (if possible) and returns the corresponding file object.**Syntax**:```pythono...
# Example 1: How to open a file in Python?

# opens test.txt file of the current directory
f = open("test.txt")

# To get the current directory
#import os
#os.getcwd()

# specifying the full path
f = open("C:/Python99/README.txt")
_____no_output_____
MIT
002_Python_Functions_Built_in/049_Python_open().ipynb
ATrain951/01.python_function-milaan9
Since the mode is omitted, the file is opened in **`'r'`** mode, i.e., for reading.
# Example 2: Providing mode to open() # opens the file in reading mode #f = open("path_to_file", mode='r') f = open("C:/Python99/README.txt", mode='r') # opens the file in writing mode #f = open("path_to_file", mode = 'w') f = open("C:/Python99/README.txt", mode='w') # opens for writing to the end #f = open("path_...
_____no_output_____
MIT
002_Python_Functions_Built_in/049_Python_open().ipynb
ATrain951/01.python_function-milaan9
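A safer pattern than the bare `open()` calls above is a `with` block, which closes the file even if an error occurs. A sketch that writes, reads, and appends inside a temporary directory so the paths actually exist:

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "test.txt")

    # 'w' creates (or truncates) the file for writing.
    with open(path, mode="w") as f:
        f.write("hello")

    # 'r' (the default) opens it for reading.
    with open(path, mode="r") as f:
        content = f.read()

    # 'a' appends to the end instead of truncating.
    with open(path, mode="a") as f:
        f.write(" world")

    with open(path) as f:
        final = f.read()

print(content, "/", final)
```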
Python's default file encoding is platform dependent, so it is safest not to rely on it; you can set it explicitly by passing the **`encoding`** parameter.
#f = open("path_to_file", mode = 'r', encoding='utf-8')
f = open("C:/Python99/README.txt", mode = 'r', encoding='utf-8')
_____no_output_____
MIT
002_Python_Functions_Built_in/049_Python_open().ipynb
ATrain951/01.python_function-milaan9
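The effect of the `encoding` parameter can be checked by round-tripping a non-ASCII string through a temporary file:

```python
import os
import tempfile

text = "café über naïve"  # non-ASCII characters need an explicit encoding

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "readme.txt")
    # Write and read back with the same explicit encoding.
    with open(path, mode="w", encoding="utf-8") as f:
        f.write(text)
    with open(path, mode="r", encoding="utf-8") as f:
        round_tripped = f.read()

print(round_tripped == text)  # True
```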
Exploration and ML Models with Sampled NYC Taxi Trip and Fare Dataset using Scala Expected time to run this Notebook: About 5 mins on a HDInsight Spark (version 1.6) cluster with 4 worker nodes (D12) INTRODUCTION, OBJECTIVE AND ORGANIZATION INTRODUCTIONHere we show some features and capabilities of Spark's MLlib an...
import java.util.Calendar

val beginningTime = Calendar.getInstance().getTime()
Creating SparkContext as 'sc' Creating HiveContext as 'sqlContext' beginningTime: java.util.Date = Sun Jul 31 17:11:46 UTC 2016
CC-BY-4.0
Misc/Spark/Scala/Exploration-Modeling-and-Scoring-using-Scala.ipynb
jfindlay/Azure-MachineLearning-DataScience
IMPORT FUNCTIONS
import org.apache.spark.sql.SQLContext import org.apache.spark.sql.functions._ import java.text.SimpleDateFormat import java.util.Calendar import sqlContext.implicits._ import org.apache.spark.sql.Row // Spark SQL functions import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType, FloatType,...
sqlContext: org.apache.spark.sql.SQLContext = org.apache.spark.sql.SQLContext@179ce90b
CC-BY-4.0
Misc/Spark/Scala/Exploration-Modeling-and-Scoring-using-Scala.ipynb
jfindlay/Azure-MachineLearning-DataScience
1. DATA INGESTION: Read in joined 0.1% taxi trip and fare file (as tsv), format and clean data, and create data-frame Specify location of input file and storage location for models in the Azure blob that is attached to the cluster
// Location of training data val taxi_train_file = sc.textFile("wasb://mllibwalkthroughs@cdspsparksamples.blob.core.windows.net/Data/NYCTaxi/JoinedTaxiTripFare.Point1Pct.Train.tsv") val header = taxi_train_file.first; // Set model storage directory path. This is where models will be saved. val modelDir = "wasb:///user...
modelDir: String = wasb:///user/remoteuser/NYCTaxi/Models/
CC-BY-4.0
Misc/Spark/Scala/Exploration-Modeling-and-Scoring-using-Scala.ipynb
jfindlay/Azure-MachineLearning-DataScience
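The Scala cell captures the header line with `first`, presumably so it can be filtered out before parsing. That skip-the-header idea can be sketched in plain Python on a toy TSV:

```python
# Toy stand-in for the TSV contents read by sc.textFile above.
lines = [
    "medallion\thack_license\tvendor_id",  # header row
    "A1\tL1\tVTS",
    "A2\tL2\tCMT",
]

header = lines[0]
# Drop every occurrence of the header line, mirroring rdd.filter(_ != header),
# then split each remaining row into fields.
data = [line.split("\t") for line in lines if line != header]
print(data)
```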
Import data, create RDD and define data-frame according to schema
val starttime = Calendar.getInstance().getTime() /* DEFINE SCHEMA BASED ON HEADER OF THE FILE */ val sqlContext = new SQLContext(sc) val taxi_schema = StructType( Array( StructField("medallion", StringType, true), StructField("hack_license", StringType, true), StructField("vendor_id", Stri...
Time taken to run the above cell: 8 seconds.
CC-BY-4.0
Misc/Spark/Scala/Exploration-Modeling-and-Scoring-using-Scala.ipynb
jfindlay/Azure-MachineLearning-DataScience