Problem 2 Let's verify that the data still looks good. Display a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.
def letter_to_number(letter):
    return ord(letter) - ord('a')

def get_data_for_letter(letter, data_set_mapping):
    file_name_for_letter = data_set_mapping[letter_to_number(letter)]
    with open(file_name_for_letter, 'rb') as f:
        return pickle.load(f)

def display_number(data, number=None):
    if number i...
udacity_machine_learning_notes/deep_learning/1_notmnist.ipynb
anshbansal/anshbansal.github.io
mit
Problem 3 Another check: we expect the data to be balanced across classes. Verify that.
data_lengths = dict()
for dataset_file_name in train_datasets:
    with open(dataset_file_name, 'rb') as f:
        dataset = pickle.load(f)
        data_lengths[dataset_file_name] = len(dataset)
data_lengths
udacity_machine_learning_notes/deep_learning/1_notmnist.ipynb
anshbansal/anshbansal.github.io
mit
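The dict built above makes the balance check easy to quantify; a small hedged helper (the function name and the sample counts are mine, not from the notebook):

```python
# Given per-class lengths collected into a dict, report the spread
# between the largest and smallest class: a ratio near 1.0 means balanced.
def balance_ratio(data_lengths):
    sizes = list(data_lengths.values())
    return max(sizes) / min(sizes)

# hypothetical counts for illustration
ratio = balance_ratio({'A.pickle': 52909, 'B.pickle': 52911, 'C.pickle': 52912})
```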
Problem 4 Convince yourself that the data is still good after shuffling!
def get_char_for_number(labels, number):
    return chr(labels[number] + ord('a'))

number = 107231
print(get_char_for_number(train_labels, number))
display_number(train_dataset, number)
udacity_machine_learning_notes/deep_learning/1_notmnist.ipynb
anshbansal/anshbansal.github.io
mit
Problem 5 By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok ...
%load_ext autotime

reg = LogisticRegression()
size = 20000

_shape = train_dataset.shape
train_dataset_2 = train_dataset.reshape(_shape[0], _shape[1] * _shape[2])
train_labels_2 = train_labels

_shape = valid_dataset.shape
valid_dataset_2 = valid_dataset.reshape(_shape[0], _shape[1] * _shape[2])
valid_labels_2 = vali...
udacity_machine_learning_notes/deep_learning/1_notmnist.ipynb
anshbansal/anshbansal.github.io
mit
Create Google Cloud Storage bucket and folder - function
# create_gcs_bucket_folder
from typing import NamedTuple

def create_gcs_bucket_folder(ctx: str, RPM_GCP_STORAGE_BUCKET: str, RPM_GCP_PROJECT: str, RPM_DEFAULT_BUCKET_EXT: str, RPM_GCP_STORAGE_BUCKET_FOL...
notebooks/community/analytics-componetized-patterns/retail/propensity-model/bqml/bqml_kfp_retail_propensity_to_purchase.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
Determine the new revision of the model - function
# get_bqml_model_version
from typing import NamedTuple

def get_bqml_model_version(ctx: str, RPM_GCP_PROJECT: str, RPM_MODEL_EXPORT_PATH: str, RPM_DEFAULT_MODEL_EXPORT_PATH: str, RPM_MODEL_VER_PREFIX: str, ...
notebooks/community/analytics-componetized-patterns/retail/propensity-model/bqml/bqml_kfp_retail_propensity_to_purchase.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
Define the KubeFlow Pipeline (KFP)
# define the pipeline
import kfp.components as comp

def create_kfp_comp(rpm_comp):
    """
    Converts a Python function to a component and returns a task (ContainerOp) factory

    Returns:
        Outputs (:obj:`ContainerOp`): returns the operation
    """
    return comp.func_to_container_op(
        func=rpm_comp,
        b...
notebooks/community/analytics-componetized-patterns/retail/propensity-model/bqml/bqml_kfp_retail_propensity_to_purchase.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
If there is an error in the pipeline, you will see it in the KubeFlow Pipelines UI in the Experiments section. If you encounter any errors, identify the issue, fix it in the Python function, unit test the function, update the pipeline definition, compile, create an experiment, and run the experiment. Iterate through the ...
import pandas as pd

# Data Exploration
# !!! BE CAREFUL !!! adjust the query to sample the data.
# 1. Get pandas df from BigQuery
# 2. plot histogram using matplotlib
#########################
from google.cloud import bigquery as bq

rpm_context = get_local_context()
client = bq.Client(project=rpm_context["RPM_GCP_PROJEC...
notebooks/community/analytics-componetized-patterns/retail/propensity-model/bqml/bqml_kfp_retail_propensity_to_purchase.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
Delete Google Cloud Storage folder
import traceback

# delete the storage folder
from google.cloud import storage
from google.cloud.exceptions import Forbidden, NotFound

def delete_storage_folder(bucket_name, folder_name):
    """Deletes a folder in Google Cloud Storage.
    This is not recommended for use in a production environment. Comes...
notebooks/community/analytics-componetized-patterns/retail/propensity-model/bqml/bqml_kfp_retail_propensity_to_purchase.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
Delete Google Cloud Storage bucket
import traceback

# delete the bucket
from google.cloud import storage
from google.cloud.exceptions import Forbidden, NotFound

def delete_storage_bucket(bucket_name):
    """Deletes a bucket in Google Cloud Storage.
    This is not recommended for use in a production environment. Comes handy in the iterati...
notebooks/community/analytics-componetized-patterns/retail/propensity-model/bqml/bqml_kfp_retail_propensity_to_purchase.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
Constructing the experiment It is obvious that to observe a series of N equals we need to have done at least N throws. Hence we initialize the experiment by tossing the coin N times. We define the categories as "H" and "T".
list_of_categories = ["H", "T"]

def initializeExperiment(N, prob=[0.5, 0.5]):
    tosses = []
    for idx in range(N):
        tosses.append(choice(list_of_categories, p=prob))
    return tosses
Numerical-Experimentation/Series of N equals in coin tosses.ipynb
jotterbach/Data-Exploration-and-Numerical-Experimentation
cc0-1.0
Next we need to check if the last N throws have been equal to the category we want to observe. To do this we construct a set of the last N tosses. If the size of the set is 1 and the category in the set is the one we are looking for we found a sequence of N equal tosses.
def areLastNTossesEqualy(tosses, N, category):
    subset = set(tosses[-N:])
    if (len(subset) == 1) and (category in subset):
        return True
    else:
        return False
Numerical-Experimentation/Series of N equals in coin tosses.ipynb
jotterbach/Data-Exploration-and-Numerical-Experimentation
cc0-1.0
Running the experiment Since we have no prior knowledge of when the experiment will terminate we limit ourselves to a maximum number of tosses. We always check if the last N tosses have been heads (H). If yes, we terminate otherwise we continue with another toss.
def runSingleExperiment(max_num_throws, number_of_equals, prob=[0.5, 0.5]):
    tosses = initializeExperiment(number_of_equals, prob)
    throws = 0
    while throws < max_num_throws:
        if areLastNTossesEqualy(tosses, number_of_equals, "H"):
            return len(tosses)
        else:
            tosses.append(...
Numerical-Experimentation/Series of N equals in coin tosses.ipynb
jotterbach/Data-Exploration-and-Numerical-Experimentation
cc0-1.0
Finally we want to run M experiments and evaluate the expected number of throws.
def runKExperimentsAndEvaluate(m_experiments, number_of_equals, number_of_maximum_tosses=500, prob=[0.5, 0.5]):
    number_of_tosses = []
    for idx in range(m_experiments):
        number_of_tosses.append(runSingleExperiment(number_of_maximum_tosses, number_of_equals, prob))
    return np.mean(number_of_tosses), np...
Numerical-Experimentation/Series of N equals in coin tosses.ipynb
jotterbach/Data-Exploration-and-Numerical-Experimentation
cc0-1.0
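Since the cell above is truncated, here is a hedged, self-contained re-implementation of the same experiment that can be run stand-alone (function names are mine, not the notebook's):

```python
import random

# Count fair-coin tosses until `n` heads appear in a row.
def tosses_until_run(n, p=0.5, rng=random.Random(0)):
    run = 0
    count = 0
    while run < n:
        count += 1
        run = run + 1 if rng.random() < p else 0
    return count

trials = [tosses_until_run(3) for _ in range(20000)]
mean = sum(trials) / len(trials)
# theory for a fair coin and 3 heads: 2**(3+1) - 2 = 14
```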
So for 3 heads in a row, what's the expected number of tosses to observe this event?
print("We expect to observe 3 heads after %3.2f tosses" % runKExperimentsAndEvaluate(5000, 3)[0])
Numerical-Experimentation/Series of N equals in coin tosses.ipynb
jotterbach/Data-Exploration-and-Numerical-Experimentation
cc0-1.0
As we will see later the non-integer nature of this expectation value is a residual of the numerical procedure we employed and it could easily be cast to an integer. Before we get into the mathematical formulation of the problem, let's study the distribution a little bit more. A good way to gain some insight into the d...
tosses_three_equals = runKExperimentsAndEvaluate(25000, 3, number_of_maximum_tosses=1000)[2]
tosses_four_equals = runKExperimentsAndEvaluate(25000, 4, number_of_maximum_tosses=1000)[2]
tosses_five_equals = runKExperimentsAndEvaluate(25000, 5, number_of_maximum_tosses=1000)[2]

bin_range = range(0, 150, 2)
plt.hist(tosse...
Numerical-Experimentation/Series of N equals in coin tosses.ipynb
jotterbach/Data-Exploration-and-Numerical-Experimentation
cc0-1.0
Maybe surprisingly, the distribution is not very well localized. In fact, trying to fit it with an exponential function given the calculated mean fails. Increasing the number of required equals makes the curve flatter and more heavy-tailed. Thus the variance itself is also large. In fact, it is of the same order as the me...
def expectationValueForNumberOfTosses(p, number_of_equals):
    return int(float(1 - np.power(p, number_of_equals)) / float(np.power(p, number_of_equals) * (1 - p)))

equals = np.linspace(1, 20, 20)
y = []
for x in equals:
    y.append(expectationValueForNumberOfTosses(0.5, x))
plt.semilogy(equals, y, 'o')
Numerical-Experimentation/Series of N equals in coin tosses.ipynb
jotterbach/Data-Exploration-and-Numerical-Experimentation
cc0-1.0
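The closed-form expression in the cell above can be sanity-checked against the well-known fair-coin result $2^{N+1}-2$; a minimal stand-alone sketch:

```python
# E[T] = (1 - p^N) / (p^N * (1 - p)): expected tosses until N heads in a row,
# for a coin with head-probability p.
def expected_tosses(p, number_of_equals):
    return (1 - p ** number_of_equals) / (p ** number_of_equals * (1 - p))

# For p = 1/2 this simplifies to 2^(N+1) - 2 (e.g. 14 tosses for 3 heads).
for n in range(1, 10):
    assert round(expected_tosses(0.5, n)) == 2 ** (n + 1) - 2
```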
From the plot above we see that the number of tosses until we have $N$ equals grows exponentially! (Observe the logarithmic scale.) For $N=20$ heads in a row we need on the order of 2 million successive throws for a fair coin. If we could manually throw a coin every second, it would take us about 23 days of uninterrupted co...
from IPython.core.display import HTML

def css_styling():
    styles = open("../styles/custom.css", "r").read()
    return HTML(styles)

css_styling()
Numerical-Experimentation/Series of N equals in coin tosses.ipynb
jotterbach/Data-Exploration-and-Numerical-Experimentation
cc0-1.0
Getting the data In this notebook we'll use Shakespeare data, but you can choose basically any text file you like!
# load file
file_path = 'shakespeare.txt'
with open(file_path, 'r') as f:
    data = f.read()
print("Data length:", len(data))

# make all letters lower case
# it makes the problem simpler
# but we'll lose information
data = data.lower()
code_samples/RNN/shakespeare/WIP/model.ipynb
mari-linhares/tensorflow-workshop
apache-2.0
Preprocess data
# vocabulary
vocab = list(set(data))
vocab_size = len(vocab)
print(vocab)

# embedding words to one-hot
def text_to_onehot(data_, vocab):
    data = np.zeros((len(data_), len(vocab)))
    cnt = 0
    for s in data_:
        v = [0.0] * len(vocab)
        v[vocab.index(s)] = 1.0
        data[cnt, :] = v
        cnt += 1
    ...
code_samples/RNN/shakespeare/WIP/model.ipynb
mari-linhares/tensorflow-workshop
apache-2.0
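The character-level one-hot encoder above can also be written in vectorized form; a hedged sketch (assuming `vocab` is a list of unique characters, as in the cell above):

```python
import numpy as np

# Vectorized one-hot encoding: one row per character, one column per
# vocabulary entry, using fancy indexing instead of a Python loop.
def text_to_onehot(text, vocab):
    index = {ch: i for i, ch in enumerate(vocab)}
    onehot = np.zeros((len(text), len(vocab)))
    onehot[np.arange(len(text)), [index[ch] for ch in text]] = 1.0
    return onehot

vocab = sorted(set("hello"))   # ['e', 'h', 'l', 'o']
encoded = text_to_onehot("hello", vocab)
```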
Read datasets
BATCH_SIZE = 20
TIMESERIES_COL = 'data'
# how many characters should be used by the LSTM as input
N_INPUTS = 10
# how many characters should the LSTM predict
N_OUTPUTS = 1

# -------- read data and convert to needed format -----------
def read_dataset(filename, mode=tf.estimator.ModeKeys.TRAIN):
    def _input_fn():
    ...
code_samples/RNN/shakespeare/WIP/model.ipynb
mari-linhares/tensorflow-workshop
apache-2.0
Visualizing predictions
# read test csv
def read_csv(filename):
    with open(filename, 'rt') as csvfile:
        reader = csv.reader(csvfile)
        data = []
        for row in reader:
            data.append([float(x) for x in row])
        return data

test_data = read_csv('test.csv')

# update predictions with features
# preds = test_da...
code_samples/RNN/shakespeare/WIP/model.ipynb
mari-linhares/tensorflow-workshop
apache-2.0
Exploration Using the Iris Dataset where each row in the database (or CSV file) is a set of measurements of an individual iris flower. Each sample in this dataset is described by 4 features and can belong to one of the target classes: Features in the Iris dataset: sepal length in cm sepal width in cm petal length in c...
raw_dir = os.path.join(os.getcwd(), os.pardir, "data/raw/")
irisdf = pd.read_csv(raw_dir + "newiris.csv", index_col=0)
print("* irisdf.head()", irisdf.head(10), sep="\n", end="\n\n")
print("* irisdf.tail()", irisdf.tail(10), sep="\n", end="\n\n")
print("* iris types:", irisdf["Name"].unique(), sep="\n")
features = list...
notebooks/Decision Trees.ipynb
csantill/AustinSIGKDD-DecisionTrees
bsd-3-clause
Building the Decision Tree. For full usage of Decision Tree Classifier refer to scikit-learn api http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html Tunable parameters: criterion : 'gini' or 'entropy' default 'gini' min_samples_split : The minimum number of samples required to sp...
dt = tree.DecisionTreeClassifier(criterion='gini', min_samples_split=5, random_state=1024)
dt.fit(X_train, y_train)
#dt.fit(X, y)
notebooks/Decision Trees.ipynb
csantill/AustinSIGKDD-DecisionTrees
bsd-3-clause
Calculate the scores Score against the training set
def printScores(amodel, xtrain, ytrain, xtest, ytest):
    tscores = amodel.score(xtrain, ytrain)
    vscores = amodel.score(xtest, ytest)
    print("Training score is %f" % tscores)
    print("Validation score is %f" % vscores)
    print("Model depth is %i" % amodel.tree_.max_depth)

printScores(dt, X_train, y_train...
notebooks/Decision Trees.ipynb
csantill/AustinSIGKDD-DecisionTrees
bsd-3-clause
Visualize Decision Tree
dot_data = StringIO()
tree.export_graphviz(dt, out_file=dot_data,
                     feature_names=features,
                     class_names='Name',
                     filled=True, rounded=True,
                     special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
I...
notebooks/Decision Trees.ipynb
csantill/AustinSIGKDD-DecisionTrees
bsd-3-clause
Understanding Decision Trees http://chrisstrelioff.ws/sandbox/2015/06/25/decision_trees_in_python_again_cross_validation.html
def get_code(tree, feature_names, target_names, spacer_base=" "):
    """Produce pseudo-code for decision tree.

    Args
    ----
    tree -- scikit-learn Decision Tree.
    feature_names -- list of feature names.
    target_names -- list of target (class) names.
    spacer_base -- used for spacing cod...
notebooks/Decision Trees.ipynb
csantill/AustinSIGKDD-DecisionTrees
bsd-3-clause
Feature Importance The feature importances. The higher, the more important the feature. The importance of a feature is computed as the (normalized) total reduction of the criterion brought by that feature. It is also known as the Gini importance
def listFeatureImportance(amodel, features):
    ### Extract feature importance
    ## based from
    ## http://stackoverflow.com/questions/34239355/feature-importance-extraction-of-decision-trees-scikit-learn
    importances = amodel.feature_importances_
    indices = np.argsort(importances)[::-1]

    # Print the f...
notebooks/Decision Trees.ipynb
csantill/AustinSIGKDD-DecisionTrees
bsd-3-clause
--- Day 4: Secure Container --- You arrive at the Venus fuel depot only to discover it's protected by a password. The Elves had written the password on a sticky note, but someone threw it out. However, they do remember a few key facts about the password: It is a six-digit number. The value is within the range given in ...
data_in = '272091-815432'

def criteria(word):
    meets = True
    if '11' in word or \
       '22' in word or \
       '33' in word or \
       '44' in word or \
       '55' in word or \
       '66' in word or \
       '77' in word or \
       '88' in word or \
       '99' in word:
        last_num = None
        for x in word:
            ...
advent_of_code/2019/04/04.ipynb
andrewzwicky/puzzles
mit
--- Part Two --- An Elf just remembered one more important detail: the two adjacent matching digits are not part of a larger group of matching digits. Given this additional criterion, but still ignoring the range rule, the following are now true: 112233 meets these criteria because the digits never decrease and all rep...
def criteria_2(word):
    meets = True
    this_digit_ok = [False] * 10
    for dig in '1234567890':
        if dig * 2 in word:
            this_digit_ok[int(dig)] = True
        for length in range(3, 7):
            if dig * length in word:
                this_digit_ok[int(dig)] = False
    if any(this_digit_ok):
        ...
advent_of_code/2019/04/04.ipynb
andrewzwicky/puzzles
mit
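The run-length logic of criteria_2 can be expressed more compactly with itertools.groupby; this is an alternative sketch, not the author's code:

```python
from itertools import groupby

# Part-two rule: digits never decrease AND at least one run of identical
# adjacent digits has length exactly 2 (not part of a larger group).
def meets_part_two(word):
    non_decreasing = all(a <= b for a, b in zip(word, word[1:]))
    has_exact_pair = any(len(list(g)) == 2 for _, g in groupby(word))
    return non_decreasing and has_exact_pair
```

The examples from the puzzle text check out: '112233' and '111122' pass, '123444' fails because its only repeat is a run of three.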
Then, read the (sample) input tables for blocking purposes
# Get the datasets directory
datasets_dir = em.get_install_path() + os.sep + 'datasets'

# Get the paths of the input tables
path_A = datasets_dir + os.sep + 'person_table_A.csv'
path_B = datasets_dir + os.sep + 'person_table_B.csv'

# Read the CSV files and set 'ID' as the key attribute
A = em.read_csv_metadata(path_A...
notebooks/guides/step_wise_em_guides/Removing Features From Feature Table.ipynb
anhaidgroup/py_entitymatching
bsd-3-clause
Set up a baseline to compare against k-winners, one that is fast to train and evaluate
dataset = Dataset(config=dict(dataset_name='MNIST',
                              data_dir='~/nta/datasets',
                              batch_size_train=256,
                              batch_size_test=1024))

# torch cross_entropy is log softmax activation + negative log likelihood
loss_func = F.cross_entropy

# a custom Lambda module
class Lambda(nn.Module):
    def __ini...
projects/dynamic_sparse/notebooks/kWinners.ipynb
chetan51/nupic.research
gpl-3.0
Extend for the k-Winners simulation:
# from functions import KWinnersBatch as KWinners
from functions import KWinners

model_gen = lambda k: nn.Sequential(
    Lambda(lambda x: x.view(-1, 28*28)),
    nn.Linear(784, 100),
    KWinners(k_perc=k),
    nn.Linear(100, 10),
)
model = model_gen(.1)
fit(model, dataset, epochs=1)
for k in np.arange(0.01, 1, 0.1):
    ...
projects/dynamic_sparse/notebooks/kWinners.ipynb
chetan51/nupic.research
gpl-3.0
Model accuracy with 1% of active neurons, with and without boosting:
# no non-linearity required to get a low accuracy
model = nn.Sequential(
    Lambda(lambda x: x.view(-1, 28*28)),
    nn.Linear(784, 100),
    nn.Linear(100, 10),
)
fit(model, dataset)
projects/dynamic_sparse/notebooks/kWinners.ipynb
chetan51/nupic.research
gpl-3.0
Extend to CNNs Further tests and comparison
# simple CNN Model
non_linearity = nn.ReLU
model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1),  # 14x14
    non_linearity(),
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # 7x7
    non_linearity(),
    nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),  # 4x4
    non_linearity...
projects/dynamic_sparse/notebooks/kWinners.ipynb
chetan51/nupic.research
gpl-3.0
Evaluate the role of boosting
# kWinners without boosting
from functions import KWinners

model_gen = lambda k: nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1),  # 14x14
    KWinners(k_perc=k, use_boosting=False),
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # 7x7
    KWinners(k_perc=k, use_boosting=False),
    nn....
projects/dynamic_sparse/notebooks/kWinners.ipynb
chetan51/nupic.research
gpl-3.0
Plot loss and acc for different betas and answer the question if boosting helps or hurts
rcParams['figure.figsize'] = (16, 8)
for beta, loss in zip(betas, losses):
    plt.plot(loss, label=str(beta))
plt.legend();

for beta, acc in zip(betas, accs):
    plt.plot(acc, label=str(beta))
plt.legend();
projects/dynamic_sparse/notebooks/kWinners.ipynb
chetan51/nupic.research
gpl-3.0
Exploration on how torch.expand is working in the background
# topk experimentation
t = torch.randn((4, 3, 2))
t
b = torch.ones((3, 2)) * 2
b
t.shape, b.shape
b.expand((4, 3, 2))
t * b.expand((4, 3, 2))
projects/dynamic_sparse/notebooks/kWinners.ipynb
chetan51/nupic.research
gpl-3.0
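For readers without torch handy, the same broadcasting behaviour can be illustrated with NumPy, where `np.broadcast_to` plays the role of `torch.expand` (a hedged analogue, not the notebook's code):

```python
import numpy as np

# broadcast_to creates a read-only view with the larger shape; plain
# broadcasting in `t * b` produces the same result as the explicit expand.
t = np.arange(24).reshape(4, 3, 2).astype(float)
b = np.ones((3, 2)) * 2

expanded = np.broadcast_to(b, (4, 3, 2))
assert expanded.shape == (4, 3, 2)
assert np.array_equal(t * b, t * expanded)
```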
Exploration on how to select kth winners in torch
# topk experimentation
t = torch.randn((4, 3, 2))
t
tx = t.view(t.size()[0], -1)
print(tx.size())
val, _ = torch.kthvalue(tx, 1, dim=-1)
val
[t.size()[0]] + [1 for _ in range(len(t.size())-1)]
t.shape
(t > val.view(4, 1, 1)).sum(dim=0).shape
(t > val.view(4, 1, 1)).sum(dim=0)
t > val.view(4, 1, 1)
t.shape
val.view(4, 1...
projects/dynamic_sparse/notebooks/kWinners.ipynb
chetan51/nupic.research
gpl-3.0
Download & Process Security Dataset
sd_file = "https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/defense_evasion/host/covenant_lolbin_wuauclt_createremotethread.zip"
registerMordorSQLTable(spark, sd_file, "sdTable")
docs/notebooks/windows/05_defense_evasion/WIN-201012183248.ipynb
VVard0g/ThreatHunter-Playbook
mit
Analytic I Look for wuauclt with the specific parameters used to load and execute a DLL. | Data source | Event Provider | Relationship | Event | |:------------|:---------------|--------------|-------| | Process | Microsoft-Windows-Sysmon/Operational | Process created Process | 1 |
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Image, CommandLine
FROM sdTable
WHERE Channel = 'Microsoft-Windows-Sysmon/Operational'
    AND EventID = 1
    AND Image LIKE '%wuauclt.exe'
    AND CommandLine LIKE '%wuauclt%UpdateDeploymentProvider%.dll%RunHandlerComServer'
'''
)
df.show(10, False)
docs/notebooks/windows/05_defense_evasion/WIN-201012183248.ipynb
VVard0g/ThreatHunter-Playbook
mit
Analytic II Look for unsigned DLLs being loaded by wuauclt. You might have to stack the results and find potential anomalies over time. | Data source | Event Provider | Relationship | Event | |:------------|:---------------|--------------|-------| | Module | Microsoft-Windows-Sysmon/Operational | Process loaded DLL | 7...
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Image, ImageLoaded
FROM sdTable
WHERE Channel = 'Microsoft-Windows-Sysmon/Operational'
    AND EventID = 7
    AND Image LIKE '%wuauclt.exe'
    AND Signed = 'false'
'''
)
df.show(10, False)
docs/notebooks/windows/05_defense_evasion/WIN-201012183248.ipynb
VVard0g/ThreatHunter-Playbook
mit
Analytic III Look for wuauclt creating and running a thread in the virtual address space of another process via the CreateRemoteThread API. | Data source | Event Provider | Relationship | Event | |:------------|:---------------|--------------|-------| | Process | Microsoft-Windows-Sysmon/Operational | Process wrote_to ...
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, TargetImage
FROM sdTable
WHERE Channel = 'Microsoft-Windows-Sysmon/Operational'
    AND EventID = 8
    AND SourceImage LIKE '%wuauclt.exe'
'''
)
df.show(10, False)
docs/notebooks/windows/05_defense_evasion/WIN-201012183248.ipynb
VVard0g/ThreatHunter-Playbook
mit
Analytic IV Look for recent files created being loaded by wuauclt. | Data source | Event Provider | Relationship | Event | |:------------|:---------------|--------------|-------| | File | Microsoft-Windows-Sysmon/Operational | Process created File | 11 | | File | Microsoft-Windows-Sysmon/Operational | Process loaded DL...
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, ImageLoaded
FROM sdTable b
INNER JOIN (
    SELECT TargetFilename, ProcessGuid
    FROM sdTable
    WHERE Channel = 'Microsoft-Windows-Sysmon/Operational'
        AND EventID = 11
) a
ON b.ImageLoaded = a.TargetFilename
WHERE Channel = 'Microsoft-Windows-Sysmon/Ope...
docs/notebooks/windows/05_defense_evasion/WIN-201012183248.ipynb
VVard0g/ThreatHunter-Playbook
mit
Analytic V Look for wuauclt loading recently created DLLs and writing to another process | Data source | Event Provider | Relationship | Event | |:------------|:---------------|--------------|-------| | Module | Microsoft-Windows-Sysmon/Operational | Process created File | 11 | | Module | Microsoft-Windows-Sysmon/Opera...
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, d.TargetImage, c.ImageLoaded
FROM sdTable d
INNER JOIN (
    SELECT b.ProcessGuid, b.ImageLoaded
    FROM sdTable b
    INNER JOIN (
        SELECT TargetFilename, ProcessGuid
        FROM sdTable
        WHERE Channel = 'Microsoft-Windows-Sysmon/Operational'
            AND E...
docs/notebooks/windows/05_defense_evasion/WIN-201012183248.ipynb
VVard0g/ThreatHunter-Playbook
mit
Bayes' Theorem Bayes' theorem, well known in statistics, states that: $$P(A|B) = \frac{P(B|A)P(A)}{P(B)}$$ The derivation of the theorem is not as important as its interpretation. If $A$ is a (non-observable) label and $B$ is a set of properties of an element, then we have a quite i...
from sklearn import mixture
from sklearn.cross_validation import train_test_split

def treinamento_GMM_ML(train_size=0.3, n_components=2):
    # Split the data appropriately
    dados_treino, dados_teste, rotulos_treino, rotulos_teste =\
        train_test_split(altura_volei + altura_futebol, rotulos_volei + rotulos_fut...
bayesiano.ipynb
tiagoft/inteligencia_computacional
mit
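A worked numeric instance of the theorem may help before the GMM code; the numbers here are purely illustrative, not from the notebook:

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
# Illustrative numbers: label A has prior 1%, and the observed property B
# occurs with probability 90% given A and 5% given not-A.
p_a = 0.01
p_b_given_a = 0.90
p_b_given_not_a = 0.05

# total probability of B (law of total probability)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
p_a_given_b = p_b_given_a * p_a / p_b   # posterior, about 0.154
```

Even with a 90% "hit rate", the posterior stays low because the prior is small; that is precisely the interplay the ML/MAP discussion below revolves around.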
We can check the stability of the model for different training-set sizes, in a way similar to what we did in the KNN case:
# Parameters for the exhaustive search
train_size_min = 0.35
train_size_max = 0.95
train_size_step = 0.05

# Number of iterations for each training-set size
n_iter = 100

# Lists that will store the results
steps = []
medias = []
variancias = []

train_size_atual = train_size_min
while train_size_atu...
bayesiano.ipynb
tiagoft/inteligencia_computacional
mit
Maximum A Posteriori (MAP) Classifier Although the ML classifier is quite relevant in many applications, the prior probabilities may be important in certain problems. When the result for a population is more important than the result for each individual, the criteri...
import math

def treinamento_GMM_MAP(train_size=0.3, n_components=2):
    # Split the data appropriately
    dados_treino, dados_teste, rotulos_treino, rotulos_teste =\
        train_test_split(altura_volei + altura_futebol, rotulos_volei + rotulos_futebol, train_size=train_size)
    treino_futebol = [dados_treino[i] f...
bayesiano.ipynb
tiagoft/inteligencia_computacional
mit
The MAP criterion minimizes the theoretical error probability of the estimator. Although this is a relevant result, it is also important to consider that the estimation of the probabilities involved may not always be optimal. Compared to the ML estimator, the MAP estimator can achieve better theoretical results. At the same ti...
# Parameters for the exhaustive search
train_size_min = 0.35
train_size_max = 0.95
train_size_step = 0.05

# Number of iterations for each training-set size
n_iter = 100

# Lists that will store the results
steps1 = []
medias1 = []
variancias1 = []

train_size_atual = train_size_min
while train_size_...
bayesiano.ipynb
tiagoft/inteligencia_computacional
mit
Note that, although MAP has a theoretical possibility of achieving a lower error than ML, its mean error is quite similar. The variance of the error also shows similar behaviour, increasing as the training set grows. This variance does not stem from a degradation of the model, but rather from the decrea...
def treinamento_GMM_nao_supervisionado():
    # Specify the mixture parameters
    g = mixture.GMM(n_components=2)

    # Train the GMM model
    g.fit(altura_volei + altura_futebol)

    # Check which Gaussian corresponds to each label
    if g.means_[0][0] > g.means_[1][0]:
        rotulos = ('V', 'F')
    else...
bayesiano.ipynb
tiagoft/inteligencia_computacional
mit
We can see that unsupervised training, since it uses the whole data set for training/testing, shows no performance fluctuations. At the same time, we cannot say that this model generalizes to other points, since it is a model that was trained and tested on the same data set. The non-gen...
plt.figure();
plt.errorbar(steps, medias, yerr=variancias);
plt.errorbar(steps1, medias1, yerr=variancias1, color='red');
plt.plot(steps, [acertos_nao_supervisionados] * len(steps), ':', color='green')
plt.ylabel('Indice de acertos');
plt.xlabel('Tamanho do conjunto de treino');
bayesiano.ipynb
tiagoft/inteligencia_computacional
mit
2D trajectory interpolation The file trajectory.npz contains 3 Numpy arrays that describe a 2d trajectory of a particle as a function of time: t which has discrete values of time t[i]. x which has values of the x position at those times: x[i] = x(t[i]). y which has values of the y position at those times: y[i] = y(t[i...
# YOUR CODE HERE
with np.load('trajectory.npz') as data:
    t = data['t']
    x = data['x']
    y = data['y']
print(t)
print(x)
print(y)

assert isinstance(x, np.ndarray) and len(x)==40
assert isinstance(y, np.ndarray) and len(y)==40
assert isinstance(t, np.ndarray) and len(t)==40
assignments/assignment08/InterpolationEx01.ipynb
sraejones/phys202-2015-work
mit
Use these arrays to create interpolated functions $x(t)$ and $y(t)$. Then use those functions to create the following arrays: newt which has 200 points between ${t_{min},t_{max}}$. newx which has the interpolated values of $x(t)$ at those times. newy which has the interpolated values of $y(t)$ at those times.
# YOUR CODE HERE
f = interp1d(t, x)
g = interp1d(t, y)
newt = np.linspace(np.min(t), np.max(t), 200)
newx = f(newt)
newy = g(newt)

assert newt[0]==t.min()
assert newt[-1]==t.max()
assert len(newt)==200
assert len(newx)==200
assert len(newy)==200
assignments/assignment08/InterpolationEx01.ipynb
sraejones/phys202-2015-work
mit
Make a parametric plot of ${x(t),y(t)}$ that shows the interpolated values and the original points: For the interpolated points, use a solid line. For the original points, use circles of a different color and no line. Customize your plot to make it effective and beautiful.
# YOUR CODE HERE
plt.plot(newx, newy, marker='.', label='interpolated')
plt.plot(x, y, marker='o', label='original')
plt.legend()

assert True # leave this to grade the trajectory plot
assignments/assignment08/InterpolationEx01.ipynb
sraejones/phys202-2015-work
mit
Classes are used to create objects, which are how the logic of a software program is implemented.
atom = Atom()
Classes-en.ipynb
acs/python-red
mit
In this sample, the class does not receive any kind of configuration (params). Objects are normally created using params, which define the initial state of the object. In this example, a Carbon object is created with 6 electrons and 6 protons.
class Atom():
    def __init__(self, electrons, protons):
        self.electrons = electrons
        self.protons = protons

carbon = Atom(electrons=6, protons=6)
Classes-en.ipynb
acs/python-red
mit
To create a new Object using initial params, a special method must be defined in the Class: __init__. This method will be called automatically when the object is created, and the params used in the object creation are passed to this init method. The first param of the __init__ method, and the first param in all the me...
class Atom():
    def __init__(self, electrons, protons):
        self.electrons = electrons
        self.protons = protons

    def get_electrons(self):
        return self.electrons

    def get_protons(self):
        return self.protons

carbon = Atom(electrons=6, protons=6)
print("The carbon atom has %i ele...
Classes-en.ipynb
acs/python-red
mit
Inheritance A class can be defined by reusing (specializing) another class (its base class). Let's create a new class to work specifically with Carbon atoms.
class Carbon(Atom):
    def __init__(self):
        self.electrons = 6
        self.protons = 6

carbon = Carbon()
print("The carbon atom has %i electrons" % carbon.get_electrons())
print("The carbon atom has %i protons" % carbon.get_protons())
Classes-en.ipynb
acs/python-red
mit
The new Carbon class is a specialization of the base class Atom. It already knows the numbers of electrons and protons, so they are not needed in the init params. And all the methods included in the Atom base class are also available in this new class. The Carbon class can redefine the methods from the base class....
class Carbon12(Carbon):
    def __init__(self):
        self.electrons = 6
        self.protons = 6
        self.neutrons = 6
Classes-en.ipynb
acs/python-red
mit
All atoms have neutrons in the nucleus, so this param must also be added to the Atom class that models what an Atom is. And also, a method to get the value. This way, there is no need to create a new class for each isotope, because this information can be stored in the Atom class directly. Using this approach the...
class Atom():
    def __init__(self, electrons, protons, neutrons):
        self.electrons = electrons
        self.protons = protons
        self.neutrons = neutrons

    def get_electrons(self):
        return self.electrons

    def get_protons(self):
        return self.protons

    def get_neutrons...
Classes-en.ipynb
acs/python-red
mit
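Since the cell above is truncated, here is a hedged reconstruction of the idea it describes: neutrons stored on Atom with a default of 6, so new Carbon atoms default to the Carbon12 isotope (class and parameter names follow the text; the details of the original cell are my assumption):

```python
class Atom:
    def __init__(self, electrons, protons, neutrons=6):
        self.electrons = electrons
        self.protons = protons
        self.neutrons = neutrons

    def get_neutrons(self):
        return self.neutrons

class Carbon(Atom):
    def __init__(self, neutrons=6):
        # Carbon always has 6 electrons and 6 protons; only the isotope varies.
        super().__init__(electrons=6, protons=6, neutrons=neutrons)

print(Carbon().get_neutrons())            # default isotope: 6 neutrons
print(Carbon(neutrons=7).get_neutrons())  # carbon-13
```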
Now, Atoms are created by default with 6 neutrons, like Carbon12. So this is the default isotope generated when creating new Atoms. But this value can be changed just by passing a different value in the constructor.
carbon13 = Carbon(neutrons=7) print(carbon13.get_neutrons())
Classes-en.ipynb
acs/python-red
mit
Class Static Methods and Variables Analyzing the Carbon class, the number of electrons and protons is always the same, no matter which carbon isotope object we are creating. So this data is not object specific but class specific. All the objects created from this Carbon class will have the same values for electrons...
class Atom(): electrons = None protons = None def __init__(self, neutrons): self.neutrons = neutrons @classmethod def get_electrons(cls): print(cls) return cls.electrons @classmethod def get_protons(cls): return cls.protons def get_neutrons...
Classes-en.ipynb
acs/python-red
mit
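The idea of class-level data plus `@classmethod` can be sketched for Carbon specifically (a minimal, self-contained version of the pattern shown above):

```python
class Carbon:
    # Class-level data: identical for every Carbon instance
    electrons = 6
    protons = 6

    def __init__(self, neutrons=6):
        self.neutrons = neutrons  # instance-specific (the isotope)

    @classmethod
    def get_electrons(cls):
        # A classmethod receives the class itself, not an instance
        return cls.electrons


# The class method works both on the class and on any instance
print(Carbon.get_electrons())              # 6
print(Carbon(neutrons=7).get_electrons())  # 6
```

Whichever isotope object we create, the shared class attributes stay the same; only `neutrons` varies per instance.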
Private Methods It is possible to declare methods of a class as private. These methods cannot be used outside the class definition, and they are key to enforcing encapsulation of logic.
class Atom(): electrons = None protons = None def __init__(self, neutrons=None): self.neutrons = neutrons self.__atom_private() @classmethod def get_electrons(cls): print(cls) return cls.electrons @classmethod def get_protons(cls): return cl...
Classes-en.ipynb
acs/python-red
mit
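In Python, "private" methods are conventionally written with a double-underscore prefix, which triggers name mangling. A minimal sketch (the `__check` method and its validation logic are hypothetical, not from the notebook):

```python
class Atom:
    def __init__(self, neutrons=None):
        self.neutrons = neutrons
        self.__check()  # usable inside the class definition

    def __check(self):
        # Double-underscore names are mangled to _Atom__check, which
        # discourages (but does not fully prevent) use from outside
        if self.neutrons is not None and self.neutrons < 0:
            raise ValueError("neutrons must be non-negative")


atom = Atom(neutrons=6)
try:
    atom.__check()  # fails: the name is mangled outside the class
except AttributeError as err:
    print("not accessible from outside:", err)
```

Note that mangling is a convention-based mechanism: the method is still reachable as `atom._Atom__check()`, so it protects against accidents, not determined callers.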
Build the image and push it to your project's Container Registry Hint: Review the Cloud Build gcloud command line reference for builds submit. Your image should follow the format gcr.io/[PROJECT_ID]/[IMAGE_NAME]. Note the source code for the tfx-cli is in the directory ./tfx-cli_vertex. It has a helper function tfx_pi...
IMAGE_NAME = "tfx-cli_vertex" IMAGE_URI = f"gcr.io/{PROJECT_ID}/{IMAGE_NAME}" IMAGE_URI # TODO: Your gcloud command here to build tfx-cli and submit to Container Registry. !gcloud builds submit --timeout=15m --tag {IMAGE_URI} {IMAGE_NAME}
notebooks/tfx_pipelines/cicd/solutions/tfx_cicd_vertex.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Note: building and deploying the container below is expected to take 10-15 min. Exercise: manually trigger a CI/CD pipeline run with Cloud Build You can manually trigger Cloud Build runs using the gcloud builds submit command. See the documentation for passing the cloudbuild_vertex.yaml file and the substitutions.
SUBSTITUTIONS = f"_REGION={REGION}" # TODO: write gcloud builds submit command to trigger manual pipeline run. !gcloud builds submit . --timeout=2h --config cloudbuild_vertex.yaml --substitutions {SUBSTITUTIONS}
notebooks/tfx_pipelines/cicd/solutions/tfx_cicd_vertex.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Weather Conditions
fig = plt.figure() ax = fig.add_subplot(111) ax.hist(accidents['Weather_Conditions'], range = (accidents['Weather_Conditions'].min(),accidents['Weather_Conditions'].max())) counts, bins, patches = ax.hist(accidents['Weather_Conditions'], facecolor='green', edgecolor='gray') ax.set_xticks(bins) plt.title('Weath...
week8/2_bi_dashboards_reporting/2 - Build a Shareable Dashboard Using iPython Notebook and Matplotlib.ipynb
rdempsey/web-scraping-data-mining-course
mit
Light Conditions
accidents.boxplot(column='Light_Conditions', return_type='dict');
week8/2_bi_dashboards_reporting/2 - Build a Shareable Dashboard Using iPython Notebook and Matplotlib.ipynb
rdempsey/web-scraping-data-mining-course
mit
Light Conditions Grouped by Weather Conditions
accidents.boxplot(column='Light_Conditions', by = 'Weather_Conditions', return_type='dict');
week8/2_bi_dashboards_reporting/2 - Build a Shareable Dashboard Using iPython Notebook and Matplotlib.ipynb
rdempsey/web-scraping-data-mining-course
mit
Categorical Variable Analysis Distribution of Casualties by Day of the Week
casualty_count = accidents.groupby('Day_of_Week').Number_of_Casualties.count() casualty_probability = accidents.groupby('Day_of_Week').Number_of_Casualties.sum()/accidents.groupby('Day_of_Week').Number_of_Casualties.count() fig = plt.figure(figsize=(8,4)) ax1 = fig.add_subplot(121) ax1.set_xlabel('Day of Week') ax1.set...
week8/2_bi_dashboards_reporting/2 - Build a Shareable Dashboard Using iPython Notebook and Matplotlib.ipynb
rdempsey/web-scraping-data-mining-course
mit
Time-Series Analysis Number of Casualties Over Time (Entire Dataset)
# Create a dataframe containing the total number of casualties by date casualty_count = accidents.groupby('Date').agg({'Number_of_Casualties': np.sum}) # Convert the index to a DateTimeIndex casualty_count.index = pd.to_datetime(casualty_count.index) # Sort the index so the plot looks correct casualty_count.sort_inde...
week8/2_bi_dashboards_reporting/2 - Build a Shareable Dashboard Using iPython Notebook and Matplotlib.ipynb
rdempsey/web-scraping-data-mining-course
mit
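The steps described above (group by date, convert the index to a DateTimeIndex, sort) can be sketched on a tiny synthetic stand-in for the accidents dataframe; the column names follow the notebook, but the data here is made up:

```python
import pandas as pd

# Tiny synthetic stand-in for the accidents dataframe
accidents = pd.DataFrame({
    "Date": ["2000-01-02", "2000-01-01", "2000-01-01", "2000-01-03"],
    "Number_of_Casualties": [2, 1, 3, 1],
})

# Total number of casualties by date
casualty_count = accidents.groupby("Date").agg({"Number_of_Casualties": "sum"})

# Convert the index to a DateTimeIndex
casualty_count.index = pd.to_datetime(casualty_count.index)

# Sort the index so the plot looks correct
casualty_count.sort_index(inplace=True)

print(casualty_count)
```

After these steps, `casualty_count.plot()` produces a correctly ordered time series.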
Number of Casualties in The Year 2000
# Plot one year of the data casualty_count['2000'].plot(figsize=(18, 4))
week8/2_bi_dashboards_reporting/2 - Build a Shareable Dashboard Using iPython Notebook and Matplotlib.ipynb
rdempsey/web-scraping-data-mining-course
mit
Number of Casualties in the 1980's (Bar Graph)
# Plot the yearly total casualty count for each year in the 1980's the1980s = casualty_count['1980-01-01':'1989-12-31'].groupby(casualty_count['1980-01-01':'1989-12-31'].index.year).sum() the1980s # Show the plot the1980s.plot(kind='bar', figsize=(18, 4))
week8/2_bi_dashboards_reporting/2 - Build a Shareable Dashboard Using iPython Notebook and Matplotlib.ipynb
rdempsey/web-scraping-data-mining-course
mit
Number of Casualties in the 1980's (Line Graph)
# Plot the 80's data as a line graph to better see the differences in years the1980s.plot(figsize=(18, 4))
week8/2_bi_dashboards_reporting/2 - Build a Shareable Dashboard Using iPython Notebook and Matplotlib.ipynb
rdempsey/web-scraping-data-mining-course
mit
1D Automata
class Automaton_1D: def __init__(self, n: int, states: int=2): """ 1D Automaton :param n: number of cells """ self.n = n self.space = np.zeros(n, dtype=np.uint8) self.space[n//2] = 1 #np.array([0,0,0,0,1,0,0,0,0,0])#np.random.choice(2, n) def ...
cellular automata/Cellular Automata.ipynb
5agado/data-science-learning
apache-2.0
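Since the `Automaton_1D` cell above is truncated, here is a minimal self-contained sketch of one elementary-CA update step, using rule 30 as an example (the function name and the toroidal wrap via `np.roll` are assumptions, not necessarily the notebook's implementation):

```python
import numpy as np

def step(space, rule=30):
    """One update of an elementary (1D, 2-state) cellular automaton.

    Each cell's next state is looked up from the bits of the rule
    number, indexed by its (left, center, right) neighbourhood.
    """
    left = np.roll(space, 1)
    right = np.roll(space, -1)
    idx = left * 4 + space * 2 + right  # neighbourhood as a 3-bit number
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    return table[idx]

space = np.zeros(11, dtype=np.uint8)
space[5] = 1
print(space)
print(step(space))  # rule 30: [0 0 0 0 1 1 1 0 0 0 0]
```

Iterating `step` row by row and stacking the results produces the familiar triangular rule-30 pattern.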
Animation
automaton_size = 100 automaton_1d = Automaton_1D(automaton_size) nb_frames = 100 img = Image.new('RGB', (automaton_size, nb_frames), 'white') draw = ImageDraw.Draw(img) fig, ax = plt.subplots(dpi=50, figsize=(5, 5)) #im = ax.imshow(img) plt.axis('off') def animate(i, automaton, draw, img): space_img = Image.froma...
cellular automata/Cellular Automata.ipynb
5agado/data-science-learning
apache-2.0
Conway’s Game Of Life Game Of Life (GOL) is possibly one of the most famous examples of a cellular automaton. Defined by mathematician John Horton Conway, it plays out on a two-dimensional grid in which each cell can be in one of two possible states. Starting from an initial grid configuration the system evolves at ...
class ConwayGOL_2D: def __init__(self, N): """ 2D Conway Game of Life :param N: grid side size (resulting grid will be a NxN matrix) """ self.N = N self.grid = np.random.choice(2, (N,N)) def update(self): """ Update status of the grid ...
cellular automata/Cellular Automata.ipynb
5agado/data-science-learning
apache-2.0
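The GOL update rules (a dead cell with exactly 3 live neighbours is born; a live cell with 2 or 3 live neighbours survives) can be sketched vectorized with numpy; the toroidal wrap-around via `np.roll` is an assumption, not necessarily how the class above handles edges:

```python
import numpy as np

def gol_step(grid):
    """One Game of Life update on a toroidal grid (edges wrap around)."""
    # Sum the 8 neighbours of every cell using shifted copies of the grid
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Born with exactly 3 neighbours; survive with 2 or 3
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(np.uint8)

# A "blinker" oscillates between a horizontal and a vertical bar
grid = np.zeros((5, 5), dtype=np.uint8)
grid[2, 1:4] = 1
print(gol_step(grid))
```

Applying `gol_step` twice to the blinker returns it to its original orientation, a quick sanity check for any GOL implementation.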
Animation in Matplotlib
gol = ConwayGOL_2D(100) fig, ax = plt.subplots(dpi=100, figsize=(5, 4)) im = ax.imshow(gol.grid, cmap='Greys', interpolation='nearest') plt.axis('off') def animate(i): gol.update() im.set_data(gol.grid) #ani = animation.FuncAnimation(fig, animate, frames=1000, interval=100).save('basic_animation.mp4', writer...
cellular automata/Cellular Automata.ipynb
5agado/data-science-learning
apache-2.0
Interactive Animation
from ipywidgets import interact, widgets def run_conwayGOL_2D(size): gol = ConwayGOL_2D(size) fig, ax = plt.subplots(dpi=100, figsize=(5, 4)) im = ax.imshow(gol.grid, cmap='Greys', interpolation='nearest') plt.axis('off') def animate(i): gol.update() im.set_data(gol.grid) ret...
cellular automata/Cellular Automata.ipynb
5agado/data-science-learning
apache-2.0
Game of Life 3D The grid structure and neighbor counting are purely a matter of using a 3-dimensional numpy array and related indexing. As for the rules, the original GOL ones are not so stable in a 3D setup.
class ConwayGOL_3D: def __init__(self, N): """ 3D Conway Game of Life :param N: 3D grid side size (resulting grid will be a NxNxN matrix) """ self.N = N self.grid = np.random.choice(2, (N,N,N)) def update(self): """ Update status of the grid ...
cellular automata/Cellular Automata.ipynb
5agado/data-science-learning
apache-2.0
Performance Profiling Relying on the utility code for the generic CA
%load_ext autoreload %autoreload 2 from Automaton import AutomatonND rule = {'neighbours_count_born': 3, # count required to make a cell alive 'neighbours_maxcount_survive': 3, # max number (inclusive) of neighbours that a cell can handle before dying 'neighbours_mincount_survive': 2, # min number ...
cellular automata/Cellular Automata.ipynb
5agado/data-science-learning
apache-2.0
Multiple Neighborhood CA Expands further on CA like GOL by considering more neighbors or multiple combinations of neighbors. See Multiple Neighborhood Cellular Automata (MNCA)
%load_ext autoreload %autoreload 2 import cv2 from PIL import Image as IMG from Automaton import AutomatonND, MultipleNeighborhoodAutomaton, get_kernel_2d_square from mnca_utils import * from ds_utils.video_utils import generate_video configs = [ {'neighbours_count_born': [0.300, 0.350], 'neighbour...
cellular automata/Cellular Automata.ipynb
5agado/data-science-learning
apache-2.0
Performance Profiling
%%prun -s cumulative -l 30 -r # We profile the cell, sort the report by "cumulative # time", limit it to 30 lines configs = [ {'neighbours_count_born': [0.300, 0.350], 'neighbours_maxcount_survive': [0.350, 0.400], 'neighbours_mincount_survive': [0.750, 0.850], }, {'ne...
cellular automata/Cellular Automata.ipynb
5agado/data-science-learning
apache-2.0
Generate Video
def base_frame_gen(frame_count, automaton): automaton.update() img = cv2.normalize(automaton.grid, None, 255, 0, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U) return img nb_rows = nb_cols = 300 simulation_steps = 120 automaton_name = 'mca_6polygon_kernel_fill_radinc' out_path = Path.home() / f'Documents/gra...
cellular automata/Cellular Automata.ipynb
5agado/data-science-learning
apache-2.0
Scientific notation:
1.5e-10 * 1000
00_Python_Crash_Course.ipynb
ondrolexa/sg2
mit
Python has a number of defined operators for handling numbers through arithmetic calculations, logic operations (that test whether a condition is true or false) or bitwise processing (where the numbers are processed in binary form). Arithmetic Operations Sum (+) Difference (-) Multiplication (*) Division (/) Integer D...
import math math.sqrt(2)
00_Python_Crash_Course.ipynb
ondrolexa/sg2
mit
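The operator groups listed above can be tried directly in a cell:

```python
# Arithmetic
print(7 / 2)    # 3.5  true division
print(7 // 2)   # 3    integer division
print(7 % 2)    # 1    modulus (remainder)
print(2 ** 10)  # 1024 exponentiation

# Logic operations (comparisons return booleans)
print(3 > 2 and 2 > 1)  # True

# Bitwise (the numbers are processed in binary form)
print(0b1100 & 0b1010)  # 8  -> 0b1000
print(0b1100 | 0b1010)  # 14 -> 0b1110
print(0b1100 ^ 0b1010)  # 6  -> 0b0110
```

Note that `/` always returns a float in Python 3, while `//` truncates toward negative infinity.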
Variables You can define variables using the equals (=) sign:
width = 20 length = 30 area = length*width area
00_Python_Crash_Course.ipynb
ondrolexa/sg2
mit
You can name a variable almost anything you want. It must start with an alphabetical character or "_" and can contain alphanumeric characters plus underscores ("_"). Certain words, however, are reserved for the language: and, as, assert, break, class, continue, def, del, elif, else, except, exec, finally, for, from, g...
'I love Structural Geology!' "I love Structural Geology!" '''I love Structural Geology'''
00_Python_Crash_Course.ipynb
ondrolexa/sg2
mit
But not both at the same time, unless you want one of the symbols to be part of the string.
"He's a geologist" 'She asked, "Are you crazy?"'
00_Python_Crash_Course.ipynb
ondrolexa/sg2
mit
Just like the numbers we're familiar with, you can assign a string to a variable:
greeting = "I love Structural Geology!"
00_Python_Crash_Course.ipynb
ondrolexa/sg2
mit
The print function is often used for printing character strings:
print(greeting)
00_Python_Crash_Course.ipynb
ondrolexa/sg2
mit
You can use the + operator to concatenate strings together:
"I " + "love " + "Structural " + "Geology!"
00_Python_Crash_Course.ipynb
ondrolexa/sg2
mit
The operator % is used for string interpolation. Interpolation is more efficient in its use of memory than conventional concatenation. Symbols used in the interpolation: %s: string. %d: integer. %o: octal. %x: hexadecimal. %f: real. %e: real exponential. %%: percent sign. Symbols can be used to display numbers in v...
# Zeros left print('Now is %02d:%02d.' % (16, 30)) # Real (the number after the decimal point specifies how many decimal digits) print('Percent: %.1f%%, Exponential: %.2e' % (5.333, 0.00314)) # Octal and hexadecimal print('Decimal: %d, Octal: %o, Hexadecimal: %x' % (10, 10, 10))
00_Python_Crash_Course.ipynb
ondrolexa/sg2
mit
In addition to interpolation operator %, there is the string method and function format(). Examples:
# Parameters are identified by order print('The area of square with side {0} is {1}'.format(5, 5*5)) # Parameters are identified by name print('{greeting}, it is {hour:02d}:{minute:02d}AM'.format(greeting='Hi', hour=7, minute=30)) # Builtin function format() print('Pi =', format(math.pi, '.15f'))
00_Python_Crash_Course.ipynb
ondrolexa/sg2
mit
Slicing strings Slices of strings can be obtained by adding indexes between brackets after a string. Python indexes: Start with zero. Count from the end if they are negative. Can be defined as sections, in the form [start: end + 1: step]. If the start is not set, it will be considered as zero. If end + 1 is not set, it wil...
greeting greeting[7] greeting[2:6] greeting[18:] greeting[7:17:2]
00_Python_Crash_Course.ipynb
ondrolexa/sg2
mit
It is possible to invert strings by using a negative step:
greeting[::-1]
00_Python_Crash_Course.ipynb
ondrolexa/sg2
mit
Types Every value or variable in Python has a type. There are several pre-defined (builtin) simple types of data in Python, such as: Numbers: Integer (int), Floating Point real (float), Complex (complex) Text You can view the type of a variable using the function type:
type(area) type(math.sqrt(2)) type(greeting)
00_Python_Crash_Course.ipynb
ondrolexa/sg2
mit
Furthermore, there are types that function as collections. The main ones are: List Tuple Dictionary Python types can be: Mutable: allow the contents of the variables to be changed. Immutable: do not allow the contents of variables to be changed. The most common types and routines are implemented in the form of buil...
planets = ['Mercury', 'Venus', 'Earth', 'Mars', 'Jupiter', 'Saturn', 'Uranus', 'Neptune']
00_Python_Crash_Course.ipynb
ondrolexa/sg2
mit
You can access members of the list using the index of that item:
planets[2]
00_Python_Crash_Course.ipynb
ondrolexa/sg2
mit
The -1 element of a list is the last element:
planets[-1]
00_Python_Crash_Course.ipynb
ondrolexa/sg2
mit
Lists can be sliced in the same way as strings.
planets[:4]
00_Python_Crash_Course.ipynb
ondrolexa/sg2
mit