Dataset columns: markdown (string, 0–1.02M chars), code (string, 0–832k), output (string, 0–1.02M), license (string, 3–36), path (string, 6–265), repo_name (string, 6–127).
Simple Example. Let us now evaluate the equation $$ y = x^2 $$ for $$ x = 25 $$.
x = 25
y = x**2
print(y)
625
MIT
Untitled.ipynb
nyimbi/caseke
Seminar: Monte-Carlo tree search. In this seminar, we'll implement vanilla MCTS planning and use it to solve some Gym envs. But before we do that, we first need to modify the Gym env to allow saving and loading game states, which makes backtracking possible.
from gym.core import Wrapper
from pickle import dumps, loads
from collections import namedtuple

# a container for the get_result function below. Works just like a tuple, but prettier
ActionResult = namedtuple("action_result", ("snapshot", "observation", "reward", "is_done", "info"))

class WithSnapshots(Wrapper):
    """
    Crea...
_____no_output_____
Unlicense
week2_value_based/seminar2_MCTS.ipynb
Maverobot/Practical_RL
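The core idea of the `WithSnapshots` wrapper is that `pickle.dumps`/`pickle.loads` can freeze and restore the full environment state. A minimal self-contained sketch of that idea, using a toy counter object standing in for a Gym env (the class and method names mirror the wrapper above, but the toy env is an assumption for illustration):

```python
from pickle import dumps, loads

class TinyCounterEnv:
    """A stand-in for a Gym env: its whole state is one counter."""
    def __init__(self):
        self.t = 0

    def step(self):
        self.t += 1
        return self.t

    # the two methods the WithSnapshots wrapper adds
    def get_snapshot(self):
        return dumps(self.t)          # serialize the full state

    def load_snapshot(self, snapshot):
        self.t = loads(snapshot)      # restore it for backtracking

env = TinyCounterEnv()
snap0 = env.get_snapshot()            # remember the initial state
env.step(); env.step(); env.step()
assert env.t == 3
env.load_snapshot(snap0)              # backtrack to the snapshot
assert env.t == 0
```

A real Gym env holds more state (RNG, physics), which is why the wrapper pickles the whole unwrapped env rather than one attribute.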
try out snapshots:
# make env
env = WithSnapshots(gym.make("CartPole-v0"))
env.reset()
n_actions = env.action_space.n

print("initial_state:")
plt.imshow(env.render('rgb_array'))

# create first snapshot
snap0 = env.get_snapshot()

# play without making snapshots (faster)
while True:
    is_done = env.step(env.action_space.sample())[2]
    ...
_____no_output_____
Unlicense
week2_value_based/seminar2_MCTS.ipynb
Maverobot/Practical_RL
MCTS: Monte-Carlo tree search. In this section, we'll implement the vanilla MCTS algorithm with UCB1-based node selection. We will start by implementing the `Node` class: a simple class that acts as an MCTS node and supports some of the MCTS algorithm steps. This MCTS implementation makes some assumptions about the enviro...
assert isinstance(env, WithSnapshots)

class Node:
    """ a tree node for MCTS """
    # metadata:
    parent = None        # parent Node
    value_sum = 0.       # sum of state values from all visits (numerator)
    times_visited = 0    # counter of visits (denominator)

    def __init__(self, parent, action, ...
_____no_output_____
Unlicense
week2_value_based/seminar2_MCTS.ipynb
Maverobot/Practical_RL
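The `value_sum` / `times_visited` attributes above exist to compute the UCB1 selection score. A minimal sketch of that score, assuming the standard UCB1 form (the exploration constant `c` and function name are assumptions, not from the notebook):

```python
import math

def ucb1(value_sum, times_visited, parent_visits, c=2.0):
    """UCB1 score: mean value plus an exploration bonus.

    value_sum / times_visited mirror the Node attributes above;
    c is an assumed exploration constant.
    """
    if times_visited == 0:
        return float("inf")           # always try unvisited children first
    mean_value = value_sum / times_visited
    bonus = math.sqrt(c * math.log(parent_visits) / times_visited)
    return mean_value + bonus

# an unvisited child dominates any visited one
assert ucb1(0.0, 0, 10) > ucb1(5.0, 5, 10)
# with equal mean values, the less-visited child scores higher
assert ucb1(1.0, 1, 10) > ucb1(10.0, 10, 10)
```

Selecting the child with the maximal score balances exploiting high-value branches against exploring rarely visited ones.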
Main MCTS loop. With everything we have implemented, the main MCTS loop boils down to a short piece of code.
def plan_mcts(root, n_iters=10):
    """
    Builds the tree with Monte-Carlo tree search for n_iters iterations.
    :param root: tree node to plan from
    :param n_iters: how many select-expand-simulate-propagate loops to make
    """
    for _ in range(n_iters):
        node = <select best leaf>
        if node.is_done...
_____no_output_____
Unlicense
week2_value_based/seminar2_MCTS.ipynb
Maverobot/Practical_RL
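The select / expand / simulate / propagate loop can be shown end-to-end on the simplest possible tree: a depth-1 tree where each child of the root is one arm of a bandit. This is a hedged sketch, not the notebook's implementation; the Bernoulli reward model and all names are assumptions:

```python
import math, random

def plan_mcts_bandit(arm_means, n_iters=2000, seed=0):
    """Toy MCTS loop on a depth-1 tree (a bandit). Expansion is trivial
    because the tree is one level deep; the other three steps are shown."""
    rng = random.Random(seed)
    n = len(arm_means)
    value_sum = [0.0] * n
    visits = [0] * n
    for t in range(1, n_iters + 1):
        # select: pick the child with the best UCB1 score
        def score(i):
            if visits[i] == 0:
                return float("inf")
            return value_sum[i] / visits[i] + math.sqrt(2 * math.log(t) / visits[i])
        i = max(range(n), key=score)
        # simulate: roll out from the chosen child (here: one Bernoulli draw)
        reward = 1.0 if rng.random() < arm_means[i] else 0.0
        # propagate: update statistics along the path back to the root
        value_sum[i] += reward
        visits[i] += 1
    return max(range(n), key=lambda i: visits[i])  # most-visited child

best = plan_mcts_bandit([0.2, 0.5, 0.8])
assert best == 2   # the best arm ends up most visited
```

The real `plan_mcts` repeats the same four steps, but selection descends a whole path of `Node`s and propagation walks it back up.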
Plan and execute. In this section, we use the MCTS implementation to find an optimal policy.
root_observation = env.reset()
root_snapshot = env.get_snapshot()
root = Root(root_snapshot, root_observation)

# plan from root:
plan_mcts(root, n_iters=1000)

from IPython.display import clear_output
from itertools import count
from gym.wrappers import Monitor

total_reward = 0  # sum of rewards
test_env = lo...
_____no_output_____
Unlicense
week2_value_based/seminar2_MCTS.ipynb
Maverobot/Practical_RL
Travelling Salesperson Problem (Brute-force Search)

> A solution to the travelling salesperson problem using a brute-force search.

- toc: true
- badges: true
- comments: true
- categories: [algorithm, graphs]
- permalink: /2022/05/12/travelling_salesperson_problem_bruteforce/

Notes

The **Travelling Salesperson Problem** is ...
import random as rand
import math
import itertools as it
import networkx as nx
_____no_output_____
MIT
_notebooks/2022-05-12-tsp_bruteforce.ipynb
ljk233/blog
Function
def bruteforce_tsp(G: nx.Graph, start: object) -> float | int:
    """Return the cost of the shortest route that visits every city
    exactly once and ends back at the start.

    Solves the travelling salesperson problem with a brute-force
    search using permutations.

    Preconditions:
    - G is a complete weighted graph
    - star...
_____no_output_____
MIT
_notebooks/2022-05-12-tsp_bruteforce.ipynb
ljk233/blog
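The permutation idea behind `bruteforce_tsp` can be sketched without networkx, using a nested dict where `weights[u][v]` is the edge weight (the dict representation and function name are assumptions for this self-contained sketch):

```python
import itertools as it

def bruteforce_tsp_dict(weights, start):
    """Brute-force TSP on a complete weighted graph stored as a
    nested dict: try every ordering of the non-start cities and
    keep the cheapest closed tour."""
    others = [v for v in weights if v != start]
    best = None
    for perm in it.permutations(others):
        route = (start, *perm, start)
        cost = sum(weights[u][v] for u, v in zip(route, route[1:]))
        best = cost if best is None else min(best, cost)
    return best

# a 4-city example: the optimal tour a->b->c->d->a costs 1+1+1+1 = 4
w = {
    'a': {'b': 1, 'c': 9, 'd': 1},
    'b': {'a': 1, 'c': 1, 'd': 9},
    'c': {'a': 9, 'b': 1, 'd': 1},
    'd': {'a': 1, 'b': 9, 'c': 1},
}
assert bruteforce_tsp_dict(w, 'a') == 4
```

Fixing the start city means only the (n-1)! orderings of the remaining cities need to be enumerated.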
Example usage Initialise the graph
cg = nx.complete_graph(['origin', 'a', 'b', 'c', 'd'])
g = nx.Graph((u, v, {'weight': rand.randint(1, 10)}) for u, v in cg.edges)
print(f"g = {g}")
g = Graph with 5 nodes and 10 edges
MIT
_notebooks/2022-05-12-tsp_bruteforce.ipynb
ljk233/blog
Find the shortest path from the origin
print(f"Shortest path from the origin = {bruteforce_tsp(g, 'origin')}")
Shortest path from the origin = 24
MIT
_notebooks/2022-05-12-tsp_bruteforce.ipynb
ljk233/blog
Performance
for n in [4, 6, 8, 10]:
    print(f"|nodes(g)| = {n}")
    cg = nx.complete_graph(n)
    g = nx.Graph((u, v, {'weight': rand.randint(1, 10)}) for u, v in cg.edges)
    %timeit bruteforce_tsp(g, 1)
|nodes(g)| = 4
17.6 µs ± 214 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
|nodes(g)| = 6
472 µs ± 5.3 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
|nodes(g)| = 8
26.3 ms ± 203 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
|nodes(g)| = 10
2.29 s ± 25.9 ms per loop (mean ± std. dev...
MIT
_notebooks/2022-05-12-tsp_bruteforce.ipynb
ljk233/blog
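The timings above blow up because the number of routes examined grows as (n-1)! once the start city is fixed. A quick check of that growth (the helper name is an assumption):

```python
import math

def n_routes(n):
    """Routes a brute-force search examines on a complete graph
    with n nodes when the start city is fixed."""
    return math.factorial(n - 1)

for n in [4, 6, 8, 10]:
    print(n, n_routes(n))

assert n_routes(4) == 6
assert n_routes(10) == 362880
# each step from n to n+2 multiplies the route count
assert n_routes(6) / n_routes(4) == 20.0
```

This factorial growth is consistent with each two-node increase in the benchmark costing well over an order of magnitude in runtime.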
Sequence Similarity Demo. In this demo, we will answer the question: _How does the primary sequence of TMPRSS2 differ between species that one would encounter in a farm environment?_ We will address this question using sequence alignment and analysis tools from the [Biopython](http://biopython.org/DIST/docs/tutorial/Tutor...
from Bio.Align import AlignInfo, MultipleSeqAlignment
from Bio import AlignIO, Alphabet, SeqRecord, Seq, SubsMat
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
Part 1: Read the alignment records. We use `Bio.AlignIO.read` from the Biopython package to read the trimmed alignment file. This function reads the `*.txt` file in the `'fasta'` format and returns an instance of `Bio.Align.MultipleSeqAlignment` (documentation can be found [here](https://...
alignment = AlignIO.read(open('./trimmed_alg.txt'), format='fasta') alignment
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
Each element of this list-like instance is a sequence:
alignment[0]
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
This instance of `Bio.Align.MultipleSeqAlignment` is a lot like a Python list. For instance, you can:
# get the number of sequences in this alignment
print("number of sequence records: ", len(alignment))

# iterate over the sequence records in the alignment
record_counter = 0
for record in alignment:
    record_counter += 1
print("number of sequence records (a different way): ", record_counter)

# get the 100th sequenc...
number of sequence records:  9757
number of sequence records (a different way):  9757
ID of the 100th sequence: 9796.ENSECAP00000016722
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
Filter the sequences in the alignment. For now, we're only interested in "domestic species," or species whose scientific name is in the Python list `domestic_sp_names`:
domestic_sp_names = [
    'Homo sapiens',            # human
    'Mus musculus',            # mouse
    'Canis lupus familiaris',  # dog
    'Felis catus',             # cat
    'Bos taurus',              # cattle
    'Equus caballus',          # horse
    'Gallus gallus',           # chicken
]
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
The sequences in the `Bio.Align.MultipleSeqAlignment` are for **all** the species that EggNOG could find, including worms, polar bears, and other species that we're not interested in. Let's filter out sequences from species whose names are **not** in the list `domestic_sp_names`. To do this, we will:

1. Get the scientifi...
import pandas as pd

tmprss2_ext = pd.read_table('../seq_sim_demo/extended_members.txt', header=None)
tmprss2_ext.columns = ['id_1', 'id_2', 'species', '', '']
tmprss2_ext.head()

for record in alignment:
    # while we're at it, let's make sure that Biopython knows these
    # are protein sequences
    record.seq.alphabet = ...
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
Step 2: Use a list comprehension to filter to domestic species
dom_aln_list = [record for record in alignment if record.description in domestic_sp_names]
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
We see that the length of this filtered list is much shorter:
print("number of records for all species:", len(alignment)) print("number of records for domestic species:", len(dom_aln_list))
number of records for all species: 9757
number of records for domestic species: 732
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
Step 3: Convert this list to a new `MultipleSeqAlignment` instance
dom_aln = MultipleSeqAlignment(dom_aln_list)
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
`dom_aln` has the same data, but is a different type of Python variable:
print("dom_aln_list is type:", type(dom_aln_list)) print("dom_aln is type:", type(dom_aln))
dom_aln_list is type: <class 'list'>
dom_aln is type: <class 'Bio.Align.MultipleSeqAlignment'>
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
Get the sequence of human TMPRSS2. Before we start comparing sequences to each other, let's get the sequence of TMPRSS2 in `Homo sapiens`. This is the sequence that we will compare other species' homologs to. To do this filtering, let's use a list comprehension, then convert to a `MultipleSeqAlignment`, just like we did ...
human_aln_list = [
    record for record in dom_aln
    if record.description == 'Homo sapiens'
]
human_aln = MultipleSeqAlignment(human_aln_list)
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
We see that there are many records in the alignment that have `Homo sapiens` as the species:
len(human_aln)
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
It would be interesting to look at how these 118 variants differ _within_ the human species, but let's move on to our inter-species analysis for this demo. Get the sequence of human isoform 2. Let's find the sequence record that has the same sequence as isoform 2 on the [TMPRSS2 UniProt page](https://www...
isoform_aln_list = [
    record for record in human_aln
    if 'MPPAPPGG' in str(record.seq).replace("-", "")
]
print("number of human sequences that contain MPPAPPGG:", len(isoform_aln_list))

human_iso2 = isoform_aln_list[0]
human_iso2
number of human sequences that contain MPPAPPGG: 1
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
This is an aligned sequence, so it has a lot of `-` characters that signify residues that are missing relative to other sequences in `alignment`:
str(human_iso2.seq)
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
We can remove these characters using Python's string replacement method, allowing us to more easily look at the amino acid sequence:
str(human_iso2.seq).replace('-', '')
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
We also notice that most of the sequence of interest is in the middle of the aligned sequence. Let's trim the aligned sequence to generate a compact aligned sequence that starts with `MPPAPP` and ends with `ADG`. To do this, we will make use of the [`str.index`](https://docs.python.org/2/library/stdtypes.html?highli...
index_nterm = str(human_iso2.seq).index('MPPAPP')
index_cterm = str(human_iso2.seq).index('ADG')

# since we want to cut at ADG^, not ^ADG, we add 3 characters to this index
index_cterm += 3

print("index of N-terminus:", index_nterm)
print("index of C-terminus:", index_cterm)
index of N-terminus: 33713
index of C-terminus: 38856
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
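The motif-based trimming above can be checked on a toy aligned string (the toy sequence and variable names are assumptions; the `+3` adjustment is the one derived in the cell above):

```python
# toy aligned sequence: gaps around a region that starts with MPP and ends with ADG
toy = "----MPPQQQADG----"

i_n = toy.index("MPP")        # position of the N-terminal motif
i_c = toy.index("ADG") + 3    # +3 so the slice ends after ADG, not before it

compact = toy[i_n:i_c]
assert compact == "MPPQQQADG"
```

Because `str.index` returns the position where the motif *starts*, only the C-terminal cut needs the length-of-motif adjustment.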
We can use these indices to trim to the compact sequence:
human_compact = human_iso2[index_nterm:index_cterm] str(human_compact.seq)
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
These N-terminus and C-terminus indices will be useful when we want to trim sequence records for other species. Generate consensus sequences for the mouse homolog. Just like the sequence records for `Homo sapiens`, the records for the other `domestic_sp_names` have duplicates. For example, let's look at `Mus musculus`:
mouse_aln_list = [
    record for record in dom_aln
    if record.description == 'Mus musculus'
]
mouse_aln = MultipleSeqAlignment(mouse_aln_list)
len(mouse_aln)
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
Let's compare 1 sequence, instead of all 146 variants, of the mouse homolog to the human homolog. To do this, we will generate a **consensus sequence** ([Wikipedia](https://en.wikipedia.org/wiki/Consensus_sequence#:~:text=In%20molecular%20biology%20and%20bioinformatics,position%20in%20a%20sequence%20alignment.)) for the mouse v...
mouse_aln_summary = AlignInfo.SummaryInfo(mouse_aln) mouse_aln_summary
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
Step 2
mouse_aln_consensus = mouse_aln_summary.dumb_consensus() mouse_aln_consensus
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
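The idea behind `dumb_consensus` can be shown in plain Python: per alignment column, take the most common residue if it is frequent enough, else emit an ambiguous character. A minimal stand-in for the Biopython call, assuming its documented defaults (0.7 threshold, `X` for ambiguous):

```python
from collections import Counter

def dumb_consensus(seqs, threshold=0.7, ambiguous="X"):
    """Per column, keep the most common residue if its frequency
    reaches `threshold`; otherwise emit the ambiguous character."""
    out = []
    for column in zip(*seqs):
        residue, count = Counter(column).most_common(1)[0]
        out.append(residue if count / len(column) >= threshold else ambiguous)
    return "".join(out)

aln = ["MKV", "MKI", "MKV"]
assert dumb_consensus(aln) == "MKX"   # V appears in only 2/3 < 0.7 of rows
```

Biopython's implementation works the same way column by column, which is why the consensus it returns contains `X` wherever the mouse variants disagree.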
Let's use the N-terminus and C-terminus locations that we calculated above to compact this consensus sequence:
mouse_consensus_compact = mouse_aln_consensus[index_nterm:index_cterm] str(mouse_consensus_compact).replace('X', '-')
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
Finally, this consensus sequence is a `Seq`, not a `SeqRecord`. Let's convert it to a `SeqRecord` so we can compare it to the human sequence:
# convert 'X' to '-' for consistency with the human sequence,
# and convert to a Seq.Seq instance
mouse_replaced_str = str(mouse_consensus_compact).replace('X', '-')
mouse_consensus_replaced = Seq.Seq(mouse_replaced_str)

# then convert to a SeqRecord.SeqRecord instance
mouse_record_compact = SeqRecord.SeqRecord(mouse_cons...
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
Part 2: the fun stuff. **Finally**, we have human TMPRSS2 and a consensus sequence for mouse TMPRSS2. The sequences are aligned and ready for some more advanced analysis with the help of Biopython. Let's start looking at ways we can compare the two sequences. To start, we will answer the question: **At every location in the...
hum_mouse_aln = MultipleSeqAlignment([human_compact, mouse_record_compact]) hum_mouse_aln
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
Now we can generate a `SummaryInfo` instance like we did before, and calculate the PSSM:
hum_mouse_summary = AlignInfo.SummaryInfo(hum_mouse_aln)
hum_mouse_summary

hum_mouse_pssm = hum_mouse_summary.pos_specific_score_matrix(human_compact)
hum_mouse_pssm
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
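The PSSM computed above is, at heart, a per-column count of residues across the aligned sequences. A minimal pure-Python stand-in for `pos_specific_score_matrix` (the function name and list-of-strings input are assumptions for this sketch):

```python
from collections import Counter

def simple_pssm(seqs):
    """One Counter per alignment column, counting each residue's
    occurrences across the aligned sequences."""
    return [Counter(column) for column in zip(*seqs)]

pssm = simple_pssm(["MKV", "MKI"])
assert pssm[0]["M"] == 2                          # both sequences agree at column 1
assert pssm[2]["V"] == 1 and pssm[2]["I"] == 1    # they differ at column 3
```

A count of 2 at a column therefore means human and mouse carry the same residue there, which is exactly the test the loops below apply.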
We can look at the data in the PSSM by inspecting the `pssm` attribute. The PSSM is a Python list, where each element is a [tuple](https://github.com/wilfredinni/python-cheatsheet#tuple-data-type) of length 2. The first element of the tuple is the amino acid in the human sequence, and the second element is a Python [dict...
# we want to keep track of which amino acid our
# "cursor" is on in the for loop
position_counter = 0

for position in hum_mouse_pssm.pssm:
    # `position` is the 2-element tuple;
    # let's give each element a useful name
    resi_in_human = position[0]
    resi_dict = position[1]

    # skip this position i...
mouse and human are the same at position 16, which is amino acid G
mouse and human are the same at position 43, which is amino acid G
mouse and human are the same at position 47, which is amino acid A
mouse and human are the same at position 82, which is amino acid Y
mouse and human are the same at position 100, which ...
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
To make sure our `position_counter` variable is working properly, let's double check that the length of the human sequence (without `-` characters) is indeed 529:
# position counter from the above for loop
print(f"the human sequence is {position_counter} amino acids long")

# calling len(str)
length_a_different_way = len(str(hum_mouse_aln[0].seq).replace('-', ''))
print(f"the human sequence is {length_a_different_way} amino acids long")
the human sequence is 529 amino acids long
the human sequence is 529 amino acids long
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
We see that `position_counter` appears to be working as expected! At which positions are the amino acids different? The more interesting question is how these sequences differ. We can use a similar for loop to address this question:
# we want to keep track of which amino acid our
# "cursor" is on in the for loop
position_counter = 0
list_to_store_same = list()

for position in hum_mouse_pssm.pssm:
    # `position` is the 2-element tuple;
    # let's give each element a useful name
    resi_in_human = position[0]
    resi_dict = position[1]
    ...
mouse and human are the same at position 1, which is amino acid M
mouse and human are the same at position 2, which is amino acid P
mouse and human are the same at position 3, which is amino acid P
mouse and human are the same at position 4, which is amino acid A
mouse and human are the same at position 5, which is ami...
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
At which positions do we encounter a hydrophobic -> hydrophilic change (or vice versa)? For this question, we will need to make our algorithm a little more complex. We are going to start by making a dataframe that stores amino acid properties, such as volume, hydrophobicity, charge, and so forth. We will use the CSV format of...
for position in human_compact:
    print(type(position))
    print(position[0])
    break
<class 'str'>
M
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
Pseudocode outline

```
human    MP P APP
cat      LA P ---
```

1. Iterate over each amino acid with a for loop. We don't necessarily need the PSSM here.

```python
for position in human_compact:
```

2. Get the amino acid at this position, for both human and cat.

```python
resi_in_human = human_compact[position]
resi_in_mouse = mouse_c...
```
# scratch cell: load the amino acid properties table and look up hydrophobicity
aa_props = pd.read_csv("../../data/amino_acid_properties.csv")

def get_hydrophobicity(aa):
    hydrophobicity = aa_props.loc[[aa]]['hydrophobicity'].item()
    return hydrophobicity

aa = 'S'
get_hydrophobicity(aa)

aa_props.set_index('single_letter', ...
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
See the [PDF format](../../data/amino_acid_properties.pdf) for references and details on how these metrics are calculated. Next, we will write a Python [function](https://github.com/wilfredinni/python-cheatsheet#functions), in which we pass the single-letter IDs of two amino acids, and get a Python [boolean](https://gith...
def is_change_in_hydrophobicity(resi1, resi2):
    """This function takes string-type amino acid identifiers `resi1`
    and `resi2` and compares their hydrophobicities. If the absolute
    value of the difference between hydrophobicities is greater than
    `min_diff`, return boolean True. Otherwise, return boolean Fa...
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
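The cell above is truncated, so here is a hedged, self-contained sketch of how such a function could look. The hydropathy values are from the Kyte-Doolittle scale, but the cutoff `min_diff=2.0` and the inline table (standing in for the `aa_props` dataframe) are assumptions, not the notebook's actual values:

```python
# a few Kyte-Doolittle hydropathy values; the inline table and the
# min_diff cutoff are assumptions for this sketch
HYDROPATHY = {'M': 1.9, 'F': 2.8, 'S': -0.8, 'K': -3.9, 'D': -3.5}

def is_change_in_hydrophobicity(resi1, resi2, min_diff=2.0):
    """Return True if the two residues' hydropathies differ by
    more than `min_diff`."""
    return abs(HYDROPATHY[resi1] - HYDROPATHY[resi2]) > min_diff

assert is_change_in_hydrophobicity('M', 'S')        # 1.9 vs -0.8: diff 2.7
assert not is_change_in_hydrophobicity('M', 'F')    # 1.9 vs 2.8: diff 0.9
assert not is_change_in_hydrophobicity('M', 'M')    # identical residues
```

These three calls mirror the quick tests in the next cell: M→S is a meaningful hydrophobicity change, M→F and M→M are not.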
We can quickly test our function with some examples:
is_change_in_hydrophobicity('M', 'S')
is_change_in_hydrophobicity('M', 'F')
is_change_in_hydrophobicity('M', 'M')
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
Get list of interesting residues. Next, let's generate a list of positions in the human sequence that are residues of interest, such as the catalytic triad (H296, D345, and S441) and important binding residues (D435, K223, and K224). It is important to remember that the positions reported in the literature are relative ...
len(str(human_compact.seq).replace('-', ''))
str(human_compact.seq).replace('-', '')[477]
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
We can also check that the other residues of interest are the expected amino acids:

* H296 in isoform 1 → 296 + 37 = H333 in isoform 2 → 333 - 1 = position 332 with 0-indexing
* D345 → 381
* D435 → 471
* K223 → 259
* K224 → 260

Let's store these 0-indexed positions in a list so we can use it later:
resi_interest = [332, 381, 471, 259, 260]
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
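The conversion worked out above is mechanical, so it can be captured in a small helper (the function name is an assumption): add the 37-residue isoform-2 N-terminal extension, then subtract 1 for 0-indexing.

```python
def iso1_to_index(pos_iso1, offset=37):
    """Convert a 1-indexed isoform-1 position to a 0-indexed
    isoform-2 position: pos + 37 - 1, as derived above."""
    return pos_iso1 + offset - 1

# reproduces the resi_interest list from the literature positions
assert [iso1_to_index(p) for p in [296, 345, 435, 223, 224]] == [332, 381, 471, 259, 260]
# and S441, the residue checked at position 477 earlier
assert iso1_to_index(441) == 477
```

Keeping the arithmetic in one function avoids re-deriving the offset by hand for each new residue of interest.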
Let's check that these positions are the amino acids we expect, this time using a for loop:
for position in resi_interest:
    resi = str(human_compact.seq).replace('-', '')[position]
    print(f"amino acid at 0-indexed position {position} is {resi}")
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
Putting it all together. Let's try using our new function in a for loop. This for loop is a bit different from the previous ones; it's actually simpler. Instead of using the PSSM, we can simply iterate over the positions in the human sequence, get the equivalent amino acid in the mouse sequence, and use our function to as...
list(range(len(human_compact)))

# we want to keep track of which amino acid our
# "cursor" is on in the for loop
position_counter = 0

# get the entire list of positions in the human sequence as
# integers. We include dashes in this calculation
list_of_positions_including_dashes = range(len(human_compact))

for positio...
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
Goal for the end of this week: for every position in the human sequence (compared to the mouse sequence), write an algorithm that prints every time there is a hydrophobic residue in human and a non-hydrophobic (hydrophilic) residue in mouse. Other useful `SummaryInfo` tools: Compute replacement dictionary
hum_mouse_rep_dict = hum_mouse_summary.replacement_dictionary()
{k: hum_mouse_rep_dict[k]
 for k in hum_mouse_rep_dict
 if hum_mouse_rep_dict[k] > 0 and k[0] != k[1]}
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
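The replacement dictionary is built by counting residue pairs column by column across the two aligned sequences. A minimal pure-Python stand-in for `SummaryInfo.replacement_dictionary()` (the function name and gap handling are assumptions for this sketch):

```python
from collections import Counter

def replacement_counts(seq1, seq2):
    """Count residue pairs between two aligned sequences,
    skipping columns where either sequence has a gap."""
    return Counter(
        (a, b) for a, b in zip(seq1, seq2) if a != '-' and b != '-'
    )

rep = replacement_counts("MKV-A", "MKI-A")
# off-diagonal entries are the substitutions, as in the filter above
subs = {k: v for k, v in rep.items() if k[0] != k[1]}
assert subs == {('V', 'I'): 1}
assert rep[('M', 'M')] == 1
```

These pair counts are exactly the raw material `SubsMat.SeqMat` consumes in the next cell to build a substitution matrix.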
Compute substitution and log odds matrix
my_arm = SubsMat.SeqMat(hum_mouse_rep_dict)
my_arm

my_lom = SubsMat.make_log_odds_matrix(my_arm)
my_lom
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
Ty's Work Below
position_counter = 0

for position in hum_mouse_pssm.pssm:
    resi_in_human = position[0]
    resi_dict = position[1]
    if resi_in_human == "-":
        continue
    else:
        position_counter += 1
    if position[1][resi_in_human] > 1:
        print(f"mouse and human are the same at position "
              f"{position_counter}, which is amino aci...
_____no_output_____
MIT
code/seq_sim_demo/20200713_seq_sim_mouse.ipynb
eho-tacc/epi-model-reu
Imports and other setup
!pip install livelossplot --quiet
!pip install deap --quiet

import numpy as np
import tensorflow as tf
import random
from tensorflow.keras.datasets import mnist
from tensorflow.keras import layers, models, Input, Model
from tensorflow.keras.callbacks import EarlyStopping
from IPython import display...
_____no_output_____
Apache-2.0
EDL_8_4_EvoAutoencoder.ipynb
cxbxmxcx/EvolutionaryDeepLearning
CONSTANTS Load Fashion Data
# load dataset
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.fashion_mnist.load_data()

# split dataset
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype("float32") / 255.0
test_images = test_images.reshape(test_images.shape[0], 28, 28, 1).astype("float32") / 25...
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz
32768/29515 [=================================] - 0s 0us/step
40960/29515 [=========================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datase...
Apache-2.0
EDL_8_4_EvoAutoencoder.ipynb
cxbxmxcx/EvolutionaryDeepLearning
Setup class names and labels for visualization, not training
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
_____no_output_____
Apache-2.0
EDL_8_4_EvoAutoencoder.ipynb
cxbxmxcx/EvolutionaryDeepLearning
Plot some images.
import math

def plot_data(num_images, images, labels):
    grid = math.ceil(math.sqrt(num_images))
    plt.figure(figsize=(grid*2, grid*2))
    for i in range(num_images):
        plt.subplot(grid, grid, i+1)
        plt.xticks([])
        plt.yticks([])
        plt.grid(False)
        plt.imshow(images[i].reshape(28, 28))
        plt...
_____no_output_____
Apache-2.0
EDL_8_4_EvoAutoencoder.ipynb
cxbxmxcx/EvolutionaryDeepLearning
STAGE 1: Auto-encoders Build the Encoder
# input layer
input_layer = Input(shape=(28, 28, 1))

# encoding architecture
encoded_layer1 = layers.Conv2D(64, (3, 3), activation='relu', padding='same')(input_layer)
encoded_layer1 = layers.MaxPool2D((2, 2), padding='same')(encoded_layer1)
encoded_layer2 = layers.Conv2D(32, (3, 3), activation='relu', padding='same'...
_____no_output_____
Apache-2.0
EDL_8_4_EvoAutoencoder.ipynb
cxbxmxcx/EvolutionaryDeepLearning
Build the Decoder
# decoding architecture
decoded_layer1 = layers.Conv2D(16, (3, 3), activation='relu', padding='same')(latent_view)
decoded_layer1 = layers.UpSampling2D((2, 2))(decoded_layer1)
decoded_layer2 = layers.Conv2D(32, (3, 3), activation='relu', padding='same')(decoded_layer1)
decoded_layer2 = layers.UpSampling2D((2, 2))(decode...
_____no_output_____
Apache-2.0
EDL_8_4_EvoAutoencoder.ipynb
cxbxmxcx/EvolutionaryDeepLearning
Build the Model
# compile the model
model = Model(input_layer, output_layer)
model.compile(optimizer='adam', loss='mse')
model.summary()
plot_model(model)

history_loss = []
history_val_loss = []

def add_history(history):
    history_loss.append(history.history["loss"])
    history_val_loss.append(history.history["val_loss"])

def reset_h...
Model: "model_2" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_3 (InputLayer) [(None, 28, 28, 1)] 0 ...
Apache-2.0
EDL_8_4_EvoAutoencoder.ipynb
cxbxmxcx/EvolutionaryDeepLearning
Creating Mate/Mutation Operators
#@title Start of Mate/Mutation Operators
def get_layers(ind, layer_type):
    return [a for a in range(len(ind)) if ind[a] == layer_type]

def swap(ind1, iv1, ind2, iv2, ll):
    ch1 = ind1[iv1:iv1+ll]
    ch2 = ind2[iv2:iv2+ll]
    ind1[iv1:iv1+ll] = ch2
    ind2[iv2:iv2+ll] = ch1
    return ind1, ind2

def swap_layers(ind1, ind...
_____no_output_____
Apache-2.0
EDL_8_4_EvoAutoencoder.ipynb
cxbxmxcx/EvolutionaryDeepLearning
Introduction Recall from the example in the previous lesson that Keras will keep a history of the training and validation loss over the epochs that it is training the model. In this lesson, we're going to learn how to interpret these learning curves and how we can use them to guide model development. In particular, we...
from tensorflow.keras.callbacks import EarlyStopping

early_stopping = EarlyStopping(
    min_delta=0.001,  # minimum amount of change to count as an improvement
    patience=20,      # how many epochs to wait before stopping
    restore_best_weights=True,
)
_____no_output_____
Apache-2.0
notebooks/deep_learning_intro/raw/tut4.ipynb
qursaan/learntools
These parameters say: "If there hasn't been at least an improvement of 0.001 in the validation loss over the previous 20 epochs, then stop the training and keep the best model you found." It can sometimes be hard to tell if the validation loss is rising due to overfitting or just due to random batch variation. The para...
#$HIDE_INPUT$
import pandas as pd
from IPython.display import display

red_wine = pd.read_csv('../input/dl-course-data/red-wine.csv')

# Create training and validation splits
df_train = red_wine.sample(frac=0.7, random_state=0)
df_valid = red_wine.drop(df_train.index)
display(df_train.head(4))

# Scale to [0, 1]
max_ =...
_____no_output_____
Apache-2.0
notebooks/deep_learning_intro/raw/tut4.ipynb
qursaan/learntools
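The rule those two parameters encode ("stop after `patience` epochs without at least `min_delta` of improvement in the best validation loss") can be sketched in plain Python, independent of Keras (the helper name and return convention are assumptions):

```python
def early_stop_epoch(val_losses, min_delta=0.001, patience=20):
    """Return the epoch index at which the early-stopping rule fires,
    or None if training would run to the end of `val_losses`."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best - min_delta:   # a real improvement: reset the clock
            best = loss
            wait = 0
        else:                        # no improvement: count down patience
            wait += 1
            if wait >= patience:
                return epoch
    return None

# losses improve for 3 epochs, then plateau: stopping fires
# `patience` epochs after the last improvement
losses = [1.0, 0.8, 0.6] + [0.6] * 30
assert early_stop_epoch(losses, patience=20) == 22
```

`restore_best_weights=True` then corresponds to handing back the model from the `best` epoch rather than the one where training halted.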
Now let's increase the capacity of the network. We'll go for a fairly large network, but rely on the callback to halt the training once the validation loss shows signs of increasing.
from tensorflow import keras
from tensorflow.keras import layers, callbacks

early_stopping = callbacks.EarlyStopping(
    min_delta=0.001,  # minimum amount of change to count as an improvement
    patience=20,      # how many epochs to wait before stopping
    restore_best_weights=True,
)

model = keras.Sequential([
    l...
_____no_output_____
Apache-2.0
notebooks/deep_learning_intro/raw/tut4.ipynb
qursaan/learntools
After defining the callback, add it as an argument in `fit` (you can have several, so put it in a list). Choose a large number of epochs when using early stopping, more than you'll need.
history = model.fit(
    X_train, y_train,
    validation_data=(X_valid, y_valid),
    batch_size=256,
    epochs=500,
    callbacks=[early_stopping],  # put your callbacks in a list
    verbose=0,                   # turn off training log
)

history_df = pd.DataFrame(history.history)
history_df.loc[:, ['loss', 'val_loss']].plot();
prin...
_____no_output_____
Apache-2.0
notebooks/deep_learning_intro/raw/tut4.ipynb
qursaan/learntools
Titanic: Analysis of a disaster. Author - Paawan Mukker. > This notebook strives to answer some chosen questions using simple exploratory data analysis and descriptive statistics (the aim is to avoid using any inferential statistics or machine learning as much as possible) on the Titanic dataset. This notebook follows ...
# Libraries for visualisation/plots
from matplotlib import pyplot as plt
import seaborn as sns
from pandas.plotting import radviz
import matplotlib.patches as mpatches

# Libraries for data handling
%matplotlib inline
import numpy as np
import pandas as pd

# Libraries for data modeling
from sklearn.ensemble import R...
_____no_output_____
FTL
Titanic_dataset_analysis.ipynb
paawan01/Titanic_dataset_analysis
Let's read our data using pandas:
# Load the data as pandas data frames
titanic_train = pd.read_csv("./data/train.csv")
titanic_test = pd.read_csv("./data/test.csv")  # Will not be used
_____no_output_____
FTL
Titanic_dataset_analysis.ipynb
paawan01/Titanic_dataset_analysis
Show an overview of our data: _Dimension of the data:_
titanic_train.shape
_____no_output_____
FTL
Titanic_dataset_analysis.ipynb
paawan01/Titanic_dataset_analysis
_First few rows:_
titanic_train.head()
_____no_output_____
FTL
Titanic_dataset_analysis.ipynb
paawan01/Titanic_dataset_analysis
Here is what each of the columns means:

```
Variable Name   Description
PassengerId     Passenger Id
Survived        1 for Survived, 0 otherwise
Pclass          Passenger's class
Name            Passenger's name
Sex             Passenger's sex
Age             Passenger's age
SibSp           Number of siblings/spouses on ship
Parch           ...
```
# Use only relevant columns
to_have = ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare']
titanic_train[to_have].describe(exclude=[type(None)])
_____no_output_____
FTL
Titanic_dataset_analysis.ipynb
paawan01/Titanic_dataset_analysis
Assessing Missing Values in columns
titanic_train.isnull().sum()
titanic_train.isnull().sum().plot(kind='bar', figsize=(15, 4))
_____no_output_____
FTL
Titanic_dataset_analysis.ipynb
paawan01/Titanic_dataset_analysis
Phase 3 - Data Preparation Data cleaning Re-Encode Categorical Features
titanic_train.dtypes

drop_categorical_var = ['Name', 'Embarked', 'Ticket', 'Cabin', 'PassengerId']
bin_categorical_var = ['Sex']
multi_categorical_var = ['Pclass']

# Drop categorical variables that are not required
titanic_train.drop(drop_categorical_var, axis=1, inplace=True)

# Re-encode binary categorical variable(s) to be...
_____no_output_____
FTL
Titanic_dataset_analysis.ipynb
paawan01/Titanic_dataset_analysis
Fix missing values. Let's target the columns with the most frequent null or NA values.
# Show count of missing values titanic_train.isnull().sum()
_____no_output_____
FTL
Titanic_dataset_analysis.ipynb
paawan01/Titanic_dataset_analysis
For replacing missing data there can be multiple strategies. Let's see which strategy suits us best. We can:

1. Delete rows/columns with missing values. Since the dataset is small, removing rows is not a good way to go, but a field can be dropped for analysis.
2. Replace missing values with values inferred from ...
# Impute the missing age values with mean titanic_train.Age=titanic_train.Age.fillna(titanic_train.Age.mean())
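Mean imputation is one choice among several; a quick sketch on a hypothetical `Age`-like series (the values below are made up, not from the dataset) contrasts it with median and mode imputation, which are often preferred when the column has outliers:

```python
import pandas as pd
import numpy as np

# Hypothetical series with gaps, standing in for titanic_train.Age
age = pd.Series([22.0, np.nan, 38.0, 26.0, np.nan, 35.0])

mean_filled   = age.fillna(age.mean())    # the strategy used above
median_filled = age.fillna(age.median())  # more robust to outliers
mode_filled   = age.fillna(age.mode()[0]) # most frequent value
```

All three produce a series with no remaining nulls; which one is appropriate depends on the column's distribution.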
Data Construction. We can't predict the role of family in survival since we don't have any field that directly corresponds to it. Hence we need to come up with something based on the existing fields SibSp (siblings/spouse) and Parch (parents/children).
# Source - http://ahmedbesbes.com/how-to-score-08134-in-titanic-kaggle-challenge.html family = pd.DataFrame() # introducing a new feature : the size of families (including the passenger) family[ 'FamilySize' ] = titanic_train[ 'Parch' ] + titanic_train[ 'SibSp' ] + 1 # introducing other features based on the family ...
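The truncated cell above builds `FamilySize`; a sketch of how the derived family features might continue is below. The bucketing flags and their thresholds are an assumption for illustration, not taken from the source:

```python
import pandas as pd

# Stand-in for titanic_train, keeping only the two relevant columns
df = pd.DataFrame({"SibSp": [0, 1, 3], "Parch": [0, 2, 4]})

family = pd.DataFrame()
# Family size includes the passenger themself
family["FamilySize"] = df["Parch"] + df["SibSp"] + 1
# Hypothetical bucketed flags (thresholds assumed, not from the source)
family["Family_Single"] = family["FamilySize"].map(lambda s: 1 if s == 1 else 0)
family["Family_Small"]  = family["FamilySize"].map(lambda s: 1 if 2 <= s <= 4 else 0)
family["Family_Large"]  = family["FamilySize"].map(lambda s: 1 if s >= 5 else 0)
```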
Phase 4 - Modeling. Modelling technique selection. For modelling we are using Random Forest classifiers, and the reasons for choosing them are as follows: - This is a classification problem, so we are restricted to classification algorithms. - We have labelled data, hence this becomes a supervised learning problem, ...
# Drop the survived column (or labels for training data) x = titanic_train.drop(['Survived'], axis=1) # Get the label data y = titanic_train.Survived
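The cell above fits on all labelled rows; before fitting, one would typically hold out a test set to estimate generalization. A sketch on tiny synthetic data (the arrays below are stand-ins for `x` and `y`, not the Titanic data):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Tiny synthetic stand-in for the feature matrix x and labels y
X = np.arange(20).reshape(10, 2)
y = np.array([0, 1] * 5)

# Hold out 30% for testing; stratify keeps class proportions in both splits
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
```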
Build Model
# Instantiate Random Forest classifier
clf = RandomForestClassifier(random_state=0, max_features=None)
# Fit the training data
clf_tit = clf.fit(x, y)
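A common way to sanity-check such a model is k-fold cross-validation. A sketch on synthetic data (standing in for the Titanic features and labels, which are not reproduced here):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic data standing in for the Titanic feature matrix and labels
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

clf = RandomForestClassifier(random_state=0, max_features=None)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold accuracy estimates
```

The mean and spread of `scores` give a less optimistic picture than accuracy on the training set alone.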
Phase 5 - Evaluation. In this section we'll be evaluating the questions asked instead of the model. - Does having family members on board increase your survival? Let us plot a bar graph of the number of family members on ship vs. their survival count.
freq1 = pd.value_counts(titanic_train[titanic_train['Survived']==1].FamilySize) freq2 = pd.value_counts(titanic_train[titanic_train['Survived']==0].FamilySize) ax = pd.concat([ freq2.rename('Not Survived'), freq1.rename('Survived')], axis=1).plot.bar(figsize=(10,7)) ax.set_xlabel('Number of family members on ship') ax....
**Ans**: Using the above graph we can see that there's a survival penalty for singletons and for those with family sizes above 4. Hence, to answer the question: yes, having family members on board increases your survival, but only if you have fewer than 4 family members; beyond that it hurts your chances rather than incre...
fig = plt.figure(figsize=(10,5)) ax = fig.add_subplot(111) titanic_train.Survived[titanic_train.Sex == 0].value_counts().plot(kind='bar', label='Male', color='blue') titanic_train.Survived[titanic_train.Sex == 1].value_counts().plot(kind='bar', label='Female', color='red') ax.set_xticklabels(['Survived', 'Not survived'...
The above chart is a count of males/females that survived. Through this we see that the **number of males surviving is greater than the number of females surviving.** Also, the **number of males who didn't survive is greater than the number of females who didn't survive.** So this alone doesn't give us the answer, and hence we have the next chart.
freq1 = pd.value_counts(titanic_train[titanic_train['Survived']==1].Sex) freq2 = pd.value_counts(titanic_train[titanic_train['Survived']==0].Sex) ax = pd.concat([ freq2.rename('Not Survived'), freq1.rename('Survived')], axis=1).plot.bar(figsize=(10,5)) ax.set_xticklabels(['male', 'female']) ax.set_xlabel('Gender of per...
This one contrasts the number of survived/not-survived male and female passengers against their total counts. From this we clearly infer that of the nearly 550 males aboard only 100 survived, i.e. 18%, whereas of the 340 females nearly 260 survived, i.e. 75%. Hence we answer in the affirmative: yes, **females had a su...
features = pd.DataFrame() features['features'] = x.columns features['importance'] = clf.feature_importances_ features.sort_values(by=['importance'], ascending=True, inplace=True) features.set_index('features', inplace=True) ax = features.plot(kind='barh', figsize=(12,6)) ax.set_xlabel('The feature importances (the high...
The feature importances are numbers computed using scikit-learn. Basically, the idea is to measure the decrease in accuracy on the data when we randomly permute the values of a feature. If the decrease is low, then the feature is not important, and vice versa. The higher the number, the more the feature importa...
fig = plt.figure(figsize=(12,8)) ax = radviz(titanic_train, 'Survived', color=['r','b']) blue_patch = mpatches.Patch(color='blue', label='Survived') red_patch = mpatches.Patch(color='red', label='Not survived') ax.legend(handles=[blue_patch, red_patch])
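The permute-and-measure idea described above is available directly in scikit-learn as `permutation_importance`. A sketch on synthetic data (standing in for the Titanic features, which are not reproduced here):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data standing in for the Titanic feature matrix and labels
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in score
result = permutation_importance(clf, X, y, n_repeats=5, random_state=0)
```

`result.importances_mean` then holds one averaged importance per feature, with larger drops meaning more important features.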
ResNet34 - Experiments Welcome to lesson 1! For those of you who are using a Jupyter Notebook for the first time, you can learn about this useful tool in a tutorial we prepared specially for you; click `File`->`Open` now and click `00_notebook_tutorial.ipynb`. In this lesson we will build our first image classifier fr...
%reload_ext autoreload %autoreload 2 %matplotlib inline
_____no_output_____
MIT
model1-train-resnet34/2020-03-29-resnet34-experiments.ipynb
w210-accessibility/classify-streetview
We import all the necessary packages. We are going to work with the [fastai V1 library](http://www.fast.ai/2018/10/02/fastai-ai/) which sits on top of [Pytorch 1.0](https://hackernoon.com/pytorch-1-0-468332ba5163). The fastai library provides many useful functions that enable us to quickly and easily build neural netwo...
from fastai.vision import * from fastai.metrics import error_rate
If you're using a computer with an unusually small GPU, you may get an out of memory error when running this notebook. If this happens, click Kernel->Restart, uncomment the 2nd line below to use a smaller *batch size* (you'll learn all about what this means during the course), and try again.
bs = 64 # bs = 16 # uncomment this line if you run out of memory even after clicking Kernel->Restart
Looking at the data. We are going to use the [Oxford-IIIT Pet Dataset](http://www.robots.ox.ac.uk/~vgg/data/pets/) by [O. M. Parkhi et al., 2012](http://www.robots.ox.ac.uk/~vgg/publications/2012/parkhi12a/parkhi12a.pdf), which features 12 cat breeds and 25 dog breeds. Our model will need to learn to differentiate betw...
help(untar_data) #path = untar_data(URLs.PETS); path path = Path(r'/home/ec2-user/SageMaker/classify-streetview/images') path path.ls() #path_anno = path/'annotations' path_img = path
The first thing we do when we approach a problem is to take a look at the data. We _always_ need to understand very well what the problem is and what the data looks like before we can figure out how to solve it. Taking a look at the data means understanding how the data directories are structured, what the labels are a...
fnames = get_image_files(path_img) fnames[:5] tfms = get_transforms(do_flip=False) #data = ImageDataBunch.from_folder(path_img, ds_tfms=tfms, size=224) #np.random.seed(2) #pat = r'/([^/]+)_\d+.jpg$' #data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(), size=224, bs=bs # ...
['0_missing', '1_null', '2_obstacle', '3_present', '4_surface_prob']
Training: resnet34 Now we will start training our model. We will use a [convolutional neural network](http://cs231n.github.io/convolutional-networks/) backbone and a fully connected head with a single hidden layer as a classifier. Don't know what these things mean? Not to worry, we will dive deeper in the coming lesso...
learn = cnn_learner(data, models.resnet34, metrics=error_rate) learn.model learn.fit_one_cycle(4) learn.save('stage-1')
Results Let's see what results we have got. We will first see which were the categories that the model most confused with one another. We will try to see if what the model predicted was reasonable or not. In this case the mistakes look reasonable (none of the mistakes seems obviously naive). This is an indicator that ...
interp = ClassificationInterpretation.from_learner(learn) losses,idxs = interp.top_losses() len(data.valid_ds)==len(losses)==len(idxs) interp.plot_top_losses(9, figsize=(15,11)) doc(interp.plot_top_losses) interp.plot_confusion_matrix(figsize=(4,4), dpi=60) interp.most_confused(min_val=2)
Unfreezing, fine-tuning, and learning rates Since our model is working as we expect it to, we will *unfreeze* our model and train some more.
learn.unfreeze() learn.fit_one_cycle(1) learn.load('stage-1'); learn.lr_find() learn.recorder.plot() learn.unfreeze() learn.fit_one_cycle(2, max_lr=slice(1e-6,1e-4))
That's a pretty accurate model! Training: resnet50 Now we will train in the same way as before but with one caveat: instead of using resnet34 as our backbone we will use resnet50 (resnet34 is a 34 layer residual network while resnet50 has 50 layers. It will be explained later in the course and you can learn the detail...
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(), size=299, bs=bs//2).normalize(imagenet_stats) learn = cnn_learner(data, models.resnet50, metrics=error_rate) learn.lr_find() learn.recorder.plot() learn.fit_one_cycle(8) learn.save('stage-1-50')
It's astonishing that it's possible to recognize pet breeds so accurately! Let's see if full fine-tuning helps:
learn.unfreeze() learn.fit_one_cycle(3, max_lr=slice(1e-6,1e-4))
Total time: 03:27 epoch train_loss valid_loss error_rate 1 0.097319 0.155017 0.048038 (01:10) 2 0.074885 0.144853 0.044655 (01:08) 3 0.063509 0.144917 0.043978 (01:08)
If it doesn't, you can always go back to your previous model.
learn.load('stage-1-50'); interp = ClassificationInterpretation.from_learner(learn) interp.most_confused(min_val=2)
Other data formats
path = untar_data(URLs.MNIST_SAMPLE); path tfms = get_transforms(do_flip=False) data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=26) data.show_batch(rows=3, figsize=(5,5)) learn = cnn_learner(data, models.resnet18, metrics=accuracy) learn.fit(2) df = pd.read_csv(path/'labels.csv') df.head() data = ImageDataBu...
Install and Import libraries.
!pip install mediapipe opencv-python import mediapipe as mp import numpy as np import os import cv2 import uuid # checking for webcam capture_frames=cv2.VideoCapture(0) while capture_frames.isOpened(): ret,frames=capture_frames.read() image=cv2.cvtColor(frames,cv2.COLOR_BGR2RGB) cv2.imshow("Hand Tracking M...
_____no_output_____
MIT
Palm Detection and Hand Tracking Model.ipynb
alfaPegasis/Hand-Tracking-Model
Render joints and landmarks of our hand.
mp_drawing=mp.solutions.drawing_utils mp_hands=mp.solutions.hands
Detecting Images
capture_frames=cv2.VideoCapture(0) with mp_hands.Hands(min_detection_confidence=0.8,min_tracking_confidence=0.5)as hands: while capture_frames.isOpened(): ret,frame=capture_frames.read() #recolor the frame image=cv2.cvtColor(frame,cv2.COLOR_BGR2RGB) #set flag ...
<class 'mediapipe.python.solution_base.SolutionOutputs'> <class 'mediapipe.python.solution_base.SolutionOutputs'> <class 'mediapipe.python.solution_base.SolutionOutputs'> <class 'mediapipe.python.solution_base.SolutionOutputs'> <class 'mediapipe.python.solution_base.SolutionOutputs'> <class 'mediapipe.python.solution_b...
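MediaPipe reports each hand landmark in normalized `[0, 1]` coordinates (via `results.multi_hand_landmarks`), so drawing on the frame requires scaling by the frame size. A minimal sketch of that conversion, kept as a pure function so it works without a webcam; the function name is a hypothetical helper, not part of the MediaPipe API:

```python
def landmark_to_pixels(x_norm, y_norm, width, height):
    """Convert MediaPipe's normalized [0, 1] landmark coordinates
    to integer pixel coordinates for drawing on a frame."""
    return int(x_norm * width), int(y_norm * height)

# e.g. a landmark at (0.5, 0.5) on a 640x480 frame maps to the frame centre
px, py = landmark_to_pixels(0.5, 0.5, 640, 480)
```

Inside the detection loop one would call this per landmark before passing the pixel coordinates to `cv2.circle` or similar.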
Save Images in the local folder
os.mkdir("Output Images After Detection") capture_frames=cv2.VideoCapture(0) with mp_hands.Hands(min_detection_confidence=0.8,min_tracking_confidence=0.5)as hands: while capture_frames.isOpened(): ret,frame=capture_frames.read() #recolor the frame image=cv2.cvtColor(frame,cv2.COLOR_...
3 - A function that calculates the number of digits in the decimal representation of a natural number. Generalize it so that any base can be used.
def digito_kopurua(n, oinarria=10):
    #print(n)
    if n < oinarria:
        return 1
    else:
        return 1 + digito_kopurua(n // oinarria, oinarria)

#digito_kopurua(863465234)
#digito_kopurua(24,10)
digito_kopurua(24, 2)
_____no_output_____
MIT
Ariketak/Errekurtsibitatea.ipynb
mpenagar/Programazioaren-Oinarriak
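As a sketch, the recursive digit count can be cross-checked against the decimal string length for base 10, and against a hand-computed binary example for base 2 (the function is restated here so the block is self-contained):

```python
def digito_kopurua(n, oinarria=10):
    # Recursive digit count in the given base, as defined above
    if n < oinarria:
        return 1
    return 1 + digito_kopurua(n // oinarria, oinarria)

# Base 10: must agree with the length of the decimal string
for n in (7, 24, 863465234):
    assert digito_kopurua(n) == len(str(n))

# Base 2: 24 is 0b11000, i.e. five binary digits
assert digito_kopurua(24, 2) == 5
```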
4 - A function that returns the value of the largest element in a list.
def maximoa(z):
    if len(z) == 1:
        return z[0]
    else:
        a = z[:len(z)//2]
        b = z[len(z)//2:]
        #print(a,b)
        return max(maximoa(a), maximoa(b))

maximoa([34216,32,46,13465,236,134,632,73452,3452,36,236,2365])
Creating that many lists has its cost...
def maximoa_errek(z, i, j):
    if j - i == 1:
        return z[i]
    else:
        #print(z[i:(i+j)//2],z[(i+j)//2:j])
        return max(maximoa_errek(z, i, (i+j)//2), maximoa_errek(z, (i+j)//2, j))

def maximoa(z):
    return maximoa_errek(z, 0, len(z))

maximoa_errek([34216,32,46,13465,236,134,632,73452,3452,36,236,2365],0,...
Creating that many lists has its cost...
def maximoa_errek(z, i):
    if i == len(z)-1:
        return z[i]
    else:
        return max(z[i], maximoa_errek(z, i+1))

def maximoa(z):
    return maximoa_errek(z, 0)

maximoa_errek([34216,32,46,13465,236,134,632,73452,3452,36,236,2365], 0)
maximoa([34216,32,46,13465,236,134,632,73452,3452,36,236,2365])
7 - A function that counts the number of (non-overlapping) occurrences of one string within another string.
def kontatu(zer, non):
    n = len(zer)
    if n > len(non):
        return 0
    elif zer == non[:n]:
        return 1 + kontatu(zer, non[n:])
    else:
        return kontatu(zer, non[1:])

z = "kaixo"
z[10...
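As a sketch, the recursive count can be cross-checked against Python's built-in `str.count`, which also counts non-overlapping occurrences (the function is restated here so the block is self-contained):

```python
def kontatu(zer, non):
    # Count non-overlapping occurrences of zer inside non
    n = len(zer)
    if n > len(non):
        return 0
    elif zer == non[:n]:
        return 1 + kontatu(zer, non[n:])
    else:
        return kontatu(zer, non[1:])

# str.count is non-overlapping too, so both must agree
for zer, non in [("aa", "aaaa"), ("ka", "kaixo kaixo"), ("z", "kaixo")]:
    assert kontatu(zer, non) == non.count(zer)
```

Note in particular the non-overlapping behaviour: both functions count two occurrences of `"aa"` in `"aaaa"`, not three.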
Chapter 9: Transfer Learning with PyTorch. Here we implement transfer learning with PyTorch. Transfer learning is an approach in which a deep learning model pre-trained on another dataset is used as a feature extractor (dimensionality reducer), making it possible to build a model that does not overfit even when only a little training data is available. This time, using the classifier created in [Chapter8](./Chapter8.ipynb), we will build a model that classifies "0" and "1" via transfer learning.
import torch import torchvision import torchvision.transforms as transforms import numpy as np import matplotlib.pyplot as plt %matplotlib inline print(torch.__version__)
1.9.1
MIT
text/Chapter9.ipynb
Selubi/tutorial_python