We see that most of our clients were enrolled before 2018-03-22. This is why the lines are so smooth for the first ~9 data points in the previous retention chart. Breaking retention down into shorter segments shows that there are indeed differences between the TAAR branches; however, they track each other rather closely and are...
w6r = ret_df[ret_df.period==6] w6r (slinear, nlinear, sensemble, nensemble, scontrol, ncontrol) = [int(w6r[w6r.branch == b][i].values[0]) for b in ('linear-taar', 'ensemble-taar', 'control') for i in ('n_week_clients', "total_clients")] def get_effect(g1s, g2s, g1n,...
analysis/TAARExperimentV2Retention.ipynb
maurodoglio/taar
mpl-2.0
Welcome to the Tensor2Tensor Dataset Colab! Installation & Setup Define the Problem Run t2t_datagen Viewing the generated data. tf.python_io.tf_record_iterator Using tf.data.Dataset Terminology Problem Modalities Installation & Setup We'll install T2T and TensorFlow. We also need to set up the directories where...
#@title Run for installation. ! pip install -q -U tensor2tensor ! pip install -q tensorflow #@title Run this only once - Sets up TF Eager execution. import sys if 'google.colab' in sys.modules: # Colab-only TensorFlow version selector %tensorflow_version 1.x import tensorflow as tf # Enable Eager execution - usef...
tensor2tensor/notebooks/t2t_problem.ipynb
tensorflow/tensor2tensor
apache-2.0
Define the Problem To simplify our setting, our input text is sampled randomly from [a, z]: each sentence has between [3, 20] words, with each word being [1, 8] characters in length. Example input: "olrkpi z cldv xqcxisg cutzllf doteq" -- this will be generated by sample_sentence(). Our output will be the input words sorted...
#@title Define `sample_sentence()` and `target_sentence(input_sentence)` import random import string def sample_sentence(): # Our sentence has between 3 and 20 words num_words = random.randint(3, 20) words = [] for i in range(num_words): # Our words have between 1 and 8 characters. num_...
tensor2tensor/notebooks/t2t_problem.ipynb
tensorflow/tensor2tensor
apache-2.0
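For reference, the two generators described above can be sketched in plain Python (the notebook's cell is truncated here; the names and ranges follow the text):

```python
import random
import string

def sample_sentence():
    """Sample a sentence of 3-20 words, each 1-8 lowercase letters."""
    num_words = random.randint(3, 20)
    words = []
    for _ in range(num_words):
        num_chars = random.randint(1, 8)
        words.append(''.join(random.choice(string.ascii_lowercase)
                             for _ in range(num_chars)))
    return ' '.join(words)

def target_sentence(input_sentence):
    """The target is the input words sorted by length."""
    return ' '.join(sorted(input_sentence.split(), key=len))
```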
That's it! To use this with t2t-trainer or t2t-datagen, save it to a directory, add an __init__.py that imports it, and then specify that directory with --t2t_usr_dir. i.e. as follows: ``` $ t2t-datagen \ --problem=sort_words_according_to_length_random \ --data_dir=/tmp/t2t/data \ --tmp_dir=/tmp/t2t/tmp \ --t2t...
sort_len_problem = SortWordsAccordingToLengthRandom() sort_len_problem.generate_data(DATA_DIR, TMP_DIR)
tensor2tensor/notebooks/t2t_problem.ipynb
tensorflow/tensor2tensor
apache-2.0
Viewing the generated data. tf.data.Dataset is the recommended API for inputting data into a TensorFlow graph and the Problem.dataset() method returns a tf.data.Dataset object.
Modes = tf.estimator.ModeKeys # We can iterate over our examples by making an iterator and calling next on it. sort_len_problem_dataset = sort_len_problem.dataset(Modes.EVAL, DATA_DIR) eager_iterator = sort_len_problem_dataset.make_one_shot_iterator() example = next(eager_iterator) input_tensor = example["inputs"] ta...
tensor2tensor/notebooks/t2t_problem.ipynb
tensorflow/tensor2tensor
apache-2.0
Let us suppose that we have a dataframe with 5 rows. Each row gives us the values of a quantity in each of 5 bins. So, the columns are numbers, means of some quantity X, and standard deviations of X. We will assume that in each of the bins, X is drawn from a Normal distribution, and so characterizing the mean and the ...
import numpy as np import pandas as pd # Code to generate the toy example (let us not worry how this code works) nums = np.arange(1000, 6000, 1000) \ + np.round(np.random.RandomState(0).normal(0., 200., size=5,)).astype(int) df = pd.DataFrame(dict(Numbers=nums, meanX=np.power(nums, 0.5)/5., stdX=np.power(nums, 0.1))) df
examples/RandomSamplingFromSequences.ipynb
SN-Isotropy/Isotropy
mit
Step 1 To answer the question, we will assume that the number of objects in each bin relative to the total stays the same when we change the total number. So first:
df['frequencies'] = df.Numbers / df.Numbers.sum() df
examples/RandomSamplingFromSequences.ipynb
SN-Isotropy/Isotropy
mit
Step 2 Now, given a total number of objects Nobj, we can find approximately how many we expect in each bin = Nobj * frequencies (but that might be a fraction, so we will round it to an integer). Let us try Nobj = 50000.
numObjectsPerBin = np.round(df.frequencies * 50000).astype(int) print(numObjectsPerBin)
examples/RandomSamplingFromSequences.ipynb
SN-Isotropy/Isotropy
mit
We can check that this matches our numbers (as it obviously must) when Nobj equals the total number of objects in our toy example:
np.round(df.frequencies * df.Numbers.sum()).astype(int)
examples/RandomSamplingFromSequences.ipynb
SN-Isotropy/Isotropy
mit
Step 3 Now, for each bin, we might want to draw numbers from a normal distribution with size = the number of objects in that bin. For each bin this is now easy (some syntax for accessing the ith row and two columns by name from a dataframe is necessary, but fairly intuitive):
m, s = df.loc[0, ['meanX', 'stdX']] # Now the mean of the 0th bin is assigned to m, and the std to s X = np.random.normal(m, s, size=numObjectsPerBin.iloc[0]) print(X)
examples/RandomSamplingFromSequences.ipynb
SN-Isotropy/Isotropy
mit
So, what we need to do is loop through the bins, and keep appending X to some list
XVals = [] for i in range(len(df)): m, s = df.loc[i, ['meanX', 'stdX']] # We will convert the numpy array to a list, but that may not be necessary X = np.random.normal(m, s, size=numObjectsPerBin.iloc[i]).tolist() XVals.append(X) XVals
examples/RandomSamplingFromSequences.ipynb
SN-Isotropy/Isotropy
mit
XVals is a list of lists. The 0th element of XVals is a list which has all of the Xs sampled in the 0th bin, and likewise. We can check that the frequencies match up. map is a useful function to know about (but not essential; you can use for loops, or preferably list comprehensions, in its place). But here is what it does: ...
np.array(list(map(len, XVals))) / float(sum(map(len, XVals))) totalobjs = sum(map(len, XVals))
examples/RandomSamplingFromSequences.ipynb
SN-Isotropy/Isotropy
mit
And we can find their means and std deviations
list(map(np.mean, XVals)) list(map(np.std, XVals))
examples/RandomSamplingFromSequences.ipynb
SN-Isotropy/Isotropy
mit
Randomness and Reproducibility There was a question about getting different values from each random draw. As guessed, this is expected, but there are times when you want to get the same answer to be able to reproduce older calculations. One of the ways to achieve this is by supplying a seed
seed = 1 rng = np.random.RandomState(seed) rng.normal(0, 1, size= 50)
examples/RandomSamplingFromSequences.ipynb
SN-Isotropy/Isotropy
mit
The next 50 will be different
rng.normal(0, 1, size=50) # But if you want to reproduce the first 50, you can do so by using the same seed rng = np.random.RandomState(seed) rng.normal(0, 1, size=50)
examples/RandomSamplingFromSequences.ipynb
SN-Isotropy/Isotropy
mit
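The seeding behaviour above can be checked directly; this is just a restatement of the NumPy pattern from the cells:

```python
import numpy as np

seed = 1
# Two generators built from the same seed produce identical streams.
first = np.random.RandomState(seed).normal(0, 1, size=50)
second = np.random.RandomState(seed).normal(0, 1, size=50)
assert np.array_equal(first, second)

# A re-seeded generator replays its stream from the beginning.
rng = np.random.RandomState(seed)
rng.normal(0, 1, size=50)          # consume the first 50 draws
rng = np.random.RandomState(seed)  # reset with the same seed
replay = rng.normal(0, 1, size=50)
assert np.array_equal(replay, first)
```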
Word Frequencies Let's look at frequencies of words, bigrams and trigrams in a text. The following function reads lines from a file or URL and splits them into words:
def iterate_words(filename): """Read lines from a file and split them into words.""" with open(filename) as infile: for line in infile: for word in line.split(): yield word.strip()
text_analysis.ipynb
AllenDowney/CompStats
mit
Here's an example using a book. wc is a Counter of words, that is, a dictionary that maps from each word to the number of times it appears:
import os # Originally from https://archive.org/stream/TheFaultInOurStarsJohnGreen/The+Fault+In+Our+Stars+-+John+Green_djvu.txt filename = 'the_fault_in_our_stars.txt' if not os.path.exists(filename): !wget https://raw.githubusercontent.com/AllenDowney/CompStats/master/the_fault_in_our_stars.txt from collections...
text_analysis.ipynb
AllenDowney/CompStats
mit
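Since the download cell is truncated above, here is the Counter pattern on a toy word list (the sentence is made up, standing in for the book):

```python
from collections import Counter

# Hypothetical toy text standing in for the book's word stream.
words = "the fault in our stars the fault the".split()
wc = Counter(words)

# Counter maps each word to the number of times it appears.
assert wc['the'] == 3
assert wc.most_common(1) == [('the', 3)]
```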
Here are the 20 most common words:
wc.most_common(20)
text_analysis.ipynb
AllenDowney/CompStats
mit
Word frequencies in natural languages follow a predictable pattern called Zipf's law (which is an instance of Stigler's law, which is also an instance of Stigler's law). We can see the pattern by lining up the words in descending order of frequency and plotting their counts (6507, 5250, 2707) versus ranks (1st, 2nd, 3r...
def counter_ranks(wc): """Returns ranks and counts as lists.""" return zip(*enumerate(sorted(wc.values(), reverse=True))) ranks, counts = counter_ranks(wc) plt.plot(ranks, counts) plt.xlabel('Rank') plt.ylabel('Count') plt.title('Word count versus rank, linear scale');
text_analysis.ipynb
AllenDowney/CompStats
mit
Huh. Maybe that's not so clear after all. The problem is that the counts drop off very quickly. If we use the highest count to scale the figure, most of the other counts are indistinguishable from zero. Also, there are more than 10,000 words, but most of them appear only a few times, so we are wasting most of the sp...
ranks, counts = counter_ranks(wc) plt.plot(ranks, counts) plt.xlabel('Rank') plt.ylabel('Count') plt.xscale('log') plt.yscale('log') plt.title('Word count versus rank, log-log scale');
text_analysis.ipynb
AllenDowney/CompStats
mit
This (approximately) straight line is characteristic of Zipf's law. n-grams On to the next topic: bigrams and trigrams.
from itertools import tee def pairwise(iterator): """Iterates through a sequence in overlapping pairs. If the sequence is 1, 2, 3, 4, the result is (1, 2), (2, 3), (3, 4). """ a, b = tee(iterator) next(b, None) return zip(a, b)
text_analysis.ipynb
AllenDowney/CompStats
mit
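A quick check of pairwise on a short list (the function is reproduced from the cell above):

```python
from itertools import tee

def pairwise(iterator):
    """Iterate through a sequence in overlapping pairs."""
    a, b = tee(iterator)
    next(b, None)
    return zip(a, b)

pairs = list(pairwise([1, 2, 3, 4]))
assert pairs == [(1, 2), (2, 3), (3, 4)]
```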
bigrams is the histogram of word pairs:
bigrams = Counter(pairwise(iterate_words(filename)))
text_analysis.ipynb
AllenDowney/CompStats
mit
And here are the 20 most common:
bigrams.most_common(20)
text_analysis.ipynb
AllenDowney/CompStats
mit
Similarly, we can iterate over the trigrams:
def triplewise(iterator): a, b, c = tee(iterator, 3) next(b, None) next(c, None) next(c, None) return zip(a, b, c)
text_analysis.ipynb
AllenDowney/CompStats
mit
And make a Counter:
trigrams = Counter(triplewise(iterate_words(filename)))
text_analysis.ipynb
AllenDowney/CompStats
mit
Here are the 20 most common:
trigrams.most_common(20)
text_analysis.ipynb
AllenDowney/CompStats
mit
Markov analysis And now for a little fun. I'll make a dictionary that maps from each word pair to a Counter of the words that can follow.
from collections import defaultdict d = defaultdict(Counter) for a, b, c in trigrams: d[a, b][c] += trigrams[a, b, c]
text_analysis.ipynb
AllenDowney/CompStats
mit
Now we can look up a pair and see what might come next:
d['I', 'said']
text_analysis.ipynb
AllenDowney/CompStats
mit
Here are the most common words that follow "into the":
d['into', 'the'].most_common(10)
text_analysis.ipynb
AllenDowney/CompStats
mit
The following function chooses a random word from the suffixes in a Counter:
import random def choice(counter): """Chooses a random element.""" return random.choice(list(counter.elements())) choice(d['into', 'the'])
text_analysis.ipynb
AllenDowney/CompStats
mit
Given a prefix, we can choose a random suffix:
prefix = 'into', 'the' suffix = choice(d[prefix]) suffix
text_analysis.ipynb
AllenDowney/CompStats
mit
Then we can shift the words and compute the next prefix:
prefix = prefix[1], suffix prefix
text_analysis.ipynb
AllenDowney/CompStats
mit
Repeating this process, we can generate random new text that has the same correlation structure between words as the original:
for i in range(100): suffix = choice(d[prefix]) print(suffix, end=' ') prefix = prefix[1], suffix
text_analysis.ipynb
AllenDowney/CompStats
mit
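The whole pipeline above (trigram counts, suffix map, random walk) can be exercised end to end on a toy word stream. The corpus below is made up, and choice is the same elements-based sampler defined earlier:

```python
import random
from collections import Counter, defaultdict
from itertools import tee

def triplewise(iterator):
    a, b, c = tee(iterator, 3)
    next(b, None)
    next(c, None)
    next(c, None)
    return zip(a, b, c)

# Made-up toy corpus standing in for the book.
corpus = "the cat sat on the cat mat on the cat sat".split()
trigrams = Counter(triplewise(corpus))

# Map each word pair to a Counter of the words that can follow it.
d = defaultdict(Counter)
for a, b, c in trigrams:
    d[a, b][c] += trigrams[a, b, c]

def choice(counter):
    """Choose a random element, weighted by its count."""
    return random.choice(list(counter.elements()))

# Random walk: sample a suffix, then shift the prefix window.
prefix = ('the', 'cat')
generated = []
for _ in range(5):
    suffix = choice(d[prefix])
    generated.append(suffix)
    prefix = prefix[1], suffix
```

Every generated word is drawn in proportion to how often it followed the current pair in the corpus, which is exactly the correlation structure the text describes.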
3. Affine decomposition The parametrized bilinear form $a(\cdot, \cdot; \boldsymbol{\mu})$ is trivially affine. The discrete empirical interpolation method will be used on the forcing term $g(\boldsymbol{x}; \boldsymbol{\mu})$ to obtain an efficient (approximately affine) expansion of $f(\cdot; \boldsymbol{\mu})$.
@DEIM() class Gaussian(EllipticCoerciveProblem): # Default initialization of members def __init__(self, V, **kwargs): # Call the standard initialization EllipticCoerciveProblem.__init__(self, V, **kwargs) # ... and also store FEniCS data structures for assembly assert "subdomain...
tutorials/05_gaussian/tutorial_gaussian_deim.ipynb
mathLab/RBniCS
lgpl-3.0
4. Main program 4.1. Read the mesh for this problem The mesh was generated by the data/generate_mesh.ipynb notebook.
mesh = Mesh("data/gaussian.xml") subdomains = MeshFunction("size_t", mesh, "data/gaussian_physical_region.xml") boundaries = MeshFunction("size_t", mesh, "data/gaussian_facet_region.xml")
tutorials/05_gaussian/tutorial_gaussian_deim.ipynb
mathLab/RBniCS
lgpl-3.0
4.2. Create Finite Element space (Lagrange P1)
V = FunctionSpace(mesh, "Lagrange", 1)
tutorials/05_gaussian/tutorial_gaussian_deim.ipynb
mathLab/RBniCS
lgpl-3.0
4.3. Allocate an object of the Gaussian class
problem = Gaussian(V, subdomains=subdomains, boundaries=boundaries) mu_range = [(-1.0, 1.0), (-1.0, 1.0)] problem.set_mu_range(mu_range)
tutorials/05_gaussian/tutorial_gaussian_deim.ipynb
mathLab/RBniCS
lgpl-3.0
4.4. Prepare reduction with a reduced basis method
reduction_method = ReducedBasis(problem) reduction_method.set_Nmax(20, DEIM=21) reduction_method.set_tolerance(1e-4, DEIM=1e-8)
tutorials/05_gaussian/tutorial_gaussian_deim.ipynb
mathLab/RBniCS
lgpl-3.0
4.5. Perform the offline phase
reduction_method.initialize_training_set(50, DEIM=60) reduced_problem = reduction_method.offline()
tutorials/05_gaussian/tutorial_gaussian_deim.ipynb
mathLab/RBniCS
lgpl-3.0
4.6.1. Perform an online solve
online_mu = (0.3, -1.0) reduced_problem.set_mu(online_mu) reduced_solution = reduced_problem.solve() plot(reduced_solution, reduced_problem=reduced_problem)
tutorials/05_gaussian/tutorial_gaussian_deim.ipynb
mathLab/RBniCS
lgpl-3.0
4.6.2. Perform an online solve with a lower number of DEIM terms
reduced_solution_11 = reduced_problem.solve(DEIM=11) plot(reduced_solution_11, reduced_problem=reduced_problem)
tutorials/05_gaussian/tutorial_gaussian_deim.ipynb
mathLab/RBniCS
lgpl-3.0
4.6.3. Perform an online solve with an even lower number of DEIM terms
reduced_solution_1 = reduced_problem.solve(DEIM=1) plot(reduced_solution_1, reduced_problem=reduced_problem)
tutorials/05_gaussian/tutorial_gaussian_deim.ipynb
mathLab/RBniCS
lgpl-3.0
4.7.1. Perform an error analysis
reduction_method.initialize_testing_set(50, DEIM=60) reduction_method.error_analysis(filename="error_analysis")
tutorials/05_gaussian/tutorial_gaussian_deim.ipynb
mathLab/RBniCS
lgpl-3.0
4.7.2. Perform an error analysis with respect to the exact problem
reduction_method.error_analysis( with_respect_to=exact_problem, filename="error_analysis__with_respect_to_exact")
tutorials/05_gaussian/tutorial_gaussian_deim.ipynb
mathLab/RBniCS
lgpl-3.0
4.7.3. Perform an error analysis with respect to the exact problem, but employing a smaller number of DEIM terms
reduction_method.error_analysis( with_respect_to=exact_problem, DEIM=11, filename="error_analysis__with_respect_to_exact__DEIM_11")
tutorials/05_gaussian/tutorial_gaussian_deim.ipynb
mathLab/RBniCS
lgpl-3.0
Seq2Seq Seq2Seq (Sequence to Sequence) is a many-to-many network in which two neural networks, an encoder and a decoder, work together to transform one sequence into another. The core highlight of this method is that there are no restrictions on the lengths of the source and target sequences. At a high level, the way it works is: ...
SEED = 2222 random.seed(SEED) torch.manual_seed(SEED)
deep_learning/seq2seq/1_torch_seq2seq_intro.ipynb
ethen8181/machine-learning
mit
The next two code chunks: Download the spaCy models for the German and English languages. Create the tokenizer functions, which take in a sentence as input and return the sentence as a list of tokens. These functions can then be passed to torchtext.
# !python -m spacy download de # !python -m spacy download en # the link below contains explanation of how spacy's tokenization works # https://spacy.io/usage/spacy-101#annotations-token spacy_de = spacy.load('de_core_news_sm') spacy_en = spacy.load('en_core_web_sm') def tokenize_de(text: str) -> List[str]: retu...
deep_learning/seq2seq/1_torch_seq2seq_intro.ipynb
ethen8181/machine-learning
mit
The tokenizer is language specific, e.g. it knows that in the English language don't should be tokenized into do not (n't). Another thing to note is that the order of the source sentence is reversed during the tokenization process. The rationale behind this comes from the original seq2seq paper, where they identified ...
source = Field(tokenize=tokenize_de, init_token='<sos>', eos_token='<eos>', lower=True) target = Field(tokenize=tokenize_en, init_token='<sos>', eos_token='<eos>', lower=True)
deep_learning/seq2seq/1_torch_seq2seq_intro.ipynb
ethen8181/machine-learning
mit
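The reversal can be illustrated with a whitespace tokenizer standing in for spaCy (a sketch only; the notebook's tokenize_de and tokenize_en use the loaded spaCy models):

```python
from typing import List

def tokenize_de(text: str) -> List[str]:
    """Whitespace stand-in for the spaCy German tokenizer, order reversed."""
    return text.split()[::-1]

def tokenize_en(text: str) -> List[str]:
    """Whitespace stand-in for the spaCy English tokenizer, order preserved."""
    return text.split()
```

With the real spaCy models the token lists would differ (punctuation and contractions get split off), but the reversal of the source side works the same way.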
Constructing Dataset We've defined the logic for processing our raw text data; now we need to tell the fields what data they should work on. This is where Dataset comes in. The dataset we'll be using is the Multi30k dataset. This is a dataset with ~30,000 parallel English, German and French sentences, each with ~12 words ...
train_data, valid_data, test_data = Multi30k.splits(exts=('.de', '.en'), fields=(source, target)) print(f"Number of training examples: {len(train_data.examples)}") print(f"Number of validation examples: {len(valid_data.examples)}") print(f"Number of testing examples: {len(test_data.examples)}")
deep_learning/seq2seq/1_torch_seq2seq_intro.ipynb
ethen8181/machine-learning
mit
Upon loading the dataset, we can index and iterate over the Dataset like a normal list. Each element in the dataset bundles the attributes of a single record for us. We can index our dataset like a list and then access the .src and .trg attributes to take a look at the tokenized source and target sentences.
# equivalent, albeit more verbose train_data.examples[0].src train_data[0].src train_data[0].trg
deep_learning/seq2seq/1_torch_seq2seq_intro.ipynb
ethen8181/machine-learning
mit
The next missing piece is to build the vocabulary for the source and target languages. That way we can convert our tokenized tokens into integers so that they can be fed into downstream models. Constructing the vocabulary and word to integer mapping is done by calling the build_vocab method of a Field on a dataset. Thi...
source.build_vocab(train_data, min_freq=2) target.build_vocab(train_data, min_freq=2) print(f"Unique tokens in source (de) vocabulary: {len(source.vocab)}") print(f"Unique tokens in target (en) vocabulary: {len(target.vocab)}")
deep_learning/seq2seq/1_torch_seq2seq_intro.ipynb
ethen8181/machine-learning
mit
Constructing Iterator The final step of preparing the data is to create the iterators. Very similar to DataLoader in the standard pytorch package, Iterator in torchtext converts our data into batches, so that they can be fed into the model. These can be iterated on to return a batch of data which will have a src and tr...
BATCH_SIZE = 128 # pytorch boilerplate that determines whether a GPU is present or not, # this determines whether our dataset or model can be moved to a GPU device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # create batches out of the dataset and send them to the appropriate device train_iterator...
deep_learning/seq2seq/1_torch_seq2seq_intro.ipynb
ethen8181/machine-learning
mit
We can list out the first batch; each element of the iterator is a Batch object, and, similar to an element of a Dataset, we can access the fields via its attributes. The next important thing to note is that it is of size [sentence length, batch size], and the longest sentence in the first batch of the source language has ...
# adjustable parameters INPUT_DIM = len(source.vocab) OUTPUT_DIM = len(target.vocab) ENC_EMB_DIM = 256 DEC_EMB_DIM = 256 HID_DIM = 512 N_LAYERS = 2 ENC_DROPOUT = 0.5 DEC_DROPOUT = 0.5
deep_learning/seq2seq/1_torch_seq2seq_intro.ipynb
ethen8181/machine-learning
mit
To define our seq2seq model, we first specify the encoder and decoder separately. Encoder Module
class Encoder(nn.Module): """ Input : - source batch Layer : source batch -> Embedding -> LSTM Output : - LSTM hidden state - LSTM cell state Parameters --------- input_dim : int Input dimension, should equal the source vocab size. emb_dim...
deep_learning/seq2seq/1_torch_seq2seq_intro.ipynb
ethen8181/machine-learning
mit
Decoder Module The decoder accepts a batch of input tokens, the previous hidden states and the previous cell states. Note that in the decoder module we are only decoding one token at a time; the input tokens will always have a sequence length of 1. This is different from the encoder module, where we encode the entire source sen...
class Decoder(nn.Module): """ Input : - first token in the target batch - LSTM hidden state from the encoder - LSTM cell state from the encoder Layer : target batch -> Embedding -- | encoder hidden state ------|--> LSTM -> Linear ...
deep_learning/seq2seq/1_torch_seq2seq_intro.ipynb
ethen8181/machine-learning
mit
Seq2Seq Module For the final part of the implementation, we'll implement the seq2seq model. This will handle: receiving the input/source sentence using the encoder to produce the context vectors using the decoder to produce the predicted output/target sentence The Seq2Seq model takes in an Encoder, Decoder, and a d...
class Seq2Seq(nn.Module): def __init__(self, encoder: Encoder, decoder: Decoder, device: torch.device): super().__init__() self.encoder = encoder self.decoder = decoder self.device = device assert encoder.hid_dim == decoder.hid_dim, \ 'Hidden dimensions of encode...
deep_learning/seq2seq/1_torch_seq2seq_intro.ipynb
ethen8181/machine-learning
mit
Training Seq2Seq We've done the hard work of defining our seq2seq module. The final touch is to specify the training/evaluation loop.
optimizer = optim.Adam(seq2seq.parameters()) # ignore the padding index when calculating the loss PAD_IDX = target.vocab.stoi['<pad>'] criterion = nn.CrossEntropyLoss(ignore_index=PAD_IDX) def train(seq2seq, iterator, optimizer, criterion): seq2seq.train() epoch_loss = 0 for batch in iterator: op...
deep_learning/seq2seq/1_torch_seq2seq_intro.ipynb
ethen8181/machine-learning
mit
Evaluating Seq2Seq
seq2seq.load_state_dict(torch.load('tut1-model.pt')) test_loss = evaluate(seq2seq, test_iterator, criterion) print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')
deep_learning/seq2seq/1_torch_seq2seq_intro.ipynb
ethen8181/machine-learning
mit
Here, we pick a random example from our dataset and print out the original source and target sentences. Then we take a look at the "predicted" target sentence generated by the model.
example_idx = 0 example = train_data.examples[example_idx] print('source sentence: ', ' '.join(example.src)) print('target sentence: ', ' '.join(example.trg)) src_tensor = source.process([example.src]).to(device) trg_tensor = target.process([example.trg]).to(device) print(trg_tensor.shape) seq2seq.eval() with torch.n...
deep_learning/seq2seq/1_torch_seq2seq_intro.ipynb
ethen8181/machine-learning
mit
Setup Kite and Environment The easiest way to create a kite and resource is to use the managers. However, they are both just dictionaries. Required and optional elements of the dictionary are specified in the docstrings for the various objects that use them. You can always create, edit, and overwrite the various parts ...
# using the resource and config managers resource = rm.GetResourceByName() other_resource = rm.MakeResourceByShearAndHref(0.2, 80., 1.075, 8.) base_kite = cm.GetConfigByName() #M600 does NOT SOLVE high winds with roll limits in place #removing those limits for a clean example base_kite.pop('roll_min') base_kite.pop('r...
analysis/force_balance_loop/tutorial/FBL_Tutorial.ipynb
google/makani
apache-2.0
There are several helper functions scattered about to plot things. As an example, we can inspect the aero database to find where the Loyd limit is. The Loyd limit is defined as: $\zeta_{max} = \frac{4}{27}\frac{C_L^3}{C_D^2}$, $v_{a,\text{best power}} \approx v_{k,\text{best power}} \approx \frac{2}{3}\frac{L}{D} v_w$. Derivations w...
zeta, cL, L_over_D, alpha, beta = makani_FBL.calc_loyd_optimums(base_kite) plots = cm.PlotKiteAero(base_kite, keys=['zeta'])
analysis/force_balance_loop/tutorial/FBL_Tutorial.ipynb
google/makani
apache-2.0
Create Path Object (optional) Section is optional as you don't need to know how to create a path object as they are usually created and managed by the higher level object, KiteLoop. KitePath creates and holds all the path definition needed for the FBL model. You can create it by manually making the params, or by using ...
# using config helper function to get args and splatting it into KitePath path_args = cm.GetPathArgsByR_LoopAndMinHeight(base_kite, 100., 130.) print('Path_Args:') pprint.pprint(path_args) print() path = kite_path.KitePath(config=base_kite, **path_args) # pull off one position to use later position = path.positions[0] ...
analysis/force_balance_loop/tutorial/FBL_Tutorial.ipynb
google/makani
apache-2.0
Creating a KitePose object (optional) A KitePose is the base level object to complete a force balance. It is a single point model of a kite. There are many options here, but this section is optional as typical use only uses the higher level objects, which will manage KitePoses automatically. There are 2 solve options: ...
# standard solver # solving with a known aero state # using kite with body coefficient aero model and v_a pose = kite_pose.KitePose(position, resource, base_kite, v_a=50., alpha=5., beta=0., v_w_at_h_ref=7.5, verbose=True) pose.solve(solve_type='unknown_roll') pr...
analysis/force_balance_loop/tutorial/FBL_Tutorial.ipynb
google/makani
apache-2.0
Creating KiteLoop objects KiteLoop objects are a container for a "set" of poses that define an entire loop. The KiteLoop applies accelerations to each pose to make them consistent with the speed strategy applied. Any necessary variable that isn't specified is determined by an optimizer, with a default seed. Alternative...
# make a loop with some options and solve it loop = kite_loop.KiteLoop( resource, base_kite, v_w_at_h_ref=9., verbose=True, opt_params={'tol':0.01, 'constraint_stiffness': 0.01, 'maxiter':700}, vars_to_opt={'v_a': {'param_type': 'spline', 'values'...
analysis/force_balance_loop/tutorial/FBL_Tutorial.ipynb
google/makani
apache-2.0
KiteLoop - Using specific values instead of the optimizer There are several ways to specify values to hold fixed. If all values are specified, the optimizer isn't used at all, and the solution time is very quick (thousandths of a sec). See the example below for formats to specify a particular solution, or the docstri...
loops = [] azims = np.linspace(-0.5, 0, 6) for azim in azims: temp_loop = kite_loop.KiteLoop( resource, base_kite, v_w_at_h_ref=7.5, verbose=True, path_location_params={'azim': azim, 'incl': 0.577}, pose_states_param={'alpha': {'param_type': 'linear_interp', 'v...
analysis/force_balance_loop/tutorial/FBL_Tutorial.ipynb
google/makani
apache-2.0
Using the Plotly Plotting Library The KiteLoop contains a few tools that output the 3D force solution, as well as variables colored by value around the loop. The plotting tool can be found at: mx_modeling/visualizations/power_calcs_plotting_tool/plotter.html Open it directly with your browser, and point it to the files...
# make files for plotly plotter loop.gen_loop_vec_plot_file('test_forces.json') loop.gen_loop_positions_plot_file('test_colors_roll_angle.json', var_to_color='tether_roll_angle')
analysis/force_balance_loop/tutorial/FBL_Tutorial.ipynb
google/makani
apache-2.0
Creating a KitePowerCurve object A KitePowerCurve object creates KiteLoop objects for each wind speed in the range. All the same optimization parameters, options, etc that were available at the loop level are available here as well (opt_params, vars_to_opt, loop_params, path_shape_params, path_location_params), with th...
pc = makani_FBL.KitePowerCurve(resource, base_kite, v_w_at_h_ref_range=(2., 10.), v_w_step=2.) pc.solve() print('Validity of each loop:', pc.valids)
analysis/force_balance_loop/tutorial/FBL_Tutorial.ipynb
google/makani
apache-2.0
Multiple Ways to Get Data There's a ton of data in the KitePowerCurve object, and a lot of ways to get it. Summary-level data is an attribute of the object, and the loop summaries are aggregated into a Dataframe object called: self.data_loops The loops themselves are available in a list at self.loops. You can then pull o...
# 1: access data directly, some as attribute, some in data plt.plot(pc.v_ws_at_h_hub, pc.data_loops['zeta_padmount_avg_time']) # 2: use dataframe tools pc.data_loops.plot(y='zeta_padmount_avg_time') # 3: use built in plotting helper functions pc.plot_loop_data(ys=['zeta_padmount_avg_time']) pc.plot_pose_data_as_surf(...
analysis/force_balance_loop/tutorial/FBL_Tutorial.ipynb
google/makani
apache-2.0
Putting it all together This is the minimum set of things needed to calculate a power curve. This example has NOT removed the roll limits, which is why the power curve has a big dip: when invalid solutions are found, the loop inclination is raised until it works, but this is a big performance hit.
# we need a kite m600 = cm.GetConfigByName() # we need a resource china_lake = rm.GetResourceByName('CL_nom') # then we make and solve a power curve m600pc = makani_FBL.KitePowerCurve(china_lake, m600) m600pc.solve() # then we do things with it m600pc.plot_power_curve()
analysis/force_balance_loop/tutorial/FBL_Tutorial.ipynb
google/makani
apache-2.0
Get the connectivity (spatial structure)
from sklearn.feature_extraction.image import grid_to_graph from rena import weighted_connectivity_graph connectivity_ward = grid_to_graph(n_x, n_y, 1) mask = np.ones((n_x, n_y)) connectivity_rena = weighted_connectivity_graph(X_data, n_features=X.shape[1], mask=mask)
Example_Faces.ipynb
ahoyosid/ReNA
bsd-3-clause
Clustering
import time from sklearn.cluster import AgglomerativeClustering from rena import recursive_nearest_agglomeration n_clusters = 150 ward = AgglomerativeClustering(n_clusters=n_clusters, connectivity=connectivity_ward, linkage='ward') ti_ward = time.time() ...
Example_Faces.ipynb
ahoyosid/ReNA
bsd-3-clause
Results visualization
%matplotlib inline import matplotlib.pyplot as plt fig, axx = plt.subplots(3, 4, **{'figsize': (10, 5)}) plt.gray() for i in range(4): axx[0, i].imshow(X[i + 30].reshape(n_x, n_y)) axx[0, i].set_axis_off() axx[0, 0].set_title('Original') axx[1, i].imshow(X_approx_ward[i + 30].reshape(n_x, n_y)) ax...
Example_Faces.ipynb
ahoyosid/ReNA
bsd-3-clause
2 - Outline of the Assignment You will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions that will walk you through the steps needed: Convolution functions, including: Zero Padding Convolve window Convolution forward Convolution bac...
# GRADED FUNCTION: zero_pad def zero_pad(X, pad): """ Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image, as illustrated in Figure 1. Argument: X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images pad -- i...
deep-learnining-specialization/4. Convolutional Neural Networks/resources/Convolution model - Step by Step - v1.ipynb
diegocavalca/Studies
cc0-1.0
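One way to write zero_pad is with np.pad; this is a sketch consistent with the docstring, not the graded solution:

```python
import numpy as np

def zero_pad(X, pad):
    """Pad the height and width of a batch of images with zeros.

    X has shape (m, n_H, n_W, n_C); only axes 1 and 2 are padded.
    """
    return np.pad(X, ((0, 0), (pad, pad), (pad, pad), (0, 0)),
                  mode='constant', constant_values=0)

x = np.random.randn(4, 3, 3, 2)
x_pad = zero_pad(x, 2)
assert x_pad.shape == (4, 7, 7, 2)   # 3 + 2*2 = 7 on height and width
assert np.all(x_pad[:, :2, :, :] == 0)  # the new border is all zeros
```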
Expected Output: <table> <tr> <td> **x.shape**: </td> <td> (4, 3, 3, 2) </td> </tr> <tr> <td> **x_pad.shape**: </td> <td> (4, 7, 7, 2) </td> </tr> <tr> <td> **x[1...
# GRADED FUNCTION: conv_single_step def conv_single_step(a_slice_prev, W, b): """ Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation of the previous layer. Arguments: a_slice_prev -- slice of input data of shape (f, f, n_C_prev) W -- Weight ...
deep-learnining-specialization/4. Convolutional Neural Networks/resources/Convolution model - Step by Step - v1.ipynb
diegocavalca/Studies
cc0-1.0
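The single convolution step reduces to an elementwise product, a sum, and a bias. A minimal sketch with the shapes from the docstring above, on hand-picked values so the result is easy to verify:

```python
import numpy as np

def conv_single_step_sketch(a_slice_prev, W, b):
    # Elementwise product of the input slice and the filter,
    # summed over all entries, plus the scalar bias.
    return np.sum(a_slice_prev * W) + float(b)

a_slice = np.ones((4, 4, 3))       # 48 entries of 1.0
W = np.full((4, 4, 3), 0.5)        # filter of constant 0.5
b = np.array([[[1.]]])             # scalar bias, shape (1, 1, 1)
print(conv_single_step_sketch(a_slice, W, b))  # 25.0  (48 * 0.5 + 1)
```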
Expected Output: <table> <tr> <td> **Z** </td> <td> -23.1602122025 </td> </tr> </table> 3.3 - Convolutional Neural Networks - Forward pass In the forward pass, you will take many filters and convolve them on the input. Each 'convolution' gives you a 2D m...
# GRADED FUNCTION: conv_forward def conv_forward(A_prev, W, b, hparameters): """ Implements the forward propagation for a convolution function Arguments: A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev) W -- Weights, numpy array of shap...
deep-learnining-specialization/4. Convolutional Neural Networks/resources/Convolution model - Step by Step - v1.ipynb
diegocavalca/Studies
cc0-1.0
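The output height and width of the convolution follow the standard formula floor((n_prev - f + 2*pad) / stride) + 1. A quick helper to check the sizes (the function name is mine):

```python
def conv_output_dim(n_prev, f, pad, stride):
    # floor((n_prev - f + 2*pad) / stride) + 1
    return (n_prev - f + 2 * pad) // stride + 1

print(conv_output_dim(5, 3, 1, 1))  # 5  ('same'-style padding, stride 1)
print(conv_output_dim(7, 3, 0, 2))  # 3
```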
Expected Output: <table> <tr> <td> **Z's mean** </td> <td> 0.155859324889 </td> </tr> <tr> <td> **cache_conv[0][1][2][3]** </td> <td> [-0.20075807 0.18656139 0.41005165] </td> </tr> </table...
# GRADED FUNCTION: pool_forward def pool_forward(A_prev, hparameters, mode = "max"): """ Implements the forward pass of the pooling layer Arguments: A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev) hparameters -- python dictionary containing "f" and "stride" mod...
deep-learnining-specialization/4. Convolutional Neural Networks/resources/Convolution model - Step by Step - v1.ipynb
diegocavalca/Studies
cc0-1.0
Expected Output: <table> <tr> <td> A = </td> <td> [[[[ 1.74481176 1.6924546 2.10025514]]] <br/> [[[ 1.19891788 1.51981682 2.18557541]]]] </td> </tr> <tr> <td> A = </td> <td> [[[[-0.09498456 0.11180064 -0.14263511]]] <br/> [[[-...
def conv_backward(dZ, cache): """ Implement the backward propagation for a convolution function Arguments: dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C) cache -- cache of values needed for the conv_backward(), output of conv...
deep-learnining-specialization/4. Convolutional Neural Networks/resources/Convolution model - Step by Step - v1.ipynb
diegocavalca/Studies
cc0-1.0
Expected Output: <table> <tr> <td> **dA_mean** </td> <td> 9.60899067587 </td> </tr> <tr> <td> **dW_mean** </td> <td> 10.5817412755 </td> </tr> <tr> <td> **db_mean** ...
def create_mask_from_window(x): """ Creates a mask from an input matrix x, to identify the max entry of x. Arguments: x -- Array of shape (f, f) Returns: mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x. """ ###...
deep-learnining-specialization/4. Convolutional Neural Networks/resources/Convolution model - Step by Step - v1.ipynb
diegocavalca/Studies
cc0-1.0
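As a sketch of the idea, the mask is just an elementwise comparison against the maximum of the window:

```python
import numpy as np

def create_mask_sketch(x):
    # True only at the position(s) where x attains its maximum.
    return x == np.max(x)

x = np.array([[1., 3.], [2., -1.]])
print(create_mask_sketch(x))
# [[False  True]
#  [False False]]
```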
Expected Output: <table> <tr> <td> **x =** </td> <td> [[ 1.62434536 -0.61175641 -0.52817175] <br> [-1.07296862 0.86540763 -2.3015387 ]] </td> </tr> <tr> <td> **mask =** </td> <td> [[ True False False] <br> [False False False]] </td> </tr> </table> Why do we keep track of the position of the max? It's b...
def distribute_value(dz, shape): """ Distributes the input value in the matrix of dimension shape Arguments: dz -- input scalar shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz Returns: a -- Array of size (n_H, n_W) for which we dis...
deep-learnining-specialization/4. Convolutional Neural Networks/resources/Convolution model - Step by Step - v1.ipynb
diegocavalca/Studies
cc0-1.0
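For average pooling, the gradient is spread uniformly over the window, so the body can be a single `np.full` call. A sketch:

```python
import numpy as np

def distribute_value_sketch(dz, shape):
    # Spread dz equally over an n_H x n_W window for average pooling.
    n_H, n_W = shape
    return np.full(shape, dz / (n_H * n_W))

a = distribute_value_sketch(2., (2, 2))
print(a)
# [[0.5 0.5]
#  [0.5 0.5]]
```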
Expected Output: <table> <tr> <td> distributed_value = </td> <td> [[ 0.5 0.5] <br/> [ 0.5 0.5]] </td> </tr> </table> 5.2.3 Putting it together: Pooling backward You now have everything you need to compute backward propagation on a pooling layer. Exercise: Implement the pool_backward function in both modes ("max"...
def pool_backward(dA, cache, mode = "max"): """ Implements the backward pass of the pooling layer Arguments: dA -- gradient of cost with respect to the output of the pooling layer, same shape as A cache -- cache output from the forward pass of the pooling layer, contains the layer's input and h...
deep-learnining-specialization/4. Convolutional Neural Networks/resources/Convolution model - Step by Step - v1.ipynb
diegocavalca/Studies
cc0-1.0
From the previous notebook we know the function march doesn't need the details of the geometry, but it does need: $s$: the distance along the boundary layer $u_e(x)$: the velocity on the edge of the boundary layer $u_e'(x)$: the tangential derivative of $u_e$ $\nu$: the kinematic viscosity The viscosity is obvious, b...
# split panels into two sections based on the flow velocity def split_panels(panels): # positive velocity defines `top` BL top = [p for p in panels if p.gamma<=0] # negative defines the `bottom` bottom = [p for p in panels if p.gamma>=0] # reverse array so panel[0] is stagnation bottom = b...
lessons/.ipynb_checkpoints/SeparationPrediction-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
Note that we changed the direction of the bottom array so that it runs from the stagnation point to the trailing edge, in accordance with the flow direction. Let's plot them to make sure we got it right:
# plot panels with labels def plot_segment(panels): pyplot.figure(figsize=(10,2)) pyplot.axis([-1.2,1.2,-.3,.3]) for i,p_i in enumerate(panels): p_i.plot() if i%10 == 0: pyplot.scatter(p_i.xc,p_i.yc) pyplot.text(p_i.xc,p_i.yc+0.05, 'panel ['+'%i'%i+'...
lessons/.ipynb_checkpoints/SeparationPrediction-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
Pohlhausen class Now we just need to pull out the distance and velocity data from these Panel arrays and pass it to the march function. To keep this clean we define a new class Pohlhausen.
# Pohlhausen Boundary Layer class class Pohlhausen: def __init__(self,panels,nu): self.u_e = [abs(p.gamma) for p in panels] # tangential velocity self.s = numpy.empty_like(self.u_e) # initialize distance array self.s[0] = panels[0].S for i in range(len(self.s)-1): ...
lessons/.ipynb_checkpoints/SeparationPrediction-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
A few implementation notes: - The distance from the center of panel $i+1$ to panel $i$ is $\Delta s_{i+1} = S_i+S_{i+1}$, therefore $s_{i+1} = s_i+S_i+S_{i+1}$. - The numpy.gradient function is used to get $u_e'$. - Pohlhausen.march calls march from the last notebook and then interpolates linearly to get values at ...
circle = make_circle(N) # set-up circle solve_gamma_kutta(circle) # solve flow top,bottom = split_panels(circle) # split panels nu = 1e-5 # set viscosity top = Pohlhausen(top,nu) # get BL inputs u_e = 2.*numpy.sin(top.s) # analytic u_e du_e = 2.*num...
lessons/.ipynb_checkpoints/SeparationPrediction-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
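The two implementation notes above can be checked on toy numbers. Here `S` is a hypothetical array of panel half-lengths, not data from the actual geometry:

```python
import numpy as np

# Hypothetical panel half-lengths.
S = np.array([0.1, 0.1, 0.2, 0.2])

# Distance along the boundary layer: s_{i+1} = s_i + S_i + S_{i+1}.
s = np.empty_like(S)
s[0] = S[0]
for i in range(len(s) - 1):
    s[i + 1] = s[i] + S[i] + S[i + 1]

# Tangential derivative of the edge velocity via numpy.gradient,
# which accepts the non-uniform sample positions in s.
u_e = 2. * np.sin(s)
du_e = np.gradient(u_e, s)
```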
Those look very good. Now let's march and look at $\delta$ and the separation point.
top.march() # solve the boundary layer flow i = top.iSep+2 # last point to plot # plot the boundary layer thickness and separation point pyplot.ylabel(r'$\delta$', fontsize=16) pyplot.xlabel(r'$s$', fontsize=16) pyplot.plot(top.s[:i],top.delta[:i],lw=2) pyplot.scatter(top.s_sep,top.delta_sep, s=100, c='r')...
lessons/.ipynb_checkpoints/SeparationPrediction-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
Same answer as the previous notebook. Good. Now that we know the code is working, let's write a function to set up, solve, and plot the separation points for the boundary layer flow.
def solve_plot_boundary_layers(panels,alpha=0,nu=1e-5): # split the panels top_panels,bottom_panels = split_panels(panels) # Set up and solve the top boundary layer top = Pohlhausen(top_panels,nu) top.march() # Set up and solve the bottom boundary layer bottom = Pohlhausen(bottom_pane...
lessons/.ipynb_checkpoints/SeparationPrediction-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
The red and green dots mark the separation point for the top and bottom boundary layer, respectively. Separation occurs soon after the flow begins to decelerate. Physically, the boundary layer loses energy to friction as it travels over the front of the body (remember how large $C_F$ was?) and cannot cope with the adv...
def predict_jukowski_separation(t_c,alpha=0,N=128): # set dx to get the correct t/c foil = make_jukowski(N,dx=t_c-0.019) # find and print t/c x0 = foil[N//2].xc c = foil[0].xc-x0 t = 2.*numpy.max([p.yc for p in foil]) print("t/c = "+"%.3f"%(t/c)) # solve potential flow and boundary laye...
lessons/.ipynb_checkpoints/SeparationPrediction-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
Quiz 4 We know $\nu$ doesn't impact separation. How can you move the separation points? Change the foil thickness Change the angle of attack Change the resolution We can make sure the behavior above is correct by validating against the analytic solution for simple geometries. Here is a summary figure from Chapter 3 ...
predict_jukowski_separation(t_c=0.15)
lessons/.ipynb_checkpoints/SeparationPrediction-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
The $t/c=0.15$ case matches very well with Hoerner's picture.
predict_jukowski_separation(t_c=0.17)
lessons/.ipynb_checkpoints/SeparationPrediction-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
Quiz 5 What could be the cause of the ~$15\%$ discrepancy in the $t/c=0.17$ case? Error in Hoerner Error in Pohlhausen boundary layer ODE Error in numerical method (VortexPanel, BoundaryLayer, etc) Ellipse validation Let's see how we fare in the ellipse cases. From the Hoerner image I estimate: $t/c$| 1/2 | 1/4 | 1/8...
def predict_ellipse_separation(t_c,N=128,alpha=0): ellipse = make_circle(N,t_c) print("t/c = "+"%.3f"%(t_c)) # solve potential flow and boundary layer evolution solve_gamma_kutta(ellipse,alpha) top,bottom = solve_plot_boundary_layers(ellipse,alpha) # print message print ("Separation at x/c ...
lessons/.ipynb_checkpoints/SeparationPrediction-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
So I get the feeling Hoerner has a typo... that's the first one I've found. Pressure force estimates Now that we can predict the separation point, we can make non-zero pressure force estimates. The pressure force on the body is $$\vec F_p = \oint_{\cal S} p \hat n ds$$ where $\cal S$ is the body surface and $\hat n$ is...
# your code here
lessons/.ipynb_checkpoints/SeparationPrediction-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
Ignore the line below - it just loads the style sheet.
from IPython.core.display import HTML def css_styling(): styles = open('../styles/custom.css', 'r').read() return HTML(styles) css_styling()
lessons/.ipynb_checkpoints/SeparationPrediction-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
Scenario:<br> Observation data of species (when and where a given species is observed) is typical in biodiversity studies. Large international initiatives support the collection of this data by volunteers, e.g. iNaturalist. Thanks to initiatives like GBIF, much of this data is also openly available. You decide to sha...
survey_data = pd.read_csv("data/surveys.csv") survey_data.head()
notebooks/case2_observations_processing.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
<div class="alert alert-success"> **EXERCISE 1** - How many individual records (occurrences) does the survey data set contain? </div>
# %load _solutions/case2_observations_processing1.py
notebooks/case2_observations_processing.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
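On a toy frame, the number of records is simply the length of the DataFrame (equivalently, `df.shape[0]`). This sketch uses a hypothetical mini survey table rather than the real data set:

```python
import pandas as pd

# Hypothetical mini survey table.
df = pd.DataFrame({"record_id": [1, 2, 3], "sex_char": ["M", "F", "Z"]})
print(len(df), df.shape[0])  # 3 3
```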
Adding the data source information as a static column For convenience when this data set is combined with other datasets, we first add a column of static values, defining the datasetName of this particular data set:
datasetname = "Ecological Archives E090-118-D1."
notebooks/case2_observations_processing.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
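Assigning a scalar to a new column broadcasts that value to every row. A sketch on hypothetical data:

```python
import pandas as pd

df = pd.DataFrame({"record_id": [1, 2, 3]})
# The scalar is broadcast: every row gets the same datasetName value.
df["datasetName"] = "Ecological Archives E090-118-D1."
print(df["datasetName"].nunique())  # 1
```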
Adding this static value as a new column datasetName: <div class="alert alert-success"> **EXERCISE 2** Add a new column, `datasetName`, to the survey data set with `datasetname` as value for all of the records (static value for the entire data set) <details><summary>Hints</summary> - When a column does not exist, a...
# %load _solutions/case2_observations_processing2.py
notebooks/case2_observations_processing.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
Cleaning the sex_char column into a DwC column called sex <div class="alert alert-success"> **EXERCISE 3** - Get a list of the unique values for the column `sex_char`. <details><summary>Hints</summary> - To find the unique values, look for a function called `unique` (remember `SHIFT`+`TAB` combination to explore th...
# %load _solutions/case2_observations_processing3.py
notebooks/case2_observations_processing.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
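`Series.unique` returns the distinct values in order of appearance. On toy data:

```python
import pandas as pd

s = pd.Series(["M", "F", "Z", "M", "F"])
print(s.unique())  # ['M' 'F' 'Z']
```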
So, apparently, more information is provided in this column, whereas according to the metadata, the sex information should be either M (male) or F (female). We will create a column named sex and convert the symbols to the corresponding sex, taking into account the following mapping of the values (see metad...
survey_data = survey_data.rename(columns={'sex_char': 'verbatimSex'})
notebooks/case2_observations_processing.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
<div class="alert alert-success"> **EXERCISE 4** - Express the mapping of the values (e.g. `M` -> `male`) into a Python dictionary object with the variable name `sex_dict`. `Z` values correspond to _Not a Number_, which can be defined as `np.nan`. - Use the `sex_dict` dictionary to replace the values in the `verbatim...
# %load _solutions/case2_observations_processing4.py # %load _solutions/case2_observations_processing5.py
notebooks/case2_observations_processing.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
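The mapping can be expressed as a dict and applied with `Series.replace`, with `Z` becoming `np.nan` as stated above. A sketch on toy data (only the M/F/Z values named in the exercise are included here):

```python
import numpy as np
import pandas as pd

sex_dict = {"M": "male", "F": "female", "Z": np.nan}
verbatim = pd.Series(["M", "F", "Z", "M"])
sex = verbatim.replace(sex_dict)  # dict keys matched against values
print(sex.tolist())  # ['male', 'female', nan, 'male']
```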
Checking the current frequency of values of the resulting sex column (this should result in the values male, female and nan):
survey_data["sex"].unique()
notebooks/case2_observations_processing.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause