Vectorizing functions As mentioned several times by now, to get good performance we should avoid looping over the elements of our vectors and matrices, and instead use vectorized algorithms. The first step in converting a scalar algorithm to a vectorized one is to make sure the functions we write work with vector inputs.
import numpy as np
import traceback

def theta(x):
    """
    Scalar implementation of the Heaviside step function.
    """
    if x >= 0:
        return 1
    else:
        return 0

try:
    theta(np.array([-3, -2, -1, 0, 1, 2, 3]))
except Exception as e:
    print(traceback.format_exc())
notebooks/intro-numpy.ipynb
AlJohri/DAT-DC-12
mit
OK, that didn't work because we didn't write the theta function so that it can handle a vector input... To get a vectorized version of theta we can use the NumPy function vectorize, which in many cases can automatically vectorize a function:
theta_vec = np.vectorize(theta)

%%time
theta_vec(np.array([-3, -2, -1, 0, 1, 2, 3]))
We can also implement the function to accept a vector input from the beginning (requires more effort but might give better performance):
def theta(x):
    """
    Vector-aware implementation of the Heaviside step function.
    """
    return 1 * (x >= 0)

%%time
theta(np.array([-3, -2, -1, 0, 1, 2, 3]))

# still works for scalars as well
theta(-1.2), theta(2.6)
Using arrays in conditions When using arrays in conditions, for example in if statements and other boolean expressions, one needs to use any or all, which requires that any or all of the elements in the array evaluate to True:
M

if (M > 5).any():
    print("at least one element in M is larger than 5")
else:
    print("no element in M is larger than 5")

if (M > 5).all():
    print("all elements in M are larger than 5")
else:
    print("not all elements in M are larger than 5")
Type casting Since NumPy arrays are statically typed, the type of an array does not change once it is created. But we can explicitly cast an array of one type to another using the astype method (see also the similar asarray function). This always creates a new array of the new type:
M.dtype

M2 = M.astype(float)
M2
M2.dtype

M3 = M.astype(bool)
M3
Further reading

- http://numpy.scipy.org - Official NumPy documentation
- http://scipy.org/Tentative_NumPy_Tutorial - Official NumPy quickstart tutorial (highly recommended)
- http://www.scipy-lectures.org/intro/numpy/index.html - SciPy Lectures: Lecture 1.3

Versions
%reload_ext version_information %version_information numpy
Chain Rule Consider $F = f(\mathbf{a},\mathbf{g}(\mathbf{b},\mathbf{h}(\mathbf{c}, \mathbf{i})))$, where $\mathbf{a},\mathbf{b},\mathbf{c}$ are the weights and $\mathbf{i}$ is the input. From the standpoint of $\mathbf{g}$, to update the weights we want to compute $\frac{\partial F}{\partial b_i}$. What do we need? By the chain rule, $\frac{\partial F}{\partial b_i} = \sum_j \frac{\partial F}{\partial g_j}\frac{\partial g_j}{\partial b_i}$
# reference solution: the various functions and their derivatives
%run -i solutions/ff_funcs.py

# reference solution: computing the loss
%run -i solutions/ff_compute_loss2.py
Week11/DIY_AI/FeedForward-Backpropagation.ipynb
tjwei/HackNTU_Data_2017
mit
$ \frac{\partial L}{\partial d} = \sigma(CU+d)^T - p^T$ $ \frac{\partial L}{\partial C } = (\sigma(Z) - p) U^T$ $ \frac{\partial L}{\partial b_i } = ((\sigma(Z) - p)^T C)_i f'(Ax+b)_i$ $ \frac{\partial L}{\partial A_{i,j} } = ((\sigma(Z) - p)^T C)_i f'(Ax+b)_i x_j$
# compute the gradient
%run -i solutions/ff_compute_gradient.py

# update the weights and compute the new loss
%run -i solutions/ff_update.py
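As a sketch of what the solutions files presumably compute (shapes, data, and the cross-entropy loss here are assumptions made for illustration, and relu/softmax are redefined locally), the four gradient formulas above translate directly into NumPy for a network $q = \mathrm{softmax}(C\,\mathrm{relu}(Ax+b)+d)$ with one-hot target $p$:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
x = rng.normal(size=4)                      # input (hypothetical size)
A, b = rng.normal(size=(3, 4)), rng.normal(size=3)
C, d = rng.normal(size=(2, 3)), rng.normal(size=2)
p = np.array([1.0, 0.0])                    # one-hot target

U = relu(A @ x + b)
Z = C @ U + d
q = softmax(Z)

# the gradient formulas above, term by term
grad_d = q - p                              # dL/dd
grad_C = np.outer(q - p, U)                 # dL/dC
f_prime = (A @ x + b > 0).astype(float)     # relu'(Ax+b)
grad_b = ((q - p) @ C) * f_prime            # dL/db_i = ((q-p)^T C)_i f'(Ax+b)_i
grad_A = np.outer(grad_b, x)                # dL/dA_ij = dL/db_i * x_j
```

With $L = -\sum_k p_k \log q_k$, these match finite-difference gradients.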
Exercise: run 20000 iterations of random training
%matplotlib inline
import matplotlib.pyplot as plt

# reference solution
%run -i solutions/ff_train_mod3.py
plt.plot(L_history);

# test the training result
for i in range(16):
    x = Vector(i % 2, (i >> 1) % 2, (i >> 2) % 2, (i >> 3) % 2)
    y = i % 3
    U = relu(A @ x + b)
    q = softmax(C @ U + d)
    print(q.argmax(), y)
Exercise: detecting a tic-tac-toe win
def truth(x):
    x = x.reshape(3, 3)
    return int(x.all(axis=0).any() or x.all(axis=1).any()
               or x.diagonal().all() or x[::-1].diagonal().all())

%run -i solutions/ff_train_ttt.py
plt.plot(accuracy_history);
Introducing ReLU The ReLU function is defined as $f(x) = \max(0, x).$ [1] A smooth approximation to the rectifier is the analytic function $f(x) = \ln(1 + e^x)$, which is called the softplus function. The derivative of softplus is $f'(x) = e^x / (e^x + 1) = 1 / (1 + e^{-x})$, i.e. the logistic function. [1] http://www....
from keras.models import Sequential from keras.layers.core import Dense from keras.optimizers import SGD nb_classes = 10 # FC@512+relu -> FC@512+relu -> FC@nb_classes+softmax # ... your Code Here # %load ../solutions/sol_321.py from keras.models import Sequential from keras.layers.core import Dense from keras.optim...
keras-notebooks/FCNN/3.1 Hidden Layer Representation and Embeddings.ipynb
infilect/ml-course1
mit
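These identities are easy to check numerically; a small NumPy sketch (independent of the Keras code in this notebook):

```python
import numpy as np

x = np.linspace(-5, 5, 11)

relu = np.maximum(0, x)                # f(x) = max(0, x)
softplus = np.log1p(np.exp(x))         # ln(1 + e^x), smooth approximation
logistic = 1 / (1 + np.exp(-x))        # 1 / (1 + e^{-x})

# the derivative of softplus is the logistic function
h = 1e-6
num_deriv = (np.log1p(np.exp(x + h)) - np.log1p(np.exp(x - h))) / (2 * h)
print(np.allclose(num_deriv, logistic, atol=1e-4))  # True
```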
Data preparation (keras.dataset) We will train our model on the MNIST dataset, which consists of 60,000 28x28 grayscale images of the 10 digits, along with a test set of 10,000 images. Since this dataset is provided with Keras, we just ask the keras.datasets module for the training and test data. We will: download the dat...
from keras.datasets import mnist
from keras.utils import np_utils

(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train.shape

X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)
X_train = X_train.astype("float32")
X_test = X_test.astype("float32")

# Put everything on grayscale
X_tra...
Split Training and Validation Data
from sklearn.model_selection import train_test_split X_train, X_val, Y_train, Y_val = train_test_split(X_train, Y_train) X_train[0].shape plt.imshow(X_train[0].reshape(28, 28)) print(np.asarray(range(10))) print(Y_train[0].astype('int')) plt.imshow(X_val[0].reshape(28, 28)) print(np.asarray(range(10))) print(Y_va...
Training Having defined and compiled the model, it can be trained using the fit function. We also specify a validation dataset to monitor validation loss and accuracy.
network_history = model.fit(X_train, Y_train, batch_size=128, epochs=2, verbose=1, validation_data=(X_val, Y_val))
Plotting Network Performance Trend The return value of the fit function is a keras.callbacks.History object which contains the entire history of training/validation loss and accuracy, for each epoch. We can therefore plot the behaviour of loss and accuracy during the training phase.
import matplotlib.pyplot as plt %matplotlib inline def plot_history(network_history): plt.figure() plt.xlabel('Epochs') plt.ylabel('Loss') plt.plot(network_history.history['loss']) plt.plot(network_history.history['val_loss']) plt.legend(['Training', 'Validation']) plt.figure() plt.xla...
After 2 epochs, we get a ~88% validation accuracy. If you increase the number of epochs, you will definitely get better results. Quick Exercise: Try increasing the number of epochs (if your hardware allows it).
# Your code here
model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.001),
              metrics=['accuracy'])
network_history = model.fit(X_train, Y_train, batch_size=128,
                            epochs=2, verbose=1,
                            validation_data=(X_val, Y_val))
Introducing the Dropout Layer Dropout layers have the very specific function of dropping out a random set of activations in that layer by setting them to zero in the forward pass. Simple as that. It helps to avoid overfitting, but it has to be used only at training time and not at test time. ```python keras.layers.core...
from keras.layers.core import Dropout

## Pls note **where** the `K.in_train_phase` is actually called!!
Dropout??

from keras import backend as K
K.in_train_phase?
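To see the mechanism without Keras, here is a minimal NumPy sketch of (inverted) dropout: at training time a random mask zeroes a fraction of the activations and the survivors are rescaled so the expected value is preserved, while at test time activations pass through unchanged. The rate and shapes are illustrative.

```python
import numpy as np

def dropout(activations, rate=0.2, training=True, rng=None):
    """Inverted dropout: zero out a random `rate` fraction, rescale the rest."""
    if not training or rate == 0.0:
        return activations                    # identity at test time
    rng = rng or np.random.default_rng()
    keep_prob = 1.0 - rate
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob     # rescaling keeps the expected value

a = np.ones((4, 5))
print(dropout(a, rate=0.5, training=False))   # passes through unchanged
print(dropout(a, rate=0.5, training=True, rng=np.random.default_rng(0)))
```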
Exercise: Try modifying the previous example network adding a Dropout layer:
from keras.layers.core import Dropout # FC@512+relu -> DropOut(0.2) -> FC@512+relu -> DropOut(0.2) -> FC@nb_classes+softmax # ... your Code Here # %load ../solutions/sol_312.py network_history = model.fit(X_train, Y_train, batch_size=128, epochs=4, verbose=1, validation_data=(X_val, Y_va...
If you continue training, at some point the validation loss will start to increase: that is when the model starts to overfit. It is always necessary to monitor training and validation loss during the training of any kind of Neural Network, either to detect overfitting or to evaluate the behaviour of the model (any cl...
# %load solutions/sol23.py from keras.callbacks import EarlyStopping early_stop = EarlyStopping(monitor='val_loss', patience=4, verbose=1) model = Sequential() model.add(Dense(512, activation='relu', input_shape=(784,))) model.add(Dropout(0.2)) model.add(Dense(512, activation='relu')) model.add(Dropout(0.2)) model.ad...
Inspecting Layers
# We already used `summary` model.summary()
model.layers is iterable
print('Model Input Tensors: ', model.input, end='\n\n')
print('Layers - Network Configuration:', end='\n\n')
for layer in model.layers:
    print(layer.name, layer.trainable)
    print('Layer Configuration:')
    print(layer.get_config(), end='\n{}\n'.format('----' * 10))
print('Model Output Tensors: ', model.output)
Extract hidden layer representation of the given data One simple way to do it is to use the weights of your model to build a new model that is truncated at the layer you want to read. Then you can run the .predict(X_batch) method to get the activations for a batch of inputs.
model_truncated = Sequential() model_truncated.add(Dense(512, activation='relu', input_shape=(784,))) model_truncated.add(Dropout(0.2)) model_truncated.add(Dense(512, activation='relu')) for i, layer in enumerate(model_truncated.layers): layer.set_weights(model.layers[i].get_weights()) model_truncated.compile(los...
Hint: Alternative Method to get activations (Using keras.backend function on Tensors)

```python
def get_activations(model, layer, X_batch):
    activations_f = K.function([model.layers[0].input, K.learning_phase()], [layer.output,])
    activations = activations_f((X_batch, False))
    return activations
```

Generate the Emb...
from sklearn.manifold import TSNE tsne = TSNE(n_components=2) X_tsne = tsne.fit_transform(hidden_features[:1000]) ## Reduced for computational issues colors_map = np.argmax(Y_train, axis=1) X_tsne.shape nb_classes np.where(colors_map==6) colors = np.array([x for x in 'b-g-r-c-m-y-k-purple-coral-lime'.split('-')])...
Using Bokeh (Interactive Chart)
from bokeh.plotting import figure, output_notebook, show output_notebook() p = figure(plot_width=600, plot_height=600) colors = [x for x in 'blue-green-red-cyan-magenta-yellow-black-purple-coral-lime'.split('-')] colors_map = colors_map[:1000] for cl in range(nb_classes): indices = np.where(colors_map==cl) p...
Note: We used the default TSNE parameters. Better results can be achieved by tuning the TSNE hyper-parameters. Exercise 1: Try a different algorithm to create the manifold
from sklearn.manifold import MDS ## Your code here
Exercise 2: Try extracting the Hidden features of the First and the Last layer of the model
## Your code here
## Try using the `get_activations` function relying on the Keras backend

def get_activations(model, layer, X_batch):
    activations_f = K.function([model.layers[0].input, K.learning_phase()], [layer.output,])
    activations = activations_f((X_batch, False))
    return activations
Example. Rolling a fair n-sided die (with n=6).
n = 6 die = list(range(1, n+1)) P = BoxModel(die) RV(P).sim(10000).plot()
docs/common_cards_coins_dice.ipynb
dlsun/symbulate
mit
Example. Flipping a fair coin twice and recording the results in sequence.
P = BoxModel(['H', 'T'], size=2, order_matters=True) P.sim(10000).tabulate(normalize=True)
Example. Unequally likely outcomes on a colored "spinner".
P = BoxModel(['orange', 'brown', 'yellow'], probs=[0.5, 0.25, 0.25]) P.sim(10000).tabulate(normalize = True)
DeckOfCards() is a special case of BoxModel for drawing from a standard deck of 52 cards. By default replace=False. Example. Simulated hands of 5 cards each.
DeckOfCards(size=5).sim(3)
G-force Control As a first test, we'll start from the launchpad, thrust at full throttle until we hit $altitude_{goal} > 100\ \text{meters}$, and then use a proportional gain of $Kp = 0.05$ to keep $gforce \sim 1.0$:

LOCK dthrott_p TO Kp * (1.0 - gforce).
LOCK dthrott TO dthrott_p.
data = loadData('collected_data\\gforce.txt') plotData(data)
KSP_pid_tuning.ipynb
Elucidation/KSP-rocket-hover-controller
mit
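The LOCK statements above amount to a proportional controller on g-force. A toy simulation (simplified 1-D physics with illustrative constants, not the actual KSP craft) reproduces the qualitative behaviour: gforce settles toward 1.0, which zeros acceleration but leaves the velocity we already built up, so altitude keeps climbing:

```python
# toy 1-D rocket: P control of g-force, as in the kOS script (constants are made up)
Kp, g, max_accel, dt = 0.05, 9.81, 30.0, 0.1

thrott, vel, alt = 1.0, 0.0, 0.0
controlling = False
log = []
for _ in range(3000):
    gforce = thrott * max_accel / g            # felt acceleration in g's
    if alt > 100:
        controlling = True                     # controller kicks in past 100 m
    if controlling:
        thrott = min(1.0, max(0.0, thrott + Kp * (1.0 - gforce)))
    vel += (thrott * max_accel - g) * dt
    alt += vel * dt
    log.append((alt, vel, gforce))

print(f"final gforce: {log[-1][2]:.2f}, still climbing: {log[-1][1] > 0}")
```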
Pretty cool! Once it passes 100m altitude the controller starts, and the throttle regulates gforce, bringing it oscillating down around 1g. This zeros our acceleration but not our existing velocity, so the position continues to increase. We could add some derivative gain to damp down the gforce overshoot, but it won't s...
data = loadData('collected_data\\vspeed.txt') plotData(data)
Awesome! The controller drops the velocity to a stable oscillation around 0 m/s, and the position seems to flatten off, but it isn't perfect. Maybe it's because of the oscillations? In the game I can see the engine spurt on and off rhythmically. It seems to try and stay at roughly 0 m/s, but the position is not 100m and...
# To run in kOS console: RUN hover3(pos0.txt,20,0.05,0). data = loadData('collected_data\\pos0.txt') plotData(data)
Well, we crashed. It turns out an ideal stable oscillation (which proportional-only controllers tend to produce) starting from ground level (around 76m, where the accelerometer is located on the landed craft) would necessarily come back to that point... Let's try adding some derivative gain to damp that out; the derivative ...
# To run in kOS console: RUN hover3(pos1.txt,20,0.05,0). data = loadData('collected_data\\pos1.txt') plotData(data)
Great! The controller burned us up to about 100m and then tried to stay there, but there is quite a lot of bounce; maybe we can tweak our gains some. Let's try gains of $Kp = 0.08,\ \ Kd = 0.04$.
# To run in kOS console: RUN hover3(pos2.txt,20,0.08,0.04). data = loadData('collected_data\\pos2.txt') plotData(data)
Hmm, after trying a few other combinations, it seems like there's a conceptual error here keeping us from getting to a smooth point. We've been trying to build our controller with $thrott = thrott + \Delta thrott$, which means it takes time to overcome our previous throttle, introducing this lag between our current po...
# To run in kOS console: RUN hover4(hover0.txt,60,0.01,0.001). data = loadData('collected_data\\hover0.txt') plotData(data)
It's stably oscillating! This is a good sign, showing our hover setpoint is doing its job, the proportional gain is there, and there's barely any derivative gain. Let's bump up the derivative gain to $Kd = 0.01$.
# To run in kOS console: RUN hover4(hover1.txt,60,0.01,0.01). data = loadData('collected_data\\hover1.txt') plotData(data)
Woohoo! It overshoots a little but stabilizes smoothly at 100m! Great to see this going in the game; it looks a bit like the SpaceX grasshopper. The kOS script used is hover4.ks and these tests are run by calling RUN hover4(hoverN.txt,20,Kp,Kd). Some datasets: hover0.txt hover1.txt hover2.txt hover3.txt. Tweaking gains N...
# To run in kOS console: RUN hover4(hover2.txt,10,0.1,0.1). data = loadData('collected_data\\hover2.txt') plotData(data)
Much faster! What happens if we change the altitude to say 300m?
# To run in kOS console: RUN hover5(hover3.txt,10,0.1,0.1,300). data = loadData('collected_data\\hover3.txt') plotData(data)
Backtest SquareMathLevels Goal: verify the hypothesis that SquareMath Levels work as S/R levels, i.e. the market tends to bounce off them. Verified on statistics for ZN 1min, SML - 30min SQUARE 16 Data preparation Settings for the SquareMath calculation
SQUARE = 128
SQUARE_MULTIPLIER = 1.5  # how many
BARS_BACK_TO_REFERENCE = np.int(np.ceil(SQUARE * SQUARE_MULTIPLIER))

# set higher timeframe for getting SquareMathLevels
MINUTES = 30  # range 0-59
PD_RESAMPLE_RULE = f'{MINUTES}Min'
# set the period of PD_RESAMPLE_RULE will be started. E.g. PD_RESAMPLE_RULE == '30min'...
SquareMath/2020-04-10-SquareMathLevels-Backtest-example-ZN-1min-30M-128.ipynb
vanheck/blog-notes
mit
The data to be analyzed
TICK_SIZE_STR = f'{1/32*0.5}' TICK_SIZE = float(TICK_SIZE_STR) #SYMBOL = 'ZN' TICK_SIZE_STR DATA_FILE = '../../Data/ZN-1s.csv' read_cols = ['Date', 'Time', 'Open', 'High', 'Low', 'Last'] data = pd.read_csv(DATA_FILE, index_col=0, skipinitialspace=True, usecols=read_cols, parse_dates={'Datetime': [0, 1]}) data.rename(...
The maximum high and low over the last BARS_BACK_TO_REFERENCE bars of the higher timeframe. High
# calculate the max high for the current record from its higher-timeframe period
df_helper_gr = df[['High']].groupby(pd.Grouper(freq=PD_RESAMPLE_RULE, base=PD_GROUPER_BASE))
df_helper = df_helper_gr.rolling(PD_RESAMPLE_RULE, min_periods=1).max().dropna()

# cummax() with new index
df_helper['bigCumMaxHigh'] = df_helper.assign(l=df...
Low
# calculate the min low for the current record from its higher-timeframe period
df_helper_gr = df[['Low']].groupby(pd.Grouper(freq=PD_RESAMPLE_RULE, base=PD_GROUPER_BASE))
df_helper = df_helper_gr.rolling(PD_RESAMPLE_RULE, min_periods=1).min().dropna()

# cummin() with new index
df_helper['bigCumMinLow'] = df_helper.assign(l=df_he...
Dropping the helper objects and the NaN records that cannot be analyzed
del df_helper del df_helper_gr df.dropna(inplace=True) df
Computing the SMLevels for each record
from vhat.squaremath.funcs import calculate_octave

SML_INDEXES = np.arange(-2, 10+1, dtype=np.int)  # from -2/8 to +2/8

def round_to_tick_size(values, tick_size):
    return np.round(values / tick_size) * tick_size

def get_smlines(r):
    tick_size = TICK_SIZE
    lowLimit = r.SMLLowLimit
    highLimit = r.SMLHighLim...
Computing SML touches I have to compute the touch against the previous bar's level because of the frame shift.
df['prevSML'] = df.SML.shift()
df.dropna(inplace=True)
df

df['SMLTouch'] = df.apply(lambda r: np.bitwise_and(r.Low<=r.prevSML, r.prevSML<=r.High), axis=1)
df['SMLTouchCount'] = df.SMLTouch.apply(lambda v: sum(v))
df

from dataclasses import dataclass
from typing import List

@dataclass
class Trade:
    tId: int  # ...
Statistics of the results Basic backtest info
print('From:', df.iloc[0].name)
print('To:', df.iloc[-1].name)
print('Time period:', df.iloc[-1].name - df.iloc[0].name)
print('Number of trading days:', df.Close.resample('1D').ohlc().shape[0])
print('Number of fine-timeframe records:', df.shape[0])
Validity of the low timeframe for the backtest - possible introduced error Checking whether the chosen SQUARE on the higher timeframe is sufficient for a backtest on this low timeframe. I.e., if I have Square=32 from the higher timeframe='30min', I can check whether the timeframe='1min' records are suitable for the backtest. If the error were high ...
touchCounts = df.SMLTouchCount.value_counts().to_frame(name='Occurences')
touchCounts['Occ%'] = touchCounts / df.shape[0]*100
print(f'Number of records crossing more than one SML: {(df.SMLTouchCount>1).sum()} cases ({(df.SMLTouchCount>1).sum()/df.shape[0]*100:.3f}%) out of {df.shape[0]} total\n')
touchCounts
Very low SML spread
spread_stats = df.spread.value_counts().to_frame(name='Occurences')
spread_stats['Occ%'] = spread_stats / df.shape[0]*100
spread_stats['Ticks'] = spread_stats.index / TICK_SIZE  # the index must ...
print(f'Number of SML spreads smaller than 2 ticks in a single record: {(df.spread/TICK_SIZE...
Resulting possible error rate on the low TF for the backtest
chybovost = df.spread[(df.spread/TICK_SIZE<2) | (df.SMLTouchCount>1)].shape[0]
print(f'The total error rate on the low timeframe may occur in {chybovost} cases ({chybovost/df.shape[0]*100:.3f}%) out of {df.shape[0]} total')
Validity of the trade results
finishedCount = stats.shape[0]
print('Total finished trades:', finishedCount)

# if there are really many "unrecognizableTrade"s, the SquareMath level resolution is too low (square too small)
unrec_trades = stats.unrecognizableTrade.sum()
print('Unrecognizable trades:', unrec_trades, f'({unrec_trades/finishedCount *100:.3f}%)')
I won't need the unrecognized trades from here on
stats.drop(stats[stats.unrecognizableTrade].index, inplace=True) shorts_mask = stats.lots<0 longs_mask = stats.lots>0 stats.loc[shorts_mask, 'PnL'] = ((stats[shorts_mask].entryPrice - stats[shorts_mask].exitPrice) / TICK_SIZE).round() stats.loc[longs_mask, 'PnL'] = ((stats[longs_mask].exitPrice - stats[longs_mask].en...
Overall results
# masks shorts_mask = stats.lots<0 longs_mask = stats.lots>0 profit_mask = stats.PnL>0 loss_mask = stats.PnL<0 breakeven_mask = stats.PnL==0 total_trades = stats.shape[0] profit_trades_count = stats.PnL[profit_mask].shape[0] loss_trades_count = stats.PnL[loss_mask].shape[0] breakeven_trades_count = stats.PnL[breakeven...
Losing trades
selected_stats = stats[loss_mask] selected_pnl_stats = selected_stats.PnL.value_counts().to_frame(name='PnLOccurences') selected_pnl_stats['Occ%'] = selected_pnl_stats / selected_stats.shape[0]*100 selected_pnl_stats['Ticks'] = selected_pnl_stats.index / TICK_SIZE selected_pnl_stats
Maximum move into profit in losing trades
sns.distplot(selected_stats.runPTicks, color="g");
Move into profit relative to the set PT in losing trades.
sns.distplot(selected_stats.runPTicks/selected_stats.ptTicks, color="g");
Maximum move into loss in losing trades
sns.distplot(selected_stats.runLTicks, color="r"); sns.distplot(selected_stats.runLTicks/selected_stats.slTicks, color="r");
Winning trades
selected_stats = stats[profit_mask] selected_pnl_stats = selected_stats.PnL.value_counts().to_frame(name='PnLOccurences') selected_pnl_stats['Occ%'] = selected_pnl_stats / selected_stats.shape[0]*100 selected_pnl_stats['Ticks'] = selected_pnl_stats.index / TICK_SIZE selected_pnl_stats
PT adjustment in winning trades - maximum move into profit
sns.distplot(selected_stats.runPTicks, color="g");
Move into profit relative to the PT in winning trades.
sns.distplot(selected_stats.runPTicks/selected_stats.ptTicks, color="g");
Maximum move into loss in winning trades
sns.distplot(selected_stats.runLTicks, color="r");
Ratio of the move into loss to the set SL in winning trades
sns.distplot(selected_stats.runLTicks/selected_stats.slTicks, color="r");
Long trades
selected_stats = stats[longs_mask]
print('Number of trades:', selected_stats.shape[0], f'({selected_stats.shape[0]/stats.shape[0]*100:.2f}%) of {stats.shape[0]}')
print('Number of wins:', selected_stats[selected_stats.PnL>0].shape[0], f'({selected_stats[selected_stats.PnL>0].shape[0]/selected_stats.shape[0]*100:.2f}%) of {select...
PT adjustment in losing long trades - maximum move into profit
sns.distplot(selected_stats[selected_stats.PnL<0].runPTicks, color="g");
Move into profit relative to the set PT in losing trades.
sns.distplot(selected_stats[selected_stats.PnL<0].runPTicks/selected_stats[selected_stats.PnL<0].ptTicks, color="g");
SL adjustment in losing trades - maximum move into loss
sns.distplot(selected_stats[selected_stats.PnL<0].runLTicks, color="r");
sns.distplot(selected_stats[selected_stats.PnL<0].runLTicks/selected_stats[selected_stats.PnL<0].slTicks, color="r");  # check
PT adjustment in winning long trades - maximum move into profit
sns.distplot(selected_stats[selected_stats.PnL>0].runPTicks, color="g");
Move into profit relative to the set PT in winning trades.
sns.distplot(selected_stats[selected_stats.PnL>0].runPTicks/selected_stats[selected_stats.PnL>0].ptTicks, color="g");
SL adjustment in winning trades - maximum move into loss
sns.distplot(selected_stats[selected_stats.PnL>0].runLTicks, color="r");
sns.distplot(selected_stats[selected_stats.PnL>0].runLTicks/selected_stats[selected_stats.PnL>0].slTicks, color="r");  # check
SML analysis Total number of entries at the individual SMLs
#smlvl_stats = stats.entrySmLvl.value_counts().to_frame(name='entrySmLvlOcc') smlvl_stats = stats[['entrySmLvl', 'lots']].groupby(['entrySmLvl']).count() smlvl_stats.sort_values(by='lots', ascending=False, inplace=True) smlvl_stats.rename(columns={'lots':'entrySmLvlOcc'}, inplace=True) smlvl_stats['Occ%'] = smlvl_stats...
Entries at the individual levels
sns.barplot(x=smlvl_stats.entrySmLvlOcc.sort_index().index, y=smlvl_stats.entrySmLvlOcc.sort_index());
Number of Buy or Sell entries at each SML
stats.lots.replace({1: 'Long', -1: 'Short'}, inplace=True) smlvl_stats_buy_sell = stats[['entrySmLvl', 'PnL', 'lots']].groupby(['entrySmLvl', 'lots']).count() smlvl_stats_buy_sell.sort_index(ascending=False, inplace=True) smlvl_stats_buy_sell.rename(columns={'PnL':'LongShortCount'}, inplace=True) smlvl_stats_buy_sell ...
Win rate of Long trades at the SMLs
stats['Win'] = profit_mask
stats['Win'] = stats['Win'].mask(~profit_mask)  # groupby will then count only the wins
smlvl_stats_buy_sell['WinCount'] = stats[['entrySmLvl', 'PnL', 'lots', 'Win']].groupby(['entrySmLvl', 'lots', 'Win']).count().droplevel(2)
smlvl_stats_buy_sell['Win%'] = smlvl_stats_buy_sell.WinCount / smlvl_stats_...
Just a check: Win == True, Loss == False
# stats['Win'] = profit_mask # smlvl_stats_buy_sell2 = stats[['entrySmLvl', 'PnL', 'lots', 'Win']].groupby(['entrySmLvl', 'lots', 'Win']).sum() # smlvl_stats_buy_sell2.sort_index(ascending=False, inplace=True) # smlvl_stats_buy_sell2.rename(columns={'PnL':'WinLossCount'}, inplace=True) # smlvl_stats_buy_sell2
Results sorted by win rate:
smlvl_stats_buy_sell.sort_values('Win%', ascending=False)
1. Searching and Printing a List of 50 'Lil' Musicians With "Lil Wayne" and "Lil Kim" there are a lot of "Lil" musicians. Do a search and print a list of 50 that are playable in the USA (or the country of your choice), along with their popularity score.
count = 0
for artist in Lil_artists:
    count += 1
    print(count, ".", artist['name'], "has the popularity of", artis...
homework05/Homework05_Spotify_radhika.ipynb
radhikapc/foundation-homework
mit
2. Genres Most Represented in the Search Results What genres are most represented in the search results? Edit your previous printout to also display a list of their genres in the format "GENRE_1, GENRE_2, GENRE_3". If there are no genres, print "No genres listed".
# What genres are most represented in the search results? Edit your previous printout to also display a list of their genres #in the format "GENRE_1, GENRE_2, GENRE_3". If there are no genres, print "No genres listed". #Tip: "how to join a list Python" might be a helpful search # if len(artist['genres']) == 0 ) # prin...
homework05/Homework05_Spotify_radhika.ipynb
radhikapc/foundation-homework
mit
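The "GENRE_1, GENRE_2" format is exactly what `', '.join(...)` produces, with a fallback for an empty list. A small helper sketching the technique the cell's tip points at (the sample genre lists are invented; `genres` is the field name the notebook uses):

```python
def format_genres(artist):
    """Join an artist's genres as 'GENRE_1, GENRE_2'; fall back when the list is empty."""
    genres = artist.get('genres', [])
    if len(genres) == 0:
        return 'No genres listed'
    return ', '.join(genres)

# Illustrative inputs, not live Spotify data.
print(format_genres({'name': 'Lil Wayne', 'genres': ['hip hop', 'pop rap']}))
print(format_genres({'name': 'Lil Unknown', 'genres': []}))
```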
More Spotify - LIL' GRAPHICS Use Excel, Illustrator or something like https://infogr.am/ to make a graphic about the Lil's, or the Lil's vs. the Biggies. Just a simple bar graph of their various popularities sounds good to me. Link to the Line Graph of Lil's Popularity chart Lil Popularity Graph
Lil_response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&country=US')
Lil_data = Lil_response.json()
#Lil_data
homework05/Homework05_Spotify_radhika.ipynb
radhikapc/foundation-homework
mit
The Second Highest Popular Artist Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating. Is it the same artist who has the largest number of followers?
#Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating.
#Is it the same artist who has the largest number of followers?
name_highest = ""
name_follow = ""
second_high_pop = 0
highest_pop = 0
high_follow = 0
for artist in Lil_artists:
    if (highest_pop < artist['popularity']) & (artist['n...
homework05/Homework05_Spotify_radhika.ipynb
radhikapc/foundation-homework
mit
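The question needs two running maxima in one pass: the top popularity excluding Lil Wayne, and the top follower count. A self-contained sketch with invented numbers (the `followers` -> `total` nesting matches how Spotify artist objects are shaped, but treat it as an assumption here):

```python
# Made-up data; the loop skips Lil Wayne and tracks both maxima in a single pass.
Lil_artists = [
    {'name': 'Lil Wayne',  'popularity': 86, 'followers': {'total': 9000}},
    {'name': 'Lil Yachty', 'popularity': 78, 'followers': {'total': 5000}},
    {'name': 'Lil Kim',    'popularity': 62, 'followers': {'total': 7000}},
]

best_name, best_pop = None, -1
most_followed, max_followers = None, -1
for artist in Lil_artists:
    if artist['name'] == 'Lil Wayne':
        continue  # BESIDES Lil Wayne
    if artist['popularity'] > best_pop:
        best_name, best_pop = artist['name'], artist['popularity']
    if artist['followers']['total'] > max_followers:
        most_followed, max_followers = artist['name'], artist['followers']['total']

print(best_name, best_pop)          # most popular besides Lil Wayne
print(most_followed, max_followers) # not necessarily the same artist
```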
4. List of Lil's More Popular Than Lil' Kim
Lil_artists = Lil_data['artists']['items']
#Print a list of Lil's that are more popular than Lil' Kim.
count = 0
for artist in Lil_artists:
    if artist['popularity'] > 62:
        count += 1
        print(count, artist['name'], "has the popularity of", artist['popularity'])
    #else:
        #print(artist['na...
homework05/Homework05_Spotify_radhika.ipynb
radhikapc/foundation-homework
mit
5.Two Favorite Lils and Their Top Tracks
response = requests.get("https://api.spotify.com/v1/search?query=Lil&type=artist&limit=2&country=US")
data = response.json()
for artist in Lil_artists:
    #print(artist['name'],artist['id'])
    if artist['name'] == "Lil Wayne":
        wayne = artist['id']
        print(artist['name'], "id is", wayne)
    if...
homework05/Homework05_Spotify_radhika.ipynb
radhikapc/foundation-homework
mit
6. Average Popularity of My Fav Musicians (Above) for Their Explicit Songs vs. Their Non-Explicit Songs Will the world explode if a musician swears? Get an average popularity for their explicit songs vs. their non-explicit songs. How many minutes of explicit songs do they have? Non-explicit?
response = requests.get("https://api.spotify.com/v1/artists/" + yachty + "/top-tracks?country=US")
data = response.json()
tracks = data['tracks']
#print(tracks)
#for track in tracks:
    #print(track.keys())
#Get an average popularity for their explicit songs vs. their non-explicit songs.
#How many minutes of explicit...
homework05/Homework05_Spotify_radhika.ipynb
radhikapc/foundation-homework
mit
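Splitting tracks on the `explicit` flag and averaging each side can be sketched offline; the track fields `explicit`, `popularity`, and `duration_ms` match Spotify track objects, but the numbers below are invented:

```python
# Illustrative tracks, not live API data.
tracks = [
    {'explicit': True,  'popularity': 70, 'duration_ms': 210000},
    {'explicit': True,  'popularity': 60, 'duration_ms': 180000},
    {'explicit': False, 'popularity': 50, 'duration_ms': 240000},
]

explicit = [t for t in tracks if t['explicit']]
clean    = [t for t in tracks if not t['explicit']]

avg_explicit = sum(t['popularity'] for t in explicit) / len(explicit)
avg_clean    = sum(t['popularity'] for t in clean) / len(clean)
# duration_ms -> minutes: divide by 1000 (seconds), then by 60.
explicit_minutes = sum(t['duration_ms'] for t in explicit) / 1000 / 60
clean_minutes    = sum(t['duration_ms'] for t in clean) / 1000 / 60

print(avg_explicit, avg_clean)
print(explicit_minutes, clean_minutes)
```

A real notebook should guard against an empty side (no explicit or no clean tracks) before dividing.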
7a. Number of Biggies and Lils Since we're talking about Lils, what about Biggies? How many total "Biggie" artists are there? How many total "Lil"s? If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies?
#How many total "Biggie" artists are there? How many total "Lil"s?
#If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies?
biggie_response = requests.get('https://api.spotify.com/v1/search?query=biggie&type=artist&country=US')
biggie_data = biggie_respons...
homework05/Homework05_Spotify_radhika.ipynb
radhikapc/foundation-homework
mit
7b. Time to Download All Information on Lil and Biggies
#If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies?
limit_download = 50
biggie_artists = biggie_data['artists']['total']
Lil_artist = Lil_data['artists']['total']
#In 5 sec = 50
#in 1 sec = 50 / 5 req = 10 no, for 1 no, 1/10 sec
# for 4501 = 4501/10 s...
homework05/Homework05_Spotify_radhika.ipynb
radhikapc/foundation-homework
mit
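At one request every 5 seconds, with 50 artists per request, the total time is `ceil(total / 50) * 5` seconds. A small sketch of that arithmetic (the artist totals below are illustrative, not live counts):

```python
import math

def download_seconds(total_artists, per_request=50, seconds_per_request=5):
    """Seconds to page through `total_artists` results at one request per pause."""
    return math.ceil(total_artists / per_request) * seconds_per_request

# Hypothetical totals for the two searches.
print(download_seconds(4501))  # 91 requests * 5 s = 455 s
print(download_seconds(180))   # 4 requests * 5 s = 20 s
```

`math.ceil` matters: a final partial page of results still costs a full request.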
8. Who Is More Popular on Average: the Top 50 Lils or the Top 50 Biggies?
#Out of the top 50 "Lil"s and the top 50 "Biggie"s, who is more popular on average?
biggie_response = requests.get('https://api.spotify.com/v1/search?query=biggie&type=artist&limit=50&country=US')
biggie_data = biggie_response.json()
biggie_artists = biggie_data['artists']['items']
big_count_pop = 0
for artist in big...
homework05/Homework05_Spotify_radhika.ipynb
radhikapc/foundation-homework
mit
Getting attendance records from datatracker When attendees register for a meeting, they report their name, email address, and affiliation. While this is noisy data (any human-entered data is!), we will use this information to associate domains with affiliations. E.g. the email domain apple.com is associated with the comp...
datatracker = DataTracker()
meetings = datatracker.meetings(meeting_type = datatracker.meeting_type(MeetingTypeURI('/api/v1/name/meetingtypename/ietf/')))
full_ietf_meetings = list(meetings)

ietf_meetings = []
for meeting in full_ietf_meetings:
    meetingd = dataclasses.asdict(meeting)
    meetingd['meeting_obj'] = ...
examples/attendance/Extracting Org-Domain and Person-Org-Duration Information From Attendance Data.ipynb
datactive/bigbang
mit
Individual Affiliations
dt = DataTrackerExt() # initialize, for all meeting registration downloads
examples/attendance/Extracting Org-Domain and Person-Org-Duration Information From Attendance Data.ipynb
datactive/bigbang
mit
This will construct a dataframe of every attendee's registration at every specified meeting. (Downloading this data takes a while!)
ietf_meetings[110]['date']

meeting_attendees_df = pd.DataFrame()
for meeting in ietf_meetings:
    if meeting['num'] in [104,105,106,107,108,109]:  # can filter here by the meetings to analyze
        registrations = dt.meeting_registrations(meeting=meeting['meeting_obj'])
        df = pd.DataFrame.from_records([datacl...
examples/attendance/Extracting Org-Domain and Person-Org-Duration Information From Attendance Data.ipynb
datactive/bigbang
mit
Filter by those who actually attended the meeting (checked in, didn't just register).
ind_affiliation = meeting_attendees_df[['full_name', 'affiliation', 'email', 'domain','date']]
examples/attendance/Extracting Org-Domain and Person-Org-Duration Information From Attendance Data.ipynb
datactive/bigbang
mit
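The cell above only selects columns; the described filter (checked in, not just registered) is a boolean row filter. A minimal sketch on a toy table; note the `checkedin` column name is an assumption about the registration records, not confirmed by the notebook:

```python
import pandas as pd

# Toy registration table; 'checkedin' is a hypothetical column name.
meeting_attendees_df = pd.DataFrame({
    'full_name':   ['Ana', 'Bo', 'Cy'],
    'affiliation': ['Acme', 'Example Corp', 'Acme'],
    'checkedin':   [True, False, True],
})

# Keep only rows where the attendee actually checked in.
attended = meeting_attendees_df[meeting_attendees_df['checkedin']]
print(attended['full_name'].tolist())
```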
This format of data -- with name, email, affiliation, and a timestamp -- can also be extracted from other IETF data, such as the RFC submission metadata. Later, we will use data of this form to infer duration of affiliation for IETF attendees.
ind_affiliation[:10]
ind_affiliation['affiliation'].dropna().value_counts()
examples/attendance/Extracting Org-Domain and Person-Org-Duration Information From Attendance Data.ipynb
datactive/bigbang
mit
Matching affiliations with domains
affil_domain = ind_affiliation[['affiliation', 'domain', 'email']].pivot_table( index='affiliation',columns='domain', values='email', aggfunc = 'count')
examples/attendance/Extracting Org-Domain and Person-Org-Duration Information From Attendance Data.ipynb
datactive/bigbang
mit
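The pivot above turns row-per-registration data into an affiliation-by-domain count matrix. A miniature runnable version with invented rows, using the same `pivot_table` call shape:

```python
import pandas as pd

# Tiny stand-in for the registration records; names and domains are illustrative.
ind_affiliation = pd.DataFrame({
    'affiliation': ['Acme', 'Acme', 'Example Corp'],
    'domain':      ['acme.com', 'acme.com', 'example.com'],
    'email':       ['a@acme.com', 'b@acme.com', 'c@example.com'],
})

# Rows: affiliations; columns: email domains; values: registration counts.
affil_domain = ind_affiliation.pivot_table(
    index='affiliation', columns='domain', values='email', aggfunc='count')
print(affil_domain)
```

Cells with no (affiliation, domain) pair come out as NaN, which is why the next step can drop whole columns of known generic/personal domains.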
Drop both known generic and known personal email domains.
ddf = domains.load_data()
generics = ddf[ddf['category'] == 'generic'].index
personals = ddf[ddf['category'] == 'personal'].index

generic_email_domains = set(affil_domain.columns).intersection(generics)
affil_domain.drop(generic_email_domains, axis = 1, inplace = True)

personal_email_domains = set(affil_domain.colum...
examples/attendance/Extracting Org-Domain and Person-Org-Duration Information From Attendance Data.ipynb
datactive/bigbang
mit
Duration of affiliation The current data we have for individual affiliations is "point" data, reflecting the affiliation of an individual on a particular date. For many kinds of analysis, we may want to understand the full duration for which an individual has been associated with an organization. This requires an infer...
affil_dates = ind_affiliation.pivot_table(
    index="date", columns="full_name", values="affiliation", aggfunc="first"
).fillna(method='ffill').fillna(method='bfill')

top_attendees = ind_affiliation.groupby('full_name')['date'].count().sort_values(ascending=False)[:40].index
top_attendees

affil_dates[to...
examples/attendance/Extracting Org-Domain and Person-Org-Duration Information From Attendance Data.ipynb
datactive/bigbang
mit
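The inference step is: pivot the point observations into a date-by-person grid, then forward- and back-fill so each person has an affiliation on every date. A self-contained sketch with invented observations (using the `.ffill()`/`.bfill()` spellings of the fill step):

```python
import pandas as pd

# Point observations: who reported which affiliation at which meeting date.
ind_affiliation = pd.DataFrame({
    'date':        pd.to_datetime(['2019-03-01', '2019-11-01', '2019-11-01']),
    'full_name':   ['Ana', 'Ana', 'Bo'],
    'affiliation': ['Acme', 'Example Corp', 'Acme'],
})

# Grid of dates x people, then fill gaps forward and backward in time.
affil_dates = ind_affiliation.pivot_table(
    index='date', columns='full_name', values='affiliation', aggfunc='first'
).ffill().bfill()
print(affil_dates)
```

Bo never registered for the 2019-03-01 meeting, so that cell starts as NaN; the back-fill assigns Bo's earliest known affiliation to it, which is exactly the (debatable) inference the text describes.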
Linking to Organization lists
import bigbang.analysis.process as process

# drop subsidiary organizations
org_cats = org_cats[org_cats['subsidiary of / alias of'].isna()]
org_cats
examples/attendance/Extracting Org-Domain and Person-Org-Duration Information From Attendance Data.ipynb
datactive/bigbang
mit
Normalize/resolve the names from the IETF attendance records.
org_names = ad_stats['sum']
org_names = org_names.append(
    pd.Series(index = org_cats['name'], data = 1)
)
org_names = org_names.sort_values(ascending = False)
org_names = org_names[~org_names.index.duplicated(keep="first")]

ents = process.resolve_entities(
    org_names, process.containment_distance, thres...
examples/attendance/Extracting Org-Domain and Person-Org-Duration Information From Attendance Data.ipynb
datactive/bigbang
mit
Export the graph of relations

Getting the affiliation relations extracted from the attendance tables. Final form: three tables:
- Name - Email, earliest and latest date
- Name - Affiliation, earliest and latest date
- Email - Affiliation, earliest and latest date

These can be combined into a tripartite graph, w...
meeting_range = [106,107,108]
a, b, c = attendance.name_email_affil_relations_from_IETF_attendance(meeting_range, threshold = 0.17)
a
b
b['affiliation'].value_counts()['cisco']
c
examples/attendance/Extracting Org-Domain and Person-Org-Duration Information From Attendance Data.ipynb
datactive/bigbang
mit
Match to a mailing list
from bigbang.archive import Archive

arx = Archive("httpbisa")
examples/attendance/Extracting Org-Domain and Person-Org-Duration Information From Attendance Data.ipynb
datactive/bigbang
mit
From the archive data, take From -> email address and Date. Match these against table B (email, min_date, max_date) to get the Affiliation, then add the Affiliation to the archive data.
arx.add_affiliation(b)
arx.data[['From','Date','affiliation']].dropna()
examples/attendance/Extracting Org-Domain and Person-Org-Duration Information From Attendance Data.ipynb
datactive/bigbang
mit
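`add_affiliation` hides the interval match; the underlying idea can be sketched directly in pandas. A message gets an affiliation when its sender's email matches a table-B row and its Date falls inside that row's [min_date, max_date] window. All names and dates below are illustrative, and this is a sketch of the idea, not bigbang's actual implementation:

```python
import pandas as pd

# Archive side: who sent what, when.
messages = pd.DataFrame({
    'From': ['a@acme.com', 'a@acme.com'],
    'Date': pd.to_datetime(['2019-06-01', '2021-01-01']),
})
# Table B: email plus an affiliation window.
b = pd.DataFrame({
    'email':       ['a@acme.com', 'a@acme.com'],
    'affiliation': ['Acme', 'Example Corp'],
    'min_date':    pd.to_datetime(['2019-01-01', '2020-01-01']),
    'max_date':    pd.to_datetime(['2019-12-31', '2021-12-31']),
})

# Join on the email, then keep only rows whose Date lies inside the window.
merged = messages.merge(b, left_on='From', right_on='email')
in_window = merged[(merged['Date'] >= merged['min_date']) &
                   (merged['Date'] <= merged['max_date'])]
print(in_window[['From', 'Date', 'affiliation']])
```

The same sender correctly resolves to two different affiliations depending on when the message was sent.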
Example 2. Determine the nodal displacements and rotations, the global nodal forces, and the element forces for the beam shown in the figure. The beam has been discretized as indicated by the node numbering. The beam is fixed at nodes 1 and 5 and has a roller support at node 3. The vertical loads...
""" Logan, D. (2007). A first course in the finite element analysis. Example 4.2 , pp. 166. """ from nusa.core import * from nusa.model import * from nusa.element import * # Input data E = 30e6 I = 500.0 P = 10e3 L = 10*(12.0) # ft -> in # Model m1 = BeamModel("Beam Model") # Nodes n1 = Node((0,0)) n2 = Node((10*12,...
docs/nusa-info/es/beam-element.ipynb
JorgeDeLosSantos/nusa
mit
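Under the hood, a model like the one above assembles the standard local stiffness matrix of a 2-node Euler-Bernoulli beam element. A sketch of that matrix, independent of nusa (the standard textbook form, with DOF ordering (v1, theta1, v2, theta2); this is context for the example, not nusa's internal code):

```python
import numpy as np

def beam_stiffness(E, I, L):
    """Local stiffness matrix of a 2-node Euler-Bernoulli beam element,
    DOFs ordered (v1, theta1, v2, theta2)."""
    return (E * I / L**3) * np.array([
        [ 12,     6*L,  -12,     6*L],
        [6*L,  4*L**2, -6*L,  2*L**2],
        [-12,    -6*L,   12,    -6*L],
        [6*L,  2*L**2, -6*L,  4*L**2],
    ])

# Same E, I, and element length as Example 2 above.
k = beam_stiffness(E=30e6, I=500.0, L=10*12.0)
print(np.allclose(k, k.T))  # a stiffness matrix is symmetric
```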
Example 3.
""" Beer & Johnston. (2012) Mechanics of materials. Problem 9.13 , pp. 568. """ # Input data E = 29e6 I = 291 # W14x30 P = 35e3 L1 = 5*12 # in L2 = 10*12 #in # Model m1 = BeamModel("Beam Model") # Nodes n1 = Node((0,0)) n2 = Node((L1,0)) n3 = Node((L1+L2,0)) # Elements e1 = Beam((n1,n2),E,I) e2 = Beam((n2,n3),E,I) ...
docs/nusa-info/es/beam-element.ipynb
JorgeDeLosSantos/nusa
mit