This is the main application loop. The distance computation is executed inside a ComputeUnit. See mapper.py for code. Data is read from files and written to stdout. We execute 10 iterations of KMeans.
for i in range(10):
    cudesc = rp.ComputeUnitDescription()
    cudesc.executable = "/opt/anaconda/bin/python"
    cudesc.arguments = [os.path.join(os.getcwd(), "mapper.py"),
                        os.path.join(os.getcwd(), "points.csv"),
                        os.path.join(os.getcwd(), "clusters.csv")]
    c...
03_analytics/Kmeans.ipynb
radical-cybertools/supercomputing2015-tutorial
apache-2.0
Print out final centroids computed
new_centroids_df
session.close()
Spark MLlib In the following we use the Spark MLlib KMeans implementation. See http://spark.apache.org/docs/latest/mllib-clustering.html#k-means We use Pilot-Spark to start up Spark.
from numpy import array
from math import sqrt
%run ../env.py
%run ../util/init_spark.py
from pilot_hadoop import PilotComputeService as PilotSparkComputeService

try:
    sc
except:
    pilotcompute_description = {
        "service_url": "yarn-client://sc15.radical-cybertools.org",
        "number_of_processes": 5
    ...
Load and parse the data in a Spark DataFrame.
data_spark = sqlCtx.createDataFrame(data)
data_spark_without_class = data_spark.select('SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth')
data_spark_without_class.show()
Convert DataFrame to Tuple for MLlib
data_spark_tuple = data_spark.map(lambda a: (a[0], a[1], a[2], a[3]))

from pyspark.mllib.clustering import KMeans, KMeansModel
clusters = KMeans.train(data_spark_tuple, 3, maxIterations=10, runs=10,
                       initializationMode="random")

# Evaluate clustering by computing Within Set Sum of Squared Errors
d...
Stop Pilot-Job
pilot_spark.cancel()
Running the application From the console > python program.py Shebang (Unix) > #!/usr/bin/env python3 Execution rights > chmod u+x program.py > Interactive interpreter > ipython Syntax statements are not terminated with a semicolon; code blocks are defined by indentation; many English words...
# aligned with the opening bracket
foo = moja_dluga_funkcja(zmienna_jeden, zmienna_dwa,
                         zmienna_trzy, zmienna_cztery)

# extra indentation to distinguish the arguments from the function body
def moja_dluga_funkcja(
        zmienna_jeden, zmienna_dwa,
        zmienna_trzy, zmienna_cztery):
    print(zmien...
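A minimal illustration of the syntax rules above (no semicolons, and code blocks delimited by indentation, not braces); the function name is my own example:

```python
# Blocks are defined by indentation; statements need no trailing semicolon.
def classify(n):
    if n % 2 == 0:
        return "even"
    else:
        return "odd"

for value in (1, 2, 3):
    print(value, classify(value))
```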
part_1/01.Wstęp.ipynb
andrzejkrawczyk/python-course
apache-2.0
Symbols
x, w2 = symbols('x omega^2')
L, m, EJ = symbols('L m EJ', positive=True)
A, B, C, D, ld, LD = symbols('A B C D lambda Lambda')
f, φ = symbols('f phi')
dati_2018/wt03/Continuous.ipynb
boffi/boffi.github.io
mit
Supported mass and stiffness of support
mass_coeff = 8
stiff_coeff = 24
k = stiff_coeff*EJ/L**3
M = mass_coeff*m*L
General solution and its derivatives
f0 = A*cos(ld*x) + B*sin(ld*x) + C*cosh(ld*x) + D*sinh(ld*x)
f1 = f0.diff(x)
f2 = f1.diff(x)
f3 = f2.diff(x)
display(Eq(φ, f0))
Left boundary conditions The eigenfunction and its second derivative must be zero when 0 is substituted for x; we solve for A and C and put the solution in the variable AC. We then substitute this solution into the eigenfunction and all of its derivatives.
AC = solve((f0.subs(x, 0), f2.subs(x, 0)), A, C, dict=True)
f0, f1, f2, f3 = [f.subs(AC[0]) for f in (f0, f1, f2, f3)]
display(Eq(φ, f0))
First, the simpler boundary condition at the right end, $x=L$: the second derivative must be equal to zero, so we solve and substitute, also substituting $\lambda L$ with $\Lambda$.
D = solve(f2.subs(x, L), D, dict=True)
f0, f1, f2, f3 = [f.subs(D[0]).subs(L, LD/ld) for f in (f0, f1, f2, f3)]
display(Latex('With $\\Lambda = \\lambda\\,L$ it is'))
display(Eq(φ, f0.simplify()))
Last boundary conditions, equation of wave numbers The last equation is an equation of equilibrium $$V(t) + k\, v(t) + M\,\ddot v(t) = 0$$ (all the forces are directed upwards). With $v(t)=\phi(x)\,\sin\omega t$, the shear is $V = -EJ\, v''' = -EJ\, \phi'''(x)\sin\omega t$ and the inertial force is $M\,\ddot v= -M\,\ph...
eq = (f0*k - f0*M*ld**4*EJ/m - EJ*f3).subs(x, L).subs(L, LD/ld)
display(Eq(eq.expand().collect(B).collect(ld).collect(EJ), 0))
We have a non-trivial solution when the term in brackets is equal to zero; to isolate the bracketed term we divide both sides by $B\,EJ\,\lambda^3$.
eq = (eq/EJ/ld**3/B).expand()
display(Eq(eq, 0))
The behavior near $\Lambda=0$ is dominated by the last term, which goes like $48/\Lambda^2$, so to get a nice plot we multiply everything by $\Lambda^2$.
display(Eq(symbols('f'), (eq*LD**2).expand()))
plot(eq*LD**2, (LD, 0, 2));
and see that there is a root between 1.25 and 1.5. If we are interested in the higher roots, we can observe that all the terms in the LHS of our determinantal equation are bounded for increasing $\Lambda$ except the first one, which grows linearly; so to investigate the other roots we may divide the equation by $\Lamb...
display(Eq(symbols('f'), (eq/LD).expand()))
plot(eq/LD, (LD, 2, 10));
All the RHS terms except the first have $\Lambda$ in the denominator and are bounded, so the asymptotic behaviour is controlled by $\Lambda_{n+1}=n\pi$.
from scipy.optimize import bisect

f = lambdify(LD, eq, modules='math')
l1 = bisect(f, 0.5, 1.5)
Latex(r'$\lambda_1=%.6f\,\frac{1}{L}, \quad\omega_1^2=%.6f\,\frac{EJ}{mL^4}$' % (l1, l1**4))
Rayleigh Quotient Using $v=\frac xL\sin\omega t$ (that is, a rigid rotation about the left hinge) we have $$T_\text{max}=\frac12\omega^2\Big(\int_0^Lm\left(\frac xL\right)^2dx + M\,1^2\Big) = \frac12\omega^2\Big(\frac13+8\Big)mL $$ and $$V_\text{max}=\frac12\Big(\int_0^L EJ\left(\frac xL\right)''^2 + k\,1^2\Big) = \fra...
display(Latex(r'$\omega^2_{R00} = %.3f\,\frac{EJ}{mL^4}$'%(3*24/25)))
Model Inputs First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks. Exercise: Finish the model_inpu...
def model_inputs(real_dim, z_dim):
    inputs_real = tf.placeholder(tf.float32, [None, real_dim], name='input_real')
    inputs_z = tf.placeholder(tf.float32, [None, z_dim], name='input_z')
    return inputs_real, inputs_z
gan_mnist/Intro_to_GANs_Exercises.ipynb
schaber/deep-learning
mit
Generator network Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero ...
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
    ''' Build the generator network.

        Arguments
        ---------
        z : Input tensor for the generator
        out_dim : Shape of the generator output
        n_units : Number of units in hidden layer
        reuse : Reuse the variables...
Discriminator The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer. Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a...
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
    ''' Build the discriminator network.

        Arguments
        ---------
        x : Input tensor for the discriminator
        n_units: Number of units in hidden layer
        reuse : Reuse the variables with tf.variable_scope
        alpha : leak pa...
Discriminator and Generator Losses Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_wit...
# Calculate losses
d_loss_real = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_real,
        labels=tf.ones_like(d_logits_real) * (1.0 - smooth)))
d_loss_fake = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_fake,
        labels=tf.zeros_like(d_logits_fake)...
Optimizers We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph. For the gener...
for v in tf.trainable_variables():
    print(v.name)

# Optimizers
learning_rate = 0.002

# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = []
d_vars = []
for v in t_vars:
    if v.name.startswith('generator'):
        g_vars.append(v)
    elif v.name.startswith('discrim...
We use the %precision magic (defined in IPython) to show only 3 decimals in the Python output. This just lightens the output.
%precision 3
3_SciPy_Stack.ipynb
thewtex/ieee-nss-mic-scipy-2014
apache-2.0
We generate two Python lists x and y, each one containing one million random numbers between 0 and 1.
n = 1000000
x = [random.random() for _ in range(n)]
y = [random.random() for _ in range(n)]
x[:3], y[:3]
Let's compute the element-wise sum of all these numbers: the first element of x plus the first element of y, and so on. We use a for loop in a list comprehension.
z = [x[i] + y[i] for i in range(n)]
z[:3]
How long does this computation take? IPython defines a handy %timeit magic command to quickly evaluate the time taken by a single command.
%timeit [x[i] + y[i] for i in range(n)]
Now, we will perform the same operation with NumPy. NumPy works on multidimensional arrays, so we need to convert our lists to arrays. The np.array() function does just that.
xa = np.array(x)
ya = np.array(y)
xa[:3]
The arrays xa and ya contain the exact same numbers as our original lists x and y. Whereas those lists were instances of the built-in class list, our arrays are instances of the NumPy class ndarray. These types are implemented very differently in Python and NumPy. We will see that, in this example, using arrays instead ...
za = xa + ya
za[:3]
We see that the list z and the array za contain the same elements (the sum of the numbers in x and y). Let's compare the performance of this NumPy operation with the native Python loop.
%timeit xa + ya
We observe that this operation is more than one order of magnitude faster in NumPy than in pure Python! Now, we will compute something else: the sum of all elements in x or xa. Although this is not an element-wise operation, NumPy is still highly efficient here. The pure Python version uses the built-in sum function on...
%timeit sum(x)      # pure Python
%timeit np.sum(xa)  # NumPy
We also observe an impressive speedup here. SciPy <img src="images/scipylogo.png"> consists of a number of more specific packages:
- Special functions (scipy.special)
- Integration (scipy.integrate)
- Optimization (scipy.optimize)
- Interpolation (scipy.interpolate)
- Fourier Transforms (scipy.fftpack)
- Signal Processing (scipy....
# The IPython Notebook historically has a tight integration with
# Matplotlib. To display plots rendered inline in the notebook, the
# IPython magic can be called:
%matplotlib inline
# While it is still possible to expose the call-for-call
# MATLAB-like API with
# from pylab import *
# or
# %pylab inline
...
This is one of the 100 recipes of the IPython Cookbook, the definitive guide to high-performance scientific computing and data science in Python. SymPy <img src="images/sympylogo.png"> First, we import SymPy, and enable rich display LaTeX-based printing in the IPython notebook (using the MathJax Javascript library).
from sympy import *
init_printing()

# Create symbolic variables
x, y = symbols('x y')

# Create mathematical expressions
expr1 = (x + 1)**2
expr2 = x**2 + 2*x + 1
expr1
expr2
expr1 == expr2
simplify(expr1 - expr2)

# Substitution
expr1.subs(x, expr1)
expr1.subs(x, pi)

# S converts an arbitrary expression to a type...
Pandas <img src="images/pandaslogo.jpg"> This is one of the 100 recipes of the IPython Cookbook, the definitive guide to high-performance scientific computing and data science in Python.
# Load a CSV file into a Pandas DataFrame
import pandas as pd

url = "http://donnees.ville.montreal.qc.ca/storage/f/2014-01-20T20%3A48%3A50.296Z/2013.csv"
df = pd.read_csv(url, index_col='Date', parse_dates=True, dayfirst=True)
df.head(2)
df.describe()
2. Get the Data with Pandas https://campus.datacamp.com/courses/kaggle-python-tutorial-on-machine-learning/getting-started-with-python?ex=2 When the Titanic sank, 1502 of the 2224 passengers and crew were killed. One of the main reasons for this high level of casualties was the lack of lifeboats on this self-proclaimed...
# Import the Pandas library
import pandas as pd

kaggle_path = "http://s3.amazonaws.com/assets.datacamp.com/course/Kaggle/"

# Load the train and test datasets to create two DataFrames
train_url = kaggle_path + "train.csv"
train = pd.read_csv(train_url)
test_url = kaggle_path + "test.csv"
test = pd.read_csv(test_url)
...
dev/pyml/datacamp/kaggle-python-tutorial-on-machine-learning/01_getting-started-with-python.ipynb
karst87/ml
mit
3. Understanding your data https://campus.datacamp.com/courses/kaggle-python-tutorial-on-machine-learning/getting-started-with-python?ex=3 Before starting with the actual analysis, it's important to understand the structure of your data. Both test and train are DataFrame objects, the way pandas represents datasets. You c...
train.describe()
test.describe()
train.shape
test.shape
4. Rose vs Jack, or Female vs Male https://campus.datacamp.com/courses/kaggle-python-tutorial-on-machine-learning/getting-started-with-python?ex=4 How many people in your training set survived the disaster with the Titanic? To see this, you can use the value_counts() method in combination with standard bracket notation...
# Absolute numbers
train['Survived'].value_counts()

# Percentages
train['Survived'].value_counts(normalize=True)

train['Survived'][train['Sex'] == 'male'].value_counts()
train['Survived'][train['Sex'] == 'female'].value_counts()

# Passengers that survived vs passengers that passed away
print(train['Survived'].value_co...
5. Does age play a role? https://campus.datacamp.com/courses/kaggle-python-tutorial-on-machine-learning/getting-started-with-python?ex=5 Another variable that could influence survival is age, since it's probable that children were saved first. You can test this by creating a new column with a categorical variable Child....
# Create the column Child and assign NaN
train["Child"] = float('NaN')

# Assign 1 to passengers under 18, 0 to those 18 or older. Print the new column.
# train['Child'][train['Age'] >= 18] = 0   # chained assignment, avoid
# train['Child'][train['Age'] < 18] = 1
train.loc[train['Age'] >= 18, 'Child'] = 0
train.loc[train['Age'] < 18, 'Child'] ...
6.First Prediction https://campus.datacamp.com/courses/kaggle-python-tutorial-on-machine-learning/getting-started-with-python?ex=6 In one of the previous exercises you discovered that in your training set, females had over a 50% chance of surviving and males had less than a 50% chance of surviving. Hence, you could use...
# Create a copy of test: test_one
test_one = test.copy()

# Initialize a Survived column to 0
test_one['Survived'] = 0

# Set Survived to 1 if Sex equals "female" and print the `Survived` column from `test_one`
# test_one['Survived'][test_one['Sex'] == 'female'] = 1
test_one.loc[test_one['Sex'] == 'female', 'Survived'] = 1
pr...
Joint distribution $$p(Y, X_1, \ldots, X_n)=p(Y)\,p(X_1|Y)\prod_{i=2}^{n}p(X_i|X_1, \ldots, X_{i-1}, Y)$$
joint_probability(df, 'A', {'GPA': '[3.5,4.0]', 'I': 'vh'})
joint_probability(df, 'A', {'GPA': '[3.5,4.0]', 'I': 'h'})
joint_probability(df, 'A', {'GPA': '[3,3.5)', 'I': 'h'})
joint_probability(df, 'A', {'GPA': '[3,3.5)', 'I': 'vh'})
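joint_probability is a helper defined elsewhere in this notebook. As a rough, stdlib-only sketch (the helper signature and the toy rows here are hypothetical), the joint probability of an assignment can be estimated as the fraction of rows matching every value in it:

```python
def joint_probability(rows, target_col, target_val, evidence):
    """Empirical p(target, evidence) from a list of row dicts."""
    matches = [r for r in rows
               if r[target_col] == target_val
               and all(r[k] == v for k, v in evidence.items())]
    return len(matches) / len(rows)

# Tiny made-up dataset for illustration
rows = [
    {'Letter': 'A', 'GPA': '[3.5,4.0]', 'I': 'vh'},
    {'Letter': 'A', 'GPA': '[3.5,4.0]', 'I': 'h'},
    {'Letter': 'B', 'GPA': '[3,3.5)',  'I': 'h'},
    {'Letter': 'A', 'GPA': '[3,3.5)',  'I': 'h'},
]
print(joint_probability(rows, 'Letter', 'A', {'GPA': '[3.5,4.0]', 'I': 'vh'}))  # 0.25
```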
probabilistic-graphical-models/bayes-net.ipynb
amirziai/learning
mit
With conditional independence $X_1, \ldots, X_n$ conditionally independent given $Y$: $$p(Y, X_1, \ldots, X_n)=p(Y)\prod_{i=1}^{n}p(X_i|Y)$$ We need only $2n + 1$ parameters: 1 for $Y\sim\mathrm{Ber}(p)$ and 2 for each variable, because there are two possible values of $Y$.
joint_probability_with_cond_ind(df, 'A', {'GPA': '[3.5,4.0]', 'I': 'vh'})
joint_probability_with_cond_ind(df, 'A', {'GPA': '[3.5,4.0]', 'I': 'h'})
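To make the parameter count concrete, here is a small sketch (with made-up conditional probability tables) of the factorized joint $p(Y)\prod_i p(X_i|Y)$ for $n=2$ binary features, which uses exactly $2n+1=5$ numbers:

```python
# Hypothetical CPTs: p(Y=1) plus p(X_i=1 | Y=y) for each feature i and each y.
p_y1 = 0.6
p_x_given_y = {
    ('X1', 0): 0.2, ('X1', 1): 0.7,
    ('X2', 0): 0.4, ('X2', 1): 0.9,
}

def joint(y, x):
    """p(Y=y, X=x) under the conditional-independence factorization."""
    p = p_y1 if y == 1 else 1 - p_y1
    for name, value in x.items():
        q = p_x_given_y[(name, y)]
        p *= q if value == 1 else 1 - q
    return p

# Sanity check: the eight joint probabilities sum to 1
total = sum(joint(y, {'X1': a, 'X2': b})
            for y in (0, 1) for a in (0, 1) for b in (0, 1))
print(round(total, 10))  # 1.0
```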
Naive Bayes
naive_bayes(df, 'A', {'GPA': '[3.5,4.0]', 'I': 'vh'})
naive_bayes(df, 'B', {'GPA': '[3.5,4.0]', 'I': 'vh'})
naive_bayes(df, 'A', {'GPA': '[3.5,4.0]', 'I': 'h'})
naive_bayes(df, 'B', {'GPA': '[3.5,4.0]', 'I': 'h'})
naive_bayes(df, 'A', {'GPA': '[0,3)', 'I': 'h'})
naive_bayes(df, 'B', {'GPA': '[0,3)', 'I': 'h'})
Let's have a look at the column labels:
list(table.columns.values)
viz/python/Simplify_Bin_Sensor_Data.ipynb
edinburghlivinglab/D4I
apache-2.0
Suppose we just want to select a couple of columns, we can use the column labels like this:
table[['ID', 'Address']]
But a couple of interesting columns (for the collection date and the weight measured by the sensor) have very complicated labels, so let's simplify them. First, we'll just make a list of all the labels, then we'll bind the relevant string values to a couple of variables. This means that we don't have to worry about mis...
l = list(table.columns.values)
date = l[8]
fill = l[10]
date, fill
Now that we've got the short variables date and fill in place of the long strings, let's go ahead and replace those labels with something simpler:
table = table.rename(columns={date: 'Date', fill: 'Fill_level'})
Now we'll make a new table with just four columns:
table1 = table[['ID', 'Address', 'Date', 'Fill_level']]
And we'll just take the first 30 rows:
tabletop = table1.head(30)
tabletop
Finally, we'll write the result to a JSON formatted file.
tabletop.to_json('../data/binsensorsimple.json', orient="records")
We can also easily estimate the thermodynamic properties of the gas under the assumption of a perfect gas. We know that $\gamma = c_P / c_V = 5/3$. Of importance for our hypothesis is knowing the various thermodynamic temperature gradients, $\nabla_{\rm ad}$ and $\nabla_{\rm rad}$. The former can be expressed generally...
def del_rad(opac, P, T, L, M):
    return 7.62586e9*opac*P*L/(T**4*M)

P = 1680.0  # Ba
T = 5600.0  # K
M = 1.5     # Msun
L = 6953.0  # Rstar = 412 Rsun; Teff = 2600 K

print("{:11.5e} < Del_rad < {:11.5e}".format(del_rad(1.0e-2, P, T, L, M),
                                              del_rad(1.0e-1, P, T, L, M)))
Daily/20151123_agb_inner_boundary.ipynb
gfeiden/Notebook
mit
In the case where the radiative opacity is close to $10^{-2}\ {\rm cm}^2\, {\rm g}^{-1}$, the radiative gradient may exceed the adiabatic temperature gradient, suggesting that layers may be unstable to convective instabilities. It is therefore quite possible that thermodynamic conditions developing at the inner boundar...
def T_rad(opac, P, L, M):
    return (1.906465e10*opac*P*L/M)**0.25

print("{:11.5e} < T < {:11.5e} K".format(T_rad(1.0e-2, 1680., L, M),
                                         T_rad(1.0e-1, 1680., L, M)))
which produces a density
rho1 = (1.26*1.6726219e-24/1.3806488e-16)*(1680./6207.)
rho2 = (1.26*1.6726219e-24/1.3806488e-16)*(1680./11038.)
print("{:11.5e} < Density < {:11.5e} [g cm**-3].".format(rho1, rho2))
Examples of ABC inference Coin flip with two flips We've flipped a coin twice and gotten heads twice. What is the probability for getting heads?
data = ['H', 'H']
outcomes = ['T', 'H']

def coin_ABC():
    while True:
        h = numpy.random.uniform()
        flips = numpy.random.binomial(1, h, size=2)
        if outcomes[flips[0]] == data[0] \
                and outcomes[flips[1]] == data[1]:
            yield h

hsamples = []
start = time.time()
for h in coin_ABC():
    ...
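The cell above is truncated; here is a self-contained sketch of the same rejection scheme using only the standard library (the function name and sample count are my own choices, not the notebook's):

```python
import random

data = ['H', 'H']

def coin_abc_samples(n_samples, seed=0):
    """Rejection ABC: keep h only when two simulated flips match the data exactly."""
    rng = random.Random(seed)
    kept = []
    while len(kept) < n_samples:
        h = rng.random()                                   # h ~ Uniform(0, 1) prior
        flips = ['H' if rng.random() < h else 'T' for _ in data]
        if flips == data:                                  # exact sequence match
            kept.append(h)
    return kept

samples = coin_abc_samples(2000)
# For 2 heads in 2 flips the exact posterior is Beta(3, 1), with mean 0.75
print(round(sum(samples) / len(samples), 3))
```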
inference/ABC-examples.ipynb
jobovy/misc-notebooks
bsd-3-clause
Coin flip with 10 flips Same with 10 flips, still matching the entire sequence:
data = ['T', 'H', 'H', 'T', 'T', 'H', 'H', 'T', 'H', 'H']

def coin_ABC_10flips():
    while True:
        h = numpy.random.uniform()
        flips = numpy.random.binomial(1, h, size=len(data))
        if outcomes[flips[0]] == data[0] \
                and outcomes[flips[1]] == data[1] \
                and outcomes[flips[2]] == da...
Using a sufficient statistic instead:
sufficient_data = numpy.sum([d == 'H' for d in data])

def coin_ABC_10flips_suff():
    while True:
        h = numpy.random.uniform()
        flips = numpy.random.binomial(1, h, size=len(data))
        if numpy.sum(flips) == sufficient_data:
            yield h

hsamples = []
start = time.time()
for h in coin_ABC_10flips_suff...
Variance of a Gaussian with zero mean Now we infer the variance of a Gaussian with zero mean using ABC:
data = numpy.random.normal(size=100)

def Var_ABC(threshold=0.05):
    while True:
        v = numpy.random.uniform()*4
        sim = numpy.random.normal(size=len(data))*numpy.sqrt(v)
        d = numpy.fabs(numpy.var(sim) - numpy.var(data))
        if d < threshold:
            yield v

vsamples = []
start = time.time()
for v ...
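Again, a self-contained standard-library sketch of this scheme (helper name, seed, and sample count are illustrative): accept a proposed variance v whenever the variance of data simulated with v lands within a threshold of the observed variance.

```python
import random
import statistics

rng = random.Random(1)
data = [rng.gauss(0, 1) for _ in range(100)]   # true variance is 1
obs_var = statistics.pvariance(data)

def var_abc(n_samples, threshold=0.05):
    """Rejection ABC for the variance of a zero-mean Gaussian."""
    kept = []
    while len(kept) < n_samples:
        v = rng.random() * 4                   # v ~ Uniform(0, 4) prior
        sim = [rng.gauss(0, v ** 0.5) for _ in data]
        if abs(statistics.pvariance(sim) - obs_var) < threshold:
            kept.append(v)
    return kept

samples = var_abc(200)
print(round(sum(samples) / len(samples), 2))   # clusters near the observed variance
```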
If we raise the threshold too much, we sample simply from the prior:
vsamples = []
start = time.time()
for v in Var_ABC(threshold=1.5):
    vsamples.append(v)
    if time.time() > start + 2.:
        break
print("Obtained %i samples" % len(vsamples))
h = hist(vsamples, range=[0., 2.], bins=51, normed=True)
xs = numpy.linspace(0.001, 2., 1001)
ys = xs**(-len(data)/2.)*numpy.exp(-1./xs/2.*len(data)*(numpy.var...
And if we make the threshold too small, we don't get many samples:
vsamples = []
start = time.time()
for v in Var_ABC(threshold=0.001):
    vsamples.append(v)
    if time.time() > start + 2.:
        break
print("Obtained %i samples" % len(vsamples))
h = hist(vsamples, range=[0., 2.], bins=51, normed=True)
xs = numpy.linspace(0.001, 2., 1001)
ys = xs**(-len(data)/2.)*numpy.exp(-1./xs/2.*len(data)*(numpy.v...
1 - Load an experiment Make a list of pandas DataFrames with (clean) experimental data read from csv files using the pandas package. Each csv file should include two columns: true strain ("e_true") and true stress ("Sigma_true").
testFileNames = ['example_1.csv']
listCleanTests = []
for testFileName in testFileNames:
    test = pd.read_csv(testFileName)
    listCleanTests.append(test)
examples/Old_RESSPyLab_Parameter_Calibration_Orientation_Notebook.ipynb
AlbanoCastroSousa/RESSPyLab
mit
2 - Determine Voce and Chaboche material parameters with either VCopt_SVD or VCopt_J There are two arguments to VCopt: an initial starting point for the parameters ("x_0") and the list of tests previously assembled. The parameters are gathered in a list in the following order: [E, sy0, Qinf, b, C_1, gamma_1, C_2, gamma_2...
x_0 = [200e3, 355, 1e-1, 1e-1, 1e-1, 1e-1]
sol = RESSPyLab.VCopt_SVD(x_0, listCleanTests)
print(sol)

x_0 = [200e3, 355, 1e-1, 1e-1, 1e-1, 1e-1]
sol = RESSPyLab.VCopt_J(x_0, listCleanTests)
print(sol)
3 - Use the solution point to plot experiment vs simulation
simCurve = RESSPyLab.VCsimCurve(sol, test)
plt.plot(test['e_true'], test['Sigma_true'], c='r', label='Test')
plt.plot(simCurve['e_true'], simCurve['Sigma_true'], c='k', label='RESSPyLab')
plt.legend(loc='best')
plt.xlabel('True strain')
plt.ylabel('True stress')
4 - Example with multiple tests
testFileNames = ['example_1.csv', 'example_2.csv']
listCleanTests = []
for testFileName in testFileNames:
    test = pd.read_csv(testFileName)
    listCleanTests.append(test)

x_0 = [200e3, 355, 1e-1, 1e-1, 1e-1, 1e-1]
sol = RESSPyLab.VCopt_SVD(x_0, listCleanTests)
print(sol)

x_0 = [200e3, 355, 1e-1, 1e-1, 1e-1, 1e-1]
sol = RESSPyLab...
Let's look at the parameters of this wave file to learn more about it.
mix_1_wave.getparams()
assets/media/uda-ml/fjd/ica/独立成分分析/Independent Component Analysis Lab [SOLUTION]-zh.ipynb
hetaodie/hetaodie.github.io
mit
The file has only one channel (so it is mono). The frame rate is 44100, meaning each second of audio consists of 44100 integers (integers because the file is in the common PCM 16-bit format). The file has 264515 integers/frames in total, so its duration is:
264515/44100
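The duration is simply the frame count divided by the frame rate. A small stdlib sketch (writing a silent file with the same assumed parameters, then reading them back):

```python
import os
import tempfile
import wave

framerate = 44100
n_frames = 264515

# Write a mono, 16-bit PCM file of silent frames with these parameters
path = os.path.join(tempfile.gettempdir(), "duration_demo.wav")
with wave.open(path, 'w') as w:
    w.setnchannels(1)                      # mono
    w.setsampwidth(2)                      # 16-bit PCM
    w.setframerate(framerate)
    w.writeframes(b'\x00\x00' * n_frames)

# Read the parameters back and recover the duration
with wave.open(path, 'r') as w:
    duration = w.getnframes() / w.getframerate()

print(round(duration, 3))  # 5.998 seconds
```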
We extract the frames from this wave file; they will form the dataset we run ICA on:
# Extract Raw Audio from Wav File
signal_1_raw = mix_1_wave.readframes(-1)
signal_1 = np.fromstring(signal_1_raw, 'Int16')
signal_1 is now a list of integers representing the audio contained in the first file.
'length: ', len(signal_1), 'first 100 elements: ', signal_1[:100]
If we plot this array as a line graph, we get the familiar waveform:
import matplotlib.pyplot as plt

fs = mix_1_wave.getframerate()
timing = np.linspace(0, len(signal_1)/fs, num=len(signal_1))

plt.figure(figsize=(12, 2))
plt.title('Recording 1')
plt.plot(timing, signal_1, c="#3ABFE7")
plt.ylim(-35000, 35000)
plt.show()
We can now load the other two wave files, ICA_mix_2.wav and ICA_mix_3.wav, in the same way:
mix_2_wave = wave.open('ICA_mix_2.wav', 'r')

# Extract Raw Audio from Wav File
signal_raw_2 = mix_2_wave.readframes(-1)
signal_2 = np.fromstring(signal_raw_2, 'Int16')

mix_3_wave = wave.open('ICA_mix_3.wav', 'r')

# Extract Raw Audio from Wav File
signal_raw_3 = mix_3_wave.readframes(-1)
signal_3 = np.fromstring(signal...
With all three files read, we can create the dataset with a zip operation: create dataset X by combining signal_1, signal_2 and signal_3 into a single list.
X = list(zip(signal_1, signal_2, signal_3))

# Let's peek at what X looks like
X[:10]
Now we're ready to run ICA to try to recover the original signals: import sklearn's FastICA module, initialize FastICA with three components, and run the FastICA algorithm on dataset X using fit_transform.
# TODO: Import FastICA
from sklearn.decomposition import FastICA

# TODO: Initialize FastICA with n_components=3
ica = FastICA(n_components=3)

# TODO: Run the FastICA algorithm using fit_transform on dataset X
ica_result = ica.fit_transform(X)

ica_result.shape
Let's split it into separate signals and look at them.
result_signal_1 = ica_result[:, 0]
result_signal_2 = ica_result[:, 1]
result_signal_3 = ica_result[:, 2]
Let's plot the signals to see the shapes of the waveforms.
# Plot Independent Component #1
plt.figure(figsize=(12, 2))
plt.title('Independent Component #1')
plt.plot(result_signal_1, c="#df8efd")
plt.ylim(-0.010, 0.010)
plt.show()

# Plot Independent Component #2
plt.figure(figsize=(12, 2))
plt.title('Independent Component #2')
plt.plot(result_signal_2, c="#87de72")
plt.ylim(-0....
Do some of these waveforms look like music? The best way to confirm the result is to listen to the generated files. Save them as wave files and verify. Before that, we need to: convert them to integers (so they can be saved as PCM 16-bit wave files), otherwise only some media players will be able to play them; map the values into the range of int16 audio, which is -32768 to +32767 (a basic mapping is to multiply by 32767). The volume is a bit low, so we can multiply by some factor (e.g. 100) to raise it.
from scipy.io import wavfile

# Convert to int, map the appropriate range, and increase the volume a little bit
result_signal_1_int = np.int16(result_signal_1*32767*100)
result_signal_2_int = np.int16(result_signal_2*32767*100)
result_signal_3_int = np.int16(result_signal_3*32767*100)

# Write wave files
wavfile.write...
Suppose we want to compute the area of a circle, $A=\pi r^2$, where $r$ is the radius, which may vary, and $\pi=3.14159$ for simplicity.
pi = 3.14159
r = 11.2
A = pi*r**2
print("The area of the disk of radius {0} m is A={1} m^2".format(r, A))
print("The address of the radius is {0}, that of the area is {1}".format(id(r), id(A)))

r = 14.3
A = pi*r**2  ######## we changed the radius
print("The area of the disk of radius {0} m is A={1} m^2".format(r, A))
print("The address of the radius ...
Cours03_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
1.1.2 Container The container is nothing other than the association, by addressing, of a name with a pointer to the content, i.e. the value associated with that name. Assignment is the operation that associates a content (right-hand operand) with a container (left-hand operand), and thus associates a name with a pointe...
del(r)
r = 15e-3
A = pi*r**2
print("The area of the disk of radius {0} m is A={1} m^2".format(r, A))
print("The address of the radius is {0}, that of the area is {1}".format(id(r), id(A)))
1.2 Variables have a limited scope By their place in the program By their value, at the moment their content is declared By their type, at the moment their container is declared At the moment their name is declared (strict rules on the terms reserved by Python) 1.2.1 Keywords, or reserved names (30) + True, Fa...
# Variables allowing reassignment a=(1,) print("The type of variable a is {}".format(type(a))) print("Its id is {}".format(id(a))) a+=(2,) print("The type of variable a is {}".format(type(a))) print("Its id is {}".format(id(a))) # Variables allowing a change of place without reassignme...
Cours03_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
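The reserved words mentioned in 1.2.1 can be listed from the interpreter itself via the standard `keyword` module (the exact count depends on the Python version, so it is printed rather than hard-coded):

```python
import keyword

# The reserved words of the running interpreter; the count varies
# across versions (31 in Python 2.7, 35 or more in recent Python 3).
print(len(keyword.kwlist))
print(keyword.kwlist[:5])

# iskeyword() checks whether a name is reserved and therefore
# unavailable as a variable name.
print(keyword.iskeyword('def'))   # True
print(keyword.iskeyword('spam'))  # False
```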
1.3 Visibility of variables On a console, any variable declared by an assignment in a standalone statement is accessible from anywhere.
# Here we restart the kernel, which clears every assignment %reset My_Variable_Number=42 globals()
Cours03_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
Some blocks introduce a new namespace
# We declare a variable My_Variable_Fetiche_Number=42 # Then we create a function in which the same variable is declared def f(): My_Variable_Fetiche_Number=4200000000000000000000 print('Inside the function the value of my favourite variable is {}'.format(My_Variable_Fetiche_Number)) # We execute the function...
Cours03_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
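The shadowing shown in the cell above can be reduced to a minimal sketch (illustrative names, not the notebook's code): assigning inside a function creates a local name, while the `global` statement opts back into the module-level one.

```python
x = 42

def read_only():
    # No assignment to x inside the function: the global is visible.
    return x

def shadow():
    # Assigning to x creates a *local* name that hides the global one.
    x = 7
    return x

def rebind_global():
    # The `global` statement opts back into the module-level name.
    global x
    x = 100
    return x

print(read_only())      # 42
print(shadow())         # 7
print(x)                # still 42: shadow() never touched the global
print(rebind_global())  # 100
print(x)                # 100
```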
2 Functions Functions are essential in programming for building inputs and outputs 2.1 Declaring functions Use of the def keyword
# Here a function that prints something visual # This function has no output variable def do_twice(f): """ This function has a docstring It executes twice the function f The program it is used by functions: - do_eight() - print_posts() - print_beams() """ f() f() def do_e...
Cours03_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
| First | Second | | - - - - | - - - - | | | | | | | | | | | | |
## We display the docstring of the do_eight function ## help(do_eight) def My_function_scare_of_n(n): n *= n return n #print(n) n=5 Carre=My_function_scare_of_n(n) print("The square of {0} is {1}".format(n,Carre)) # Here is a function that returns the square of n def My_function_scare_of_n(n): square =...
Cours03_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
Warning: return is essential for this function to give anything back
# Here is a function that returns nothing, but that is intended # "Pass" means this method has not been implemented yet, but this will be the place to do it def update_agent(agent): s='Congratulate '+agent.upper()+' he doubles his salary' return s def time_step(agents): for agent in agents: s=update_agent(agent...
Cours03_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
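A minimal illustration of that warning (hypothetical function names): without `return`, the call still succeeds but its value is `None`.

```python
def square_with_return(n):
    return n * n

def square_with_print(n):
    # print() displays the value, but the function still returns None
    print(n * n)

a = square_with_return(5)
b = square_with_print(5)
print(a)  # 25
print(b)  # None
```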
2.2 Using functions for the game of Hangman
import random words = ['chicken', 'dog', 'cat', 'mouse', 'frog','horse','pig'] def pick_a_word(): word_position = random.randint(0, len(words) - 1) return words[word_position] word=pick_a_word() print('The secret word is "{}" '.format(word)) lives_remaining = 14 #this will become a global variable guessed_letters...
Cours03_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
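As a side note on the cell above: `random.choice` is an equivalent, more idiomatic way to draw one word than indexing with `random.randint(0, len(words) - 1)` (this variant is a suggestion, not the notebook's code):

```python
import random

words = ['chicken', 'dog', 'cat', 'mouse', 'frog', 'horse', 'pig']

def pick_a_word():
    # random.choice draws one element uniformly, replacing the
    # explicit random.randint indexing used in the notebook.
    return random.choice(words)

random.seed(0)  # seeded only to make the example reproducible
print(pick_a_word())
```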
~~~ Python Here are the first STUBS def get_guess(word): pass def process_guess(guess, word): pass def play(): word = pick_a_word() while True: guess = get_guess(word) if process_guess(guess, word): print('You win! Well Done!') break if lives_remaining ==...
import random words = ['chicken', 'dog', 'cat', 'mouse', 'frog'] lives_remaining = 14 ############A do-always-wrong STUB def get_guess(word): return 'a' ############################ def play(): word = pick_a_word() while True: guess = get_guess(word) if process_guess(guess, word): print('You win! Well ...
Cours03_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
~~~ Python def print_word_with_blanks(word): pass def get_guess(word): print_word_with_blanks(word) print('Lives Remaining: ' + str(lives_remaining)) guess = input(' Guess a letter or whole word?') return guess get_guess(word) ~~~
#04_06_hangman_get_guess import random words = ['chicken', 'dog', 'cat', 'mouse', 'frog'] lives_remaining = 14 def play(): word = pick_a_word() while True: guess = get_guess(word) if process_guess(guess, word): print('You win! Well Done!') break if lives_remaining == 0: print('You are Hung!') pri...
Cours03_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
For the moment process_guess is still a stub ~~~ Python # STUB def single_letter_guess(guess, word): pass def whole_word_guess(guess, word): pass def process_guess(guess, word): if len(guess) > 1 and len(guess) == len(word): return whole_word_guess(guess, word) else: return single_...
#04_07_hangman_print_word import random words = ['chicken', 'dog', 'cat', 'mouse', 'frog'] lives_remaining = 14 guessed_letters = '' def play(): word = pick_a_word() while True: guess = get_guess(word) if process_guess(guess, word): print('You win! Well Done!') break if lives_remaining == 0: print('...
Cours03_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
We have not won yet... and we would also like the player to be able to enter either a single letter or a whole word... We will now write the functions single_letter_guess and whole_word_guess
def whole_word_guess(guess, word): global lives_remaining if guess.lower() == word.lower(): return True else: lives_remaining = lives_remaining - 1 return False def single_letter_guess(guess, word): global guessed_letters global lives_remaining if word.find(guess) == -1: # letter guess was incorrect li...
Cours03_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
The game is now complete
#!/usr/bin/python # -*- coding: utf-8 -*- """ Created on Tue Sep 20 22:11:59 2016 """ import random words = ['chicken', 'dog', 'cat', 'mouse', 'frog'] lives_remaining = 14 guessed_letters = '' def play(): word = pick_a_word() while True: guess = get_guess(word) if process_guess(guess, word): print('You wi...
Cours03_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
Using scalar aggregates in filters
countries = connection.table('countries') countries.limit(5)
docs/source/tutorial/06-Advanced-Topics-ComplexFiltering.ipynb
cloudera/ibis
apache-2.0
We could always compute an aggregate value from the table and use it in another expression, or we can use a data-derived aggregate directly in the filter. Take the average of a column, for example the average size of the countries:
countries.area_km2.mean()
docs/source/tutorial/06-Advanced-Topics-ComplexFiltering.ipynb
cloudera/ibis
apache-2.0
You can use this expression as a substitute for a scalar value in a filter, and the execution engine will combine everything into a single query rather than having to access the database multiple times. For example, we want to filter European countries larger than the average country size in the world. See how most cou...
cond = countries.area_km2 > countries.area_km2.mean() expr = countries[(countries.continent == 'EU') & cond] expr
docs/source/tutorial/06-Advanced-Topics-ComplexFiltering.ipynb
cloudera/ibis
apache-2.0
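Ibis compiles the expression above into a single query, with a subquery for the mean. For intuition, the same logic can be mimicked in pandas (a sketch with invented rows, not the tutorial's `countries` table):

```python
import pandas as pd

# A tiny stand-in for the countries table (invented numbers).
countries = pd.DataFrame({
    'name':      ['France', 'Monaco', 'Brazil',  'Malta', 'Russia'],
    'continent': ['EU',     'EU',     'SA',      'EU',    'EU'],
    'area_km2':  [640_679,  2,        8_515_767, 316,     17_098_242],
})

# Data-derived scalar aggregate used directly in the filter,
# mirroring countries.area_km2 > countries.area_km2.mean().
cond = countries.area_km2 > countries.area_km2.mean()
expr = countries[(countries.continent == 'EU') & cond]
print(expr['name'].tolist())  # ['Russia']
```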
Conditional aggregates Suppose that we wish to filter using an aggregate computed conditionally on some other expression holding true. For example, we want to filter European countries larger than the average country size, but this time against the average in Africa. African countries have a smaller size compared to the wor...
conditional_avg = countries[countries.continent == 'AF'].area_km2.mean() countries[(countries.continent == 'EU') & (countries.area_km2 > conditional_avg)]
docs/source/tutorial/06-Advanced-Topics-ComplexFiltering.ipynb
cloudera/ibis
apache-2.0
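In pandas terms (again with invented rows), the conditional aggregate is just a mean over a filtered subset, reused as a plain scalar in a second filter:

```python
import pandas as pd

countries = pd.DataFrame({
    'name':      ['France', 'Russia',   'Nigeria', 'Chad',    'Malta'],
    'continent': ['EU',     'EU',       'AF',      'AF',      'EU'],
    'area_km2':  [640_679,  17_098_242, 923_768,   1_284_000, 316],
})

# Mean over the African subset only, then reused as a scalar,
# mirroring the conditional_avg expression above.
conditional_avg = countries[countries.continent == 'AF'].area_km2.mean()
result = countries[(countries.continent == 'EU')
                   & (countries.area_km2 > conditional_avg)]
print(result['name'].tolist())  # ['Russia']
```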
"Existence" filters Some filtering involves checking for the existence of a particular value in a column of another table, or among the results of some value expression. This is common in many-to-many relationships, and can be performed in numerous different ways, but it's nice to be able to express it with a single c...
gdp = connection.table('gdp') gdp cond = ((gdp.country_code == countries.iso_alpha3) & (gdp.value > 3e12)).any() countries[cond]['name']
docs/source/tutorial/06-Advanced-Topics-ComplexFiltering.ipynb
cloudera/ibis
apache-2.0
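The semi-join behaviour of `.any()` can be approximated in pandas with `isin` over the matching codes (a sketch with made-up rows): each qualifying country appears once, no matter how many gdp rows match.

```python
import pandas as pd

countries = pd.DataFrame({
    'iso_alpha3': ['USA', 'DEU', 'MCO'],
    'name':       ['United States', 'Germany', 'Monaco'],
})
gdp = pd.DataFrame({
    'country_code': ['USA',    'USA',    'DEU',   'MCO'],
    'year':         [2016,     2017,     2017,    2017],
    'value':        [18.7e12,  19.5e12,  3.7e12,  7.1e9],
})

# Existence test: keep a country if *any* gdp row matches,
# without duplicating rows per year as a join would.
big = gdp.loc[gdp.value > 3e12, 'country_code']
names = countries.loc[countries.iso_alpha3.isin(big), 'name']
print(names.tolist())  # ['United States', 'Germany']
```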
Note how this is different from a join between countries and gdp, which would return one row per year. The method .any() is equivalent to filtering with a subquery. Filtering in aggregations Suppose that you want to compute an aggregation with a subset of the data for only one of the metrics / aggregates in question, a...
arctic = countries.name.isin(['United States', 'Canada', 'Finland', 'Greenland', 'Iceland', 'Norway', 'Russia', ...
docs/source/tutorial/06-Advanced-Topics-ComplexFiltering.ipynb
cloudera/ibis
apache-2.0
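A pandas sketch of that filtered-aggregation idea (invented populations): one aggregate is computed over all rows and one only over the `arctic` subset, in a single pass over the frame.

```python
import pandas as pd

countries = pd.DataFrame({
    'name':       ['Canada', 'Norway', 'Brazil', 'Iceland'],
    'continent':  ['NA',     'EU',     'SA',     'EU'],
    'population': [36e6,     5e6,      208e6,    0.3e6],
})

# Boolean mask for the subset, mirroring the isin() expression above.
arctic = countries.name.isin(['Canada', 'Norway', 'Iceland'])

# One aggregate over all rows, one restricted to the arctic subset.
summary = pd.DataFrame({
    'total_pop':  [countries.population.sum()],
    'arctic_pop': [countries.population[arctic].sum()],
})
print(summary)
```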
Notice that the column vulnerability_type does not only include SQL injection. It may also cite other identified types out of the 13. For instance, the value in row 4 is SQL XSS, indicating the entry is of both the SQL and XSS types. Important: Entries labeled with multiple types WILL appear, accordingly, on the tables. A combinatio...
vulnerability_type_histogram = cved_df.groupby(by=['vulnerability_type'])['cwe_id','cve_id'].count() vulnerability_type_histogram
Notebooks/CVE_Details/cve_details_introduction.ipynb
sailuh/perceive
gpl-2.0
We can note that some combinations of types occur much more frequently than others. Let's explore the vulnerability types proposed by CVE Details further, not only counting the number of cwe_id's per vulnerability type, but also listing what the cwe_id's are per type. Out of curiosity, let's also include the number of exploits that...
vulnerability_type_histogram = cved_df.groupby(by=['vulnerability_type','cwe_id'])['cve_id','n_exploits'].count() print(vulnerability_type_histogram) vulnerability_list = np.unique(cved_df['vulnerability_type']) vulnerability_by_month = cved_df.groupby(by=['vulnerability_type','month'])['cve_id'].count()
Notebooks/CVE_Details/cve_details_introduction.ipynb
sailuh/perceive
gpl-2.0
A pattern emerges in the construction of the types: for vulnerability types with a higher number of cwe entries, that higher number is driven by a single cwe_id. This is the case for 3 vulnerability types in the table above: Dir.Trav. is led by cwe_id 22, Exec Code Sql is led by cwe_id 89, and vulnerability type SQL...
vulnerability_histogram = cved_df.groupby(by=['cwe_id'])['cve_id'].count() vulnerability_histogram
Notebooks/CVE_Details/cve_details_introduction.ipynb
sailuh/perceive
gpl-2.0
Visualization Vulnerability type histogram
#imports for histogram import numpy as np import pandas as pd from bokeh.plotting import figure, show from bokeh.models import Range1d from bokeh.io import output_notebook from bokeh.charts import Bar import matplotlib.pyplot as plot from datetime import datetime output_notebook() #creating a histogram for vulnerabil...
Notebooks/CVE_Details/cve_details_introduction.ipynb
sailuh/perceive
gpl-2.0
The histogram below represents the vulnerability types as mentioned in the CVE Details database. The vulnerability types are explained below: Dir. Trav. stands for Directory Traversal Dir. Trav. Bypass stands for Directory Traversal Bypass Dir. Trav. File Inclusion stands for Directory Traversal File Inclusion DoS Sql sta...
show(p)
Notebooks/CVE_Details/cve_details_introduction.ipynb
sailuh/perceive
gpl-2.0
We created this histogram to gain insight into the number of occurrences of each of the vulnerability types. On analysis we can see that Exec. Code Sql is the most frequent type of attack, followed by Dir. Trav. and Sql, which together make up the three most frequent vulnerability types. CWE ID count histogram
#creating a histogram for cwe ID types by creating a dictionary data = {} data['Entries'] = vulnerability_histogram #saving in dictionary for sorting and visualising df_data = pd.DataFrame(data).sort_values(by='Entries', ascending=True) series = df_data.loc[:,'Entries'] p = figure(width=800, y_range=series.index.tolis...
Notebooks/CVE_Details/cve_details_introduction.ipynb
sailuh/perceive
gpl-2.0
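That ">90%" observation is easy to check directly from the histogram; with invented counts standing in for `vulnerability_histogram` (the notebook's groupby result), the share of the two leading CWE IDs is:

```python
import pandas as pd

# Invented counts standing in for the real cwe_id histogram.
vulnerability_histogram = pd.Series(
    {'89': 520, '22': 410, '79': 40, '264': 30},
    name='cve_id',
)

# Share of the two dominant CWE IDs among all entries.
top_share = (vulnerability_histogram[['89', '22']].sum()
             / vulnerability_histogram.sum())
print(round(top_share * 100, 1))
```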
The histogram above shows the frequency of CWE IDs in the CVE Details database. The most frequent CWE IDs are 89 and 22, which account for more than 90% of the entries.
color_map = { '+Priv Dir. Trav.': 'red', 'Dir. Trav.': 'green', 'Dir. Trav. +Info': 'yellow', 'Dir. Trav. Bypass': 'violet', 'Dir. Trav. File Inclusion': 'indigo', 'DoS Sql': 'brown', 'Exec Code Dir. Trav.': 'black', 'Exec Code Sql': 'blue', 'Exec Code Sql +Info': 'orange', 'Sql': 'olive', 'Sql +Info' : 'navy', 'Sql By...
Notebooks/CVE_Details/cve_details_introduction.ipynb
sailuh/perceive
gpl-2.0