Example 4
""" Beer & Johnston (2012). Mechanics of Materials. Problem 9.13, p. 568. """
# Data
P1 = 3e3
P2 = 3e3
M1 = 450
E = 200e9
I = (1/12)*(50e-3)*(60e-3)**3
n1 = Node((0,0))
n2 = Node((0.3,0))
n3 = Node((0.5,0))
n4 = Node((0.7,0))
n5 = Node((1,0))
e1 = Beam((n1,n2), E, I)
e2 = Beam((n2,n3), E, I)
e3 = Beam((n3,n4), E...
docs/nusa-info/es/beam-element.ipynb
JorgeDeLosSantos/nusa
mit
Install requirements
!apt-get update
!apt-get install chromium-chromedriver
!pip install -r easygen/requirements.txt
Easygen.ipynb
markriedl/easygen
mit
Download StyleGAN
!git clone https://github.com/NVlabs/stylegan.git
!cp easygen/stylegan_runner.py stylegan
Download GPT-2
!git clone https://github.com/nshepperd/gpt-2
Create backend hooks for saving and loading programs
import IPython
from google.colab import output

def python_save_hook(file_text, filename):
    import easygen
    import hooks
    status = hooks.python_save_hook_aux(file_text, filename)
    ret_status = 'true' if status else 'false'
    return IPython.display.JSON({'result': ret_status})

def python_load_hook(filename):
    impo...
Import EasyGen
import sys
sys.path.insert(0, 'easygen')
import easygen
2. Download pre-trained neural network models

2.1 Download GPT-2 and StyleGAN models

Download the GPT-2 small 117M model. It will be saved to the models/117M directory.
!python gpt-2/download_model.py 117M
Download the GPT-2 medium 345M model. It will be saved to the models/345M directory.
!python gpt-2/download_model.py 345M
Download the StyleGAN cats model (256x256). It will be saved as "cats256x256.pkl" in the home directory.
!wget -O cats256x256.pkl https://www.dropbox.com/s/1w97383h0nrj4ea/karras2019stylegan-cats-256x256.pkl?dl=0
2.2 Download Wikipedia

You only need to do this if you are using the ReadWikipedia functionality. This takes a long time, so you may want to skip it if you know you won't be scraping data from Wikipedia.
!wget -O wiki.zip https://www.dropbox.com/s/39w6mj1akwy2a0r/wiki.zip?dl=0
!unzip wiki.zip
3. Run the GUI

Run the cell below. This will load a default example program that generates new, fictional paint names. Use the "clear" button to clear it and make your own. When done, name the program and press the "save" button. You should see your file appear in the file listing in the left panel.
%%html
<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1.0"/>
<style>
/* MAKE CANVAS */
canvas {
  border: 1px solid #d3d3d3;
  background-color: #f1f1f1;
}
</style>
</head>
<body>
<script src="nbextensions/google.colab/module_dicts.js"></script>
<script src="nbextension...
4. Run Your Program

This will run a default example program that generates new, fictional paint names. If you don't want to run that program, skip to the next cell.
program_file_name = 'easygen/examples/make_new_colors'
easygen.main(program_file_name)
Once you've made your own program, run the cell below, enter the program name, and then press the Run button.
%%html
<html>
<body>
<script src="nbextensions/google.colab/run_program.js"></script>
<b>Run Program:</b>
<input id="inp_run" />
<button onmouseup="run_program()">Run</button>
</body>
</html>
5. View Your Output Files

Run the cell below to launch a file manager so you can view your text and image files. You can use the panel to the left to download any files written to disk.
%%html
<html>
<body>
<script src="nbextensions/google.colab/file_manager.js"></script>
<h1>Manage Files</h1>
<table cols="3" border="0">
<tr><td><strong id="path1">/content</strong></td><td></td><td><strong id="path2">/content</strong></td></tr>
<tr><td><select multiple id="file_list1"></select></td><td><p><button ...
Tokenizing text and creating sequences for sentences
# Import the Tokenizer
from tensorflow.keras.preprocessing.text import Tokenizer
courses/udacity_intro_to_tensorflow_for_deep_learning/l09c01_nlp_turn_words_into_tokens.ipynb
tensorflow/examples
apache-2.0
Write some sentences

Feel free to change and add sentences as you like.
sentences = [
    'My favorite food is ice cream',
    'do you like ice cream too?',
    'My dog likes ice cream!',
    "your favorite flavor of icecream is chocolate",
    "chocolate isn't good for dogs",
    "your dog, your cat, and your parrot prefer broccoli"
]
Tokenize the words

The first step to preparing text to be used in a machine learning model is to tokenize the text, in other words, to generate numbers for the words.
# Optionally set the max number of words to tokenize.
# The out-of-vocabulary (OOV) token represents words that are not in the index.
# Call fit_on_texts() on the tokenizer to generate unique numbers for each word.
tokenizer = Tokenizer(num_words=100, oov_token="<OOV>")
tokenizer.fit_on_texts(sentences)
View the word index

After you tokenize the text, the tokenizer has a word index that contains key-value pairs for all the words and their numbers. The word is the key, and the number is the value. Notice that the OOV token is the first entry.
# Examine the word index
word_index = tokenizer.word_index
print(word_index)

# Get the number for a given word
print(word_index['favorite'])
Create sequences for the sentences

After you tokenize the words, the word index contains a unique number for each word. However, the numbers in the word index are not ordered. Words in a sentence have an order. So after tokenizing the words, the next step is to generate sequences for the sentences.
sequences = tokenizer.texts_to_sequences(sentences)
print(sequences)
Sequence sentences that contain words that are not in the word index

Let's take a look at what happens if the sentence being sequenced contains words that are not in the word index. The Out of Vocabulary (OOV) token is the first entry in the word index. You will see it show up in the sequences in place of any word tha...
sentences2 = [
    "I like hot chocolate",
    "My dogs and my hedgehog like kibble but my squirrel prefers grapes and my chickens like ice cream, preferably vanilla"
]
sequences2 = tokenizer.texts_to_sequences(sentences2)
print(sequences2)
You can also omit the colon before the step if you are not specifying one:
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[:])
crash-course/slices.ipynb
citxx/sis-python
mit
Specifying the end

Specify up to which element to output:
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[:5])  # the same as lst[:5:]

lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[:0])
Specifying the start

Or specify which element to start from:
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[2:])

lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[2:5])
Specifying the step
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[1:7:2])

lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[::2])
Negative step

You can even use a negative step, just like in range:
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[::-1])
With a start specified, a slice with a negative step can be read as: "starting from the element at index 2, go backwards with step 1 until the list runs out".
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[2::-1])
With a negative step it is important to give the start and the end in the right order, and to remember that the left number is always inclusive while the right one is exclusive:
lst = [1, 2, 3, 4, 5, 6, 7, 8]
# Suppose we want the elements at indices 1 and 2 in reverse order
print(lst[1:3:-1])  # wrong order for a negative step: the result is empty
# Starting from the element at index 2, go backwards with step 1
# until we meet the element at index 0 (0 is excluded)
print(lst[2:0:-1])
Properties of slices

Slices do not modify the original list; they create a copy. This lets slices solve the aliasing problem that arises when one shared list is modified:
a = [1, 2, 3, 4]  # a is a reference to a list; each element is a reference to the objects 1, 2, 3, 4
b = a  # b is a reference to the same list
a[0] = -1  # Modify an element of list a
print("a =", a)
print("b =", b)  # The value of b changed too!
print()

a = [1, 2, 3, 4]
b = a[:]  # Create a copy ...
Usage examples

With slices you can, for example, skip the element of a list at a given index:
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[:4] + lst[5:])
Or swap two parts of a list:
lst = [1, 2, 3, 4, 5, 6, 7, 8]
swapped = lst[5:] + lst[:5]  # swap the two parts, splitting at index 5
print(swapped)
Slices and strings

Slices can be used not only with lists but also with strings. For example, to change the third character of a string you can do this:
s = "long string"
s = s[:2] + "!" + s[3:]
print(s)
Valuable Functions in Pandas
import numpy as np
import pandas as pd

### Social Network Ads
social_network = pd.read_csv("Social_Network_Ads.csv")
social_network.head()
social_network.shape
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
value_counts() - The Series class provides a method, value_counts, that counts the number of times each value appears.
social_network["Gender"].value_counts()
isnull - It returns a boolean mask of null values; chained with sum(), it gives the number of null values in each column.
social_network.isnull().sum()
sort_index() - It sorts the Series by index, so the values appear in order.
social_network["Age"].value_counts().sort_index()
value_counts(sort=False) - It counts the number of times each value appears, leaving the result unsorted rather than ordering it by frequency.
social_network["EstimatedSalary"].value_counts(sort=False)
describe() - It reports summary statistics: count, mean, standard deviation, minimum, quartiles, and maximum.
social_network["EstimatedSalary"].describe()
Suppose a salary above 140,000 is an error or an outlier; we can eliminate it by replacing it with np.nan. The loc attribute provides several ways to select rows and columns from a DataFrame. In this example, the first expression in brackets is the row indexer; the second expression selects the column. The expression social_...
social_network.loc[social_network["EstimatedSalary"] > 140000, "EstimatedSalary"] = np.nan
social_network["EstimatedSalary"].describe()
Histogram

One of the best ways to describe a variable is to report the values that appear in the dataset and how many times each value appears. This description is called the distribution of the variable. The most common representation of a distribution is a histogram, which is a graph that shows the frequency of each ...
from collections import Counter

# Convert series into a list
age = list(social_network["Age"])

# Create dict format for age; it helps in building the histogram easily.
# The result is a dictionary that maps from values to frequencies.
hist = {}
for a in age:
    hist[a] = hist.get(a, 0) + 1

# Same as the dict format above.
counter = Counter(age)
#The result...
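The manual dictionary and collections.Counter above produce the same mapping; a minimal, self-contained illustration (the ages here are made up, not the ads data):

```python
from collections import Counter

ages = [25, 25, 30, 25, 30]

# Manual frequency count: value -> number of occurrences
hist = {}
for a in ages:
    hist[a] = hist.get(a, 0) + 1

# Same result via the standard library
counter = Counter(ages)
```

Counter compares equal to the plain dict, so either form works as a histogram.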
Plotting
import matplotlib.pyplot as plt

plt.hist(age)
plt.xlabel("Age")
plt.ylabel("Freq")

purchased_customer = social_network[social_network["Purchased"] == 1]
plt.hist(purchased_customer["Age"])
plt.xlabel("Age")
plt.ylabel("Freq")

social_network.head()
no_purchase = social_network[social_network["Purchased"] == 0]
no_purchase.Age.mean()
purchased_custom...
Some of the characteristics we might want to report are:
- central tendency: Do the values tend to cluster around a particular point?
- modes: Is there more than one cluster?
- spread: How much variability is there in the values?
- tails: How quickly do the probabilities drop off as we move away from the modes?
- outliers: Are ...
print("Variance: ", purchased_customer.Age.var())
print("Standard Deviation: ", purchased_customer.Age.std())
print("Variance: ", no_purchase.Age.var())
print("Standard Deviation:", no_purchase.Age.std())
Effect Size

An effect size is a summary statistic intended to describe (wait for it) the size of an effect. For example, to describe the difference between two groups, one obvious choice is the difference in the means. https://en.wikipedia.org/wiki/Effect_size
purchased_customer.Age.mean() - no_purchase.Age.mean()
purchased_customer["EstimatedSalary"].mean() - no_purchase["EstimatedSalary"].mean()

def CohenEffectSize(group1, group2):
    diff = group1.mean() - group2.mean()
    var1 = group1.var()
    var2 = group2.var()
    n1, n2 = len(group1), len(group2)
    pooled_var...
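The CohenEffectSize cell above is cut off. A common completion (the pooled-variance formula from ThinkStats, which this section follows, is an assumption here) weights the two group variances by group size; a self-contained sketch with made-up groups:

```python
import math
from statistics import mean, pvariance

def cohen_effect_size(group1, group2):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    diff = mean(group1) - mean(group2)
    n1, n2 = len(group1), len(group2)
    # Pool the population variances, weighted by group size
    pooled_var = (n1 * pvariance(group1) + n2 * pvariance(group2)) / (n1 + n2)
    return diff / math.sqrt(pooled_var)

# Illustrative groups (hypothetical ages, not the ads data)
d = cohen_effect_size([30, 35, 40, 45], [25, 30, 35, 40])
```

Here both groups have the same spread, so d is simply the mean difference (5) over the shared standard deviation.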
PMF - Probability Mass Function

Purchased
pmf_age_purchased = {}
for age in purchased_customer["Age"].value_counts().index:
    pmf_age_purchased[age] = purchased_customer[purchased_customer["Age"] == age]["Age"].count() / purchased_customer["Age"].shape[0]

# The Pmf is normalized so total probability is 1.
sum(list(pmf_age_purchased.values()))
Not Purchased
pmf_age_no_purchased = {}
for age in no_purchase["Age"].value_counts().index:
    pmf_age_no_purchased[age] = no_purchase[no_purchase["Age"] == age]["Age"].count() / no_purchase["Age"].shape[0]

sum(list(pmf_age_no_purchased.values()))
Difference between Hist and PMF The biggest difference is that a Hist maps from values to integer counters; a Pmf maps from values to floating-point probabilities.
plt.bar(pmf_age_no_purchased.keys(), pmf_age_no_purchased.values())
plt.bar(pmf_age_purchased.keys(), pmf_age_purchased.values())

# Ages 27-41
ages = range(27, 41)
diffs = []
for age in ages:
    p1 = pmf_age_purchased[age]
    p2 = pmf_age_no_purchased[age]
    diff = 100 * (p1 - p2)
    diffs.append(diff)
plt.bar(ages, di...
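The Hist-vs-Pmf distinction can be sketched on a tiny sample (illustrative values, not the ads data): the Hist maps values to integer counts, and dividing each count by n turns it into a Pmf whose probabilities sum to 1.

```python
from collections import Counter

sample = [27, 27, 30, 35, 35, 35]

hist = Counter(sample)  # Hist: value -> integer count
n = len(sample)
pmf = {x: count / n for x, count in hist.items()}  # Pmf: value -> probability
```

For this sample the count for 35 is 3, so its probability is 3/6 = 0.5, and the probabilities add up to exactly 1.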
Dataframe Indexing
### Create DataFrame from array
array = np.random.randn(4, 2)
df = pd.DataFrame(array)
df

columns = ['A', 'B']
df = pd.DataFrame(array, columns=columns)

index = ['a', 'b', 'c', 'd']
df = pd.DataFrame(array, columns=columns, index=index)
df
To select a row by label, you can use the loc attribute, which returns a Series.
df.loc['a']
If the integer position of a row is known, rather than its label, you can use the iloc attribute, which also returns a Series.
df.iloc[0]

indices = ['a', 'c']
df.loc[indices]

df['a':'c']
df[0:2]
In both slices above the result is a DataFrame, but notice that the first result includes the end of the slice while the second doesn't.

Limits of PMFs

PMFs work well if the number of values is small. But as the number of values increases, the probability associated with each value gets smaller and the effect of random nois...
def PercentileRank(scores, your_score):
    count = 0
    for score in scores:
        if score <= your_score:
            count += 1
    percentile_rank = 100.0 * count / len(scores)
    return percentile_rank

social_network.dropna(inplace=True)
salary = list(social_network["EstimatedSalary"])
my_sal = 100000
Perc...
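A quick self-contained check of the percentile-rank idea (the scores here are made up): with 4 of 5 scores at or below yours, your percentile rank is 80.

```python
def percentile_rank(scores, your_score):
    # Fraction of scores at or below your_score, scaled to 0-100
    count = sum(1 for s in scores if s <= your_score)
    return 100.0 * count / len(scores)

rank = percentile_rank([55, 66, 77, 88, 99], 88)  # 4 of 5 scores are <= 88
```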
CDF

The CDF is the function that maps from a value to its percentile rank. The CDF is a function of x, where x is any value that might appear in the distribution. To evaluate CDF(x) for a particular value of x, we compute the fraction of values in the distribution less than or equal to x.
# This function is almost identical to PercentileRank, except that the result is a probability
# in the range 0-1 rather than a percentile rank in the range 0-100.
def EvalCdf(sample, x):
    count = 0.0
    for value in sample:
        if value <= x:
            count += 1
    prob = count / len(sample)
    return prob
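A sanity check on a small made-up sample: CDF(x) is the fraction of values <= x, so it is 0 below the minimum and 1 at the maximum.

```python
def eval_cdf(sample, x):
    # Fraction of values in the sample that are <= x
    count = sum(1 for value in sample if value <= x)
    return count / len(sample)

sample = [1, 2, 2, 3, 5]
low = eval_cdf(sample, 0)   # below the minimum
mid = eval_cdf(sample, 2)   # three of five values are <= 2
high = eval_cdf(sample, 5)  # at the maximum
```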
One way to read a CDF is to look up percentiles. For example, among people who didn't make a purchase, it looks like about 90% are aged less than 40 years. The CDF also provides a visual representation of the shape of the distribution. Common values appear as steep or vertical sections of the CDF; in this example, the mode...
step_plot(sorted(no_purchase["Age"]), no_purchase_prob, "Age")
Among people who made a purchase, it looks like only about 30% are aged less than 40 years; the remaining 70% are above 40.
step_plot(sorted(purchased_customer["Age"]), purchase_prob, "Age")
Estimated Salary (Purchase vs No Purchase)
no_purchase_prob = []
for sal in sorted(no_purchase["EstimatedSalary"]):
    no_purchase_prob.append(EvalCdf(no_purchase["EstimatedSalary"], sal))

purchase_prob = []
for sal in sorted(purchased_customer["EstimatedSalary"]):
    purchase_prob.append(EvalCdf(purchased_customer["EstimatedSalary"], sal))
Under the no-purchase curve (blue), the curve stays flat after 90K with only minor blips. Under the purchase curve (orange), the steps keep climbing even after 90K, which suggests that people with higher salaries have more purchasing power; everyone above the 50th percentile earns 90K or more.
step_plot(sorted(no_purchase["EstimatedSalary"]), no_purchase_prob, "Estimated Salary")
step_plot(sorted(purchased_customer["EstimatedSalary"]), purchase_prob, "Estimated Salary")
Quantiles: https://en.wikipedia.org/wiki/Quantile
Percentile(list(purchased_customer["EstimatedSalary"]),75)
Percentile ranks are useful for comparing measurements across different groups. For example, people who compete in foot races are usually grouped by age and gender. To compare people in different age groups, you can convert race times to percentile ranks. A few years ago I ran the James Joyce Ramble 10K in Dedham MA; I...
def PositionToPercentile(position, field_size):
    beat = field_size - position + 1
    percentile = 100.0 * beat / field_size
    return percentile
In my age group, denoted M4049 for “male between 40 and 49 years of age”, I came in 26th out of 256. So my percentile rank in my age group was 90%. If I am still running in 10 years (and I hope I am), I will be in the M5059 division. Assuming that my percentile rank in my division is the same, how much slower should I ...
def PercentileToPosition(percentile, field_size):
    beat = percentile * field_size / 100.0
    position = field_size - beat + 1
    return position
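The race arithmetic from the text can be checked directly with the two functions above (reproduced here so the cell is self-contained): 26th out of 256 gives a percentile rank of about 90, and holding that rank in a field of 171 lands between 17th and 18th place.

```python
def position_to_percentile(position, field_size):
    # Runners you beat (counting yourself), as a percentage of the field
    beat = field_size - position + 1
    return 100.0 * beat / field_size

def percentile_to_position(percentile, field_size):
    beat = percentile * field_size / 100.0
    return field_size - beat + 1

pr = position_to_percentile(26, 256)   # 26th out of 256 in M4049
pos = percentile_to_position(pr, 171)  # same percentile rank in a field of 171
```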
There were 171 people in M5059, so I would have to come in between 17th and 18th place to have the same percentile rank. The finishing time of the 17th runner in M5059 was 46:05, so that's the time I will have to beat to maintain my percentile rank.

Modeling Distributions

Exponential Distribution

The CDF of the exponen...
import math

babyboom = pd.read_csv('babyboom.dat', sep=" ", header=None)
babyboom.columns = ["time", "gender", "weight", "minutes"]
diffs = list(babyboom.minutes.diff())

e_cdf = []
l = 0.5
def exponential_distribution(x):
    e_cdf.append(1 - math.exp(-1 * l * x))
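With λ = 0.5 as in the cell above, the exponential CDF 1 − e^(−λx) can be spot-checked at a few points: it starts at 0, and at x = 2 (the mean interarrival time, 1/λ) it reaches 1 − e^(−1) ≈ 0.632.

```python
import math

lam = 0.5

def exp_cdf(x):
    # CDF of the exponential distribution with rate lam
    return 1 - math.exp(-lam * x)

at_zero = exp_cdf(0)  # no mass at or below 0
at_mean = exp_cdf(2)  # evaluated at the mean, 1/lam
```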
Normal Distribution

The normal distribution, also called Gaussian, is commonly used because it describes many phenomena, at least approximately. It turns out that there is a good reason for its ubiquity. The normal distribution is characterized by two parameters: the mean, μ, and standard deviation σ. The normal distri...
import scipy.stats as stat

def EvalNormalCdf(x, mu=0, sigma=1):
    return stat.norm.cdf(x, loc=mu, scale=sigma)

mu = social_network["Age"].mean()
sigma = social_network["Age"].std()
step_plot(sorted(social_network["Age"]),
          EvalNormalCdf(sorted(social_network["Age"]), mu=mu, sigma=sigma),
          "Age")
Pareto Distribution: https://en.wikipedia.org/wiki/Pareto_distribution
Preferential Attachment: https://en.wikipedia.org/wiki/Preferential_attachment

Probability Density Function

The derivative of the CDF is the PDF.
from IPython.display import Image

Image('PDF.png')
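The claim that the PDF is the derivative of the CDF can be verified numerically. For the exponential distribution used earlier, d/dx [1 − e^(−λx)] = λe^(−λx); a central finite difference of the CDF should match the PDF (a sketch, not tied to the notebook's data):

```python
import math

lam = 0.5

def cdf(x):
    return 1 - math.exp(-lam * x)

def pdf(x):
    # Analytic derivative of the CDF above
    return lam * math.exp(-lam * x)

x, h = 1.0, 1e-6
# Central difference approximates the derivative of the CDF at x
numeric_derivative = (cdf(x + h) - cdf(x - h)) / (2 * h)
```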
Evaluating a PDF for a particular value of x is usually not useful. The result is not a probability; it is a probability density. In physics, density is mass per unit of volume; in order to get a mass, you have to multiply by volume or, if the density is not constant, you have to integrate over volume. Similarly, p...
Xs = sorted(social_network["Age"])
mean, std = social_network["Age"].mean(), social_network["Age"].std()
PDF = stat.norm.pdf(Xs, mean, std)
step_plot(Xs, PDF, "Age", ylabel="Density")
Kernel Density Estimation Kernel density estimation (KDE) is an algorithm that takes a sample and finds an appropriately smooth PDF that fits the data. https://en.wikipedia.org/wiki/Kernel_density_estimation
import random

sample = [random.gauss(mean, std) for i in range(500)]
Kernel_density_estimate = stat.gaussian_kde(sample)
sample_pdf = Kernel_density_estimate.evaluate(sorted(social_network["Age"]))
step_plot(Xs, PDF, "Age", ylabel="Density")
step_plot(Xs, sample_pdf, "Age", ylabel="Density")
Estimating a density function with KDE is useful for several purposes:
- Visualization: During the exploration phase of a project, CDFs are usually the best visualization of a distribution. After you look at a CDF, you can decide whether an estimated PDF is an appropriate model of the distribution. If so, it can be a be...
Image('distributions.png')
Pmf and Hist are almost the same thing, except that a Pmf maps values to floating-point probabilities, rather than integer frequencies. If the sum of the probabilities is 1, the Pmf is normalized. Pmf provides Normalize, which computes the sum of the probabilities and divides through by a factor https://en.wikipedia.or...
mall_customer = pd.read_csv("Mall_Customers.csv")
mall_customer.isnull().sum()
Overlapping data points look darker, so darkness is proportional to density. In this version of the plot we can see detail that was not apparent before: vertical clusters at an annual income of 57k$.

Jittering: https://blogs.sas.com/content/iml/2011/07/05/jittering-to-prevent-overplotting-in-statistical-graphics.html
plt.scatter(mall_customer["Age"], mall_customer["Annual Income (k$)"], alpha=0.2)
plt.grid()
plt.ylabel("Annual Income")
plt.xlabel("Age")
HexBin for large Datasets

To handle larger datasets, another option is a hexbin plot, which divides the graph into hexagonal bins and colors each bin according to how many data points fall in it. An advantage of a hexbin is that it shows the shape of the relationship well, and it is efficient for large datasets, both in...
mall_customer.Age.describe()
np.digitize computes the index of the bin that contains each value in mall_customer.Age. The result is a NumPy array of integer indices. Values that fall below the lowest bin are mapped to index 0. Values above the highest bin are mapped to len(bins).
bins = np.arange(18, 75, 5)
indices = np.digitize(mall_customer.Age, bins)
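The edge behavior described above can be seen on a few hand-picked ages (illustrative values, not the mall data): 17 falls below the lowest bin edge and maps to 0, while 80 exceeds the highest edge (73) and maps to len(bins).

```python
import numpy as np

bins = np.arange(18, 75, 5)  # edges 18, 23, 28, ..., 73 (12 edges)
sample_ages = [17, 18, 22, 80]
indices = np.digitize(sample_ages, bins)
```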
groupby is a DataFrame method that returns a GroupBy object; used in a for loop, groups iterates over the group names and the DataFrames that represent them.
groups = mall_customer.groupby(indices)
So, for example, we can print the number of rows in each group like this:
from collections import defaultdict

for i, group in groups:
    print(i, len(group))

ages = [group.Age.mean() for i, group in groups]

cdf_group_income = defaultdict(list)
for i, grp in groups:
    for income in grp["Annual Income (k$)"]:
        cdf_group_income[i].append(EvalCdf(grp["Annual I...
Correlation

A correlation is a statistic intended to quantify the strength of the relationship between two variables. A challenge in measuring correlation is that the variables we want to compare are often not expressed in the same units. And even if they are in the same units, they come from different distributions. T...
def Cov(xs, ys, meanx=None, meany=None):
    xs = np.asarray(xs)
    ys = np.asarray(ys)
    if meanx is None:
        meanx = np.mean(xs)
    if meany is None:
        meany = np.mean(ys)
    cov = np.dot(xs - meanx, ys - meany) / len(xs)
    return cov
By default Cov computes deviations from the sample means, or you can provide known means. If xs and ys are Python sequences, np.asarray converts them to NumPy arrays. If they are already NumPy arrays, np.asarray does nothing. This implementation of covariance is meant to be simple for purposes of explanation. NumPy and...
def Corr(xs, ys):
    xs = np.asarray(xs)
    ys = np.asarray(ys)
    meanx, varx = np.mean(xs), np.var(xs)
    meany, vary = np.mean(ys), np.var(ys)
    corr = Cov(xs, ys, meanx, meany) / math.sqrt(varx * vary)
    return corr
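The hand-rolled covariance can be cross-checked against NumPy's np.cov: with bias=True, np.cov also divides by n, so the two should agree exactly (the data here is illustrative; the cov function is re-defined so this cell stands alone):

```python
import numpy as np

def cov(xs, ys):
    xs, ys = np.asarray(xs), np.asarray(ys)
    # Dot product of deviations from the means, divided by n
    return np.dot(xs - xs.mean(), ys - ys.mean()) / len(xs)

xs = [1, 2, 3, 4]
ys = [2, 4, 6, 8]
hand = cov(xs, ys)
numpy_cov = np.cov(xs, ys, bias=True)[0][1]  # off-diagonal entry of the covariance matrix
```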
MeanVar computes mean and variance slightly more efficiently than separate calls to np.mean and np.var. Pearson’s correlation is always between -1 and +1 (including both). If ρ is positive, we say that the correlation is positive, which means that when one variable is high, the other tends to be high. If ρ is negative,...
def SpearmanCorr(xs, ys):
    xranks = pd.Series(xs).rank()
    yranks = pd.Series(ys).rank()
    return Corr(xranks, yranks)
I convert the arguments to pandas Series objects so I can use rank, which computes the rank for each value and returns a Series. Then I use Corr to compute the correlation of the ranks. I could also use Series.corr directly and specify Spearman’s method:
def SpearmanCorr(xs, ys):
    xs = pd.Series(xs)
    ys = pd.Series(ys)
    return xs.corr(ys, method='spearman')

SpearmanCorr(mall_customer["Age"], mall_customer["Annual Income (k$)"])
SpearmanCorr(mall_customer["Annual Income (k$)"], mall_customer["Spending Score (1-100)"])
SpearmanCorr(mall_customer["Age"], mall_...
The Spearman rank correlation for the BRFSS data is 0.54, a little higher than the Pearson correlation, 0.51. There are several possible reasons for the difference, including:
- If the relationship is nonlinear, Pearson's correlation tends to underestimate the strength of the relationship, and Pearson's correlation can ...
def RMSE(estimates, actual):
    e2 = [(estimate - actual)**2 for estimate in estimates]
    mse = np.mean(e2)
    return math.sqrt(mse)

def Estimate1(n=7, m=1000):
    mu = 0
    sigma = 1
    means = []
    medians = []
    for _ in range(m):
        xs = [random.gauss(mu, sigma) for i in range(n)]
        xbar = np.m...
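A quick check of the RMSE definition on made-up estimates: with errors −1, 0, and 1 around the true value, the mean squared error is 2/3 and the RMSE is its square root.

```python
import math

def rmse(estimates, actual):
    # Root of the mean squared deviation of the estimates from the true value
    e2 = [(estimate - actual) ** 2 for estimate in estimates]
    return math.sqrt(sum(e2) / len(e2))

err = rmse([2, 3, 4], actual=3)  # errors are -1, 0, 1
```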
estimates is a list of estimates; actual is the actual value being estimated. In practice, of course, we don’t know actual; if we did, we wouldn’t have to estimate it. The purpose of this experiment is to compare the performance of the two estimators. When I ran this code, the RMSE of the sample mean was 0.38, which me...
Image('unbiased_estimator.png')
For an explanation of why S^2 is biased, and a proof that (Sn−1)^2 is unbiased, http://wikipedia.org/wiki/Bias_of_an_estimator. The biggest problem with this estimator is that its name and symbol are used inconsistently. The name “sample variance” can refer to either S^2 or (Sn−1)^2, and the symbol S^2 is used for eith...
def Estimate2(n=7, m=1000):
    mu = 0
    sigma = 1
    estimates1 = []
    estimates2 = []
    for _ in range(m):
        xs = [random.gauss(mu, sigma) for i in range(n)]
        biased = np.var(xs)
        unbiased = np.var(xs, ddof=1)
        estimates1.append(biased)
        estimates2.append(unbiased)
    pri...
Again, n is the sample size and m is the number of times we play the game. np.var computes S^2 by default and (Sn−1)^2 if you provide the argument ddof=1, which stands for "delta degrees of freedom." DOF: http://en.wikipedia.org/wiki/Degrees_of_freedom_(statistics).

Mean Error

MeanError computes the mean difference be...
def MeanError(estimates, actual):
    errors = [estimate - actual for estimate in estimates]
    return np.mean(errors)
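The ddof distinction is easy to see on a tiny sample (illustrative values): np.var divides by n by default and by n − 1 when ddof=1 is given.

```python
import numpy as np

xs = [1, 2, 3]
biased = np.var(xs)            # divides by n     -> S^2      = 2/3
unbiased = np.var(xs, ddof=1)  # divides by n - 1 -> (Sn-1)^2 = 1.0
```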
When I ran this code, the mean error for S^2 was -0.13. As expected, this biased estimator tends to be too low. For (Sn−1)^2, the mean error was 0.014, about 10 times smaller. As m increases, we expect the mean error for (Sn−1)^2 to approach 0. Properties like MSE and bias are long-term expectations based on many itera...
#Estimate2()
Sampling Distributions Suppose you are a scientist studying gorillas in a wildlife preserve. You want to know the average weight of the adult female gorillas in the preserve. To weigh them, you have to tranquilize them, which is dangerous, expensive, and possibly harmful to the gorillas. But if it is important to obtai...
def SimulateSample(mu=90, sigma=7.5, n=9, m=1000):
    means = []
    for j in range(m):
        xs = np.random.normal(mu, sigma, n)
        xbar = np.mean(xs)
        means.append(xbar)
    return sorted(means)
mu and sigma are the hypothetical values of the parameters. n is the sample size, the number of gorillas we measured. m is the number of times we run the simulation.
means = SimulateSample()
cdfs = [EvalCdf(means, m) for m in means]
plt.step(sorted(means), cdfs)

ci_5 = Percentile(means, 5)
ci_95 = Percentile(means, 95)
print(ci_5, ci_95)

stderr = RMSE(means, 90)
stderr
In each iteration, we choose n values from a normal distribution with the given parameters and compute the sample mean, xbar. We run 1000 simulations and then compute the distribution, cdf, of the estimates. The result is shown in the figure. This distribution is called the sampling distribution of the estimator. It shows how much the estimates would vary if we ran the experiment over and over. Two common ways to summarize the sampling distribution are the standard error (the RMSE of the estimates) and a confidence interval (the range that covers a given fraction of the estimates, e.g. the 5th to 95th percentiles for a 90% CI).
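The simulation and its two summaries can be reproduced with plain NumPy. This is a self-contained sketch: np.percentile stands in for the notebook's Percentile helper (an assumption, since that helper is defined elsewhere), and the RMSE is computed inline.

```python
import numpy as np

np.random.seed(0)
mu, sigma, n, m = 90, 7.5, 9, 1000

# sampling distribution of the mean: m experiments of n gorillas each
means = np.array([np.random.normal(mu, sigma, n).mean() for _ in range(m)])

# standard error: RMSE of the estimates around the true mean
stderr = np.sqrt(np.mean((means - mu) ** 2))

# 90% confidence interval from the 5th and 95th percentiles
ci_low, ci_high = np.percentile(means, [5, 95])
print(stderr, ci_low, ci_high)
```

The standard error should come out close to the theoretical value sigma / sqrt(n) = 2.5.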
def Estimate3(n=7, m=1000):
    lam = 2
    means = []
    medians = []
    for _ in range(m):
        xs = np.random.exponential(1.0/lam, n)
        L = 1 / np.mean(xs)
        Lm = math.log(2) / pd.Series(xs).median()
        means.append(L)
        medians.append(Lm)
    print('rmse L', RMSE(means, lam))
    print('rmse Lm', RMSE(medians, lam))
    print('mean error L', MeanError(means, lam))
    print('mean error Lm', MeanError(medians, lam))
When I run this experiment with λ = 2, the RMSE of L is 1.1. For the median-based estimator Lm, RMSE is 2.2. We can’t tell from this experiment whether L minimizes MSE, but at least it seems better than Lm. Sadly, it seems that both estimators are biased. For L the mean error is 0.39; for Lm it is 0.54. And neither converges to 0 as m increases.
Estimate3()
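One way to see the bias of L at work is to check that its mean error shrinks as the sample size grows. This is a synthetic check, not from the text above: for an exponential sample, E[1/x̄] = λ·n/(n−1), so the bias at n = 7 should be about λ/6 ≈ 0.33 and much smaller at n = 70.

```python
import numpy as np

np.random.seed(1)
lam, m = 2, 10000

def mean_error_L(n):
    # mean error of L = 1 / xbar as an estimator of lam
    estimates = [1 / np.mean(np.random.exponential(1 / lam, n)) for _ in range(m)]
    return np.mean(estimates) - lam

err_small = mean_error_L(7)    # expect roughly lam / 6 ≈ 0.33
err_large = mean_error_L(70)   # expect roughly lam / 69 ≈ 0.03
print(err_small, err_large)
```

So the bias is a property of small samples: it does not go away as m (the number of experiments) grows, but it does shrink as n (the sample size) grows.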
Hypothesis Testing
The fundamental question we want to address is whether the effects we see in a sample are likely to appear in the larger population. For example, in the Social Network ads sample we see a difference in mean Age between customers who purchased and those who did not. We would like to know if that effect reflects a real difference in the population, or whether it might appear in the sample by chance.
data = (140, 110)
heads, tails = data[0], data[1]
actual = heads - tails

def test_statistic(data):
    heads, tails = data["H"], data["T"]
    test_stat = abs(heads - tails)
    return test_stat

def generate_sample(data):
    heads, tails = data[0], data[1]
    n = data[0] + data[1]
    toss_sample = {}
    sample = [random.choice('HT') for _ in range(n)]
    counts = Counter(sample)
    toss_sample["H"] = counts["H"]
    toss_sample["T"] = counts["T"]
    return toss_sample

test_stats = [test_statistic(generate_sample(data)) for _ in range(1000)]
pvalue = sum(1 for x in test_stats if x >= actual) / 1000
pvalue
The result is about 0.059, which means that if the coin is fair, we expect to see a difference as big as 30 about 5.9% of the time.
Interpreting the Results
How should we interpret this result? By convention, 5% is the threshold of statistical significance. If the p-value is less than 5%, the effect is considered significant; otherwise it is not.
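A more compact, self-contained version of the same coin test simulates the fair-coin null hypothesis with NumPy's binomial sampler instead of drawing individual tosses (an alternative to the generate_sample approach above, not the author's code):

```python
import numpy as np

np.random.seed(2)
heads, tails = 140, 110
n = heads + tails
actual = abs(heads - tails)   # observed difference: 30

# under the null hypothesis, the number of heads is Binomial(n, 0.5)
iters = 10000
sim_heads = np.random.binomial(n, 0.5, iters)
sim_stats = np.abs(sim_heads - (n - sim_heads))
pvalue = np.mean(sim_stats >= actual)
print(pvalue)
```

The result should land near the 0.059 reported above.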
def TestStatistic(data):
    group1, group2 = data
    test_stat = abs(group1.mean() - group2.mean())
    return test_stat

def MakeModel(data):
    group1, group2 = data
    n, m = len(group1), len(group2)
    pool = np.hstack((group1, group2))
    return pool, n

def RunModel(pool, n):
    np.random.shuffle(pool)
    data = pool[:n], pool[n:]
    return data
data is a pair of sequences, one for each group. The test statistic is the absolute difference in the means. MakeModel records the sizes of the groups, n and m, and combines the groups into one NumPy array, pool. RunModel simulates the null hypothesis by shuffling the pooled values and splitting them into two groups with sizes n and m.
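Put together, the three pieces make a permutation test. Here is a self-contained sketch on synthetic data (hypothetical ages, not the Social Network sample); the group means differ by construction, so the p-value should be very small:

```python
import numpy as np

rng = np.random.default_rng(3)
group1 = rng.normal(40, 10, 200)   # hypothetical ages of purchasers
group2 = rng.normal(34, 10, 200)   # hypothetical ages of non-purchasers

actual = abs(group1.mean() - group2.mean())
pool, n = np.hstack((group1, group2)), len(group1)

iters = 1000
count = 0
for _ in range(iters):
    rng.shuffle(pool)                             # simulate the null hypothesis
    stat = abs(pool[:n].mean() - pool[n:].mean())
    if stat >= actual:
        count += 1
pvalue = count / iters
print(pvalue)
```

Because the true means differ by 6 while the null distribution of the difference has a standard deviation of about 1, essentially no shuffled split matches the observed difference.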
purchased_customer.dropna(inplace=True)
no_purchase.dropna(inplace=True)

data = purchased_customer.Age.values, no_purchase.Age.values
ht = sample_generator(data)
actual_diff = TestStatistic(data)

def calculate_pvalue(data, iters=1000):
    test_stats = [TestStatistic(sample_generator(data)) for _ in range(iters)]
    count = sum(1 for x in test_stats if x >= actual_diff)
    return sorted(test_stats), count / iters

test_stats, pvalue = calculate_pvalue(data)
pvalue
The resulting p-value is about 0.0, which means that we expect to see a difference as big as the observed effect about 0% of the time. So this effect is statistically significant. If we run the same analysis with estimated salary, the computed p-value is also 0; after 1000 attempts, the simulation never yields an effect as big as the observed difference, so we would report p < 0.001.
def TestStatistic(data):
    group1, group2 = data
    test_stat = group1.mean() - group2.mean()
    return test_stat

def MakeModel(data):
    group1, group2 = data
    n, m = len(group1), len(group2)
    pool = np.hstack((group1, group2))
    return pool, n

def RunModel(pool, n):
    np.random.shuffle(pool)
    data = pool[:n], pool[n:]
    return data
DiffMeansOneSided inherits MakeModel and RunModel from the previous test; the only difference is that TestStatistic does not take the absolute value of the difference. This kind of test is called one-sided because it only counts one side of the distribution of differences. The previous test, using both sides, is two-sided.
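The relationship between the two versions can be checked directly: when the observed difference is positive, the one-sided test counts a subset of the outcomes the two-sided test counts, so its p-value is at most as large (and, under a symmetric null, about half). A sketch with synthetic data where the shift is fixed at exactly +2:

```python
import numpy as np

rng = np.random.default_rng(4)
base = rng.normal(40, 10, 100)
group1 = base + 2        # identical samples except for a fixed +2 shift
group2 = base.copy()

actual = group1.mean() - group2.mean()   # exactly 2.0 by construction
pool, n = np.hstack((group1, group2)), len(group1)

diffs = []
for _ in range(2000):
    rng.shuffle(pool)
    diffs.append(pool[:n].mean() - pool[n:].mean())
diffs = np.array(diffs)

p_two_sided = np.mean(np.abs(diffs) >= abs(actual))
p_one_sided = np.mean(diffs >= actual)
print(p_one_sided, p_two_sided)
```

Since a one-sided p-value is smaller, it is more likely to be significant; that is only legitimate if the corresponding hypothesis (the direction of the effect) was chosen in advance.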
def TestStatistic(data):
    group1, group2 = data
    test_stat = group1.std() - group2.std()
    return test_stat

def MakeModel(data):
    group1, group2 = data
    n, m = len(group1), len(group2)
    pool = np.hstack((group1, group2))
    return pool, n

def RunModel(pool, n):
    np.random.shuffle(pool)
    data = pool[:n], pool[n:]
    return data
This is a one-sided test because the hypothesis is that the standard deviation for purchasing customers is higher, not just different. The p-value is 0.23, which is not statistically significant.
data = purchased_customer.Age.values, no_purchase.Age.values
ht = sample_generator(data)
actual_diff = TestStatistic(data)

def calculate_pvalue(data, iters=1000):
    test_stats = [TestStatistic(sample_generator(data)) for _ in range(iters)]
    count = sum(1 for x in test_stats if x >= actual_diff)
    return sorted(test_stats), count / iters

test_stats, pvalue = calculate_pvalue(data)
pvalue
Testing Correlation
This framework can also test correlations. For example, in the Social Network ads data set, the correlation between a customer's Age and EstimatedSalary is about 0.11. It seems like older customers have higher salaries. But could this effect be due to chance? For the test statistic, I use Pearson’s correlation, but Spearman’s would work as well.
Corr(social_network["Age"], social_network["EstimatedSalary"])

def TestStatistic(data):
    xs, ys = data
    test_stat = abs(Corr(xs, ys))
    return test_stat

def RunModel(data):
    xs, ys = data
    xs = np.random.permutation(xs)
    return xs, ys
data is a pair of sequences. TestStatistic computes the absolute value of Pearson’s correlation. RunModel shuffles the xs and returns simulated data.
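The whole correlation test can be sketched end to end on synthetic data. This standalone version uses np.corrcoef in place of the notebook's Corr helper (an assumption, since Corr is defined elsewhere), and the dependence between xs and ys is built in, so the p-value should be essentially zero:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000
xs = rng.normal(40, 10, n)                 # hypothetical ages
ys = 0.05 * xs + rng.normal(0, 1, n)       # ys depends on xs by construction

def corr(xs, ys):
    # Pearson's correlation via the correlation matrix
    return np.corrcoef(xs, ys)[0, 1]

actual = abs(corr(xs, ys))
iters = 1000
test_stats = [abs(corr(rng.permutation(xs), ys)) for _ in range(iters)]
pvalue = sum(1 for t in test_stats if t >= actual) / iters
print(actual, pvalue)
```

Shuffling xs destroys any real relationship with ys, so the permuted correlations cluster around zero with a spread of roughly 1/sqrt(n).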
data = social_network.Age.values, social_network.EstimatedSalary.values
actual_diff = TestStatistic(data)

def calculate_pvalue(data, iters=1000):
    test_stats = [TestStatistic(RunModel(data)) for _ in range(iters)]
    count = sum(1 for x in test_stats if x >= actual_diff)
    return sorted(test_stats), count / iters

test_stats, pvalue = calculate_pvalue(data)
pvalue
The actual correlation is 0.11. The computed p-value is 0.019; after 1000 iterations the largest simulated correlation is 0.16. So although the observed correlation is small, it is statistically significant. This example is a reminder that “statistically significant” does not always mean that an effect is important, or meaningful in practice; it only means that it is unlikely to have occurred by chance.
def TestStatistic(data):
    observed = data
    n = sum(observed)
    expected = np.ones(6) * n / 6
    test_stat = sum(abs(observed - expected))
    return test_stat

def RunModel(data):
    n = sum(data)
    values = [1, 2, 3, 4, 5, 6]
    rolls = np.random.choice(values, n, replace=True)
    freqs = Counter(rolls)
    data = np.array([freqs[v] for v in values])
    return data
The data are represented as a list of frequencies: the observed values are [8, 9, 19, 5, 8, 11]; the expected frequencies are all 10. The test statistic is the sum of the absolute differences. The null hypothesis is that the die is fair, so we simulate that by drawing random samples from values. RunModel uses Counter to count the frequency of each value and returns the simulated list of frequencies.
data = [8, 9, 19, 5, 8, 11]
actual_diff = TestStatistic(data)

def calculate_pvalue(data, iters=1000):
    test_stats = [TestStatistic(RunModel(data)) for _ in range(iters)]
    count = sum(1 for x in test_stats if x >= actual_diff)
    return sorted(test_stats), count / iters

test_stats, pvalue = calculate_pvalue(data)
pvalue
The p-value for this data is 0.13, which means that if the die is fair we expect to see the observed total deviation, or more, about 13% of the time. So the apparent effect is not statistically significant.
Chi-squared tests
In the previous section we used total deviation as the test statistic. But for testing proportions it is more common to use the chi-squared statistic: χ^2 = Σ (O_i − E_i)^2 / E_i, where O_i are the observed frequencies and E_i are the expected frequencies.
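For the dice data above, the two statistics are easy to compute side by side:

```python
import numpy as np

observed = np.array([8, 9, 19, 5, 8, 11])
n = observed.sum()                       # 60 rolls in total
expected = np.full(6, n / 6)             # 10 per face for a fair die

total_dev = np.abs(observed - expected).sum()          # earlier test statistic
chi2 = ((observed - expected) ** 2 / expected).sum()   # chi-squared statistic
print(total_dev, chi2)                                 # 20.0 11.6
```

The deviation of +9 on the third face contributes 9 out of 20 to the total deviation, but 81 out of 116 to the (unnormalized) chi-squared sum, which is what makes the chi-squared statistic more sensitive to a single large deviation.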
def TestStatistic(data):
    observed = data
    n = sum(observed)
    expected = np.ones(6) * n / 6
    test_stat = sum((observed - expected)**2 / expected)
    return test_stat
Squaring the deviations (rather than taking absolute values) gives more weight to large deviations. Dividing through by expected standardizes the deviations, although in this case it has no effect because the expected frequencies are all equal. The p-value using the chi-squared statistic is 0.04, substantially smaller than the 0.13 we got using total deviation. If we take the 5% threshold seriously, this effect is statistically significant.
def resample(xs):
    return np.random.choice(xs, len(xs), replace=True)

def TestStatistic(data):
    group1, group2 = data
    return abs(group1.mean() - group2.mean())

def MakeModel(data):
    group1, group2 = data
    n, m = len(group1), len(group2)
    pool = np.hstack((group1, group2))
    return pool, n

def RunModel(pool, n):
    np.random.shuffle(pool)
    return pool[:n], pool[n:]

def FalseNegRate(data, num_runs=100, iters=100):
    group1, group2 = data
    count = 0
    for _ in range(num_runs):
        sample = resample(group1), resample(group2)
        actual = TestStatistic(sample)
        pool, n = MakeModel(sample)
        test_stats = [TestStatistic(RunModel(pool, n)) for _ in range(iters)]
        pvalue = sum(1 for x in test_stats if x >= actual) / iters
        if pvalue > 0.05:
            count += 1
    return count / num_runs
FalseNegRate takes data in the form of two sequences, one for each group. Each time through the loop, it simulates an experiment by drawing a random sample from each group and running a hypothesis test. Then it checks the result and counts the number of false negatives. Resample takes a sequence and draws a sample with the same length, with replacement.
data = purchased_customer.Age.values, no_purchase.Age.values
neg_rate = FalseNegRate(data)
neg_rate
Replication
The hypothesis testing process I demonstrated above is not, strictly speaking, good practice. First, I performed multiple tests. If you run one hypothesis test, the chance of a false positive is about 1 in 20, which might be acceptable. But if you run 20 tests, you should expect at least one false positive most of the time.
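The arithmetic behind that last point is simple to verify: if the tests are independent and each is run at significance level α = 0.05, the chance of at least one false positive among 20 tests is 1 − 0.95^20 ≈ 0.64.

```python
# chance of at least one false positive among k independent tests,
# each run at significance level alpha
alpha, k = 0.05, 20
p_at_least_one = 1 - (1 - alpha) ** k
print(round(p_at_least_one, 3))   # about 0.642
```

This is why corrections such as dividing α by the number of tests are common when many hypotheses are tested at once.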
# Implementation of linear least squares
def LeastSquares(xs, ys):
    meanx, varx = pd.Series(xs).mean(), pd.Series(xs).var()
    meany = pd.Series(ys).mean()
    slope = Cov(xs, ys, meanx, meany) / varx
    inter = meany - slope * meanx
    return inter, slope
LeastSquares takes sequences xs and ys and returns the estimated parameters inter and slope. For details on how it works, see http://wikipedia.org/wiki/Numerical_methods_for_linear_least_squares. FitLine takes inter and slope and returns the fitted line for a sequence of xs.
def FitLine(xs, inter, slope):
    fit_xs = np.sort(xs)
    fit_ys = inter + slope * fit_xs
    return fit_xs, fit_ys
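The slope and intercept formulas can be checked against np.polyfit. This standalone sketch uses plain NumPy for the mean, variance, and covariance (instead of pandas and the Cov helper) and synthetic data rather than the salary file:

```python
import numpy as np

rng = np.random.default_rng(6)
xs = np.arange(1.0, 11.0)                       # 1..10 "years of experience"
ys = 3.0 + 2.0 * xs + rng.normal(0, 0.1, 10)    # nearly linear data

def least_squares(xs, ys):
    meanx, meany = np.mean(xs), np.mean(ys)
    cov = np.mean((xs - meanx) * (ys - meany))  # covariance (ddof=0)
    varx = np.var(xs)                           # variance with matching ddof
    slope = cov / varx
    inter = meany - slope * meanx
    return inter, slope

inter, slope = least_squares(xs, ys)
slope_np, inter_np = np.polyfit(xs, ys, 1)      # polyfit returns slope first
print(inter, slope)
```

Note that the ddof of the variance and covariance must match; the ratio slope = cov / varx is the same whether both use n or both use n − 1 in the denominator.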
Least-squares fit between salary and years of experience
regression_data = pd.read_csv("Salary_Data.csv")
inter, slope = LeastSquares(regression_data["YearsExperience"], regression_data["Salary"])
fit_xs, fit_ys = FitLine(regression_data["YearsExperience"], inter, slope)
print("intercept: ", inter)
print("Slope: ", slope)
The estimated intercept and slope are 27465.89 and 9134.96 (salary per year of experience). These values are hard to interpret in this form: the intercept is the expected salary of an employee with 0 years of experience, i.e., a fresher's salary.
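With the reported parameters, a prediction is just inter + slope * x. For example, for 5 years of experience:

```python
inter, slope = 27465.89, 9134.96     # values reported by the fit above
years = 5
predicted = inter + slope * years    # expected salary at 5 years of experience
print(round(predicted, 2))           # 73140.69
```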
plt.scatter(regression_data["YearsExperience"], regression_data["Salary"])
plt.plot(fit_xs, fit_ys)
plt.xlabel("Experience")
plt.ylabel("Salary")
It’s a good idea to look at a figure like this to assess whether the relationship is linear and whether the fitted line seems like a good model of the relationship. Another useful test is to plot the residuals. The Residuals function below computes them.
def Residuals(xs, ys, inter, slope):
    xs = np.asarray(xs)
    ys = np.asarray(ys)
    res = ys - (inter + slope * xs)
    return res
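A quick sanity check of the residuals computation (a standalone sketch, not the notebook's own test): for points that lie exactly on the line, the residuals are all zero.

```python
import numpy as np

def residuals(xs, ys, inter, slope):
    # vertical distance of each point from the fitted line
    xs, ys = np.asarray(xs), np.asarray(ys)
    return ys - (inter + slope * xs)

xs = np.array([1.0, 2.0, 3.0, 4.0])
ys = 2.0 + 0.5 * xs                    # points placed exactly on the line
res = residuals(xs, ys, 2.0, 0.5)
print(res)                             # [0. 0. 0. 0.]
```

On real data, a residual plot with no visible pattern (centered on zero at every x) is evidence that the linear model is a reasonable fit.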