We can then have a look at some predictions:
learn.show_results()
nbs/44_tutorial.tabular.ipynb
fastai/fastai
apache-2.0
Or use the predict method on a row:
row, clas, probs = learn.predict(df.iloc[0])
row.show()
clas, probs
To get predictions on a new dataframe, you can use the test_dl method of the DataLoaders. That dataframe does not need to have the dependent variable in its columns.
test_df = df.copy()
test_df.drop(['salary'], axis=1, inplace=True)
dl = learn.dls.test_dl(test_df)
Then Learner.get_preds will give you the predictions:
learn.get_preds(dl=dl)
Note: Since a machine learning model can't magically understand categories it was never trained on, the data should reflect this. If there are different missing values in your test data, you should address this before training. fastai with Other Libraries: As mentioned earlier, TabularPandas is a powerful and easy preproc...
to.xs[:3]
Now that everything is encoded, you can then send this off to XGBoost or Random Forests by extracting the train and validation sets and their values:
X_train, y_train = to.train.xs, to.train.ys.values.ravel()
X_test, y_test = to.valid.xs, to.valid.ys.values.ravel()
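Arrays extracted this way can be passed straight to a scikit-learn estimator. The sketch below assumes scikit-learn is installed and uses a small synthetic DataFrame as a stand-in for `to.train.xs` / `to.train.ys`, which only exist inside the tutorial:

```python
# Minimal sketch: fitting a random forest on arrays shaped like
# to.train.xs / to.train.ys.values.ravel(). The data here is synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = pd.DataFrame({'age': rng.integers(18, 65, 200),
                        'hours': rng.integers(20, 60, 200)})
# a single-column target, flattened to 1-D just like .ys.values.ravel()
y_train = (X_train['hours'] > 40).astype(int).values.ravel()

model = RandomForestClassifier(n_estimators=10, random_state=0)
model.fit(X_train, y_train)
preds = model.predict(X_train)
print(preds[:5])
```

The same two arrays could be handed to XGBoost's `DMatrix` in the same way; the key point is that TabularPandas has already done the categorical encoding and normalization for you.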
Adding Distributions Distributions can be attached to most any FloatParameter in the Bundle. To see a list of these available parameters, we can call b.get_adjustable_parameters. Note the exclude_constrained option which defaults to True: we can set distributions on constrained parameters (for priors, for example), b...
b.get_adjustable_parameters()
development/tutorials/distributions.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
add_distribution is quite flexible and accepts several different syntaxes to add multiple distributions in one line. Here we'll just attach a distribution to a single parameter at a time. Just like when calling add_dataset or add_compute, add_distribution optionally takes a distribution tag -- and in the cases of dis...
b.add_distribution(qualifier='teff', component='primary', value=phoebe.gaussian(6000,100), distribution='mydist')
As you can probably expect by now, we also have methods to: get_distribution, rename_distribution, remove_distribution.
print(b.get_distribution(distribution='mydist'))
Now let's add another distribution, with the same distribution tag, to the inclination of the binary.
b.add_distribution(qualifier='incl', component='binary', value=phoebe.uniform(80,90), distribution='mydist')
print(b.get_distribution(distribution='mydist'))
Accessing & Plotting Distributions The parameters we've created and attached are DistributionParameters and live in context='distribution', with all other tags matching the parameter they're referencing. For example, let's filter and look at the distributions we've added.
print(b.filter(context='distribution'))
print(b.get_parameter(context='distribution', qualifier='incl'))
print(b.get_parameter(context='distribution', qualifier='incl').tags)
The "value" of the parameter, is the distl distributon object itself.
b.get_value(context='distribution', qualifier='incl')
And because of that, we can call any method on the distl object, including plotting the distribution.
_ = b.get_value(context='distribution', qualifier='incl').plot(show=True)
If we want to see how multiple individual distributions interact and are correlated with each other via a corner plot, we can access the combined "distribution collection" from any number of distribution tags. This may not be terribly useful now, but is very useful when trying to create multivariate priors. b.get_dist...
_ = b.plot_distribution_collection(distribution='mydist', show=True)
Sampling Distributions We can also sample from these distributions - either manually by calling sample on the distl object, or in bulk (respecting any covariances in the "distribution collection") via b.sample_distribution_collection.
b.sample_distribution_collection(distribution='mydist')
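Independent of PHOEBE's API, what one such sample conceptually contains can be sketched with numpy alone: one random draw per parameter, each from its attached distribution (the twig-style keys below are illustrative, not produced by this code):

```python
# Sketch (numpy only, NOT PHOEBE's implementation) of a single draw
# from the two distributions attached to 'mydist' above.
import numpy as np

rng = np.random.default_rng(42)
sample = {
    'teff@primary': rng.normal(6000, 100),  # like phoebe.gaussian(6000, 100)
    'incl@binary': rng.uniform(80, 90),     # like phoebe.uniform(80, 90)
}
print(sample)
```

A real distribution collection additionally tracks covariances between parameters, which independent numpy draws like these do not capture.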
By default this just returns a dictionary with the twigs and sampled values. But if we wanted, we could have these applied immediately to the face-values by passing set_value=True, in which case a ParameterSet of changed parameters (including those via constraints) is returned instead.
changed_params = b.sample_distribution_collection(distribution='mydist', set_value=True)
print(changed_params)
Propagating Distributions through Forward Model Lastly, we can have PHOEBE automatically draw from a "distribution collection" multiple times and expose the distribution of the model itself.
print(b.get_parameter(qualifier='sample_from', context='compute'))
Once sample_from is set, sample_num and sample_mode are exposed as visible parameters.
b.set_value('sample_from', value='mydist')
print(b.filter(qualifier='sample*'))
Now when we call run_compute, 10 different instances of the forward model will be computed from 10 random draws from the "distribution collection" but only the median and 1-sigma uncertainties will be exposed in the model.
b.run_compute(irrad_method='none')
_ = b.plot(show=True)
1. Functions and CEO Incomes In Which We Write Down a Recipe for Cake Let's start with a real data analysis task. We'll look at the 2015 compensation of CEOs at the 100 largest companies in California. The data were compiled for a Los Angeles Times analysis here, and ultimately came from filings mandated by the SEC f...
raw_compensation = Table.read_table('raw_compensation.csv')
raw_compensation
notebooks/data8_notebooks/lab04/lab04.ipynb
jamesfolberth/NGC_STEM_camp_AWS
bsd-3-clause
Question 1. When we first loaded this dataset, we tried to compute the average of the CEOs' pay like this: np.average(raw_compensation.column("Total Pay")) Explain why that didn't work. Hint: Try looking at some of the values in the "Total Pay" column. Write your answer here, replacing this text.
...
Question 2. Extract the first value in the "Total Pay" column. It's Mark Hurd's pay in 2015, in millions of dollars. Call it mark_hurd_pay_string.
mark_hurd_pay_string = ...
mark_hurd_pay_string
_ = tests.grade('q1_2')
Question 3. Convert mark_hurd_pay_string to a number of dollars. The string method strip will be useful for removing the dollar sign; it removes a specified character from the start or end of a string. For example, the value of "100%".strip("%") is the string "100". You'll also need the function float, which convert...
mark_hurd_pay = ...
mark_hurd_pay
_ = tests.grade('q1_3')
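The strip/float pattern described in the question can be tried on a literal string first (the pay string below is a made-up example, not a value from the dataset):

```python
# The pattern from the hint: strip removes a '$' from the start or end
# of a string, and float converts the remaining digits to a number.
pay_string = "$100 "                        # hypothetical pay string, in millions
pay_number = float(pay_string.strip("$"))   # float tolerates trailing whitespace
print(pay_number)  # 100.0
```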
To compute the average pay, we need to do this for every CEO. But that looks like it would involve copying this code 102 times. This is where functions come in. First, we'll define our own function that packages together the code we wrote to convert a pay string to a pay number. This has its own benefits. Later in ...
def convert_pay_string_to_number(pay_string):
    """Converts a pay string like '$100 ' (in millions) to a number of dollars."""
    return float(pay_string.strip("$"))

_ = tests.grade('q1_4')
Running that cell doesn't convert any particular pay string. Rather, think of it as defining a recipe for converting a pay string to a number. Writing down a recipe for cake doesn't give you a cake. You have to gather the ingredients and get a chef to execute the instructions in the recipe to get a cake. Similarly, ...
convert_pay_string_to_number(mark_hurd_pay_string)

# We can also compute Safra Catz's pay in the same way:
convert_pay_string_to_number(raw_compensation.where("Name", are.equal_to("Safra A. Catz*")).column("Total Pay").item(0))
What have we gained? Well, without the function, we'd have to copy that 10**6 * float(pay_string.strip("$")) stuff each time we wanted to convert a pay string. Now we just call a function whose name says exactly what it's doing. We'd still have to call the function 102 times to convert all the salaries, which we'll f...
...
...
...
...
twenty_percent = ...
twenty_percent
_ = tests.grade('q2_1')
Like the built-in functions, you can use named values as arguments to your function. Question 2. Use to_percentage again to convert the proportion named a_proportion (defined below) to a percentage called a_percentage. Note: You don't need to define to_percentage again! Just like other named things, functions stick ar...
a_proportion = 2**(.5) / 2
a_percentage = ...
a_percentage
_ = tests.grade('q2_2')
Here's something important about functions: Each time a function is called, it creates its own "space" for names that's separate from the main space where you normally define names. (Exception: all the names from the main space get copied into it.) So even though you defined factor = 100 inside to_percentage above an...
# You should see an error when you run this. (If you don't, you might
# have defined factor somewhere above.)
factor
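The separate-namespace behavior described above can be sketched end to end, using the to_percentage/factor example from the text:

```python
# Names assigned inside a function live in that call's own namespace,
# so they are invisible (and gone) once the call returns.
def to_percentage(proportion):
    factor = 100          # local to this call only
    return proportion * factor

print(to_percentage(0.5))  # 50.0

try:
    factor                # not defined out here
except NameError as e:
    print("NameError:", e)
```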
As we've seen with the built-in functions, functions can also take strings (or arrays, or tables) as arguments, and they can return those things, too. Question 3. Define a function called disemvowel. It should take a single string as its argument. (You can call that argument whatever you want.) It should return a co...
def disemvowel(a_string):
    ...
    ...

# An example call to your function. (It's often helpful to run
# an example call from time to time while you're writing a function,
# to see how it currently works.)
disemvowel("Can you read this without vowels?")
_ = tests.grade('q2_3')
Calls on calls on calls Just as you write a series of lines to build up a complex computation, it's useful to define a series of small functions that build on each other. Since you can write any code inside a function's body, you can call other functions you've written. This is like a recipe for cake telling you to fo...
def num_non_vowels(a_string):
    """The number of characters in a string, minus the vowels."""
    ...

_ = tests.grade('q2_4')
Functions can also encapsulate code that does things rather than just computing values. For example, if you call print inside a function, and then call that function, something will get printed. The movies_by_year dataset in the textbook has information about movie sales in recent years. Suppose you'd like to display...
movies_by_year = Table.read_table("movies_by_year.csv")
rank = 5
fifth_from_top_movie_year = movies_by_year.sort("Total Gross", descending=True).column("Year").item(rank-1)
print("Year number", rank, "for total gross movie sales was:", fifth_from_top_movie_year)
After writing this, you realize you also wanted to print out the 2nd and 3rd-highest years. Instead of copying your code, you decide to put it in a function. Since the rank varies, you make that an argument to your function. Question 5. Write a function called print_kth_top_movie_year. It should take a single argume...
def print_kth_top_movie_year(k):
    # Our solution used 2 lines.
    ...
    ...

# Example calls to your function:
print_kth_top_movie_year(2)
print_kth_top_movie_year(3)
_ = tests.grade('q2_5')
3. Applying Functions In Which Python Bakes 102 Cakes You'll get more practice writing functions, but let's move on. Defining a function is a lot like giving a name to a value with =. In fact, a function is a value just like the number 1 or the text "the"! For example, we can make a new name for the built-in functio...
our_name_for_max = max
our_name_for_max(2, 6)
The old name for max is still around:
max(2, 6)
Try just writing max or our_name_for_max (or the name of any other function) in a cell, and run that cell. Python will print out a (very brief) description of the function.
max
Why is this useful? Since functions are just values, it's possible to pass them as arguments to other functions. Here's a simple but not-so-practical example: we can make an array of functions.
make_array(max, np.average, are.equal_to)
Question 1. Make an array containing any 3 other functions you've seen. Call it some_functions.
some_functions = ...
some_functions
_ = tests.grade('q3_1')
Working with functions as values can lead to some funny-looking code. For example, see if you can figure out why this works:
make_array(max, np.average, are.equal_to).item(0)(4, -2, 7)
Here's a simpler example that's actually useful: the table method apply. apply calls a function many times, once on each element in a column of a table. It produces an array of the results. Here we use apply to convert every CEO's pay to a number, using the function you defined:
raw_compensation.apply(convert_pay_string_to_number, "Total Pay")
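What apply does can be sketched in plain Python: it is conceptually a comprehension that calls the function once per column element (this sketch uses made-up pay strings, not the dataset's values, and is not the datascience library's actual implementation):

```python
# apply(f, "col") conceptually does: [f(x) for x in column values]
def convert_pay_string_to_number(pay_string):
    """Converts a pay string like '$100 ' (in millions) to a number."""
    return float(pay_string.strip("$"))

column = ["$53.25 ", "$19.3 ", "$21.3 "]   # hypothetical "Total Pay" values
converted = [convert_pay_string_to_number(s) for s in column]
print(converted)  # [53.25, 19.3, 21.3]
```

The real apply additionally returns a numpy array rather than a list, so the result can be used directly in table columns and arithmetic.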
Here's an illustration of what that did: <img src="apply.png"/> Note that we didn't write something like convert_pay_string_to_number() or convert_pay_string_to_number("Total Pay"). The job of apply is to call the function we give it, so instead of calling convert_pay_string_to_number ourselves, we just write its name...
compensation = raw_compensation.with_column(
    "Total Pay ($)",
    ...
)
compensation
_ = tests.grade('q3_2')
Now that we have the pay in numbers, we can compute things about them. Question 3. Compute the average total pay of the CEOs in the dataset.
average_total_pay = ...
average_total_pay
_ = tests.grade('q3_3')
Question 4. Companies pay executives in a variety of ways: directly in cash; by granting stock or other "equity" in the company; or with ancillary benefits (like private jets). Compute the proportion of each CEO's pay that was cash. (Your answer should be an array of numbers, one for each CEO in the dataset.)
cash_proportion = ...
cash_proportion
_ = tests.grade('q3_4')
Check out the "% Change" column in compensation. It shows the percentage increase in the CEO's pay from the previous year. For CEOs with no previous year on record, it instead says "(No previous year)". The values in this column are strings, not numbers, so like the "Total Pay" column, it's not usable without a bit ...
# For reference, our solution involved more than just this one line of code
...
with_previous_compensation = ...
with_previous_compensation
_ = tests.grade('q3_5')
Question 6. What was the average pay of these CEOs in 2014? Does it make sense to compare this number to the number you computed in question 3?
average_pay_2014 = ...
average_pay_2014
_ = tests.grade('q3_6')
Question 7. A skeptical student asks: "I already knew lots of ways to operate on each element of an array at once. For example, I can multiply each element of some_array by 100 by writing 100*some_array. What good is apply?" How would you answer? Discuss with a neighbor. 4. Histograms Earlier, we computed the avera...
...
Question 2. Looking at the histogram, how many CEOs made more than \$30 million? (Answer the question by filling in your answer manually. You'll have to do a bit of arithmetic; feel free to use Python as a calculator.)
num_ceos_more_than_30_million = ...
Question 3. Answer the same question with code. Hint: Use the table method where and the property num_rows.
num_ceos_more_than_30_million_2 = ...
num_ceos_more_than_30_million_2
_ = tests.grade('q4_3')
Question 4. Do most CEOs make around the same amount, or are there some who make a lot more than the rest? Discuss with someone near you. 5. Randomness Data scientists also have to be able to understand randomness. For example, they have to be able to assign individuals to treatment and control groups at random, and t...
two_groups = make_array('treatment', 'control')
np.random.choice(two_groups)
The big difference between the code above and all the other code we have run thus far is that the code above doesn't always return the same value. It can return either treatment or control, and we don't know ahead of time which one it will pick. We can repeat the process by providing a second argument, the number of ti...
np.random.choice(two_groups, 10)
If we wanted to determine whether the random choice made by np.random.choice is really fair, we could make a random selection a bunch of times and then count how often each selection shows up. In the next few code blocks, write some code that calls the choice function on the two_groups array one thousand times. Then...
# replace ... with code that will run the 'choice' function 1000 times;
# the resulting array of choices will then have the name 'exp_results'
exp_results = ...

from collections import Counter
Counter(exp_results)
# the output from Counter tells you how many times 'treatment' and 'control' appear in the array
# prod...
A fundamental question about random events is whether or not they occur. For example: Did an individual get assigned to the treatment group, or not? Is a gambler going to win money, or not? Has a poll made an accurate prediction, or not? Once the event has occurred, you can answer "yes" or "no" to all these questions...
3 > 1 + 1
The value True indicates that the comparison is valid; Python has confirmed this simple fact about the relationship between 3 and 1+1. The full set of common comparison operators is listed below. <img src="comparison_operators.png"> Notice the two equal signs == in the comparison to determine equality. This is necessa...
5 = 10/2
5 == 10/2
An expression can contain multiple comparisons, and they all must hold in order for the whole expression to be True. For example, we can express that 1+1 is between 1 and 3 using the following expression.
1 < 1 + 1 < 3
The average of two numbers is always between the smaller number and the larger number. We express this relationship for the numbers x and y below. Try different values of x and y to confirm this relationship.
x = 12
y = 5
min(x, y) <= (x+y)/2 <= max(x, y)
7. Comparing Strings Strings can also be compared, and their order is alphabetical. A shorter string is less than a longer string that begins with the shorter string.
'Dog' > 'Catastrophe' > 'Cat'
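A few more comparisons make the two rules explicit: comparison proceeds character by character, and a prefix is less than any longer string it begins:

```python
# String comparison is lexicographic, character by character.
print('Dog' > 'Cat')                  # True: 'D' comes after 'C'
print('Cat' < 'Catastrophe')          # True: a prefix is less than the longer string
print('Dog' > 'Catastrophe' > 'Cat')  # True: both chained comparisons hold
```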
Let's return to random selection. Recall the array two_groups which consists of just two elements, treatment and control. To see whether a randomly assigned individual went to the treatment group, you can use a comparison:
np.random.choice(two_groups) == 'treatment'
As before, the random choice will not always be the same, so the result of the comparison won't always be the same either. It will depend on whether treatment or control was chosen. With any cell that involves random selection, it is a good idea to run the cell several times to get a sense of the variability in the res...
def sign(x):
    if x > 0:
        return 'Positive'

sign(3)
This function returns the correct sign if the input is a positive number. But if the input is not a positive number, then the if expression evaluates to a False value, and so the return statement is skipped and the function call has no value. See what happens when you run the next block.
sign(-3)
So let us refine our function to return Negative if the input is a negative number. We can do this by adding an elif clause, where elif is Python's shorthand for the phrase "else, if".
def sign(x):
    if x > 0:
        return 'Positive'
    elif x < 0:
        return 'Negative'
Now sign returns the correct answer when the input is -3:
sign(-3)
What if the input is 0? To deal with this case, we can add another elif clause:
def sign(x):
    if x > 0:
        return 'Positive'
    elif x < 0:
        return 'Negative'
    elif x == 0:
        return 'Neither positive nor negative'

sign(0)
Run the previous code block for different inputs to our sign() function to make sure it does what we want it to. Equivalently, we can replace the final elif clause with an else clause, whose body will be executed only if all the previous comparisons are False; that is, if the input value is equal to 0.
def sign(x):
    if x > 0:
        return 'Positive'
    elif x < 0:
        return 'Negative'
    else:
        return 'Neither positive nor negative'

sign(0)
9. The General Form A conditional statement can also have multiple clauses with multiple bodies, and only one of those bodies can ever be executed. The general format of a multi-clause conditional statement appears below. if <if expression>: <if body> elif <elif expression 0>: <elif body 0...
def draw_card():
    """
    Print out a random suit and numeric value representing a card
    from a standard 52-card deck.
    """
    # pick a random number to determine the suit
    suit_num = np.random.uniform(0, 1)  # this function returns a random decimal number
    # between 0 a...
11. Iteration It is often the case in programming – especially when dealing with randomness – that we want to repeat a process multiple times. For example, to check whether np.random.choice does in fact pick at random, we might want to run the following cell many times to see if Heads occurs about 50% of the time.
np.random.choice(make_array('Heads', 'Tails'))
We might want to re-run code with slightly different input or other slightly different behavior. We could copy-paste the code multiple times, but that's tedious and prone to typos, and if we wanted to do it a thousand times or a million times, forget it. A more automated solution is to use a for statement to loop over ...
for i in np.arange(3):
    print(i)
It is instructive to imagine code that exactly replicates a for statement without the for statement. (This is called unrolling the loop.) A for statement simply replicates the code inside it, but before each iteration, it assigns a new value from the given sequence to the name we chose. For example, here is an unrolled...
i = np.arange(3).item(0)
print(i)
i = np.arange(3).item(1)
print(i)
i = np.arange(3).item(2)
print(i)
Notice that the name i is arbitrary, just like any name we assign with =. Here we use a for statement in a more realistic way: we print 5 random choices from an array.
coin = make_array('Heads', 'Tails')
for i in np.arange(5):
    print(np.random.choice(coin))
In this case, we simply perform exactly the same (random) action several times, so the code inside our for statement does not actually refer to i. 12. Augmenting Arrays While the for statement above does simulate the results of five tosses of a coin, the results are simply printed and aren't in a form that we can use f...
pets = make_array('Cat', 'Dog')
np.append(pets, 'Another Pet')
This keeps the array pets unchanged:
pets
But often while using for loops it will be convenient to mutate an array – that is, change it – when augmenting it. This is done by assigning the augmented array to the same name as the original.
pets = np.append(pets, 'Another Pet')
pets
Example: Counting the Number of Heads We can now simulate five tosses of a coin and place the results into an array. We will start by creating an empty array and then appending the result of each toss.
coin = make_array('Heads', 'Tails')
tosses = make_array()

for i in np.arange(5):
    tosses = np.append(tosses, np.random.choice(coin))

tosses
Let us rewrite the cell with the for statement unrolled:
coin = make_array('Heads', 'Tails')
tosses = make_array()

i = np.arange(5).item(0)
tosses = np.append(tosses, np.random.choice(coin))
i = np.arange(5).item(1)
tosses = np.append(tosses, np.random.choice(coin))
i = np.arange(5).item(2)
tosses = np.append(tosses, np.random.choice(coin))
i = np.arange(5).item(3)
tosses ...
By capturing the results in an array we have given ourselves the ability to use array methods to do computations. For example, we can use np.count_nonzero to count the number of heads in the five tosses.
np.count_nonzero(tosses == 'Heads')
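Why this counts heads: the comparison `tosses == 'Heads'` produces a boolean array, and np.count_nonzero counts the True entries (True is nonzero). A small, fixed example:

```python
import numpy as np

# A fixed array of tosses so the count is predictable.
tosses = np.array(['Heads', 'Tails', 'Heads', 'Heads'])
matches = tosses == 'Heads'        # array([ True, False,  True,  True])
print(matches)
print(np.count_nonzero(matches))   # 3
```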
Iteration is a powerful technique. For example, by running exactly the same code for 1000 tosses instead of 5, we can count the number of heads in 1000 tosses.
tosses = make_array()
for i in np.arange(1000):
    tosses = np.append(tosses, np.random.choice(coin))
np.count_nonzero(tosses == 'Heads')
Example: Number of Heads in 100 Tosses It is natural to expect that in 100 tosses of a coin, there will be 50 heads, give or take a few. But how many is "a few"? What's the chance of getting exactly 50 heads? Questions like these matter in data science not only because they are about interesting aspects of randomness, ...
np.random.choice(coin, 10)
Now let's study 100 tosses. We will start by creating an empty array called heads. Then, in each of the 10,000 repetitions, we will toss a coin 100 times, count the number of heads, and append it to heads.
N = 10000
heads = make_array()
for i in np.arange(N):
    tosses = np.random.choice(coin, 100)
    heads = np.append(heads, np.count_nonzero(tosses == 'Heads'))
heads
Let us collect the results in a table and draw a histogram.
results = Table().with_columns(
    'Repetition', np.arange(1, N+1),
    'Number of Heads', heads
)
results
Here is a histogram of the data, with bins of width 1 centered at each value of the number of heads.
results.select('Number of Heads').hist(bins=np.arange(30.5, 69.6, 1))
# Prework: Hello World Learning objective: run a TensorFlow program in the browser. The following is a 'Hello World' TensorFlow program.
from __future__ import print_function

import tensorflow as tf

c = tf.constant('Hello, world!')

with tf.Session() as sess:
    print(sess.run(c))
ml/cc/prework/ko/hello_world.ipynb
google/eng-edu
apache-2.0
With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.) Using batch normalizatio...
def fully_connected(prev_layer, num_units, training):
    """
    Create a fully connected layer with the given layer as input and the
    given number of neurons.

    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param num_units: int
        The size of the layer. That is, the nu...
batch-norm/Batch_Normalization_Exercises.ipynb
hvillanua/deep-learning
mit
TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
def conv_layer(prev_layer, layer_depth, training):
    """
    Create a convolutional layer with the given layer as input.

    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param layer_depth: int
        We'll set the strides and number of feature maps based on the layer's de...
batch-norm/Batch_Normalization_Exercises.ipynb
hvillanua/deep-learning
mit
TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.
def train(num_batches, batch_size, learning_rate):
    # Build placeholders for the input samples and labels
    inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
    labels = tf.placeholder(tf.float32, [None, 10])
    training = tf.placeholder(tf.bool)

    # Feed the inputs into a series of 20 convolutional...
batch-norm/Batch_Normalization_Exercises.ipynb
hvillanua/deep-learning
mit
Run a simple callback as soon as possible:
def hello_world():
    print('Hello World!')
    loop.stop()

loop.call_soon(hello_world)
loop.run_forever()
notebook/aio34.ipynb
nwilbert/async-examples
mit
Coroutines can be scheduled in the event loop (internally they are wrapped in a Task). The decorator is not necessary, but has several advantages:
* documents that this is a coroutine (instead of scanning the code for yield)
* provides some debugging magic, to detect unscheduled coroutines
@asyncio.coroutine
def hello_world():
    yield from asyncio.sleep(1.0)
    print('Hello World!')

loop.run_until_complete(hello_world())
notebook/aio34.ipynb
nwilbert/async-examples
mit
Interconnect a Future and a coroutine, and wrap a coroutine in a Task (a subclass of Future).
@asyncio.coroutine
def slow_operation(future):
    yield from asyncio.sleep(1)
    future.set_result('Future is done!')

def got_result(future):
    print(future.result())
    loop.stop()

future = asyncio.Future()
future.add_done_callback(got_result)

# wrap the coro in a special Future (a Task) and schedule it
lo...
notebook/aio34.ipynb
nwilbert/async-examples
mit
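For comparison, the same future-plus-coroutine wiring can be written in modern `async`/`await` syntax (Python 3.5+; a sketch, not part of the original asyncio-3.4 tutorial):

```python
import asyncio

async def slow_operation(future):
    await asyncio.sleep(0.01)
    future.set_result('Future is done!')

async def main():
    loop = asyncio.get_running_loop()
    future = loop.create_future()
    # wrap the coroutine in a Task so it runs concurrently with the await below
    asyncio.ensure_future(slow_operation(future))
    return await future

result = asyncio.run(main())
print(result)  # Future is done!
```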
Futures implement the coroutine interface, so they can be yielded from (yield from actually calls __iter__ before the iteration).
future = asyncio.Future()
print(hasattr(future, '__iter__'))
notebook/aio34.ipynb
nwilbert/async-examples
mit
Brainstorm CTF phantom dataset tutorial Here we compute the evoked from raw for the Brainstorm CTF phantom tutorial dataset. For comparison, see :footcite:TadelEtAl2011 and: https://neuroimage.usc.edu/brainstorm/Tutorials/PhantomCtf References .. footbibliography::
# Authors: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)

import os.path as op
import numpy as np
import matplotlib.pyplot as plt

import mne
from mne import fit_dipole
from mne.datasets.brainstorm import bst_phantom_ctf
from mne.io import read_raw_ctf

print(__doc__)
0.23/_downloads/b96d98f7c704193a3ede176aaf9433d2/85_brainstorm_phantom_ctf.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
The data were collected with a CTF system at 2400 Hz.
data_path = bst_phantom_ctf.data_path(verbose=True)

# Switch to these to use the higher-SNR data:
# raw_path = op.join(data_path, 'phantom_200uA_20150709_01.ds')
# dip_freq = 7.
raw_path = op.join(data_path, 'phantom_20uA_20150603_03.ds')
dip_freq = 23.
erm_path = op.join(data_path, 'emptyroom_20150709_01.ds')

raw = r...
0.23/_downloads/b96d98f7c704193a3ede176aaf9433d2/85_brainstorm_phantom_ctf.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
The sinusoidal signal is generated on channel HDAC006, so we can use that to obtain precise timing.
sinusoid, times = raw[raw.ch_names.index('HDAC006-4408')]
plt.figure()
plt.plot(times[times < 1.], sinusoid.T[times < 1.])
0.23/_downloads/b96d98f7c704193a3ede176aaf9433d2/85_brainstorm_phantom_ctf.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Let's create some events using this signal by thresholding the sinusoid.
events = np.where(np.diff(sinusoid > 0.5) > 0)[1] + raw.first_samp
events = np.vstack((events, np.zeros_like(events), np.ones_like(events))).T
0.23/_downloads/b96d98f7c704193a3ede176aaf9433d2/85_brainstorm_phantom_ctf.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
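The thresholding trick itself — finding the samples where the signal first rises above a cutoff — can be sketched in plain Python on a synthetic sine wave (illustrative only; the real data above come from the CTF recording):

```python
import math

# one second of a 23 Hz sine sampled at 2400 Hz
fs, freq = 2400, 23
signal = [math.sin(2 * math.pi * freq * n / fs) for n in range(fs)]

# indices where (signal > 0.5) flips from False to True: rising edges
above = [s > 0.5 for s in signal]
events = [i + 1 for i in range(len(above) - 1) if above[i + 1] and not above[i]]

print(len(events))  # one rising edge per cycle -> about 23 events
```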
The CTF software compensation works reasonably well:
raw.plot()
0.23/_downloads/b96d98f7c704193a3ede176aaf9433d2/85_brainstorm_phantom_ctf.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
But here we can get slightly better noise suppression, lower localization bias, and a better dipole goodness of fit with spatio-temporal (tSSS) Maxwell filtering:
raw.apply_gradient_compensation(0)  # must un-do software compensation first
mf_kwargs = dict(origin=(0., 0., 0.), st_duration=10.)
raw = mne.preprocessing.maxwell_filter(raw, **mf_kwargs)
raw.plot()
0.23/_downloads/b96d98f7c704193a3ede176aaf9433d2/85_brainstorm_phantom_ctf.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Our choice of tmin and tmax should capture exactly one cycle, so we can make the unusual choice of baselining using the entire epoch when creating our evoked data. We also then crop to a single time point (t=0) because this is a peak in our signal.
tmin = -0.5 / dip_freq
tmax = -tmin
epochs = mne.Epochs(raw, events, event_id=1, tmin=tmin, tmax=tmax,
                    baseline=(None, None))
evoked = epochs.average()
evoked.plot(time_unit='s')
evoked.crop(0., 0.)
0.23/_downloads/b96d98f7c704193a3ede176aaf9433d2/85_brainstorm_phantom_ctf.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Let's use a sphere head geometry model and let's see the coordinate alignment and the sphere location.
sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=0.08)
mne.viz.plot_alignment(raw.info, subject='sample', meg='helmet', bem=sphere,
                       dig=True, surfaces=['brain'])
del raw, epochs
0.23/_downloads/b96d98f7c704193a3ede176aaf9433d2/85_brainstorm_phantom_ctf.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
To do a dipole fit, let's use the covariance provided by the empty room recording.
raw_erm = read_raw_ctf(erm_path).apply_gradient_compensation(0)
raw_erm = mne.preprocessing.maxwell_filter(raw_erm, coord_frame='meg',
                                           **mf_kwargs)
cov = mne.compute_raw_covariance(raw_erm)
del raw_erm
dip, residual = fit_dipole(evoked, cov, sphere, verbose=True)
0.23/_downloads/b96d98f7c704193a3ede176aaf9433d2/85_brainstorm_phantom_ctf.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Compare the actual position with the estimated one.
expected_pos = np.array([18., 0., 49.])
diff = np.sqrt(np.sum((dip.pos[0] * 1000 - expected_pos) ** 2))
print('Actual pos:    %s mm' % np.array_str(expected_pos, precision=1))
print('Estimated pos: %s mm' % np.array_str(dip.pos[0] * 1000, precision=1))
print('Difference:    %0.1f mm' % diff)
print('Amplitude: %...
0.23/_downloads/b96d98f7c704193a3ede176aaf9433d2/85_brainstorm_phantom_ctf.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Review: COO format Take a look at the slides that we just started in the last class, which cover the basics of sparse matrix storage formats: link These are available as native formats in SciPy. However, last time we went ahead and implemented COO using pure native Python objects. The goals of doing so were two-fold: ...
coo_rows = [name2id[e] for e in edges['Source']]
coo_cols = [name2id[e] for e in edges['Target']]
coo_vals = [1.] * len(coo_rows)
assert len(coo_vals) == nnz  # Sanity check against the raw data

def coo_spmv(n, R, C, V, x):
    """
    Returns y = A*x, where A has 'n' rows and is stored in COO format by the arr...
20--sparse-plus-least-squares.ipynb
rvuduc/cse6040-ipynbs
bsd-3-clause
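A complete version of the COO multiply can be sketched as follows (a minimal stand-alone sketch; the class's `cse6040` helper module is replaced here by a plain list of zeros):

```python
def coo_spmv_sketch(n, R, C, V, x):
    """Return y = A*x for an n-row matrix A stored in COO form.

    R, C, V are parallel lists: A[R[k], C[k]] = V[k] for each k.
    """
    y = [0.0] * n
    for r, c, v in zip(R, C, V):
        y[r] += v * x[c]  # scatter each nonzero's contribution into y
    return y

# 2x2 example: A = [[1, 2], [0, 3]], x = [1, 1]
y = coo_spmv_sketch(2, [0, 0, 1], [0, 1, 1], [1.0, 2.0, 3.0], [1.0, 1.0])
print(y)  # [3.0, 3.0]
```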
What follows picks up from last time. CSR format The compressed sparse row (CSR) format is an alternative to COO. The basic idea is to compress COO a little, by recognizing that there is redundancy in the row indices. To see that redundancy, the example in the slides sorts COO format by row. Exercise. Now create a CSR...
# Aside: What does this do? Try running it to see.
z1 = ['q', 'v', 'c']
z2 = [3, 1, 2]
z3 = ['dog', 7, 'man']
Z = list(zip(z1, z2, z3))

print("==> Before:")
print(Z)

Z.sort(key=lambda z: z[1])

print("\n==> After:")
print(Z)

# Note: Alternative to using a lambda (anonymous) function:
def get_second_co...
20--sparse-plus-least-squares.ipynb
rvuduc/cse6040-ipynbs
bsd-3-clause
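One way to build the three CSR arrays from COO triples is to sort by row and then count how many nonzeros each row contributes (a sketch; the variable names are illustrative, not the exercise's required names):

```python
def coo_to_csr_sketch(n, R, C, V):
    """Convert COO triples (R, C, V) for an n-row matrix into CSR (ptr, ind, val)."""
    triples = sorted(zip(R, C, V))        # sort by row, then by column
    ind = [c for (_, c, _) in triples]
    val = [v for (_, _, v) in triples]
    # ptr[i] = index of the first nonzero of row i; ptr[n] = total nonzeros
    ptr = [0] * (n + 1)
    for r, _, _ in triples:
        ptr[r + 1] += 1                   # count nonzeros per row...
    for i in range(n):
        ptr[i + 1] += ptr[i]              # ...then take the running sum
    return ptr, ind, val

ptr, ind, val = coo_to_csr_sketch(2, [1, 0, 0], [1, 1, 0], [3.0, 2.0, 1.0])
print(ptr, ind, val)  # [0, 2, 3] [0, 1, 1] [1.0, 2.0, 3.0]
```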
Exercise. Now implement a CSR-based sparse matrix-vector multiply.
def csr_spmv(n, ptr, ind, val, x):
    assert n > 0
    assert type(ptr) == list
    assert type(ind) == list
    assert type(val) == list
    assert type(x) == list
    assert len(ptr) >= (n+1)     # Why?
    assert len(ind) >= ptr[n]    # Why?
    assert len(val) >= ptr[n]    # Why?

    y = cse6040.dense_vector...
20--sparse-plus-least-squares.ipynb
rvuduc/cse6040-ipynbs
bsd-3-clause
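For reference, the multiply body can be sketched like this (the exercise scaffold's `cse6040.dense_vector` helper is replaced here by a plain list of zeros):

```python
def csr_spmv_sketch(n, ptr, ind, val, x):
    """Return y = A*x for an n-row matrix A stored in CSR form."""
    y = [0.0] * n
    for i in range(n):
        # the nonzeros of row i live at positions ptr[i] .. ptr[i+1]-1
        for k in range(ptr[i], ptr[i + 1]):
            y[i] += val[k] * x[ind[k]]
    return y

# 2x2 example: A = [[1, 2], [0, 3]], x = [1, 1]
y = csr_spmv_sketch(2, [0, 2, 3], [0, 1, 1], [1.0, 2.0, 3.0], [1.0, 1.0])
print(y)  # [3.0, 3.0]
```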
Sparse matrix storage using SciPy (Numpy) Let's implement and time some of these routines below.
import scipy.sparse as sp
20--sparse-plus-least-squares.ipynb
rvuduc/cse6040-ipynbs
bsd-3-clause