A `Lightcurve` can also be sliced to generate a new object.

```python
lc_sliced = lc[100:200]
len(lc_sliced.counts)
```
## Methods

### Concatenation

Two light curves can be combined into a single object using the `join` method. Note that their time arrays must not overlap.

```python
lc_1 = lc
lc_2 = Lightcurve(np.arange(1000, 2000), np.random.rand(1000) * 1000, dt=1, skip_checks=True)

lc_long = lc_1.join(lc_2, skip_checks=True)  # or vice versa
print(len(lc_long))  # 2000
```
### Truncation

A light curve can also be truncated.

```python
lc_cut = lc_long.truncate(start=0, stop=1000)
len(lc_cut)
```
**Note**: By default, the `start` and `stop` parameters are interpreted as **indices** of the time array. They can also be given as time values, in the same units as the time array, by passing `method='time'`.

```python
lc_cut = lc_long.truncate(start=500, stop=1500, method='time')
lc_cut.time[0], lc_cut.time[-1]
```
### Re-binning

The time resolution (`dt`) can also be changed to a larger value.

**Note**: The new resolution need not be an integer multiple of the previous time resolution, but be aware that if it is not, the last bin will be cut off by the fraction left over by the integer division.

```python
lc_rebinned = lc_long.rebin(2)

print("Old time resolution = " + str(lc_long.dt))
print("Number of data points = " + str(lc_long.n))
print("New time resolution = " + str(lc_rebinned.dt))
print("Number of data points = " + str(lc_rebinned.n))
```

Output:

```
Old time resolution = 1
Number of data points = 2000
New time resolution = 2
Number of data points = 1000
```
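The effect of that leftover fraction can be sketched in plain Python. `rebin_counts` below is a hypothetical helper for illustration, not stingray's implementation; it assumes counts are summed into the coarser bins and any partial trailing bin is dropped:

```python
def rebin_counts(counts, dt_old, dt_new):
    """Sum counts into coarser bins; a leftover partial bin is dropped."""
    factor = int(dt_new // dt_old)          # how many old bins fit in a new bin
    n_full = len(counts) // factor          # number of complete new bins
    return [sum(counts[i * factor:(i + 1) * factor]) for i in range(n_full)]

print(rebin_counts([1, 2, 3, 4, 5], dt_old=1.0, dt_new=2.0))  # [3, 7] -- the last count is dropped
```

Note how the fifth count never appears in the output, exactly the "cut off" behaviour described above.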
### Sorting

A light curve can be sorted using the `sort` method, which sorts the `time` array and rearranges the `counts` array accordingly.

```python
new_lc_long = lc_long[:]  # copy into a new object
new_lc_long = new_lc_long.sort(reverse=True)
new_lc_long.time[0] == max(lc_long.time)
```
You can also sort by the `counts` array using the `sort_counts` method, which rearranges the `time` array accordingly:

```python
new_lc = lc_long[:]
new_lc = new_lc.sort_counts()
new_lc.counts[-1] == max(lc_long.counts)
```
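Sorting one array while carrying a second array along can be pictured like this; `sort_by_counts` is a plain-Python illustration of the idea, not stingray's actual code:

```python
def sort_by_counts(time, counts):
    """Sort by counts and reorder time the same way."""
    order = sorted(range(len(counts)), key=lambda i: counts[i])
    return [time[i] for i in order], [counts[i] for i in order]

sort_by_counts([0, 1, 2], [5, 1, 3])  # ([1, 2, 0], [1, 3, 5])
```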
### Plotting

A light curve can be plotted with the `plot` method.

```python
lc.plot()
```
A plot can also be customized using several keyword arguments.

```python
lc.plot(labels=('Time', "Counts"),  # (xlabel, ylabel)
        axis=(0, 1000, -50, 150),   # (xmin, xmax, ymin, ymax)
        title="Randomly generated light curve",
        marker='c:')                # c is the colour (cyan) and : is the line style (dotted)
```
The figure can also be saved to a file using keyword arguments in the `plot` method itself.

```python
lc.plot(marker='k', save=True, filename="lightcurve.png")
```
**Note**: See the `utils.savefig` function for more options when saving a file.

### Sample Data

Stingray also ships with sample `Lightcurve` data, which can be imported from within the library.

```python
from stingray import sampledata

lc = sampledata.sample_data()
lc.plot()
```
### Checking the Light Curve for Irregularities

You can perform checks on the behaviour of the light curve, similar to what is done when instantiating a `Lightcurve` object with `skip_checks=False`, by calling the relevant method:

```python
time = np.hstack([np.arange(0, 10, 0.1),
                  np.arange(10, 20, 0.3)])  # uneven time resolution
counts = np.random.poisson(100, size=len(time))
lc = Lightcurve(time, counts, dt=1.0, skip_checks=True)

lc.check_lightcurve()
```
Let's add some badly formatted GTIs:

```python
gti = [(10, 100), (20, 30, 40), ((1, 2), (3, 4, (5, 6)))]  # not a well-behaved GTI
lc = Lightcurve(time, counts, dt=0.1, skip_checks=True, gti=gti)

lc.check_lightcurve()
```
### MJDREF and Shifting Times

The `mjdref` keyword argument defines a reference time in Modified Julian Date. X-ray missions often count their internal time in seconds from a given reference date and time (so that numbers don't become arbitrarily large); the data is then in Mission Elapsed Time (MET), i.e. seconds since that reference.

```python
mjdref = 91254
time = np.arange(1000)
counts = np.random.poisson(100, size=len(time))

lc = Lightcurve(time, counts, dt=1, skip_checks=True, mjdref=mjdref)
print(lc.mjdref)  # 91254

mjdref_new = 91254 + 20
lc_new = lc.change_mjdref(mjdref_new)
print(lc_new.mjdref)  # 91274
```
This change only affects the *reference time*, not the values in the `time` attribute. However, it is also possible to shift the *entire light curve*, along with its GTIs:

```python
gti = [(0, 500), (600, 1000)]
lc.gti = gti

print("first three time bins: " + str(lc.time[:3]))
print("GTIs: " + str(lc.gti))

time_shift = 10.0
lc_shifted = lc.shift(time_shift)

print("Shifted first three time bins: " + str(lc_shifted.time[:3]))
print("Shifted GTIs: " + str(lc_shifted.gti))
```

Output:

```
Shifted first three time bins: [10. 11. 12.]
Shifted GTIs: [[  10.  510.]
 [ 610. 1010.]]
```
### Calculating a baseline

**TODO**: Need to document this method.

### Working with GTIs and Splitting Light Curves

It is possible to split light curves into multiple segments. In particular, it can be useful to split light curves with large gaps into individual contiguous segments without gaps.

```python
# make a time array with a big gap and a small gap
time = np.array([1, 2, 3, 10, 11, 12, 13, 14, 17, 18, 19, 20])
counts = np.random.poisson(100, size=len(time))

lc = Lightcurve(time, counts, skip_checks=True)
lc.gti
```
This light curve has uneven bins: a large gap between 3 and 10, and a smaller gap between 14 and 17. We can use the `split` method to split it into three contiguous segments:

```python
lc_split = lc.split(min_gap=2 * lc.dt)

for lc_tmp in lc_split:
    print(lc_tmp.time)
```

Output:

```
[1 2 3]
[10 11 12 13 14]
[17 18 19 20]
```
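The gap-based splitting logic can be illustrated without stingray. `split_on_gaps` below is a hypothetical sketch: it starts a new segment whenever consecutive time stamps are further apart than `min_gap`:

```python
def split_on_gaps(time, min_gap):
    """Split a sorted time array into segments separated by gaps larger than min_gap."""
    segments = []
    current = [time[0]]
    for prev, t in zip(time, time[1:]):
        if t - prev > min_gap:       # gap detected: close the current segment
            segments.append(current)
            current = []
        current.append(t)
    segments.append(current)
    return segments

time = [1, 2, 3, 10, 11, 12, 13, 14, 17, 18, 19, 20]
split_on_gaps(time, 2.0)  # [[1, 2, 3], [10, 11, 12, 13, 14], [17, 18, 19, 20]]
```

With `min_gap=2.0` this reproduces the three segments shown above.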
This has split the light curve into three contiguous segments. You can adjust the tolerance for the size of gap that's acceptable via the `min_gap` parameter. You can also require a minimum number of data points in the output light curves, which is helpful when you're only interested in contiguous segments of a certain length.

```python
lc_split = lc.split(min_gap=6.0)

for lc_tmp in lc_split:
    print(lc_tmp.time)
```

Output:

```
[1 2 3]
[10 11 12 13 14 17 18 19 20]
```
What if we only want the long segment?

```python
lc_split = lc.split(min_gap=6.0, min_points=4)

for lc_tmp in lc_split:
    print(lc_tmp.time)
```

Output:

```
[10 11 12 13 14 17 18 19 20]
```
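Filtering out short segments afterwards amounts to a simple comprehension; this is a sketch of the `min_points` idea, not stingray's code:

```python
# segments as produced by a gap-based split (values taken from the example above)
segments = [[1, 2, 3], [10, 11, 12, 13, 14, 17, 18, 19, 20]]

# keep only segments with at least min_points entries
min_points = 4
long_segments = [seg for seg in segments if len(seg) >= min_points]
print(long_segments)  # [[10, 11, 12, 13, 14, 17, 18, 19, 20]]
```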
A special case of splitting your light curve object is to split by GTIs. This can be helpful if you want to look at individual contiguous segments separately:

```python
time = np.arange(20)
counts = np.random.poisson(100, size=len(time))
gti = [(0, 8), (12, 20)]

lc = Lightcurve(time, counts, dt=1, skip_checks=True, gti=gti)

lc_split = lc.split_by_gti()
for lc_tmp in lc_split:
    print(lc_tmp.time)
```

Output:

```
[1 2 3 4 5 6 7]
[13 14 15 16 17 18 19]
```
Because we passed in GTIs that define the ranges 0-8 and 12-20 as good time intervals, the light curve is split into two individual light curves containing all data points falling within these ranges. You can also apply the GTIs *directly* to the original light curve, which will filter `time`, `counts`, and related attributes in place:

```python
time = np.arange(20)
counts = np.random.poisson(100, size=len(time))
gti = [(0, 8), (12, 20)]

lc = Lightcurve(time, counts, dt=1, skip_checks=True, gti=gti)
```
**Caution**: This is one of the few methods that change the original state of the object, rather than returning a new copy with the changes applied! Any events falling outside the range of the GTIs will be lost:

```python
# time array before applying GTIs:
lc.time

lc.apply_gtis()

# time array after applying GTIs:
lc.time
```
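The in-place filtering can be pictured as keeping only the bins whose times fall inside some interval. `filter_by_gti` below is a simplified sketch; stingray's real implementation additionally accounts for bin edges and `dt`, which is why its output drops the boundary bins:

```python
def filter_by_gti(time, counts, gti):
    """Keep only the (time, counts) pairs whose time falls inside a GTI."""
    keep = [i for i, t in enumerate(time)
            if any(start <= t < stop for start, stop in gti)]
    return [time[i] for i in keep], [counts[i] for i in keep]

time = list(range(20))
counts = [100] * 20
kept_times, kept_counts = filter_by_gti(time, counts, [(0, 8), (12, 20)])
print(kept_times)  # [0, 1, ..., 7, 12, 13, ..., 19] -- bins 8-11 are dropped
```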
As you can see, the time bins 8-12 have been dropped, since they fall outside the GTIs.

### Analyzing Light Curve Segments

There is some functionality in `stingray` aimed at making analysis of individual light curve segments (or chunks, as they're called throughout the code) efficient. One helpful function estimates the segment length required to satisfy constraints on the minimum number of counts and bins per segment:

```python
dt = 1.0
time = np.arange(0, 100, dt)
counts = np.random.poisson(100, size=len(time))

lc = Lightcurve(time, counts, dt=dt, skip_checks=True)

min_total_counts = 300
min_total_bins = 2
estimated_chunk_length = lc.estimate_chunk_length(min_total_counts, min_total_bins)

print("The estimated length of each segment in seconds to satisfy both conditions is: "
      + str(estimated_chunk_length))
```

Output:

```
The estimated length of each segment in seconds to satisfy both conditions is: 4.0
```
So we have time bins of 1-second resolution, each with an average of 100 counts/bin. We require at least 2 time bins in each segment and a minimum of 300 total counts per segment. In theory, you'd expect to need 3 time bins (so 3-second segments) to satisfy the conditions above. However, Poisson fluctuations mean some bins fall below the average, so a slightly longer segment length is estimated.

```python
start_times, stop_times, lc_sums = lc.analyze_lc_chunks(chunk_length=10.0, func=np.median)
lc_sums
```
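The chunked analysis can be sketched without stingray: walk through the light curve in fixed-length windows and apply a function to each window's counts. `analyze_chunks` is a hypothetical helper for illustration:

```python
def analyze_chunks(time, counts, chunk_length, func):
    """Apply func to the counts inside each fixed-length window of the light curve."""
    starts, stops, results = [], [], []
    start = time[0]
    while start < time[-1]:
        stop = start + chunk_length
        segment = [c for t, c in zip(time, counts) if start <= t < stop]
        starts.append(start)
        stops.append(stop)
        results.append(func(segment))
        start = stop
    return starts, stops, results

starts, stops, sums = analyze_chunks(list(range(10)), list(range(10)), 5.0, sum)
print(starts, sums)  # [0, 5.0] [10, 35]
```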
This splits the light curve into 10-second segments, and then finds the median number of counts/bin in each segment. For a flat light curve like the one we generated above this isn't super interesting, but the method can be helpful for more complex analyses. Instead of `np.median`, you can also pass in your own function:

```python
def myfunc(lc):
    """
    Not a very interesting function
    """
    return np.sum(lc.counts) * 10.0

start_times, stop_times, lc_result = lc.analyze_lc_chunks(chunk_length=10.0, func=myfunc)
lc_result
```
### Compatibility with `Lightkurve`

The [`Lightkurve` package](https://docs.lightkurve.org) provides a large amount of complementary functionality to stingray, in particular for data observed with Kepler and TESS, stars and exoplanets, and unevenly sampled data. We have implemented a conversion method that converts to and from `Lightkurve`'s light curve objects:

```python
import lightkurve

lc_new = lc.to_lightkurve()
type(lc_new)

lc_new.time
lc_new.flux
```
Let's do the roundtrip back to stingray:

```python
lc_back = lc_new.to_stingray()

lc_back.time
lc_back.counts
```
Similarly, we can transform `Lightcurve` objects to and from `astropy.TimeSeries` objects:

```python
dt = 1.0
time = np.arange(0, 100, dt)
counts = np.random.poisson(100, size=len(time))

lc = Lightcurve(time, counts, dt=dt, skip_checks=True)

# convert to an astropy TimeSeries object
ts = lc.to_astropy_timeseries()
type(ts)
ts[:10]
```
## Summary

- Evaluate the readability, complexity, and performance of a function.
- Write docstrings for functions following the NumPy/SciPy format.
- Write comments within a function to improve readability.
- Write and design functions with default arguments.
- Explain the importance of scoping and environments in Python as they relate to functions.

```python
# example loop
numbers = [2, 3, 5]
squared = list()

for number in numbers:
    squared.append(number ** 2)

squared
```

```python
# ex1: the loop as a function
def squares_a_list(numerical_list):   # function name and argument
    new_squared_list = list()         # initialize the output list
    for number in numerical_list:
        new_squared_list.append(number ** 2)
    return new_squared_list
```
This function gives us the ability to perform the same operation on multiple lists without rewriting any code; we simply call the function.

```python
larger_numbers = [5, 44, 55, 23, 11]
promoted_numbers = [73, 84, 95]
executive_numbers = [100, 121, 250, 103, 183, 222, 214]

squares_a_list(larger_numbers)
squares_a_list(promoted_numbers)
squares_a_list(executive_numbers)
```
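The same function can also be written more compactly with a list comprehension; the behaviour is identical:

```python
def squares_a_list(numerical_list):
    """Return a new list with every element squared."""
    return [number ** 2 for number in numerical_list]

squares_a_list([5, 44, 55, 23, 11])  # [25, 1936, 3025, 529, 121]
```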
It's important to know what exactly is going on inside and outside of a function. In our function `squares_a_list()` we created a variable named `new_squared_list`. We can print this variable inside the function and watch all the elements be appended to it as we loop through the input list. But what happens if we try to access this variable outside the function?

```python
def squares_a_list(numerical_list):
    new_squared_list = list()
    for number in numerical_list:
        new_squared_list.append(number ** 2)
        print(new_squared_list)
    return new_squared_list

squares_a_list(numbers)
new_squared_list  # NameError: the variable only exists inside the function
```
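We can demonstrate this scoping behaviour safely by catching the `NameError`; the function and variable names here are illustrative:

```python
def make_squares(nums):
    inner_list = [n ** 2 for n in nums]  # local to the function
    return inner_list

result = make_squares([2, 3])            # the *returned value* is accessible
try:
    inner_list                           # the *local name* is not
    reachable = True
except NameError:
    reachable = False

print(result, reachable)  # [4, 9] False
```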
Let's talk more about function arguments. Arguments play a paramount role when it comes to adhering to the DRY principle, as well as adding flexibility to your code. Recall the function `squares_a_list()`: we made it to DRY out our code and avoid repeating the same loop for different lists. What if we now want cubes, or any other power? We can add an `exponent` argument:

```python
def exponent_a_list(numerical_list, exponent):
    new_exponent_list = list()
    for number in numerical_list:
        new_exponent_list.append(number ** exponent)
    return new_exponent_list

numbers = [2, 3, 5]
exponent_a_list(numbers, 3)  # the 2nd argument lets us specify an exponent value
```
Functions can have any number of arguments and any number of optional arguments, but we must be careful with the order of the arguments. When we define our arguments in a function, all arguments with default values (aka optional arguments) need to be placed after required arguments. If any required argument follows an optional one, Python raises a `SyntaxError`:

```python
def exponent_a_list(exponent=2, numerical_list):  # SyntaxError: non-default argument follows default argument
    new_exponent_list = list()
    for number in numerical_list:
        new_exponent_list.append(number ** exponent)
    return new_exponent_list
```
Up to this point, we have been calling functions with multiple arguments in a single way: ordering the arguments as the function defined them. So, in `exponent_a_list()`, the argument `numerical_list` is defined first, followed by the argument `exponent`, and naturally we have been calling it that way:

```python
def exponent_a_list(numerical_list, exponent=2):
    new_exponent_list = list()
    for number in numerical_list:
        new_exponent_list.append(number ** exponent)
    return new_exponent_list

exponent_a_list([2, 3, 5], 5)
```
We showed earlier that we could also call the function by specifying `exponent=5`. Another way of calling it is to also name any of the arguments that do not have default values, in this case `numerical_list`. What happens if we switch up the order of the named arguments? Nothing: with keyword arguments, the order does not matter.

```python
exponent_a_list(numerical_list=[2, 3, 5], exponent=5)
exponent_a_list(exponent=5, numerical_list=[2, 3, 5])
```
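A quick check confirms that keyword-argument order does not change the result:

```python
def exponent_a_list(numerical_list, exponent=2):
    return [n ** exponent for n in numerical_list]

a = exponent_a_list(numerical_list=[2, 3, 5], exponent=5)
b = exponent_a_list(exponent=5, numerical_list=[2, 3, 5])
print(a == b)  # True: both are [32, 243, 3125]
```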
What if we switch up the order of the arguments *without* specifying any of the argument names? Our function doesn't recognize the inputs, and an error occurs because the two arguments are swapped: it thinks `5` is the list and `[2, 3, 5]` is the exponent. It's important to take care when ordering and naming arguments.

```python
exponent_a_list(5, [2, 3, 5])  # TypeError: it thinks 5 is the list
```
Functions can get very complicated, so it is not always obvious what they do just from looking at the name, arguments, or code. Therefore, we explain what the function does. The standard format for doing this is called a docstring: a literal string that comes directly after the function's `def` line.

```python
string1 = """This is a string"""
type(string1)  # str
```
Writing documentation for `squares_a_list()` using the NumPy style takes the following format. We can identify the brief description of the function at the top, the parameters it takes in and their types, what to expect as an output, and examples of how to run it:

```python
def squares_a_list(numerical_list):
    """
    Squares every element in a list.

    Parameters
    ----------
    numerical_list : list
        The list from which to calculate squared values

    Returns
    -------
    list
        A new list containing the squared value of each of the elements from the input list

    Examples
    --------
    >>> squares_a_list([1, 2, 3])
    [1, 4, 9]
    """
    new_squared_list = list()
    for number in numerical_list:
        new_squared_list.append(number ** 2)
    return new_squared_list
```
Using `exponent_a_list()`, a function from the previous section, as an example, we include an optional note in the parameter definition and an explanation of the default value in the parameter description:

```python
def exponent_a_list(numerical_list, exponent=2):
    """
    Creates a new list containing specified exponential values of the input list.

    Parameters
    ----------
    numerical_list : list
        The list from which to calculate exponential values from
    exponent : int or float, optional
        The exponent to raise each element to (the default is 2, which squares each element)

    Returns
    -------
    list
        A new list containing the exponential value of each element from the input list
    """
    new_exponent_list = list()
    for number in numerical_list:
        new_exponent_list.append(number ** exponent)
    return new_exponent_list
```
Remember how we talked about side effects back at the beginning of this module? Although we recommend avoiding side effects in your functions, there may be occasions where they're unavoidable or required. In these cases, we must make it clear in the documentation so that the user of the function knows that their object will be modified:

```python
def function_name(param1, param2):
    """The first line is a short description of the function.

    If your function includes side effects, explain it clearly here.

    Parameters
    ----------
    param1 : datatype
        A description of param1.
    .
    .
    .
    Etc.
    """
```
Ok great! Now that we've written and explained our functions in a standardized format, we can read the documentation easily in our own file, but what if the function is located in a different file? How can we learn what it does when reading our code? We learned in the first assignment that we can read more about built-in functions using `?`:

```python
# For example, to view the docstring for len():
?len
```
We all know that mistakes are a regular part of life. In coding, every line of code is at risk for potential errors, so naturally we want a way of defending our functions against potential issues. Defensive programming is code written in such a way that, if errors do occur, they are handled in a graceful, fast and informative manner. For example, what happens if we pass a string instead of a list?

```python
def exponent_a_list(numerical_list, exponent=2):
    new_exponent_list = list()
    for number in numerical_list:
        new_exponent_list.append(number ** exponent)
    return new_exponent_list

numerical_string = "123"
exponent_a_list(numerical_string)  # TypeError: unsupported operand type(s) for ** or pow(): 'str' and 'int'
```

We can guard against this by checking the input type ourselves:

```python
def exponent_a_list(numerical_list, exponent=2):
    if type(numerical_list) is not list:
        raise Exception("You are not using a list for the numerical_list input.")
    new_exponent_list = list()
    for number in numerical_list:
        new_exponent_list.append(number ** exponent)
    return new_exponent_list
```
Exceptions disrupt the regular execution of our code. When we raise an exception, we are forcing our own error with our own message. With the guarded version of the function above, the same call now produces our informative error:

```python
numerical_string = "123"
exponent_a_list(numerical_string)  # Exception: You are not using a list for the numerical_list input.
```
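Catching the raised exception shows the message reaching the caller. This sketch uses `isinstance` and `TypeError`, slight variations on the `type(...) is not list` check above:

```python
def exponent_a_list(numerical_list, exponent=2):
    if not isinstance(numerical_list, list):
        raise TypeError("You are not using a list for the numerical_list input.")
    return [n ** exponent for n in numerical_list]

try:
    exponent_a_list("123")
except TypeError as error:
    message = str(error)

print(message)  # You are not using a list for the numerical_list input.
```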
Let's take a closer look. The first line of new code is an `if` statement: the condition that triggers our check. It translates to "if `numerical_list` is not of type `list`...". The second line does the complaining: we tell Python to raise an exception (throw an error) with our message. Now we get an informative error instead of a confusing one:

```python
if type(numerical_list) is not list:
    raise Exception("You are not using a list for the numerical_list input.")
```

We can be more specific by raising a `TypeError` instead of a generic `Exception`:

```python
def exponent_a_list(numerical_list, exponent=2):
    if type(numerical_list) is not list:
        raise TypeError("You are not using a list for the numerical_list input.")
    new_exponent_list = list()
    for number in numerical_list:
        new_exponent_list.append(number ** exponent)
    return new_exponent_list
```
Now that we can write exceptions, it's important to document them. It's a good idea to include details of any raised exceptions in our function's docstring. Under the NumPy docstring format, we explain raised exceptions in a "Raises" section after "Returns": first the exception type, then an explanation of what causes it:

```python
def exponent_a_list(numerical_list, exponent=2):
    """
    Creates a new list containing specified exponential values of the input list.

    Parameters
    ----------
    numerical_list : list
        The list from which to calculate exponential values from
    exponent : int or float, optional
        The exponent to raise each element to (the default is 2)

    Returns
    -------
    list
        A new list containing the exponential value of each element from the input list

    Raises
    ------
    TypeError
        If numerical_list is not of type list.
    """
    if type(numerical_list) is not list:
        raise TypeError("You are not using a list for the numerical_list input.")
    new_exponent_list = list()
    for number in numerical_list:
        new_exponent_list.append(number ** exponent)
    return new_exponent_list
```
In the last section, we learned about raising exceptions, which in many cases helps the function user identify whether they are using it correctly. But some questions remain: how can we be sure that the code we wrote is doing what we want it to? Does our code work 100% of the time? These questions can be answered with `assert` statements:

```python
assert 1 == 2, "1 is not equal to 2."  # AssertionError: 1 is not equal to 2.
```
![assert flow](https://prog-learn.mds.ubc.ca/module6/assert2.png)

Let's take a look at an example where the Boolean is `True`. Since the assert statement evaluates to `True`, Python continues to run and the next line of code executes. When an assert fails (the Boolean evaluates to `False`), the next line does not execute:

```python
assert 1 == 1, "1 is not equal to 1."
print('Will this line execute?')  # yes: the assertion passed
```

```python
assert 1 == 2, "1 is not equal to 2."
print('Will this line execute?')  # never reached: the AssertionError is raised first
```

Not all assert statements need to have a message. We can re-write the statement from before without one; this time the error doesn't contain the particular message:

```python
assert 1 == 2
```
Where do assert statements come in handy? Up to this point, we have been creating functions and only testing them after we have written them. Some programmers use a different approach: writing tests before the actual function. This is called Test-Driven Development. It may seem a little counter-intuitive, but it forces you to think through your function's specifications first:

```python
def exponent_a_list(numerical_list, exponent=2):
    new_exponent_list = list()
    for number in numerical_list:
        new_exponent_list.append(number ** exponent)
    return new_exponent_list

assert exponent_a_list([1, 2, 4, 7], 2) == [1, 4, 16, 49], "incorrect output for exponent = 2"
assert exponent_a_list([1, 2, 3], 3) == [1, 8, 27], "incorrect output for exponent = 3"
```
Just because all our tests pass, this does not mean our program is necessarily correct. It's common that tests pass while the code still contains errors. Let's take a look at `bad_function()`: it is very similar to `exponent_a_list` except that it separately computes the first entry before doing the rest in the loop:

```python
def bad_function(numerical_list, exponent=2):
    new_exponent_list = [numerical_list[0] ** exponent]  # seed list with first element
    for number in numerical_list[1:]:
        new_exponent_list.append(number ** exponent)
    return new_exponent_list

assert bad_function([1, 2, 4, 7], 2) == [1, 4, 16, 49], "incorrect output for exponent = 2"
```
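One kind of error the test above does not catch is an edge case: `bad_function` fails on an empty list, while the looped version handles it fine. A test for the empty list would have exposed the bug:

```python
def bad_function(numerical_list, exponent=2):
    new_exponent_list = [numerical_list[0] ** exponent]  # IndexError on an empty list!
    for number in numerical_list[1:]:
        new_exponent_list.append(number ** exponent)
    return new_exponent_list

try:
    bad_function([])
    empty_list_ok = True
except IndexError:
    empty_list_ok = False  # the bug the earlier test missed

print(empty_list_ok)  # False
```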
Often, we will be making functions that work on data. For example, perhaps we want to write a function called `column_stats` that returns some summary statistics of a dataframe column in the form of a dictionary. The function here is something we might have envisioned (note that if we're using test-driven development, this function will just be a stub at first):

```python
def column_stats(df, column):
    stats_dict = {'max': df[column].max(),
                  'min': df[column].min(),
                  'mean': round(df[column].mean()),
                  'range': df[column].max() - df[column].min()}
    return stats_dict
```
The values we choose in our columns should be simple enough to easily calculate the expected output of our function. Just like how we made unit tests using calculations we know to be true, we do the same using a simple dataset we call helper data. The dataframe must have small dimensions to keep the calculations simple:

```python
import pandas as pd

data = {'name': ['Cherry', 'Oak', 'Willow', 'Fir', 'Oak'],
        'height': [15, 20, 10, 5, 10],
        'diameter': [2, 5, 3, 10, 5],
        'age': [0, 0, 0, 0, 0],
        'flowering': [True, False, True, False, False]}

forest = pd.DataFrame.from_dict(data)
forest

assert column_stats(forest, 'height') == {'max': 20, 'min': 5, 'mean': 12, 'range': 15}
```
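The same idea works without pandas. Here is a pandas-free sketch (`column_stats_plain` is a hypothetical name) using a plain dict of lists as the helper data, so the expected values are easy to verify by hand:

```python
def column_stats_plain(table, column):
    """Summary statistics for one column of a dict-of-lists table."""
    col = table[column]
    return {'max': max(col), 'min': min(col),
            'mean': round(sum(col) / len(col)),
            'range': max(col) - min(col)}

forest = {'height': [15, 20, 10, 5, 10]}
column_stats_plain(forest, 'height')  # {'max': 20, 'min': 5, 'mean': 12, 'range': 15}
```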
We use a systematic approach to design our functions, following a general set of 5 steps.

1. **Write the function stub**: a function that does nothing but accepts all input parameters and returns the correct datatype. This means we are writing the skeleton of the function:

```python
def exponent_a_list(numerical_list, exponent=2):
    return list()
```
2. **Write tests to satisfy the design specifications.** This is where our assert statements come in: we write tests that we want our function to pass. In our `exponent_a_list()` example, we expect the function to take in a list and an optional argument named `exponent` and return a list with the exponential value of each element:

```python
def exponent_a_list(numerical_list, exponent=2):
    return list()

assert type(exponent_a_list([1, 2, 4], 2)) == list, "output type not a list"
assert exponent_a_list([1, 2, 4, 7], 2) == [1, 4, 16, 49], "incorrect output for exponent = 2"
assert exponent_a_list([1, 2, 3], 3) == [1, 8, 27], "incorrect output for exponent = 3"
```
3. **Outline the program with pseudo-code.** Pseudo-code is an informal but high-level description of the code and operations that we wish to implement. In this step, we write the steps we anticipate needing as comments within the function:

```python
def exponent_a_list(numerical_list, exponent=2):
    # create a new empty list
    # loop through all the elements in numerical_list
    # for each element calculate element ** exponent
    # append it to the new list
    return list()

assert type(exponent_a_list([1, 2, 4], 2)) == list, "output type not a list"
assert exponent_a_list([1, 2, 4, 7], 2) == [1, 4, 16, 49], "incorrect output for exponent = 2"
assert exponent_a_list([1, 2, 3], 3) == [1, 8, 27], "incorrect output for exponent = 3"
```
4. **Write code and test frequently.** Here is where we fill in our function. As we work on the code, more and more of the tests we wrote will pass, until finally all assert statements no longer produce any error messages:

```python
def exponent_a_list(numerical_list, exponent=2):
    new_exponent_list = list()
    for number in numerical_list:
        new_exponent_list.append(number ** exponent)
    return new_exponent_list

assert type(exponent_a_list([1, 2, 4], 2)) == list, "output type not a list"
assert exponent_a_list([1, 2, 4, 7], 2) == [1, 4, 16, 49], "incorrect output for exponent = 2"
assert exponent_a_list([1, 2, 3], 3) == [1, 8, 27], "incorrect output for exponent = 3"
```
5. **Write documentation.** Finally, we finish writing our function with a docstring:

```python
def exponent_a_list(numerical_list, exponent=2):
    """Creates a new list containing specified exponential values of the input list.

    Parameters
    ----------
    numerical_list : list
        The list from which to calculate exponential values from
    exponent : int or float, optional
        The exponent value (the default is 2)

    Returns
    -------
    list
        A new list containing the exponential value of each element from the input list
    """
    new_exponent_list = list()
    for number in numerical_list:
        new_exponent_list.append(number ** exponent)
    return new_exponent_list
```
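Once written, the docstring is what `help()` and the `?` operator display; it is stored on the function's `__doc__` attribute:

```python
def exponent_a_list(numerical_list, exponent=2):
    """Creates a new list containing specified exponential values of the input list."""
    return [n ** exponent for n in numerical_list]

print(exponent_a_list.__doc__)  # the docstring text
```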
This has been quite a full module! We've learned how to make functions, how to handle errors gracefully, how to test our functions, and how to write the necessary documentation to keep our code comprehensible. These skills all contribute to writing effective code. One thing we have not discussed yet is the actual code within the functions themselves:

```python
def squares_a_list(numerical_list):
    new_squared_list = list()
    for number in numerical_list:
        new_squared_list.append(number ** 2)
    return new_squared_list

def exponent_a_list(numerical_list, exponent):
    new_exponent_list = list()
    for number in numerical_list:
        new_exponent_list.append(number ** exponent)
    return new_exponent_list
```
Although it may seem convenient when a function acts as a one-stop shop that does everything you want in a single call, this also limits your ability to reuse the code that lies within it. Ideally, functions should serve a single purpose. For example, let’s say we have a function that reads in a csv, finds the mean of each g... | import altair as alt
def load_filter_and_average(file, grouping_column, ploting_column):
df = pd.read_csv(file)
source = df.groupby(grouping_column).mean().reset_index()
chart = alt.Chart(source, width = 500, height = 300).mark_bar().encode(
x=alt.X(grouping_column),
... | _____no_output_____ | MIT | summary worksheet/M6 Function Fundamentals and Best Practices.ipynb | Lavendulaa/programming-in-python-for-data-science |
In this case, you want to simplify the function. Having a function that only calculates the mean values of the groups in the specified column is much more usable. A preferred function would look something like this, where the input is a dataframe we have already read in, and the output is the dataframe of mean values for... | def grouped_means(df, grouping_column):
grouped_mean = df.groupby(grouping_column).mean().reset_index()
return grouped_mean
cereal_mfr = grouped_means(cereal, 'mfr')
cereal_mfr
If we wanted, we could then make a second function that handles the plotting part of the previous function.
def plot_mean(df, grou... | _____no_output_____ | MIT | summary worksheet/M6 Function Fundamentals and Best Practices.ipynb | Lavendulaa/programming-in-python-for-data-science |
3. Return a single object. For the most part, we have only lightly touched on the fact that functions can return multiple objects, and with good reason. Although functions are capable of returning multiple objects, that doesn’t mean it’s the best option. For instance, what if we converted our function load_filter... | def load_filter_and_average(file, grouping_column, ploting_column):
df = pd.read_csv(file)
source = df.groupby(grouping_column).mean().reset_index()
chart = alt.Chart(source, width = 500, height = 300).mark_bar().encode(
x=alt.X(grouping_column),
y=alt.Y(ploting_c... | _____no_output_____ | MIT | summary worksheet/M6 Function Fundamentals and Best Practices.ipynb | Lavendulaa/programming-in-python-for-data-science |
Since our function returns a tuple, we can obtain the plot by selecting the first element of the output. This can be quite confusing. We would recommend separating the code into two functions and having each one return a single object. It’s best to think of programming functions in the same way as mathematical function... | another_bad_idea[0] | _____no_output_____ | MIT | summary worksheet/M6 Function Fundamentals and Best Practices.ipynb | Lavendulaa/programming-in-python-for-data-science |
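As a tiny, self-contained illustration (with made-up helper names, not from the module) of why single-object returns read more like mathematical functions than a tuple return does:

```python
def summarize(numbers):
    # One function returning two things: callers must remember the order
    return min(numbers), max(numbers)

def minimum(numbers):
    # Single-purpose alternative: returns exactly one object
    return min(numbers)

def maximum(numbers):
    return max(numbers)

lowest, highest = summarize([3, 1, 4])  # positional unpacking, easy to swap by mistake
lo = minimum([3, 1, 4])                 # unambiguous at the call site
hi = maximum([3, 1, 4])
```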
It’s generally bad form for a function to rely on objects that were created outside of it. Take our grouped_means() function. What if, instead of including df as an input argument, we just used the cereal dataframe that we loaded earlier? The number one problem with doing this is that now our function only works on the cereal data - it’s not ... | def grouped_means(df, grouping_column):
grouped_mean = df.groupby(grouping_column).mean().reset_index()
return grouped_mean
cereal = pd.read_csv('cereal.csv')
def bad_grouped_means(grouping_column):
grouped_mean = cereal.groupby(grouping_column).mean().reset_index()
return grouped_mean | _____no_output_____ | MIT | summary worksheet/M6 Function Fundamentals and Best Practices.ipynb | Lavendulaa/programming-in-python-for-data-science |
Introduction to the Interstellar Medium, Jonathan Williams. Figure 7.4: molecular-rich spectrum. This is a small portion (centered around CO 3-2) of the published spectrum in https://www.aanda.org/articles/aa/abs/2016/11/aa28648-16/aa28648-16.html. The ASCII file was provided by Jes Jorgensen. | import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import FormatStrFormatter
%matplotlib inline
nu, flux = np.loadtxt('PILS_spectrum.txt', unpack=True)
fig = plt.figure(figsize=(6,4))
ax = fig.add_subplot(1,1,1)
ax.set_xlabel(r"$\nu$ [GHz]", fontsize=16)
ax.set_ylabel(r"Flux [Jy]", fontsize=16)
... | _____no_output_____ | CC0-1.0 | molecules/PILS_spectrum.ipynb | CambridgeUniversityPress/IntroductionInterstellarMedium |
K-Nearest Neighbors Algorithm and Its Application. Introduction: As we have learned, Naive Bayes and decision trees are eager learning algorithms, which construct a classification model before receiving any new data to query. In contrast, a lazy learning algorithm stores all training data until a query is made. K-nearest ... | import numpy as np
import pandas as pd
from nltk import word_tokenize
from sklearn.neighbors import KNeighborsClassifier
from collections import Counter
import matplotlib
matplotlib.use('svg')
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot') | _____no_output_____ | MIT | 2016/tutorial_final/175/tutorial.ipynb | zeromtmu/practicaldatascience.github.io |
Feature Scoring. We are provided with an extra list of key words for each category. We can use those words to score the likelihood of a sentence's category. For example, sentences containing words like "we", "introduce" and "design" are more likely to describe the research goal of the current paper, which should belong to c... | # Read in key words for each category
def build_word_set(filename):
with open(filename) as f:
lines = f.readlines()
return {word.strip() for word in lines if word}
aim_set = build_word_set("./word_lists/aim.txt")
base_set = build_word_set("./word_lists/base.txt")
own_set = build_word_set("./word_li... | _____no_output_____ | MIT | 2016/tutorial_final/175/tutorial.ipynb | zeromtmu/practicaldatascience.github.io |
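The scoring idea can be sketched as counting, for each category's word set, how many tokens of a sentence fall in that set (the word sets below are toy stand-ins for illustration, not the real lists loaded above):

```python
def score_sentence(tokens, word_sets):
    # Map each category to the number of tokens found in its key-word set
    return {category: sum(token in words for token in tokens)
            for category, words in word_sets.items()}

toy_sets = {
    "AIM": {"we", "introduce", "design"},
    "BASE": {"previous", "prior"},
}
scores = score_sentence("we introduce a new method".split(), toy_sets)
```

The category with the highest count is the most likely label for the sentence.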
Train data on the classifier. Return the feature matrix and label vector. If `train` is `False`, the output label vector will be empty. | def load_data(filename, train=True):
"""
Training data format like:
'AIMX In this paper we derive the equations for Loop Corrected Belief Propagation on a continuous variable Gaussian model'
    For each line of raw data, calculate its feature score and put it into the feature matrix (and ...
Let's try it on one training file and see the accuracy of our classifier! | f, l = load_data('./arxiv_annotate10_7_3.txt')
sklearn_knn = KNeighborsClassifier(n_neighbors=3)
sklearn_knn.fit(f, l.ravel())
correct = 0.0
for i in range(len(f)):
if sklearn_knn.predict(f[i].reshape(1,-1)) == l[i]:
correct += 1
print correct/len(f) | 0.716417910448
| MIT | 2016/tutorial_final/175/tutorial.ipynb | zeromtmu/practicaldatascience.github.io |
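Note that the accuracy above is measured on the very data the classifier was fitted on, which is optimistic for nearest-neighbour methods. A self-contained sketch of a fairer holdout evaluation, using synthetic 2-D blobs rather than the sentence features:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic Gaussian blobs, one per class
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(4.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Shuffle, then hold out the last 30 points for testing
idx = rng.permutation(len(X))
X, y = X[idx], y[idx]
X_tr, y_tr = X[:70], y[:70]
X_te, y_te = X[70:], y[70:]

def knn_predict(x, X_train, y_train, k=3):
    # Vote among the k nearest training points (Euclidean distance)
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return np.bincount(y_train[nearest]).argmax()

preds = np.array([knn_predict(x, X_tr, y_tr) for x in X_te])
accuracy = (preds == y_te).mean()
```

Held-out accuracy is the number to trust when comparing values of k.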
Performance Analysis. In this part, we are going to evaluate the performance of our algorithm against the sklearn library. There are several perspectives from which to analyze an algorithm: time complexity, space complexity, error rate, etc. The basic kNN algorithm stores all samples, so the space complexity depends on th... | f, l = load_data('./data.txt')
Test the Sklearn Library. On one hand, we test the sklearn library with different values of k. | x_points = [x for x in range(1,10)] + [x for x in range(10, 50, 3)]
y_points = []
for k in x_points:
correct = 0.0
sklearn_knn = KNeighborsClassifier(n_neighbors=k)
sklearn_knn.fit(f, l.ravel())
for i in range(len(f)):
if sklearn_knn.predict(f[i].reshape(1,-1)) == l[i]:
correct += 1... | _____no_output_____ | MIT | 2016/tutorial_final/175/tutorial.ipynb | zeromtmu/practicaldatascience.github.io |
Test Our k-NN. On the other hand, we implement our own simple k-NN algorithm and test its accuracy. | class KNN:
def __init__(self, k):
self.k = k
def _euclidean_distance(self, data1, data2):
diff = np.power(data1 - data2, 2)
diff_sum = np.sum(diff, axis=0)
return np.sqrt(diff_sum)
def majority_vote(self, neighbors):
clusters = [neighbour[1][0] for neighbour in neig... | _____no_output_____ | MIT | 2016/tutorial_final/175/tutorial.ipynb | zeromtmu/practicaldatascience.github.io |
Conclusion. From the figures above, we find that both methods have a peak accuracy of about 77% near k = 6. Congratulations! Our simple k-NN algorithm works very well on this data set compared with the sklearn library. When k is smaller than 6, the curve fluctuates due to sensitivity to noise. At the same time, we can infer tha... | def majority_vote2(neighbors):
"""
neighbors[0] like: ([[1,2,3,4], ['MISC']], dist)
list of (label, weight), weight = # of same label * 1.0 / (avg dist / # of same )
of them
"""
AIM_label_cnt = 0
AIM_label_total_dist = 0
OWNX_label_cnt = 0
OWNX_label_total_dist = 0
BASIS_label_c... | _____no_output_____ | MIT | 2016/tutorial_final/175/tutorial.ipynb | zeromtmu/practicaldatascience.github.io |
LIME Text Explainer via XAI. This tutorial demonstrates how to generate explanations using LIME's text explainer implemented by the Contextual AI library. Much of the tutorial overlaps with what is covered in the [LIME tabular tutorial](lime_tabular_explainer.ipynb). To recap, the main steps for generating explanations ... | # Some auxiliary imports for the tutorial
import pprint
import sys
import random
import numpy as np
from pprint import pprint
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import TfidfVectorizer
# Set... | _____no_output_____ | Apache-2.0 | tutorials/compiler/20newsgroup/sample.ipynb | SebastianWolf-SAP/contextual-ai |
Step 2: Load dataset and train a model. In this tutorial, we rely on the 20newsgroups text dataset, which can be loaded via sklearn's dataset utility. Documentation on the dataset itself can be found [here](https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html). To keep things simple, we will extract data for 3 ... | # Train on a subset of categories
categories = [
'rec.sport.baseball',
'soc.religion.christian',
'sci.med'
]
raw_train = datasets.fetch_20newsgroups(subset='train', categories=categories)
print(list(raw_train.keys()))
print(raw_train.target_names)
print(raw_train.target[:10])
raw_test = datasets.fetch_20n... | ['data', 'filenames', 'target_names', 'target', 'DESCR']
['rec.sport.baseball', 'sci.med', 'soc.religion.christian']
[1 0 2 2 0 2 0 0 0 1]
'Subsetting training sample to 200 to speed up.'
'Classifier score: 0.9689336691855583'
('Classifier predict func <bound method _BaseNB.predict_proba of '
'MultinomialNB(alpha=0.1,... | Apache-2.0 | tutorials/compiler/20newsgroup/sample.ipynb | SebastianWolf-SAP/contextual-ai |
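As a minimal, self-contained illustration of the same vectorise-then-classify pattern (toy documents standing in for the 20newsgroups data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = ["the pitcher threw a fastball", "the batter hit a home run",
        "the doctor prescribed medicine", "the patient saw a physician"]
labels = [0, 0, 1, 1]  # 0 = baseball, 1 = medicine

vec = TfidfVectorizer()
X = vec.fit_transform(docs)      # sparse TF-IDF feature matrix
nb = MultinomialNB(alpha=0.1)
nb.fit(X, labels)

# New text must go through the SAME fitted vectorizer before predicting
pred = nb.predict(vec.transform(["the pitcher hit a home run"]))[0]
```

This is exactly why the explainer below needs a `predict_fn` that wraps the vectorizer and classifier together.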
Step 3: Instantiate the explainerHere, we will use the LIME Text Explainer. | explainer = ExplainerFactory.get_explainer(domain=xai.DOMAIN.TEXT)
clf.predict_proba | _____no_output_____ | Apache-2.0 | tutorials/compiler/20newsgroup/sample.ipynb | SebastianWolf-SAP/contextual-ai |
Step 4: Build the explainer. This initializes the underlying explainer object. We provide the `explain_instance` method below with the raw text - LIME's text explainer algorithm will conduct its own preprocessing in order to generate interpretable representations of the data. Hence we must define a custom `predict_fn` w... | def predict_fn(instance):
vec = vectorizer.transform(instance)
return clf.predict_proba(vec)
explainer.build_explainer(predict_fn)
clf = clf
feature_names = []
clf_fn = predict_fn
target_names_list = []
import os
import json
import sys
sys.path.append('../../../')
from xai.compiler.base import Configuration, ... | Interpret 100/200 samples
Interpret 200/200 samples
Warning: figure will exceed the page bottom, adding a new page.
| Apache-2.0 | tutorials/compiler/20newsgroup/sample.ipynb | SebastianWolf-SAP/contextual-ai |
Results | pprint("report generated : %s/20newsgroup-clsssification-model-interpreter-report.pdf" % os.getcwd())
('report generated : '
'/Users/i062308/Development/Explainable_AI/tutorials/compiler/20newsgroup/20newsgroup-clsssification-model-interpreter-report.pdf') | ('report generated : '
'/Users/i062308/Development/Explainable_AI/tutorials/compiler/20newsgroup/20newsgroup-clsssification-model-interpreter-report.pdf')
| Apache-2.0 | tutorials/compiler/20newsgroup/sample.ipynb | SebastianWolf-SAP/contextual-ai |
Fit DCE data | import sys
import matplotlib.pyplot as plt
import numpy as np
sys.path.append('..')
import dce_fit, relaxivity, signal_models, water_ex_models, aifs, pk_models
%load_ext autoreload
%autoreload 2 | _____no_output_____ | Apache-2.0 | src/original/MJT_UoEdinburghUK/demo/demo_fit_dce.ipynb | JonathanArvidsson/DCE-DSC-MRI_CodeCollection |
--- First get the signal data | # Input time and signal values (subject 4)
t = np.array([19.810000,59.430000,99.050000,138.670000,178.290000,217.910000,257.530000,297.150000,336.770000,376.390000,416.010000,455.630000,495.250000,534.870000,574.490000,614.110000,653.730000,693.350000,732.970000,772.590000,812.210000,851.830000,891.450000,931.070000,97... | _____no_output_____ | Apache-2.0 | src/original/MJT_UoEdinburghUK/demo/demo_fit_dce.ipynb | JonathanArvidsson/DCE-DSC-MRI_CodeCollection |
Convert enhancement to concentration | # First define some relevant parameters
R10_tissue, R10_vif = 1/1.3651, 1/1.7206
k_vif, k_tissue = 0.9946, 1.2037 # flip angle correction factor
hct = 0.46
# Specify relaxivity model, i.e. concentration --> relaxation rate relationship
c_to_r_model = relaxivity.c_to_r_linear(r1=5.0, r2=7.1)
# Specify signal model, i.... | _____no_output_____ | Apache-2.0 | src/original/MJT_UoEdinburghUK/demo/demo_fit_dce.ipynb | JonathanArvidsson/DCE-DSC-MRI_CodeCollection |
Fit the concentration data using a pharmacokinetic model | # First create an AIF object
aif = aifs.patient_specific(t, c_p_vif)
# Now create a pharmacokinetic model object
pk_model = pk_models.patlak(t, aif)
# Set some initial parameters and fit the concentration data
weights = np.concatenate([np.zeros(7), np.ones(25)]) # (exclude first few points from fit)
pk_pars_0 = [{'vp... | Wall time: 30.9 ms
Fitted parameters: {'vp': 0.008097170500283217, 'ps': 0.00019992401629213917}
Expected: vp = 0.0081, ps = 2.00e-4
| Apache-2.0 | src/original/MJT_UoEdinburghUK/demo/demo_fit_dce.ipynb | JonathanArvidsson/DCE-DSC-MRI_CodeCollection |
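The Patlak model is linear in its two parameters: C_t(t) = v_p · C_p(t) + PS · ∫₀ᵗ C_p(τ) dτ. A self-contained numpy sketch of that structure on synthetic data (a generic re-implementation for illustration, not the `pk_models.patlak` internals):

```python
import numpy as np

t = np.arange(0.0, 600.0, 10.0)        # time points (s)
C_p = 5.0 * np.exp(-t / 120.0)         # synthetic plasma concentration
vp_true, ps_true = 0.01, 2e-4          # "ground truth" parameters

# Cumulative trapezoidal integral of C_p
int_Cp = np.concatenate(
    [[0.0], np.cumsum(0.5 * (C_p[1:] + C_p[:-1]) * np.diff(t))])

C_t = vp_true * C_p + ps_true * int_Cp  # Patlak forward model

# The model is linear in (vp, PS), so it can be fitted by least squares
A = np.column_stack([C_p, int_Cp])
vp_fit, ps_fit = np.linalg.lstsq(A, C_t, rcond=None)[0]
```

With noise-free data the least-squares fit recovers the true parameters essentially exactly.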
Alternative approach: fit the tissue signal directly. To do this, we also need to create a water_ex_model object, which determines the relationship between R1 in each tissue compartment and the exponential R1 components. We start by assuming the fast water exchange limit (as implicitly assumed above when estimating tis... | # Create a water exchange model object.
water_ex_model = water_ex_models.fxl()
# Now fit the enhancement curve
%time pk_pars_enh, enh_fit = dce_fit.enh_to_pkp(enh_tissue, hct, k_tissue, R10_tissue, R10_vif, pk_model, c_to_r_model, water_ex_model, signal_model, pk_pars_0, weights)
plt.plot(t, enh_tissue, '.', label='t... | Wall time: 159 ms
Fitted parameters: {'vp': 0.008081262743467564, 'ps': 0.00019935657535213955}
Expected: vp = 0.0081, ps = 2.00e-4
| Apache-2.0 | src/original/MJT_UoEdinburghUK/demo/demo_fit_dce.ipynb | JonathanArvidsson/DCE-DSC-MRI_CodeCollection |
Repeat the fit assuming slow water exchange...This time, we assume slow water exchange across the vessel wall. The result will be very different compared with fitting the concentration curve. | # Create a water exchange model object.
water_ex_model = water_ex_models.ntexl() # slow exchange across vessel wall, fast exchange across cell wall
# Now fit the enhancement curve
%time pk_pars_enh_ntexl, enh_fit_ntexl = dce_fit.enh_to_pkp(enh_tissue, hct, k_tissue, R10_tissue, R10_vif, pk_model, c_to_r_model, water_e... | Wall time: 166 ms
Fitted parameters: {'vp': 0.011282424728448814, 'ps': 0.00011163566464040331}
Expected: vp = 0.0113, ps = 1.12e-4
| Apache-2.0 | src/original/MJT_UoEdinburghUK/demo/demo_fit_dce.ipynb | JonathanArvidsson/DCE-DSC-MRI_CodeCollection |
Implement Canny edge detection | # Try Canny using "wide" and "tight" thresholds
wide = cv2.Canny(gray, 30, 100)
tight = cv2.Canny(gray, 200, 240)
# Display the images
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))
ax1.set_title('wide')
ax1.imshow(wide, cmap='gray')
ax2.set_title('tight')
ax2.imshow(tight, cmap='gray') | _____no_output_____ | MIT | cvnd/CVND_Exercises/1_2_Convolutional_Filters_Edge_Detection/5. Canny Edge Detection.ipynb | sijoonlee/deep_learning |
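Choosing the two thresholds by hand takes trial and error; a common heuristic (not part of this tutorial) derives them from the image's median intensity. A numpy-only sketch of the threshold computation — the resulting pair would then be passed to `cv2.Canny`:

```python
import numpy as np

def auto_canny_thresholds(gray_image, sigma=0.33):
    # Pick lower/upper hysteresis thresholds around the median intensity
    v = np.median(gray_image)
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    return lower, upper

# Synthetic grayscale image whose median intensity is 100
toy_gray = np.full((10, 10), 100, dtype=np.uint8)
lo, hi = auto_canny_thresholds(toy_gray)
# edges = cv2.Canny(gray, lo, hi)  # how the values would be used
```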
TODO: Try to find the edges of this flower. Set a small enough threshold to isolate the boundary of the flower. | # Read in the image
image = cv2.imread('images/sunflower.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
# Convert the image to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
## TODO: Define lower and upper thresholds for hysteresis
# right now the th... | _____no_output_____ | MIT | cvnd/CVND_Exercises/1_2_Convolutional_Filters_Edge_Detection/5. Canny Edge Detection.ipynb | sijoonlee/deep_learning |
LAB - Sarcasm Detector. * Analyze the input data and determine the (max) sequence length. * Train a BERT Sequence Classifier to detect sarcasm in the given dataset. * Save the best model in './bert_sarcasm_detection_state_dict.pth'. * Predict the sarcasm for some headlines. Download data and import packages | !wget https://github.com/ravi-ilango/acm-dec-2020-nlp/blob/main/lab4/sarcasm_data.zip?raw=true -O sarcasm_data.zip
!unzip sarcasm_data.zip
!pip install transformers
# imports
import torch
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
from sklearn.model_selection import train_... | _____no_output_____ | MIT | part2/lab4/LAB_Sarcasm_Detector.ipynb | yasheshshroff/ODSC2021_NLP_PyTorch |
Load data and explore | def read_json(json_file):
    json_data = []
    with open(json_file) as file:  # context manager closes the file
        for line in file:
            json_data.append(json.loads(line))
    return json_data
json_data = []
for json_file in ['./sarcasm_data/Sarcasm_Headlines_Dataset.json', './sarcasm_data/Sarcasm_Headlines_Dataset_v2.jso... | _____no_output_____ | MIT | part2/lab4/LAB_Sarcasm_Detector.ipynb | yasheshshroff/ODSC2021_NLP_PyTorch |
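Once loaded, the records can be turned into a DataFrame for exploration — class balance and maximum headline length in tokens, for instance. A sketch with toy records (the field names `headline` / `is_sarcastic` follow the public sarcasm-headlines dataset; adjust if your files differ):

```python
import pandas as pd

# Toy records with the same shape as the dataset's JSON lines
toy_records = [
    {"headline": "scientists discover water is wet", "is_sarcastic": 1},
    {"headline": "local council approves new budget", "is_sarcastic": 0},
    {"headline": "man bravely refuses to read instructions", "is_sarcastic": 1},
]

df = pd.DataFrame(toy_records)
class_counts = df["is_sarcastic"].value_counts()
# Longest headline in whitespace tokens -- informs the BERT sequence length
max_len = df["headline"].str.split().str.len().max()
```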
Working with CMIP6 data in the JASMIN Object Store. This Notebook describes how to set up a virtual environment and then work with CMIP6 data in the JASMIN Object Store (stored in Zarr format). Start by creating a virtual environment and getting the packages installed... | # Import the required packages
import virtualenv
import pip
import os
# Define and create the base directory install virtual environments
venvs_dir = os.path.join(os.path.expanduser("~"), "nb-venvs")
if not os.path.isdir(venvs_dir):
os.makedirs(venvs_dir)
# Define the venv directory
venv_dir = os.path.join(v... | WARNING: pip is being invoked by an old script wrapper. This will fail in a future version of pip.
Please see https://github.com/pypa/pip/issues/5599 for advice on fixing the underlying issue.
To avoid this problem you can invoke Python with '-m pip' instead of running pip directly.
| BSD-2-Clause | notebooks/data-notebooks/cmip6/cmip6-zarr-jasmin.ipynb | RuthPetrie/ceda-notebooks |
Accessing CMIP6 Data from the JASMIN (Zarr) Object Store. **Pre-requisites:** 1. Required packages: `['xarray', 'zarr', 'fsspec', 'intake', 'intake_esm', 'aiohttp']`. 2. Data access: must be able to see the JASMIN Object Store for CMIP6 (currently inside the JASMIN firewall). Step 1: Import required packages | import xarray as xr
import intake
import intake_esm
import fsspec | _____no_output_____ | BSD-2-Clause | notebooks/data-notebooks/cmip6/cmip6-zarr-jasmin.ipynb | RuthPetrie/ceda-notebooks |
Step 2: read the CMIP6 Intake (ESM) catalog from GitHub. We define a collection ("col") that can be searched/filtered for required datasets. | col_url = "https://raw.githubusercontent.com/cedadev/" \
"cmip6-object-store/master/catalogs/ceda-zarr-cmip6.json"
col = intake.open_esm_datastore(col_url) | _____no_output_____ | BSD-2-Clause | notebooks/data-notebooks/cmip6/cmip6-zarr-jasmin.ipynb | RuthPetrie/ceda-notebooks |
How many datasets are currently stored? | f'There are {len(col.df)} datasets' | _____no_output_____ | BSD-2-Clause | notebooks/data-notebooks/cmip6/cmip6-zarr-jasmin.ipynb | RuthPetrie/ceda-notebooks |
Step 3: Filter the catalog for historical and future data. In this example, we want to compare the surface temperature ("tas") from the UKESM1-0-LL model, for a historical and future ("ssp585-bgc") scenario. | cat = col.search(source_id="UKESM1-0-LL",
experiment_id=["historical", "ssp585-bgc"],
member_id=["r4i1p1f2", "r12i1p1f2"],
table_id="Amon",
variable_id="tas")
# Extract the single record subsets for historical and future experiments
hist_cat = cat.search(experiment_id='historical')
ssp_cat = cat.searc... | _____no_output_____ | BSD-2-Clause | notebooks/data-notebooks/cmip6/cmip6-zarr-jasmin.ipynb | RuthPetrie/ceda-notebooks |
Step 4: Convert to xarray datasets Define a quick function to convert a catalog to an xarray `Dataset`. | def cat_to_ds(cat):
zarr_path = cat.df['zarr_path'][0]
fsmap = fsspec.get_mapper(zarr_path)
return xr.open_zarr(fsmap, consolidated=True, use_cftime=True) | _____no_output_____ | BSD-2-Clause | notebooks/data-notebooks/cmip6/cmip6-zarr-jasmin.ipynb | RuthPetrie/ceda-notebooks |
Extract the `tas` (surface air temperture) variable for the historical and future experiments. | hist_tas = cat_to_ds(hist_cat)['tas']
ssp_tas = cat_to_ds(ssp_cat)['tas'] | _____no_output_____ | BSD-2-Clause | notebooks/data-notebooks/cmip6/cmip6-zarr-jasmin.ipynb | RuthPetrie/ceda-notebooks |
Step 5: Subtract the historical from the future average. Generate time-series means of historical and future data. Subtract the historical from the future scenario and plot the difference. | # Calculate time means
diff = ssp_tas.mean(axis=0) - hist_tas.mean(axis=0)
# Plot a map of the time-series means
diff.plot() | _____no_output_____ | BSD-2-Clause | notebooks/data-notebooks/cmip6/cmip6-zarr-jasmin.ipynb | RuthPetrie/ceda-notebooks |
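One caveat: a plain `.mean()` over a regular lat/lon grid over-weights high latitudes, so a global-mean difference should be cos(latitude)-weighted. A numpy sketch of the weighting on a toy field (with xarray, a weighted reduction can achieve the same on the real data):

```python
import numpy as np

lats = np.linspace(-89.375, 89.375, 144)   # toy regular latitude grid
field = np.full((144, 192), 2.0)           # toy "warming" field: 2 K everywhere

weights = np.cos(np.deg2rad(lats))         # relative area of each latitude band
weights /= weights.sum()                   # normalise to sum to 1

zonal_mean = field.mean(axis=1)            # average over longitude first
global_mean = (zonal_mean * weights).sum() # then weight by latitude
```

For a uniform 2 K field the weighted and unweighted means agree; for a field with polar amplification they would not.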
Building a Small Model from Scratch. But before we continue, let's start defining the model. Step 1 will be to import tensorflow. | import tensorflow as tf
We then add convolutional layers as in the previous example, and flatten the final result to feed into the densely connected layers. Finally we add the densely connected layers. Note that because we are facing a two-class classification problem, i.e. a *binary classification problem*, we will end our network with a [*s... | model = tf.keras.models.Sequential([
# Note the input shape is the desired size of the image 300x300 with 3 bytes color
# This is the first convolution
tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
# The second convolution
tf... | _____no_output_____ | Apache-2.0 | basic codes/training_deep_neuralnet.ipynb | MachineLearningWithHuman/ComputerVision |
around 40+ min | _____no_output_____ | Apache-2.0 | basic codes/training_deep_neuralnet.ipynb | MachineLearningWithHuman/ComputerVision | |
Data Science Unit 1 Sprint Challenge 2: Data Wrangling and Storytelling. Taming data from its raw form into informative insights and stories. Data Wrangling. In this Sprint Challenge you will first "wrangle" some data from [Gapminder](https://www.gapminder.org/about-gapminder/), a Swedish non-profit co-founded by Hans Ro... | import pandas as pd
cell_phones = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--cell_phones_total--by--geo--time.csv')
population = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints... | _____no_output_____ | MIT | tdu1s3.ipynb | cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling |
Part 1. Join data. First, join the `cell_phones` and `population` dataframes (with an inner join on `geo` and `time`). The resulting dataframe's shape should be: (8590, 4) | df = pd.merge(cell_phones, population, on=['geo', 'time'], how='inner')
df.shape | _____no_output_____ | MIT | tdu1s3.ipynb | cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling |
Then, select the `geo` and `country` columns from the `geo_country_codes` dataframe, and join with your population and cell phone data. The resulting dataframe's shape should be: (8590, 5) | df = pd.merge(geo_country_codes[['geo', 'country']], df)
df.shape | _____no_output_____ | MIT | tdu1s3.ipynb | cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling |
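When joining, pandas' `validate` option is a cheap way to confirm the join keys behave as expected — a self-contained sketch with toy frames (not the Gapminder data):

```python
import pandas as pd

left = pd.DataFrame({"geo": ["usa", "usa", "fra"],
                     "time": [2000, 2001, 2000],
                     "cell_phones_total": [100, 120, 60]})
right = pd.DataFrame({"geo": ["usa", "usa", "deu"],
                      "time": [2000, 2001, 2000],
                      "population_total": [280, 282, 82]})

# validate raises MergeError if a (geo, time) key is duplicated on either side
merged = pd.merge(left, right, on=["geo", "time"], how="inner",
                  validate="one_to_one")
```

Here the `fra` and `deu` rows have no partner, so the inner join keeps only the two `usa` rows.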