Now let's try to classify the remaining 10 images.
errors = 0
for i in range(1, 11):
    k = clf.predict(digits.data[-i].reshape(1, -1))
    print("The classifier predicted {}, the actual digit is {}. They {}match."
          .format(k[0], digits.target[-i], "" if k[0] == digits.target[-i] else "do not "))
    if k[0] != digits.target[-i]: ...
labs/demos-1.ipynb
1x0r/pspis
mit
Let's take a look at the "problem" digits:
fig = plt.figure(figsize=(12, 4))
frame = 1
for i in range(1, 11):
    k = clf.predict(digits.data[-i].reshape(1, -1))
    if k[0] != digits.target[-i]:
        digit = 255 - digits.data[-i, :].reshape(8, 8)
        ax = fig.add_subplot(1, errors, frame)
        ax.imshow(digit, cmap='gray', interpolat...
Define Task
There are two types of tasks available in MatchZoo: mz.tasks.Ranking and mz.tasks.Classification. We will use a ranking task for this demo.
task = mz.tasks.Ranking()
print(task)
tutorials/quick_start.ipynb
faneshion/MatchZoo
apache-2.0
Prepare Data
train_raw = mz.datasets.toy.load_data(stage='train', task=task)
test_raw = mz.datasets.toy.load_data(stage='test', task=task)
type(train_raw)
DataPack is a MatchZoo native data structure that most MatchZoo data handling processes build upon. A DataPack consists of three pandas.DataFrames:
train_raw.left.head()
train_raw.right.head()
train_raw.relation.head()
It is also possible to convert a DataPack into a single pandas.DataFrame that holds all information.
train_raw.frame().head()
However, using such a pandas.DataFrame consumes much more memory if there are many duplicates in the texts, which is exactly why we use DataPack. For more details about data handling, consult matchzoo/tutorials/data_handling.ipynb.
Preprocessing
MatchZoo preprocessors are used to convert a raw DataPack into a...
preprocessor = mz.preprocessors.BasicPreprocessor()
There are two steps to use a preprocessor: first fit, then transform. fit only changes the preprocessor's inner state, not the input DataPack.
preprocessor.fit(train_raw)
fit will gather useful information into its context, which is used later in transform or to set a model's hyper-parameters.
preprocessor.context
Once fit, the preprocessor has enough information to transform. transform changes neither the preprocessor's inner state nor the input DataPack; it returns a transformed DataPack.
train_processed = preprocessor.transform(train_raw)
test_processed = preprocessor.transform(test_raw)
train_processed.left.head()
As we can see, text_left is already in the sequence form that neural networks love. Just to make sure we have the correct sequence:
vocab_unit = preprocessor.context['vocab_unit']
print('Orig Text:', train_processed.left.loc['Q1']['text_left'])
sequence = train_processed.left.loc['Q1']['text_left']
print('Transformed Indices:', sequence)
print('Transformed Indices Meaning:', '_'.join([vocab_unit.state['index_term'][i] for i in sequence]))
For more details about preprocessing, consult matchzoo/tutorials/data_handling.ipynb.
Build Model
MatchZoo provides many built-in text matching models.
mz.models.list_available()
Let's use mz.models.DenseBaseline for our demo.
model = mz.models.DenseBaseline()
The model is initialized with a hyper-parameter table, in which values are partially filled. To view parameters and their values, use print.
print(model.params)
to_frame gives you more information in addition to just names and values.
model.params.to_frame()[['Name', 'Description', 'Value']]
To set a hyper-parameter:
model.params['task'] = task
model.params['mlp_num_units'] = 3
print(model.params)
Notice that we are still missing input_shapes, and that information is stored in the preprocessor.
print(preprocessor.context['input_shapes'])
We may use update to load a preprocessor's context into a model's hyper-parameter table.
model.params.update(preprocessor.context)
Now we have a completed hyper-parameter table.
model.params.completed()
With all parameters filled in, we can now build and compile the model.
model.build()
model.compile()
MatchZoo models are wrapped over Keras models, and the backend property of a model gives you the actual Keras model built.
model.backend.summary()
For more details about models, consult matchzoo/tutorials/models.ipynb.
Train, Evaluate, Predict
A DataPack can unpack itself into data that can be directly used to train a MatchZoo model.
x, y = train_processed.unpack()
test_x, test_y = test_processed.unpack()
model.fit(x, y, batch_size=32, epochs=5)
An alternative way to train a model is to use a DataGenerator. This is useful for delaying expensive preprocessing steps or doing real-time data augmentation. For models that need dynamic batch-wise information, using a DataGenerator is required. For more details about DataGenerator, consult matchzoo/tutorials/data_h...
data_generator = mz.DataGenerator(train_processed, batch_size=32)
model.fit_generator(data_generator, epochs=5, use_multiprocessing=True, workers=4)
model.evaluate(test_x, test_y)
model.predict(test_x)
A Shortcut to Preprocessing and Model Building
Since data preprocessing and model building are laborious, and the special setups some models need make this even worse, MatchZoo provides prepare, a unified interface that handles the interaction among data, model, and preprocessor automatically. More specifically, prepare does the...
for model_class in mz.models.list_available():
    print(model_class)
    model, preprocessor, data_generator_builder, embedding_matrix = mz.auto.prepare(
        task=task,
        model_class=model_class,
        data_pack=train_raw,
    )
    train_processed = preprocessor.transform(train_raw, verbose=0)
    test_pr...
Save and Load the Model
model.save('my-model')
loaded_model = mz.load_model('my-model')
Note that we gave it a parameter A_Planck. Most clik files have extra nuisance parameters, which you can list (for a given file) with:
clik.clik.get_extra_parameter_names()
cosmoslik_plugins/likelihoods/planck/clik.ipynb
marius311/cosmoslik
gpl-3.0
You should attach parameters with these names to the clik object as we have done above (usually in a script these will be sampled parameters). With the clik object created, we can call it to compute the likelihood. The function expects a parameter cmb of the kind returned by CAMB or CLASS.
cmb = models.classy(lmax=3000)()
cmb
Here's the negative log likelihood:
clik(cmb)
Putting it all together, a simple script which runs this likelihood would look like:
class planck(SlikPlugin):
    def __init__(self, **kwargs):
        super().__init__()
        # load Planck clik file and set up nuisance parameters
        self.clik = likelihoods.planck.clik(
            clik_file="plik_lite_v18_TT.clik/",
            # sample over nuisance parameter
            ...
Ready-to-go wrappers for specific clik files
The previous example was easy because there was a single nuisance parameter, A_Planck. Other clik files have many more nuisance parameters, which must all be sampled over and in some cases have the right priors applied (which you can read about here), otherwise you will no...
class planck(SlikPlugin):
    def __init__(self):
        super().__init__()
        # load Planck clik file and set up nuisance parameters
        self.clik = likelihoods.planck.planck_2015_highl_TT(
            clik_file="plik_dx11dr2_HM_v18_TT.clik/",
        )
        # set up cosmological params...
Common calibration parameters
Although the Planck likelihood is broken up into different pieces, the pieces sometimes share the same calibration parameters. To apply this correctly in your script, just define one single sampled calibration parameter, then in your __call__, set it across all the different likelihoods.
class planck(SlikPlugin):
    def __init__(self):
        super().__init__()
        # set up low and high L likelihood
        self.highl = likelihoods.planck.planck_2015_highl_TT(
            clik_file="plik_dx11dr2_HM_v18_TT.clik/",
        )
        self.lowl = likelihoods.planck.planck_2015_lowl_TT(
            ...
Word counting
Write a function tokenize that takes a string of English text and returns a list of words. It should also remove stop words, which are common short words that are often removed before natural language processing. Your function should have the following logic: Split the string into lines using splitlines. Spl...
file = open("mobydick_chapter1.txt")
mobydick = file.read()
mobydick = mobydick.splitlines()
mobydick = " ".join(mobydick)
punctuation = ["-", ",", "."]
mobydick = list(mobydick)
mobydick_f = list(filter(lambda c: c not in punctuation, mobydick))
mobydick_f = "".join(mobydick_f)
stop_words = ["of", "or", "in"]
mo...
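For reference, the logic the exercise spells out can also be wrapped into a single self-contained function. This is a sketch, not the submitted solution; the default punctuation set and the handling of `stop_words` as either a space-separated string or a list are assumptions:

```python
def tokenize(s, stop_words=None, punctuation="-,."):
    """Split text into lowercase words, stripping punctuation and stop words.

    A sketch following the assignment's description: join lines, lowercase,
    split on whitespace, drop punctuation characters, drop stop words.
    """
    if stop_words is None:
        stop_words = []
    elif isinstance(stop_words, str):
        stop_words = stop_words.split()
    words = " ".join(s.splitlines()).lower().split()
    cleaned = ["".join(c for c in w if c not in punctuation) for w in words]
    return [w for w in cleaned if w and w not in stop_words]
```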
assignments/assignment07/AlgorithmsEx01.ipynb
SJSlavin/phys202-2015-work
mit
Write a function count_words that takes a list of words and returns a dictionary where the keys in the dictionary are the unique words in the list and the values are the word counts.
def count_words(data):
    count = {}
    for w in range(0, len(data)):
        if data[w] in count:
            count[data[w]] += 1
        else:
            count[data[w]] = 1
    # this does not sort correctly, and from what I can tell, dictionaries can't be sorted anyway
    return(count)

assert count...
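The same counts can be obtained with the standard library's collections.Counter, shown here as a cross-check rather than as the intended exercise solution:

```python
from collections import Counter

def count_words(data):
    """Map each unique word in `data` to its number of occurrences."""
    return dict(Counter(data))
```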
Write a function sort_word_counts that returns a list of sorted word counts: Each element of the list should be a (word, count) tuple. The list should be sorted by the word counts, with the highest counts coming first. To perform this sort, look at using the sorted function with a custom key and reverse argument.
def sort_word_counts(wc):
    """Return a list of 2-tuples of (word, count), sorted by count descending."""
    wordlist = []
    n = 0
    for w in wc:
        wordlist.append((w, wc[w]))
    # http://stackoverflow.com/questions/3121979/how-to-sort-list-tuple-of-lists-tuples
    wordlist_s = sorted(wordlist, key=l...
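A compact version of the sort described above: dict.items() already yields the (word, count) tuples, so sorted with a key and reverse=True does the rest. A sketch for comparison, not the original submission:

```python
def sort_word_counts(wc):
    """Return (word, count) tuples sorted by count, highest first."""
    return sorted(wc.items(), key=lambda pair: pair[1], reverse=True)
```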
Perform a word count analysis on Chapter 1 of Moby Dick, whose text can be found in the file mobydick_chapter1.txt: Read the file into a string. Tokenize with stop words of 'the of and a to in is it that as'. Perform a word count, then sort and save the result in a variable named swc.
# YOUR CODE HERE
file = open("mobydick_chapter1.txt")
mobydick = file.read()
mobydick_t = tokenize(mobydick, stop_words="the of and a to in is it that as")
mobydick_wc = count_words(mobydick_t)
swc = sort_word_counts(mobydick_wc)

assert swc[0] == ('i', 43)
assert len(swc) == 848
Create a "Cleveland Style" dotplot of the counts of the top 50 words using Matplotlib. If you don't know what a dotplot is, you will have to do some research...
# YOUR CODE HERE
words, freq = zip(*swc)
plt.bar(np.arange(len(words)), freq, linestyle="dotted")
plt.title("Word Frequency")
plt.xlabel("Word")
plt.ylabel("Frequency")
# plt.xticks(words)  # couldn't figure out how to format the plot correctly

# YOUR CODE HERE
words, freq = zip(*swc)
plt.scatter(freq, np.arange(0, ...
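For the "do some research" part: a Cleveland-style dotplot puts the categories on the y-axis and draws one dot per category along a shared value axis, usually with thin guide lines. A minimal sketch (the function name, figure sizing, and guide-line styling are choices of this sketch, not part of the assignment):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen so no display is needed
import matplotlib.pyplot as plt

def cleveland_dotplot(pairs, filename="dotplot.png"):
    """Draw a Cleveland-style dotplot of (label, value) pairs and save it."""
    labels, values = zip(*pairs)
    y = range(len(pairs))
    fig, ax = plt.subplots(figsize=(6, 0.25 * len(pairs) + 1))
    ax.hlines(y, 0, values, color="lightgray")  # thin guide lines
    ax.plot(values, y, "o")                     # the dots
    ax.set_yticks(list(y))
    ax.set_yticklabels(labels)
    ax.invert_yaxis()                           # highest row on top
    ax.set_xlabel("Count")
    fig.savefig(filename, bbox_inches="tight")
    plt.close(fig)
```

Called with the top 50 entries of swc, e.g. `cleveland_dotplot(swc[:50])`, this produces the plot the exercise asks for.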
$$\text{???} = (MC)^2$$
Background
What if we know the relative likelihood, but want the probability distribution? $$\mathbb{P}(X=x) = \frac{f(x)}{\int_{-\infty}^\infty f(x)dx}$$ But what if $\int f(x)dx$ is hard, or you can't sample from $f$ directly? This is the problem we will be trying to solve.
First approach
If s...
def metropolis_hastings(f, q, initial_state, num_iters):
    """
    Generate a Markov Chain Monte Carlo sample using the Metropolis-Hastings algorithm.

    Parameters
    ----------
    f : function
        the [relative] likelihood function for the distribution we would like to approximate
    q : fun...
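Since the cell above is truncated, here is a compact, runnable sketch of the same idea. It assumes a symmetric Gaussian random-walk proposal (so the Hastings correction cancels) and works with the log of the relative likelihood for numerical stability; names and defaults are this sketch's choices, not the notebook's:

```python
import math
import random

def metropolis_hastings(log_f, initial_state, num_iters, step=1.0, seed=0):
    """Random-walk Metropolis sampler for an unnormalized density.

    log_f : log of the relative likelihood f (normalizing constant not needed)
    step  : std dev of the Gaussian proposal q(y|x) = N(x, step^2)
    """
    rng = random.Random(seed)
    x = initial_state
    samples = []
    for _ in range(num_iters):
        y = rng.gauss(x, step)                      # propose y ~ q(y|x)
        # accept with probability min(1, f(y)/f(x)); symmetric q cancels
        if math.log(rng.random()) < log_f(y) - log_f(x):
            x = y
        samples.append(x)                           # keep current state either way
    return samples
```

With `log_f = lambda z: -z * z / 2` (an unnormalized standard normal), the chain's sample mean settles near 0.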
assets/notebooks/2017/04/.ipynb_checkpoints/18_mcmc-checkpoint.ipynb
amniskin/amniskin.github.io
mit
Estimation: $$q(y|x) \sim N(0,b^2)$$
plt.figure(2)
plt.subplot(3,1,1)
plt.title("STD = 0.1")
plt.plot(std01)
plt.subplot(3,1,2)
plt.title("STD = 1")
plt.plot(std1)
plt.subplot(3,1,3)
plt.title("STD = 10")
plt.plot(std10)
plt.tight_layout()
plb.savefig("tmp/18-MCMC-Cauchy-Estimation_TS.png")
plb.savefig("../../../pics/2017/04/18-MCMC-Cauchy-Estimation_TS.p...
Another Aspect
How is this happening??? A property called detailed balance, which means $$\pi_i p_{ij} = \pi_j p_{ji}$$ or in the continuous case: $$f(x)P_{xy} = f(y)P_{yx}$$ But we don't need to go over that... Unless you wanna... Proof? Let $f$ be the desired distribution (in our example, it was the Cauchy Distributi...
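A one-line sketch of why detailed balance matters: summing the identity over $i$ and using the row-stochasticity of $P$ shows that $\pi$ is stationary for the chain.

```latex
\sum_i \pi_i p_{ij} \;=\; \sum_i \pi_j p_{ji} \;=\; \pi_j \sum_i p_{ji} \;=\; \pi_j
```

So once the chain's distribution is $\pi$, one more transition leaves it at $\pi$.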
def psu_mcmc(X, q, numIters=10000):
    theta, lambd, k, b1, b2 = 1, 1, 20, 1, 1
    thetas, lambds, ks, b1s, b2s = [], [], [], [], []
    n = len(X)
    def f_k(theta, lambd, k, b1, b2):
        if 0 <= k and k <= n:
            return theta**sum(X[:k])*lambd**sum(X[k:])*np.exp(-k*theta-(n-k)*lambd)
        elif k < 0...
plt.plot(psu_data)
plt.title("PSU Data")
plb.savefig("tmp/psu_ts.png")
plt.show()
This could be the purpose of a function: to print the lines of a birthday song for Emily. Now, we define a function to do this. Here is how you define a function: write def; the name you would like to call your function; a set of parentheses containing the parameter(s) of your function; a colon; a docstring describin...
def happy_birthday_to_emily():  # Function definition
    """Print a birthday song to Emily."""
    print("Happy Birthday to you!")
    print("Happy Birthday to you!")
    print("Happy Birthday, dear Emily.")
    print("Happy Birthday to you!")
Chapters/Chapter 11 - Functions and scope.ipynb
evanmiltenburg/python-for-text-analysis
apache-2.0
If we execute the code above, we don't get any output. That's because we only told Python: "Here's a function to do this, please remember it." If we actually want Python to execute everything inside this function, we have to call it:
1.3 How to call a function
It is important to distinguish between a function definitio...
# function definition:
def happy_birthday_to_emily():  # Function definition
    """Print a birthday song to Emily."""
    print("Happy Birthday to you!")
    print("Happy Birthday to you!")
    print("Happy Birthday, dear Emily.")
    print("Happy Birthday to you!")

# function call:
print('Function cal...
1.3.2 Calling a function from within another function
We can also define functions that call other functions, which is very helpful if we want to split our task into smaller, more manageable subtasks:
def new_line():
    """Print a new line."""
    print()

def two_new_lines():
    """Print two new lines."""
    new_line()
    new_line()

print("Printing a single line...")
new_line()
print("Printing two lines...")
two_new_lines()
print("Printed two lines")
You can do the same tricks that we learned to apply to the built-in functions, like asking for help or for a function's type:
help(happy_birthday_to_emily)
type(happy_birthday_to_emily)
The help we get on a function will become more interesting once we learn about function inputs and outputs ;-)
1.4 Working with function input
1.4.1 Parameters and arguments
We use parameters and arguments to make a function execute a task depending on the input we provide. For instance, we can change the function abov...
# function definition using the parameter `name`
def happy_birthday(name):
    """Print a birthday song with the "name" of the person inserted."""
    print("Happy Birthday to you!")
    print("Happy Birthday to you!")
    print(f"Happy Birthday, dear {name}.")
    print("Happy Birthday to you!")

# fun...
We can also store the name in a variable:
my_name = "James"
happy_birthday(my_name)
If we forget to specify the name, we get an error:
happy_birthday()
Functions can have multiple parameters. We can for example multiply two numbers in a function (using the two parameters x and y) and then call the function by giving it two arguments:
def multiply(x, y):
    """Multiply two numeric values."""
    result = x * y
    print(result)

multiply(2020, 5278238)
multiply(2, 3)
1.4.2 Positional vs keyword parameters and arguments
The function definition tells Python which parameters are positional and which are keyword. As you might remember, positional means that you have to give an argument for that parameter; keyword means that you can give an argument value, but this is not necessary bec...
def multiply(x, y, third_number=1):  # x and y are positional parameters, third_number is a keyword parameter
    """Multiply two or three numbers and print the result."""
    result = x * y * third_number
    print(result)

multiply(2, 3)                 # We only specify values for the positional parameters
multiply(2, 3, third_number=4)  # We...
If we do not specify a value for a positional parameter, the function call will fail (with a very helpful error message):
multiply(3)
1.5 Output: the return statement
Functions can have a return statement. The return statement returns a value back to the caller and always ends the execution of the function. This also allows us to use the result of a function outside of that function by assigning it to a variable:
def multiply(x, y):
    """Multiply two numbers and return the result."""
    multiplied = x * y
    return multiplied

# here we assign the returned value to the variable `result`
result = multiply(2, 5)
print(result)
We can also print the result directly (without assigning it to a variable), which gives us the same effect as using the print statements we used before:
print(multiply(30,20))
If we assign the result to a variable, but do not use the return statement, the function cannot return it. Instead, it returns None (as you can try out below). This is important to realize: even functions without a return statement do return a value, albeit a rather boring one. This value is called None (it’s a built-i...
def multiply_no_return(x, y):
    """Multiply two numbers without returning the result."""
    result = x * y

is_this_a_result = multiply_no_return(2, 3)
print(is_this_a_result)
Returning multiple values
Just as a function can take multiple inputs, it can also return multiple values as output. We call such a collection of values a tuple (does this term sound familiar ;-)?).
def calculate(x, y):
    """Calculate product and sum of two numbers."""
    product = x * y
    summed = x + y
    # we return a tuple of values
    return product, summed

# the function returned a tuple and we unpack it to var1 and var2
var1, var2 = calculate(10, 5)
print("product:", var1, "sum:", var2)
Make sure you actually save your 2 values into 2 variables, or else you end up with errors or unexpected behavior:
# this will assign `var` to a tuple:
var = calculate(10, 5)
print(var)

# this will generate an error
var1, var2, var3 = calculate(10, 5)
Saving the resulting values in different variables can be useful when you want to use them in different places in your code:
def sum_and_diff_len_strings(string1, string2):
    """Return the sum of and difference between the lengths of two strings."""
    sum_strings = len(string1) + len(string2)
    diff_strings = len(string1) - len(string2)
    return sum_strings, diff_strings

sum_strings, diff_strings = sum_and_diff_len_strings...
1.6 Documenting your functions with docstrings
A docstring is a string that occurs as the first statement in a function definition. For consistency, always use """triple double quotes""" around docstrings. Triple quotes are used even though the string fits on one line. This makes it easy to expand it later. There's no bl...
def my_function(param1, param2):
    """
    This is a reST style.

    :param param1: this is a first param
    :param param2: this is a second param
    :returns: this is a description of what is returned
    """
    return
You can see that this docstring describes the function goal, its parameters, its outputs, and the errors it raises. It is good practice to write a docstring for your functions, so we will always do this! For now we will stick with single-sentence docstrings. You can read more about this topic here, here, and here. 1....
def is_even(p):
    """Check whether a number is even."""
    if p % 2 == 1:
        return False
    else:
        return True
If the function output is what you expect, Python will show nothing.
input_value = 2
expected_output = True
actual_output = is_even(input_value)
assert actual_output == expected_output, f'expected {expected_output}, got {actual_output}'
However, when the actual output is different from what we expected, we get an error. Let's say we made a mistake in writing the function.
def is_even(p):
    """Check whether a number is even."""
    if p % 2 == 1:
        return False
    else:
        return False

input_value = 2
expected_output = True
actual_output = is_even(input_value)
assert actual_output == expected_output, f'expected {expected_output}, got {actual_output}'
1.8 Storing a function in a Python module
Since Python functions are nice blocks of code with a clear focus, wouldn't it be nice if we could store them in a file? By doing this, we make our code visually very appealing, since we are only left with function calls instead of function definitions. Please open the file utils...
from utils_chapter11 import happy_birthday
happy_birthday('George')

from utils_chapter11 import multiply
multiply(1, 2)

from utils_chapter11 import is_even
is_it_even = is_even(5)
print(is_it_even)
2. Variable scope
Please note: scope is a hard concept to grasp, but we think it is important to introduce it here. We will do our best to repeat it during the course. Any variables you declare in a function, as well as the arguments that are passed to a function, will only exist within the scope of that function, i.e.,...
def setx():
    """Set the value of a variable to 1."""
    x = 1

setx()
print(x)
Even when we return x, it does not exist outside of the function:
def setx():
    """Set the value of a variable to 1."""
    x = 1
    return x

setx()
print(x)
Also consider this:
x = 0

def setx():
    """Set the value of a variable to 1."""
    x = 1

setx()
print(x)
In fact, this code has produced two completely unrelated x's! So, you cannot read a local variable outside of its local context. Nevertheless, it is possible to read a global variable from within a function, in a strictly read-only fashion.
x = 1

def getx():
    """Print the value of the variable x."""
    print(x)

getx()
You can use two built-in functions in Python when you are unsure whether a variable is local or global. The function locals() returns a dictionary of all local variables, and the function globals() returns a dictionary of all global variables. Note that there are many non-interesting system variables that these functions return, so in p...
a = 3
b = 2

def setb():
    """Set the value of a variable b to 11."""
    b = 11
    c = 20
    print("Is 'a' defined locally in the function:", 'a' in locals())
    print("Is 'b' defined locally in the function:", 'b' in locals())
    print("Is 'b' defined globally:", 'b' in globals())

setb()
print("Is 'a' defined glo...
Finally, note that the local context stays local to the function, and is not shared even with other functions called within a function, for example:
def setb_again():
    """Set the value of a variable to 3."""
    b = 3
    print("in 'setb_again' b =", b)

def setb():
    """Set the value of a variable b to 2."""
    b = 2
    setb_again()
    print("in 'setb' b =", b)

b = 1
setb()
print("global b =", b)
We call the function setb() from the global context, and we call the function setb_again() from the context of the function setb(). The variable b in the function setb_again() is set to 3, but this does not affect the value of this variable in the function setb() which is still 2. And as we saw before, the changes in s...
# your code here
Exercise 2: Add another keyword parameter message to the multiply function, which will allow a user to print a message. The default value of this keyword parameter should be an empty string. Test this with 2 messages of your choice. Also test it without specifying a value for the keyword argument when calling a functi...
# function to modify:
def multiply(x, y, third_number=1):
    """Multiply two or three numbers and print the result."""
    result = x * y * third_number
    print(result)
Exercise 3: Write a function called multiple_new_lines which takes an integer as argument and prints that many newlines by calling the function new_line.
def new_line():
    """Print a new line."""
    print()

# your code here
Exercise 4: Let's refactor the happy birthday function to have no repetition. Note that previously we printed "Happy Birthday to you!" three times. Make another function happy_birthday_to_you() that only prints this line, and call it inside the function happy_birthday(name).
def happy_birthday_to_you():
    # your code here

# original function - replace the print statements by the happy_birthday_to_you() function:
def happy_birthday(name):
    """Print a birthday song with the "name" of the person inserted."""
    print("Happy Birthday to you!")
    print("Happy Birthday to you...
Exercise 5: Try to figure out what is going on in the following examples. How does Python deal with the order of calling functions?
def multiply(x, y, third_number=1):
    """Multiply two or three numbers and print the result."""
    result = x * y * third_number
    return result

print(multiply(1 + 1, 6 - 2))
print(multiply(multiply(4, 2), multiply(2, 5)))
print(len(str(multiply(10, 100))))
Exercise 6: Complete this code to switch the values of two variables:
def switch_two_values(x, y):
    # your code here

a = 'orange'
b = 'apple'
a, b = switch_two_values(a, b)
# `a` should contain "apple" after this call, and `b` should contain "orange"
print(a, b)
The following lines import the meshes that we will use:
# import meshes
matriz = bempp.api.import_grid('/home/milan/matriz_12x12x300_E16772.msh')
grid_0 = bempp.api.import_grid('/home/milan/PH1_a5_l10_E5550_D2.msh')
grid_1 = bempp.api.import_grid('/home/milan/PH2_a5_l10_E5550_D2.msh')
Code_instructions.ipynb
MilanUngerer/BEM_microwire
mit
Also, we have to define the boundary functions that we will use to apply the boundary conditions. In this case, a harmonic wave for Dirichlet and its derivative for Neumann.
# Dirichlet and Neumann functions
def dirichlet_fun(x, n, domain_index, result):
    result[0] = 1. * np.exp(1j * k * x[0])

def neumann_fun(x, n, domain_index, result):
    result[0] = 1. * 1j * k * n[0] * np.exp(1j * k * x[0])
Now it's time to define the multitrace operators that represent the diagonal of the matrix. These operators carry the information about the transmission between the geometries. The definition of the multitrace operator (A) is shown below: $$ A = \begin{bmatrix} -K & S \\ D & K' \end{bmatrix} $$ where K represents the double lay...
# multitrace operators
Ai_m = bempp.api.operators.boundary.helmholtz.multitrace_operator(matriz, nm*k)
Ae_m = bempp.api.operators.boundary.helmholtz.multitrace_operator(matriz, k)
Ai_0 = bempp.api.operators.boundary.helmholtz.multitrace_operator(grid_0, nm*nc*k)
Ae_0 = bempp.api.operators.boundary.helmholtz.multitrace_o...
To obtain the spaces created with the multitrace operator, you can do the following:
# spaces
dirichlet_space_m = Ai_m[0,0].domain
neumann_space_m = Ai_m[0,1].domain
dirichlet_space_0 = Ai_0[0,0].domain
neumann_space_0 = Ai_0[0,1].domain
dirichlet_space_1 = Ai_1[0,0].domain
neumann_space_1 = Ai_1[0,1].domain
Code_instructions.ipynb
MilanUngerer/BEM_microwire
mit
To build the complete diagonal of the main matrix shown at the beginning, it is necessary to define the identity operators:
# Identity operators
ident_m = bempp.api.operators.boundary.sparse.identity(neumann_space_m, neumann_space_m, neumann_space_m)
ident_0 = bempp.api.operators.boundary.sparse.identity(neumann_space_0, neumann_space_0, neumann_space_0)
ident_1 = bempp.api.operators.boundary.sparse.identity(neumann_space_1, neumann_space_1, neumann_space_...
Code_instructions.ipynb
MilanUngerer/BEM_microwire
mit
And now assemble them with the multitrace operators:
# Diagonal operators
op_m[1, 1] = op_m[1, 1] + 0.5 * ident_m * ((alfa_m - 1) / alfa_m)
op_0[1, 1] = op_0[1, 1] + 0.5 * ident_0 * (alfa_c - 1)
op_1[1, 1] = op_1[1, 1] + 0.5 * ident_1 * (alfa_c - 1)
Code_instructions.ipynb
MilanUngerer/BEM_microwire
mit
The contributions between the different geometries are represented by operators defined between the meshes; the code to create them is shown below:
# Operators between meshes
SLP_m_0 = bempp.api.operators.boundary.helmholtz.single_layer(neumann_space_m, dirichlet_space_0, dirichlet_space_0, nm*k)
SLP_0_m = bempp.api.operators.boundary.helmholtz.single_layer(neumann_space_0, dirichlet_space_m, dirichlet_space_m, nm*k)
DLP_m_0 = bempp.api.operators.boundary.helmholtz....
Code_instructions.ipynb
MilanUngerer/BEM_microwire
mit
The first subindex corresponds to the domain space, the second one to the range space. Now it is time to create the big block that holds all the operators together; in this case the size is 6x6.
# Blocked operator matrix
blocked = bempp.api.BlockedOperator(6, 6)
Code_instructions.ipynb
MilanUngerer/BEM_microwire
mit
Below is shown how to assemble all the operators into the big block:
# Diagonal
blocked[0, 0] = op_m[0, 0]
blocked[0, 1] = op_m[0, 1]
blocked[1, 0] = op_m[1, 0]
blocked[1, 1] = op_m[1, 1]
blocked[2, 2] = op_0[0, 0]
blocked[2, 3] = op_0[0, 1]
blocked[3, 2] = op_0[1, 0]
blocked[3, 3] = op_0[1, 1]
blocked[4, 4] = op_1[0, 0]
blocked[4, 5] = op_1[0, 1]
blocked[5, 4] = op_1[1, 0]
blocked[5, 5] = op_1[1, 1]
# Contribu...
Code_instructions.ipynb
MilanUngerer/BEM_microwire
mit
The definition of the boundary conditions, the discretization of the operators and the discretization of the right-hand side are:
# Boundary conditions
dirichlet_grid_fun_m = bempp.api.GridFunction(dirichlet_space_m, fun=dirichlet_fun)
neumann_grid_fun_m = bempp.api.GridFunction(neumann_space_m, fun=neumann_fun)

# Discretization of the left-hand side
blocked_discretizado = blocked.strong_form()

# Discretization of the right-hand side
rhs = np.concatenate([dirichl...
Code_instructions.ipynb
MilanUngerer/BEM_microwire
mit
Now it's time to solve the system of equations; in this work we use GMRES. We also save the solution and arrays to plot the convergence later.
# System of equations
import inspect
from scipy.sparse.linalg import gmres

array_it = np.array([])
array_frame = np.array([])
it_count = 0

def iteration_counter(x):
    global array_it
    global array_frame
    global it_count
    it_count += 1
    frame = inspect.currentframe().f_back
    arr...
Code_instructions.ipynb
MilanUngerer/BEM_microwire
mit
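The same GMRES-with-callback pattern can be sketched on a tiny dense system with SciPy (illustrative only, not the actual blocked system; the matrix and right-hand side below are made up):

```python
import numpy as np
from scipy.sparse.linalg import gmres

residuals = []

def iteration_counter(rk):
    # in SciPy's default (legacy) callback mode, rk tracks the residual
    residuals.append(rk)

# Small symmetric positive-definite test system
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x, info = gmres(A, b, callback=iteration_counter)
# info == 0 signals convergence
```

The counter in the notebook additionally inspects the caller's frame to record the residual per iteration for the convergence plot.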
Now we can reorder the solution and use it to calculate the field at some point in the exterior of the matrix:
# Interior field
interior_field_dirichlet_m = bempp.api.GridFunction(dirichlet_space_m, coefficients=x[:dirichlet_space_m.global_dof_count])
interior_field_neumann_m = bempp.api.GridFunction(neumann_space_m, coefficients=x[dirichlet_space_m.global_dof_count:dirichlet_space_m.global_dof_count + neumann_space_m.global_dof_...
Code_instructions.ipynb
MilanUngerer/BEM_microwire
mit
Finally, we plot the convergence and export it:
import matplotlib
matplotlib.use("Agg")
from matplotlib import pyplot
from matplotlib import rcParams

rcParams["font.family"] = "serif"
rcParams["font.size"] = 20

pyplot.figure(figsize=(15, 10))
pyplot.title("Convergence")
pyplot.plot(array_it, array_frame, lw=2)
pyplot.xlabel("iteration")
pyplot.ylabel("residual")
py...
Code_instructions.ipynb
MilanUngerer/BEM_microwire
mit
Budget. First, let's get an overview of what has been budgeted for the Secretaria Municipal de Educação from 2011 through the current year, as well as the amounts frozen and already executed. This is possible with the "Despesas" (expenses) query.
df_lista = []
a = 0
for ano in anos:
    """query all the years in the list above"""
    url_orcado = '{base_url}/consultarDespesas?anoDotacao={ano}&mesDotacao=12&codOrgao=16'.format(base_url=base_url, ano=ano)
    request_orcado = requests.get(url_orcado, headers=headers, ...
SOF_Execucao_Orcamentaria_SMESP.ipynb
campagnucci/api_sof
gpl-3.0
A view of the budgeted amounts (value updated at the beginning of the year, after a more accurate projection of revenues) and the settled amounts, in the last months of the year:
series = df_total[['anoExercicio', 'valOrcadoAtualizado', 'valLiquidado']].set_index('anoExercicio')
series = series[['valOrcadoAtualizado', 'valLiquidado']].divide(1000000000)
grafico1 = series[['valOrcadoAtualizado', 'valLiquidado']].plot(kind='bar', title="Orçado x Liquidado", figsize=(15, 7), legend=True, fontsize...
SOF_Execucao_Orcamentaria_SMESP.ipynb
campagnucci/api_sof
gpl-3.0
Commitments ("Empenhos"). A commitment ("empenho") is the act in which the authority verifies the existence of the budget appropriation and authorizes the execution of the expense (for example, to run a procurement process). From then on, the amounts are settled and paid as a contract is executed. The API returns only one page per query. The script below ch...
pagination = '&numPagina={PAGE}'
ano_empenho = 2017
request_empenhos = requests.get('{base_url}/consultaEmpenhos?anoEmpenho={ano}&mesEmpenho=12&codOrgao=16'.format(base_url=base_url, ano=ano_empenho), headers=headers, verify=True).json()
number_of_pages = request_e...
SOF_Execucao_Orcamentaria_SMESP.ipynb
campagnucci/api_sof
gpl-3.0
Option 2. Historical Series
pagination = '&numPagina={PAGE}'
SOF_Execucao_Orcamentaria_SMESP.ipynb
campagnucci/api_sof
gpl-3.0
Warning: the queries can take hours, depending on the number of years requested; check whether the number of years above is really necessary; do this only once, and save the dataset for future analyses.
df_empenhos_lista = []
for ano in anos:
    request_empenhos = requests.get('{base_url}/consultaEmpenhos?anoEmpenho={ano}&mesEmpenho=12&codOrgao=16'.format(base_url=base_url, ano=ano), headers=headers, verify=True).json()
    number_of_pages = request_empenhos['metada...
SOF_Execucao_Orcamentaria_SMESP.ipynb
campagnucci/api_sof
gpl-3.0
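The per-year frames accumulated in a list like `df_empenhos_lista` are typically combined into a single DataFrame with `pd.concat`; a minimal sketch with made-up values:

```python
import pandas as pd

# Toy per-year frames standing in for the parsed API responses
df_2016 = pd.DataFrame({'anoEmpenho': [2016, 2016], 'valLiquidado': [10.0, 20.0]})
df_2017 = pd.DataFrame({'anoEmpenho': [2017], 'valLiquidado': [30.0]})

# ignore_index=True renumbers rows so the combined index has no duplicates
df_empenhos_serie = pd.concat([df_2016, df_2017], ignore_index=True)
```

`ignore_index=True` matters here because each page or year starts its own index at 0.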
With the steps above, we requested all the pages and converted the JSON data into a DataFrame. Now we can analyze these data with Pandas. To check how many records exist, let's look at the end of the list (here there were only 2016-2017):
df_empenhos_serie.tail()
SOF_Execucao_Orcamentaria_SMESP.ipynb
campagnucci/api_sof
gpl-3.0
Application modalities. Here we see, as an example, the amount of resources applied in Health, by Modality -- whether applied directly in the network or transferred to social organizations. Note that the same could be done for any agency, or even for the City as a whole:
modalidades = df_empenhos_serie.groupby('txtModalidadeAplicacao')[['valTotalEmpenhado', 'valLiquidado']].sum()
modalidades

# Another way to do the same operation:
# pd.pivot_table(df_empenhos, values='valTotalEmpenhado', index=['txtModalidadeAplicacao'], aggfunc=np.sum)
SOF_Execucao_Orcamentaria_SMESP.ipynb
campagnucci/api_sof
gpl-3.0
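The groupby-and-sum above can be checked on toy data (column names mirror the API; values are made up):

```python
import pandas as pd

df = pd.DataFrame({
    'txtModalidadeAplicacao': ['Direta', 'Repasse', 'Direta'],
    'valTotalEmpenhado': [100.0, 50.0, 25.0],
    'valLiquidado': [80.0, 40.0, 25.0],
})

# one row per modality, with both value columns summed
modalidades = df.groupby('txtModalidadeAplicacao')[['valTotalEmpenhado', 'valLiquidado']].sum()
```

Passing the column selection as a list (double brackets) keeps the result a DataFrame with both columns.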
Largest expenses. Here we will produce the list of the 15 largest Education expenses in this period:
despesas = pd.pivot_table(df_empenhos_serie,
                          values=['valLiquidado', 'valPagoExercicio'],
                          index=['numCpfCnpj', 'txtRazaoSocial', 'txtDescricaoProjetoAtividade'],
                          aggfunc=np.sum).sort_values('valPagoExercicio', axis=0, ascending...
SOF_Execucao_Orcamentaria_SMESP.ipynb
campagnucci/api_sof
gpl-3.0
Funding sources. Grouping of the commitments by funding source:
fonte = pd.pivot_table(df_empenhos_serie,
                       values=['valLiquidado', 'valPagoExercicio'],
                       index=['txtDescricaoFonteRecurso'],
                       aggfunc=np.sum).sort_values('valPagoExercicio', axis=0, ascending=False, inplace=False, kind='quicksort', na_position='last')...
SOF_Execucao_Orcamentaria_SMESP.ipynb
campagnucci/api_sof
gpl-3.0
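The pivot-then-sort pattern can be sketched on a tiny frame (illustrative values only):

```python
import pandas as pd

df = pd.DataFrame({
    'txtDescricaoFonteRecurso': ['Tesouro', 'Federal', 'Tesouro'],
    'valLiquidado': [10.0, 5.0, 20.0],
    'valPagoExercicio': [8.0, 5.0, 18.0],
})

# aggregate both value columns per funding source, largest payers first
fonte = pd.pivot_table(df,
                       values=['valLiquidado', 'valPagoExercicio'],
                       index=['txtDescricaoFonteRecurso'],
                       aggfunc='sum').sort_values('valPagoExercicio', ascending=False)
```

Here `aggfunc='sum'` is equivalent to the `np.sum` used above.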
Step 4. Want to save a csv? The goal of this tutorial was not to make an exhaustive analysis of the dataset, but only to show what is possible by consuming the API. You can also save the whole commitments dataset to a .csv file and work in Excel (I totally get it). Pandas helps with that too. Like this:
df_empenhos_serie.to_csv('serie_empenhos.csv')
SOF_Execucao_Orcamentaria_SMESP.ipynb
campagnucci/api_sof
gpl-3.0
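A quick round trip (using a temporary file here, not the notebook's `serie_empenhos.csv`) shows that passing `index=False` avoids writing the DataFrame index as an extra column:

```python
import os
import tempfile
import pandas as pd

df = pd.DataFrame({'anoEmpenho': [2016, 2017], 'valLiquidado': [10.0, 30.0]})

# write without the index, then read it back
path = os.path.join(tempfile.mkdtemp(), 'serie_empenhos_demo.csv')
df.to_csv(path, index=False)
back = pd.read_csv(path)
```

Without `index=False`, the reloaded file would gain an unnamed first column holding the old index.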
Create X and y. Use only `same_srv_rate` and `dst_host_srv_count`.
y = (data['class'] == 'anomaly').astype(int)
y.value_counts()

X = data[['same_srv_rate', 'dst_host_srv_count']]
exercises/05-IntrusionDetection.ipynb
albahnsen/ML_SecurityInformatics
mit
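The label construction can be verified on a tiny made-up frame (values are illustrative, not from the intrusion dataset):

```python
import pandas as pd

data = pd.DataFrame({
    'class': ['normal', 'anomaly', 'normal'],
    'same_srv_rate': [0.10, 0.95, 0.20],
    'dst_host_srv_count': [5, 200, 10],
})

# boolean mask -> 0/1 labels: 1 for anomalies, 0 for normal traffic
y = (data['class'] == 'anomaly').astype(int)
X = data[['same_srv_rate', 'dst_host_srv_count']]
```

The comparison yields a boolean Series, and `.astype(int)` maps True/False to 1/0.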
Download EEG Data The following code downloads a copy of the EEG Eye State dataset. All data is from one continuous EEG measurement with the Emotiv EEG Neuroheadset. The duration of the measurement was 117 seconds. The eye state was detected via a camera during the EEG measurement and added later manually to the file ...
# csv_url = "http://www.stat.berkeley.edu/~ledell/data/eeg_eyestate_splits.csv"
csv_url = "https://h2o-public-test-data.s3.amazonaws.com/smalldata/eeg/eeg_eyestate_splits.csv"
data = h2o.import_file(csv_url)
h2o-py/demos/H2O_tutorial_eeg_eyestate.ipynb
spennihana/h2o-3
apache-2.0
The first 14 columns are numeric values that represent EEG measurements from the headset. The "eyeDetection" column is the response. There is an additional column called "split" that was added (by me) in order to specify partitions of the data (so we can easily benchmark against other tools outside of H2O using the s...
data.columns
h2o-py/demos/H2O_tutorial_eeg_eyestate.ipynb
spennihana/h2o-3
apache-2.0
To select a subset of the columns to look at, typical Pandas indexing applies:
columns = ['AF3', 'eyeDetection', 'split']
data[columns].head()
h2o-py/demos/H2O_tutorial_eeg_eyestate.ipynb
spennihana/h2o-3
apache-2.0