Most passengers are in the third class.
titanic_data['Embarked'].value_counts()
Statistical Titanic - Step 1.ipynb
muniri92/Echo-Pod
mit
The most common port to embark from is Southampton.
survivors['SibSp'].value_counts()
Statistical Titanic - Step 1.ipynb
muniri92/Echo-Pod
mit
Most often, survivors had 0 siblings or spouses on board. Question 4 Within what range of standard deviations from the mean (0-1, 1-2, 2-3) is the median ticket price? Above or below the mean?
fare_mean = np.mean(titanic_data.Fare)
print("The average fare price is: " + str(fare_mean))
fare_median = np.median(titanic_data.Fare)
print("The median fare price is: " + str(fare_median))
std_all = np.std(titanic_data.Fare)
print("The standard deviation of the fare prices is: " + str(std_all))
fare_max = fare_mean + std_all  # upper bound of one standard deviation
fare_min = fare_mean - std_all  # lower bound of one standard deviation
print("Anything between " + str(fare_min) + " and " + str(fare_max) + " is within one standard deviation of the mean.")
Statistical Titanic - Step 1.ipynb
muniri92/Echo-Pod
mit
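Question 4 also asks for the specific band (0-1, 1-2, 2-3 standard deviations) and the direction. A small hedged check that computes the median's z-score directly, reusing the fare_mean, fare_median and std_all values from the cell above:

```python
# z-score of the median fare: how many standard deviations from the mean,
# and in which direction (negative = below the mean)
z = (fare_median - fare_mean) / std_all
lower, upper = int(abs(z)), int(abs(z)) + 1
print("z-score of the median fare: {:.3f}".format(z))
print("The median is {} the mean, in the {}-{} standard deviation range.".format(
    "below" if z < 0 else "above", lower, upper))
```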
Because the median (14.4542) is between -17.46132647619951 and 81.86974241334872, we can conclude that it is in fact within one standard deviation of the mean. Question 5 How much more expensive was the 90th percentile ticket than the 5th percentile ticket? Are they the same class?
p90, p5 = np.percentile(titanic_data.Fare, [90, 5])
diff = p90 - p5
print("The 5th percentile: " + str(p5))
print("The 90th percentile: " + str(p90))
print('The difference between the 90th and 5th percentile in ticket cost is ${}'.format(diff))
# Note: this lookup only works if some passenger paid exactly the percentile fare.
class_90 = titanic_data[titanic_data.Fare == p90].Pclass
print("Tickets at the 90th percentile are associated with passengers in class: " + str(class_90.values[1]))
class_5 = titanic_data[titanic_data.Fare == p5].Pclass
print("Tickets at the 5th percentile are associated with passengers in class: " + str(class_5.values[1]))
Statistical Titanic - Step 1.ipynb
muniri92/Echo-Pod
mit
Question 6 Which port has the highest average ticket price paid by passengers?
s_average = np.mean(titanic_data.Fare[titanic_data.Embarked == 'S'])
print("Southampton passengers paid an average of: $" + str(s_average))
q_average = np.mean(titanic_data.Fare[titanic_data.Embarked == 'Q'])
print("Queenstown passengers paid an average of: $" + str(q_average))
c_average = np.mean(titanic_data.Fare[titanic_data.Embarked == 'C'])
print("Cherbourg passengers paid an average of: $" + str(c_average))
Statistical Titanic - Step 1.ipynb
muniri92/Echo-Pod
mit
Cherbourg is the port with the highest average fare. Question 7 Which port has passengers from the most similar passenger class?
import scipy.stats as sp

s_class_ave = titanic_data[titanic_data.Embarked == 'S'].Pclass
s_class_ave = sp.mode(s_class_ave)
print("Passengers from Southampton are mostly from class " + str(s_class_ave[0][0]) + " with a count of " + str(s_class_ave[1][0]))
q_class_ave = titanic_data[titanic_data.Embarked == 'Q'].Pclass
q_class_ave = sp.mode(q_class_ave)
print("Passengers from Queenstown are mostly from class " + str(q_class_ave[0][0]) + " with a count of " + str(q_class_ave[1][0]))
c_class_ave = titanic_data[titanic_data.Embarked == 'C'].Pclass
c_class_ave = sp.mode(c_class_ave)
print("Passengers from Cherbourg are mostly from class " + str(c_class_ave[0][0]) + " with a count of " + str(c_class_ave[1][0]))
Statistical Titanic - Step 1.ipynb
muniri92/Echo-Pod
mit
Southampton has the most passengers from the same class, which is third class, with a count of 353. Question 8 How many male survivors in first class paid lower than the overall median ticket price?
male_under_median = titanic_data[(titanic_data.Fare < fare_median) &
                                 (titanic_data.Pclass == 1) &
                                 (titanic_data.Survived == 1) &
                                 (titanic_data.Sex == "male")]
print("First-class male survivors that paid less than the median: " + str(len(male_under_median)))
Statistical Titanic - Step 1.ipynb
muniri92/Echo-Pod
mit
No first-class male survivors paid less than the median. Question 9 How much older/younger was the average surviving passenger with family members than the average non-surviving passenger without them?
surv_fam = np.mean(survivors[(survivors.SibSp > 0) | (survivors.Parch > 0)].Age)
print("Average age of surviving passengers with any sort of family members: " + str(surv_fam) + "\n")
dead_fam = np.mean(titanic_data[(titanic_data.Survived == 0) & (titanic_data.SibSp == 0) & (titanic_data.Parch == 0)].Age)
print("Average age of dead passengers without any sort of family members: " + str(dead_fam) + "\n")
print("Age difference between surviving passengers w/ family members vs dead passengers w/o family members: " + str(surv_fam - dead_fam))
Statistical Titanic - Step 1.ipynb
muniri92/Echo-Pod
mit
The surviving passengers with family were 6.888 years younger than the dead passengers without family. See above for the exact number. Question 10 Display the relationship between survival rate and the quantile of the ticket price for 20 integer quantiles.
import matplotlib.pyplot as plt
%matplotlib inline

quant_list = []
surv_percents = []
for i in range(20):
    # boundaries of the i-th 5% quantile bin
    i_per, iplus_per = np.percentile(titanic_data.Fare, [i * 5, (i + 1) * 5])
    cut = (titanic_data.Fare > i_per) & (titanic_data.Fare <= iplus_per)
    total = np.sum(cut)
    surv = cut & (titanic_data.Survived == 1)
    surv_percents.append(np.sum(surv) / total)
    quant_list.append(i_per)

plt.figure(figsize=(10, 5))
plt.plot(quant_list, surv_percents)
plt.xlabel("Ticket Price")
plt.ylabel("Percent of Survival")
plt.show()
Statistical Titanic - Step 1.ipynb
muniri92/Echo-Pod
mit
Using Blocks Obligatory "Hello World!"
p.Block("Hello World!")
docs/source/examples.ipynb
manahl/PyBloqs
lgpl-2.1
Play around with alignment
p.Block("Hello World!", h_align="left")
docs/source/examples.ipynb
manahl/PyBloqs
lgpl-2.1
Adding a title
p.Block("Hello World!", title="Announcement", h_align="left")
docs/source/examples.ipynb
manahl/PyBloqs
lgpl-2.1
Writing out dataframes
p.Block(df.head())
docs/source/examples.ipynb
manahl/PyBloqs
lgpl-2.1
Writing out matplotlib plots
p.Block(df.A.plot())
docs/source/examples.ipynb
manahl/PyBloqs
lgpl-2.1
Raw HTML output
p.Block("<b>this text is bold</b>")
docs/source/examples.ipynb
manahl/PyBloqs
lgpl-2.1
Composing blocks
p.VStack([p.Block("Hello World!", title="Announcement"), p.Block("<b>this text is bold</b>")])
docs/source/examples.ipynb
manahl/PyBloqs
lgpl-2.1
In most cases, one does not need to explicitly wrap elements in blocks
p.Block(["Block %s" % i for i in range(8)])
docs/source/examples.ipynb
manahl/PyBloqs
lgpl-2.1
Splitting composite blocks into columns
p.Block(["Block %s" % i for i in range(8)], cols=4)
docs/source/examples.ipynb
manahl/PyBloqs
lgpl-2.1
Layout styling is cascading - styles will cascade from parent blocks to child blocks by default. This behavior can be disabled by setting inherit_cfg to false on the child blocks, or simply specifying the desired settings explicitly.
p.Block(["Block %s" % i for i in range(8)], cols=4, text_align="right")
docs/source/examples.ipynb
manahl/PyBloqs
lgpl-2.1
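As a quick illustration of opting a child block out of the cascade, a hedged sketch (assuming the same p alias used throughout these examples and the inherit_cfg flag mentioned above):

```python
# The parent requests right alignment; the second child disables inheritance
# and keeps the default styling.
p.Block([
    p.Block("inherits the parent's right alignment"),
    p.Block("keeps default styling", inherit_cfg=False),
], text_align="right")
```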
Using specific block types is simple as well. As an example - the Paragraph block:
p.Block([p.Paragraph("First paragraph."), p.Paragraph("Second paragraph."), p.Paragraph("Third paragraph.")], text_align="right")
docs/source/examples.ipynb
manahl/PyBloqs
lgpl-2.1
The Pre block preserves whitespace formatting and is rendered using a fixed width font. Useful for rendering code-like text.
p.Pre("""
some:
  example:
    yaml: [1,2,3]
    data: "text"
""")
docs/source/examples.ipynb
manahl/PyBloqs
lgpl-2.1
Creating custom blocks is trivial. For the majority of the cases, one can just inherit from the Container block, which has most of the plumbing already in place:
class Capitalize(p.Raw):
    def __init__(self, contents, **kwargs):
        # Stringify and capitalize
        contents = str(contents).upper()
        super(Capitalize, self).__init__(contents, **kwargs)

Capitalize("this here text should look like shouting!")

# Emails a block (or a report consisting of many blocks). The emailing is independent of
# previous reports being saved (e.g. there is no need to call save before emailing).
from smtplib import SMTPServerDisconnected

try:
    p.Block('').email()
except SMTPServerDisconnected:
    print("Please create ~/.pybloqs.cfg with entry for 'smtp_server'. See README.md and pybloqs/config.py for details.")
docs/source/examples.ipynb
manahl/PyBloqs
lgpl-2.1
Page break
blocks = [p.Block("First page", styles={"page-break-after": "always"}),
          p.Block("Second page")]
r = p.VStack(blocks)
r.save("two_page_report.pdf")
docs/source/examples.ipynb
manahl/PyBloqs
lgpl-2.1
Unfortunately, we don't have a reference isotherm in our data. We are instead going to be creative and assume that the adsorption on the silica ($SiO_2$ sample) is a good representation of an adsorption on a non-porous version of the MCM-41 sample. Let's try:
iso_1 = next(i for i in isotherms_n2_77k if i.material == 'MCM-41')
iso_2 = next(i for i in isotherms_n2_77k if i.material == 'SiO2')
print(iso_1.material)
print(iso_2.material)

try:
    results = pgc.alpha_s(iso_1, reference_isotherm=iso_2, verbose=True)
except Exception as e:
    print('ERROR!:', e)
docs/examples/alphas.ipynb
pauliacomi/pyGAPS
mit
The data in our reference isotherm covers a smaller range than the isotherm we want to analyse! We are going to be creative again and first model the adsorption behaviour using a ModelIsotherm.
import pygaps

model = pygaps.ModelIsotherm.from_pointisotherm(iso_2, model='BET', verbose=True)
docs/examples/alphas.ipynb
pauliacomi/pyGAPS
mit
With our model fitting the data pretty well, we can now try the $\alpha_s$ method again.
results = pgc.alpha_s(iso_1, model, verbose=True)
docs/examples/alphas.ipynb
pauliacomi/pyGAPS
mit
Implementation of the conjugate gradient method
def ConjugateGradientQuadratic(x0, A, b, tol=1e-8, callback=None):
    x = x0
    r = A.dot(x0) - b
    p = -r
    while np.linalg.norm(r) > tol:
        alpha = r.dot(r) / p.dot(A.dot(p))
        x = x + alpha * p
        if callback is not None:
            callback(x)
        r_next = r + alpha * A.dot(p)
        beta = r_next.dot(r_next) / r.dot(r)
        p = -r_next + beta * p
        r = r_next
    return x

import liboptpy.unconstr_solvers as methods
import liboptpy.step_size as ss

print("\t CG quadratic")
cg_quad = methods.fo.ConjugateGradientQuad(A, b)
x_cg = cg_quad.solve(x0, tol=1e-7, disp=True)

print("\t Gradient Descent")
gd = methods.fo.GradientDescent(f, grad_f, ss.ExactLineSearch4Quad(A, b))
x_gd = gd.solve(x0, tol=1e-7, disp=True)

print("Condition number of A =", abs(max(eigs)) / abs(min(eigs)))
Spring2017-2019/15-ConjGrad/Seminar15.ipynb
amkatrutsa/MIPT-Opt
mit
Convergence plot
plt.figure(figsize=(8, 6))
plt.semilogy([np.linalg.norm(grad_f(x)) for x in cg_quad.get_convergence()], label=r"$\|f'(x_k)\|^{CG}_2$", linewidth=2)
plt.semilogy([np.linalg.norm(grad_f(x)) for x in scopt_cg_array[:50]], label=r"$\|f'(x_k)\|^{CG_{PR}}_2$", linewidth=2)
plt.semilogy([np.linalg.norm(grad_f(x)) for x in gd.get_convergence()], label=r"$\|f'(x_k)\|^{G}_2$", linewidth=2)
plt.legend(loc="best", fontsize=20)
plt.xlabel(r"Iteration number, $k$", fontsize=20)
plt.ylabel("Convergence rate", fontsize=20)
plt.xticks(fontsize=18)
_ = plt.yticks(fontsize=18)

print([np.linalg.norm(grad_f(x)) for x in cg_quad.get_convergence()])

plt.figure(figsize=(8, 6))
plt.plot([f(x) for x in cg_quad.get_convergence()], label=r"$f(x^{CG}_k)$", linewidth=2)
plt.plot([f(x) for x in scopt_cg_array], label=r"$f(x^{CG_{PR}}_k)$", linewidth=2)
plt.plot([f(x) for x in gd.get_convergence()], label=r"$f(x^{G}_k)$", linewidth=2)
plt.legend(loc="best", fontsize=20)
plt.xlabel(r"Iteration number, $k$", fontsize=20)
plt.ylabel("Function value", fontsize=20)
plt.xticks(fontsize=18)
_ = plt.yticks(fontsize=18)
Spring2017-2019/15-ConjGrad/Seminar15.ipynb
amkatrutsa/MIPT-Opt
mit
Thus, the null accuracy is ~62% if we always predict death. Import pyplearnr and initialize the optimized pipeline collection
%matplotlib inline
%load_ext autoreload

import sys
import os

sys.path.append("./pyplearnr")

optimized_pipelines = {}

%%time
%autoreload
import numpy as np
import pyplearnr as ppl

reload(ppl)

kfcv = ppl.NestedKFoldCrossValidation(outer_loop_fold_count=3, inner_loop_fold_count=3)

pipeline_schematic = [
    {'scaler': {
        'none': {},
        'standard': {},
        'min_max': {},
        'normal': {}
    }},
    {'estimator': {
        'knn': {
            'n_neighbors': range(1, 31),
            'weights': ['uniform', 'distance']
        }}}
]

pipelines = ppl.PipelineBuilder().build_pipeline_bundle(pipeline_schematic)
print 'Number of pipelines: %d'%(len(pipelines)), '\n'

kfcv.fit(X.values, y.values, pipelines, scoring_metric='auc')
kfcv.fit(X.values, y.values, pipelines, best_inner_fold_pipeline_inds = {0: 59})
kfcv.fit(X.values, y.values, pipelines, best_outer_fold_pipeline=59)

%autoreload
kfcv.plot_best_pipeline_scores()

%autoreload
kfcv.plot_contest(color_by='scaler', all_folds=True, legend_loc='center left')

%autoreload
kfcv.fit(X.values, y.values, pipelines, best_inner_fold_pipeline_inds = {1: 6})
kfcv.fit(X.values, y.values, pipelines, best_outer_fold_pipeline=8)

%autoreload
%matplotlib inline
kfcv.plot_best_pipeline_scores(number_size=18, markersize=14)

%autoreload
%matplotlib inline
kfcv.plot_contest(number_size=8, markersize=7, all_folds=True, figsize=(10, 40),
                  color_by='scaler', box_line_thickness=2)

kfcv.pipelines[29]

# cmap = pylab.cm.viridis
# print cmap.__doc__

worst_pipelines = [85, 67, 65, 84, 69, 83]
for pipeline_ind in worst_pipelines:
    print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
worst_pipelines = [86, 75, 84, 79, 85, 83]
for pipeline_ind in worst_pipelines:
    print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
worst_pipelines = [77, 61, 81, 83, 74, 82, 84]
for pipeline_ind in worst_pipelines:
    print pipeline_ind, kfcv.pipelines[pipeline_ind]

best_pipelines = [89, 93, 2, 91, 4, 3]
for pipeline_ind in best_pipelines:
    print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
best_pipelines = [91, 93, 5, 43, 4, 100]
for pipeline_ind in best_pipelines:
    print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
best_pipelines = [5, 4, 91, 3, 55, 49, 2]
for pipeline_ind in best_pipelines:
    print pipeline_ind, kfcv.pipelines[pipeline_ind]

%%time
%autoreload
import numpy as np
import pyplearnr as ppl

reload(ppl)

kfcv = ppl.NestedKFoldCrossValidation(outer_loop_fold_count=3, inner_loop_fold_count=3)

pipeline_bundle_schematic = [
    {'scaler': {
        'standard': {},
        'normal': {},
        'min_max': {},
        'binary': {}
    }},
    {'estimator': {
        'knn': {
            'n_neighbors': range(1, 30)
        },
        # 'svm': {
        #     'C': np.array([1.00000000e+00])
        # }
    }}
]

pipelines = ppl.PipelineBuilder().build_pipeline_bundle(pipeline_bundle_schematic)
print 'Number of pipelines: %d'%(len(pipelines)), '\n'

kfcv.fit(X.values, y.values, pipelines, scoring_metric='accuracy')
kfcv.fit(X.values, y.values, pipelines, best_inner_fold_pipeline_inds = {1: 24, 2: 55})
kfcv.fit(X.values, y.values, pipelines, best_outer_fold_pipeline=55)

%autoreload
%matplotlib inline
kfcv.plot_best_pipeline_scores()

%autoreload
%matplotlib inline
kfcv.plot_contest()

best_pipelines = [91, 44, 89, 45, 3, 90]
for pipeline_ind in best_pipelines:
    print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
best_pipelines = [21, 18, 40, 38, 36, 35, 24]
for pipeline_ind in best_pipelines:
    print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
best_pipelines = [55, 39, 41, 42, 47, 40, 114, 110]
for pipeline_ind in best_pipelines:
    print pipeline_ind, kfcv.pipelines[pipeline_ind]

%autoreload
kfcv.print_report()

kfcv.fit(X.values, y.values, pipelines, best_inner_fold_pipeline_inds = {2: 18})
kfcv.fit(X.values, y.values, pipelines, best_outer_fold_pipeline=18)

%autoreload
kfcv.print_report()

best_inner_fold_pipelines = {
    2: 9
}
kfcv.fit(X.values, y.values, pipelines, best_inner_fold_pipeline_inds = best_inner_fold_pipelines)

best_outer_fold_pipeline = 45
kfcv.fit(X.values, y.values, pipelines, best_outer_fold_pipeline = best_outer_fold_pipeline)
pyplearnr_test_code.ipynb
JaggedParadigm/pyplearnr
apache-2.0
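As an aside, the ~62% null accuracy quoted above can be sanity-checked directly from the label vector. A hedged one-liner, assuming `y` holds the 0/1 survival labels (0 = died) used in the fits above:

```python
# Fraction of the majority class = accuracy of always predicting "died"
null_accuracy = (y == 0).mean()
print('Null accuracy: %.3f' % null_accuracy)
```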
4b. Repeat Question 2a and 2b for a NumPy array:
4c. Repeat Question 3b for a NumPy array:
5. Summarize the similarities and differences between arrays and lists in this context.
6. Using similar code as Questions 1 and 2, answer the same questions for a string.
7. Using similar code as Questions 1 and 2, answer the same questions for a tuple.
8. What are the differences between:
8a. the string - tuple group and the list - array group? Explain.
8b. the two for-loop approaches (from Questions 1 and 2)? Explain.
9. Consider the two approaches using for-loops shown in Questions 1 and 2 above.
9a. When would it be better to use the first for-loop approach when programming with a collection of data?
9b. When would it be better to use the second for-loop approach when programming with a collection of data?
10a. Before you run the code in the cell below, predict what you think will print.
## Q10 code
def change(item):
    item = 100

print("before", list1)
change(list1[0])
print("after", list1)
lesson24/Lesson24_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
10b. Now run the code. Did the contents of list1 change?
10c. If you called the change function on array1, would the contents of array1 change?
10d. Explain the behavior that you see when you run the change function on a list or array.
11a. Before you run the code in the cell below, predict what you think will print.
## Q11 code
def change_first(collection):
    collection[0] = 100

print("before", list1)
change_first(list1)
print("after", list1)
lesson24/Lesson24_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
11b. Now run the code. Did the contents of list1 change?
11c. If you called the change_first function on array1, would the contents of array1 change?
11d. Explain the behavior that you see when you run the change_first function on a list or array.
Model 2: Assignments
We also need to understand whether, when we assign a new variable to an existing collection of data, we are referring to the original collection or a copy of the entire collection. For each of the code cells below, predict the output BEFORE running them ("prediction"), then run and comment on whether the original variable changed or not ("analysis").
12a. Prediction:
## Q12 code
x = 0
y = x
y = 50
print(x)
lesson24/Lesson24_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
12b. Analysis: 13a. Prediction:
## Q13 code
list1 = list(range(5))
list2 = list1
list2[0] = 50
print(list1)
lesson24/Lesson24_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
13b. Analysis: 14a. Prediction:
## Q14 code
list3 = list(list1)
list3[0] = 100
print(list1)
lesson24/Lesson24_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
14b. Analysis: 15a. Prediction:
## Q15 code
array1 = np.array(range(5))
array2 = array1
array2[0] = 50
print(array1)
lesson24/Lesson24_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
15b. Analysis: 16a. Prediction:
## Q16 code
array3 = np.array(array1)
array3[0] = 100
print(array1)
lesson24/Lesson24_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
16b. Analysis: 17a. Prediction:
## Q17 code
dictionary1 = {"A":"alpha", "B":"beta", "C":"gamma"}
dictionary2 = dictionary1
dictionary2["A"] = "first letter"
print(dictionary1)
lesson24/Lesson24_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
17b. Analysis: 18a. Prediction:
## Q18 code
dictionary3 = dict(dictionary1)
dictionary3["A"] = "T"
print(dictionary1)
lesson24/Lesson24_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
18b. Analysis:
Critical Thinking Questions
19. What is the default behavior of an assignment (=) in Python? In other words, does it make a second reference to an original collection, or does it make a copy of the entire collection?
20. What syntax is required to produce the opposite behavior (not the default behavior)?
21. In your own words, explain the main idea from this model.
22. Does this main idea apply to strings?
23. Here are two additional ways to make copies of a collection. Test whether they give you a second reference to the original collection or whether they make a copy of the entire collection, and describe your findings (try each method on each indicated data type):
23a. for lists and arrays, test a slice with no values and describe the results (copy or reference?). list4 = list1[:]
23b. for arrays and dictionaries, test the copy method and describe the results (copy or reference?). array4 = array1.copy() (a small test sketch for 23a and 23b appears after the next code cell)
Model 3: List Comprehension
We first saw list comprehensions very briefly in our first lesson on random numbers. List comprehensions provide a concise way to create lists. Common applications are to make new lists where each element is the result of some operation(s) applied to each member of another sequence or iterable, or to create a subsequence of those elements that satisfy a certain condition (reference). A well written list comprehension can often take the place of a for loop. Let's work through some examples.
## run this code to make a list
original = list(range(5))
print(original)
lesson24/Lesson24_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
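Going back to Questions 23a and 23b, a quick hedged test sketch (it assumes list1 and array1 from the earlier questions are still defined):

```python
## hypothetical Q23 check -- are these copies or references?
list4 = list1[:]          # 23a: slice with no values
list4[0] = 999
print(list1)              # unchanged -> the slice made a copy

array4 = array1.copy()    # 23b: the copy method
array4[0] = 999
print(array1)             # unchanged -> .copy() made a copy
```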
Write and run code that makes a new list, called squares that contains the squares of the values in original using a for loop.
## run this code to make a new list of squares of original
## this version uses list comprehension
squares_lc = [x**2 for x in original]
print(squares_lc)
lesson24/Lesson24_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
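For comparison with the exercise above, one possible for-loop version (a sketch, not the only valid answer):

```python
## for-loop version of the squares exercise
squares = []
for x in original:
    squares.append(x**2)
print(squares)
```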
24a. Explain the results of the list comprehension above.
24b. How does the list comprehension work?
Write and run code that makes a new list, called evens, that contains the even values from original using a for loop.
## run this code to make a new list of even numbers from original
## this version uses list comprehension
evens_lc = [x for x in original if x%2==0]
print(evens_lc)
lesson24/Lesson24_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
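Again for comparison, one possible for-loop version of the evens exercise (a sketch):

```python
## for-loop version of the evens exercise
evens = []
for x in original:
    if x % 2 == 0:
        evens.append(x)
print(evens)
```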
25a. Explain the results of the list comprehension above.
25b. How does the list comprehension work?
Write and run code that uses a loop or loops to make the following list of tuples that contain an integer and its square: [(0, 0), (1, 1), (2, 4), (3, 9), (4, 16), (5, 25)]
Now write and run code that produces the same result in a single line using a list comprehension (one possible answer is sketched after the examples below).
Note: you can also have multiple loop replacements in sequential order in a list comprehension or nested inside a list comprehension. Check out the examples below from here and here.
## run this code
## list comprehension of loops below
[(x, y) for x in [1,2,3] for y in [3,1,4] if x != y]

## run this code
## loop version of list comprehension above
combs = []
for x in [1,2,3]:
    for y in [3,1,4]:
        if x != y:
            combs.append((x, y))
combs

## run this code
## list comprehension of loops below
[[x+y for x in ['A', 'B']] for y in ['C', 'D']]

## run this code
## loop version of list comprehension above
list_a = ['A', 'B']
list_b = ['C', 'D']
lcombs = []
for y in list_b:
    for x in list_a:
        lcombs.append([x+y])
lcombs
lesson24/Lesson24_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
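One possible answer to the integer/square exercise above, shown both as a loop and as a single list comprehension (a sketch; the variable names are arbitrary):

```python
## loop version
pairs = []
for x in range(6):
    pairs.append((x, x**2))
print(pairs)

## single-line list comprehension version
pairs_lc = [(x, x**2) for x in range(6)]
print(pairs_lc)
```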
Reading the HTML file We read the URL of the soccer teams page and get the body of the response. We also create a cdr; it contains raw_content and url fields. In the second part of this tutorial, we'll use it.
url = 'https://en.wikipedia.org/wiki/List_of_football_clubs_in_Italy'
html_page = open('./resources/italy_teams.html', mode='r', encoding='utf-8').read()

cdr = {
    'raw_content': html_page,
    'url': url,
    'dataset': 'italy_team'
}

print('The first 600 chars of the html page:\n')
print(html_page[:600])
notebooks/etk_tutorial.ipynb
usc-isi-i2/etk
mit
Extracting the tables Extracting the tables in a Web page is very easy, as ETK has a table extractor. We divide this phase into two parts. The first part is to create an instance of TableExtractor, and use that instance to extract the raw tables.
my_table_extractor = TableExtractor()
tables_in_page = my_table_extractor.extract(html_page)[:14]

print('Number of tables in this page:', len(tables_in_page), '\n')
print('The first table in the page shows below: \n')
print(json.dumps(tables_in_page[0].value, indent=2))
notebooks/etk_tutorial.ipynb
usc-isi-i2/etk
mit
In the second part, we use JSON path to do further table extraction. Aside: ETK uses JSON paths to access data in JSON documents. Take a look at the excellent and short introduction to JSON paths: http://goessner.net/articles/JsonPath/
all_json_path = '$.cells[0:4].text'

docs = list()
for table in tables_in_page:
    # skipping the first row, the heading
    for row in table.value['rows'][1:]:
        doc = etk.create_document(row)
        row_values = doc.select_segments(all_json_path)

        # add the information we extracted in the knowledge graph of the doc.
        doc.kg.add_value('team', value=row_values[0].value)
        doc.kg.add_value('city_name', value=row_values[1].value)
        doc.kg.add_value('stadium', value=row_values[2].value)
        capacity_split = re.split(' |,', row_values[3].value)
        if capacity_split[-1] != '':
            capacity = int(capacity_split[-2] + capacity_split[-1]) if len(capacity_split) > 1 else int(capacity_split[-1])
            doc.kg.add_value('capacity', value=capacity)
        docs.append(doc)

print('Number of rows extracted from that page', len(docs), '\n')
print('Sample rows(5):')
for doc in docs[:5]:
    print(doc.kg.value, '\n')
notebooks/etk_tutorial.ipynb
usc-isi-i2/etk
mit
The extracted tables are now stored in your JSON document. Next, we construct a dict that maps city names to all geonames records that contain the city name, for cities with population greater than 25,000.
file_name = './resources/cities_ppl_25000.json'
file = open(file_name, 'r')
city_dataset = json.loads(file.read())
file.close()
city_list = list(city_dataset.keys())

print('There are', len(city_list), 'cities with population greater than or equal to 25,000.\n')
print('City list samples(20):\n')
print(city_list[:20])
notebooks/etk_tutorial.ipynb
usc-isi-i2/etk
mit
Identifying the city names in geonames and linking to geonames There are many ways to do this step. We will do it using the ETK glossary extractor to illustrate how to use other extractors and how to chain the results of one extractor as input to other extractors. Using data from the geonames.org web site, we prepared a list of all cities in the world with population greater than 25,000. We use this small glossary to make the code run faster, but you may want to try it with the full list of cities. First, we need to load the glossary in ETK. We're using the default tokenizer to tokenize the strings. Besides, we set ngrams to zero to let the program choose the best ngram number automatically.
my_glossary_extractor = GlossaryExtractor(glossary=city_list, extractor_name='tutorial_glossary', tokenizer=etk.default_tokenizer, ngrams=3, case_sensitive=False)
notebooks/etk_tutorial.ipynb
usc-isi-i2/etk
mit
Now we are going to use the glossary to extract from the Home city column all the strings that match names in geonames. This method will allow us to extract the geonames city name from cells that may contain extraneous information. To run the glossary extractor over all cells containing Home city we use a JSON path that selects these cells across all tables. Our list of extractions has the names of cities that we know appear in geonames. Often, different cities in the world have the same name (e.g., Paris, France and Paris, Texas). To get the latitude and longitude, we need to identify the correct city. We know all the cities are in Italy, so we can easily filter.
hit_count = 0
for doc in docs:
    city_json_path = '$.cells[1].text'
    row_values = doc.select_segments(city_json_path)

    # use the city field of the doc, run the GlossaryExtractor
    extractions = doc.extract(my_glossary_extractor, row_values[0])
    if extractions:
        path = '$."' + extractions[0].value + '"[?(@.country == "Italy")]'
        jsonpath_expr = jex.parse(path)
        city_match = jsonpath_expr.find(city_dataset)
        if city_match:
            hit_count += 1
            # add corresponding values of city_dataset into knowledge graph of the doc
            for field in city_match[0].value:
                doc.kg.add_value(field, value=city_match[0].value[field])

print('There\'re', hit_count, 'hits for city_list.\n')
print('Final result sample:\n')
print(json.dumps(docs[0].kg.value, indent=2))
notebooks/etk_tutorial.ipynb
usc-isi-i2/etk
mit
Part 2 ETK Module
import os
import sys
import json
import requests
import jsonpath_ng.ext as jex
import re

from etk.etk import ETK
from etk.document import Document
from etk.etk_module import ETKModule
from etk.knowledge_graph_schema import KGSchema
from etk.utilities import Utility
from etk.extractors.table_extractor import TableExtractor
from etk.extractors.glossary_extractor import GlossaryExtractor


class ItalyTeamsModule(ETKModule):
    def __init__(self, etk):
        ETKModule.__init__(self, etk)
        self.my_table_extractor = TableExtractor()
        file_name = './resources/cities_ppl_25000.json'
        file = open(file_name, 'r')
        self.city_dataset = json.loads(file.read())
        file.close()
        self.city_list = list(self.city_dataset.keys())
        self.my_glossary_extractor = GlossaryExtractor(glossary=self.city_list,
                                                       extractor_name='tutorial_glossary',
                                                       tokenizer=etk.default_tokenizer,
                                                       ngrams=3,
                                                       case_sensitive=False)

    def process_document(self, cdr_doc: Document):
        new_docs = list()
        doc_json = cdr_doc.cdr_document
        if 'raw_content' in doc_json and doc_json['raw_content'].strip() != '':
            tables_in_page = self.my_table_extractor.extract(doc_json['raw_content'])[:14]
            for table in tables_in_page:
                # skipping the first row, the heading
                for row in table.value['rows'][1:]:
                    doc = etk.create_document(row)
                    all_json_path = '$.cells[0:4].text'
                    row_values = doc.select_segments(all_json_path)

                    # add the information we extracted in the knowledge graph of the doc.
                    doc.kg.add_value('team', value=row_values[0].value)
                    doc.kg.add_value('city_name', value=row_values[1].value)
                    doc.kg.add_value('stadium', value=row_values[2].value)
                    capacity_split = re.split(' |,', row_values[3].value)
                    if capacity_split[-1] != '':
                        capacity = int(capacity_split[-2] + capacity_split[-1]) if len(capacity_split) > 1 else int(capacity_split[-1])
                        doc.kg.add_value('capacity', value=capacity)

                    city_json_path = '$.cells[1].text'
                    row_values = doc.select_segments(city_json_path)

                    # use the city field of the doc, run the GlossaryExtractor
                    extractions = doc.extract(self.my_glossary_extractor, row_values[0])
                    if extractions:
                        path = '$."' + extractions[0].value + '"[?(@.country == "Italy")]'
                        jsonpath_expr = jex.parse(path)
                        city_match = jsonpath_expr.find(self.city_dataset)
                        if city_match:
                            # add corresponding values of city_dataset into knowledge graph of the doc
                            for field in city_match[0].value:
                                doc.kg.add_value(field, value=city_match[0].value[field])
                    new_docs.append(doc)
        return new_docs

    def document_selector(self, doc) -> bool:
        return doc.cdr_document.get("dataset") == "italy_team"


if __name__ == "__main__":
    url = 'https://en.wikipedia.org/wiki/List_of_football_clubs_in_Italy'
    html_page = open('./resources/italy_teams.html', mode='r', encoding='utf-8').read()
    cdr = {
        'raw_content': html_page,
        'url': url,
        'dataset': 'italy_team'
    }

    kg_schema = KGSchema(json.load(open('./resources/master_config.json')))
    etk = ETK(modules=ItalyTeamsModule, kg_schema=kg_schema)
    etk.parser = jex.parse

    cdr_doc = Document(etk, cdr_document=cdr, mime_type='json', url=cdr['url'])
    results = etk.process_ems(cdr_doc)[1:]

    print('Total docs:', len(results))
    print("Sample result:\n")
    print(json.dumps(results[0].value, indent=2))
notebooks/etk_tutorial.ipynb
usc-isi-i2/etk
mit
Leveraging Word2vec for Text Classification Many machine learning algorithms require the input features to be represented as a fixed-length feature vector. When it comes to text, some of the most common fixed-length features are one-hot encoding methods such as bag of words or tf-idf. The advantage of these approaches is their fast execution time, while the main drawback is that they lose the ordering and semantics of the words. The motivation behind converting text into semantic vectors (such as the ones provided by Word2Vec) is that not only do these types of methods have the capability to extract semantic relationships (e.g. the word powerful should be closely related to strong, as opposed to another word like bank), but they should also preserve most of the relevant information about a text while having relatively low dimensionality. In this notebook, we'll take a look at how a Word2Vec model can also be used as a dimensionality reduction algorithm to feed into a text classifier. A good one should be able to extract the signal from the noise efficiently, hence improving the performance of the classifier. Data Preparation We'll download the text classification data, read it into a pandas dataframe and split it into train and test sets.
import os
from subprocess import call

# assumed to be available from earlier in the notebook, added here for completeness
import pandas as pd
from sklearn.model_selection import train_test_split


def download_data(base_dir='.'):
    """download Reuters' text categorization benchmarks from its url."""
    train_data = 'r8-train-no-stop.txt'
    test_data = 'r8-test-no-stop.txt'
    concat_data = 'r8-no-stop.txt'
    base_url = 'http://www.cs.umb.edu/~smimarog/textmining/datasets/'

    if not os.path.isdir(base_dir):
        os.makedirs(base_dir, exist_ok=True)

    dir_prefix_flag = ' --directory-prefix ' + base_dir

    # brew install wget  # on a mac if you don't have it
    train_data_path = os.path.join(base_dir, train_data)
    if not os.path.isfile(train_data_path):
        call('wget ' + base_url + train_data + dir_prefix_flag, shell=True)

    test_data_path = os.path.join(base_dir, test_data)
    if not os.path.isfile(test_data_path):
        call('wget ' + base_url + test_data + dir_prefix_flag, shell=True)

    concat_data_path = os.path.join(base_dir, concat_data)
    if not os.path.isfile(concat_data_path):
        # concatenate train and test files, we'll make our own train-test splits
        # the > piping symbol directs the concatenated file to a new file, it
        # will replace the file if it already exists; on the other hand, the >> symbol
        # will append if it already exists
        train_test_path = os.path.join(base_dir, 'r8-*-no-stop.txt')
        call('cat {} > {}'.format(train_test_path, concat_data_path), shell=True)

    return concat_data_path

base_dir = 'data'
data_path = download_data(base_dir)
data_path


def load_data(data_path):
    texts, labels = [], []
    with open(data_path) as f:
        for line in f:
            label, text = line.split('\t')
            # texts are already tokenized, just split on space
            # in a real use-case we would put more effort in preprocessing
            texts.append(text.split())
            labels.append(label)

    return pd.DataFrame({'texts': texts, 'labels': labels})

data = load_data(data_path)
data['labels'] = data['labels'].astype('category')
print('dimension: ', data.shape)
data.head()

label_mapping = data['labels'].cat.categories
data['labels'] = data['labels'].cat.codes
X = data['texts']
y = data['labels']

test_size = 0.1
random_state = 1234
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=test_size, random_state=random_state, stratify=y)

# val_size = 0.1
# X_train, X_val, y_train, y_val = train_test_split(
#     X_train, y_train, test_size=val_size, random_state=random_state, stratify=y_train)
keras/text_classification/word2vec_text_classification.ipynb
ethen8181/machine-learning
mit
Gensim Implementation After feeding the Word2Vec algorithm with our corpus, it will learn a vector representation for each word. This by itself, however, is still not enough to be used as features for text classification, as each record in our data is a document, not a word. To extend these word vectors and generate document-level vectors, we'll take the naive approach and use an average of all the words in the document (we could also leverage tf-idf to generate a weighted-average version, but that is not done here). The Word2Vec algorithm is wrapped inside a sklearn-compatible transformer which can be used almost the same way as CountVectorizer or TfidfVectorizer from sklearn.feature_extraction.text. Almost - because sklearn vectorizers can also do their own tokenization - a feature which we won't be using anyway because the corpus we will be using is already tokenized. In the next few code chunks, we will build a pipeline that transforms the text into low-dimensional vectors via averaged word vectors, use it to fit a boosted tree model, and then report the performance on the training/test set. The transformers folder that contains the implementation is at the following link.
from xgboost import XGBClassifier
from sklearn.pipeline import Pipeline
from transformers import GensimWord2VecVectorizer

gensim_word2vec_tr = GensimWord2VecVectorizer(size=50, min_count=3, sg=1, alpha=0.025, iter=10)
xgb = XGBClassifier(learning_rate=0.01, n_estimators=100, n_jobs=-1)
w2v_xgb = Pipeline([
    ('w2v', gensim_word2vec_tr),
    ('xgb', xgb)
])
w2v_xgb

import time

start = time.time()
w2v_xgb.fit(X_train, y_train)
elapse = time.time() - start
print('elapsed: ', elapse)
w2v_xgb

from sklearn.metrics import accuracy_score, confusion_matrix

y_train_pred = w2v_xgb.predict(X_train)
print('Training set accuracy %s' % accuracy_score(y_train, y_train_pred))
confusion_matrix(y_train, y_train_pred)

y_test_pred = w2v_xgb.predict(X_test)
print('Test set accuracy %s' % accuracy_score(y_test, y_test_pred))
confusion_matrix(y_test, y_test_pred)
keras/text_classification/word2vec_text_classification.ipynb
ethen8181/machine-learning
mit
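The GensimWord2VecVectorizer used above lives in the repo's transformers folder and is not reproduced here. As a rough illustration of the idea described earlier (train Word2Vec, then average the word vectors of each document), a hedged sketch of such a transformer; it is a simplified stand-in, not the repo's actual implementation, and it assumes the older gensim API with the size/iter parameter names used above:

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.base import BaseEstimator, TransformerMixin

class AverageWord2VecVectorizer(BaseEstimator, TransformerMixin):
    """Hypothetical sketch: fit Word2Vec, then map each document to its mean word vector."""

    def __init__(self, size=50, min_count=3, sg=1, alpha=0.025, iter=10):
        self.size = size
        self.min_count = min_count
        self.sg = sg
        self.alpha = alpha
        self.iter = iter

    def fit(self, X, y=None):
        # X is an iterable of tokenized documents (lists of words)
        self.model_ = Word2Vec(X, size=self.size, min_count=self.min_count,
                               sg=self.sg, alpha=self.alpha, iter=self.iter)
        return self

    def transform(self, X):
        vectors = []
        for tokens in X:
            words = [w for w in tokens if w in self.model_.wv]
            if words:
                vectors.append(np.mean(self.model_.wv[words], axis=0))
            else:
                # document with no in-vocabulary words -> zero vector
                vectors.append(np.zeros(self.size))
        return np.vstack(vectors)
```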
We can extract the Word2vec part of the pipeline and do some sanity check of whether the word vectors that were learned made any sense.
vocab_size = len(w2v_xgb.named_steps['w2v'].model_.wv.index2word)
print('vocabulary size:', vocab_size)

w2v_xgb.named_steps['w2v'].model_.wv.most_similar(positive=['stock'])
keras/text_classification/word2vec_text_classification.ipynb
ethen8181/machine-learning
mit
Keras Implementation We'll also show how we can use a generic deep learning framework to implement the Word2Vec part of the pipeline. There are many variants of Word2Vec; here, we'll only be implementing skip-gram and negative sampling. The flow would look like the following: An (integer) input of a target word and a real or negative context word. This is essentially the skipgram part, where any word within the context of the target word is a real context word and we randomly draw from the rest of the vocabulary to serve as the negative context words. An embedding layer lookup (i.e. looking up the integer index of the word in the embedding matrix to get the word vector). A dot product operation. As the network trains, words which are similar should end up having similar embedding vectors. The most popular way of measuring similarity between two vectors $A$ and $B$ is the cosine similarity. \begin{align} similarity = cos(\theta) = \frac{\textbf{A}\cdot\textbf{B}}{\parallel\textbf{A}\parallel_2 \parallel \textbf{B} \parallel_2} \end{align} The denominator of this measure acts to normalize the result; the real similarity operation is on the numerator: the dot product between vectors $A$ and $B$. Followed by a sigmoid output layer. Our network is a binary classifier since it's distinguishing words from the same context versus those that aren't.
# the keras model/graph would look something like this:
from keras import layers, optimizers, Model

# adjustable parameter that control the dimension of the word vectors
embed_size = 100

input_center = layers.Input((1,))
input_context = layers.Input((1,))

embedding = layers.Embedding(vocab_size, embed_size, input_length=1, name='embed_in')
center = embedding(input_center)    # shape [seq_len, # features (1), embed_size]
context = embedding(input_context)

center = layers.Reshape((embed_size,))(center)
context = layers.Reshape((embed_size,))(context)

dot_product = layers.dot([center, context], axes=1)
output = layers.Dense(1, activation='sigmoid')(dot_product)

model = Model(inputs=[input_center, input_context], outputs=output)
model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(lr=0.01))
model.summary()

# then we can feed in the skipgram and its label (whether the word pair is in or outside
# the context)
batch_center = [2354, 2354, 2354, 69, 69]
batch_context = [4288, 203, 69, 2535, 815]
batch_label = [0, 1, 1, 0, 1]
model.train_on_batch([batch_center, batch_context], batch_label)
keras/text_classification/word2vec_text_classification.ipynb
ethen8181/machine-learning
mit
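The hand-written batch above is only for illustration. In practice the (target, context, label) triples described earlier can be generated with Keras' built-in skip-gram helper. A hedged sketch (the token ids in the toy sequence are made up; vocab_size and model are the ones defined above):

```python
from keras.preprocessing.sequence import skipgrams

# a toy "document" of integer word ids; in the real pipeline these would come
# from mapping the corpus vocabulary to integer indices
sequence = [2354, 69, 815, 203, 4288]

# positive pairs from the context window plus randomly drawn negative pairs
pairs, labels = skipgrams(sequence, vocabulary_size=vocab_size,
                          window_size=2, negative_samples=1.0)
batch_center = [center for center, context in pairs]
batch_context = [context for center, context in pairs]
model.train_on_batch([batch_center, batch_context], labels)
```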
The transformers folder that contains the implementation is at the following link.
from transformers import KerasWord2VecVectorizer

keras_word2vec_tr = KerasWord2VecVectorizer(embed_size=50, min_count=3, epochs=5000, negative_samples=2)
keras_word2vec_tr

keras_w2v_xgb = Pipeline([
    ('w2v', keras_word2vec_tr),
    ('xgb', xgb)
])
keras_w2v_xgb.fit(X_train, y_train)

y_train_pred = keras_w2v_xgb.predict(X_train)
print('Training set accuracy %s' % accuracy_score(y_train, y_train_pred))
confusion_matrix(y_train, y_train_pred)

y_test_pred = keras_w2v_xgb.predict(X_test)
print('Test set accuracy %s' % accuracy_score(y_test, y_test_pred))
confusion_matrix(y_test, y_test_pred)

print('vocabulary size:', keras_w2v_xgb.named_steps['w2v'].vocab_size_)
keras_w2v_xgb.named_steps['w2v'].most_similar(positive=['stock'])
keras/text_classification/word2vec_text_classification.ipynb
ethen8181/machine-learning
mit
Benchmarks We'll compare the word2vec + xgboost approach with tfidf + logistic regression. The latter approach is known for its interpretability and fast training time, hence serves as a strong baseline. Note that for sklearn's tfidf, we didn't use the default analyzer 'words', as this means it expects that input is a single string which it will try to split into individual words, but our texts are already tokenized, i.e. already lists of words.
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer

tfidf = TfidfVectorizer(stop_words='english', analyzer=lambda x: x)
logistic = LogisticRegression(solver='liblinear', multi_class='auto')
tfidf_logistic = Pipeline([
    ('tfidf', tfidf),
    ('logistic', logistic)
])

from scipy.stats import randint, uniform

w2v_params = {'w2v__size': [100, 150, 200]}
tfidf_params = {'tfidf__ngram_range': [(1, 1), (1, 2)]}
logistic_params = {'logistic__C': [0.5, 1.0, 1.5]}
xgb_params = {'xgb__max_depth': randint(low=3, high=12),
              'xgb__colsample_bytree': uniform(loc=0.8, scale=0.2),
              'xgb__subsample': uniform(loc=0.8, scale=0.2)}
tfidf_logistic_params = {**tfidf_params, **logistic_params}
w2v_xgb_params = {**w2v_params, **xgb_params}

from sklearn.model_selection import RandomizedSearchCV

cv = 3
n_iter = 3
random_state = 1234
scoring = 'accuracy'

all_models = [
    ('w2v_xgb', w2v_xgb, w2v_xgb_params),
    ('tfidf_logistic', tfidf_logistic, tfidf_logistic_params)
]

all_models_info = []
for name, model, params in all_models:
    print('training:', name)
    model_tuned = RandomizedSearchCV(
        estimator=model,
        param_distributions=params,
        cv=cv,
        n_iter=n_iter,
        n_jobs=-1,
        verbose=1,
        scoring=scoring,
        random_state=random_state,
        return_train_score=False
    ).fit(X_train, y_train)

    y_test_pred = model_tuned.predict(X_test)
    test_score = accuracy_score(y_test, y_test_pred)
    info = name, model_tuned.best_score_, test_score, model_tuned
    all_models_info.append(info)

columns = ['model_name', 'train_score', 'test_score', 'estimator']
results = pd.DataFrame(all_models_info, columns=columns)
results = (results
           .sort_values('test_score', ascending=False)
           .reset_index(drop=True))
results
keras/text_classification/word2vec_text_classification.ipynb
ethen8181/machine-learning
mit
Using nbtlib The Named Binary Tag (NBT) file format is a simple structured binary format that is mainly used by the game Minecraft (see the official specification for more details). This short documentation will show you how you can manipulate nbt data using the nbtlib module. Loading a file
import nbtlib

nbt_file = nbtlib.load('nbt_files/bigtest.nbt')
nbt_file['stringTest']
docs/Usage.ipynb
vberlier/nbtlib
mit
By default nbtlib.load will figure out by itself if the specified file is gzipped, but you can also use the gzipped= keyword only argument if you know in advance whether the file is gzipped or not.
uncompressed_file = nbtlib.load('nbt_files/hello_world.nbt', gzipped=False)
uncompressed_file.gzipped
docs/Usage.ipynb
vberlier/nbtlib
mit
The nbtlib.load function also accepts the byteorder= keyword only argument. It lets you specify whether the file is big-endian or little-endian. The default value is 'big', which means that the file is interpreted as big-endian by default. You can set it to 'little' to use the little-endian format.
little_endian_file = nbtlib.load('nbt_files/hello_world_little.nbt', byteorder='little')
little_endian_file.byteorder
docs/Usage.ipynb
vberlier/nbtlib
mit
Objects returned by the nbtlib.load function are instances of the nbtlib.File class. The nbtlib.load function is actually a small helper around the File.load classmethod. If you need to load files from an already opened file-like object, you can use the File.parse class method.
from nbtlib import File

with open('nbt_files/hello_world.nbt', 'rb') as f:
    hello_world = File.parse(f)

hello_world
docs/Usage.ipynb
vberlier/nbtlib
mit
The File class inherits from Compound, which inherits from dict. This means that you can use standard dict operations to access data inside of the file.
nbt_file.keys()
docs/Usage.ipynb
vberlier/nbtlib
mit
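For example, since the loaded file behaves like a dict, the usual dict idioms work directly (the 'stringTest' key is the one shown earlier; everything else is just standard dict usage):

```python
# standard dict operations on the loaded file
'stringTest' in nbt_file          # membership test
len(nbt_file)                     # number of top-level tags
for name, tag in nbt_file.items():
    print(name, type(tag).__name__)
```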
Modifying files
from nbtlib.tag import *

with nbtlib.load('nbt_files/demo.nbt') as demo:
    demo['counter'] = Int(demo['counter'] + 1)

demo
docs/Usage.ipynb
vberlier/nbtlib
mit
If you don't want to use a context manager, you can call the .save method manually to overwrite the original file or make a copy by specifying a different path. The .save method also accepts the gzipped= keyword only argument. By default, the copy will be gzipped if the original file is gzipped. Similarly, you can use the byteorder= keyword only argument to specify whether the file should be saved using the big-endian or little-endian format. By default, the copy will be saved using the same format as the original file.
demo = nbtlib.load('nbt_files/demo.nbt')
...
demo.save()  # overwrite
demo.save('nbt_files/demo_copy.nbt', gzipped=True)  # make a gzipped copy
demo.save('nbt_files/demo_little.nbt', byteorder='little')  # convert the file to little-endian

nbtlib.load('nbt_files/demo_copy.nbt')['counter']
nbtlib.load('nbt_files/demo_little.nbt', byteorder='little')['counter']
docs/Usage.ipynb
vberlier/nbtlib
mit
You can also write nbt data to an already opened file-like object using the .write method.
with open('nbt_files/demo_copy.nbt', 'wb') as f:
    demo.write(f)
docs/Usage.ipynb
vberlier/nbtlib
mit
Creating files
new_file = File({
    'foo': String('bar'),
    'spam': IntArray([1, 2, 3]),
    'egg': List[String](['hello', 'world'])
})
new_file.save('nbt_files/new_file.nbt')

loaded_file = nbtlib.load('nbt_files/new_file.nbt')
loaded_file.gzipped
loaded_file.byteorder
docs/Usage.ipynb
vberlier/nbtlib
mit
New files are uncompressed by default. You can use the gzipped= keyword only argument to create a gzipped file. New files are also big-endian by default. You can use the byteorder= keyword only argument to set the endianness of the file to either 'big' or 'little'.
new_file = File(
    {'thing': LongArray([1, 2, 3])},
    gzipped=True,
    byteorder='little'
)
new_file.save('nbt_files/new_file_gzipped_little.nbt')

loaded_file = nbtlib.load('nbt_files/new_file_gzipped_little.nbt', byteorder='little')
loaded_file.gzipped
loaded_file.byteorder
docs/Usage.ipynb
vberlier/nbtlib
mit
Performing operations on tags With the exception of ByteArray, IntArray and LongArray tags, every tag type inherits from a python builtin, allowing you to make use of their rich and familiar interfaces. ByteArray, IntArray and LongArray tags, on the other hand, inherit from numpy arrays instead of the builtin array type in order to benefit from numpy's efficiency.

| Base type | Associated nbt tags |
| --------- | ------------------- |
| int | Byte, Short, Int, Long |
| float | Float, Double |
| str | String |
| numpy.ndarray | ByteArray, IntArray, LongArray |
| list | List |
| dict | Compound |

All the methods and operations that are usually available on the base types can be used on the associated tags.
my_list = List[String](char.upper() for char in 'hello')
my_list.reverse()
my_list[3:]

my_array = IntArray([1, 2, 3])
my_array + 100

my_pizza = Compound({
    'name': String('Margherita'),
    'price': Double(5.7),
    'size': String('medium')
})
my_pizza.update({'name': String('Calzone'), 'size': String('large')})
my_pizza['price'] = Double(my_pizza['price'] + 2.5)
my_pizza
docs/Usage.ipynb
vberlier/nbtlib
mit
Serializing nbt tags to snbt While using repr() on nbt tags outputs a python representation of the tag, calling str() on nbt tags (or simply printing them) will return the nbt literal representing that tag.
example_tag = Compound({
    'numbers': IntArray([1, 2, 3]),
    'foo': String('bar'),
    'syntax breaking': Float(42),
    'spam': String('{"text":"Hello, world!\\n"}')
})

print(repr(example_tag))
print(str(example_tag))
print(example_tag)
docs/Usage.ipynb
vberlier/nbtlib
mit
Converting nbt tags to strings will serialize them to snbt. If you want more control over the way nbt tags are serialized, you can use the nbtlib.serialize_tag function. In fact, using str on nbt tags simply calls nbtlib.serialize_tag on the specified tag.
from nbtlib import serialize_tag

print(serialize_tag(example_tag))
serialize_tag(example_tag) == str(example_tag)
docs/Usage.ipynb
vberlier/nbtlib
mit
You might have noticed that by default, the nbtlib.serialize_tag function will render strings with single ' or double " quotes based on their content to avoid escaping quoting characters. The string is serialized such that the type of quotes used is different from the first quoting character found in the string. If the string doesn't contain any quoting character, the nbtlib.serialize_tag function will render the string as a double " quoted string.
print(String("contains 'single' quotes"))
print(String('contains "double" quotes'))
print(String('''contains 'single' and "double" quotes'''))
docs/Usage.ipynb
vberlier/nbtlib
mit
You can overwrite this behavior by setting the quote= keyword only argument to either a single ' or a double " quote.
print(serialize_tag(String('forcing "double" quotes'), quote='"'))
docs/Usage.ipynb
vberlier/nbtlib
mit
The nbtlib.serialize_tag function can be used with the compact= keyword only argument to remove all the extra whitespace from the output.
print(serialize_tag(example_tag, compact=True))
docs/Usage.ipynb
vberlier/nbtlib
mit
If you'd rather have something a bit more readable, you can use the indent= keyword only argument to tell the nbtlib.serialize_tag function to output indented snbt. The argument can be either a string or an integer and will be used to define how to render each indentation level.
nested_tag = Compound({
    'foo': List[Int]([1, 2, 3]),
    'bar': String('name'),
    'values': List[Compound]([
        {'test': String('a'), 'thing': ByteArray([32, 32, 32])},
        {'test': String('b'), 'thing': ByteArray([64, 64, 64])}
    ])
})

print(serialize_tag(nested_tag, indent=4))
docs/Usage.ipynb
vberlier/nbtlib
mit
If you need the output to be indented with tabs instead, you can set the indent= argument to '\t'.
print(serialize_tag(nested_tag, indent='\t'))
docs/Usage.ipynb
vberlier/nbtlib
mit
Note that the indent= keyword only argument can be set to any string, not just '\t'.
print(serialize_tag(nested_tag, indent='. '))
docs/Usage.ipynb
vberlier/nbtlib
mit
Creating tags from nbt literals nbtlib supports creating nbt tags from their literal representation. The nbtlib.parse_nbt function can parse snbt and return the appropriate tag.
from nbtlib import parse_nbt

parse_nbt('hello')
parse_nbt('{foo:[{bar:[I;1,2,3]},{spam:6.7f}]}')
docs/Usage.ipynb
vberlier/nbtlib
mit
Note that the parser ignores whitespace.
parse_nbt("""{
    foo: [1, 2, 3],
    bar: "name",
    values: [
        {test: "a", thing: [B; 32B, 32B, 32B]},
        {test: "b", thing: [B; 64B, 64B, 64B]}
    ]
}""")
docs/Usage.ipynb
vberlier/nbtlib
mit
Defining schemas In order to avoid wrapping values manually every time you edit a compound tag, you can define a schema that will take care of converting python types to predefined nbt tags automatically.
from nbtlib import schema

MySchema = schema('MySchema', {
    'foo': String,
    'bar': Short
})

my_object = MySchema({'foo': 'hello world', 'bar': 21})
my_object['bar'] *= 2
my_object
docs/Usage.ipynb
vberlier/nbtlib
mit
By default, you can interact with keys that are not defined in the schema. However, if you use the strict= keyword only argument, the schema instance will raise a TypeError whenever you try to access a key that wasn't defined in the original schema.
MyStrictSchema = schema('MyStrictSchema', {
    'foo': String,
    'bar': Short
}, strict=True)

strict_instance = MyStrictSchema()
strict_instance.update({'foo': 'hello world'})
strict_instance

try:
    strict_instance['something'] = List[String](['this', 'raises', 'an', 'error'])
except TypeError as exc:
    print(exc)
docs/Usage.ipynb
vberlier/nbtlib
mit
The schema function is a helper that creates a class that inherits from CompoundSchema. This means that you can also inherit from the class manually.
from nbtlib import CompoundSchema

class MySchema(CompoundSchema):
    schema = {
        'foo': String,
        'bar': Short
    }

MySchema({'foo': 'hello world', 'bar': 42})
docs/Usage.ipynb
vberlier/nbtlib
mit
You can also set the strict class attribute to True to create a strict schema type.
class MyStrictSchema(CompoundSchema):
    schema = {
        'foo': String,
        'bar': Short
    }
    strict = True

try:
    MyStrictSchema({'something': Byte(5)})
except TypeError as exc:
    print(exc)
docs/Usage.ipynb
vberlier/nbtlib
mit
Combining schemas and custom file types If you need to deal with files that always have a particular structure, you can create a specialized file type by combining it with a schema. For instance, this is how you would create a file type that opens minecraft structure files. First, we need to define what a minecraft structure is, so we create a schema that matches the tag hierarchy.
Structure = schema('Structure', {
    'DataVersion': Int,
    'author': String,
    'size': List[Int],
    'palette': List[schema('State', {
        'Name': String,
        'Properties': Compound,
    })],
    'blocks': List[schema('Block', {
        'state': Int,
        'pos': List[Int],
        'nbt': Compound,
    })],
    'entities': List[schema('Entity', {
        'pos': List[Double],
        'blockPos': List[Int],
        'nbt': Compound,
    })],
})
docs/Usage.ipynb
vberlier/nbtlib
mit
Now let's test our schema by creating a structure. We can see that all the types are automatically applied.
new_structure = Structure({
    'DataVersion': 1139,
    'author': 'dinnerbone',
    'size': [1, 2, 1],
    'palette': [
        {'Name': 'minecraft:dirt'}
    ],
    'blocks': [
        {'pos': [0, 0, 0], 'state': 0},
        {'pos': [0, 1, 0], 'state': 0}
    ],
    'entities': [],
})

type(new_structure['blocks'][0]['pos'])
type(new_structure['entities'])
docs/Usage.ipynb
vberlier/nbtlib
mit
Now we can create a custom file type that wraps our structure schema. Since structure files are always gzipped we can override the load method to default the gzipped argument to True. We also overwrite the constructor so that it can take directly an instance of our structure schema as argument.
class StructureFile(File, Structure):
    def __init__(self, structure_data=None):
        super().__init__(structure_data or {})
        self.gzipped = True

    @classmethod
    def load(cls, filename, gzipped=True):
        return super().load(filename, gzipped)
docs/Usage.ipynb
vberlier/nbtlib
mit
We can now use the custom file type to load, edit and save structure files without having to specify the tags manually.
structure_file = StructureFile(new_structure)
structure_file.save('nbt_files/new_structure.nbt')  # you can load it in a minecraft world!
docs/Usage.ipynb
vberlier/nbtlib
mit
So now let's try to edit the structure. We're going to replace all the dirt blocks with stone blocks.
with StructureFile.load('nbt_files/new_structure.nbt') as structure_file:
    structure_file['palette'][0]['Name'] = 'minecraft:stone'
docs/Usage.ipynb
vberlier/nbtlib
mit
As you can see we didn't need to specify any tag to edit the file.
print(serialize_tag(StructureFile.load('nbt_files/new_structure.nbt'), indent=4))
docs/Usage.ipynb
vberlier/nbtlib
mit
We run the simulation up to a fixed number of iterations, controlled by the variable niter, storing the value of the EM fields $E_y$ and $E_z$ at every timestep so we can analyze them later:
import numpy as np

niter = 1000
Ez_t = np.zeros((niter, sim.nx))

tmax = niter * sim.dt
print("\nRunning simulation up to t = {:g} ...".format(tmax))
while sim.t <= tmax:
    print('n = {:d}, t = {:g}'.format(sim.n, sim.t), end='\r')
    Ez_t[sim.n, :] = sim.emf.Ez
    sim.iter()
print("\nDone.")
python/R-L Waves.ipynb
zambzamb/zpic
agpl-3.0
EM Waves As discussed above, the simulation was initialized with a broad spectrum of waves through the thermal noise of the plasma. We can see the noisy fields in the plot below:
import matplotlib.pyplot as plt

iter = sim.n//2

plt.plot(np.linspace(0, sim.box, num = sim.nx), Ez_t[iter,:], label = "$E_z$")

plt.grid(True)
plt.xlabel("$x_1$ [$c/\omega_n$]")
plt.ylabel("$E$ field []")
plt.title("$E_z$, t = {:g}".format( iter * sim.dt))
plt.legend()
plt.show()
python/R-L Waves.ipynb
zambzamb/zpic
agpl-3.0
R/L-Waves To analyze the dispersion relation of the R/L-waves we use a 2D (Fast) Fourier transform of $E_z(x,t)$ field values that we stored during the simulation. The plot below shows the obtained power spectrum alongside the theoretical prediction for the L-wave and the two solutions for the R-wave. Since the dataset is not periodic along $t$ we apply a windowing technique (Hanning) to the dataset to lower the background spectrum, and make the dispersion relation more visible.
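For reference, the theoretical curves drawn below follow the cold-plasma dispersion relation for propagation parallel to the background field, written in the normalized units the code uses ($\omega_{pe} = 1$, with $\omega_c$ identified with the simulation variable Bx0): \begin{equation} \frac{c^{2}k^{2}}{\omega^{2}} = 1 - \frac{1/\omega^{2}}{1 \mp \omega_{c}/\omega} \end{equation} where the upper sign gives the R-wave and the lower sign the L-wave, with cutoff frequencies \begin{equation} \omega_{R,L} = \frac{1}{2}\left(\sqrt{\omega_{c}^{2} + 4} \pm \omega_{c}\right). \end{equation} These are exactly the wR, wL and $k(\omega)$ expressions evaluated in the next cell.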
import matplotlib.pyplot as plt
import matplotlib.colors as colors

# (omega,k) power spectrum
win = np.hanning(niter)
for i in range(sim.nx):
    Ez_t[:,i] *= win

sp = np.abs(np.fft.fft2(Ez_t))**2
sp = np.fft.fftshift( sp )

k_max = np.pi / sim.dx
omega_max = np.pi / sim.dt

plt.imshow( sp, origin = 'lower', norm=colors.LogNorm(vmin = 1e-5),
            extent = ( -k_max, k_max, -omega_max, omega_max ),
            aspect = 'auto', cmap = 'gray')
plt.colorbar().set_label('$|FFT(E_z)|^2$')

# Theoretical curves
wC = Bx0
wR = 0.5*(np.sqrt( wC**2 + 4) + wC)
wL = 0.5*(np.sqrt( wC**2 + 4) - wC)

w = np.linspace(wL, omega_max, num = 512)
k = w * np.sqrt( 1.0 - 1.0/(w**2 * (1+wC/w) ) )
plt.plot( k, w, label = "L-wave", color = 'b' )

w = np.linspace(wR + 1e-6, omega_max, num = 512)
k = w * np.sqrt( 1.0 - 1.0/(w**2 * (1-wC/w) ) )
plt.plot( k, w, label = "R-wave", color = 'r')

w = np.linspace(1e-6, wC - 1e-6, num = 512)
k = w * np.sqrt( 1.0 - 1.0/(w**2 * (1-Bx0/w) ) )
plt.plot( k, w, label = "R-wave", color = 'r' )

plt.ylim(0,12)
plt.xlim(0,12)
plt.xlabel("$k$ [$\omega_n/c$]")
plt.ylabel("$\omega$ [$\omega_n$]")
plt.title("R/L-waves dispersion relation")
plt.legend()
plt.show()
python/R-L Waves.ipynb
zambzamb/zpic
agpl-3.0
Non-vectorized implementation Pseudocode (Pascal-style):

function secante_modificada(f(x), x_0, delta_x)
    x_actual = x_0
    error_permitido = 0.000001
    while(True)
        x_anterior = x_actual
        x_actual = raiz(f(x), x_anterior, delta_x)
        if x_actual != 0
            error_relativo = abs((x_actual - x_anterior)/x_actual)*100
        end if
        if error_relativo < error_permitido
            exit
        end if
    end while
    mostrar x_actual
end function

or alternatively:

function secante_modificada(f(x), x_0, delta_x)
    x_actual = x_0
    for 1 to maxima_iteracion do
        x_anterior = x_actual
        x_actual = raiz(f(x), x_anterior, delta_x)
    end for
    mostrar x_actual
end function
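Assuming the raiz helper follows the standard modified-secant update (its actual definition lives in an earlier cell of this notebook), each iteration computes \begin{equation} x_{i+1} = x_{i} - \frac{\Delta x \, f(x_{i})}{f(x_{i} + \Delta x) - f(x_{i})} \end{equation} i.e. a Newton-like step in which the derivative is replaced by the finite-difference approximation $f'(x_{i}) \approx \left[f(x_{i} + \Delta x) - f(x_{i})\right]/\Delta x$.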
def secante_modificada(f, x_0, delta_x):
    print("{0:s} \t {1:15s} \t {2:15s} \t {3:15s}".format('i', 'x anterior', 'x actual', 'error relativo %'))
    x_actual = x_0
    i = 0
    print("{0:d} \t {1:15s} \t {2:.15f} \t {3:15s}".format(i, '???????????????', x_actual, '???????????????'))
    error_permitido = 0.000001
    while True:
        x_anterior = x_actual
        x_actual = raiz(f, x_anterior, delta_x)
        if x_actual != 0:
            error_relativo = abs((x_actual - x_anterior)/x_actual)*100
        i = i + 1
        print("{0:d} \t {1:.15f} \t {2:.15f} \t {3:15.11f}".format(i, x_anterior, x_actual, error_relativo))
        if (error_relativo < error_permitido) or (i>=20):
            break
    print('\nx =', x_actual)
01_Raices_de_ecuaciones_de_una_variable/07_Secante_modificada.ipynb
ClaudioVZ/Metodos_numericos_I
gpl-2.0
Example 2 Find the root of \begin{equation} y = x^{5} + x^{3} + 3 \end{equation} using $x = 0$ and $\Delta x = -0.5$
def f(x):
    # f(x) = x^5 + x^3 + 3
    y = x**5 + x**3 + 3
    return y

derivada(f, 0, -0.5)

raiz(f, 0, -0.5)

secante_modificada(f, 0, -0.5)
01_Raices_de_ecuaciones_de_una_variable/07_Secante_modificada.ipynb
ClaudioVZ/Metodos_numericos_I
gpl-2.0
Example 3 Find the root of \begin{equation} y = x^{5} + x^{3} + 3 \end{equation} using $x = 0$ and $\Delta x = -1$
secante_modificada(f, 0, -1)
01_Raices_de_ecuaciones_de_una_variable/07_Secante_modificada.ipynb
ClaudioVZ/Metodos_numericos_I
gpl-2.0
Writing a training loop from scratch <table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://tensorflow.google.cn/guide/keras/writing_a_training_loop_from_scratch"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a> </td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/keras/writing_a_training_loop_from_scratch.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/keras/writing_a_training_loop_from_scratch.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/guide/keras/writing_a_training_loop_from_scratch.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a> </td>
</table> Setup
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
site/zh-cn/guide/keras/writing_a_training_loop_from_scratch.ipynb
tensorflow/docs-l10n
apache-2.0
Introduction Keras provides default training and evaluation loops, fit() and evaluate(). Their usage is covered in the guide Training and evaluation with the built-in methods. If you want to customize the learning algorithm of your model while still leveraging the convenience of fit() (for instance, to train a GAN using fit()), you can subclass the Model class and implement your own train_step() method, which is called repeatedly during fit(). This is covered in the guide <a href="https://tensorflow.google.cn/guide/keras/customizing_what_happens_in_fit/" data-md-type="link">Customizing what happens in fit()</a>. Now, if you want very low-level control over training and evaluation, you should write your own training and evaluation loops from scratch. That is what this guide covers. Using GradientTape: a first end-to-end example Calling a model inside a GradientTape scope enables you to retrieve the gradients of the trainable weights of the layer with respect to a loss value. Using an optimizer instance, you can use these gradients to update the variables (which you can retrieve with model.trainable_weights). Let's consider a simple MNIST model:
inputs = keras.Input(shape=(784,), name="digits")
x1 = layers.Dense(64, activation="relu")(inputs)
x2 = layers.Dense(64, activation="relu")(x1)
outputs = layers.Dense(10, name="predictions")(x2)

model = keras.Model(inputs=inputs, outputs=outputs)
site/zh-cn/guide/keras/writing_a_training_loop_from_scratch.ipynb
tensorflow/docs-l10n
apache-2.0
Let's train it using mini-batch gradient descent with a custom training loop. First, we're going to need an optimizer, a loss function, and a dataset:
# Instantiate an optimizer.
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)

# Prepare the training dataset.
batch_size = 64
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = np.reshape(x_train, (-1, 784))
x_test = np.reshape(x_test, (-1, 784))

# Reserve 10,000 samples for validation.
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]

# Prepare the training dataset.
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)

# Prepare the validation dataset.
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(batch_size)
site/zh-cn/guide/keras/writing_a_training_loop_from_scratch.ipynb
tensorflow/docs-l10n
apache-2.0
Here's our training loop: We open a for loop that iterates over epochs. For each epoch, we open a for loop that iterates over the dataset, in batches. For each batch, we open a GradientTape() scope. Inside this scope, we call the model (forward pass) and compute the loss. Outside the scope, we retrieve the gradients of the model weights with regard to the loss. Finally, we use the optimizer to update the weights of the model based on those gradients.
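That last step is plain gradient descent: with the SGD optimizer instantiated above (keras.optimizers.SGD with learning rate $\eta = 10^{-3}$ and, by default, no momentum), each call to apply_gradients updates every trainable weight $w$ as \begin{equation} w \leftarrow w - \eta \, \frac{\partial L}{\partial w}. \end{equation}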
epochs = 2
for epoch in range(epochs):
    print("\nStart of epoch %d" % (epoch,))

    # Iterate over the batches of the dataset.
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):

        # Open a GradientTape to record the operations run
        # during the forward pass, which enables auto-differentiation.
        with tf.GradientTape() as tape:

            # Run the forward pass of the layer.
            # The operations that the layer applies
            # to its inputs are going to be recorded
            # on the GradientTape.
            logits = model(x_batch_train, training=True)  # Logits for this minibatch

            # Compute the loss value for this minibatch.
            loss_value = loss_fn(y_batch_train, logits)

        # Use the gradient tape to automatically retrieve
        # the gradients of the trainable variables with respect to the loss.
        grads = tape.gradient(loss_value, model.trainable_weights)

        # Run one step of gradient descent by updating
        # the value of the variables to minimize the loss.
        optimizer.apply_gradients(zip(grads, model.trainable_weights))

        # Log every 200 batches.
        if step % 200 == 0:
            print(
                "Training loss (for one batch) at step %d: %.4f"
                % (step, float(loss_value))
            )
            print("Seen so far: %s samples" % ((step + 1) * batch_size))
site/zh-cn/guide/keras/writing_a_training_loop_from_scratch.ipynb
tensorflow/docs-l10n
apache-2.0
Low-level handling of metrics Let's add metrics monitoring to this basic loop. You can readily reuse the built-in metrics (or custom ones you wrote) in such training loops written from scratch. Here's the flow: Instantiate the metric at the start of the loop. Call metric.update_state() after each batch. Call metric.result() when you need to display the current value of the metric. Call metric.reset_states() when you need to clear the state of the metric (typically at the end of an epoch). Let's use this knowledge to compute SparseCategoricalAccuracy on validation data at the end of each epoch:
# Get model
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)

# Instantiate an optimizer to train the model.
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)

# Prepare the metrics.
train_acc_metric = keras.metrics.SparseCategoricalAccuracy()
val_acc_metric = keras.metrics.SparseCategoricalAccuracy()
site/zh-cn/guide/keras/writing_a_training_loop_from_scratch.ipynb
tensorflow/docs-l10n
apache-2.0
Here's our training and evaluation loop:
import time

epochs = 2
for epoch in range(epochs):
    print("\nStart of epoch %d" % (epoch,))
    start_time = time.time()

    # Iterate over the batches of the dataset.
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        with tf.GradientTape() as tape:
            logits = model(x_batch_train, training=True)
            loss_value = loss_fn(y_batch_train, logits)
        grads = tape.gradient(loss_value, model.trainable_weights)
        optimizer.apply_gradients(zip(grads, model.trainable_weights))

        # Update training metric.
        train_acc_metric.update_state(y_batch_train, logits)

        # Log every 200 batches.
        if step % 200 == 0:
            print(
                "Training loss (for one batch) at step %d: %.4f"
                % (step, float(loss_value))
            )
            print("Seen so far: %d samples" % ((step + 1) * batch_size))

    # Display metrics at the end of each epoch.
    train_acc = train_acc_metric.result()
    print("Training acc over epoch: %.4f" % (float(train_acc),))

    # Reset training metrics at the end of each epoch
    train_acc_metric.reset_states()

    # Run a validation loop at the end of each epoch.
    for x_batch_val, y_batch_val in val_dataset:
        val_logits = model(x_batch_val, training=False)
        # Update val metrics
        val_acc_metric.update_state(y_batch_val, val_logits)
    val_acc = val_acc_metric.result()
    val_acc_metric.reset_states()
    print("Validation acc: %.4f" % (float(val_acc),))
    print("Time taken: %.2fs" % (time.time() - start_time))
site/zh-cn/guide/keras/writing_a_training_loop_from_scratch.ipynb
tensorflow/docs-l10n
apache-2.0
Speeding up your training step with tf.function The default runtime in TensorFlow 2.0 is eager execution. As such, the training loop above executes eagerly. This is great for debugging, but graph compilation has a definite performance advantage. Describing your computation as a static graph enables the framework to apply global performance optimizations. This is impossible when the framework is constrained to greedily execute one operation after another, with no knowledge of what comes next. You can compile into a static graph any function that takes tensors as input. Just add a @tf.function decorator on it, like this:
@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        logits = model(x, training=True)
        loss_value = loss_fn(y, logits)
    grads = tape.gradient(loss_value, model.trainable_weights)
    optimizer.apply_gradients(zip(grads, model.trainable_weights))
    train_acc_metric.update_state(y, logits)
    return loss_value
site/zh-cn/guide/keras/writing_a_training_loop_from_scratch.ipynb
tensorflow/docs-l10n
apache-2.0
Let's do the same with the evaluation step:
@tf.function
def test_step(x, y):
    val_logits = model(x, training=False)
    val_acc_metric.update_state(y, val_logits)
site/zh-cn/guide/keras/writing_a_training_loop_from_scratch.ipynb
tensorflow/docs-l10n
apache-2.0