TOP 20 most asylum seekers
population_type_count('Asylum seekers')
foundations-homework/08/homework-08-gruen-dataset3-refugees.ipynb
gcgruen/homework
mit
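The repeated `population_type_count(...)` calls in this notebook rely on a helper defined earlier and not shown in this excerpt. A minimal sketch of what such a helper might look like — the column names (`'Population type'`, `'Country'`, `'2013'`) and the top-20 cutoff are assumptions, not taken from the notebook:

```python
import pandas as pd

def population_type_count(df, pop_type, year='2013', top=20):
    """Hypothetical helper: filter one population type, sum the given year's
    counts per country, and return the top entries. Column names are assumed."""
    subset = df[df['Population type'] == pop_type]
    totals = subset.groupby('Country')[year].sum()
    return totals.sort_values(ascending=False).head(top)

# tiny illustrative frame
demo = pd.DataFrame({
    'Country': ['A', 'A', 'B'],
    'Population type': ['Asylum seekers'] * 3,
    '2013': [10, 5, 20],
})
print(population_type_count(demo, 'Asylum seekers'))
```

In the notebook the DataFrame argument would be the working `recent` frame rather than `demo`.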
TOP 20 most internally displaced
population_type_count('Internally displaced')
TOP 20 most stateless
population_type_count('Stateless')
TOP 20 most returned IDPs
population_type_count('Returned IDPs')
TOP 20 most returned refugees
population_type_count('Returned refugees')
9) Which were the TOP10 countries, most refugees returned home from in 2013?
recent.columns
returned_refugees = recent[recent['Population type'] == 'Returned refugees']
returned_refugees.groupby('Origin_Returned_from')['2013'].sum()
returned_table = pd.DataFrame(returned_refugees.groupby('Origin_Returned_from')['2013'].sum())
returned_table.sort_values(by='2013', ascending=False).head(10)
10) Which country of origin were most asylum seekers from in 2013?
asylum_seekers = recent[recent['Population type'] == 'Asylum seekers']
seekers_table = pd.DataFrame(asylum_seekers.groupby('Origin_Returned_from')['2013'].sum())
seekers_table.sort_values(by='2013', ascending=False).head(10)
11) What is the overall number of (asylum seekers/refugees/idp) for each year?
df.columns
years_available = ['2000', '2001', '2002', '2003', '2004', '2005', '2006',
                   '2007', '2008', '2009', '2010', '2011', '2012', '2013']
pop_types = ['Asylum seekers', 'Refugees', 'Internally displaced']
totals_dict_list = []
for poptype in pop_types:
    poptype_dictionary = {}
    for year in years_available:
        ...
12) Line graph or stacked bar chart for 11)
asylum_over_time = totals_dict_list[0]
asylums_table = pd.DataFrame(asylum_over_time, index=['Total asylum seekers per year'])
asylums_table
# asylums_table.plot(kind='bar')
refugees_over_time = totals_dict_list[1]
refugees_table = pd.DataFrame(refugees_over_time, index=['Total refugees per year'])
refugees_table
idp_...
Inverse Distance Verification: Cressman and Barnes Compare inverse distance interpolation methods Two popular interpolation schemes that use inverse distance weighting of observations are the Barnes and Cressman analyses. The Cressman analysis is relatively straightforward and uses the ratio between distance of an obse...
import matplotlib.pyplot as plt
import numpy as np
from scipy.spatial import cKDTree

from metpy.interpolate.geometry import dist_2
from metpy.interpolate.points import barnes_point, cressman_point
from metpy.interpolate.tools import average_spacing, calc_kappa


def draw_circle(ax, x, y, r, m, label):
    th = np.lins...
v1.1/_downloads/f8c7f51c50c58b17901913e49a5b977e/Inverse_Distance_Verification.ipynb
metpy/MetPy
bsd-3-clause
Generate random x and y coordinates, and observation values proportional to x * y. Set up two test grid locations at (30, 30) and (60, 60).
np.random.seed(100)

pts = np.random.randint(0, 100, (10, 2))
xp = pts[:, 0]
yp = pts[:, 1]
zp = xp * xp / 1000

sim_gridx = [30, 60]
sim_gridy = [30, 60]
Set up a cKDTree object and query all of the observations within "radius" of each grid point. The variable indices represents the index of each matched coordinate within the cKDTree's data list.
grid_points = np.array(list(zip(sim_gridx, sim_gridy)))

radius = 40
obs_tree = cKDTree(list(zip(xp, yp)))
indices = obs_tree.query_ball_point(grid_points, r=radius)
For grid 0, we will use Cressman to interpolate its value.
x1, y1 = obs_tree.data[indices[0]].T
cress_dist = dist_2(sim_gridx[0], sim_gridy[0], x1, y1)
cress_obs = zp[indices[0]]
cress_val = cressman_point(cress_dist, cress_obs, radius)
For grid 1, we will use Barnes to interpolate its value. We need to calculate kappa, which is derived from the average spacing between observations over the domain.
x2, y2 = obs_tree.data[indices[1]].T
barnes_dist = dist_2(sim_gridx[1], sim_gridy[1], x2, y2)
barnes_obs = zp[indices[1]]

kappa = calc_kappa(average_spacing(list(zip(xp, yp))))
barnes_val = barnes_point(barnes_dist, barnes_obs, kappa)
Plot all of the affiliated information and interpolation values.
fig, ax = plt.subplots(1, 1, figsize=(15, 10))

for i, zval in enumerate(zp):
    ax.plot(pts[i, 0], pts[i, 1], '.')
    ax.annotate(str(zval) + ' F', xy=(pts[i, 0] + 2, pts[i, 1]))

ax.plot(sim_gridx, sim_gridy, '+', markersize=10)
ax.plot(x1, y1, 'ko', fillstyle='none', markersize=10, label='grid 0 matches')
ax.plot(...
For each point, we will do a manual check of the interpolation values by doing a step by step and visual breakdown. Plot the grid point, observations within radius of the grid point, their locations, and their distances from the grid point.
fig, ax = plt.subplots(1, 1, figsize=(15, 10))
ax.annotate(f'grid 0: ({sim_gridx[0]}, {sim_gridy[0]})', xy=(sim_gridx[0] + 2, sim_gridy[0]))
ax.plot(sim_gridx[0], sim_gridy[0], '+', markersize=10)

mx, my = obs_tree.data[indices[0]].T
mz = zp[indices[0]]

for x, y, z in zip(mx, my, mz):
    d = np.sqrt((sim_gridx[0] - ...
Step through the Cressman calculations.
dists = np.array([22.803508502, 7.21110255093, 31.304951685, 33.5410196625])
values = np.array([0.064, 1.156, 3.364, 0.225])

cres_weights = (radius * radius - dists * dists) / (radius * radius + dists * dists)
total_weights = np.sum(cres_weights)
proportion = cres_weights / total_weights
value = values * proportion

v...
Now repeat for grid 1, except use Barnes interpolation.
fig, ax = plt.subplots(1, 1, figsize=(15, 10))
ax.annotate(f'grid 1: ({sim_gridx[1]}, {sim_gridy[1]})', xy=(sim_gridx[1] + 2, sim_gridy[1]))
ax.plot(sim_gridx[1], sim_gridy[1], '+', markersize=10)

mx, my = obs_tree.data[indices[1]].T
mz = zp[indices[1]]

for x, y, z in zip(mx, my, mz):
    d = np.sqrt((sim_gridx[1] - ...
Step through the Barnes calculations.
dists = np.array([9.21954445729, 22.4722050542, 27.892651362, 38.8329756779])
values = np.array([2.809, 6.241, 4.489, 2.704])

weights = np.exp(-dists**2 / kappa)
total_weights = np.sum(weights)
value = np.sum(values * (weights / total_weights))

print('Manual barnes value:\t', value)
print('Metpy barnes value:\t', bar...
Read the IMERG files for one day.
inpath = '%s%4d/%02d/%02d/' % (imergpath, year, 4, 1)
files = sorted(glob('%s3B-HHR*' % inpath))
nt = len(files)

with h5py.File(files[0]) as f:
    lats = f['Grid/lat'][:]
    lons = f['Grid/lon'][:]
    fillvalue = f['Grid/precipitationCal'].attrs['_FillValue']

nlon, nlat = len(lons), len(lats)
Pimerg = np.ma.m...
150528 How Much of Earth is Raining at Any One Time.ipynb
JacksonTanBS/iPythonNotebooks
gpl-2.0
Calculate the ratio of the number of grid boxes with rain (above 0.01 mm / h threshold) to the number of valid grid boxes (mainly between ±60° latitudes).
print(np.ma.sum(Pimerg > 0.01) / np.ma.count(Pimerg))
Therefore, about 5.5% of the Earth's area is raining at any one time. The simplification made here is that the ratio of grid boxes is similar to the ratio of areas. Put another way, a grid box at high latitude has a smaller area than a grid box at the equator and thus contributes less to the ratio. However...
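The caveat above — equal-angle grid boxes shrink toward the poles — can be addressed by weighting each box by the cosine of its latitude. A hedged sketch on synthetic data (the array names and threshold are illustrative, not from the notebook):

```python
import numpy as np

# synthetic example: a lat/lon grid and a random "rain rate" field
lats = np.linspace(-59.95, 59.95, 1200)           # grid-box centre latitudes
rng = np.random.default_rng(0)
rain = rng.random((1200, 100))                    # shape (lat, lon)

raining = rain > 0.99                             # boxes above the threshold
weights = np.cos(np.deg2rad(lats))[:, np.newaxis]  # area weight per latitude row
weights = np.broadcast_to(weights, rain.shape)

frac_unweighted = raining.mean()
frac_weighted = (raining * weights).sum() / weights.sum()
print(frac_unweighted, frac_weighted)
```

With real IMERG data the weighting would be applied to `Pimerg` after masking fill values; between ±60° the correction is modest but not negligible.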
# print system info: please manually input non-core imported modules
%load_ext version_information
%version_information numpy, h5py
Data Simulation
#@title Create fake dataset of adgroups and conversion values
#@markdown We are generating random data: each row is an individual conversion
#@markdown with a given conversion value. \
#@markdown For each conversion, we know the
#@markdown adgroup, which is our only feature here and just consists of 3 letters.

n_conse...
cocoa/cocoa_template.ipynb
google/consent-based-conversion-adjustments
apache-2.0
Preprocessing
#@title Split adgroups in separate levels
#@markdown We preprocess our data. Consenting and non-consenting data are
#@markdown concatenated to ensure that they have the same feature-columns. \
#@markdown We then split our adgroup-string into its components and dummy code each.
#@markdown The level of each letter in the...
Create a NearestCustomerMatcher object and run the conversion adjustments. We now have our fake data in the right format; similarity here depends solely on the adgroup of a given customer. In reality, we would have a gCLID and a timestamp for each customer that we could pass as id_columns to the matcher. Other example featu...
matcher = nearest_consented_customers.NearestCustomerMatcher(
    data_consenting, conversion_column='conversion_value', id_columns=['index'])
data_adjusted = matcher.calculate_adjusted_conversions(
    data_nonconsenting, number_nearest_neighbors=100)

#@title We generated a new dataframe containing the conversion-val...
TF Lattice Aggregate Function Models <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://tensorflow.google.cn/lattice/tutorials/aggregate_function_models"> <img src="https://tensorflow.google.cn/images/tf_logo_32px.png"> View on TensorFlow.org</a></td> <td><a target="_blank" href="https://cola...
#@test {"skip": true}
!pip install tensorflow-lattice pydot
site/zh-cn/lattice/tutorials/aggregate_function_models.ipynb
tensorflow/docs-l10n
apache-2.0
Import the required packages:
import tensorflow as tf

import collections
import logging
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl

logging.disable(sys.maxsize)
Download the Puzzles dataset:
train_dataframe = pd.read_csv(
    'https://raw.githubusercontent.com/wbakst/puzzles_data/master/train.csv')
train_dataframe.head()

test_dataframe = pd.read_csv(
    'https://raw.githubusercontent.com/wbakst/puzzles_data/master/test.csv')
test_dataframe.head()
Extract and convert the features and labels
# Features:
# - star_rating       rating out of 5 stars (1-5)
# - word_count        number of words in the review
# - is_amazon         1 = reviewed on amazon; 0 = reviewed on artifact website
# - includes_photo    if the review includes a photo of the puzzle
# - num_helpful       number of people that found this revie...
Set the default values used for training in this guide:
LEARNING_RATE = 0.1
BATCH_SIZE = 128
NUM_EPOCHS = 500
MIDDLE_DIM = 3
MIDDLE_LATTICE_SIZE = 2
MIDDLE_KEYPOINTS = 16
OUTPUT_KEYPOINTS = 8
Feature configs. Feature calibration and per-feature configurations are set using tfl.configs.FeatureConfig. Feature configurations include monotonicity constraints, per-feature regularization (see tfl.configs.RegularizerConfig), and lattice sizes for lattice models. Note that we must fully specify the feature config for any feature we want the model to recognize; otherwise the model has no way of knowing that such a feature exists. For aggregation models, these features are automatically considered and handled as ragged. Computing quantiles. Although the default setting for pwl_calibration_input_keypoints in tfl.configs.FeatureConfig is 'quantiles', for premade models we have to manually define the input keypoints. To do so, we first define our own helper function for computing quantiles.
def compute_quantiles(features,
                      num_keypoints=10,
                      clip_min=None,
                      clip_max=None,
                      missing_value=None):
    # Clip min and max if desired.
    if clip_min is not None:
        features = np.maximum(features, clip_min)
        features = np.append(...
Defining our feature configs. Now that we can compute quantiles, we define a feature config for each feature that we want the model to take as input.
# Feature configs are used to specify how each feature is calibrated and used.
feature_configs = [
    tfl.configs.FeatureConfig(
        name='star_rating',
        lattice_size=2,
        monotonicity='increasing',
        pwl_calibration_num_keypoints=5,
        pwl_calibration_input_keypoints=compute_quantiles(
            ...
Aggregate function model. To construct a TFL premade model, first build a model configuration from tfl.configs. An aggregate function model is constructed using tfl.configs.AggregateFunctionConfig. It applies piecewise-linear and categorical calibration, followed by a lattice model on each dimension of the ragged input. It then applies an aggregation layer over the output for each dimension, followed by an optional output piecewise-linear calibration.
# Model config defines the model structure for the aggregate function model.
aggregate_function_model_config = tfl.configs.AggregateFunctionConfig(
    feature_configs=feature_configs,
    middle_dimension=MIDDLE_DIM,
    middle_lattice_size=MIDDLE_LATTICE_SIZE,
    middle_calibration=True,
    middle_calibration_num_k...
The output of each aggregation layer is the averaged output of a calibrated lattice over the ragged inputs. Here is the model used inside the first aggregation layer:
aggregation_layers = [
    layer for layer in aggregate_function_model.layers
    if isinstance(layer, tfl.layers.Aggregation)
]
tf.keras.utils.plot_model(
    aggregation_layers[0].model, show_layer_names=False, rankdir='LR')
Now, as with any other tf.keras.Model, we compile and fit the model to our data.
aggregate_function_model.compile(
    loss='mae',
    optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
aggregate_function_model.fit(
    train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
After training our model, we can evaluate it on the test set.
print('Test Set Evaluation...')
print(aggregate_function_model.evaluate(test_xs, test_ys))
I don't think anyone's car was built in 0AD. Discard the '0's as NaN.
violations_df = violations_df[violations_df['Vehicle Year'] != 0]
# print(violations_df['Vehicle Year'].head())
11/.ipynb_checkpoints/Parking Data-checkpoint.ipynb
M0nica/python-foundations-hw
mit
"Date first observed" is a pretty weird column, but it seems like it has a date hiding inside. Using a function with .apply, transform the string (e.g. "20140324") into a Python date. Make the 0's show up as NaN.
# print(violations_df['Date First Observed'].head())
import datetime

violations_df.head()['Issue Date'].astype(datetime.datetime)
# violations_df['Issue Date'] = violations_df['Issue Date'].astype('datetime64[ns]')
# violations_df.dtypes
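The cell above pokes at `Issue Date` but does not actually implement the asked-for transformation of "Date First Observed". A hedged sketch of one way to do it with `.apply` — the helper name is hypothetical, and the yyyymmdd layout is taken from the prompt's example "20140324":

```python
import numpy as np
import pandas as pd

def parse_observed(value):
    """Turn a yyyymmdd value like 20140324 into a date; 0 becomes NaN."""
    text = str(value)
    if text in ('0', 'nan'):
        return np.nan
    try:
        return pd.to_datetime(text, format='%Y%m%d').date()
    except (ValueError, TypeError):
        return np.nan

# violations_df['Date First Observed'] = (
#     violations_df['Date First Observed'].apply(parse_observed))
print(parse_observed(20140324))
```

Anything that does not parse as a yyyymmdd date (including the literal 0) comes back as NaN, which matches the prompt's "make the 0's show up as NaN".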
"Violation time" is... not a time. Make it a time.
import re
# violations_df['Violation Time'].head()
import numpy as np

# Build a function using that method
def time_to_datetime(str_time):
    try:
        # str_time = re.sub('^0', '', str_time)
        if isinstance(str_time, str):
            str_time = str_time + "M"
            # print("Trying to convert", str_time, "i...
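The function above is cut off mid-body. A self-contained sketch of the same idea — NYC violation times look like `"0143A"`, so appending `"M"` makes them parseable with `%I%M%p`; the function name is hypothetical and the edge-case handling is an assumption:

```python
from datetime import datetime
import numpy as np

def violation_time_to_time(raw):
    """Parse NYC-style violation times like '0143A' into datetime.time.

    The trailing 'A'/'P' marks AM/PM; appending 'M' yields the '%I%M%p'
    format. Unparseable or non-string values become NaN.
    """
    if not isinstance(raw, str):
        return np.nan
    try:
        return datetime.strptime(raw + 'M', '%I%M%p').time()
    except ValueError:
        return np.nan

# violations_df['Violation Time'] = (
#     violations_df['Violation Time'].apply(violation_time_to_time))
print(violation_time_to_time('0143A'))
```

Rows with malformed times (e.g. hour "00", which `%I` rejects) end up as NaN rather than raising.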
There sure are a lot of colors of cars, too bad so many of them are the same. Merge "BLK" and "BLACK", "WT" and "WHITE", and any other combinations that you notice.
import re

def vehicle_colors(vehicle):
    if isinstance(vehicle, str):
        # print(vehicle)
        vehicle = re.sub('^GY$', 'GREY', vehicle)
        # vehicle = vehicle.replace("GY", "GREY")
        vehicle = re.sub('^WH$', 'WHITE', vehicle)
        vehicle = re.sub('^BR$', 'BROWN', vehicle)
        vehicle = re.s...
Join the data with the Parking Violations Code dataset from the NYC Open Data site.
print(violations_df.columns)
violations_df['Violation Code'].head()
violations_df
print(dof_violations_df.columns)
dof_violations_df['CODE'].head()
# violations_df.merge(dof_violations_df, left_on='Violation Code', right_on='CODE')
# print(dof_violations_df['CODE'][0])
# test = dof_violations_df['CODE'][0]

def to_in...
violations_df = violations_df.merge(dof_violations_df, left_on='Violation Code', right_on='CODE')
How much money did NYC make off of parking violations?
violations_df.describe()
violations_df.columns
violations_df['Summons Number']
violations_df['Manhattan\xa0 96th St. & below']
violations_df['Manhattan\xa0 96th St. & below'] = violations_df['Manhattan\xa0 96th St. & below'].apply(to_int)
violations_df['Manhattan\xa0 96th St. & below'].sum()
violations_df['All Ot...
What's the most lucrative kind of parking violation? The list below shows the ones that charge $115, which is the highest amount.
violations_df.groupby('CODE')['All Other Areas'].mean().sort_values(ascending=False).head(8)
The most frequent parking violation is:
import matplotlib.pyplot as plt
%matplotlib inline

freqNYviolations = violations_df.groupby('CODE')['All Other Areas'].count().sort_values(ascending=False).head().plot(
    kind="bar", color=['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'])
freqNYviolations.set_title('Most Frequent NYC Parking Violations by Code')
f...
New Jersey has bad drivers, but does it have bad parkers, too?
# keep only New Jersey registrations (the original used !=, which kept everything but NJ)
nj_violations_df = violations_df[violations_df['Registration State'] == 'NJ']
print("There were", len(nj_violations_df['All Other Areas']), "parking violations in NYC by drivers registered in New Jersey")
print("These violations totaled", nj_violations_df['All Other Areas'].sum(), "of revenue")
How much money does NYC make off of all non-New York vehicles? Make a chart of the top few.
violations_df.groupby('Registration State')['All Other Areas'].sum().sort_values(ascending=False).head(8)

import matplotlib.pyplot as plt
%matplotlib inline

nonny_violations_df = violations_df[violations_df['Registration State'] != 'NY']
nonny_violations_df = nonny_violations_df[violations_df['Registration State'] !...
What time of day do people usually get their tickets? You can break the day up into several blocks - for example 12am-6am, 6am-12pm, 12pm-6pm, 6pm-12am. What's the average ticket cost in NYC?
print("The average ticket cost in NYC is $", violations_df['All Other Areas'].mean())
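The cell above answers the average-cost half of the question but not the time-of-day half. A hedged sketch of the bucketing, assuming "Violation Time" has already been parsed into `datetime.time` objects (the block labels follow the prompt's suggested split):

```python
from datetime import time

def time_block(t):
    """Map a datetime.time to one of four six-hour blocks; None if unparsed."""
    if not isinstance(t, time):
        return None
    if t < time(6):
        return '12am-6am'
    if t < time(12):
        return '6am-12pm'
    if t < time(18):
        return '12pm-6pm'
    return '6pm-12am'

# violations_df['Time Block'] = violations_df['Violation Time'].apply(time_block)
# violations_df['Time Block'].value_counts()
print(time_block(time(13, 30)))
```

`value_counts()` on the resulting column would show which block collects the most tickets.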
Make a graph of the number of tickets per day.
violations_df.groupby('Issue Date')['All Other Areas'].count().sort_values(ascending=False)

freqnonNYviolations = violations_df.groupby('Issue Date')['All Other Areas'].count().plot(
    kind="bar", color=['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'])
freqnonNYviolations.set_title('Frequency of NYC Parking Viol...
Make a graph of the amount of revenue collected per day.
freqnonNYviolations = violations_df.groupby('Issue Date')['All Other Areas'].sum().plot(
    kind="bar", color=['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'])
freqnonNYviolations.set_title('Revenue of NYC Parking Violations by Date')
freqnonNYviolations.set_xlabel('Date')
freqnonNYviolations.set_ylabel('Revenue G...
We all know that sequences are iterable; here is why. The iter function: whenever the interpreter needs to iterate over an object x, it automatically calls iter(x). The built-in iter function does the following: it checks whether the object implements __iter__, and calls it to obtain an iterator if so; if __iter__ is not implemented but __getitem__ is, Python creates an iterator that tries to fetch items in order, starting from index 0; if that fails, Python raises TypeError, usually saying "C object is not iterable", where C is the class of the target object. This is why any Python sequence is iterable: they all implement __getitem__. In fact, the standard...
from collections import abc

class Foo:
    def __iter__(self):
        pass

issubclass(Foo, abc.Iterable)
f = Foo()
isinstance(f, abc.Iterable)
content/fluent_python/14_iter.ipynb
ghvn7777/ghvn7777.github.io
apache-2.0
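The __getitem__ fallback described above can be seen with a minimal class (an illustrative sketch, not from the book): it defines no __iter__, yet Python can still iterate it, and the ABC check correctly reports it as not Iterable.

```python
from collections import abc

class Squares:
    """No __iter__, only __getitem__: iter() falls back to index-based iteration."""
    def __getitem__(self, index):
        if index >= 3:
            raise IndexError(index)
        return index * index

print(list(Squares()))  # fetches indexes 0, 1, 2 until IndexError → [0, 1, 4]
print(isinstance(Squares(), abc.Iterable))  # False: the ABC only checks __iter__
```

This is exactly the gap the text mentions: iter(x) recognizes the object as iterable even though isinstance(x, abc.Iterable) does not.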
Note, however, that although the Sentence class defined earlier is iterable, it fails the issubclass(Sentence, abc.Iterable) test. Since Python 3.4, the most accurate way to check whether an object x is iterable is to call iter(x) and handle a TypeError if it is not. This is more accurate than isinstance(x, abc.Iterable), because iter(x) also considers the __getitem__ method. Explicitly checking before iterating is usually unnecessary, since trying to iterate over a non-iterable raises an obvious error anyway. If you need to do more than propagate the TypeError, use a try/except block instead of an explicit check. If you are holding on to the object to iterate over later...
s = 'ABC'
for char in s:
    print(char)
With a while loop, it would look like this:
s = 'ABC'
it = iter(s)
while True:
    try:
        print(next(it))
    except StopIteration:  # signals that the iterator is exhausted
        del it
        break
The standard iterator interface has two methods: __next__ returns the next available item, raising StopIteration when there are no more items; __iter__ returns self, so iterators can be used wherever an iterable is expected, e.g. in a for loop. This interface lives in the collections.abc.Iterator ABC, which defines the __next__ abstract method and inherits from Iterable, where the __iter__ abstract method is defined. The __subclasshook__ method of abc.Iterator checks for the presence of both __iter__ and __next__ attributes. The best way to check whether an object x is an iterator is to call isinsta...
s3 = Sentence('Pig and Pepper')
it = iter(s3)
it
next(it)
next(it)
next(it)
next(it)
list(it)  # once exhausted, the iterator is useless
list(s3)  # to iterate again, build a new iterator
Because an iterator only needs the __next__ and __iter__ methods, there is no way to check whether items remain other than calling next() and catching StopIteration. Nor is there a way to "reset" an iterator. If you need to start over, call iter(...) on the iterable that built the iterator in the first place; passing the iterator itself is useless, because, as noted, Iterator.__iter__ returns the instance itself, which cannot restore an exhausted iterator. This yields a definition of an iterator: an object implementing a no-argument __next__ method that returns the next item in a series, raising StopIteration when there are no more items. Iterators in Python also implement __iter__ ...
import re
import reprlib

RE_WORD = re.compile(r'\w+')

class Sentence:
    def __init__(self, text):
        self.text = text
        self.words = RE_WORD.findall(text)

    def __repr__(self):
        return 'Sentence(%s)' % reprlib.repr(self.text)

    def __iter__(self):
        return SentenceItera...
Note that for this example it is not necessary to implement __iter__ in SentenceIterator, but doing so is the right thing: iterators are supposed to implement both __next__ and __iter__, and doing so lets the iterator pass the issubclass(SentenceIterator, abc.Iterator) test. If SentenceIterator subclassed abc.Iterator, it would inherit the concrete abc.Iterator.__iter__ method. Also note that most of the code in SentenceIterator deals with managing the iterator's internal state; we'll see how to simplify that shortly, but first consider a shortcut that seems reasonable and is actually wrong: turning Sentence itself into an iterator, a bad...
import re
import reprlib

RE_WORD = re.compile(r'\w+')

class Sentence:
    def __init__(self, text):
        self.text = text
        self.words = RE_WORD.findall(text)

    def __repr__(self):
        return 'Sentence(%s)' % reprlib.repr(self.text)

    def __iter__(self):
        for word in self.wor...
In this example the iterator is in fact a generator object, built automatically each time __iter__ is called, because __iter__ here is a generator function. How a generator function works: any Python function with the yield keyword in its body is a generator function; calling it returns a generator object. In other words, a generator function is a generator factory. The following very simple function demonstrates generator behavior:
def gen_123():
    yield 1
    yield 2
    yield 3

gen_123
gen_123()
for i in gen_123():
    print(i)

g = gen_123()
next(g)
next(g)
next(g)
next(g)  # once the generator body completes, StopIteration is raised
A generator function builds a generator object that wraps the body of the function. When we pass the generator to next(...), it advances to the next yield in the body, returns the yielded value, and pauses at that point. When the body eventually returns, the enclosing generator object raises StopIteration, consistent with the iterator protocol. The following example makes the execution of a generator function body clearer:
def gen_AB():
    print('start')
    yield 'A'
    print('continue')
    yield 'B'
    print('end')

for c in gen_AB():
    print('-->', c)
Now we can see what Sentence.__iter__ does: __iter__ is a generator function that, when called, builds a generator object implementing the iterator interface, so the SentenceIterator class is no longer needed. This version of Sentence is much shorter than the previous one, but it is not as lazy as it could be. A lazy implementation postpones producing values as long as possible, saving memory and possibly avoiding wasted processing. Sentence take #4: a lazy implementation. The Iterator interface is designed with laziness in mind: next(my_iterator) produces one item at a time. Lazy evaluation and eager evaluation are technical terms from programming language theory. Our current Sentence is not lazy, because __init__ eagerly builds the full list of words in the text, binding it to self...
import re
import reprlib

RE_WORD = re.compile(r'\w+')

class Sentence:
    def __init__(self, text):
        self.text = text

    def __repr__(self):
        return 'Sentence(%s)' % reprlib.repr(self.text)

    def __iter__(self):
        for match in RE_WORD.finditer(self.text):
            yield mat...
Generator expressions. A simple generator function like the one in the previous example can be replaced by a generator expression. A generator expression is the lazy version of a list comprehension: it does not eagerly build a list, but returns a generator that lazily produces items on demand. If a list comprehension is a factory of lists, a generator expression is a factory of generators. The following compares a generator expression with a list comprehension:
def gen_AB():
    print('start')
    yield 'A'
    print('continue')
    yield 'B'
    print('end')

res1 = [x * 3 for x in gen_AB()]
for i in res1:
    print('-->', i)

res2 = (x * 3 for x in gen_AB())
res2
for i in res2:
    print('-->', i)
As you can see, a generator expression produces a generator, so we can use one to further reduce the code of the Sentence class:
import re
import reprlib

RE_WORD = re.compile(r'\w+')

class Sentence:
    def __init__(self, text):
        self.text = text

    def __repr__(self):
        return 'Sentence(%s)' % reprlib.repr(self.text)

    def __iter__(self):
        return (match.group() for match in RE_WORD.finditer(self.text))
Here a generator expression builds a generator, which is then returned; the end effect is the same: calling __iter__ yields a generator object. Generator expressions are syntactic sugar: they can always be replaced by generator functions, though a generator expression is sometimes more convenient. When to use them: for simple cases a generator expression is fine, since it can be read at a glance; if it would span several lines, a generator function is more readable. When a function or constructor takes a single argument, a generator expression passed to it needs no extra parentheses beyond the call's own; if other arguments follow the generator expression, it must be wrapped in parentheses, or a SyntaxError is raised. Another example: an arithmetic progression generator.
class ArithmeticProgression:
    def __init__(self, begin, step, end=None):  # None -> an unbounded series
        self.begin = begin
        self.step = step
        self.end = end

    def __iter__(self):
        # assign self.begin to result, coerced to the type of the sum
        # begin + step (adding two objects yields one object of that type)
        result = type(self.begin + self.step)(self.begin)
        ...
The class above can be replaced entirely by a generator function:
def aritprog_gen(begin, step, end=None):
    result = type(begin + step)(begin)
    forever = end is None
    index = 0
    while forever or result < end:
        yield result
        index += 1
        result = begin + step * index
The implementation above is nice, but remember: the standard library ships with many ready-to-use generators. The next version uses the itertools module and is even nicer. Arithmetic progressions with itertools. The itertools module provides 19 generator functions that can be combined in interesting ways. For example, itertools.count returns a generator that produces numbers. Without arguments, it yields integers starting from 0; but we can provide start and step values, achieving an effect similar to our aritprog_gen function:
import itertools

gen = itertools.count(1, .5)
next(gen)
next(gen)
next(gen)
next(gen)
However, itertools.count never stops, so calling list(count()) would try to build a list too large to fit in memory. itertools.takewhile is different: it produces a generator that consumes another generator and stops when a given predicate evaluates to False. We can combine the two:
gen = itertools.takewhile(lambda n: n < 3, itertools.count(1, .5))
list(gen)
So we can write the arithmetic progression like this:
import itertools

def aritprog_gen(begin, step, end=None):
    first = type(begin + step)(begin)
    ap_gen = itertools.count(first, step)
    if end is not None:
        ap_gen = itertools.takewhile(lambda n: n < end, ap_gen)
    return ap_gen
Note that this aritprog_gen is not a generator function: it has no yield in its body. But it returns a generator, so it operates as a generator factory, just as a generator function does. Generator functions in the standard library. The standard library provides many generators, from objects for iterating over text files line by line to the wonderful os.walk function. This section focuses on general-purpose functions: they take arbitrary iterables as arguments and return generators producing selected, computed, or rearranged items. The first group are the filtering generator functions:
def vowel(c):
    return c.lower() in 'aeiou'

# pass each character to vowel; keep the ones for which it returns True
list(filter(vowel, 'Aardvark'))

import itertools
# the opposite of filter
list(itertools.filterfalse(vowel, 'Aardvark'))
# skip characters while vowel is true, then yield the rest without further checks
list(itertools.dropwhile(vowel, 'Aardvark'))
# yield characters while vowel is true, then stop immediately, no further checks
list(itertools.takewhile(vowel...
Next are the mapping generator functions:
sample = [5, 4, 2, 8, 7, 6, 3, 0, 9, 1]

import itertools
# running sums
list(itertools.accumulate(sample))
# with a function, apply it to the first two items, then to that result
# and the next item, and so on
list(itertools.accumulate(sample, min))
list(itertools.accumulate(sample, max))

import operator
list(itertools.accumulate(sample, operator.mul))  # running product
list(itertools.accum...
Next come the merging generator functions:
# yield the items of the first iterable, then the items of the second, and so
# on, seamlessly joined together
list(itertools.chain('ABC', range(2)))
list(itertools.chain(enumerate('ABC')))
# chain.from_iterable takes each item from the iterable and chains them in
# sequence, provided each item is itself iterable
list(itertools.chain.from_iterable(enumerate('ABC')))
# zip stops as soon as one of the iterables is exhausted
list(zip('ABC', range(5), [10, 20, 30, 40]))
# to continue until the longest iterable is exhausted ...
The itertools.product generator is a lazy way of computing Cartesian products: it takes items from the input iterables and combines them into tuples of N items, with the same effect as nested for loops. The repeat keyword argument tells product to repeat the input iterables that many times. The following demonstrates itertools.product:
list(itertools.product('ABC', range(2)))

suits = 'spades hearts diamonds clubs'.split()
list(itertools.product('AK', suits))
# a single iterable yields a series of one-tuples: not especially useful
list(itertools.product('ABC'))
# repeat=N repeats each input iterable N times
list(itertools.product('ABC', repeat=2))
list(itertools.product(range(2), repeat=3))

rows = iter...
Generator functions that expand each input item into multiple output items:
ct = itertools.count()
next(ct)
# we cannot build a list from ct: ct is endless
next(ct), next(ct), next(ct)
list(itertools.islice(itertools.count(1, .3), 3))

cy = itertools.cycle('ABC')
next(cy)
list(itertools.islice(cy, 7))

rp = itertools.repeat(7)  # repeat the given item forever
next(rp), next(rp)
list(itertools.repeat(8, 4))  # the number 8, four times
list(map(operator...
The combinations, combinations_with_replacement, and permutations generator functions, together with product, are called the combinatoric generators. itertools.product and the rest of the combinatoric functions are closely related:
# all combinations of 2 items from 'ABC'
list(itertools.combinations('ABC', 2))
# combinations of 2 items, including combinations with repeated items
list(itertools.combinations_with_replacement('ABC', 2))
# all permutations of 2 items
list(itertools.permutations('ABC', 2))
list(itertools.product('ABC', repeat=2))
Generator functions for rearranging items:
# groupby yields 2-tuples of (key, group), where key is the grouping
# criterion and group is a generator producing the items in that group
list(itertools.groupby('LLLAAGGG'))
for char, group in itertools.groupby('LLLLAAAGG'):
    print(char, '->', list(group))

animals = ['duck', 'eagle', 'rat', 'giraffe', 'bear',
           'bat', 'dolphin', 'shark', 'lion']
animals.sort(key=len)
anima...
New syntax in Python 3.3: yield from. When a generator function needs to yield values produced by another generator, the traditional way is a nested for loop:
def chain(*iterables):
    # a handwritten chain; the standard library version is written in C
    for it in iterables:
        for i in it:
            yield i

s = 'ABC'
t = tuple(range(3))
list(chain(s, t))
The chain generator function delegates in turn to each iterable it receives. For this, Python 3.3 introduced new syntax:
def chain(*iterables):
    for i in iterables:
        yield from i  # the full semantics are covered in chapter 16

list(chain(s, t))
Iterable reducing functions. Functions that take an iterable and return a single result are called reducing functions.
all([1, 2, 3])  # True if every item is truthy
all([1, 0, 3])
any([1, 2, 3])  # True if any item is truthy
any([1, 0, 3])
any([0, 0, 0])
any([])

g = (n for n in [0, 0.0, 7, 8])
any(g)
next(g)  # any stops consuming as soon as it hits a truthy item
There is one more built-in that takes an iterable and returns something else: sorted. Unlike reversed, which is a generator function, sorted builds and returns an actual list, since it must read every element in order to sort them. It returns a sorted list. sorted is mentioned here because it accepts any iterable object. Of course, sorted and these reducing functions only work with iterables that eventually stop; otherwise they would keep collecting elements forever and never return a result.

A closer look at the iter function. The iter function has a little-known feature: called with two arguments, it builds an iterator from a regular function or any callable object. Used this way, the first argument must be a callable that is invoked repeatedly (with no arguments) to produce values, and the second is a sentinel, a marker value: when the callable returns this value, it causes the iterator to raise ...
from random import randint

def d6():
    return randint(1, 6)

d6_iter = iter(d6, 1)
d6_iter
for roll in d6_iter:
    print(roll)
content/fluent_python/14_iter.ipynb
ghvn7777/ghvn7777.github.io
apache-2.0
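Because the cell above depends on randint, its output varies from run to run. Here is a deterministic sketch of the same two-argument iter pattern (our own example, not from the book), using a function with a counter attribute as the callable and 3 as the sentinel:

```python
def counter():
    # a callable with state kept in a function attribute
    counter.n += 1
    return counter.n

counter.n = 0
it = iter(counter, 3)  # call counter() repeatedly until it returns the sentinel 3
print(list(it))  # -> [1, 2]
```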
The documentation for the built-in iter function includes a useful example: reading a file line by line until a blank line is found or the end of file is reached:
# for line in iter(fp.readline, '\n'):
#     process_line(line)
content/fluent_python/14_iter.ipynb
ghvn7777/ghvn7777.github.io
apache-2.0
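The pattern from the docs can be tried without touching the filesystem by substituting io.StringIO for a real file object (our own illustrative variation): with the sentinel '\n', iteration stops at the first blank line.

```python
import io

fp = io.StringIO('first\nsecond\n\nafter blank\n')
# readline returns each line with its trailing newline; a blank line is '\n'
lines = list(iter(fp.readline, '\n'))
print(lines)  # -> ['first\n', 'second\n']
```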
Using generators as coroutines. Python 2.2 introduced generator functions built with the yield keyword, and Python 2.5 added extra methods and functionality to generator objects, the most notable being the .send() method. Like .__next__(), the .send() method advances the generator to the next yield statement; but .send() also allows the client using the generator to send data into it: whatever argument is passed to .send() becomes the value of the corresponding yield expression inside the generator function body. In other words, .send() enables two-way data exchange between the client code and the generator, whereas .__next__() only lets the client retrieve data from it. This is an important "improvement" that even changed the nature of generators: used this way, ...
def f():
    x = 0
    while True:
        x += 1
        yield x
content/fluent_python/14_iter.ipynb
ghvn7777/ghvn7777.github.io
apache-2.0
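The markdown above describes .send(), but the cell that follows never exercises it. Here is a minimal sketch (our own, not from the book) showing how the argument to .send() becomes the value of the yield expression inside the generator:

```python
def running_total():
    total = 0
    while True:
        received = yield total  # .send(x) makes x the value of this yield
        total += received

gen = running_total()
next(gen)            # prime the generator: advance to the first yield
print(gen.send(10))  # -> 10
print(gen.send(5))   # -> 15
```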
We cannot delegate the yielding to another function through a plain function call, although the code below looks as if it could:
def f():
    def do_yield(n):
        yield n
    x = 0
    while True:
        x += 1
        do_yield(x)
content/fluent_python/14_iter.ipynb
ghvn7777/ghvn7777.github.io
apache-2.0
Calling f() now produces an infinite loop, not a generator, because yield only turns the nearest enclosing function into a generator function. Although generator functions look like ordinary functions, we cannot delegate responsibility to another generator function with a simple function call. The yield from syntax newly introduced in Python lets a generator or coroutine delegate work to a third party, so there is no need to nest for loops as a workaround. Prefixing the function call with yield from "fixes" the problem above, as follows:
def f():
    def do_yield(n):
        yield n
    x = 0
    while True:
        x += 1
        yield from do_yield(x)
content/fluent_python/14_iter.ipynb
ghvn7777/ghvn7777.github.io
apache-2.0
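yield from also works directly on any iterable, so a generator can delegate to several sources in turn. A small flattening sketch (our own example) shows the idiom:

```python
def flatten(list_of_lists):
    for sublist in list_of_lists:
        yield from sublist  # delegate iteration over each sublist

print(list(flatten([[1, 2], [3], [4, 5]])))  # -> [1, 2, 3, 4, 5]
```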
Exercise #1. Consider a set of tables describing the bagel industry: bagel, holding the bagel types and the companies that make them:

name STRING
price FLOAT
made_by STRING

And purchase:

bagel_name STRING
franchise STRING
date INT
quantity INT
purchaser_age INT

where purchase.bagel_name references bagel.name and purchase.franchise ...
%sql SELECT * FROM bagel LIMIT 3;
%sql SELECT * FROM purchase LIMIT 3;
Неделя 1/Задание в классе/Лабораторная-3-2.ipynb
bakanchevn/DBCourseMirea2017
gpl-3.0
Write a query that returns the total revenue for each bagel type whose average purchaser age is greater than 18. Exercise #2. We will use a simplified version of the precipitation_full table, which holds only daily precipitation and only for CA, with the following schema: station_id, day, precipitation
%sql SELECT * FROM precipitation LIMIT 5;
Неделя 1/Задание в классе/Лабораторная-3-2.ipynb
bakanchevn/DBCourseMirea2017
gpl-3.0
Return the ids of the stations whose average precipitation is > 75 (try writing this as a nested query), then rewrite it with GROUP BY. Let's compare the execution times.
%time %sql SELECT DISTINCT p.station_id FROM precipitation p WHERE (SELECT AVG(precipitation) FROM precipitation WHERE station_id = p.station_id) > 75;

%time %sql SELECT p.station_id FROM precipitation p GROUP BY p.station_id HAVING AVG(p.precipitation) > 75;
Неделя 1/Задание в классе/Лабораторная-3-2.ipynb
bakanchevn/DBCourseMirea2017
gpl-3.0
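The notebook runs these queries against its own database via the %sql magic. To experiment with the same nested-query vs. GROUP BY comparison outside the notebook, the logic can be reproduced with the standard sqlite3 module (the sample rows below are made up for illustration; both queries should agree on the result):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE precipitation (station_id INTEGER, day INTEGER, precipitation REAL)')
rows = [(1, 1, 80), (1, 2, 90), (2, 1, 10), (2, 2, 20)]  # station 1 avg 85, station 2 avg 15
conn.executemany('INSERT INTO precipitation VALUES (?, ?, ?)', rows)

# Nested-query version: correlated subquery per station
nested = conn.execute(
    'SELECT DISTINCT p.station_id FROM precipitation p '
    'WHERE (SELECT AVG(precipitation) FROM precipitation WHERE station_id = p.station_id) > 75'
).fetchall()

# GROUP BY / HAVING version: one aggregation pass
grouped = conn.execute(
    'SELECT p.station_id FROM precipitation p '
    'GROUP BY p.station_id HAVING AVG(p.precipitation) > 75'
).fetchall()

print(nested, grouped)  # -> [(1,)] [(1,)]
```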
Initializing a new function There are two ways to create a function $f: G \to \mathbb{C}$: On general LCAs $G$, the function is represented by an analytical expression. If $G = \mathbb{Z}_{\mathbf{p}}$ with $p_i \geq 1$ for every $i$ ($G$ is a direct sum of discrete groups with finite period), a table of values (multi...
def gaussian(vector_arg, k = 0.1):
    return math.exp(-sum(i**2 for i in vector_arg)*k)

# Gaussian function on Z
Z = LCA([0])
gauss_on_Z = LCAFunc(gaussian, domain = Z)
print(gauss_on_Z)  # Printing
show(gauss_on_Z)  # LaTeX output

# Gaussian function on T
T = LCA([1], [False])
gauss_on_T = LCAFunc(gaussian, domain = ...
docs/notebooks/functions.ipynb
tommyod/abelian
gpl-3.0
Notice how the print built-in and the to_latex() method will show human-readable output. With a table of values Functions on $\mathbb{Z}_\mathbf{p}$ can be defined using a table of values, if $p_i \geq 1$ for every $p_i \in \mathbf{p}$.
# Create a table of values
table_data = [[1,2,3,4,5],
              [2,3,4,5,6],
              [3,4,5,6,7]]

# Create a domain matching the table
domain = LCA([3, 5])
table_func = LCAFunc(table_data, domain)
show(table_func)
print(table_func([1, 1]))  # [1, 1] maps to 3
docs/notebooks/functions.ipynb
tommyod/abelian
gpl-3.0
Function evaluation A function $f \in \mathbb{C}^G$ is callable. To call (i.e. evaluate) a function, pass a group element.
# An element in Z
element = [0]

# Evaluate the function
gauss_on_Z(element)
docs/notebooks/functions.ipynb
tommyod/abelian
gpl-3.0
The sample() method can be used to sample a function on a list of group elements in the domain.
# Create a list of sample points [-6, ..., 6]
sample_points = [[i] for i in range(-6, 7)]

# Sample the function, returns a list of values
sampled_func = gauss_on_Z.sample(sample_points)

# Plot the result of sampling the function
plt.figure(figsize = (8, 3))
plt.title('Gaussian function on $\mathbb{Z}$')
plt.plot(samp...
docs/notebooks/functions.ipynb
tommyod/abelian
gpl-3.0
Shifts Let $f: G \to \mathbb{C}$ be a function. The shift operator (or translation operator) $S_{h}$ is defined as $$S_{h}[f(g)] = f(g - h).$$ The shift operator shifts $f(g)$ by $h$, where $h, g \in G$. The shift operator is implemented as a method called shift.
# The group element to shift by
shift_by = [3]

# Shift the function
shifted_gauss = gauss_on_Z.shift(shift_by)

# Create sample points and sample
sample_points = [[i] for i in range(-6, 7)]
sampled1 = gauss_on_Z.sample(sample_points)
sampled2 = shifted_gauss.sample(sample_points)

# Create a plot
plt.figure(figsize = (...
docs/notebooks/functions.ipynb
tommyod/abelian
gpl-3.0
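Independent of the abelian API, the shift operator itself is just composition with a translation: $S_h[f](g) = f(g - h)$. A plain-Python sketch on $\mathbb{Z}$ (our own illustration, not library code):

```python
def shift(f, h):
    """Return S_h[f], i.e. the function g -> f(g - h)."""
    return lambda g: f(g - h)

f = lambda g: g ** 2       # f(g) = g^2, minimum at g = 0
f_shifted = shift(f, 3)    # S_3[f](g) = (g - 3)^2, minimum moved to g = 3

print(f(0), f_shifted(3))  # -> 0 0
```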
Pullbacks Let $\phi: G \to H$ be a homomorphism and let $f:H \to \mathbb{C}$ be a function. The pullback of $f$ along $\phi$, denoted $\phi^*(f)$, is defined as $$\phi^*(f) := f \circ \phi.$$ The pullback "moves" the domain of the function $f$ to $G$, i.e. $\phi^*(f) : G \to \mathbb{C}$. The pullback of $f$ is calcula...
def linear(arg):
    return sum(arg)

# The original function
f = LCAFunc(linear, LCA([10]))
show(f)

# A homomorphism phi
phi = HomLCA([2], target = [10])
show(phi)

# The pullback of f along phi
g = f.pullback(phi)
show(g)
docs/notebooks/functions.ipynb
tommyod/abelian
gpl-3.0
We now sample the functions and plot them.
# Sample the functions and plot them
sample_points = [[i] for i in range(-5, 15)]
f_sampled = f.sample(sample_points)
g_sampled = g.sample(sample_points)

# Plot the original function and the pullback
plt.figure(figsize = (8, 3))
plt.title('Linear functions')
label = '$f \in \mathbb{Z}_{10}$'
plt.plot(sample_points, f_...
docs/notebooks/functions.ipynb
tommyod/abelian
gpl-3.0
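The pullback is nothing more than function composition. Here is a plain-Python sketch mirroring the example above, under the assumption that `HomLCA([2], target = [10])` acts as $\phi(g) = 2g \bmod 10$ (our own illustration, not the library's implementation):

```python
def pullback(f, phi):
    """phi^*(f) = f o phi."""
    return lambda g: f(phi(g))

f = lambda h: h               # the 'linear' (identity) function on Z_10
phi = lambda g: (2 * g) % 10  # assumed homomorphism Z -> Z_10, g -> 2g mod 10

g_func = pullback(f, phi)
print([g_func(g) for g in range(6)])  # -> [0, 2, 4, 6, 8, 0]
```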
Pushforwards Let $\phi: G \to H$ be an epimorphism and let $f:G \to \mathbb{C}$ be a function. The pushforward of $f$ along $\phi$, denoted $\phi_*(f)$, is defined as $$(\phi_*(f))(g) := \sum_{k \in \operatorname{ker}\phi} f(k + h), \quad \phi(g) = h$$ The pushforward "moves" the domain of the function $f$ to $H$, i.e. $\p...
# We create a function on Z and plot it
def gaussian(arg, k = 0.05):
    """
    A gaussian function.
    """
    return math.exp(-sum(i**2 for i in arg)*k)

# Create gaussian on Z, shift it by 5
gauss_on_Z = LCAFunc(gaussian, LCA([0]))
gauss_on_Z = gauss_on_Z.shift([5])

# Sample points and sampled function
s_points =...
docs/notebooks/functions.ipynb
tommyod/abelian
gpl-3.0
First we do a pushforward with only one term. Not enough terms are present in the sum to capture what the pushforward would look like if the sum went to infinity.
terms = 1
# Pushforward of the function along phi
gauss_on_Z_10 = gauss_on_Z.pushforward(phi, terms)

# Sample the functions and plot them
pushforward_sampled = gauss_on_Z_10.sample(sample_points)
plt.figure(figsize = (8, 3))
label = 'A gaussian function on $\mathbb{Z}$ and \
pushforward to $\mathbb{Z}_{10}$ with few...
docs/notebooks/functions.ipynb
tommyod/abelian
gpl-3.0
Next we do a pushforward with more terms in the sum; this captures what the pushforward would look like if the sum went to infinity.
terms = 9
gauss_on_Z_10 = gauss_on_Z.pushforward(phi, terms)

# Sample the functions and plot them
pushforward_sampled = gauss_on_Z_10.sample(sample_points)
plt.figure(figsize = (8, 3))
plt.title('A gaussian function on $\mathbb{Z}$ and \
pushforward to $\mathbb{Z}_{10}$ with enough terms')
plt.plot(s_points, f_sampl...
docs/notebooks/functions.ipynb
tommyod/abelian
gpl-3.0
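The defining sum of the pushforward can also be sketched in plain Python: push a function on $\mathbb{Z}$ down to $\mathbb{Z}_{10}$ along the projection $g \mapsto g \bmod 10$ by summing over the kernel $10\mathbb{Z}$, truncated to finitely many terms (our own illustrative code, not the abelian implementation):

```python
import math

def gaussian(x, k=0.05):
    return math.exp(-(x ** 2) * k)

def pushforward(f, modulus, terms):
    """Truncated (phi_* f)(h) = sum_k f(h + k*modulus) for phi(g) = g mod modulus."""
    def pushed(h):
        return sum(f(h + k * modulus) for k in range(-terms, terms + 1))
    return pushed

pushed = pushforward(gaussian, 10, terms=4)
# A Gaussian centred at 0 stays largest at h = 0 after the pushforward
print([round(pushed(h), 3) for h in range(10)])
```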
Load the component using KFP SDK
import kfp.components as comp

mlengine_deploy_op = comp.load_component_from_url(
    'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/ml_engine/deploy/component.yaml')
help(mlengine_deploy_op)
components/gcp/ml_engine/deploy/sample.ipynb
kubeflow/pipelines
apache-2.0
Sample Note: The following sample code works in IPython notebook or directly in Python code. In this sample, you deploy a pre-built trained model from gs://ml-pipeline-playground/samples/ml_engine/census/trained_model/ to Cloud ML Engine. The deployed model is kfp_sample_model. A new version is created every time the s...
# Required Parameters
PROJECT_ID = '<Please put your project ID here>'

# Optional Parameters
EXPERIMENT_NAME = 'CLOUDML - Deploy'
TRAINED_MODEL_PATH = 'gs://ml-pipeline-playground/samples/ml_engine/census/trained_model/'
components/gcp/ml_engine/deploy/sample.ipynb
kubeflow/pipelines
apache-2.0
Example pipeline that uses the component
import kfp.dsl as dsl
import json

@dsl.pipeline(
    name='CloudML deploy pipeline',
    description='CloudML deploy pipeline'
)
def pipeline(
    model_uri = 'gs://ml-pipeline-playground/samples/ml_engine/census/trained_model/',
    project_id = PROJECT_ID,
    model_id = 'kfp_sample_model',
    version_id = '',
    r...
components/gcp/ml_engine/deploy/sample.ipynb
kubeflow/pipelines
apache-2.0
Env setup
# This is needed to display the images.
%matplotlib inline
research/object_detection/object_detection_tutorial.ipynb
jiaphuan/models
apache-2.0
Object detection imports Here are the imports from the object detection module.
from utils import label_map_util
from utils import visualization_utils as vis_util
research/object_detection/object_detection_tutorial.ipynb
jiaphuan/models
apache-2.0
Model preparation Variables Any model exported using the export_inference_graph.py tool can be loaded here simply by changing PATH_TO_CKPT to point to a new .pb file. By default we use an "SSD with Mobilenet" model here. See the detection model zoo for a list of other models that can be run out-of-the-box with varyin...
# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'

# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = MODEL_NAME + '/frozen_...
research/object_detection/object_detection_tutorial.ipynb
jiaphuan/models
apache-2.0