Given the following text description, write Python code to implement the functionality described below step by step Description: Various plots in matplotlib matplotlib supports many chart/plot types besides the basic line plot. Bar chart When one of the x, y data sets is categorical, you can visualize it as a bar chart with the bar and barh commands. To draw the bar chart horizontally, use the barh command. See the following websites for details. http Step1: If you specify the xerr or yerr argument, you can add error bars. Step2: There are also cases where two or more bar charts are drawn at once. Step3: Or you can adjust the position of the bars with the bottom argument to draw a stacked bar chart. Step4: Stem plot There is also the stem plot, which looks like a bar chart without the bar width. It is mainly used to depict discrete probability functions or auto-correlation. http Step5: Pie chart When you need to compare the relative sizes of categories, you can draw a pie chart with the pie command. http Step6: Histogram The hist command is also provided for drawing histograms. The hist command takes the bins argument, which sets the aggregation intervals for the data. It also returns the aggregated results, so they can be reused in other code. http Step7: Scatter plot To look at the relationship between two data sets, for example the correlation between two vectors, draw a scatter plot with the scatter command. http Step8: Imshow So far we have visualized one or two sets of one-dimensional data. We now turn to visualizing two-dimensional data with rows and columns. Image data, for example, is a typical two-dimensional data set. The simplest way to visualize two-dimensional data is the imshow command, which shows the data itself as the brightness at each position. Various two-dimensional interpolations are supported to help with the visualization. http Step9: Contour plot Another way to visualize two-dimensional data is to use contour lines instead of brightness. Use the contour or contourf command. http Step10: 3D surface plot When there are two input variables x, y and one output variable z, the data is three-dimensional. Unlike ordinary plots, a 3D plot needs a dedicated 3D axes called Axes3D. Use the plot_wireframe and plot_surface commands. http
Python Code: y = [2, 3, 1] x = np.arange(len(y)) xlabel = ['A', 'B', 'C'] plt.bar(x, y, align='center') # usually needed so the bars are centred on the x ticks; without it the default alignment is left plt.xticks(x, xlabel); Explanation: Various plots in matplotlib matplotlib supports many chart/plot types besides the basic line plot. Bar chart When one of the x, y data sets is categorical, you can visualize it as a bar chart with the bar and barh commands. To draw the bar chart horizontally, use the barh command. See the following websites for details. http://matplotlib.org/1.5.1/api/pyplot_api.html#matplotlib.pyplot.bar http://matplotlib.org/1.5.1/api/pyplot_api.html#matplotlib.pyplot.barh One thing to watch when drawing a bar chart is that the first argument, left, gives the position of the left edge of each bar on the x axis. To move the bars to the centre of the xtick positions you must pass the align='center' argument. End of explanation people = ('Tom', 'Dick', 'Harry', 'Slim', 'Jim') y_pos = np.arange(len(people)) performance = 3 + 10 * np.random.rand(len(people)) error = np.random.rand(len(people)) plt.barh(y_pos, performance, xerr=error, align='center', alpha=0.4) plt.yticks(y_pos, people) plt.xlabel('Performance'); Explanation: If you specify the xerr or yerr argument, you can add error bars. End of explanation n_groups = 5 means_men = (20, 35, 30, 35, 27) std_men = (2, 3, 4, 1, 2) means_women = (25, 32, 34, 20, 25) std_women = (3, 5, 2, 3, 3) fig, ax = plt.subplots() index = np.arange(n_groups) bar_width = 0.35 opacity = 0.4 error_config = {'ecolor': '0.3'} rects1 = plt.bar(index, means_men, bar_width, alpha=opacity, color='b', yerr=std_men, error_kw=error_config, label='Men') rects2 = plt.bar(index + bar_width, means_women, bar_width, alpha=opacity, color='r', yerr=std_women, error_kw=error_config, label='Women') plt.xlabel('Group') plt.ylabel('Scores') plt.title('Scores by group and gender') plt.xticks(index + bar_width, ('A', 'B', 'C', 'D', 'E')) plt.legend() plt.tight_layout() Explanation: There are also cases where two or more bar charts are drawn at once. End of explanation N = 5 menMeans = (20, 35, 30, 35, 27) womenMeans = (25, 32, 34, 20, 25) menStd = (2, 3, 4, 1, 2) womenStd = (3, 5, 2, 3, 3) ind = np.arange(N) # the x locations for the groups width = 0.35 # the width of the bars: can also be len(x) sequence p1 = plt.bar(ind, menMeans, width, color='r', yerr=menStd) p2 = plt.bar(ind, womenMeans, width, color='y', bottom=menMeans, yerr=womenStd) plt.ylabel('Scores') plt.title('Scores by group and gender') plt.xticks(ind + width/2., ('G1', 'G2', 'G3', 'G4', 'G5')) plt.yticks(np.arange(0, 81, 10)) plt.legend((p1[0], p2[0]), ('Men', 'Women')) Explanation: Or you can adjust the position of the bars with the bottom argument to draw a stacked bar chart. End of explanation x = np.linspace(0.1, 2*np.pi, 10) markerline, stemlines, baseline = plt.stem(x, np.cos(x), '-.') plt.setp(markerline, 'markerfacecolor', 'b') plt.setp(baseline, 'color', 'r', 'linewidth', 2); Explanation: Stem plot There is also the stem plot, which looks like a bar chart without the bar width. It is mainly used to depict discrete probability functions or auto-correlation. http://matplotlib.org/1.5.1/api/pyplot_api.html#matplotlib.pyplot.stem End of explanation labels = 'Frogs', 'Hogs', 'Dogs', 'Logs' sizes = [15, 30, 45, 10] colors = ['yellowgreen', 'gold', 'lightskyblue', 'lightcoral'] explode = (0, 0.1, 0, 0) # make this wedge stick out of the pie plt.pie(sizes, explode=explode, labels=labels, colors=colors, autopct='%1.1f%%', shadow=True, startangle=90) plt.axis('equal'); # force a square frame so the pie is drawn as a circle Explanation: Pie chart When you need to compare the relative sizes of categories, you can draw a pie chart with the pie command. http://matplotlib.org/1.5.1/api/pyplot_api.html#matplotlib.pyplot.pie End of explanation x = np.random.randn(5000) arrays, bins, patches = plt.hist(x, bins=50, normed=True) # more than about 100 bins is rarely needed, because the patch objects eat up memory arrays bins Explanation: Histogram The hist command is also provided for drawing histograms. 
The hist command takes the bins argument, which sets the aggregation intervals for the data. It also returns the aggregated results, so they can be reused in other code. http://matplotlib.org/1.5.1/api/pyplot_api.html#matplotlib.pyplot.hist End of explanation X = np.random.normal(0, 1, 1024) Y = np.random.normal(0, 1, 1024) plt.scatter(X, Y); N = 50 x = np.random.rand(N) y = np.random.rand(N) colors = np.random.rand(N) area = np.pi * (15 * np.random.rand(N))**2 plt.scatter(x, y, s=area, c=colors, alpha=0.5); Explanation: Scatter plot To look at the relationship between two data sets, for example the correlation between two vectors, draw a scatter plot with the scatter command. http://matplotlib.org/1.5.1/api/pyplot_api.html#matplotlib.pyplot.scatter End of explanation from sklearn.datasets import load_digits digits = load_digits() X = digits.images[0] X plt.imshow(X, interpolation='nearest'); # how strongly the image is smoothed to make it easier for a person to recognize plt.grid(False) methods = [None, 'none', 'nearest', 'bilinear', 'bicubic', 'spline16', 'spline36', 'hanning', 'hamming', 'hermite', 'kaiser', 'quadric', 'catrom', 'gaussian', 'bessel', 'mitchell', 'sinc', 'lanczos'] fig, axes = plt.subplots(3, 6, figsize=(12, 6), subplot_kw={'xticks': [], 'yticks': []}) fig.subplots_adjust(hspace=0.3, wspace=0.05) for ax, interp_method in zip(axes.flat, methods): ax.imshow(X, interpolation=interp_method) ax.set_title(interp_method) Explanation: Imshow So far we have visualized one or two sets of one-dimensional data. We now turn to visualizing two-dimensional data with rows and columns. Image data, for example, is a typical two-dimensional data set. The simplest way to visualize two-dimensional data is the imshow command, which shows the data itself as the brightness at each position. Various two-dimensional interpolations are supported to help with the visualization. http://matplotlib.org/1.5.1/api/pyplot_api.html#matplotlib.pyplot.imshow End of explanation def f(x, y): return (1 - x / 2 + x ** 5 + y ** 3) * np.exp(-x ** 2 -y ** 2) n = 256 x = np.linspace(-3, 3, n) y = np.linspace(-3, 3, n) XX, YY = np.meshgrid(x, y) ZZ = f(XX, YY) plt.contourf(XX, YY, ZZ, alpha=.75, cmap='jet'); plt.contour(XX, YY, ZZ, colors='black', linewidths=.5); Explanation: Contour plot Another way to visualize two-dimensional data is to use contour lines instead of brightness. Use the contour or contourf command. http://matplotlib.org/1.5.1/api/pyplot_api.html#matplotlib.pyplot.contour http://matplotlib.org/1.5.1/api/pyplot_api.html#matplotlib.pyplot.contourf End of explanation from mpl_toolkits.mplot3d import Axes3D X = np.arange(-4, 4, 0.25) Y = np.arange(-4, 4, 0.25) XX, YY = np.meshgrid(X, Y) RR = np.sqrt(XX**2 + YY**2) ZZ = np.sin(RR) fig = plt.figure() ax = Axes3D(fig) ax.plot_surface(XX, YY, ZZ, rstride=1, cstride=1, cmap='hot'); Explanation: 3D surface plot When there are two input variables x, y and one output variable z, the data is three-dimensional. Unlike ordinary plots, a 3D plot needs a dedicated 3D axes called Axes3D. Use the plot_wireframe and plot_surface commands. http://matplotlib.org/1.5.1/mpl_toolkits/mplot3d/api.html#mpl_toolkits.mplot3d.axes3d.Axes3D.plot_wireframe http://matplotlib.org/1.5.1/mpl_toolkits/mplot3d/api.html#mpl_toolkits.mplot3d.axes3d.Axes3D.plot_surface End of explanation
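The histogram explanation above points out that hist hands back its aggregation results so they can be reused elsewhere. A minimal illustrative sketch of that reuse follows; it is not part of the original notebook, and it uses density=True, the current name of the deprecated normed argument used above.

import numpy as np
import matplotlib.pyplot as plt

# Draw a histogram and keep the returned aggregation results.
x = np.random.randn(5000)
counts, bin_edges, patches = plt.hist(x, bins=50, density=True)

# The returned counts and bin edges carry the same information as the bars,
# so later code can work with them directly, e.g. overlay a line on the histogram.
bin_centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
plt.plot(bin_centers, counts, color='r')
print(counts.sum() * np.diff(bin_edges)[0])  # ~1.0, since density=True normalises the area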
Given the following text description, write Python code to implement the functionality described below step by step Description: Solution from Johannes Rieke and Alex Moore Step1: Exercise 1 1. Step2: 2. Step3: For a = 0.5, the curve is flatter; for a = 2, the curve is steeper. 3. Picking $\mu$ = 0.1 Step4: 4. Step5: Both distributions seem to follow a Poisson distribution. Fewer trials show a down response, which is obvious due to the choice of $\mu$ above. However, except for the overall count, both distributions are very similar. This is also reflected in the similar mean values. From an experimental point of view, this does not make sense Step6: Picking $m_{\mu}$ = 4 and $s_{\mu}$ = 7
Python Code: from __future__ import division, print_function import numpy as np import matplotlib.pyplot as plt %matplotlib inline Explanation: Solution from Johannes Rieke and Alex Moore¶ End of explanation mu = 0.2 sigma = 0.5 dt = 0.01 time = np.arange(0, 10, dt) for i in range(5): x = np.zeros_like(time) for i, t in enumerate(time[:-1]): x[i+1] = x[i] + mu * dt + sigma * np.sqrt(dt) * np.random.normal() plt.plot(time, x) plt.xlabel('t [s]') plt.ylabel('X') Explanation: Exercise 1 1. End of explanation mus = [-0.1, 0, 0.1, 0.2, 0.5] a = 1 percent_ups = [] for mu in mus: up_counter = 0 for trial in range(200): decision_boundary_reached = False while not decision_boundary_reached: # resimulate as long as decision boundary is found x = np.zeros_like(time) for i, t in enumerate(time[:-1]): x[i+1] = x[i] + mu * dt + sigma * np.sqrt(dt) * np.random.normal() if x[i+1] >= a: # up up_counter += 1 decision_boundary_reached = True break elif x[i+1] <= -a: # down decision_boundary_reached = True break percent_ups.append(up_counter / 200 * 100) plt.plot(mus, percent_ups, 'o') plt.xlabel(r'$\mu$') plt.ylabel('Percent of trials with "up" response') plt.ylim(0, 100) Explanation: 2. End of explanation mu = 0.1 reaction_times_up = [] reaction_times_down = [] for trial in range(2000): x = np.zeros_like(time) for i, t in enumerate(time[:-1]): x[i+1] = x[i] + mu * dt + sigma * np.sqrt(dt) * np.random.normal() if x[i+1] >= a: # up reaction_times_up.append(t) break elif x[i+1] <= -a: # down reaction_times_down.append(t) break Explanation: For a = 0.5, the curve is flatter; for a = 2, the curve is steeper. 3. Picking $\mu$ = 0.1 End of explanation fig, axes = plt.subplots(1, 2, figsize=(10, 5)) for ax, reaction_times, title in zip(axes, [reaction_times_up, reaction_times_down], ['"Up" Response', '"Down" Response']): plt.sca(ax) plt.hist(reaction_times, 30) plt.xlabel('Reaction time [s]') plt.ylabel('# of trials') plt.title(title) plt.xlim(0, 10) plt.ylim(0, 150) np.mean(reaction_times_up), np.mean(reaction_times_down) Explanation: 4. End of explanation m_mu = 1 s_mu = 1 mu = np.random.normal(m_mu, s_mu) m_mus = np.linspace(-10, 10, 10) s_mus = np.linspace(0.1, 10, 10) a = 1 percent_up_grid = np.zeros((m_mus.size, s_mus.size)) for i_m, m_mu in enumerate(m_mus): for i_s, s_mu in enumerate(s_mus): up_counter = 0 for trial in range(200): mu = np.random.normal(m_mu, s_mu) decision_boundary_reached = False while not decision_boundary_reached: # resimulate as long as decision boundary is found x = np.zeros_like(time) for i, t in enumerate(time[:-1]): x[i+1] = x[i] + mu * dt + sigma * np.sqrt(dt) * np.random.normal() if x[i+1] >= a: # up up_counter += 1 decision_boundary_reached = True break elif x[i+1] <= -a: # down decision_boundary_reached = True break percent_up_grid[i_m, i_s] = up_counter / 200 * 100 plt.pcolor(s_mus, m_mus, percent_up_grid) plt.xlabel('$s_{\mu}$') plt.ylabel('$m_{\mu}$') plt.colorbar(label='Percent of trials with "up" response') Explanation: Both distributions seem to follow a Poisson distribution. Less trials show a down response, which is obvious due to the choice of $\mu$ above. However, except for the overall count, both distribution are very similar. This is also refelected in the similar mean values. From an experimental point of view, this does not make sense: If "Up" appears more often, subjects should also be able to guess it more quickly (i.e. the mean reaction time for "Up" should be smaller). 5. 
End of explanation m_mu = 4 s_mu = 7 reaction_times_up = [] reaction_times_down = [] for trial in range(2000): mu = np.random.normal(m_mu, s_mu) x = np.zeros_like(time) for i, t in enumerate(time[:-1]): x[i+1] = x[i] + mu * dt + sigma * np.sqrt(dt) * np.random.normal() if x[i+1] >= a: # up reaction_times_up.append(t) break elif x[i+1] <= -a: # down reaction_times_down.append(t) break fig, axes = plt.subplots(1, 2, figsize=(10, 5)) for ax, reaction_times, title in zip(axes, [reaction_times_up, reaction_times_down], ['"Up" Response', '"Down" Response']): plt.sca(ax) plt.hist(reaction_times, 30) plt.xlabel('Reaction time [s]') plt.ylabel('# of trials') plt.title(title) plt.xlim(0, 10) plt.ylim(0, 1200) np.mean(reaction_times_up), np.mean(reaction_times_down) Explanation: Picking $m_{\mu}$ = 4 and $s_{\mu}$ = 7 End of explanation
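The solution above advances one trial at a time in a Python loop and re-simulates until a boundary is reached. As an illustrative sketch only (the helper below and its parameter defaults are assumptions, not part of the original solution), the same drift-diffusion trials can be generated in one vectorised NumPy pass, with the first boundary crossing read off per trial.

import numpy as np

def simulate_trials(mu, sigma=0.5, a=1.0, dt=0.01, t_max=10.0, n_trials=2000, seed=0):
    # Each row is one trial's accumulated evidence path X(t).
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    steps = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_trials, n_steps))
    paths = np.cumsum(steps, axis=1)
    hit_up = paths >= a
    hit_down = paths <= -a
    hit_any = hit_up | hit_down
    decided = hit_any.any(axis=1)      # trials that reached either boundary within t_max
    first = hit_any.argmax(axis=1)     # index of the first crossing (0 if never crossed)
    went_up = hit_up[np.arange(n_trials), first] & decided
    rts = first * dt
    return went_up[decided], rts[decided]

ups, rts = simulate_trials(mu=0.1)
print("fraction of 'up' responses:", ups.mean())
print("mean reaction time up / down:", rts[ups].mean(), rts[~ups].mean())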
Given the following text description, write Python code to implement the functionality described below step by step Description: Exercises Step1: Exercise 1 a. Series Given an array of data, please create a pandas Series s with a datetime index starting 2016-01-01. The index should be daily frequency and should be the same length as the data. Step2: b. Accessing Series Elements. Print every other element of the first 50 elements of series s. Find the value associated with the index 2017-02-20. Step3: c. Boolean Indexing. In the series s, print all the values between 1 and 3. Step4: Exercise 2 Step5: b. Resampling Using the resample method, upsample the daily data to monthly frequency. Use the median method so that each monthly value is the median price of all the days in that month. Take the daily data and fill in every day, including weekends and holidays, using forward-fills. Step6: Exercise 3 Step7: Exercise 4 Step8: b. Series Operations Get the additive and multiplicative returns of this series. Calculate the rolling mean with a 60 day window. Calculate the standard deviation with a 60 day window. Step9: Exercise 5 Step10: b. DataFrames Manipulation Concatenate the following two series to form a dataframe. Rename the columns to Good Numbers and Bad Numbers. Change the index to be a datetime index starting on 2016-01-01. Step11: Exercise 6 Step12: Exercise 7 Step13: b. DataFrame Manipulation (again) Concatenate these DataFrames. Fill the missing data with 0s Step14: Exercise 8
Python Code: # Useful Functions import numpy as np import pandas as pd import matplotlib.pyplot as plt Explanation: Exercises: Introduction to pandas By Christopher van Hoecke, Maxwell Margenot Lecture Link : https://www.quantopian.com/lectures/introduction-to-pandas IMPORTANT NOTE: This lecture corresponds to the Introduction to Pandas lecture, which is part of the Quantopian lecture series. This homework expects you to rely heavily on the code presented in the corresponding lecture. Please copy and paste regularly from that lecture when starting to work on the problems, as trying to do them from scratch will likely be too difficult. Part of the Quantopian Lecture Series: www.quantopian.com/lectures github.com/quantopian/research_public End of explanation l = np.random.randint(1,100, size=1000) s = pd.Series(l) ## Your code goes here Explanation: Exercise 1 a. Series Given an array of data, please create a pandas Series s with a datetime index starting 2016-01-01. The index should be daily frequency and should be the same length as the data. End of explanation ## Your code goes here ## Your code goes here Explanation: b. Accessing Series Elements. Print every other element of the first 50 elements of series s. Find the value associated with the index 2017-02-20. End of explanation ## Your code goes here Explanation: c. Boolean Indexing. In the series s, print all the values between 1 and 3. End of explanation ## Your code goes here ## Your code goes here Explanation: Exercise 2 : Indexing and time series. a. Display Print the first and last 5 elements of the series s. End of explanation symbol = "CMG" start = "2012-01-01" end = "2016-01-01" prices = get_pricing(symbol, start_date=start, end_date=end, fields="price") ## Your code goes here ## Your code goes here Explanation: b. Resampling Using the resample method, upsample the daily data to monthly frequency. Use the median method so that each monthly value is the median price of all the days in that month. Take the daily data and fill in every day, including weekends and holidays, using forward-fills. End of explanation ## Your code goes here ## Your code goes here Explanation: Exercise 3 : Missing Data Replace all instances of NaN using the forward fill method. Instead of filling, remove all instances of NaN from the data. End of explanation print "Summary Statistics" ## Your code goes here Explanation: Exercise 4 : Time Series Analysis with pandas a. General Information Print the count, mean, standard deviation, minimum, 25th, 50th, and 75th percentiles, and the max of our series s. End of explanation data = get_pricing('GE', fields='open_price', start_date='2016-01-01', end_date='2017-01-01') ## Your code goes here ## Your code goes here # Rolling mean ## Your code goes here ## Your code goes here # Rolling Standard Deviation ## Your code goes here ## Your code goes here Explanation: b. Series Operations Get the additive and multiplicative returns of this series. Calculate the rolling mean with a 60 day window. Calculate the standard deviation with a 60 day window. End of explanation l = {'fifth','fourth', 'third', 'second', 'first'} dict_data = {'a' : [1, 2, 3, 4, 5], 'b' : ['L', 'K', 'J', 'M', 'Z'],'c' : np.random.normal(0, 1, 5)} ## Your code goes here Explanation: Exercise 5 : DataFrames a. Indexing Form a DataFrame out of dict_data with l as its index. 
End of explanation s1 = pd.Series([2, 3, 5, 7, 11, 13], name='prime') s2 = pd.Series([1, 4, 6, 8, 9, 10], name='other') ## Your code goes here ## Your code goes here ## Your code goes here Explanation: b. DataFrames Manipulation Concatenate the following two series to form a dataframe. Rename the columns to Good Numbers and Bad Numbers. Change the index to be a datetime index starting on 2016-01-01. End of explanation symbol = ["XOM", "BP", "COP", "TOT"] start = "2012-01-01" end = "2016-01-01" prices = get_pricing(symbol, start_date=start, end_date=end, fields="price") if isinstance(symbol, list): prices.columns = map(lambda x: x.symbol, prices.columns) else: prices.name = symbol # Check Type of Data for these two. prices.XOM.head() prices.loc[:, 'XOM'].head() ## Your code goes here ## Your code goes here ## Your code goes here Explanation: Exercise 6 : Accessing DataFrame elements. a. Columns Check the data type of one of the DataFrame's columns. Print the values associated with time range 2013-01-01 to 2013-01-10. End of explanation # Filter the data for prices to only print out values where # BP > 30 # XOM < 100 # BP > 30 AND XOM < 100 # The union of (BP > 30 AND XOM < 100) with TOT being non-nan ## Your code goes here # Add a column for TSLA and drop the column for XOM ## Your code goes here Explanation: Exercise 7 : Boolean Indexing a. Filtering. Filter pricing data from the last question (stored in prices) to only print values where: BP > 30 XOM < 100 The intersection of both above conditions (BP > 30 and XOM < 100) The union of the previous composite condition along with TOT having no nan values ((BP > 30 and XOM < 100) or TOT is non-NaN). Add a column for TSLA and drop the column for XOM. End of explanation # Concatenate these dataframes df_1 = get_pricing(['SPY', 'VXX'], start_date=start, end_date=end, fields='price') df_2 = get_pricing(['MSFT', 'AAPL', 'GOOG'], start_date=start, end_date=end, fields='price') ## Your code goes here # Fill GOOG missing data with 0 ## Your code goes here Explanation: b. DataFrame Manipulation (again) Concatenate these DataFrames. Fill the missing data with 0s End of explanation # Print a summary of the 'prices' times series. ## Your code goes here # Print the natural log returns of the first 10 values ## Your code goes here # Print the Muliplicative returns ## Your code goes here # Normlalize the returns and plot ## Your code goes here # Rolling mean ## Your code goes here # Rolling standard deviation ## Your code goes here # Plotting ## Your code goes here Explanation: Exercise 8 : Time Series Analysis a. Summary Print out a summary of the prices DataFrame from above. Take the log returns and print the first 10 values. Print the multiplicative returns of each company. Normalize and plot the returns from 2014 to 2015. Plot a 60 day window rolling mean of the prices. Plot a 60 day window rolling standfard deviation of the prices. End of explanation
8,203
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Seaice MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties --&gt; Model 2. Key Properties --&gt; Variables 3. Key Properties --&gt; Seawater Properties 4. Key Properties --&gt; Resolution 5. Key Properties --&gt; Tuning Applied 6. Key Properties --&gt; Key Parameter Values 7. Key Properties --&gt; Assumptions 8. Key Properties --&gt; Conservation 9. Grid --&gt; Discretisation --&gt; Horizontal 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Seaice Categories 12. Grid --&gt; Snow On Seaice 13. Dynamics 14. Thermodynamics --&gt; Energy 15. Thermodynamics --&gt; Mass 16. Thermodynamics --&gt; Salt 17. Thermodynamics --&gt; Salt --&gt; Mass Transport 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics 19. Thermodynamics --&gt; Ice Thickness Distribution 20. Thermodynamics --&gt; Ice Floe Size Distribution 21. Thermodynamics --&gt; Melt Ponds 22. Thermodynamics --&gt; Snow Processes 23. Radiative Processes 1. Key Properties --&gt; Model Name of seaice model used. 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 2. Key Properties --&gt; Variables List of prognostic variable in the sea ice model. 2.1. Prognostic Is Required Step7: 3. Key Properties --&gt; Seawater Properties Properties of seawater relevant to sea ice 3.1. Ocean Freezing Point Is Required Step8: 3.2. Ocean Freezing Point Value Is Required Step9: 4. Key Properties --&gt; Resolution Resolution of the sea ice grid 4.1. Name Is Required Step10: 4.2. Canonical Horizontal Resolution Is Required Step11: 4.3. Number Of Horizontal Gridpoints Is Required Step12: 5. Key Properties --&gt; Tuning Applied Tuning applied to sea ice model component 5.1. Description Is Required Step13: 5.2. Target Is Required Step14: 5.3. Simulations Is Required Step15: 5.4. Metrics Used Is Required Step16: 5.5. Variables Is Required Step17: 6. Key Properties --&gt; Key Parameter Values Values of key parameters 6.1. Typical Parameters Is Required Step18: 6.2. Additional Parameters Is Required Step19: 7. Key Properties --&gt; Assumptions Assumptions made in the sea ice model 7.1. Description Is Required Step20: 7.2. On Diagnostic Variables Is Required Step21: 7.3. Missing Processes Is Required Step22: 8. Key Properties --&gt; Conservation Conservation in the sea ice component 8.1. Description Is Required Step23: 8.2. Properties Is Required Step24: 8.3. Budget Is Required Step25: 8.4. Was Flux Correction Used Is Required Step26: 8.5. Corrected Conserved Prognostic Variables Is Required Step27: 9. Grid --&gt; Discretisation --&gt; Horizontal Sea ice discretisation in the horizontal 9.1. Grid Is Required Step28: 9.2. Grid Type Is Required Step29: 9.3. Scheme Is Required Step30: 9.4. Thermodynamics Time Step Is Required Step31: 9.5. Dynamics Time Step Is Required Step32: 9.6. Additional Details Is Required Step33: 10. Grid --&gt; Discretisation --&gt; Vertical Sea ice vertical properties 10.1. Layering Is Required Step34: 10.2. Number Of Layers Is Required Step35: 10.3. Additional Details Is Required Step36: 11. Grid --&gt; Seaice Categories What method is used to represent sea ice categories ? 11.1. Has Mulitple Categories Is Required Step37: 11.2. Number Of Categories Is Required Step38: 11.3. 
Category Limits Is Required Step39: 11.4. Ice Thickness Distribution Scheme Is Required Step40: 11.5. Other Is Required Step41: 12. Grid --&gt; Snow On Seaice Snow on sea ice details 12.1. Has Snow On Ice Is Required Step42: 12.2. Number Of Snow Levels Is Required Step43: 12.3. Snow Fraction Is Required Step44: 12.4. Additional Details Is Required Step45: 13. Dynamics Sea Ice Dynamics 13.1. Horizontal Transport Is Required Step46: 13.2. Transport In Thickness Space Is Required Step47: 13.3. Ice Strength Formulation Is Required Step48: 13.4. Redistribution Is Required Step49: 13.5. Rheology Is Required Step50: 14. Thermodynamics --&gt; Energy Processes related to energy in sea ice thermodynamics 14.1. Enthalpy Formulation Is Required Step51: 14.2. Thermal Conductivity Is Required Step52: 14.3. Heat Diffusion Is Required Step53: 14.4. Basal Heat Flux Is Required Step54: 14.5. Fixed Salinity Value Is Required Step55: 14.6. Heat Content Of Precipitation Is Required Step56: 14.7. Precipitation Effects On Salinity Is Required Step57: 15. Thermodynamics --&gt; Mass Processes related to mass in sea ice thermodynamics 15.1. New Ice Formation Is Required Step58: 15.2. Ice Vertical Growth And Melt Is Required Step59: 15.3. Ice Lateral Melting Is Required Step60: 15.4. Ice Surface Sublimation Is Required Step61: 15.5. Frazil Ice Is Required Step62: 16. Thermodynamics --&gt; Salt Processes related to salt in sea ice thermodynamics. 16.1. Has Multiple Sea Ice Salinities Is Required Step63: 16.2. Sea Ice Salinity Thermal Impacts Is Required Step64: 17. Thermodynamics --&gt; Salt --&gt; Mass Transport Mass transport of salt 17.1. Salinity Type Is Required Step65: 17.2. Constant Salinity Value Is Required Step66: 17.3. Additional Details Is Required Step67: 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics Salt thermodynamics 18.1. Salinity Type Is Required Step68: 18.2. Constant Salinity Value Is Required Step69: 18.3. Additional Details Is Required Step70: 19. Thermodynamics --&gt; Ice Thickness Distribution Ice thickness distribution details. 19.1. Representation Is Required Step71: 20. Thermodynamics --&gt; Ice Floe Size Distribution Ice floe-size distribution details. 20.1. Representation Is Required Step72: 20.2. Additional Details Is Required Step73: 21. Thermodynamics --&gt; Melt Ponds Characteristics of melt ponds. 21.1. Are Included Is Required Step74: 21.2. Formulation Is Required Step75: 21.3. Impacts Is Required Step76: 22. Thermodynamics --&gt; Snow Processes Thermodynamic processes in snow on sea ice 22.1. Has Snow Aging Is Required Step77: 22.2. Snow Aging Scheme Is Required Step78: 22.3. Has Snow Ice Formation Is Required Step79: 22.4. Snow Ice Formation Scheme Is Required Step80: 22.5. Redistribution Is Required Step81: 22.6. Heat Diffusion Is Required Step82: 23. Radiative Processes Sea Ice Radiative Processes 23.1. Surface Albedo Is Required Step83: 23.2. Ice Radiation Transmission Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'test-institute-1', 'sandbox-1', 'seaice') Explanation: ES-DOC CMIP6 Model Properties - Seaice MIP Era: CMIP6 Institute: TEST-INSTITUTE-1 Source ID: SANDBOX-1 Topic: Seaice Sub-Topics: Dynamics, Thermodynamics, Radiative Processes. Properties: 80 (63 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:43 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties --&gt; Model 2. Key Properties --&gt; Variables 3. Key Properties --&gt; Seawater Properties 4. Key Properties --&gt; Resolution 5. Key Properties --&gt; Tuning Applied 6. Key Properties --&gt; Key Parameter Values 7. Key Properties --&gt; Assumptions 8. Key Properties --&gt; Conservation 9. Grid --&gt; Discretisation --&gt; Horizontal 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Seaice Categories 12. Grid --&gt; Snow On Seaice 13. Dynamics 14. Thermodynamics --&gt; Energy 15. Thermodynamics --&gt; Mass 16. Thermodynamics --&gt; Salt 17. Thermodynamics --&gt; Salt --&gt; Mass Transport 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics 19. Thermodynamics --&gt; Ice Thickness Distribution 20. Thermodynamics --&gt; Ice Floe Size Distribution 21. Thermodynamics --&gt; Melt Ponds 22. Thermodynamics --&gt; Snow Processes 23. Radiative Processes 1. Key Properties --&gt; Model Name of seaice model used. 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of sea ice model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.variables.prognostic') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sea ice temperature" # "Sea ice concentration" # "Sea ice thickness" # "Sea ice volume per grid cell area" # "Sea ice u-velocity" # "Sea ice v-velocity" # "Sea ice enthalpy" # "Internal ice stress" # "Salinity" # "Snow temperature" # "Snow depth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Variables List of prognostic variable in the sea ice model. 2.1. 
Prognostic Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the sea ice component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS-10" # "Constant" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Seawater Properties Properties of seawater relevant to sea ice 3.1. Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. Ocean Freezing Point Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant seawater freezing point, specify this value. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Resolution Resolution of the sea ice grid 4.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.3. Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Tuning Applied Tuning applied to sea ice model component 5.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Target Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Simulations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Which simulations had tuning applied, e.g. all, not historical, only pi-control? * End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.4. Metrics Used Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List any observed metrics used in tuning model/parameters End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.5. Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Which variables were changed during the tuning process? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ice strength (P*) in units of N m{-2}" # "Snow conductivity (ks) in units of W m{-1} K{-1} " # "Minimum thickness of ice created in leads (h0) in units of m" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Key Parameter Values Values of key parameters 6.1. Typical Parameters Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N What values were specificed for the following parameters if used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Additional Parameters Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.description') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Assumptions Assumptions made in the sea ice model 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General overview description of any key assumptions made in this model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. 
On Diagnostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. Missing Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation Conservation in the sea ice component 8.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Provide a general description of conservation methodology. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.properties') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Mass" # "Salt" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.2. Properties Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in sea ice by the numerical schemes. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3 End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.4. Was Flux Correction Used Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does conservation involved flux correction? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.5. Corrected Conserved Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List any variables which are conserved by more than the numerical scheme alone. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Ocean grid" # "Atmosphere Grid" # "Own Grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9. Grid --&gt; Discretisation --&gt; Horizontal Sea ice discretisation in the horizontal 9.1. 
Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Grid on which sea ice is horizontal discretised? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Structured grid" # "Unstructured grid" # "Adaptive grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9.2. Grid Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the type of sea ice grid? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite differences" # "Finite elements" # "Finite volumes" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9.3. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the advection scheme? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 9.4. Thermodynamics Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the time step in the sea ice model thermodynamic component in seconds. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 9.5. Dynamics Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the time step in the sea ice model dynamic component in seconds. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.6. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional horizontal discretisation details. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Zero-layer" # "Two-layers" # "Multi-layers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. Grid --&gt; Discretisation --&gt; Vertical Sea ice vertical properties 10.1. Layering Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 10.2. Number Of Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using multi-layers specify how many. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.3. 
Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional vertical grid details. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 11. Grid --&gt; Seaice Categories What method is used to represent sea ice categories ? 11.1. Has Mulitple Categories Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Set to true if the sea ice model has multiple sea ice categories. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11.2. Number Of Categories Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using sea ice categories specify how many. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.3. Category Limits Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using sea ice categories specify each of the category limits. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.4. Ice Thickness Distribution Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the sea ice thickness distribution scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.other') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.5. Other Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 12. Grid --&gt; Snow On Seaice Snow on sea ice details 12.1. Has Snow On Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is snow on ice represented in this model? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 12.2. Number Of Snow Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels of snow on ice? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.3. 
Snow Fraction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the snow fraction on sea ice is determined End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.4. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional details related to snow on ice. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.horizontal_transport') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Dynamics Sea Ice Dynamics 13.1. Horizontal Transport Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of horizontal advection of sea ice? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Transport In Thickness Space Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of sea ice transport in thickness space (i.e. in thickness categories)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Hibler 1979" # "Rothrock 1975" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.3. Ice Strength Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which method of sea ice strength formulation is used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.redistribution') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Rafting" # "Ridging" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.4. Redistribution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which processes can redistribute sea ice (including thickness)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.rheology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Free-drift" # "Mohr-Coloumb" # "Visco-plastic" # "Elastic-visco-plastic" # "Elastic-anisotropic-plastic" # "Granular" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.5. Rheology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Rheology, what is the ice deformation formulation? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice latent heat (Semtner 0-layer)" # "Pure ice latent and sensible heat" # "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)" # "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Thermodynamics --&gt; Energy Processes related to energy in sea ice thermodynamics 14.1. Enthalpy Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the energy formulation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice" # "Saline ice" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.2. Thermal Conductivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of thermal conductivity is used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Conduction fluxes" # "Conduction and radiation heat fluxes" # "Conduction, radiation and latent heat transport" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.3. Heat Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of heat diffusion? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heat Reservoir" # "Thermal Fixed Salinity" # "Thermal Varying Salinity" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.4. Basal Heat Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method by which basal ocean heat flux is handled? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.5. Fixed Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.6. Heat Content Of Precipitation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method by which the heat content of precipitation is handled. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.7. 
Precipitation Effects On Salinity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Thermodynamics --&gt; Mass Processes related to mass in sea ice thermodynamics 15.1. New Ice Formation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method by which new sea ice is formed in open water. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Ice Vertical Growth And Melt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method that governs the vertical growth and melt of sea ice. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Floe-size dependent (Bitz et al 2001)" # "Virtual thin ice melting (for single-category)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.3. Ice Lateral Melting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of sea ice lateral melting? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.4. Ice Surface Sublimation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method that governs sea ice surface sublimation. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.5. Frazil Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of frazil ice formation. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 16. Thermodynamics --&gt; Salt Processes related to salt in sea ice thermodynamics. 16.1. Has Multiple Sea Ice Salinities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 16.2. Sea Ice Salinity Thermal Impacts Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does sea ice salinity impact the thermal properties of sea ice? 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17. Thermodynamics --&gt; Salt --&gt; Mass Transport Mass transport of salt 17.1. Salinity Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is salinity determined in the mass transport of salt calculation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 17.2. Constant Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant salinity value specify this value in PSU? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the salinity profile used. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics Salt thermodynamics 18.1. Salinity Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is salinity determined in the thermodynamic calculation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 18.2. Constant Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant salinity value specify this value in PSU? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the salinity profile used. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Virtual (enhancement of thermal conductivity, thin ice melting)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19. Thermodynamics --&gt; Ice Thickness Distribution Ice thickness distribution details. 19.1. Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is the sea ice thickness distribution represented? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Parameterised" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20. Thermodynamics --&gt; Ice Floe Size Distribution Ice floe-size distribution details. 20.1. Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is the sea ice floe-size represented? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.2. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Please provide further details on any parameterisation of floe-size. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 21. Thermodynamics --&gt; Melt Ponds Characteristics of melt ponds. 21.1. Are Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are melt ponds included in the sea ice model? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flocco and Feltham (2010)" # "Level-ice melt ponds" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21.2. Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What method of melt pond formulation is used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Albedo" # "Freshwater" # "Heat" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21.3. Impacts Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N What do melt ponds have an impact on? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 22. Thermodynamics --&gt; Snow Processes Thermodynamic processes in snow on sea ice 22.1. Has Snow Aging Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Set to True if the sea ice model has a snow aging scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.2. Snow Aging Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow aging scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 22.3. 
Has Snow Ice Formation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Set to True if the sea ice model has snow ice formation. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.4. Snow Ice Formation Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow ice formation scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.5. Redistribution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the impact of ridging on snow cover? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Single-layered heat diffusion" # "Multi-layered heat diffusion" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.6. Heat Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the heat diffusion through snow methodology in sea ice thermodynamics? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Parameterized" # "Multi-band albedo" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Radiative Processes Sea Ice Radiative Processes 23.1. Surface Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used to handle surface albedo. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Exponential attenuation" # "Ice radiation transmission per category" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.2. Ice Radiation Transmission Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Method by which solar radiation through sea ice is handled. End of explanation
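Every ENUM property above follows the same pattern: a fixed list of valid choices followed by a DOC.set_value call. A small guard against typos can therefore be layered on top of the documented calls. The helper below is only a sketch and is not part of the ES-DOC/pyesdoc API; the example choice is an assumption.
# Illustrative helper: validate an answer against the notebook's "Valid Choices" list.
SURFACE_ALBEDO_CHOICES = [
    "Delta-Eddington",
    "Parameterized",
    "Multi-band albedo",
    "Other: [Please specify]",
]

def set_enum(doc, property_id, value, valid_choices):
    if value not in valid_choices:
        raise ValueError("{!r} is not a valid choice for {}".format(value, property_id))
    doc.set_id(property_id)
    doc.set_value(value)

# Hypothetical usage for 23.1 (surface albedo):
# set_enum(DOC, 'cmip6.seaice.radiative_processes.surface_albedo',
#          "Delta-Eddington", SURFACE_ALBEDO_CHOICES)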
8,204
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1 align="center">TensorFlow Neural Network Lab</h1> <img src="image/notmnist.png"> In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J). Step5: <img src="image/Mean_Variance_Image.png" style="height Step6: Checkpoint All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed. Step7: Problem 2 Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer. <img src="image/network_diagram.png" style="height Step8: <img src="image/Learn_Rate_Tune_Image.png" style="height Step9: Test You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
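Before the lab code starts, it may help to see what the Min-Max rescaling requested in Problem 1 does to raw pixel values. The snippet below is a stand-alone illustration of the a=0.1, b=0.9 target range, not one of the lab's own cells:
import numpy as np

a, b = 0.1, 0.9                    # target range for the scaled pixels
x_min, x_max = 0.0, 255.0          # raw grayscale range
pixels = np.array([0.0, 128.0, 255.0])
scaled = a + (pixels - x_min) * (b - a) / (x_max - x_min)
print(scaled)                      # approximately [0.1, 0.5016, 0.9]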
Python Code: import hashlib import os import pickle from urllib.request import urlretrieve import numpy as np from PIL import Image from sklearn.model_selection import train_test_split from sklearn.preprocessing import LabelBinarizer from sklearn.utils import resample from tqdm import tqdm from zipfile import ZipFile print('All modules imported.') Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1> <img src="image/notmnist.png"> In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts. The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in! To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported". End of explanation def download(url, file): Download file from <url> :param url: URL to file :param file: Local file path if not os.path.isfile(file): print('Downloading ' + file + '...') urlretrieve(url, file) print('Download Finished') # Download the training and test dataset. download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip') download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip') # Make sure the files aren't corrupted assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\ 'notMNIST_train.zip file is corrupted. Remove the file and try again.' assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\ 'notMNIST_test.zip file is corrupted. Remove the file and try again.' # Wait until you see that all files have been downloaded. print('All files downloaded.') def uncompress_features_labels(file): Uncompress features and labels from a zip file :param file: The zip file to extract the data from features = [] labels = [] with ZipFile(file) as zipf: # Progress Bar filenames_pbar = tqdm(zipf.namelist(), unit='files') # Get features and labels from all files for filename in filenames_pbar: # Check if the file is a directory if not filename.endswith('/'): with zipf.open(filename) as image_file: image = Image.open(image_file) image.load() # Load image data as 1 dimensional array # We're using float32 to save on memory space feature = np.array(image, dtype=np.float32).flatten() # Get the the letter from the filename. This is the letter of the image. label = os.path.split(filename)[1][0] features.append(feature) labels.append(label) return np.array(features), np.array(labels) # Get the features and labels from the zip files train_features, train_labels = uncompress_features_labels('notMNIST_train.zip') test_features, test_labels = uncompress_features_labels('notMNIST_test.zip') # Limit the amount of data to work with a docker container docker_size_limit = 150000 train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit) # Set flags for feature engineering. This will prevent you from skipping an important step. 
is_features_normal = False is_labels_encod = False # Wait until you see that all features and labels have been uncompressed. print('All features and labels uncompressed.') Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J). End of explanation # Problem 1 - Implement Min-Max scaling for grayscale image data def normalize_grayscale(image_data): Normalize the image data with Min-Max scaling to a range of [0.1, 0.9] :param image_data: The image data to be normalized :return: Normalized image data # TODO: Implement Min-Max scaling for grayscale image data return 0.1 + (image_data * 0.8 / 255.) ### DON'T MODIFY ANYTHING BELOW ### # Test Cases np.testing.assert_array_almost_equal( normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])), [0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314, 0.125098039216, 0.128235294118, 0.13137254902, 0.9], decimal=3) np.testing.assert_array_almost_equal( normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])), [0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078, 0.896862745098, 0.9]) if not is_features_normal: train_features = normalize_grayscale(train_features) test_features = normalize_grayscale(test_features) is_features_normal = True print('Tests Passed!') if not is_labels_encod: # Turn labels into numbers and apply One-Hot Encoding encoder = LabelBinarizer() encoder.fit(train_labels) train_labels = encoder.transform(train_labels) test_labels = encoder.transform(test_labels) # Change to float32, so it can be multiplied against the features in TensorFlow, which are float32 train_labels = train_labels.astype(np.float32) test_labels = test_labels.astype(np.float32) is_labels_encod = True print('Labels One-Hot Encoded') assert is_features_normal, 'You skipped the step to normalize the features' assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels' # Get randomized datasets for training and validation train_features, valid_features, train_labels, valid_labels = train_test_split( train_features, train_labels, test_size=0.05, random_state=832289) print('Training features and labels randomized and split.') # Save the data for easy access pickle_file = 'notMNIST.pickle' if not os.path.isfile(pickle_file): print('Saving data to pickle file...') try: with open('notMNIST.pickle', 'wb') as pfile: pickle.dump( { 'train_dataset': train_features, 'train_labels': train_labels, 'valid_dataset': valid_features, 'valid_labels': valid_labels, 'test_dataset': test_features, 'test_labels': test_labels, }, pfile, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise print('Data cached in pickle file.') Explanation: <img src="image/Mean_Variance_Image.png" style="height: 75%;width: 75%; position: relative; right: 5%"> Problem 1 The first problem involves normalizing the features for your training and test data. Implement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9. Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255. 
Min-Max Scaling: $ X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}} $ If you're having trouble solving problem 1, you can view the solution here. End of explanation %matplotlib inline # Load the modules import pickle import math import numpy as np import tensorflow as tf from tqdm import tqdm import matplotlib.pyplot as plt # Reload the data pickle_file = 'notMNIST.pickle' with open(pickle_file, 'rb') as f: pickle_data = pickle.load(f) train_features = pickle_data['train_dataset'] train_labels = pickle_data['train_labels'] valid_features = pickle_data['valid_dataset'] valid_labels = pickle_data['valid_labels'] test_features = pickle_data['test_dataset'] test_labels = pickle_data['test_labels'] del pickle_data # Free up memory print('Data and modules loaded.') Explanation: Checkpoint All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed. End of explanation # All the pixels in the image (28 * 28 = 784) features_count = 784 # All the labels labels_count = 10 # TODO: Set the features and labels tensors features = tf.placeholder(tf.float32, (None, features_count)) labels = tf.placeholder(tf.float32, (None, labels_count)) # TODO: Set the weights and biases tensors weights = tf.Variable(tf.truncated_normal((features_count, labels_count), mean=0.0, stddev=0.5)) biases = tf.Variable(tf.zeros(labels_count)) ### DON'T MODIFY ANYTHING BELOW ### #Test Cases from tensorflow.python.ops.variables import Variable assert features._op.name.startswith('Placeholder'), 'features must be a placeholder' assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder' assert isinstance(weights, Variable), 'weights must be a TensorFlow variable' assert isinstance(biases, Variable), 'biases must be a TensorFlow variable' assert features._shape == None or (\ features._shape.dims[0].value is None and\ features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect' assert labels._shape == None or (\ labels._shape.dims[0].value is None and\ labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect' assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect' assert biases._variable._shape == (10), 'The shape of biases is incorrect' assert features._dtype == tf.float32, 'features must be type float32' assert labels._dtype == tf.float32, 'labels must be type float32' # Feed dicts for training, validation, and test session train_feed_dict = {features: train_features, labels: train_labels} valid_feed_dict = {features: valid_features, labels: valid_labels} test_feed_dict = {features: test_features, labels: test_labels} # Linear Function WX + b logits = tf.matmul(features, weights) + biases prediction = tf.nn.softmax(logits) # Cross entropy cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1) # Training loss loss = tf.reduce_mean(cross_entropy) # Create an operation that initializes all variables init = tf.global_variables_initializer() # Test Cases with tf.Session() as session: session.run(init) session.run(loss, feed_dict=train_feed_dict) session.run(loss, feed_dict=valid_feed_dict) session.run(loss, feed_dict=test_feed_dict) biases_data = session.run(biases) assert not np.count_nonzero(biases_data), 'biases must be zeros' print('Tests Passed!') # Determine if the predictions are correct is_correct_prediction = 
tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1)) # Calculate the accuracy of the predictions accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32)) print('Accuracy function created.') Explanation: Problem 2 Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer. <img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%"> For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network. For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors: - features - Placeholder tensor for feature data (train_features/valid_features/test_features) - labels - Placeholder tensor for label data (train_labels/valid_labels/test_labels) - weights - Variable Tensor with random numbers from a truncated normal distribution. - See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help. - biases - Variable Tensor with all zeros. - See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help. If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here. End of explanation # Change if you have memory restrictions batch_size = 128 # TODO: Find the best parameters for each configuration epochs = 5 learning_rate = 0.2 # best 1st config: e = 1; l_r = 0.1; acc = 0.7760 # best 2nd config: e = 5; l_r = 0.2; acc = 0.7905 ### DON'T MODIFY ANYTHING BELOW ### # Gradient Descent optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss) # The accuracy measured against the validation set validation_accuracy = 0.0 # Measurements use for graphing loss and accuracy log_batch_step = 50 batches = [] loss_batch = [] train_acc_batch = [] valid_acc_batch = [] with tf.Session() as session: session.run(init) batch_count = int(math.ceil(len(train_features)/batch_size)) for epoch_i in range(epochs): # Progress bar batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches') # The training cycle for batch_i in batches_pbar: # Get a batch of training features and labels batch_start = batch_i*batch_size batch_features = train_features[batch_start:batch_start + batch_size] batch_labels = train_labels[batch_start:batch_start + batch_size] # Run optimizer and get loss _, l = session.run( [optimizer, loss], feed_dict={features: batch_features, labels: batch_labels}) # Log every 50 batches if not batch_i % log_batch_step: # Calculate Training and Validation accuracy training_accuracy = session.run(accuracy, feed_dict=train_feed_dict) validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict) # Log batches previous_batch = batches[-1] if batches else 0 batches.append(log_batch_step + previous_batch) loss_batch.append(l) train_acc_batch.append(training_accuracy) valid_acc_batch.append(validation_accuracy) # Check accuracy against Validation data validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict) loss_plot = 
plt.subplot(211) loss_plot.set_title('Loss') loss_plot.plot(batches, loss_batch, 'g') loss_plot.set_xlim([batches[0], batches[-1]]) acc_plot = plt.subplot(212) acc_plot.set_title('Accuracy') acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy') acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy') acc_plot.set_ylim([0, 1.0]) acc_plot.set_xlim([batches[0], batches[-1]]) acc_plot.legend(loc=4) plt.tight_layout() plt.show() print('Validation accuracy at {}'.format(validation_accuracy)) Explanation: <img src="image/Learn_Rate_Tune_Image.png" style="height: 70%;width: 70%"> Problem 3 Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best acccuracy. Parameter configurations: Configuration 1 * Epochs: 1 * Learning Rate: * 0.8 * 0.5 * 0.1 * 0.05 * 0.01 Configuration 2 * Epochs: * 1 * 2 * 3 * 4 * 5 * Learning Rate: 0.2 The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed. If you're having trouble solving problem 3, you can view the solution here. End of explanation ### DON'T MODIFY ANYTHING BELOW ### # The accuracy measured against the test set test_accuracy = 0.0 with tf.Session() as session: session.run(init) batch_count = int(math.ceil(len(train_features)/batch_size)) for epoch_i in range(epochs): # Progress bar batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches') # The training cycle for batch_i in batches_pbar: # Get a batch of training features and labels batch_start = batch_i*batch_size batch_features = train_features[batch_start:batch_start + batch_size] batch_labels = train_labels[batch_start:batch_start + batch_size] # Run optimizer _ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels}) # Check accuracy against Test data test_accuracy = session.run(accuracy, feed_dict=test_feed_dict) assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy) print('Nice Job! Test Accuracy is {}'.format(test_accuracy)) Explanation: Test You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%. End of explanation
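As a possible follow-up (not part of the original lab), the trained weights can be used to look at a single prediction before the session closes. The sketch assumes it is placed inside the final with tf.Session() block above, after training, and that the LabelBinarizer mapped the classes to the letters A-J in alphabetical order:
# Inspect one test image's predicted letter (illustrative sketch).
letters = 'ABCDEFGHIJ'
sample = test_features[:1]                              # a single flattened 28x28 image
probs = session.run(prediction, feed_dict={features: sample})
predicted_letter = letters[int(np.argmax(probs))]
print('Predicted letter:', predicted_letter, 'with probability', float(probs.max()))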
8,205
Given the following text description, write Python code to implement the functionality described below step by step Description: LDA on simon's example The LDA is a well-known probabilistic model to handle mixtures of topics in an unsupervised way. It has been applied to a large number of problems (Blei, 2012). The original paper has over 10000 citations (Blei, 2003). My goal is to see if LDA methods have a place in phylogenetics First let see if we can make it work on trees Model formalism In statistical terms a generative model is a model for randomly generating observable data. The model specifies a joint distribution over observed and latent variables. The joint distribution for the LDA model can be shown as a graphical model. There is also a generative story (see below) that can sometimes be more intuitive than the plate diagram. Step1: Recall that the Dirichlet Process (DP) (Ferguson, 1973) is essentially a distribution over distributions, where each draw from a DP is itself a distribution and importantly for clustering applications it serves as a natural prior that lets the number of clusters grow as the data grows. The DP has a base distribution parameter $\beta$ and a strength or concentration parameter $\alpha$. $\alpha$ is a hyperprior for the DP over per-document topic distributions $\beta$ is the hyperprior for the DP over per-topic word distributions $\theta_{m}$ is the topic distribution for document $m$ $\phi_{k}$ is the word distribution for topic $k$ $z_{m,n}$ is the topic for the $n$th word in document $m$ $w_{m,n}$ is the specific word The generative story for phylogenetics We are still modeling topics. However, documents become sites and words become transitions. Transitions may defined in nucleotide, amino acid or codon space. $\alpha$ is a hyperprior for the DP over per-site topic distributions $\beta$ is the hyperprior for the DP over per-topic transition distributions $\theta_{m}$ is the topic distribution for gene $m$ $\phi_{k}$ is the nucleotide transition distribution for topic $k$ $z_{m,n}$ is the topic for the $n$th nucleotide transition in gene $m$ $w_{m,n}$ is the specific transition The generative process Choose $\theta_{m} \sim \textrm{Dir}(\alpha)$, where $m \in {1,...M}$ and $\textrm{Dir}(\alpha)$ is the Dirichlet distribtion for $\alpha$ Choose $\phi_{k} \sim \textrm{Dir}(\beta)$, where $k \in {1,...K}$ For each of the transition positions ($m$,$n$), where $n \in {1,...N}$, and $m \in {1,...M}$ Choose a topic $z_{m,n} \sim \textrm{Multinomial}(\theta_{m})$ Choose a transition $w_{m,n} \sim \textrm{Multinomial}(\phi_{m,n})$ $\phi$ is a $K*V$ Markov matrix each row of which denotes the transition distribution of a topic. Vocabulary and smoothing The vocabulary size ($V$) is the set of all types possible transitions that we want to consider. Each codon transition can be represented as codon1-codon2-feature. For example, CAC-CAT-CpG=True would represent a synonymous mutation for histidine in the vicinity of a CpG island. The reason that $\beta$ is not connected directly to $w_{n}$ is that word in documents tend to be sparse and this formulation is a smoothed version of LDA and it tends to help with deal with the large number of zero probabilities. Generate some data for an example First we specify the distributions that the sequences come from. Then we generate sequences -- we assume that each codon is governed by a specific class and we are assuming only one mutations away from root sequence Transform our sequences into codon transitions (i.e. 
words) ((61x61) - 61) if we ignore non-transitions
Python Code: from IPython.display import Image Image(filename='lda_plate.png') Explanation: LDA on simon's example The LDA is a well-known probabilistic model to handle mixtures of topics in an unsupervised way. It has been applied to a large number of problems (Blei, 2012). The original paper has over 10000 citations (Blei, 2003). My goal is to see if LDA methods have a place in phylogenetics First let see if we can make it work on trees Model formalism In statistical terms a generative model is a model for randomly generating observable data. The model specifies a joint distribution over observed and latent variables. The joint distribution for the LDA model can be shown as a graphical model. There is also a generative story (see below) that can sometimes be more intuitive than the plate diagram. End of explanation vocabulary = [] for cdn1 in sim.codons: for cdn2 in sim.codons: if cdn1 == cdn2: continue vocabulary.append(cdn1+"-"+cdn2) print 'vocabulary: ', len(vocabulary) transitions = np.zeros((N,M-1),).astype(str) # transition relative to root for i in range(sequences.shape[0]): for j in range(sequences.shape[1]): if j == sequences.shape[1] - 1: continue if sequences[i,0] == sequences[i,j+1]: transitions[i,j] = '-' else: transitions[i,j] = sequences[i,0]+"-"+sequences[i,j+1] print transitions # convert words into vector vocab = set([]) for w in range(transitions.shape[0]): posTransitions = transitions[w,:] for t in posTransitions: if t != '-': vocab.update([t]) vocab = list(vocab) print vocab ## documents are positions in alignment data = [] for w in range(transitions.shape[0]): posTransitions = transitions[w,:] document = [] for v in vocab: document.append(len(np.where(posTransitions == v)[0])) data.append(document) print document data = np.array(data) import numpy as np import pymc as pm K = 3 # number of topics V = len(vocab) # number of words D = 5 # number of documents #data = np.array([[1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0]]) alpha = np.ones(K) beta = np.ones(V) theta = pm.Container([pm.CompletedDirichlet("theta_%s" % i, pm.Dirichlet("ptheta_%s" % i, theta=alpha)) for i in range(D)]) phi = pm.Container([pm.CompletedDirichlet("phi_%s" % k, pm.Dirichlet("pphi_%s" % k, theta=beta)) for k in range(K)]) Wd = [len(doc) for doc in data] z = pm.Container([pm.Categorical('z_%i' % d, p = theta[d], size=Wd[d], value=np.random.randint(K, size=Wd[d])) for d in range(D)]) # cannot use p=phi[z[d][i]] here since phi is an ordinary list while z[d][i] is stochastic w = pm.Container([pm.Categorical("w_%i_%i" % (d,i), p = pm.Lambda('phi_z_%i_%i' % (d,i), lambda z=z[d][i], phi=phi: phi[z]), value=data[d][i], observed=True) for d in range(D) for i in range(Wd[d])]) model = pm.Model([theta, phi, z, w]) #mcmc = pm.MCMC(model) #mcmc.sample(100) M = pm.MCMC(model) M.sample(5000,burn=500) pm.Matplot.plot(M) Explanation: Recall that the Dirichlet Process (DP) (Ferguson, 1973) is essentially a distribution over distributions, where each draw from a DP is itself a distribution and importantly for clustering applications it serves as a natural prior that lets the number of clusters grow as the data grows. The DP has a base distribution parameter $\beta$ and a strength or concentration parameter $\alpha$. 
$\alpha$ is a hyperprior for the DP over per-document topic distributions $\beta$ is the hyperprior for the DP over per-topic word distributions $\theta_{m}$ is the topic distribution for document $m$ $\phi_{k}$ is the word distribution for topic $k$ $z_{m,n}$ is the topic for the $n$th word in document $m$ $w_{m,n}$ is the specific word The generative story for phylogenetics We are still modeling topics. However, documents become sites and words become transitions. Transitions may defined in nucleotide, amino acid or codon space. $\alpha$ is a hyperprior for the DP over per-site topic distributions $\beta$ is the hyperprior for the DP over per-topic transition distributions $\theta_{m}$ is the topic distribution for gene $m$ $\phi_{k}$ is the nucleotide transition distribution for topic $k$ $z_{m,n}$ is the topic for the $n$th nucleotide transition in gene $m$ $w_{m,n}$ is the specific transition The generative process Choose $\theta_{m} \sim \textrm{Dir}(\alpha)$, where $m \in {1,...M}$ and $\textrm{Dir}(\alpha)$ is the Dirichlet distribtion for $\alpha$ Choose $\phi_{k} \sim \textrm{Dir}(\beta)$, where $k \in {1,...K}$ For each of the transition positions ($m$,$n$), where $n \in {1,...N}$, and $m \in {1,...M}$ Choose a topic $z_{m,n} \sim \textrm{Multinomial}(\theta_{m})$ Choose a transition $w_{m,n} \sim \textrm{Multinomial}(\phi_{m,n})$ $\phi$ is a $K*V$ Markov matrix each row of which denotes the transition distribution of a topic. Vocabulary and smoothing The vocabulary size ($V$) is the set of all types possible transitions that we want to consider. Each codon transition can be represented as codon1-codon2-feature. For example, CAC-CAT-CpG=True would represent a synonymous mutation for histidine in the vicinity of a CpG island. The reason that $\beta$ is not connected directly to $w_{n}$ is that word in documents tend to be sparse and this formulation is a smoothed version of LDA and it tends to help with deal with the large number of zero probabilities. Generate some data for an example First we specify the distributions that the sequences come from. Then we generate sequences -- we assume that each codon is governed by a specific class and we are assuming only one mutations away from root sequence Transform our sequences into codon transitions (i.e. words) ((61x61) - 61) if we ignore non-transitions End of explanation
8,206
Given the following text description, write Python code to implement the functionality described below step by step Description: Gas Streaming in Disks Step2: Initialize the data First we need to define a function that tells us the speed of the gas at a given distance from the center of the star or galaxy. We consider only three simple cases here, always based on the balance of gravitation and centrifugal force in a spherical mass distribution Step3: Plotting the Rotation Curve Step4: This curve of velocity as function of radius is called a Rotation Curve, and extracting such a curve from an observation is crucial to understanding the mass distribution within a galaxy, or the mass of the young star at the center of the disk. We are assuming the gas is on circular orbits, which turns out is not always correct for galaxies. However, for this experiment we will keep that assumption. Step5: Backwards Projection This is where we take a point in the sky, and deproject back where in the galaxy this point came from and compute the velocity and projected velocity. The big advantage is the simplicity of computing the observable at each picked point in the sky. The big drawback is that the deprojection may not be trivial in cases where the model is not simple, e.g. non-circular motion and/or non-planar disks. Since we have a simple model here, let's take this approach. The so-called forward projection we would need to use some extra steps that only add to the complexity. Step7: Although we have defined a function velocity to compute the rotation velocity at any radius, this function cannot easily compute from a numpy array, as we just created on a grid on the sky. Thus we need a convenience function to do just that. You could also try and modify the velocity function so it takes a numpy array as input, and return a numpy !!!
Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np import math Explanation: Gas Streaming in Disks: circular orbit approach The gas streaming around a young star, or in a galactic disk is dominated by gravity. So we can simply compute the orbits of a point mass around a star, or in the more complex potential of a galactic disk, where we actually want to discover the mass distribution from the gas (or star) streaming. If we assume the mass distribution is spherical, we know circular orbits are the simple solution to gas flow or periodic orbits in such a potential. This will allow us to predict the velocity field we are observing in galaxies such as NGC 6503. End of explanation def velocity(radius, model='galaxy'): describe the streaming velocity as function of radius in or around an object such as a star or a galaxy. We usually define the velocity to be 1 at a radius of 1. if model == 'star': # A star has a keplerian rotation curve. The planets around our sun obey this law. if radius == 0.0: return 0.0 else: return 1.0/np.sqrt(radius) elif model == 'galaxy': # Most disk galaxies have a flat rotation curve with a linear slope in the center. if radius > 1.0: # flat rotation curve outside radius 1.0 return 1.0 else: # solid body inside radius 1.0, linearly rising rotation curve return radius elif model == 'plummer': # A plummer sphere was an early 1900s description of clusters, and is also not # a bad description for the inner portions of a galaxy. You can also view it # as a hybrid and softened version of the 'star' and 'galaxy' described above. # Note: not quite 1 at 1 yet # return radius / (1+radius*radius)**0.75 return radius / (0.5+0.5*radius*radius)**0.75 else: return 0.0 #model = 'star' #model = 'galaxy' model = 'plummer' rad = np.arange(0.0,4.0,0.05) vel = np.zeros(len(rad)) # this also works: vel = rad * 0.0 for i in range(len(rad)): vel[i] = velocity(rad[i],model) print("First, peak and Last value:",vel[0],vel.max(),vel[-1]) Explanation: Initialize the data First we need to define a function that tells us the speed of the gas at a given distance from the center of the star or galaxy. We consider only three simple cases here, always based on the balance of gravitation and centrifugal force in a spherical mass distribution: $$ { v^2 \over r } = {{ G M(<r) } \over r^2} $$ or $$ v = \sqrt{ {G M(<r) } \over r} $$ Of course this implies (and that's what we eventually want to do) that for a giving rotation curve, $v$, we can find out the mass distribution: $$ G M(<r) = v^2 r $$ End of explanation plt.plot(rad,vel) plt.xlabel("Radius") plt.ylabel("Velocity") plt.title("Rotation Curve (%s)" % model); Explanation: Plotting the Rotation Curve End of explanation # set the inclination of the disk with the line of sigh inc = 60 # (0 means face-on, 90 means edge-on) # some helper variables cosi = math.cos(inc*math.pi/180.0) sini = math.sin(inc*math.pi/180.0) # radius of the disk, and steps in radius r0 = 4.0 dr = 0.1 Explanation: This curve of velocity as function of radius is called a Rotation Curve, and extracting such a curve from an observation is crucial to understanding the mass distribution within a galaxy, or the mass of the young star at the center of the disk. We are assuming the gas is on circular orbits, which turns out is not always correct for galaxies. However, for this experiment we will keep that assumption. 
End of explanation dr = 0.5 x = np.arange(-r0,r0,dr) y = np.arange(-r0,r0,dr) xx,yy = np.meshgrid(x,y) # helper variables for interpolations rr = np.sqrt(xx*xx+(yy/cosi)**2) if r0/dr < 20: plt.scatter(xx,yy) else: print("not plotting too many gridpoints/dimension",r0/dr) Explanation: Backwards Projection This is where we take a point in the sky, and deproject back where in the galaxy this point came from and compute the velocity and projected velocity. The big advantage is the simplicity of computing the observable at each picked point in the sky. The big drawback is that the deprojection may not be trivial in cases where the model is not simple, e.g. non-circular motion and/or non-planar disks. Since we have a simple model here, let's take this approach. The so-called forward projection we would need to use some extra steps that only add to the complexity. End of explanation def velocity2d(rad2d, model): convenient helper function to take a 2d array of radii and return the same-shaped velocities (ny,nx) = rad2d.shape vel2d = rad2d.copy() # could also do np.zeros(nx*ny).reshape(ny,nx) for y in range(ny): for x in range(nx): vel2d[y,x] = velocity(rad2d[y,x],model) return vel2d vv = velocity2d(rr,model) vvmasked = np.ma.masked_where(rr>r0,vv) vobs = vvmasked * xx / rr * sini print("V_max:",vobs.max()) vmax = 1 vmax = vobs.max() if vmax > 0: plt.imshow(vobs,origin=['Lower'],vmin=-vmax, vmax=vmax) #plt.matshow(vobs,origin=['Lower'],vmin=-vmax, vmax=vmax) else: plt.imshow(vobs,origin=['Lower']) plt.colorbar() Explanation: Although we have defined a function velocity to compute the rotation velocity at any radius, this function cannot easily compute from a numpy array, as we just created on a grid on the sky. Thus we need a convenience function to do just that. You could also try and modify the velocity function so it takes a numpy array as input, and return a numpy !!! End of explanation
8,207
Given the following text description, write Python code to implement the functionality described below step by step Description: Relaunched Hedge Funds This program is a replication of Stata Script Version 0.2 Step1: Prepare Data Step2: Analysis Merge managers with the start/end dates Step3: Find First End Date for each PersonID. Variable first_end_date is defined as the performance ending date of the first fund of each person ID. Step6: Main Program (takes about 15 mins on server) Check the pair-wise duration gaps of each funds by Person ID
Python Code: import pandas as pd from datetime import timedelta # ****************** Program Settings ****************** Folder = "" # Location of program scripts Data = "temp/" # Location to which temporary files are generated DataSource = "data/" # Location of the original data files (ASCII) Gap_Days = 60 # To be quantify as a gap, the holidays that a fund manager takes must be at least 60 days. # If you would like to allow overlapping funds, change this to a negative number. # ****************************************************** Explanation: Relaunched Hedge Funds This program is a replication of Stata Script Version 0.2 End of explanation # Load manager details data_manager = pd.read_stata(DataSource + 'PeopleDetails.dta') data_manager = data_manager[data_manager.PersonTypeID==1] variables_to_keep = "ProductReference PersonID First Last JobTitle Address1 Address2 Address3 CityName StateName Zip CountryName".split() data_manager = data_manager[variables_to_keep] data_manager.head() # Extract Inception / PerformnaceEndData date data_dates = pd.read_stata(DataSource + 'ProductDetails.dta') variables_to_keep = "ProductReference InceptionDate PerformanceEndDate".split() data_dates = data_dates[variables_to_keep] data_dates.head() Explanation: Prepare Data End of explanation # Inner join - non-matches are excluded data_merged = data_manager.merge(data_dates, on='ProductReference', how='inner') data_merged = data_merged.sort(columns=['PersonID', 'PerformanceEndDate', 'InceptionDate', 'ProductReference']) data_merged.head() Explanation: Analysis Merge managers with the start/end dates End of explanation # Create a GroupBy object grouped = data_merged[['PersonID', 'PerformanceEndDate']].groupby('PersonID', as_index=False, axis=0) # these will aplit the DataFrame on its index (rows). grouped.groups print(grouped.get_group(4)) grouped.min().head() # Find the smallest value in the end date for each PersonID transformed = grouped.min() transformed['first_end_date'] = transformed['PerformanceEndDate'] transformed = transformed.drop('PerformanceEndDate', axis=1) transformed.head() Explanation: Find First End Date for each PersonID. Variable first_end_date is defined as the performance ending date of the first fund of each person ID. End of explanation # Merge back to the main dataset data_main = data_merged.merge(transformed, how='outer', left_on='PersonID', right_on='PersonID') data_main = data_main.sort(columns=['PersonID', 'InceptionDate', 'PerformanceEndDate', 'ProductReference']) ################################################ # WARNING: DEBUG CODE -- NEEDS TO BE DISABLED # data_main = data_main[:2000:] # data_main = data_main[data_main.PersonID==799] ################################################ def find_gaps(person_panel): '''The function finds the number of relaunched hedge funds. A gap is defined as there exists a fund, of which the PerformanceEndDate is before the inception date of all funds that were incepted later than this fund. Returns the number of gaps; and all of the Fund ID which is proceeding each gap. 
''' gaps_number = 0 fund_ID_preceed_gap = [] # Reset index from 0 person_panel = person_panel.reset_index(drop=True) # print("Her data panel is as below:") # print(person_panel) for i, i_row in person_panel.iterrows(): # Reset criteria status for a new i_row criteria_backward = True criteria_forward = True criteria_exist_after = False for j, j_row in person_panel.iterrows(): if i==j: # Skip continue # print('Comparison now made for row: ', i, j) # Days to define the gap is NOT yet incorporated. # Criteria 1 - Looking backward: check if i_end>j_end for all j of which j_inc<i_inc # For all funds earlier than i # j: Inc-----End # i: Inc------------End if j_row.InceptionDate<i_row.InceptionDate: criteria_backward *= check_backward(i_row, j_row) # Criteria 2 - Looking forward: check if i_end<j_inc for all j of which j_inc>i_inc # For all funds later than i # i: Inc---------End # |***GAP***| # j: Inc--------End if j_row.InceptionDate>=i_row.InceptionDate: criteria_forward *= check_forward(i_row, j_row) # Criteria 3 - There must be funds incepted after fund i if j_row.InceptionDate>=i_row.InceptionDate: criteria_exist_after = True # If Criteria 1,2,3 are all satisfied # Fund i is the fund proceeding a gap if criteria_backward==True and criteria_forward==True and criteria_exist_after==True: fund_ID_preceed_gap.append(i_row.ProductReference) gaps_number += 1 return (gaps_number, fund_ID_preceed_gap) def check_backward(i_row, j_row): Compares the PerformanceEndDate for i_row and j_row. It returns True if i_end>j_end; It returns False otherwise. if i_row.PerformanceEndDate >= j_row.PerformanceEndDate: return True else: return False def check_forward(i_row, j_row): Compares the PerformanceEndDate for i_row to the InceptionDate of j_row. Returns True if i_end<j_inc; Returns False otherwise. global Gap_Days # Find the global variable Gap_Days gap_days = timedelta(days=Gap_Days) if i_row.PerformanceEndDate + gap_days <= j_row.InceptionDate: return True else: return False grouped_by_person = data_main.groupby('PersonID', as_index=False, axis=0) grouped_by_person.groups # A new dict to store number of gaps found for each person gaps = {} fund_IDs = {} for person_id, person_panel in grouped_by_person: # print("The person's PersonID is :", person_id) gaps_number, fund_ID_preceed_gap = find_gaps(person_panel[['ProductReference', 'InceptionDate', 'PerformanceEndDate']]) gaps[person_id] = gaps_number fund_IDs[person_id] = fund_ID_preceed_gap # Transform two dict into DataFrame data_gaps = pd.DataFrame.from_dict(gaps, orient='index') data_gaps.columns = ['number_of_gaps'] data_gaps['PersonID'] = data_gaps.index data_gaps = data_gaps.reset_index(drop=True) data_gaps.head() data_gap_fund_IDs = pd.DataFrame.from_dict(fund_IDs, orient='index') data_gap_fund_IDs.columns = ['fund_IDs_proceeding_1st_gap', 'fund_IDs_proceeding_2nd_gap'] data_gap_fund_IDs['PersonID'] = data_gap_fund_IDs.index data_gap_fund_IDs = data_gap_fund_IDs.reset_index(drop=True) data_gap_fund_IDs.head() # Merge with the main dataset data_output = data_main.merge(data_gaps, how='outer', left_on='PersonID', right_on='PersonID') data_output = data_output.merge(data_gap_fund_IDs, how='outer', left_on='PersonID', right_on='PersonID') data_output.head() # Output datafiles data_output.to_excel('data_output.xlsx') data_output.to_stata('data_output.dta', convert_dates={13:'td', 14:'td', 15:'td'}) Explanation: Main Program (takes about 15 mins on server) Check the pair-wise duration gaps of each funds by Person ID End of explanation
8,208
Given the following text description, write Python code to implement the functionality described below step by step Description: String operations Step1: Q1. Concatenate x1 and x2. Step2: Q2. Repeat x three time element-wise. Step3: Q3-1. Capitalize the first letter of x element-wise.<br/> Q3-2. Lowercase x element-wise.<br/> Q3-3. Uppercase x element-wise.<br/> Q3-4. Swapcase x element-wise.<br/> Q3-5. Title-case x element-wise.<br/> Step4: Q4. Make the length of each element 20 and the string centered / left-justified / right-justified with paddings of _. Step5: Q5. Encode x in cp500 and decode it again. Step6: Q6. Insert a space between characters of x. Step7: Q7-1. Remove the leading and trailing whitespaces of x element-wise.<br/> Q7-2. Remove the leading whitespaces of x element-wise.<br/> Q7-3. Remove the trailing whitespaces of x element-wise. Step8: Q8. Split the element of x with spaces. Step9: Q9. Split the element of x to multiple lines. Step10: Q10. Make x a numeric string of 4 digits with zeros on its left. Step11: Q11. Replace "John" with "Jim" in x. Step12: Comparison Q12. Return x1 == x2, element-wise. Step13: Q13. Return x1 != x2, element-wise. Step14: String information Q14. Count the number of "l" in x, element-wise. Step15: Q15. Count the lowest index of "l" in x, element-wise. Step16: Q16-1. Check if each element of x is composed of digits only.<br/> Q16-2. Check if each element of x is composed of lower case letters only.<br/> Q16-3. Check if each element of x is composed of upper case letters only. Step17: Q17. Check if each element of x starts with "hi".
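Most of the questions below map directly onto routines in numpy's np.char module, which applies Python string methods element-wise. A quick, self-contained illustration of the call pattern:
import numpy as np

x1 = np.array(['Hello', 'Say'])
x2 = np.array([' world', ' something'])
print(np.char.add(x1, x2))        # element-wise concatenation
print(np.char.upper(x1))          # element-wise uppercasing
print(np.char.count(x1, 'l'))     # element-wise substring counts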
Python Code: from __future__ import print_function import numpy as np author = "kyubyong. https://github.com/Kyubyong/numpy_exercises" np.__version__ Explanation: String operations End of explanation x1 = np.array(['Hello', 'Say'], dtype=np.str) x2 = np.array([' world', ' something'], dtype=np.str) Explanation: Q1. Concatenate x1 and x2. End of explanation x = np.array(['Hello ', 'Say '], dtype=np.str) Explanation: Q2. Repeat x three time element-wise. End of explanation x = np.array(['heLLo woRLd', 'Say sOmething'], dtype=np.str) capitalized = ... lowered = ... uppered = ... swapcased = ... titlecased = ... print("capitalized =", capitalized) print("lowered =", lowered) print("uppered =", uppered) print("swapcased =", swapcased) print("titlecased =", titlecased) Explanation: Q3-1. Capitalize the first letter of x element-wise.<br/> Q3-2. Lowercase x element-wise.<br/> Q3-3. Uppercase x element-wise.<br/> Q3-4. Swapcase x element-wise.<br/> Q3-5. Title-case x element-wise.<br/> End of explanation x = np.array(['hello world', 'say something'], dtype=np.str) centered = ... left = ... right = ... print("centered =", centered) print("left =", left) print("right =", right) Explanation: Q4. Make the length of each element 20 and the string centered / left-justified / right-justified with paddings of _. End of explanation x = np.array(['hello world', 'say something'], dtype=np.str) encoded = ... decoded = ... print("encoded =", encoded) print("decoded =", decoded) Explanation: Q5. Encode x in cp500 and decode it again. End of explanation x = np.array(['hello world', 'say something'], dtype=np.str) Explanation: Q6. Insert a space between characters of x. End of explanation x = np.array([' hello world ', '\tsay something\n'], dtype=np.str) stripped = ... lstripped = ... rstripped = ... print("stripped =", stripped) print("lstripped =", lstripped) print("rstripped =", rstripped) Explanation: Q7-1. Remove the leading and trailing whitespaces of x element-wise.<br/> Q7-2. Remove the leading whitespaces of x element-wise.<br/> Q7-3. Remove the trailing whitespaces of x element-wise. End of explanation x = np.array(['Hello my name is John'], dtype=np.str) Explanation: Q8. Split the element of x with spaces. End of explanation x = np.array(['Hello\nmy name is John'], dtype=np.str) Explanation: Q9. Split the element of x to multiple lines. End of explanation x = np.array(['34'], dtype=np.str) Explanation: Q10. Make x a numeric string of 4 digits with zeros on its left. End of explanation x = np.array(['Hello nmy name is John'], dtype=np.str) Explanation: Q11. Replace "John" with "Jim" in x. End of explanation x1 = np.array(['Hello', 'my', 'name', 'is', 'John'], dtype=np.str) x2 = np.array(['Hello', 'my', 'name', 'is', 'Jim'], dtype=np.str) Explanation: Comparison Q12. Return x1 == x2, element-wise. End of explanation x1 = np.array(['Hello', 'my', 'name', 'is', 'John'], dtype=np.str) x2 = np.array(['Hello', 'my', 'name', 'is', 'Jim'], dtype=np.str) Explanation: Q13. Return x1 != x2, element-wise. End of explanation x = np.array(['Hello', 'my', 'name', 'is', 'Lily'], dtype=np.str) Explanation: String information Q14. Count the number of "l" in x, element-wise. End of explanation x = np.array(['Hello', 'my', 'name', 'is', 'Lily'], dtype=np.str) Explanation: Q15. Count the lowest index of "l" in x, element-wise. End of explanation x = np.array(['Hello', 'I', 'am', '20', 'years', 'old'], dtype=np.str) out1 = ... out2 = ... out3 = ... 
print("Digits only =", out1) print("Lower cases only =", out2) print("Upper cases only =", out3) Explanation: Q16-1. Check if each element of x is composed of digits only.<br/> Q16-2. Check if each element of x is composed of lower case letters only.<br/> Q16-3. Check if each element of x is composed of upper case letters only. End of explanation x = np.array(['he', 'his', 'him', 'his'], dtype=np.str) Explanation: Q17. Check if each element of x starts with "hi". End of explanation
8,209
Given the following text description, write Python code to implement the functionality described below step by step Description: metachars . any char \w any alphanumeric (a-z, A-Z, 0-9, _) \s any whitespace char (" _, \t, \n) \S any nonwhitespace \d any digit (0-9) . searches for an actual period Step1: define your own character classes inside your regular expression, write [aeiou] Step2: metacharacters ^ beginning of string $ end of string \b word boundary Step3: aside Step4: metacharacters 3 Step5: more metacharacters
Python Code: #subject lines that have dates, e.g. 12/01/99 [line for line in subjects if re.search("\d\d/\d\d/\d\d", line)] Explanation: metachars . any char \w any alphanumeric (a-z, A-Z, 0-9, _) \s any whitespace char (" _, \t, \n) \S any nonwhitespace \d any digit (0-9) . searches for an actual period End of explanation [line for line in subjects if re.search("[aeiou][aeiou][aeiou][aeiou]", line)] [line for line in subjects if re.search("F[wW]:", line)] Explanation: define your own character classes inside your regular expression, write [aeiou] End of explanation [line for line in subjects if res.search("^[Nn]ew [Yy]ork", line)] [line for line in subjects if re.search(r"\boil\b", line)] Explanation: metacharacters ^ beginning of string $ end of string \b word boundary End of explanation x = "this is \na test" print(x) x = "this is\t\t\tanother test" print(x) normal = "hello\nthere" raw = r"hello\nthere" print("normal:", normal) print("raw:", raw) Explanation: aside: metacharacters and escape characters \n new line \t tab \ single backslash (python interprets these) End of explanation [line for line in subjects if re.search(r"\b(?:[Cc]at|[kK]itty|[kK]itten)\b", line)] Explanation: metacharacters 3: quantifiers * match zero or more times {n} matches exactly n times {n,m} matches at least n times, but no more than m times {n,} matches at least n times, but maybe infinite times + match at least once ({1,}) ? match one time or zero times [line for line in subjects if re.search(r"^R string matches regular expression if at the first line, you encounter ....... End of explanation all_subjects = open("enronsubjects.txt").read() all_subjects[:1000] #looking for domain names [line for line in subjectts if re.search](r"\b\w+\.(?:com|net|org)\b", line) #re.findall(r"\b\w+\.(?:com|net|org)\b", all_subjects) #"will you pass teh pepper?" re.search "yes" #"will you pass the pepper?" re.findall "yes, here it is" *passes pepper* Explanation: more metacharacters: alternation ....... capturing read teh whole corpus in as one big string End of explanation
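The alternation/findall discussion above trails off, so here is a hedged sketch of how the corpus-wide search might be completed; it assumes the enronsubjects.txt file and the domain-name pattern quoted in the cells above:

import re
all_subjects = open("enronsubjects.txt").read()   # the whole corpus as one big string
# re.search answers "does the pattern occur?"; re.findall returns every match.
domains = re.findall(r"\b\w+\.(?:com|net|org)\b", all_subjects)
print(len(domains))    # number of domain-like tokens in the corpus
print(domains[:10])    # the first few matches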
8,210
Given the following text description, write Python code to implement the functionality described below step by step Description: Multi-objective memetic approach In this third tutorial we consider an example with two dimensional input data and we approach its solution using a multi-objective approach where, aside the loss, we consider the formula complexity as a second objective. We will use a memetic approach to learn the model parameters while evolution will shape the model itself. Eventually you will learn Step1: 1 - The data Step2: 2 - The symbolic regression problem Step3: 3 - The search algorithm Step4: 4 - The search Step5: 5 - Inspecting the non dominated front Step6: 6 - Lets have a look to the log content
Python Code: # Some necessary imports. import dcgpy import pygmo as pg # Sympy is nice to have for basic symbolic manipulation. from sympy import init_printing from sympy.parsing.sympy_parser import * init_printing() # Fundamental for plotting. from matplotlib import pyplot as plt %matplotlib inline Explanation: Multi-objective memetic approach In this third tutorial we consider an example with two dimensional input data and we approach its solution using a multi-objective approach where, aside the loss, we consider the formula complexity as a second objective. We will use a memetic approach to learn the model parameters while evolution will shape the model itself. Eventually you will learn: How to instantiate a multi-objective symbolic regression problem. How to use a memetic multi-objective approach to find suitable models for your data End of explanation # We load our data from some available ones shipped with dcgpy. # In this particular case we use the problem sinecosine from the paper: # Vladislavleva, Ekaterina J., Guido F. Smits, and Dick Den Hertog. # "Order of nonlinearity as a complexity measure for models generated by symbolic regression via pareto genetic # programming." IEEE Transactions on Evolutionary Computation 13.2 (2008): 333-349. X, Y = dcgpy.generate_sinecosine() from mpl_toolkits.mplot3d import Axes3D # And we plot them as to visualize the problem. fig = plt.figure() ax = fig.add_subplot(111, projection='3d') _ = ax.scatter(X[:,0], X[:,1], Y[:,0]) Explanation: 1 - The data End of explanation # We define our kernel set, that is the mathematical operators we will # want our final model to possibly contain. What to choose in here is left # to the competence and knowledge of the user. A list of kernels shipped with dcgpy # can be found on the online docs. The user can also define its own kernels (see the corresponding tutorial). ss = dcgpy.kernel_set_double(["sum", "diff", "mul", "sin", "cos"]) # We instantiate the symbolic regression optimization problem # Note how we specify to consider one ephemeral constant via # the kwarg n_eph. We also request 100 kernels with a linear # layout (this allows for the construction of longer expressions) and # we set the level back to 101 (in an attempt to skew the search towards # simple expressions) udp = dcgpy.symbolic_regression( points = X, labels = Y, kernels=ss(), rows = 1, cols = 100, n_eph = 1, levels_back = 101, multi_objective=True) prob = pg.problem(udp) print(udp) Explanation: 2 - The symbolic regression problem End of explanation # We instantiate here the evolutionary strategy we want to use to # search for models. Note we specify we want the evolutionary operators # to be applied also to the constants via the kwarg *learn_constants* uda = dcgpy.momes4cgp(gen = 250, max_mut = 4) algo = pg.algorithm(uda) algo.set_verbosity(10) Explanation: 3 - The search algorithm End of explanation # We use a population of 100 individuals pop = pg.population(prob, 100) # Here is where we run the actual evolution. Note that the screen output # will show in the terminal (not on your Jupyter notebook in case # you are using it). Note you will have to run this a few times before # solving the problem entirely. pop = algo.evolve(pop) Explanation: 4 - The search End of explanation # Compute here the non dominated front. ndf = pg.non_dominated_front_2d(pop.get_f()) # Inspect the front and print the proposed expressions. 
print("{: >20} {: >30}".format("Loss:", "Model:"), "\n") for idx in ndf: x = pop.get_x()[idx] f = pop.get_f()[idx] a = parse_expr(udp.prettier(x))[0] print("{: >20} | {: >30}".format(str(f[0]), str(a)), "|") # Lets have a look to the non dominated fronts in the final population. ax = pg.plot_non_dominated_fronts(pop.get_f()) _ = plt.xlabel("loss") _ = plt.ylabel("complexity") _ = plt.title("Non dominate fronts") Explanation: 5 - Inspecting the non dominated front End of explanation # Here we get the log of the latest call to the evolve log = algo.extract(dcgpy.momes4cgp).get_log() gen = [it[0] for it in log] loss = [it[2] for it in log] compl = [it[4] for it in log] # And here we plot, for example, the generations against the best loss _ = plt.plot(gen, loss) _ = plt.title('last call to evolve') _ = plt.xlabel('generations') _ = plt.ylabel('loss') Explanation: 6 - Lets have a look to the log content End of explanation
8,211
Given the following text description, write Python code to implement the functionality described below step by step Description: From http Step1: Array operations are very similar to that of the Python list. For example, the following code snippet creates a Python list and then converts it to a NumPy array Step2: In this case, array1 is known as a rank 1 (one dimensional) array. You can print out the array as usual using the print() function Step3: You can print out the shape of the array using the shape property Step4: The shape property returns a tuple containing the dimension of the array. In the above example, array1 is a 1-dimensional array of five items. Just like Python list, you can access items in the NumPy array using indexing as well as slicing Step5: You can also pass in a list containing the index of the items you want to extract to the array Step6: The following code snippet shows how you can create a two-dimensional array Step7: <h1>Creating and Initializing Arrays using NumPy</h1> NumPy contains a number of helper functions that make it easy to initialize arrays with some default values. For example, if you want to create an array containing all zeroes, you can use the zeros() function with a number indicating the size of the array Step8: If you want to create a rank 2 array, simply pass in a tuple Step9: To initialize the array to some other values other than zeroes, use the full() function Step10: In linear algebra, you often need to deal with an identity matrix, and you can create this in NumPy easily with the eye() function Step11: And if you need to populate an array with some random values, you can use the random.random() function to generate random values between 0.0 and 1.0 Step12: Finally, you can create a range of values from 0 to n-1 using the arange() function Step13: <h1>Boolean Array Indexing</h1> One of the many useful features of the NumPy array is its support for array boolean indexing. Consider the following example Step14: You first specify the condition, testing for even numbers, and then assign it to a variable Step15: The even_nums variable is now a NumPy array, containing a collection of Boolean values. If you print it out, you’ll see just that Step16: The True values indicate that the particular item is an even number. Using this Boolean array, you can now use it as an index to the numbers array Step17: The above statements could be written succinctly like this
Python Code: import numpy as np Explanation: From http://www.codemag.com/article/1611081 <h1>NumPy Array Basics</h1> In NumPy, an array is of type ndarray (n-dimensional array). A NumPy array is an array of homogeneous values (all of the same type), and all items occupy a contiguous block of memory. To use NumPy, you first need to import the numpy package: End of explanation l1 = [1,2,3,4,5] array1 = np.array(l1) # rank 1 array Explanation: Array operations are very similar to that of the Python list. For example, the following code snippet creates a Python list and then converts it to a NumPy array: End of explanation print (array1) # [1 2 3 4 5] Explanation: In this case, array1 is known as a rank 1 (one dimensional) array. You can print out the array as usual using the print() function: End of explanation print (array1.shape) # (5,) Explanation: You can print out the shape of the array using the shape property: End of explanation print ('array1:', array1) # [1 2 3 4 5] print ('array1.shape: ', array1.shape) # (5,) print ('array1[0]:', array1[0]) # 1 print ('array1[1]:', array1[1]) # 2 print ('array1[1:3]:', array1[1:3]) # [2 3] print ('array1[:-2]:', array1[:-2]) # [1 2 3] print ('array1[3:]', array1[3:]) # [4 5] Explanation: The shape property returns a tuple containing the dimension of the array. In the above example, array1 is a 1-dimensional array of five items. Just like Python list, you can access items in the NumPy array using indexing as well as slicing: End of explanation print ('array1[[2,3]]:', array1[[2,3]]) # [3,4] Explanation: You can also pass in a list containing the index of the items you want to extract to the array: End of explanation l2 = [6,7,8,9,0] array2 = np.array([l1,l2]) # rank 2 array print ('array2:', array2) ''' [[1 2 3 4 5] [6 7 8 9 0]] ''' print ('shape:', array2.shape) # (2,5) - 2 rows and # 5 columns print ('array2[0,0]:', array2[0,0]) # 1 print ('array2[0,1]:', array2[0,1]) # 2 print ('array2[1,0]:', array2[1,0]) # 6 Explanation: The following code snippet shows how you can create a two-dimensional array: End of explanation a1 = np.zeros(2) # array of rank 1 with all 0s print ('a1.shape:', a1.shape) # (2,) print ('a1[0]:', a1[0]) # 0.0 print ('a1[1]:', a1[1]) # 0.0 Explanation: <h1>Creating and Initializing Arrays using NumPy</h1> NumPy contains a number of helper functions that make it easy to initialize arrays with some default values. 
For example, if you want to create an array containing all zeroes, you can use the zeros() function with a number indicating the size of the array: End of explanation a2 = np.zeros((2,3)) # array of rank 2 with all 0s; # 2 rows and 3 columns print ('a2.shape:', a2.shape) # (2,3) print ('a2:', a2) Explanation: If you want to create a rank 2 array, simply pass in a tuple: End of explanation a3 = np.full((2,3), 8.0) # array of rank 2 # with all 8s print ('a3:', a3) Explanation: To initialize the array to some other values other than zeroes, use the full() function: End of explanation a4 = np.eye(4) # 4x4 identity matrix print ('a4:', a4) Explanation: In linear algebra, you often need to deal with an identity matrix, and you can create this in NumPy easily with the eye() function: End of explanation a5 = np.random.random((2,4)) # populate a rank 2 # array (2 rows # 4 columns) with # random values print ('a5:', a5) Explanation: And if you need to populate an array with some random values, you can use the random.random() function to generate random values between 0.0 and 1.0: End of explanation a6 = np.arange(10) # creates a range from 0 to 9 print ('a6:', a6) # [0 1 2 3 4 5 6 7 8 9] Explanation: Finally, you can create a range of values from 0 to n-1 using the arange() function: End of explanation nums = np.array([23,45,78,89,23,11,22]) Explanation: <h1>Boolean Array Indexing</h1> One of the many useful features of the NumPy array is its support for array boolean indexing. Consider the following example: You have a list of numbers and you need to retrieve all of the even numbers from the list. Using Python’s list, you need to iterate through all items in the list and perform a check individually. Using NumPy array, however, things are much easier. Look at the following example: End of explanation even_nums = nums % 2 == 0 Explanation: You first specify the condition, testing for even numbers, and then assign it to a variable: End of explanation print (even_nums) Explanation: The even_nums variable is now a NumPy array, containing a collection of Boolean values. If you print it out, you’ll see just that: End of explanation print (nums[even_nums]) Explanation: The True values indicate that the particular item is an even number. Using this Boolean array, you can now use it as an index to the numbers array: End of explanation print (nums[nums % 2 == 0]) Explanation: The above statements could be written succinctly like this: End of explanation
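As a small extension of the boolean-indexing idea (not part of the original article), several conditions can be combined with the element-wise & and | operators, and np.where recovers the matching positions:

import numpy as np
nums = np.array([23, 45, 78, 89, 23, 11, 22])
# Parentheses are required because & binds more tightly than the comparisons.
mask = (nums > 20) & (nums < 80)
print(nums[mask])           # the values strictly between 20 and 80
print(np.where(mask)[0])    # their index positions in the array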
8,212
Given the following text description, write Python code to implement the functionality described below step by step Description: <div class="alert alert-block alert-info" style="margin-top Step1: <a id="ref0"></a> <h2> Logistic Function </h2> Step2: Create a tensor ranging from -100 to 100 Step3: Create a sigmoid object Step4: Apply the element-wise function Sigmoid with the object Step5: Plot the results Step6: Apply the element-wise Sigmoid from the function module and plot the results Step7: <a id="ref0"></a> <h2>Build a Logistic Regression Using nn.Sequential</h2> Create a 1x1 tensor where x represents one data sample with one dimension, and 2x1 tensor X represents two data samples of one dimension Step8: Create a logistic regression object with the <code>nn.Sequential</code> model with a one-dimensional input Step9: The object is represented in the following diagram Step10: Make a prediction with one sample Step11: Calling the object performed the following operation Step12: Calling the object performed the following operation Step13: Create a logistic regression object with the <code>nn.Sequentia</code>l model with a two-dimensional input Step14: The object will apply the Sigmoid function to the output of the linear function as shown in the following diagram Step15: Make a prediction with one sample Step16: The operation is represented in the following diagram Step17: The operation is represented in the following diagram Step18: Create a 1x1 tensor where x represents one data sample with one dimension, and 3x1 tensor X represents one data sample of one dimension Step19: Create a model to predict one dimension Step20: In this case, the parameters are randomly insulated. You can view them the following ways Step21: Make a prediction with multiple samples Step22: Make a prediction with multiple samples Step23: Create a logistic regression object with a function with two inputs Step24: Create a 1x2 tensor where x represents one data sample with one dimension, and 3x2 tensor X represents one data sample of one dimension Step25: Make a prediction with one sample Step26: Make a prediction with multiple samples
Python Code: import torch.nn as nn import torch import matplotlib.pyplot as plt Explanation: <div class="alert alert-block alert-info" style="margin-top: 20px"> <a href="http://cocl.us/pytorch_link_top"><img src = "http://cocl.us/Pytorch_top" width = 950, align = "center"></a> <img src = "https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png" width = 200, align = "center"> <h1 align=center><font size = 5>Logistic Regression</font></h1> In this lab, you will cover logistic regression using Pytorch. # Table of Contents <div class="alert alert-block alert-info" style="margin-top: 20px"> <li><a href="#ref0">Logistic Function</a></li> <li><a href="#ref1">Build a Logistic Regression Using nn.Sequential</a></li> <li><a href="#ref2">Build Custom Modules</a></li> <br> <p></p> Estimated Time Needed: <strong>15 min</strong> </div> <hr> Import the following modules: End of explanation torch.manual_seed(2) Explanation: <a id="ref0"></a> <h2> Logistic Function </h2> End of explanation z=torch.arange(-100,100,0.1).view(-1, 1) z Explanation: Create a tensor ranging from -100 to 100: End of explanation sig=nn.Sigmoid() Explanation: Create a sigmoid object: End of explanation yhat=sig(z) Explanation: Apply the element-wise function Sigmoid with the object: End of explanation plt.plot(z.numpy(),yhat.numpy()) plt.xlabel('z') plt.ylabel('yhat') Explanation: Plot the results: End of explanation yhat=torch.sigmoid(z) plt.plot(z.numpy(),yhat.numpy()) Explanation: Apply the element-wise Sigmoid from the function module and plot the results: End of explanation x=torch.tensor([[1.0]]) X=torch.tensor([[1.0],[100]]) print('x=',x) print('X=',X) Explanation: <a id="ref0"></a> <h2>Build a Logistic Regression Using nn.Sequential</h2> Create a 1x1 tensor where x represents one data sample with one dimension, and 2x1 tensor X represents two data samples of one dimension: End of explanation model=nn.Sequential(nn.Linear(1,1),nn.Sigmoid()) Explanation: Create a logistic regression object with the <code>nn.Sequential</code> model with a one-dimensional input: End of explanation print("list(model.parameters()):\n ", list(model.parameters())) print( "\nmodel.state_dict():\n ",model.state_dict()) Explanation: The object is represented in the following diagram: <img src = "https://ibm.box.com/shared/static/8yjc6pyix9ga1x9eeog49u38i59uctog.png" width = 800, align = "center"> In this case, the parameters are randomly insulated. 
You can view them the following ways: End of explanation yhat=model(x) yhat Explanation: Make a prediction with one sample: End of explanation yhat=model(X) yhat Explanation: Calling the object performed the following operation: <img src = "https://ibm.box.com/shared/static/dwzrbmnhpmq6foy9fxtdhqt1samfdd0d.png" width = 400, align = "center"> Make a prediction with multiple samples: End of explanation x=torch.tensor([[1.0,1.0]]) X=torch.tensor([[1.0,1.0],[1.0,2.0],[1.0,3.0]]) print('x=',x) print('X=',X) Explanation: Calling the object performed the following operation: Create a 1x2 tensor where x represents one data sample with one dimension, and 2x3 tensor X represents one data sample of two dimension: End of explanation model=nn.Sequential(nn.Linear(2,1),nn.Sigmoid()) Explanation: Create a logistic regression object with the <code>nn.Sequentia</code>l model with a two-dimensional input: End of explanation print("list(model.parameters()):\n ", list(model.parameters())) print( "\nmodel.state_dict():\n ",model.state_dict()) Explanation: The object will apply the Sigmoid function to the output of the linear function as shown in the following diagram: <img src = "https://ibm.box.com/shared/static/nul06ttn4w40ogahvs9risc34suuymmg.png" width = 800, align = "center"> In this case, the parameters are randomly insulated. You can view them the following ways: End of explanation yhat=model(x) yhat Explanation: Make a prediction with one sample: End of explanation yhat=model(X) yhat Explanation: The operation is represented in the following diagram: <img src = "https://ibm.box.com/shared/static/pnarlh41gl7epmy2f8ob3l58pd0zh9wq.png" width = 500, align = "center"> Make a prediction with multiple samples: End of explanation class logistic_regression(nn.Module): def __init__(self,n_inputs): super(logistic_regression,self).__init__() self.linear=nn.Linear(n_inputs,1) def forward(self,x): yhat=torch.sigmoid(self.linear(x)) return yhat Explanation: The operation is represented in the following diagram: <img src = "https://ibm.box.com/shared/static/1rpau4ggzepzxzu01p2j4506d5kvobbj.png" width = 800, align = "center"> <a id="ref2"></a> <h2> Build Custom Modules</h2> In this section, you will build a custom model. The model or object function is identical to using nn.Sequential. Create a logistic regression custom module: End of explanation x=torch.tensor([[1.0]]) X=torch.tensor([[-100],[0],[100.0]]) print('x=',x) print('X=',X) Explanation: Create a 1x1 tensor where x represents one data sample with one dimension, and 3x1 tensor X represents one data sample of one dimension: End of explanation model=logistic_regression(1) Explanation: Create a model to predict one dimension: End of explanation print("list(model.parameters()):\n ", list(model.parameters())) print( "\nmodel.state_dict():\n ",model.state_dict()) Explanation: In this case, the parameters are randomly insulated. 
You can view them the following ways: End of explanation yhat=model(x) yhat Explanation: Make a prediction with one sample: End of explanation yhat=model(X) yhat Explanation: Make a prediction with multiple samples: End of explanation model=logistic_regression(2) Explanation: Create a logistic regression object with a function with two inputs: End of explanation x=torch.tensor([[1.0,2.0]]) X=torch.tensor([[100,-100],[0.0,0.0],[-100,100]]) print('x=',x) print('X=',X) Explanation: Create a 1x2 tensor where x represents one data sample with two dimensions, and a 3x2 tensor X represents three data samples of two dimensions: End of explanation yhat=model(x) yhat Explanation: Make a prediction with one sample: End of explanation yhat=model(X) yhat Explanation: Make a prediction with multiple samples: End of explanation
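The lab above only constructs the models. As a hedged sketch that goes beyond the original text, here is one way the custom logistic_regression module could be trained with binary cross-entropy loss; the toy dataset below is made up purely for illustration:

import torch
import torch.nn as nn

torch.manual_seed(0)
# Made-up 1-D data: negative inputs labelled 0, positive inputs labelled 1.
X_train = torch.linspace(-5, 5, steps=100).view(-1, 1)
y_train = (X_train > 0).float()

model = logistic_regression(1)     # the custom module defined above
criterion = nn.BCELoss()           # expects probabilities, which forward() already returns
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(100):
    optimizer.zero_grad()
    loss = criterion(model(X_train), y_train)
    loss.backward()
    optimizer.step()

print("final loss:", loss.item())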
8,213
Given the following text description, write Python code to implement the functionality described below step by step Description: FullAdder - Combinational Circuits This notebook walks through the implementation of a basic combinational circuit, a full adder. This example introduces many of the features of Magma including circuits, wiring, operators, and the type system. Start by importing magma and mantle. magma is the core system which implements circuits and the methods to compose them, and mantle is a library of useful circuits. Step1: A full adder has three single bit inputs, and returns the sum and the carry. The sum is the exclusive or of the 3 bits, the carry is 1 if any two of the inputs bits are 1. Here is a schematic of a full adder circuit (from logisim). <img src="images/full_adder_logisim.png" width="500"/> We start by defining a magma combinational function that implements a full adder. The full adder function takes three single bit inputs (type m.Bit) and returns two single bit outputs as a tuple. The first element of tuple is the sum, the second element is the carry. Note that the arguments and return values of the functions have type annotations using Python 3's typing syntax. We compute the sum and carry using standard Python bitwise operators &amp;, |, and ^. Step2: We can test our combinational function to verify that our implementation behaves as expected fault. We'll use the fault.PythonTester which will simulate the circuit using magma's Python simulator. Step3: combinational functions are polymorphic over Python and magma types. If the function is called with magma values, it will produce a circuit instance, wire up the inputs, and return references to the outputs. Otherwise, it will invoke the function in Python. For example, we can use the Python function to verify the circuit simulation. Step4: Circuits Now that we have an implementation of full_adder as a combinational function, we'll use it to construct a magma Circuit. A Circuit in magma corresponds to a module in verilog. This example shows using the combinational function inside a circuit definition, as opposed to using the Python implementation shown before. Step5: First, notice that the FullAdder is a subclass of Circuit. All magma circuits are classes in python. Second, the function IO creates the interface to the circuit. The arguments toIO are keyword arguments. The key is the name of the argument in the circuit, and the value is its type. In this circuit, all the inputs and outputs have Magma type Bit. We also qualify each type as an input or an output using the functions In and Out. Note that when we call the python function fulladder it is passed magma values not standard python values. In the previous cell, we tested fulladder with standard python ints, while in this case, the values passed to the Python fulladder function are magma values of type Bit. The Python bitwise operators for Magma types are overloaded to automatically create subcircuits to compute logical functions. fulladder returns two values. These values are assigned to the python variables O and COUT. Remember that assigning to a Python variable sets the variable to refer to the object. magma values are Python objects, so assigning an object to a variable creates a reference to that magma value. In order to complete the definition of the circuit, O and COUT need to be wired to the outputs in the interface. The python @= operator is overloaded to perform wiring. Let's inspect the circuit definition by printing the __repr__. 
Step6: We see that it has created an instance of the full_adder combinational function and wired up the interface. We can also inspect the contents of the full_adder circuit definition. Notice that it has lowered the Python operators into a structural representation of the primitive logicoperations. Step7: We can also inspect the code generated by the m.circuit.combinational decorator by looking in the .magma directory for a file named .magma/full_adder.py. When using m.circuit.combinational, magma will generate a file matching the name of the decorated function. You'll notice that the generated code introduces an extra temporary variable (this is an artifact of the SSA pass that magma runs to handle if/else statements). Step8: In the code above, a mux is imported and named phi. If the combinational circuit contains any if-then-else constructs, they will be transformed into muxes. Note also the m.wire function. m.wire(O0, io.I0) is equivalent to io.O0 @= O0. Staged testing with Fault fault is a python package for testing magma circuits. By default, fault is quiet, so we begin by enabling logging using the built-in logging module Step9: Earlier in the notebook, we showed an example using fault.PythonTester to simulate a circuit. This uses an interactive programming model where test actions are immediately dispatched to the underlying simulator (which is why we can perform assertions on the simulation values in Python. fault also provides a staged metaprogramming environment built upon the Tester class. Using the staged environment means values are not returned immediately to Python. Instead, the Python test code records a sequence of actions that are compiled and run in a later stage. A Tester is instantiated with a magma circuit. Step10: An instance of a Tester has an attribute .circuit that enables the user to record test actions. For example, inputs to a circuit can be poked by setting the attribute corresponding to the input port name. Step11: fault's default Tester provides the semantics of a cycle accurate simulator, so, unlike verilog, pokes do not create events that trigger computation. Instead, these poke values are staged, and the propogation of their effect occurs when the user calls the eval action. Step12: To assert that the output of the circuit is equal to a value, we use the expect method that are defined on the attributes corresponding to circuit output ports Step13: Because fault is a staged programming environment, the above actions are not executed until we have advanced to the next stage. In the first stage, the user records test actions (e.g. poke, eval, expect). In the second stage, the test is compiled and run using a target runtime. Here's examples of running the test using magma's python simulator, the coreir c++ simulator, and verilator. Step14: The tester also provides the same convenient __call__ interface we saw before. Step15: Generate Verilog Magma's default compiler will generate verilog using CoreIR Step16: Generate CoreIR We can also inspect the intermediate CoreIR used in the generation process. Step17: Here's an example of running a CoreIR pass on the intermediate representation.
Python Code: import magma as m import mantle Explanation: FullAdder - Combinational Circuits This notebook walks through the implementation of a basic combinational circuit, a full adder. This example introduces many of the features of Magma including circuits, wiring, operators, and the type system. Start by importing magma and mantle. magma is the core system which implements circuits and the methods to compose them, and mantle is a library of useful circuits. End of explanation @m.circuit.combinational def full_adder(A: m.Bit, B: m.Bit, C: m.Bit) -> (m.Bit, m.Bit): return A ^ B ^ C, A & B | B & C | C & A # sum, carry Explanation: A full adder has three single bit inputs, and returns the sum and the carry. The sum is the exclusive or of the 3 bits, the carry is 1 if any two of the inputs bits are 1. Here is a schematic of a full adder circuit (from logisim). <img src="images/full_adder_logisim.png" width="500"/> We start by defining a magma combinational function that implements a full adder. The full adder function takes three single bit inputs (type m.Bit) and returns two single bit outputs as a tuple. The first element of tuple is the sum, the second element is the carry. Note that the arguments and return values of the functions have type annotations using Python 3's typing syntax. We compute the sum and carry using standard Python bitwise operators &amp;, |, and ^. End of explanation import fault tester = fault.PythonTester(full_adder) assert tester(1, 0, 0) == (1, 0), "Failed" assert tester(0, 1, 0) == (1, 0), "Failed" assert tester(1, 1, 0) == (0, 1), "Failed" assert tester(1, 0, 1) == (0, 1), "Failed" assert tester(1, 1, 1) == (1, 1), "Failed" print("Success!") Explanation: We can test our combinational function to verify that our implementation behaves as expected fault. We'll use the fault.PythonTester which will simulate the circuit using magma's Python simulator. End of explanation assert tester(1, 0, 0) == full_adder(1, 0, 0), "Failed" assert tester(0, 1, 0) == full_adder(0, 1, 0), "Failed" assert tester(1, 1, 0) == full_adder(1, 1, 0), "Failed" assert tester(1, 0, 1) == full_adder(1, 0, 1), "Failed" assert tester(1, 1, 1) == full_adder(1, 1, 1), "Failed" print("Success!") Explanation: combinational functions are polymorphic over Python and magma types. If the function is called with magma values, it will produce a circuit instance, wire up the inputs, and return references to the outputs. Otherwise, it will invoke the function in Python. For example, we can use the Python function to verify the circuit simulation. End of explanation class FullAdder(m.Circuit): io = m.IO(I0=m.In(m.Bit), I1=m.In(m.Bit), CIN=m.In(m.Bit), O=m.Out(m.Bit), COUT=m.Out(m.Bit)) O, COUT = full_adder(io.I0, io.I1, io.CIN) io.O @= O io.COUT @= COUT Explanation: Circuits Now that we have an implementation of full_adder as a combinational function, we'll use it to construct a magma Circuit. A Circuit in magma corresponds to a module in verilog. This example shows using the combinational function inside a circuit definition, as opposed to using the Python implementation shown before. End of explanation print(repr(FullAdder)) Explanation: First, notice that the FullAdder is a subclass of Circuit. All magma circuits are classes in python. Second, the function IO creates the interface to the circuit. The arguments toIO are keyword arguments. The key is the name of the argument in the circuit, and the value is its type. In this circuit, all the inputs and outputs have Magma type Bit. 
We also qualify each type as an input or an output using the functions In and Out. Note that when we call the python function fulladder it is passed magma values not standard python values. In the previous cell, we tested fulladder with standard python ints, while in this case, the values passed to the Python fulladder function are magma values of type Bit. The Python bitwise operators for Magma types are overloaded to automatically create subcircuits to compute logical functions. fulladder returns two values. These values are assigned to the python variables O and COUT. Remember that assigning to a Python variable sets the variable to refer to the object. magma values are Python objects, so assigning an object to a variable creates a reference to that magma value. In order to complete the definition of the circuit, O and COUT need to be wired to the outputs in the interface. The python @= operator is overloaded to perform wiring. Let's inspect the circuit definition by printing the __repr__. End of explanation print(repr(full_adder.circuit_definition)) Explanation: We see that it has created an instance of the full_adder combinational function and wired up the interface. We can also inspect the contents of the full_adder circuit definition. Notice that it has lowered the Python operators into a structural representation of the primitive logicoperations. End of explanation with open(".magma/full_adder.py") as f: print(f.read()) Explanation: We can also inspect the code generated by the m.circuit.combinational decorator by looking in the .magma directory for a file named .magma/full_adder.py. When using m.circuit.combinational, magma will generate a file matching the name of the decorated function. You'll notice that the generated code introduces an extra temporary variable (this is an artifact of the SSA pass that magma runs to handle if/else statements). End of explanation import logging logging.basicConfig(level=logging.INFO) import fault Explanation: In the code above, a mux is imported and named phi. If the combinational circuit contains any if-then-else constructs, they will be transformed into muxes. Note also the m.wire function. m.wire(O0, io.I0) is equivalent to io.O0 @= O0. Staged testing with Fault fault is a python package for testing magma circuits. By default, fault is quiet, so we begin by enabling logging using the built-in logging module End of explanation tester = fault.Tester(FullAdder) Explanation: Earlier in the notebook, we showed an example using fault.PythonTester to simulate a circuit. This uses an interactive programming model where test actions are immediately dispatched to the underlying simulator (which is why we can perform assertions on the simulation values in Python. fault also provides a staged metaprogramming environment built upon the Tester class. Using the staged environment means values are not returned immediately to Python. Instead, the Python test code records a sequence of actions that are compiled and run in a later stage. A Tester is instantiated with a magma circuit. End of explanation tester.circuit.I0 = 1 tester.circuit.I1 = 1 tester.circuit.CIN = 1 Explanation: An instance of a Tester has an attribute .circuit that enables the user to record test actions. For example, inputs to a circuit can be poked by setting the attribute corresponding to the input port name. 
End of explanation tester.eval() Explanation: fault's default Tester provides the semantics of a cycle accurate simulator, so, unlike verilog, pokes do not create events that trigger computation. Instead, these poke values are staged, and the propogation of their effect occurs when the user calls the eval action. End of explanation tester.circuit.O.expect(1) tester.circuit.COUT.expect(1) Explanation: To assert that the output of the circuit is equal to a value, we use the expect method that are defined on the attributes corresponding to circuit output ports End of explanation # compile_and_run throws an exception if the test fails tester.compile_and_run("verilator") Explanation: Because fault is a staged programming environment, the above actions are not executed until we have advanced to the next stage. In the first stage, the user records test actions (e.g. poke, eval, expect). In the second stage, the test is compiled and run using a target runtime. Here's examples of running the test using magma's python simulator, the coreir c++ simulator, and verilator. End of explanation O, COUT = tester(1, 0, 0) tester.expect(O, 1) tester.expect(COUT, 0) tester.compile_and_run("verilator") Explanation: The tester also provides the same convenient __call__ interface we saw before. End of explanation m.compile("build/FullAdder", FullAdder, inline=True) %cat build/FullAdder.v Explanation: Generate Verilog Magma's default compiler will generate verilog using CoreIR End of explanation %cat build/FullAdder.json Explanation: Generate CoreIR We can also inspect the intermediate CoreIR used in the generation process. End of explanation !coreir -i build/FullAdder.json -p instancecount Explanation: Here's an example of running a CoreIR pass on the intermediate representation. End of explanation
8,214
Given the following text description, write Python code to implement the functionality described below step by step Description: Convolutional Autoencoder Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data. Step1: Network Architecture The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below. Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoder Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but it reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find with the TensorFlow API, with tf.nn.conv2d_transpose. However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling. Exercise Step2: Training As before, here wi'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays. Step3: Denoising As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practive. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images. Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. 
I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before. Exercise Step4: Checking out the performance Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
Python Code: %matplotlib inline import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data', validation_size=0) img = mnist.train.images[2] plt.imshow(img.reshape((28, 28)), cmap='Greys_r') Explanation: Convolutional Autoencoder Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data. End of explanation learning_rate = 0.001 inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='input') targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='input') ### Encoder conv1 = tf.layers.conv2d(inputs_, 16, (3, 3), padding='SAME', activation=tf.nn.relu) # Now 28x28x16 maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='SAME') # Now 14x14x16 conv2 = tf.layers.conv2d(maxpool1, 8, (3, 3), padding='SAME', activation=tf.nn.relu) # Now 14x14x8 maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='SAME') # Now 7x7x8 conv3 = tf.layers.conv2d(maxpool2, 8, (3, 3), padding='SAME', activation=tf.nn.relu) # Now 7x7x8 encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='SAME') # Now 4x4x8 ### Decoder upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7)) # Now 7x7x8 conv4 = tf.layers.conv2d(upsample1, 8, (3, 3), padding='SAME', activation=tf.nn.relu) # Now 7x7x8 upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14)) # Now 14x14x8 conv5 = tf.layers.conv2d(upsample2, 8, (3, 3), padding='SAME', activation=tf.nn.relu) # Now 14x14x8 upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28)) # Now 28x28x8 conv6 = tf.layers.conv2d(upsample3, 16, (3, 3), padding='SAME', activation=tf.nn.relu) # Now 28x28x16 logits = tf.layers.conv2d(conv6, 1, (3, 3), padding='SAME', activation=None) #Now 28x28x1 # Pass logits through sigmoid to get reconstructed image decoded = tf.nn.sigmoid(logits) # Pass logits through sigmoid and calculate the cross-entropy loss loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits) # Get cost and define the optimizer cost = tf.reduce_mean(loss) opt = tf.train.AdamOptimizer(learning_rate).minimize(cost) Explanation: Network Architecture The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below. Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoder Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. 
They work almost exactly the same as convolutional layers, but it reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find with the TensorFlow API, with tf.nn.conv2d_transpose. However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling. Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor. End of explanation sess = tf.Session() epochs = 20 batch_size = 200 sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) imgs = batch[0].reshape((-1, 28, 28, 1)) batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs, targets_: imgs}) print("Epoch: {}/{}...".format(e+1, epochs), "Training loss: {:.4f}".format(batch_cost)) fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4)) in_imgs = mnist.test.images[:10] reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))}) for images, row in zip([in_imgs, reconstructed], axes): for img, ax in zip(images, row): ax.imshow(img.reshape((28, 28)), cmap='Greys_r') ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) fig.tight_layout(pad=0.1) sess.close() Explanation: Training As before, here wi'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays. 
End of explanation learning_rate = 0.001 inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs') targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets') ### Encoder conv1 = tf.layers.conv2d(inputs_, 32, (3, 3), padding='SAME', activation=tf.nn.relu) # Now 28x28x32 maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='SAME') # Now 14x14x32 conv2 = tf.layers.conv2d(maxpool1, 32, (3, 3), padding='SAME', activation=tf.nn.relu) # Now 14x14x32 maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='SAME') # Now 7x7x32 conv3 = tf.layers.conv2d(maxpool2, 16, (3, 3), padding='SAME', activation=tf.nn.relu) # Now 7x7x16 encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='SAME') # Now 4x4x16 ### Decoder upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7)) # Now 7x7x16 conv4 = tf.layers.conv2d(upsample1, 16, (3, 3), padding='SAME', activation=tf.nn.relu) # Now 7x7x16 upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14)) # Now 14x14x16 conv5 = tf.layers.conv2d(upsample2, 32, (3, 3), padding='SAME', activation=tf.nn.relu) # Now 14x14x32 upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28)) # Now 28x28x32 conv6 = tf.layers.conv2d(upsample3, 32, (3, 3), padding='SAME', activation=tf.nn.relu) # Now 28x28x32 logits = tf.layers.conv2d(conv6, 1, (3, 3), padding='SAME', activation=None) #Now 28x28x1 # Pass logits through sigmoid to get reconstructed image decoded = tf.nn.sigmoid(logits) # Pass logits through sigmoid and calculate the cross-entropy loss loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits) # Get cost and define the optimizer cost = tf.reduce_mean(loss) opt = tf.train.AdamOptimizer(learning_rate).minimize(cost) sess = tf.Session() epochs = 100 batch_size = 200 # Set's how much noise we're adding to the MNIST images noise_factor = 0.5 sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) # Get images from the batch imgs = batch[0].reshape((-1, 28, 28, 1)) # Add random noise to the input images noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape) # Clip the images to be between 0 and 1 noisy_imgs = np.clip(noisy_imgs, 0., 1.) # Noisy images as inputs, original images as targets batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs, targets_: imgs}) print("Epoch: {}/{}...".format(e+1, epochs), "Training loss: {:.4f}".format(batch_cost)) Explanation: Denoising As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practive. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images. Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before. Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. 
I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers. End of explanation fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4)) in_imgs = mnist.test.images[:10] noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape) noisy_imgs = np.clip(noisy_imgs, 0., 1.) reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))}) for images, row in zip([noisy_imgs, reconstructed], axes): for img, ax in zip(images, row): ax.imshow(img.reshape((28, 28)), cmap='Greys_r') ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) fig.tight_layout(pad=0.1) Explanation: Checking out the performance Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is. End of explanation
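One more hedged sketch beyond the original notebook: the narrow encoded layer of the denoising network can be inspected directly to confirm how compressed the representation is. This reuses the open session, the inputs_/targets_ placeholders and noise_factor defined above.

import numpy as np

test_imgs = mnist.test.images[:10].reshape((-1, 28, 28, 1))
noisy = np.clip(test_imgs + noise_factor * np.random.randn(*test_imgs.shape), 0., 1.)

# The encoded tensor is 4x4x16, i.e. 256 numbers per image versus 784 input pixels.
codes = sess.run(encoded, feed_dict={inputs_: noisy})
print("encoded shape:", codes.shape)

# Test-set reconstruction cost for the denoiser (clean images as targets).
print("test cost:", sess.run(cost, feed_dict={inputs_: noisy, targets_: test_imgs}))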
8,215
Given the following text description, write Python code to implement the functionality described below step by step Description: Add2 Circuit Now let's build a 2-bit adder using full_adder. We'll use a simple ripple carry adder design by connecting the carry out of one full adder to the carry in of the next full adder. The resulting adder will accept as input a carry in, and generate a final carry out. Here's a logisim diagram of the circuit we will construct Step2: Although we are making an 2-bit adder, we do this using a for loop that can be generalized to construct an n-bit adder. To use a for loop inside combinational, we use the ast_tools package's macro support. These loop_unroll macro will expand the for loop before passing the function to m.circuit.combinational. Each time through the for loop we call full adder. Calling an circuit instance has the effect of wiring up the arguments to the inputs of the circuit. That is, O, COUT = full_adder(I0, I1, CIN) is equivalent to m.wire(IO, full_adder.I0) m.wire(I1, full_adder.I1) m.wire(CIN, full_adder.CIN) O = full_adder.O COUT = full_adder.COUT The outputs of the circuit are returned. Inside this loop we append single bit outputs from the full adders to the Python list O. We also set the CIN of the next full adder to the COUT of the previous instance. Finally, we then convert the list O to a UInt[n]. In addition to Bits[n], magma also has built in types UInt[n] and SInt[n] to represent unsigned and signed ints. magma also has type conversion functions bits, uint, and sint to convert between different types. In this example, m.uint(C) converts the list of bits to a UInt[len(C)]. Add Generator One question you may be asking yourself, is how can this code be generalized to produce an n-bit adder. We do this by creating an add Generator. A Generator is a Python class that defines a static generate method which takes parameters and returns a circuit class. Calling the generator with different parameter values will create and instantiate different circuits. The power of magma results from being to use all the features of Python to create powerful hardware generators. Here is the code Step3: To generate a Circuit from a Generator, we can directly call the generate static method. Step4: Let's inspected the generated code Step5: We can instantiate a Generator using the standard object syntax, which will implicitly call the generate method based on teh parameters, and return an instance of the generated Circuit. By default, this logic will cache definitions based on the generator parameters. Step6: Here's an example of using the convenience add function which handles the Generator instantiation for us
Python Code: import ast_tools from ast_tools.transformers.loop_unroller import unroll_for_loops from ast_tools.passes import begin_rewrite, end_rewrite, loop_unroll @m.circuit.combinational def full_adder(A: m.Bit, B: m.Bit, C: m.Bit) -> (m.Bit, m.Bit): return A ^ B ^ C, A & B | B & C | C & A # sum, carry @m.circuit.combinational @end_rewrite() @loop_unroll() @begin_rewrite() def _add(I0: m.Bits[2], I1: m.Bits[2], CIN: m.Bit) -> (m.Bits[2], m.Bit): O = [] COUT = io.CIN for i in ast_tools.macros.unroll(range(2)): Oi, COUT = full_adder(io.I0[i], io.I1[i], COUT) O.append(Oi) return m.uint(O), COUT print(repr(_add.circuit_definition)) Explanation: Add2 Circuit Now let's build a 2-bit adder using full_adder. We'll use a simple ripple carry adder design by connecting the carry out of one full adder to the carry in of the next full adder. The resulting adder will accept as input a carry in, and generate a final carry out. Here's a logisim diagram of the circuit we will construct: <img src="logisim/adder.png" width="500"/> End of explanation class Add(m.Generator): @staticmethod def generate(width: int): T = m.UInt[width] @m.circuit.combinational @end_rewrite() @loop_unroll() @begin_rewrite() def _add(I0: T, I1: T, CIN: m.Bit) -> (T, m.Bit): O = [] COUT = io.CIN for i in ast_tools.macros.unroll(range(width)): Oi, COUT = full_adder(io.I0[i], io.I1[i], COUT) O.append(Oi) return m.uint(O), COUT return _add def add(i0, i1, cin): We define a convenience function that instantiates the add generator for us based on the width of the inputs. if len(i0) != len(i1): raise TypeError("add arguments must have same length") if not isinstance(cin, m.Bit): raise TypeError("add cin must be a Bit") if (not isinstance(i0, m.UInt) and not isinstance(i1, m.UInt)): raise TypeError("add expects UInt inputs") return Add(len(i0))(i0, i1, cin) Explanation: Although we are making an 2-bit adder, we do this using a for loop that can be generalized to construct an n-bit adder. To use a for loop inside combinational, we use the ast_tools package's macro support. These loop_unroll macro will expand the for loop before passing the function to m.circuit.combinational. Each time through the for loop we call full adder. Calling an circuit instance has the effect of wiring up the arguments to the inputs of the circuit. That is, O, COUT = full_adder(I0, I1, CIN) is equivalent to m.wire(IO, full_adder.I0) m.wire(I1, full_adder.I1) m.wire(CIN, full_adder.CIN) O = full_adder.O COUT = full_adder.COUT The outputs of the circuit are returned. Inside this loop we append single bit outputs from the full adders to the Python list O. We also set the CIN of the next full adder to the COUT of the previous instance. Finally, we then convert the list O to a UInt[n]. In addition to Bits[n], magma also has built in types UInt[n] and SInt[n] to represent unsigned and signed ints. magma also has type conversion functions bits, uint, and sint to convert between different types. In this example, m.uint(C) converts the list of bits to a UInt[len(C)]. Add Generator One question you may be asking yourself, is how can this code be generalized to produce an n-bit adder. We do this by creating an add Generator. A Generator is a Python class that defines a static generate method which takes parameters and returns a circuit class. Calling the generator with different parameter values will create and instantiate different circuits. The power of magma results from being to use all the features of Python to create powerful hardware generators. 
Here is the code: End of explanation
from fault import PythonTester

Add2 = Add.generate(2)
add2 = PythonTester(Add2)
print(add2(1,2,0)[0] == 3)
assert add2(1, 2, 0) == (3, 0), "Failed"
print("Success!")
Explanation: To generate a Circuit from a Generator, we can directly call the generate static method.
End of explanation
m.compile("build/Add2", Add2, inline=True)
%cat build/Add2.v
!coreir -i build/Add2.json -p instancecount
Explanation: Let's inspect the generated code
End of explanation
class Main(m.Circuit):
    io = m.IO(I0=m.In(m.UInt[3]), I1=m.In(m.UInt[3]), CIN=m.In(m.Bit),
              O=m.Out(m.UInt[3]), COUT=m.Out(m.Bit))
    O, COUT = Add(3)(io.I0, io.I1, io.CIN)
    io.O @= O
    io.COUT @= COUT

print(repr(Main))
Explanation: We can instantiate a Generator using the standard object syntax, which will implicitly call the generate method based on the parameters, and return an instance of the generated Circuit. By default, this logic will cache definitions based on the generator parameters.
End of explanation
class Main(m.Circuit):
    io = m.IO(I0=m.In(m.UInt[3]), I1=m.In(m.UInt[3]), CIN=m.In(m.Bit),
              O=m.Out(m.UInt[3]), COUT=m.Out(m.Bit))
    O, COUT = add(io.I0, io.I1, io.CIN)
    io.O @= O
    io.COUT @= COUT

print(repr(Main))
Explanation: Here's an example of using the convenience add function, which handles the Generator instantiation for us
End of explanation
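As a small aside (not part of the original notebook), the same tester object can be used to check the 2-bit generator exhaustively rather than on a single input. This sketch assumes the call behaviour shown above, where add2(i0, i1, cin) returns a (sum, carry-out) pair.

# Hedged sketch: exhaustively check the 2-bit adder against Python integer addition.
for i0 in range(4):           # all 2-bit values for the first operand
    for i1 in range(4):       # all 2-bit values for the second operand
        for cin in (0, 1):    # both carry-in values
            total = i0 + i1 + cin
            expected = (total % 4, total // 4)   # low 2 bits, carry out
            assert add2(i0, i1, cin) == expected, (i0, i1, cin)
print("exhaustive check passed")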
8,216
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Fully-Connected Neural Nets In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures. In this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this Step4: Affine layer Step5: Affine layer Step6: ReLU layer Step7: ReLU layer Step8: "Sandwich" layers There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py. For now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass Step9: Loss layers Step10: Two-layer network In the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations. Open the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation. Step11: Solver In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class. Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set. Step12: Multilayer network Next you will implement a fully-connected network with an arbitrary number of hidden layers. Read through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py. Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon. Initial loss and gradient check As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable? For gradient checking, you should expect to see errors around 1e-6 or less. Step13: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. 
You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs. Step14: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs. Step15: Inline question Step16: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster. Step17: RMSProp and Adam RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients. In the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below. [1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop Step18: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules Step19: Train a good model! Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net. If you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets. You might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models. Step20: Test your model Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set.
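Before the assignment code, the forward/backward contract described above can be illustrated with a tiny self-contained example. This is not part of the assignment starter code; it only shows the pattern of returning a cache from the forward pass and consuming it in the backward pass, using an elementwise-multiply "layer".

import numpy as np

def multiply_forward(x, w):
    # Forward pass: compute the output and stash what backward will need.
    out = x * w
    cache = (x, w)
    return out, cache

def multiply_backward(dout, cache):
    # Backward pass: use the cached inputs to form gradients w.r.t. x and w.
    x, w = cache
    dx = dout * w
    dw = dout * x
    return dx, dw

x = np.random.randn(3, 4)
w = np.random.randn(3, 4)
out, cache = multiply_forward(x, w)
dout = np.random.randn(*out.shape)
dx, dw = multiply_backward(dout, cache)
print(dx.shape, dw.shape)  # gradients match the input shapes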
Python Code: # As usual, a bit of setup from __future__ import print_function import time import numpy as np import matplotlib.pyplot as plt from cs231n.classifiers.fc_net import * from cs231n.data_utils import get_CIFAR10_data from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array from cs231n.solver import Solver %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 def rel_error(x, y): returns relative error return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y)))) # Load the (preprocessed) CIFAR10 data. data = get_CIFAR10_data() for k, v in list(data.items()): print(('%s: ' % k, v.shape)) Explanation: Fully-Connected Neural Nets In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures. In this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this: ```python def layer_forward(x, w): Receive inputs x and weights w # Do some computations ... z = # ... some intermediate value # Do some more computations ... out = # the output cache = (x, w, z, out) # Values we need to compute gradients return out, cache ``` The backward pass will receive upstream derivatives and the cache object, and will return gradients with respect to the inputs and weights, like this: ```python def layer_backward(dout, cache): Receive derivative of loss with respect to outputs and cache, and compute derivative with respect to inputs. # Unpack cache values x, w, z, out = cache # Use values in cache to compute derivatives dx = # Derivative of loss with respect to x dw = # Derivative of loss with respect to w return dx, dw ``` After implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures. In addition to implementing fully-connected networks of arbitrary depth, we will also explore different update rules for optimization, and introduce Dropout as a regularizer and Batch Normalization as a tool to more efficiently optimize deep networks. End of explanation # Test the affine_forward function num_inputs = 2 input_shape = (4, 5, 6) output_dim = 3 input_size = num_inputs * np.prod(input_shape) weight_size = output_dim * np.prod(input_shape) x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape) w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim) b = np.linspace(-0.3, 0.1, num=output_dim) out, _ = affine_forward(x, w, b) correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297], [ 3.25553199, 3.5141327, 3.77273342]]) # Compare your output with ours. 
The error should be around 1e-9. print('Testing affine_forward function:') print('difference: ', rel_error(out, correct_out)) Explanation: Affine layer: foward Open the file cs231n/layers.py and implement the affine_forward function. Once you are done you can test your implementaion by running the following: End of explanation # Test the affine_backward function np.random.seed(231) x = np.random.randn(10, 2, 3) w = np.random.randn(6, 5) b = np.random.randn(5) dout = np.random.randn(10, 5) dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout) dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout) db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout) _, cache = affine_forward(x, w, b) dx, dw, db = affine_backward(dout, cache) # The error should be around 1e-10 print('Testing affine_backward function:') print('dx error: ', rel_error(dx_num, dx)) print('dw error: ', rel_error(dw_num, dw)) print('db error: ', rel_error(db_num, db)) Explanation: Affine layer: backward Now implement the affine_backward function and test your implementation using numeric gradient checking. End of explanation # Test the relu_forward function x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4) out, _ = relu_forward(x) correct_out = np.array([[ 0., 0., 0., 0., ], [ 0., 0., 0.04545455, 0.13636364,], [ 0.22727273, 0.31818182, 0.40909091, 0.5, ]]) # Compare your output with ours. The error should be around 5e-8 print('Testing relu_forward function:') print('difference: ', rel_error(out, correct_out)) Explanation: ReLU layer: forward Implement the forward pass for the ReLU activation function in the relu_forward function and test your implementation using the following: End of explanation np.random.seed(231) x = np.random.randn(10, 10) dout = np.random.randn(*x.shape) dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout) _, cache = relu_forward(x) dx = relu_backward(dout, cache) # The error should be around 3e-12 print('Testing relu_backward function:') print('dx error: ', rel_error(dx_num, dx)) Explanation: ReLU layer: backward Now implement the backward pass for the ReLU activation function in the relu_backward function and test your implementation using numeric gradient checking: End of explanation from cs231n.layer_utils import affine_relu_forward, affine_relu_backward np.random.seed(231) x = np.random.randn(2, 3, 4) w = np.random.randn(12, 10) b = np.random.randn(10) dout = np.random.randn(2, 10) out, cache = affine_relu_forward(x, w, b) dx, dw, db = affine_relu_backward(dout, cache) dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout) dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout) db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout) print('Testing affine_relu_forward:') print('dx error: ', rel_error(dx_num, dx)) print('dw error: ', rel_error(dw_num, dw)) print('db error: ', rel_error(db_num, db)) Explanation: "Sandwich" layers There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py. 
For now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass: End of explanation np.random.seed(231) num_classes, num_inputs = 10, 50 x = 0.001 * np.random.randn(num_inputs, num_classes) y = np.random.randint(num_classes, size=num_inputs) dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False) loss, dx = svm_loss(x, y) # Test svm_loss function. Loss should be around 9 and dx error should be 1e-9 print('Testing svm_loss:') print('loss: ', loss) print('dx error: ', rel_error(dx_num, dx)) dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False) loss, dx = softmax_loss(x, y) # Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8 print('\nTesting softmax_loss:') print('loss: ', loss) print('dx error: ', rel_error(dx_num, dx)) Explanation: Loss layers: Softmax and SVM You implemented these loss functions in the last assignment, so we'll give them to you for free here. You should still make sure you understand how they work by looking at the implementations in cs231n/layers.py. You can make sure that the implementations are correct by running the following: End of explanation np.random.seed(231) N, D, H, C = 3, 5, 50, 7 X = np.random.randn(N, D) y = np.random.randint(C, size=N) std = 1e-3 model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std) print('Testing initialization ... ') W1_std = abs(model.params['W1'].std() - std) b1 = model.params['b1'] W2_std = abs(model.params['W2'].std() - std) b2 = model.params['b2'] assert W1_std < std / 10, 'First layer weights do not seem right' assert np.all(b1 == 0), 'First layer biases do not seem right' assert W2_std < std / 10, 'Second layer weights do not seem right' assert np.all(b2 == 0), 'Second layer biases do not seem right' print('Testing test-time forward pass ... ') model.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H) model.params['b1'] = np.linspace(-0.1, 0.9, num=H) model.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C) model.params['b2'] = np.linspace(-0.9, 0.1, num=C) X = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T scores = model.loss(X) correct_scores = np.asarray( [[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096], [12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143], [12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]]) scores_diff = np.abs(scores - correct_scores).sum() assert scores_diff < 1e-6, 'Problem with test-time forward pass' print('Testing training loss (no regularization)') y = np.asarray([0, 5, 1]) loss, grads = model.loss(X, y) correct_loss = 3.4702243556 print(loss) assert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss' model.reg = 1.0 loss, grads = model.loss(X, y) correct_loss = 26.5948426952 assert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss' for reg in [0.0, 0.7]: print('Running numeric gradient check with reg = ', reg) model.reg = reg loss, grads = model.loss(X, y) for name in sorted(grads): f = lambda _: model.loss(X, y)[0] grad_num = eval_numerical_gradient(f, model.params[name], verbose=False) print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))) Explanation: Two-layer network In the previous assignment you implemented a two-layer neural network in a single monolithic class. 
Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations. Open the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation. End of explanation model = TwoLayerNet() solver = None ############################################################################## # TODO: Use a Solver instance to train a TwoLayerNet that achieves at least # # 50% accuracy on the validation set. # ############################################################################## solver = Solver(model, data, update_rule='sgd', optim_config={ 'learning_rate': 1e-4, }, lr_decay=0.95, num_epochs=10, batch_size=100, print_every=100,verbose=False) solver.train() print(solver.best_val_acc) ############################################################################## # END OF YOUR CODE # ############################################################################## # Run this cell to visualize training loss and train / val accuracy plt.subplot(2, 1, 1) plt.title('Training loss') plt.plot(solver.loss_history, 'o') plt.xlabel('Iteration') plt.subplot(2, 1, 2) plt.title('Accuracy') plt.plot(solver.train_acc_history, '-o', label='train') plt.plot(solver.val_acc_history, '-o', label='val') plt.plot([0.5] * len(solver.val_acc_history), 'k--') plt.xlabel('Epoch') plt.legend(loc='lower right') plt.gcf().set_size_inches(15, 12) plt.show() Explanation: Solver In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class. Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set. End of explanation np.random.seed(231) N, D, H1, H2, C = 2, 15, 20, 30, 10 X = np.random.randn(N, D) y = np.random.randint(C, size=(N,)) for reg in [0, 3.14]: print('Running check with reg = ', reg) model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C, reg=reg, weight_scale=5e-2, dtype=np.float64) loss, grads = model.loss(X, y) print('Initial loss: ', loss) for name in sorted(grads): f = lambda _: model.loss(X, y)[0] grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5) print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))) Explanation: Multilayer network Next you will implement a fully-connected network with an arbitrary number of hidden layers. Read through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py. Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon. Initial loss and gradient check As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable? For gradient checking, you should expect to see errors around 1e-6 or less. End of explanation # TODO: Use a three-layer Net to overfit 50 training examples. 
num_train = 50 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } weight_scale = 4e-2 learning_rate = 1e-3 model = FullyConnectedNet([100, 100], weight_scale=weight_scale, dtype=np.float64) solver = Solver(model, small_data, print_every=10, num_epochs=20, batch_size=25, update_rule='sgd', lr_decay=0.95, optim_config={ 'learning_rate': learning_rate, } ) solver.train() plt.plot(solver.loss_history, 'o') plt.title('Training loss history') plt.xlabel('Iteration') plt.ylabel('Training loss') plt.show() Explanation: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs. End of explanation # TODO: Use a five-layer Net to overfit 50 training examples. num_train = 50 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } learning_rate = 1e-3 weight_scale = 8e-5 model = FullyConnectedNet([100, 100, 100, 100], weight_scale=weight_scale, dtype=np.float64) solver = Solver(model, small_data, print_every=100, num_epochs=20, batch_size=25, update_rule='sgd', optim_config={ 'learning_rate': learning_rate, } ) solver.train() plt.plot(solver.loss_history, 'o') plt.title('Training loss history') plt.xlabel('Iteration') plt.ylabel('Training loss') plt.show() Explanation: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs. End of explanation from cs231n.optim import sgd_momentum N, D = 4, 5 w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D) dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D) v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D) config = {'learning_rate': 1e-3, 'velocity': v} next_w, _ = sgd_momentum(w, dw, config=config) expected_next_w = np.asarray([ [ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789], [ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526], [ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263], [ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]]) expected_velocity = np.asarray([ [ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158], [ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105], [ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053], [ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]]) print('next_w error: ', rel_error(next_w, expected_next_w)) print('velocity error: ', rel_error(expected_velocity, config['velocity'])) Explanation: Inline question: Did you notice anything about the comparative difficulty of training the three-layer net vs training the five layer net? Answer: [FILL THIS IN] Update rules So far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. We will implement a few of the most commonly used update rules and compare them to vanilla SGD. SGD+Momentum Stochastic gradient descent with momentum is a widely used update rule that tends to make deep networks converge faster than vanilla stochstic gradient descent. 
Open the file cs231n/optim.py and read the documentation at the top of the file to make sure you understand the API. Implement the SGD+momentum update rule in the function sgd_momentum and run the following to check your implementation. You should see errors less than 1e-8. End of explanation num_train = 4000 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } solvers = {} for update_rule in ['sgd', 'sgd_momentum']: print('running with ', update_rule) model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2) solver = Solver(model, small_data, num_epochs=5, batch_size=100, update_rule=update_rule, optim_config={ 'learning_rate': 1e-2, }, verbose=True) solvers[update_rule] = solver solver.train() print() plt.subplot(3, 1, 1) plt.title('Training loss') plt.xlabel('Iteration') plt.subplot(3, 1, 2) plt.title('Training accuracy') plt.xlabel('Epoch') plt.subplot(3, 1, 3) plt.title('Validation accuracy') plt.xlabel('Epoch') for update_rule, solver in list(solvers.items()): plt.subplot(3, 1, 1) plt.plot(solver.loss_history, 'o', label=update_rule) plt.subplot(3, 1, 2) plt.plot(solver.train_acc_history, '-o', label=update_rule) plt.subplot(3, 1, 3) plt.plot(solver.val_acc_history, '-o', label=update_rule) for i in [1, 2, 3]: plt.subplot(3, 1, i) plt.legend(loc='upper center', ncol=4) plt.gcf().set_size_inches(15, 15) plt.show() Explanation: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster. End of explanation # Test RMSProp implementation; you should see errors less than 1e-7 from cs231n.optim import rmsprop N, D = 4, 5 w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D) dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D) cache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D) config = {'learning_rate': 1e-2, 'cache': cache} next_w, _ = rmsprop(w, dw, config=config) expected_next_w = np.asarray([ [-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247], [-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774], [ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447], [ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]]) expected_cache = np.asarray([ [ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321], [ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377], [ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936], [ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]]) print('next_w error: ', rel_error(expected_next_w, next_w)) print('cache error: ', rel_error(expected_cache, config['cache'])) # Test Adam implementation; you should see errors around 1e-7 or less from cs231n.optim import adam N, D = 4, 5 w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D) dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D) m = np.linspace(0.6, 0.9, num=N*D).reshape(N, D) v = np.linspace(0.7, 0.5, num=N*D).reshape(N, D) config = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5} next_w, _ = adam(w, dw, config=config) expected_next_w = np.asarray([ [-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977], [-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929], [ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969], [ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]]) expected_v = np.asarray([ [ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,], [ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,], [ 0.59414753, 
0.58362676, 0.57311152, 0.56260183, 0.55209767,], [ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]]) expected_m = np.asarray([ [ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474], [ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316], [ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158], [ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]]) print('next_w error: ', rel_error(expected_next_w, next_w)) print('v error: ', rel_error(expected_v, config['v'])) print('m error: ', rel_error(expected_m, config['m'])) Explanation: RMSProp and Adam RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients. In the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below. [1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude." COURSERA: Neural Networks for Machine Learning 4 (2012). [2] Diederik Kingma and Jimmy Ba, "Adam: A Method for Stochastic Optimization", ICLR 2015. End of explanation learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3} for update_rule in ['adam', 'rmsprop']: print('running with ', update_rule) model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2) solver = Solver(model, small_data, num_epochs=5, batch_size=100, update_rule=update_rule, optim_config={ 'learning_rate': learning_rates[update_rule] }, verbose=True) solvers[update_rule] = solver solver.train() print() plt.subplot(3, 1, 1) plt.title('Training loss') plt.xlabel('Iteration') plt.subplot(3, 1, 2) plt.title('Training accuracy') plt.xlabel('Epoch') plt.subplot(3, 1, 3) plt.title('Validation accuracy') plt.xlabel('Epoch') for update_rule, solver in list(solvers.items()): plt.subplot(3, 1, 1) plt.plot(solver.loss_history, 'o', label=update_rule) plt.subplot(3, 1, 2) plt.plot(solver.train_acc_history, '-o', label=update_rule) plt.subplot(3, 1, 3) plt.plot(solver.val_acc_history, '-o', label=update_rule) for i in [1, 2, 3]: plt.subplot(3, 1, i) plt.legend(loc='upper center', ncol=4) plt.gcf().set_size_inches(15, 15) plt.show() Explanation: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules: End of explanation best_model = None ################################################################################ # TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might # # batch normalization and dropout useful. Store your best model in the # # best_model variable. 
# ################################################################################ # from __future__ import print_function best_val_acc = 0 dropouts = [0, 0.3, 0.6, 0.9] regs = [0, 0.3, 0.6, 0.9] weight_scales = [5e-4, 5e-3, 5e-2] hidden_dims = [[100,100,100,100,100],[50,50,50,50,50],[100,100,100],[50,50,50]] for hidden_dim in hidden_dims: for dropout in dropouts: for reg in regs: for weight_scale in weight_scales: model = FullyConnectedNet(hidden_dim, weight_scale=weight_scale, dropout=dropout, reg=reg,use_batchnorm=True) solver = Solver(model, data, num_epochs=5, batch_size=1000, lr_decay=0.95, update_rule='adam', optim_config={ 'learning_rate': 1e-3 },verbose=False) solver.train() val_acc = solver.best_val_acc if val_acc > best_val_acc: print(best_val_acc) best_val_acc = val_acc best_model = model ################################################################################ # END OF YOUR CODE # ################################################################################ Explanation: Train a good model! Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net. If you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets. You might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models. End of explanation y_test_pred = np.argmax(best_model.loss(data['X_test']), axis=1) y_val_pred = np.argmax(best_model.loss(data['X_val']), axis=1) print('Validation set accuracy: ', (y_val_pred == data['y_val']).mean()) print('Test set accuracy: ', (y_test_pred == data['y_test']).mean()) Explanation: Test you model Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set. End of explanation
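As an aside on the update rules exercised above (a hedged sketch, not the graded cs231n/optim.py implementation): the sgd_momentum test values are consistent with the standard momentum update, which keeps a velocity buffer and moves the weights along it.

import numpy as np

def sgd_momentum_sketch(w, dw, config):
    # Standard SGD+momentum: v <- mu * v - lr * dw ; w <- w + v
    # The config keys (learning_rate, velocity) follow the usage in the tests above;
    # the momentum default of 0.9 is an assumption for this sketch.
    lr = config.get("learning_rate", 1e-2)
    mu = config.get("momentum", 0.9)
    v = config.get("velocity", np.zeros_like(w))
    v = mu * v - lr * dw
    next_w = w + v
    config["velocity"] = v
    return next_w, config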
8,217
Given the following text description, write Python code to implement the functionality described below step by step Description: Why the proper Z function fails to show convergence We will here investigate why the function called "properZ" fails to give convergence. Initialize Step1: The function called "proper Z" (as it )
Python Code: %matplotlib notebook from IPython.display import display from sympy import init_printing from sympy import S, Eq, Limit from sympy import sin, cos, tanh, pi from sympy import symbols from boutdata.mms import x, z init_printing() Explanation: Why the proper Z function fails to show convergence We will here investigate why the function called "properZ" fails to give convergence. Initialize End of explanation Lx=symbols('Lx') # We multiply with cos(6*pi*x/(2*Lx)) in order to give it a modulation, and to get a non-zero value at the boundary s = 0.15 c = 50 w = 30 f = ((1/2) - (1/2)*(tanh(s*(x-(c - (w/2))))))*cos(6*pi*x/(2*Lx))*sin(2*z) display(Eq(symbols('f'),f)) theLimit = Limit(f,x,0,dir='+') display(Eq(theLimit, theLimit.doit())) theLimit = Limit(f,x,0,dir='-') display(Eq(theLimit, theLimit.doit())) Explanation: The function called "proper Z" (as it ) End of explanation
8,218
Given the following text description, write Python code to implement the functionality described below step by step Description: Removing Duplicates from a Sequence while Maintaining Order Problem You want to eliminate the duplicate values in a sequence, but preserve the order of the remaining items. Solution If the values in the sequence are hashable, the problem can be easily solved using a set and a generator. Dedup list Step1: Dedup dict with key This only works if the items in the sequence are hashable. If you are trying to eliminate duplicates in a sequence of unhashable types (such as dicts), you can make a slight change to this recipe
Python Code: def dedupe(items): seen = set() for item in items: if item not in seen: yield item seen.add(item) a = [1, 5, 2, 1, 9, 1, 5, 10] list(dedupe(a)) Explanation: Removing Duplicates from a Sequence while Maintaining Order Problem You want to eliminate the duplicate values in a sequence, but preserve the order of the remaining items. Solution If the values in the sequence are hashable, the problem can be easily solved using a set and a generator. Dedup list End of explanation def dedupe(items, key=None): seen = set() for item in items: val = item if key is None else key(item) if val not in seen: yield item seen.add(val) a = [ {'x':1, 'y':2}, {'x':1, 'y':3}, {'x':1, 'y':2}, {'x':2, 'y':4}] print(a) print(list(dedupe(a, key=lambda a: (a['x'],a['y'])))) print(list(dedupe(a, key=lambda a: a['x']))) Explanation: Dedup dict with key This only works if the items in the sequence are hashable. If you are trying to eliminate duplicates in a sequence of unhashable types (such as dicts), you can make a slight change to this recipe End of explanation
8,219
Given the following text description, write Python code to implement the functionality described below step by step Description: Exploring data Names of group members // Put your names here! Goals of this assignment The purpose of this assignment is to explore data using visualization and statistics. Section 1 The file datafile_1.csv contains a three-dimensional dataset and associated uncertainty in the data. Read the data file into numpy arrays and visualize it using two new types of plots Step1: Section 2 Now, we're going to experiment with data exploration. You have two data files to examine Step3: In the cell below, describe some of the conclusions that you've drawn from the data you have just explored! // put your thoughts here. Assignment wrapup Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment!
Python Code: # put your code here, and add additional cells as necessary. Explanation: Exploring data Names of group members // Put your names here! Goals of this assignment The purpose of this assignment is to explore data using visualization and statistics. Section 1 The file datafile_1.csv contains a three-dimensional dataset and associated uncertainty in the data. Read the data file into numpy arrays and visualize it using two new types of plots: 2D plots of the various combinations of dimensions (x-y, x-z, y-z), including error bars (using the pyplot errorbar() method). Try plotting using symbols instead of lines, and make the error bars a different color than the points themselves. 3D plots of all three dimensions at the same time using the mplot3d toolkit - in particular, look at the scatter() method. Hints: Look at the documentation for numpy's loadtxt() method - in particular, what do the parameters skiprows, comments, and unpack do? If you set up the 3D plot as described above, you can adjust the viewing angle with the command ax.view_init(elev=ANGLE1,azim=ANGLE2), where ANGLE1 and ANGLE2 are in degrees. End of explanation # put your code here, and add additional cells as necessary. Explanation: Section 2 Now, we're going to experiment with data exploration. You have two data files to examine: GLB.Ts.csv, which contains mean global air temperature from 1880 through the present day (retrieved from the NASA GISS surface temperature website, "Global-mean monthly, seasonal, and annual means, 1880-present"). Each row in the data file contains the year, monthly global average, yearly global average, and seasonal global average. See this file for clues as to what the columns mean. bintanja2008.txt, which is a reconstruction of the global surface temperature, deep-sea temperature, ice volume, and relative sea level for the last 3 million years. This data comes from the National Oceanic and Atmospheric Administration's National Climatic Data Center website, and can be found here. Some important notes: These data files are slightly modified versions of those on the website - they have been altered to remove some characters that don't play nicely with numpy (letters with accents), and symbols for missing data have been replaced with 'NaN', or "Not a Number", which numpy knows to ignore. No actual data has been changed. In the file GLB.Ts.csv, the temperature units are in 0.01 degrees Celsius difference from the reference period 1950-1980 - in other words, the number 40 corresponds to a difference of +0.4 degrees C compared to the average temperature between 1950 and 1980. (This means you'll have to renormalize your values by a factor of 100.) In the file bintanja2008.txt, column 9, "Global sea level relative to present," is in confusing units - more positive values actually correspond to lower sea levels than less positive values. You may want to multiply column 9 by -1 in order to get more sensible values. There are many possible ways to examine this data. First, read both data files into numpy arrays - it's fine to load them into a single combined multi-dimensional array if you want, or split the data into multiple arrays. We'll then try a few things: For both datasets, make some plots of the raw data, particularly as a function of time. What do you see? How is the data "shaped"? Is there periodicity? Do some simple data analysis. What are the minimum, maximum, and mean values of the various quantities? 
(You may have problems with NaN - see nanmin and similar methods) If you calculate some sort of average for annual temperature in GLB.Ts.csv (say, the average temperature smoothed over 10 years), how might you characterize the yearly variability? Try plotting the smoothed value along with the raw data and show how they differ. There are several variables in the file bintanja2008.txt - try plotting multiple variables as a function of time together using the pyplot subplot functionality (and some more complicated subplot examples for further help). Do they seem to be related in some way? (Hint: plot surface temperature, deep sea temperature, ice volume, and sea level, and zoom in from 3 Myr to ~100,000 years) What about plotting the non-time quantities in bintanja2008.txt versus each other (i.e., surface temperature vs. ice volume or sea level) - do you see correlations? End of explanation from IPython.display import HTML HTML( <iframe src="https://goo.gl/forms/Jg6Mxb0ZTvwiSe4R2?embedded=true" width="80%" height="1200px" frameborder="0" marginheight="0" marginwidth="0"> Loading... </iframe> ) Explanation: In the cell below, describe some of the conclusions that you've drawn from the data you have just explored! // put your thoughts here. Assignment wrapup Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment! End of explanation
8,220
Given the following text description, write Python code to implement the functionality described below step by step Description: Cheatsheet for Decision Tree Classification Algorithm Start at the root node as parent node Split the parent node at the feature a to minimize the sum of the child node impurities (maximize information gain) Assign training samples to new child nodes Stop if leave nodes are pure or early stopping criteria is satisfied, else repeat steps 1 and 2 for each new child node Stopping Rules a maximal node depth is reached splitting a note does not lead to an information gain Criterion Splitting criterion Step1: Gini Impurity $$I_G(t) = \sum_{i =1}^{C}p(i \mid t) \big(1-p(i \mid t)\big)$$ Step2: Misclassification Error $$I_M(t) = 1 - max{{p_i}}$$ Step3: Comparison
Python Code: import numpy as np import matplotlib.pyplot as plt %matplotlib inline def entropy(p): return - p*np.log2(p) - (1 - p)*np.log2((1 - p)) x = np.arange(0.0, 1.0, 0.01) ent = [entropy(p) if p != 0 else None for p in x] plt.plot(x, ent) plt.ylim([0,1.1]) plt.xlabel('p(i=1)') plt.axhline(y=1.0, linewidth=1, color='k', linestyle='--') plt.ylabel('Entropy') plt.show() Explanation: Cheatsheet for Decision Tree Classification Algorithm Start at the root node as parent node Split the parent node at the feature a to minimize the sum of the child node impurities (maximize information gain) Assign training samples to new child nodes Stop if leave nodes are pure or early stopping criteria is satisfied, else repeat steps 1 and 2 for each new child node Stopping Rules a maximal node depth is reached splitting a note does not lead to an information gain Criterion Splitting criterion: Information Gain (IG), sum of node impurities Objective function: Maximize IG at each split, eqiv. minimize the the impurity criterion Information Gain (IG) Examples below are given for binary splits. $$IG(D_{p}, a) = I(D_{p}) - \frac{N_{left}}{N_p} I(D_{left}) - \frac{N_{right}}{N_p} I(D_{right})$$ $IG$: Information Gain $a$: feature to perform the split $N_p$: number of samples in the parent node $N_{left}$: number of samples in the left child node $N_{right}$: number of samples in the right child node $I$: impurity $D_{p}$: training subset of the parent node $D_{left}$: training subset of the left child node $D_{right}$: training subset of the right child node Impurity (I) Indices Entropy The entropy is defined as $$I_H(t) = - \sum_{i =1}^{C} p(i \mid t) \;log_2 \,p(i \mid t)$$ for all non-empty classes ($p(i \mid t) \neq 0$), where $p(i \mid t)$ is the proportion (or frequency or probability) of the samples that belong to class $i$ for a particular node $t$; $C$ is the number of unique class labels. The entropy is therefore 0 if all samples at a node belong to the same class, and the entropy is maximal if we have an uniform class distribution. For example, in a binary class setting, the entropy is 0 if $p(i =1 \mid t) =1$ or $p(i =0 \mid t) =1$. And if the classes are distributed uniformly with $p(i =1 \mid t) = 0.5$ and $p(i =0 \mid t) =0.5$ the entropy is 1 (maximal), which we can visualize by plotting the entropy for binary class setting below. 
End of explanation def gini(p): return (p)*(1 - (p)) + (1-p)*(1 - (1-p)) x = np.arange(0.0, 1.0, 0.01) plt.plot(x, gini(x)) plt.ylim([0,1.1]) plt.xlabel('p(i=1)') plt.axhline(y=0.5, linewidth=1, color='k', linestyle='--') plt.ylabel('Gini Impurity') plt.show() Explanation: Gini Impurity $$I_G(t) = \sum_{i =1}^{C}p(i \mid t) \big(1-p(i \mid t)\big)$$ End of explanation def error(p): return 1 - np.max([p, 1-p]) x = np.arange(0.0, 1.0, 0.01) err = [error(i) for i in x] plt.plot(x, err) plt.ylim([0,1.1]) plt.xlabel('p(i=1)') plt.axhline(y=0.5, linewidth=1, color='k', linestyle='--') plt.ylabel('Misclassification Error') plt.show() Explanation: Misclassification Error $$I_M(t) = 1 - max{{p_i}}$$ End of explanation fig = plt.figure() ax = plt.subplot(111) for i, lab in zip([ent, gini(x), err], ['Entropy', 'Gini Impurity', 'Misclassification Error']): line, = ax.plot(x, i, label=lab) ax.legend(loc='upper center', bbox_to_anchor=(0.5, 1.15), ncol=3, fancybox=True, shadow=False) ax.axhline(y=0.5, linewidth=1, color='k', linestyle='--') ax.axhline(y=1.0, linewidth=1, color='k', linestyle='--') plt.ylim([0,1.1]) plt.xlabel('p(i=1)') plt.ylabel('Impurity Index') plt.tight_layout() plt.show() Explanation: Comparison End of explanation
8,221
Given the following text description, write Python code to implement the functionality described below step by step Description: Learning to tokenize in Vision Transformers Authors Step1: Hyperparameters Please feel free to change the hyperparameters and check your results. The best way to develop intuition about the architecture is to experiment with it. Step2: Load and prepare the CIFAR-10 dataset Step3: Data augmentation The augmentation pipeline consists of Step4: Note that image data augmentation layers do not apply data transformations at inference time. This means that when these layers are called with training=False they behave differently. Refer to the documentation for more details. Positional embedding module A Transformer architecture consists of multi-head self attention layers and fully-connected feed forward networks (MLP) as the main components. Both these components are permutation invariant Step5: MLP block for Transformer This serves as the Fully Connected Feed Forward block for our Transformer. Step6: TokenLearner module The following figure presents a pictorial overview of the module (source). The TokenLearner module takes as input an image-shaped tensor. It then passes it through multiple single-channel convolutional layers extracting different spatial attention maps focusing on different parts of the input. These attention maps are then element-wise multiplied to the input and result is aggregated with pooling. This pooled output can be trated as a summary of the input and has much lesser number of patches (8, for example) than the original one (196, for example). Using multiple convolution layers helps with expressivity. Imposing a form of spatial attention helps retain relevant information from the inputs. Both of these components are crucial to make TokenLearner work, especially when we are significantly reducing the number of patches. Step7: Transformer block Step8: ViT model with the TokenLearner module Step9: As shown in the TokenLearner paper, it is almost always advantageous to include the TokenLearner module in the middle of the network. Training utility Step10: Train and evaluate a ViT with TokenLearner
Python Code: import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import tensorflow_addons as tfa from datetime import datetime import matplotlib.pyplot as plt import numpy as np import math Explanation: Learning to tokenize in Vision Transformers Authors: Aritra Roy Gosthipaty, Sayak Paul (equal contribution)<br> Date created: 2021/12/10<br> Last modified: 2021/12/15<br> Description: Adaptively generating a smaller number of tokens for Vision Transformers. Introduction Vision Transformers (Dosovitskiy et al.) and many other Transformer-based architectures (Liu et al., Yuan et al., etc.) have shown strong results in image recognition. The following provides a brief overview of the components involved in the Vision Transformer architecture for image classification: Extract small patches from input images. Linearly project those patches. Add positional embeddings to these linear projections. Run these projections through a series of Transformer (Vaswani et al.) blocks. Finally, take the representation from the final Transformer block and add a classification head. If we take 224x224 images and extract 16x16 patches, we get a total of 196 patches (also called tokens) for each image. The number of patches increases as we increase the resolution, leading to higher memory footprint. Could we use a reduced number of patches without having to compromise performance? Ryoo et al. investigate this question in TokenLearner: Adaptive Space-Time Tokenization for Videos. They introduce a novel module called TokenLearner that can help reduce the number of patches used by a Vision Transformer (ViT) in an adaptive manner. With TokenLearner incorporated in the standard ViT architecture, they are able to reduce the amount of compute (measured in FLOPS) used by the model. In this example, we implement the TokenLearner module and demonstrate its performance with a mini ViT and the CIFAR-10 dataset. We make use of the following references: Official TokenLearner code Image Classification with ViTs on keras.io TokenLearner slides from NeurIPS 2021 Setup We need to install TensorFlow Addons to run this example. To install it, execute the following: shell pip install tensorflow-addons Imports End of explanation # DATA BATCH_SIZE = 256 AUTO = tf.data.AUTOTUNE INPUT_SHAPE = (32, 32, 3) NUM_CLASSES = 10 # OPTIMIZER LEARNING_RATE = 1e-3 WEIGHT_DECAY = 1e-4 # TRAINING EPOCHS = 20 # AUGMENTATION IMAGE_SIZE = 48 # We will resize input images to this size. PATCH_SIZE = 6 # Size of the patches to be extracted from the input images. NUM_PATCHES = (IMAGE_SIZE // PATCH_SIZE) ** 2 # ViT ARCHITECTURE LAYER_NORM_EPS = 1e-6 PROJECTION_DIM = 128 NUM_HEADS = 4 NUM_LAYERS = 4 MLP_UNITS = [ PROJECTION_DIM * 2, PROJECTION_DIM, ] # TOKENLEARNER NUM_TOKENS = 4 Explanation: Hyperparameters Please feel free to change the hyperparameters and check your results. The best way to develop intuition about the architecture is to experiment with it. End of explanation # Load the CIFAR-10 dataset. (x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data() (x_train, y_train), (x_val, y_val) = ( (x_train[:40000], y_train[:40000]), (x_train[40000:], y_train[40000:]), ) print(f"Training samples: {len(x_train)}") print(f"Validation samples: {len(x_val)}") print(f"Testing samples: {len(x_test)}") # Convert to tf.data.Dataset objects. 
train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train)) train_ds = train_ds.shuffle(BATCH_SIZE * 100).batch(BATCH_SIZE).prefetch(AUTO) val_ds = tf.data.Dataset.from_tensor_slices((x_val, y_val)) val_ds = val_ds.batch(BATCH_SIZE).prefetch(AUTO) test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)) test_ds = test_ds.batch(BATCH_SIZE).prefetch(AUTO) Explanation: Load and prepare the CIFAR-10 dataset End of explanation data_augmentation = keras.Sequential( [ layers.Rescaling(1 / 255.0), layers.Resizing(INPUT_SHAPE[0] + 20, INPUT_SHAPE[0] + 20), layers.RandomCrop(IMAGE_SIZE, IMAGE_SIZE), layers.RandomFlip("horizontal"), ], name="data_augmentation", ) Explanation: Data augmentation The augmentation pipeline consists of: Rescaling Resizing Random cropping (fixed-sized or random sized) Random horizontal flipping End of explanation def position_embedding( projected_patches, num_patches=NUM_PATCHES, projection_dim=PROJECTION_DIM ): # Build the positions. positions = tf.range(start=0, limit=num_patches, delta=1) # Encode the positions with an Embedding layer. encoded_positions = layers.Embedding( input_dim=num_patches, output_dim=projection_dim )(positions) # Add encoded positions to the projected patches. return projected_patches + encoded_positions Explanation: Note that image data augmentation layers do not apply data transformations at inference time. This means that when these layers are called with training=False they behave differently. Refer to the documentation for more details. Positional embedding module A Transformer architecture consists of multi-head self attention layers and fully-connected feed forward networks (MLP) as the main components. Both these components are permutation invariant: they're not aware of feature order. To overcome this problem we inject tokens with positional information. The position_embedding function adds this positional information to the linearly projected tokens. End of explanation def mlp(x, dropout_rate, hidden_units): # Iterate over the hidden units and # add Dense => Dropout. for units in hidden_units: x = layers.Dense(units, activation=tf.nn.gelu)(x) x = layers.Dropout(dropout_rate)(x) return x Explanation: MLP block for Transformer This serves as the Fully Connected Feed Forward block for our Transformer. End of explanation def token_learner(inputs, number_of_tokens=NUM_TOKENS): # Layer normalize the inputs. x = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)(inputs) # (B, H, W, C) # Applying Conv2D => Reshape => Permute # The reshape and permute is done to help with the next steps of # multiplication and Global Average Pooling. attention_maps = keras.Sequential( [ # 3 layers of conv with gelu activation as suggested # in the paper. layers.Conv2D( filters=number_of_tokens, kernel_size=(3, 3), activation=tf.nn.gelu, padding="same", use_bias=False, ), layers.Conv2D( filters=number_of_tokens, kernel_size=(3, 3), activation=tf.nn.gelu, padding="same", use_bias=False, ), layers.Conv2D( filters=number_of_tokens, kernel_size=(3, 3), activation=tf.nn.gelu, padding="same", use_bias=False, ), # This conv layer will generate the attention maps layers.Conv2D( filters=number_of_tokens, kernel_size=(3, 3), activation="sigmoid", # Note sigmoid for [0, 1] output padding="same", use_bias=False, ), # Reshape and Permute layers.Reshape((-1, number_of_tokens)), # (B, H*W, num_of_tokens) layers.Permute((2, 1)), ] )( x ) # (B, num_of_tokens, H*W) # Reshape the input to align it with the output of the conv block. 
num_filters = inputs.shape[-1] inputs = layers.Reshape((1, -1, num_filters))(inputs) # inputs == (B, 1, H*W, C) # Element-Wise multiplication of the attention maps and the inputs attended_inputs = ( attention_maps[..., tf.newaxis] * inputs ) # (B, num_tokens, H*W, C) # Global average pooling the element wise multiplication result. outputs = tf.reduce_mean(attended_inputs, axis=2) # (B, num_tokens, C) return outputs Explanation: TokenLearner module The following figure presents a pictorial overview of the module (source). The TokenLearner module takes as input an image-shaped tensor. It then passes it through multiple single-channel convolutional layers extracting different spatial attention maps focusing on different parts of the input. These attention maps are then element-wise multiplied to the input and result is aggregated with pooling. This pooled output can be trated as a summary of the input and has much lesser number of patches (8, for example) than the original one (196, for example). Using multiple convolution layers helps with expressivity. Imposing a form of spatial attention helps retain relevant information from the inputs. Both of these components are crucial to make TokenLearner work, especially when we are significantly reducing the number of patches. End of explanation def transformer(encoded_patches): # Layer normalization 1. x1 = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)(encoded_patches) # Multi Head Self Attention layer 1. attention_output = layers.MultiHeadAttention( num_heads=NUM_HEADS, key_dim=PROJECTION_DIM, dropout=0.1 )(x1, x1) # Skip connection 1. x2 = layers.Add()([attention_output, encoded_patches]) # Layer normalization 2. x3 = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)(x2) # MLP layer 1. x4 = mlp(x3, hidden_units=MLP_UNITS, dropout_rate=0.1) # Skip connection 2. encoded_patches = layers.Add()([x4, x2]) return encoded_patches Explanation: Transformer block End of explanation def create_vit_classifier(use_token_learner=True, token_learner_units=NUM_TOKENS): inputs = layers.Input(shape=INPUT_SHAPE) # (B, H, W, C) # Augment data. augmented = data_augmentation(inputs) # Create patches and project the pathces. projected_patches = layers.Conv2D( filters=PROJECTION_DIM, kernel_size=(PATCH_SIZE, PATCH_SIZE), strides=(PATCH_SIZE, PATCH_SIZE), padding="VALID", )(augmented) _, h, w, c = projected_patches.shape projected_patches = layers.Reshape((h * w, c))( projected_patches ) # (B, number_patches, projection_dim) # Add positional embeddings to the projected patches. encoded_patches = position_embedding( projected_patches ) # (B, number_patches, projection_dim) encoded_patches = layers.Dropout(0.1)(encoded_patches) # Iterate over the number of layers and stack up blocks of # Transformer. for i in range(NUM_LAYERS): # Add a Transformer block. encoded_patches = transformer(encoded_patches) # Add TokenLearner layer in the middle of the # architecture. The paper suggests that anywhere # between 1/2 or 3/4 will work well. if use_token_learner and i == NUM_LAYERS // 2: _, hh, c = encoded_patches.shape h = int(math.sqrt(hh)) encoded_patches = layers.Reshape((h, h, c))( encoded_patches ) # (B, h, h, projection_dim) encoded_patches = token_learner( encoded_patches, token_learner_units ) # (B, num_tokens, c) # Layer normalization and Global average pooling. representation = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)(encoded_patches) representation = layers.GlobalAvgPool1D()(representation) # Classify outputs. 
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(representation) # Create the Keras model. model = keras.Model(inputs=inputs, outputs=outputs) return model Explanation: ViT model with the TokenLearner module End of explanation def run_experiment(model): # Initialize the AdamW optimizer. optimizer = tfa.optimizers.AdamW( learning_rate=LEARNING_RATE, weight_decay=WEIGHT_DECAY ) # Compile the model with the optimizer, loss function # and the metrics. model.compile( optimizer=optimizer, loss="sparse_categorical_crossentropy", metrics=[ keras.metrics.SparseCategoricalAccuracy(name="accuracy"), keras.metrics.SparseTopKCategoricalAccuracy(5, name="top-5-accuracy"), ], ) # Define callbacks checkpoint_filepath = "/tmp/checkpoint" checkpoint_callback = keras.callbacks.ModelCheckpoint( checkpoint_filepath, monitor="val_accuracy", save_best_only=True, save_weights_only=True, ) # Train the model. _ = model.fit( train_ds, epochs=EPOCHS, validation_data=val_ds, callbacks=[checkpoint_callback], ) model.load_weights(checkpoint_filepath) _, accuracy, top_5_accuracy = model.evaluate(test_ds) print(f"Test accuracy: {round(accuracy * 100, 2)}%") print(f"Test top 5 accuracy: {round(top_5_accuracy * 100, 2)}%") Explanation: As shown in the TokenLearner paper, it is almost always advantageous to include the TokenLearner module in the middle of the network. Training utility End of explanation vit_token_learner = create_vit_classifier() run_experiment(vit_token_learner) Explanation: Train and evaluate a ViT with TokenLearner End of explanation
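A possible follow-up, not part of the original notebook: the helpers above can also train a baseline without the TokenLearner module, which makes the token-reduction trade-off directly measurable. Everything used below (create_vit_classifier, run_experiment, vit_token_learner) is defined earlier; exact numbers will vary with EPOCHS and the other hyperparameters.

vit_without_token_learner = create_vit_classifier(use_token_learner=False)
print("Params with TokenLearner   :", vit_token_learner.count_params())
print("Params without TokenLearner:", vit_without_token_learner.count_params())
run_experiment(vit_without_token_learner)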
8,222
Given the following text description, write Python code to implement the functionality described below step by step Description: In the last chapter, our tests failed. This time we'll go about fixing them. Our First Django App, and Our First Unit Test Django encourages you to structure your code into apps Step1: Unit Tests, and How They Differ from Functional Tests The difference boils down to Step2: Django has helpfully suggested we use a special version of TestCase, which it provides. It’s an augmented version of the standard unittest.TestCase, with some additional Django-specific features, which we’ll discover over the next few chapters. You’ve already seen that the TDD cycle involves starting with a test that fails, then writing code to get it to pass. Well, before we can even get that far, we want to know that the unit test we’re writing will definitely be run by our automated test runner, whatever it is. In the case of functional_tests.py, we’re running it directly, but this file made by Django is a bit more like magic. So, just to make sure, let’s make a deliberately silly failing test Step3: Run our new django test Step4: Everything seems to be working! (This would be a good time to commit!) $ git status # should show you lists/ is untracked $ git add lists $ git diff --staged # will show you the diff that you're about to commit $ git commit -m "Add app for lists, with deliberately failing unit test" Django's MVC, URLs, and View Functions Django is broadly structured along a classic Model-View-Controller (MVC) pattern. Well, broadly. It definitely does have models, but its views are more like a controller, and it’s the templates that are actually the view part, but the general idea is there. If you’re interested, you can look up the finer points of the discussion in the Django FAQs. Irrespective of any of that, like any web server, Django’s main job is to decide what to do when a user asks for a particular URL on our site. Django’s workflow goes something like this Step5: What’s going on here? What function is that? It’s the view function we’re going to write next, which will actually return the HTML we want. You can see from the import that we’re planning to store it in lists/views.py. and resolve is the function Django uses internally to resolve URLs, and find what view function they should map to. We’re checking that resolve, when called with “/”, the root of the site, finds a function called home_page. So, what do you think will happen when we run the tests? Step6: It’s a very predictable and uninteresting error Step8: we interpret the traceback as telling us that, when trying to resolve “/”, Django raised a 404 error—in other words, Django can’t find a URL mapping for “/”. Let’s help it out. urls.py Django uses a file called urls.py to define how URLs map to view functions. There’s a main urls.py for the whole site in the superlists/superlists folder. Let’s go take a look Step10: The first example entry has the regular expression ^$, which means an empty string—could this be the same as the root of our site, which we’ve been testing with “/”? Let’s find out—what happens if we include it? Step11: That’s progress! We’re no longer getting a 404. The message is slightly cryptic, but the unit tests have actually made the link between the URL / and the home_page = None in lists/views.py, and are now complaining that home_page is a NoneType. And that gives us a justification for changing it from being None to being an actual function. Every single code change is driven by the tests! 
Step12: Hooray! Our first ever unit test pass! That’s so momentous that I think it’s worthy of a commit Step13: What’s going on in this new test? We create an HttpRequest object, which is what Django will see when a user’s browser asks for a page. We pass it to our home_page view, which gives us a response. You won’t be surprised to hear that this object is an instance of a class called HttpResponse. Then, we assert that the .content of the response—which is the HTML that we send to the user—has certain properties. & 5. We want it to start with an &lt;html&gt; tag which gets closed at the end. Notice that response.content is raw bytes, not a Python string, so we have to use the b'' syntax to compare them. More info is available in Django’s Porting to Python 3 docs. And we want a &lt;title&gt; tag somewhere in the middle, with the words "To-Do lists" in it—because that’s what we specified in our functional test. Once again, the unit test is driven by the functional test, but it’s also much closer to the actual code—we’re thinking like programmers now. Let’s run the unit tests now and see how we get on Step14: The Unit-Test/Code Cycle We can start to settle into the TDD unit-test/code cycle now
Python Code: #%cd ../examples/superlists/ # Make a new app called lists #!python3 manage.py startapp lists !tree . Explanation: In the last chapter, our tests failed. This time we'll go about fixing them. Our First Django App, and Our First Unit Test Django encourages you to structure your code into apps: the theory is that one project can have many apps, you can use third-party apps developed by other people, and you might even reuse one of your own apps in a different project … although I admit I’ve never actually managed it myself! Still, apps are a good way to keep your code organised. Let’s start an app for our to-do lists: End of explanation # %load lists/tests.py from django.test import TestCase # Create your tests here. Explanation: Unit Tests, and How They Differ from Functional Tests The difference boils down to: * Functional tests test from the perspective of the user * Unit tests test from the point of view of the developer The TDD approach I’m following wants our application to be covered by both types of test. Our workflow will look a bit like this: We start by writing a functional test, describing the new functionality from the user’s point of view. Once we have a functional test that fails, we start to think about how to write code that can get it to pass (or at least to get past its current failure). We now use one or more unit tests to define how we want our code to behave—the idea is that each line of production code we write should be tested by (at least) one of our unit tests. Once we have a failing unit test, we write the smallest amount of application code we can, just enough to get the unit test to pass. We may iterate between steps 2 and 3 a few times, until we think the functional test will get a little further. Now we can rerun our functional tests and see if they pass, or get a little further. That may prompt us to write some new unit tests, and some new code, and so on. Functional tests should help you build an application with the right functionality, and guarantee you never accidentally break it. Unit tests should help you to write code that’s clean and bug free. Unit Testing in Django Let’s see how to write a unit test for our home page view. Open up the new file at lists/tests.py, and you’ll see something like this: End of explanation %%writefile lists/tests.py from django.test import TestCase class SmokeTest(TestCase): def test_bad_maths(self): self.assertEqual(1 + 1, 3) Explanation: Django has helpfully suggested we use a special version of TestCase, which it provides. It’s an augmented version of the standard unittest.TestCase, with some additional Django-specific features, which we’ll discover over the next few chapters. You’ve already seen that the TDD cycle involves starting with a test that fails, then writing code to get it to pass. Well, before we can even get that far, we want to know that the unit test we’re writing will definitely be run by our automated test runner, whatever it is. In the case of functional_tests.py, we’re running it directly, but this file made by Django is a bit more like magic. 
So, just to make sure, let’s make a deliberately silly failing test: End of explanation !python3 manage.py test Explanation: Run our new django test End of explanation %%writefile lists/tests.py from django.core.urlresolvers import resolve from django.test import TestCase from lists.views import home_page #1 class HomePageTest(TestCase): def test_root_url_resolves_to_home_page_view(self): found = resolve('/') #2 self.assertEqual(found.func, home_page) #3 Explanation: Everything seems to be working! (This would be a good time to commit!) $ git status # should show you lists/ is untracked $ git add lists $ git diff --staged # will show you the diff that you're about to commit $ git commit -m "Add app for lists, with deliberately failing unit test" Django's MVC, URLs, and View Functions Django is broadly structured along a classic Model-View-Controller (MVC) pattern. Well, broadly. It definitely does have models, but its views are more like a controller, and it’s the templates that are actually the view part, but the general idea is there. If you’re interested, you can look up the finer points of the discussion in the Django FAQs. Irrespective of any of that, like any web server, Django’s main job is to decide what to do when a user asks for a particular URL on our site. Django’s workflow goes something like this: An HTTP request comes in for a particular URL. Django uses some rules to decide which view function should deal with the request (this is referred to as resolving the URL). The view function processes the request and returns an HTTP response. So we want to test two things: Can we resolve the URL for the root of the site (“/”) to a particular view function we’ve made? Can we make this view function return some HTML which will get the functional test to pass? Let’s start with the first. Open up lists/tests.py, and change our silly test to something like this: End of explanation !python3 manage.py test Explanation: What’s going on here? What function is that? It’s the view function we’re going to write next, which will actually return the HTML we want. You can see from the import that we’re planning to store it in lists/views.py. and resolve is the function Django uses internally to resolve URLs, and find what view function they should map to. We’re checking that resolve, when called with “/”, the root of the site, finds a function called home_page. So, what do you think will happen when we run the tests? End of explanation %%writefile lists/views.py from django.shortcuts import render # Create your views here. home_page = None !python3 manage.py test Explanation: It’s a very predictable and uninteresting error: we tried to import something we haven’t even written yet. But it’s still good news—for the purposes of TDD, an exception which was predicted counts as an expected failure. Since we have both a failing functional test and a failing unit test, we have the Testing Goat’s full blessing to code away. At Last! We Actually Write Some Application Code! It is exciting isn’t it? Be warned, TDD means that long periods of anticipation are only defused very gradually, and by tiny increments. Especially since we’re learning and only just starting out, we only allow ourselves to change (or add) one line of code at a time—and each time, we make just the minimal change required to address the current test failure. I’m being deliberately extreme here, but what’s our current test failure? We can’t import home_page from lists.views? OK, let’s fix that—and only that. 
In lists/views.py: End of explanation # %load superlists/urls.py superlists URL Configuration The `urlpatterns` list routes URLs to views. For more information please see: https://docs.djangoproject.com/en/1.8/topics/http/urls/ Examples: Function views 1. Add an import: from my_app import views 2. Add a URL to urlpatterns: url(r'^$', views.home, name='home') Class-based views 1. Add an import: from other_app.views import Home 2. Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home') Including another URLconf 1. Add an import: from blog import urls as blog_urls 2. Add a URL to urlpatterns: url(r'^blog/', include(blog_urls)) from django.conf.urls import include, url from django.contrib import admin urlpatterns = [ url(r'^admin/', include(admin.site.urls)), ] Explanation: we interpret the traceback as telling us that, when trying to resolve “/”, Django raised a 404 error—in other words, Django can’t find a URL mapping for “/”. Let’s help it out. urls.py Django uses a file called urls.py to define how URLs map to view functions. There’s a main urls.py for the whole site in the superlists/superlists folder. Let’s go take a look: End of explanation %%writefile superlists/urls.py superlists URL Configuration The `urlpatterns` list routes URLs to views. For more information please see: https://docs.djangoproject.com/en/1.8/topics/http/urls/ Examples: Function views 1. Add an import: from my_app import views 2. Add a URL to urlpatterns: url(r'^$', views.home, name='home') Class-based views 1. Add an import: from other_app.views import Home 2. Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home') Including another URLconf 1. Add an import: from blog import urls as blog_urls 2. Add a URL to urlpatterns: url(r'^blog/', include(blog_urls)) from django.conf.urls import include, url from django.contrib import admin from lists import views urlpatterns = [ url(r'^$', views.home_page, name='home'), #url(r'^admin/', include(admin.site.urls)), ] !python3 manage.py test Explanation: The first example entry has the regular expression ^$, which means an empty string—could this be the same as the root of our site, which we’ve been testing with “/”? Let’s find out—what happens if we include it? End of explanation %%writefile lists/views.py from django.shortcuts import render # Create your views here. def home_page(): pass !python3 manage.py test Explanation: That’s progress! We’re no longer getting a 404. The message is slightly cryptic, but the unit tests have actually made the link between the URL / and the home_page = None in lists/views.py, and are now complaining that home_page is a NoneType. And that gives us a justification for changing it from being None to being an actual function. Every single code change is driven by the tests! End of explanation %%writefile lists/tests.py from django.core.urlresolvers import resolve from django.test import TestCase from django.http import HttpRequest from lists.views import home_page class HomePageTest(TestCase): def test_root_url_resolves_to_home_page_view(self): found = resolve('/') self.assertEqual(found.func, home_page) def test_home_page_returns_correct_html(self): request = HttpRequest() #1 response = home_page(request) #2 self.assertTrue(response.content.startswith(b'<html>')) #3 self.assertIn(b'<title>To-Do lists</title>', response.content) #4 self.assertTrue(response.content.endswith(b'</html>')) #5 Explanation: Hooray! Our first ever unit test pass! 
That’s so momentous that I think it’s worthy of a commit: $ git diff # should show changes to urls.py, tests.py, and views.py $ git commit -am "First unit test and url mapping, dummy view" Unit Testing a View On to writing a test for our view, so that it can be something more than a do-nothing function, and instead be a function that returns a real response with HTML to the browser. Open up lists/tests.py, and add a new test method. I’ll explain each bit: End of explanation !python3 manage.py test Explanation: What’s going on in this new test? We create an HttpRequest object, which is what Django will see when a user’s browser asks for a page. We pass it to our home_page view, which gives us a response. You won’t be surprised to hear that this object is an instance of a class called HttpResponse. Then, we assert that the .content of the response—which is the HTML that we send to the user—has certain properties. & 5. We want it to start with an &lt;html&gt; tag which gets closed at the end. Notice that response.content is raw bytes, not a Python string, so we have to use the b'' syntax to compare them. More info is available in Django’s Porting to Python 3 docs. And we want a &lt;title&gt; tag somewhere in the middle, with the words "To-Do lists" in it—because that’s what we specified in our functional test. Once again, the unit test is driven by the functional test, but it’s also much closer to the actual code—we’re thinking like programmers now. Let’s run the unit tests now and see how we get on: End of explanation %%writefile lists/views.py from django.shortcuts import render from django.http import HttpResponse # Create your views here. def home_page(request): return HttpResponse('<html><title>To-Do lists</title></html>') !python3 manage.py test Explanation: The Unit-Test/Code Cycle We can start to settle into the TDD unit-test/code cycle now: In the terminal, run the unit tests and see how they fail. In the editor, make a minimal code change to address the current test failure. And repeat! The more nervous we are about getting our code right, the smaller and more minimal we make each code change—the idea is to be absolutely sure that each bit of code is justified by a test. It may seem laborious, but once you get into the swing of things, it really moves quite fast—so much so that, at work, we usually keep our code changes microscopic even when we’re confident we could skip ahead. Let’s see how fast we can get this cycle going: Minimal code change: lists/views.py. python def home_page(request): pass Tests: self.assertTrue(response.content.startswith(b'&lt;html&gt;')) AttributeError: 'NoneType' object has no attribute 'content' Code—we use django.http.HttpResponse, as predicted: lists/views.py. ```python from django.http import HttpResponse Create your views here. def home_page(request): return HttpResponse() Tests again: self.assertTrue(response.content.startswith(b'<html>')) AssertionError: False is not true Code again: lists/views.py. def home_page(request): return HttpResponse('<html>') ``` Tests: AssertionError: b'&lt;title&gt;To-Do lists&lt;/title&gt;' not found in b'&lt;html&gt;' Code: lists/views.py. python def home_page(request): return HttpResponse('&lt;html&gt;&lt;title&gt;To-Do lists&lt;/title&gt;') Tests—almost there? ``` self.assertTrue(response.content.endswith(b'</html>')) AssertionError: False is not true Come on, one last effort: lists/views.py.python def home_page(request): return HttpResponse('<html><title>To-Do lists</title></html>') Surely? 
$ python3 manage.py test Creating test database for alias 'default'... .. Ran 2 tests in 0.001s OK Destroying test database for alias 'default'... ``` Failed? What? Oh, it’s just our little reminder? Yes? Yes! We have a web page! Ahem. Well, I thought it was a thrilling end to the chapter. You may still be a little baffled, perhaps keen to hear a justification for all these tests, and don’t worry, all that will come, but I hope you felt just a tinge of excitement near the end there. Just a little commit to calm down, and reflect on what we’ve covered: $ git diff # should show our new test in tests.py, and the view in views.py $ git commit -am "Basic view now returns minimal HTML" That was quite a chapter! Why not try typing git log, possibly using the --oneline flag, for a reminder of what we got up to: $ git log --oneline a6e6cc9 Basic view now returns minimal HTML 450c0f3 First unit test and url mapping, dummy view ea2b037 Add app for lists, with deliberately failing unit test [...] Not bad—we covered: Starting a Django app The Django unit test runner *The difference between FTs and unit tests Django URL resolving and urls.py Django view functions, request and response objects And returning basic HTML End of explanation
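For reference, a minimal sketch of the functional test this chapter keeps pointing back to; the exact functional_tests.py listing in the book differs slightly, and this version assumes Selenium plus a local Firefox are installed and that the dev server is running.

# functional_tests.py (sketch)
from selenium import webdriver
import unittest

class NewVisitorTest(unittest.TestCase):

    def setUp(self):
        self.browser = webdriver.Firefox()

    def tearDown(self):
        self.browser.quit()

    def test_can_start_a_list_and_retrieve_it_later(self):
        # needs the server up first: python3 manage.py runserver
        self.browser.get('http://localhost:8000')
        self.assertIn('To-Do', self.browser.title)
        self.fail('Finish the test!')

if __name__ == '__main__':
    unittest.main(warnings='ignore')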
8,223
Given the following text description, write Python code to implement the functionality described below step by step Description: Sveučilište u Zagrebu<br> Fakultet elektrotehnike i računarstva Strojno učenje <a href="http Step1: Sadržaj Step2: Linearno zavisne varijable imaju $\rho$ blizu $1$ ili $-1$. Međutim, nelinearno zavisne varijable mogu imati $\rho$ blizu nule! Statistička nezavisnost Varijable $X$ i $Y$ su nezavisne akko Step3: Kategorička ("multinomijalna") razdioba Varijabla koja poprima jednu (i samo jednu) od $K$ mogućih vrijednosti $\mathbf{x}=(x_1,x_2,\dots,x_K)^\mathrm{T}$ je binaran vektor indikatorskih varijabli vektor 1-od-K one-hot encoding Vjerojatnosti pojedinih vrijednosti Step4: $P( \mu -\sigma\leq X \leq \mu + \sigma) = 0.68$ $P( \mu -2\sigma\leq X \leq \mu + 2\sigma) = 0.95$ $P( \mu -3\sigma\leq X \leq \mu + 3\sigma) = 0.99.7$ Step5: Multivarijatna Gaussova razdioba \begin{equation} p(\mathbf{X}=\mathbf{x}|\boldsymbol{\mu},\mathbf{\Sigma}) = \frac{1}{(2\pi)^{n/2}|\mathbf{\Sigma}|^{1/2}} \exp\Big{-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^\mathrm{T}\mathbf{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{\mu})\Big} \end{equation} $\mathbf{\Sigma}$ mora biti pozitivno definitna. Tada (1) matrica je nesingularna i ima inverz te (2) determinanta joj je pozitivna Kvadratna forma Step6: Procjena parametara Ideja Step7: Očekivanje procjenitelja Step8: $\mathbb{E}[\hat{\mu}]=\mu$, tj. $\hat{\mu}$ je nepristran procjenitelj srednje vrijednosti Međutim, $\mathbb{E}[\hat{\sigma}^2] \neq \sigma^2$, tj. $\hat{\sigma}^2$ nije nepristran procjenitelj varijance! $$ \mathbb{E}[\hat{\sigma}^2] = \frac{N-1}{N}\sigma^2 $$ Pristranost od $\hat{\sigma}^2$ je $$ b(\hat{\sigma}^2) = \frac{N-1}{N}\sigma^2-\sigma^2 = -\frac{\sigma^2}{N} $$ Procjenitelj podcjenjuje (engl. underestimates) pravu varijancu! Nepristran procjenitelj varijance Step9: Očekivanje procjenitelja Step10: Kako izvesti procjenitelj za neku teorijsku distribuciju (Bernoullijevu, Gaussovu, ...)? Tri vrste procjenitelja Step11: MLE Nalazi $\boldsymbol{\theta}$ koji maksimiziraju funkciju izglednosti Step12: MLE za multivarijatnu Gaussovu razdiobu \begin{align} \ln\mathcal{L}(\boldsymbol{\mu},\boldsymbol{\Sigma}|\mathcal{D}) &= \ln\prod_{i=1}^N p(\mathbf{x}^{(i)}|\boldsymbol{\mu},\boldsymbol{\Sigma})\ &= -\frac{n N}{2}\ln(2\pi)-\frac{N}{2}|\boldsymbol{\Sigma}| -\frac{1}{2}\sum_{i=1}^N(\boldsymbol{x}^{(i)}-\boldsymbol{\mu})^\mathrm{T}\boldsymbol{\Sigma}^{-1}(\mathbf{x}^{(i)}-\boldsymbol{\mu}) \end{align} \begin{align} \nabla\ln\mathcal{L}(\boldsymbol{\mu},\boldsymbol{\Sigma} | \mathcal{D})&=0\ \vdots\ \hat{\boldsymbol{\mu}}\mathrm{ML} &= \frac{1}{N}\sum{i=1}^N\mathbf{x}^{(i)}\ \hat{\boldsymbol{\Sigma}}\mathrm{ML} &= \frac{1}{N}\sum{i=1}^N (\mathbf{x}^{(i)}-\hat{\boldsymbol{\mu}}\mathrm{ML})(\mathbf{x}^{(i)}-\hat{\boldsymbol{\mu}}\mathrm{ML})^\mathrm{T} \end{align} Step13: Procjenitelj MAP MLE lako dovodi do prenaučenosti modela Npr. za skup primjera za koji $\forall x^{(i)}\in \mathcal{D}. x^{(i)}=0$, procjena je $\hat{\mu}_\mathrm{ML}=0$ Ideja
Python Code: import scipy as sp import scipy.stats as stats import matplotlib.pyplot as plt from numpy.random import normal %pylab inline Explanation: Sveučilište u Zagrebu<br> Fakultet elektrotehnike i računarstva Strojno učenje <a href="http://www.fer.unizg.hr/predmet/su">http://www.fer.unizg.hr/predmet/su</a> Ak. god. 2015./2016. Bilježnica 3: Osnove vjerojatnosti i statistike (c) 2015 Jan Šnajder <i>Verzija: 0.8 (2015-11-01)</i> End of explanation from scipy import stats X = sp.random.random(100) Y0 = sp.random.random(100) noise = stats.norm.rvs(size=100) Y1 = X + 0.2 * noise Y2 = 3 * Y1 Y3 = -Y1 Y4 = 1 - (X - 0.5)**2 + 0.05 * noise for Y in [Y0, Y1, Y2, Y3, Y4]: plt.scatter(X,Y, label="r = %.3f" % stats.pearsonr(X, Y)[0]) plt.legend() plt.show() Explanation: Sadržaj: Vjerojatnost Očekivanje, varijanca i kovarijanca Statistička nezavisnost Matrica kovarijacije Teorijske razdiobe Procjena parametara Procjenitelj MLE Procjenitelj MAP Vjerojatnost $X$ je slučajna varijabla, ${x_i}$ su njezine vrijednosti Pojednostavljenje notacije: $$P(X=x) \equiv P(x)$$ $P(x_i)\geq 0$, $\sum_i P(x_i)=1$ Distribucija (razdioba) vjerojatnosti Zajednička (engl. joint) distribucija nad ${X,Y}$: $$P(X=x,Y=y)\equiv P(x,y)$$ Kontinuirana slučajna varijabla: funkcija gustoće vjerojatnosti (PDF): \begin{eqnarray} p(x) & \geq 0\ \int_{-\infty}^{\infty} p(x)\,\textrm{d}x &= 1\ P(a\leq X\leq b) &= \int_a^b p(x)\,\mathrm{d}x \end{eqnarray} Dva pravila teorije vjerojatnosti (1) Pravilo zbroja $$P(x)=\sum_y P(x,y)$$ (Marginalna vjerojatnost varijable $X$) Uvjetna vjerojatnost: $$ P(y|x) = \frac{P(x,y)}{P(x)} $$ (2) Pravilo umnoška $$P(x,y) = P(y|x) P(x) = P(x|y) P(y)$$ Izvedena pravila Bayesovo pravilo $$ P(y|x) = \frac{P(x|y)P(y)}{P(x)} = \frac{P(x|y)P(y)}{\sum_y P(x,y)} = \frac{P(x|y)P(y)}{\sum_y P(x|y)P(y)} $$ Pravilo lanca (engl. 
chain rule) $$P(x,y,z) = P(x) P(y|x) P(z|x,y)$$ Općenito: $$ \begin{align} P(x_1,\dots,x_n) &= P(x_1)P(x_2|x_1)P(x_3|x_1,x_2)\cdots P(x_n|x_1,\dots,x_{n-1})\ &= \prod_{k=1}^n P(x_k|x_1,\dots,x_{k-1}) \end{align} $$ Očekivanje, varijanca i kovarijanca Očekivanje slučajne varijable: \begin{equation} \mathbb{E}[X]=\sum_x x P(x) \end{equation} $$ \mathbb{E}[X]=\int_{-\infty}^{\infty} x\,p(x)\,\mathrm{d}x $$ Očekivanje funkcije slučajne varijable: \begin{equation} \mathbb{E}[f]=\sum_x f(x) P(x) \end{equation} Vrijedi: \begin{align} \mathbb{E}[aX+b] &= a\mathbb{E}[X]+b\qquad (a,b\in\mathbb{R})\ \mathbb{E}[X+Y] &= \mathbb{E}[X] + \mathbb{E}[Y] \end{align} Varijanca slučajne varijable: \begin{equation} \mathrm{Var}(X) = \sigma_X^2 = \mathbb{E}[(X-\mathbb{E}[X])^2] = \mathbb{E}[X^2] - \mathbb{E}[X]^2 \end{equation} \begin{equation} \mathrm{Var}(a X) = \mathbb{E}\big[(a X)^2\big] - \mathbb{E}[a X]^2 = a^2\mathbb{E}[X^2] - a^2\mathbb{E}[X]^2 = a^2\mathrm{Var}(X) \end{equation} Kovarijanca slučajnih varijabli: \begin{align} \mathrm{Cov}(X,Y) &= \sigma_{X,Y} = \mathbb{E}\big[(X-\mathbb{E}[X])(Y-\mathbb{E}[Y])\big] = \mathbb{E}[XY] - \mathbb{E}[X]\mathbb{E}[Y]\ \mathrm{Cov}(X,Y) &=\mathrm{Cov}(Y, X)\ \mathrm{Cov}(X,X) &=\mathrm{Var}(X) =\sigma^2_X\ \end{align} Pearsonov koeficijent korelacije (linearna zavisnost): $$ \rho_{X,Y} = \frac{\mathrm{Cov}(X,Y)}{\sigma_X\sigma_Y} $$ $\rho_{X,Y}\in[-1,+1]$ End of explanation mu = 0.3 p = stats.bernoulli(mu) xs = sp.array([0,1]) for x in xs: plt.plot(x, p.pmf(x), 'bo', ms=8, label='bernoulli pmf') plt.vlines(x, 0, p.pmf(x), colors='b', lw=5, alpha=0.5) plt.xlim(xmin=-1, xmax=2) plt.ylim(ymax=1) plt.show() X = p.rvs(size=100); X sp.mean(X) sp.var(X) xs = linspace(0,1) plt.plot(xs, xs * (1-xs)); Explanation: Linearno zavisne varijable imaju $\rho$ blizu $1$ ili $-1$. Međutim, nelinearno zavisne varijable mogu imati $\rho$ blizu nule! 
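To make that last point concrete, here is a small added check (not in the original notebook, but using the same tools as the cell above): a fully deterministic quadratic relationship still gives a Pearson coefficient close to zero.

# Added illustration: nonlinear dependence with near-zero Pearson correlation
from scipy import stats
X = sp.random.uniform(-1, 1, 1000)
Y = X**2                        # completely determined by X, but not linearly
r, _ = stats.pearsonr(X, Y)
print("Pearson r = %.3f" % r)   # typically very close to 0
plt.scatter(X, Y, s=5)
plt.show()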
Statistička nezavisnost Varijable $X$ i $Y$ su nezavisne akko: $$ P(X,Y) = P(X) P(Y) $$ ili $$ P(X|Y) = P(X) \qquad \text{i} \qquad P(Y|X) = P(Y) $$ Znanje o ishodu varijable $Y$ ne utječe na vjerojatnost ishoda varijable $X$ (i obrnuto) Za nezavisne varijable $X$ i $Y$ vrijedi: $$ \begin{align} \mathbb{E}[XY] &= \mathbb{E}[X]\, \mathbb{E}[Y]\ \mathrm{Var}(X+Y) &= \mathrm{Var}(X) + \mathrm{Var}(Y)\ \mathrm{Cov}(X, Y) &= \rho_{X,Y} = 0 \end{align} $$ Nezavisne varijable su nekorelirane, ali obrat općenito ne vrijedi: nelinarno zavisne varijable mogu imati nisku korelaciju Varijable $X$ i $Y$ su uvjetno nezavisne uz danu varijablu Z, što označavamo kao $X\bot Y|Z$, akko $$ P(X|Y,Z) = P(X|Z) $$ ili $$ P(X,Y|Z) = P(X|Z) P(Y|Z) $$ Jednom kada nam je poznat ishod varijable $Z$, znanje o ishodu varijable $Y$ ne utječe na ishod varijable $X$ (i obrnuto) Npr.: $X = \textrm{'Student je primljen na FER'}$ $Y = \textrm{'Student je primljen na PMF-MO'}$ $P(Y|X) \neq P(Y)$ (varijable nisu nezavisne) $Z = \textrm{'Student je sudjelovao na matematičkim natjecanjima'}$ $X\bot Y|Z$ $P(Y|X,Z) = P(Y|Z)$ Matrica kovarijacije $\mathbf{X} = (X_1,\dots,X_n)$ je $n$-dimenzijski slučajan vektor Matrica kovarijacije $\Sigma$: $$ \Sigma_{ij} = \mathrm{Cov}(X_i, X_j) = \mathbb{E}\big[(X_i-\mathbb{E}[X_i])(X_j-\mathbb{E}[X_j])\big] $$ Matrično: $$ \begin{align} \Sigma &= \begin{pmatrix} \mathrm{Var}(X_1) & \mathrm{Cov}(X_1,X_2) & \dots & \mathrm{Cov}(X_1, X_n)\ \mathrm{Cov}(X_2, X_1) & \mathrm{Var}(X_2) & \dots & \mathrm{Cov}(X_2, X_n)\ \vdots & \vdots & \ddots & \vdots \ \mathrm{Cov}(X_n, X_1) & \mathrm{Cov}(X_n, X_2) & \dots & \mathrm{Var}(X_n)\ \end{pmatrix} \end{align} $$ Simetrična matrica! Ekvivalentno: \begin{equation} \Sigma = \mathbb{E}\Big[(\textbf{X}-\mathbb{E}[\textbf{X}])(\textbf{X}-\mathbb{E}[\textbf{X}])^{\mathrm{T}}\Big] \end{equation} Ako su $X_1...X_n$ međusobno nezavisne, onda $\Sigma = \mathrm{diag}(\sigma_i^2)$ Ako $\sigma^2_i = \sigma^2$, onda $\Sigma = \sigma^2 \mathbf{I}$ (izotropna kovarijanca) Teorijske razdiobe Diskretna značajka: Jednodimenzijska: Binarna: Bernoullijeva razdioba Viševrijednosna: Kategorička (multinomijalna) razdioba Višedimenzijska: Konkatenirani vektor binarnih/viševrijednosnih varijabli Kontinuirana značajka: Jednodimenzijska: univarijatna normalna (Gaussova) razdioba Višedimenzijska: multivarijatna normalna (Gaussova) razdioba Bernoullijeva razdioba \begin{equation} P(X=x | \mu)= \begin{cases} \mu & \text{ako $X=1$}\ 1-\mu & \text{inače} \end{cases} \qquad= \mu^{x}(1-\mu)^{1-x} \end{equation} \begin{eqnarray} \mathbb{E}[X] &=& \mu\ \mathrm{Var}(X) &=& \mu(1-\mu) \end{eqnarray} End of explanation xs = sp.linspace(-5, 5) for s in range(1, 5): plt.plot(xs, stats.norm.pdf(xs, 0, s), label='$\sigma=%d$' % s) plt.legend() plt.show() Explanation: Kategorička ("multinomijalna") razdioba Varijabla koja poprima jednu (i samo jednu) od $K$ mogućih vrijednosti $\mathbf{x}=(x_1,x_2,\dots,x_K)^\mathrm{T}$ je binaran vektor indikatorskih varijabli vektor 1-od-K one-hot encoding Vjerojatnosti pojedinih vrijednosti: $\boldsymbol{\mu}=(\mu_1,\dots,\mu_K)^\mathrm{T}$, $\sum_k \mu_k=1$, $\mu_k\geq 0$ \begin{equation} P(\mathbf{X}=\mathbf{x} | \boldsymbol{\mu}) = \prod_{k=1}^K \mu_k^{x_k} \end{equation} Npr. 
$X=x_3\quad \Rightarrow\quad \mathbf{x} = (0,0,1,0)$ $\boldsymbol{\mu} = (0.2, 0.3, 0.4, 0.1)$ $P\big(X = (0,0,1,0)\big) = \prod_{k=1}^4 \mu_k^{x_k} = 1\cdot 1\cdot \mu_3\cdot 1 = \mu_3 = 0.4$ Gaussova razdioba \begin{equation} p(X=x|\mu,\sigma^2) = \frac{1}{\sqrt{2\pi}\sigma}\exp\Big{-\frac{(x-\mu)^2}{2\sigma^2}\Big} \end{equation} \begin{align} \mathbb{E}[X] =& \mu\ \mathrm{Var}(X) =& \sigma^2 \end{align} End of explanation print stats.norm.cdf(1, 0, 1) - stats.norm.cdf(-1, 0, 1) print stats.norm.cdf(2, 0, 1) - stats.norm.cdf(-2, 0, 1) print stats.norm.cdf(3, 0, 1) - stats.norm.cdf(-3, 0, 1) p = stats.norm(loc=5, scale=3) X = p.rvs(size=30); X sp.mean(X) sp.var(X) Explanation: $P( \mu -\sigma\leq X \leq \mu + \sigma) = 0.68$ $P( \mu -2\sigma\leq X \leq \mu + 2\sigma) = 0.95$ $P( \mu -3\sigma\leq X \leq \mu + 3\sigma) = 0.99.7$ End of explanation mu = [0, 1] covm = sp.array([[1, 1], [1, 3]]) p = stats.multivariate_normal(mu, covm) print covm x = np.linspace(-2, 2) y = np.linspace(-2, 2) X, Y = np.meshgrid(x, y) XY = np.dstack((X,Y)) plt.contour(X, Y, p.pdf(XY)); covm1 = sp.array([[1, 0], [0, 5]]) print covm1 plt.contour(X, Y, stats.multivariate_normal.pdf(XY, mean=mu, cov=covm1 )); covm2 = sp.array([[5, 0], [0, 5]]) print covm2 plt.contour(X, Y, stats.multivariate_normal.pdf(XY, mean=mu, cov=covm2 )); plt.contour(X, Y, stats.multivariate_normal.pdf(XY, mean=mu, cov=[[1,0],[0,1]] )); from scipy import linalg x00 = sp.array([0,0]) x01 = sp.array([0,1]) x10 = sp.array([1,0]) x11 = sp.array([1,1]) linalg.norm(x00 - x01, ord=2) linalg.norm(x00 - x10, ord=2) linalg.norm(x00 - x11, ord=2) sqrt(sp.dot((x00 - x11),(x00 - x11))) def mahalanobis(x1, x2, covm): return sqrt(sp.dot(sp.dot((x1 - x2), linalg.inv(covm)), (x1 - x2))) # ili: from scipy.spatial.distance import mahalanobis covm1 = sp.array([[1, 0], [0, 5]]) mahalanobis(x00, x01, covm1) mahalanobis(x00, x10, covm1) mahalanobis(x00, x11, covm1) mahalanobis(x00, x11, sp.eye(2)) Explanation: Multivarijatna Gaussova razdioba \begin{equation} p(\mathbf{X}=\mathbf{x}|\boldsymbol{\mu},\mathbf{\Sigma}) = \frac{1}{(2\pi)^{n/2}|\mathbf{\Sigma}|^{1/2}} \exp\Big{-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^\mathrm{T}\mathbf{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{\mu})\Big} \end{equation} $\mathbf{\Sigma}$ mora biti pozitivno definitna. Tada (1) matrica je nesingularna i ima inverz te (2) determinanta joj je pozitivna Kvadratna forma: $\Delta^2 = (\mathbf{x}-\boldsymbol{\mu})^\mathrm{T}\mathbf{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{\mu})$ je Mahalanobisova udaljenost između $\mathbf{x}$ i $\boldsymbol{\mu}$. \begin{align} \mathbb{E}[\mathbf{X}] =& \boldsymbol{\mu}\ \mathrm{Cov}(X_i, X_j) =& \mathbf{\Sigma}_{ij} \end{align} End of explanation X = stats.norm.rvs(size=10, loc=0, scale=1) # mean=0, stdev=var=1 sp.mean(X) Explanation: Procjena parametara Ideja: na temelju slučajnog uzorka izračunati procjenu (estimaciju) parametra teorijske razdiobe Neka je $(X_1,X_2,\dots,X_n)$ uzorak ($n$-torka slučajnih varijabli koje su iid) Slučajna varijabla $\Theta=g(X_1,X_2,\dots,X_n)$ naziva se statistika Statistika $\Theta$ je procjenitelj (estimator) parametra populacije $\theta$ Vrijednost procjenitelja $\hat{\theta} = g(x_1,x_2,\dots,x_n)$ naziva se procjena Procjenitelj je slučajna varijable, dakle ima očekivanje i varijancu [Slika: pristranost i varijanca procjenitelja] Procjenitelj $\Theta$ je nepristran procjenitelj (engl. unbiased estimator) parametra $\theta$ akko $$ \mathbb{E}[\Theta]=\theta $$ Pristranost procjenitelj (engl. 
estimator bias): $$ b_\theta(\Theta) = \mathbb{E}[\Theta]-\theta $$ Primjer: Procjenitelji srednje vrijednosti i varijance $X$ je slučajna varijabla sa $x\in\mathbb{R}$. Označimo $\mathbb{E}[X] = \mu$ (srednja vrijednost) i $\mathrm{Var}(X)=\sigma^2$ (varijanca) $\mu$ i $\sigma^2$ su parametri populacije i oni su nam nepoznati Parametre $\mu$ i $\sigma^2$ možemo ih procijeniti na temelju uzorka ${x^{(i)}}_{i=1}^N$ pomoću procjenitelja Za procjenitelje možemo upotrijebiti bilo koje statistike. Npr. $$ \hat{\mu}=\frac{1}{N}\sum_i x^{(i)}\qquad \hat{\sigma}^2 = \frac{1}{N}\sum_{i=1}^N (x^{(i)}-\hat{\mu})^2 $$ Q: Jesu li ovo dobri procjenitelji? (Jesu li nepristrani?) $\mathbb{E}[\hat{\mu}]=\mu$ ? $\mathbb{E}[\hat{\sigma}^2] = \sigma^2$ ? End of explanation mean = 0 n = 10 N = 10000 for i in range(N): X = stats.norm.rvs(size=n) mean += sp.sum(X) / len(X) mean / N Explanation: Očekivanje procjenitelja: End of explanation def st_dev(X): n = len(X) mean = sp.sum(X) / n s = 0 for i in range(len(X)): s += (X[i] - mean)**2 return s / n X = stats.norm.rvs(size=10, loc=0, scale=1) # mean=0, stdev=var=1 st_dev(X) Explanation: $\mathbb{E}[\hat{\mu}]=\mu$, tj. $\hat{\mu}$ je nepristran procjenitelj srednje vrijednosti Međutim, $\mathbb{E}[\hat{\sigma}^2] \neq \sigma^2$, tj. $\hat{\sigma}^2$ nije nepristran procjenitelj varijance! $$ \mathbb{E}[\hat{\sigma}^2] = \frac{N-1}{N}\sigma^2 $$ Pristranost od $\hat{\sigma}^2$ je $$ b(\hat{\sigma}^2) = \frac{N-1}{N}\sigma^2-\sigma^2 = -\frac{\sigma^2}{N} $$ Procjenitelj podcjenjuje (engl. underestimates) pravu varijancu! Nepristran procjenitelj varijance: $$ \hat{\sigma}^2_{\text{nepr.}} = \frac{1}{N-1}\sum_{i=1}^N (x^{(i)}-\hat{\mu})^2 $$ End of explanation stdev = 0 n = 10 N = 10000 for i in range(N): X = stats.norm.rvs(size=n) stdev += st_dev(X) stdev / N stdev = 0 n = 10 N = 10000 for i in range(N): X = stats.norm.rvs(size=n) stdev += st_dev(X) stdev / N Explanation: Očekivanje procjenitelja: End of explanation def likelihood(mu, m, N): return mu**m * (1 - mu)**(N - m) xs = linspace(0,1) plt.plot(xs, likelihood(xs, 8, 10)); xs = linspace(0,1) plt.plot(xs, likelihood(xs, 5, 10)); xs = linspace(0,1) plt.plot(xs, likelihood(xs, 10, 10)); Explanation: Kako izvesti procjenitelj za neku teorijsku distribuciju (Bernoullijevu, Gaussovu, ...)? Tri vrste procjenitelja: (1) Procjenitelj najveće izglednosti (engl. maximum likelihood estimator, MLE) (2) Procjenitelj maximum aposteriori (MAP) (3) Bayesovski procjenitelj (engl. Bayesian estimator) Procjenitelj MLE Skup neoznačenih primjera $\mathcal{D}={\mathbf{x}^{(i)}}_{i=1}^N$ koji su iid $$ \mathbf{x}^{(i)} \sim p(\mathbf{x} | \boldsymbol{\theta}) $$ MLE određuje najizglednije parametre $\boldsymbol{\theta}$: to su oni parametri koji izvlačenje uzorka $\mathcal{D}$ čine najvjerojatnijim $$ p(\mathcal{D} | \boldsymbol{\theta}) = p(\mathbf{x}^{(1)},\dots,\mathbf{x}^{(N)} | \mathbf{\theta}) = \prod_{i=1}^N p(\mathbf{x}^{(i)} | \mathbf{\theta})\ \equiv \color{red}{\mathcal{L}(\boldsymbol{\theta} | \mathcal{D})} $$ NB: Druga jednakost vrijedi uz pretpostavku iid Funkcija izglednosti $\mathcal{L} : \boldsymbol{\theta}\mapsto p(\mathcal{D} | \boldsymbol{\theta})$ parametrima pridjeljuje vjerojatnost $\mathcal{L}$ nije PDF! Općenito ne vrijedi $\int_{\boldsymbol{\theta}} \mathcal{L}(\boldsymbol{\theta}|\mathcal{D})\,\mathrm{d}\boldsymbol{\theta}=1$. 
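A quick numerical illustration of that claim, added here and reusing the likelihood function defined in the code above: integrating the Bernoulli likelihood for 8 successes in N = 10 trials over the whole parameter range gives a value nowhere near 1.

# Added check: the likelihood is not a probability density over the parameter
from scipy import integrate
area, _ = integrate.quad(lambda mu: likelihood(mu, 8, 10), 0, 1)
print("integral of L(mu | D) over [0, 1] = %.5f" % area)
# analytically this equals the Beta function B(9, 3) = 8! * 2! / 11!, which is about 0.002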
Primjer: Izglednost Bernoullijeve varijable $\mathcal{D} \equiv$ 10 bacanja novčića ($N=10$) Glava (H) 8 puta, pismo (T) 2 puta $\mu$ je vjerojatnost da dobijem H $P(X=x | \mu)= \mu^{x}(1-\mu)^{1-x}$ End of explanation p = stats.norm(5, 2) X = sort(p.rvs(30)) plt.scatter(X, sp.zeros(len(X))); mean_mle = sp.mean(X); mean_mle var_mle = np.var(X, axis=0, ddof=1); var_mle p_mle = stats.norm(mean_mle, sqrt(var_mle)) plt.scatter(X, p_mle.pdf(X)) plt.plot(X, p.pdf(X), c='gray'); plt.plot(X, p_mle.pdf(X), c='blue', linewidth=2) plt.vlines(X, 0, p_mle.pdf(X), colors='b', lw=2, alpha=0.2) plt.show() Explanation: MLE Nalazi $\boldsymbol{\theta}$ koji maksimiziraju funkciju izglednosti: $$ \hat{\boldsymbol{\theta}}{\mathrm{ML}} = \mathrm{argmax}{\boldsymbol{\theta}} \mathcal{L}(\boldsymbol{\theta}|\mathcal{D}) $$ Analitički je jednostavnije maksimizirati log-izglednost: $$ \ln\mathcal{L}(\boldsymbol{\theta} | \mathcal{D}) \ = \ln p(\mathcal{D} | \boldsymbol{\theta}) = \ln \prod_{i=1}^N p(\mathbf{x}^{(i)} | \boldsymbol{\theta}) = \sum_{i=1}^N\ln p(\mathbf{x}^{(i)} | \boldsymbol{\theta}) $$ $$ \hat{\boldsymbol{\theta}}{\mathrm{ML}} = \mathrm{argmax}{\boldsymbol{\theta}} \big(\ln \mathcal{L}\big(\boldsymbol{\theta}|\mathcal{D})\big) $$ Ako je moguće, maksimizaciju provodimo analitički, inače je provodimo iterativnim metodama MLE za Bernoullijevu razdiobu (parametar: $\mu$) \begin{align} \ln\mathcal{L}(\mu | \mathcal{D}) &= \ln\prod_{i=1}^N P(x | \mu) = \ln\prod_{i=1}^N \mu^{x^{(i)}}(1-\mu)^{1-x^{(i)}}\ &=\sum_{i=1}^N x^{(i)}\ln \mu + \Big(N-\sum_{i=1}^N x^{(i)}\Big)\ln(1-\mu) \end{align} $$ \frac{\mathrm{d}\,{\ln\mathcal{L}}}{\mathrm{d}\mu} = \frac{1}{\mu}\sum_{i=1}^N x^{(i)} - \frac{1}{1-\mu}\Big(N-\sum_{i=1}^N x^{(i)}\Big) = 0 $$ \begin{equation} \Rightarrow\quad \hat{\mu}\mathrm{ML} = \frac{1}{N}\sum{i=1}^N x^{(i)} \end{equation} MLE za Bernoullijevu razdiobu je ustvari relativna frekvencija Vrijedi $\mathbb{E}(\mu_\mathrm{ML})=\mathbb{E}[X]=\mu$, pa je ovo je nepristran procjenitelj MLE za kategoričku razdiobu (parametri: $\mu_k$) \begin{align} \ln\mathcal{L}(\boldsymbol{\mu} | \mathcal{D}) = \ln\prod_{i=1}^N P(\mathbf{x}^{(i)} | \boldsymbol{\mu}) = \ln\prod_{i=1}^N \color{red}{\prod_{k=1}^K \mu_k^{x_k^{(i)}}} = \sum_{k=1}^K \sum_{i=1}^N x_k^{(i)} \ln \mu_k \end{align} Izraz treba maksimizirati prema $\mu_k$ uz ograničenje $\sum_{k=1}^K\mu_k=1$. Primjenom metode Lagrangeovih multiplikatora dobivamo: $$ \hat{\mu}{k,\mathrm{ML}} = \frac{1}{N}\sum{i=1}^N x_k^{(i)} = \frac{N_k}{N} $$ $N_k$ je broj nastupanja k-te vrijednosti MLE za Gaussovu razdiobu (parametri: $\mu, \sigma^2$) \begin{align} \ln\mathcal{L}(\mu,\sigma^2 | \mathcal{D}) &= \ln\prod_{i=1}^N \frac{1}{\sqrt{2\pi}\sigma}\exp\Big{-\frac{(x^{(i)}-\mu)^2}{2\sigma^2}\Big} \ &= -\frac{N}{2}\ln(2\pi) - N\ln\sigma - \frac{\sum_i(x^{(i)}-\mu)^2}{2\sigma^2}\ \end{align} \begin{align} \nabla\ln\mathcal{L}(\mu,\sigma^2 | \mathcal{D})&=0\ \vdots\ \hat{\mu}\mathrm{ML} &= \frac{1}{N}\sum{i=1}^N x^{(i)}\ \hat{\sigma}^2_\mathrm{ML} &= \frac{1}{N}\sum_{i=1}^N(x^{(i)}-\hat{\mu}_\mathrm{ML})^2 \end{align} NB: Procjenitelj $\hat{\sigma}^2_\mathrm{ML}$ je pristran! MLE ne mora nužno biti nepristran! 
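One extra sanity check, added here: numerically maximizing the log-likelihood of the coin example from the start of this section recovers the analytic result mu_ML = 8/10.

# Added check: numerical MLE for the Bernoulli coin example (8 heads in 10 flips)
from scipy import optimize
neg_log_L = lambda mu: -(8 * np.log(mu) + 2 * np.log(1 - mu))
res = optimize.minimize_scalar(neg_log_L, bounds=(1e-6, 1 - 1e-6), method='bounded')
print("mu_ML (numerical) = %.4f" % res.x)   # ~0.8000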
End of explanation mu = [3, 2] covm = sp.array([[5, 2], [2, 10]]) p = stats.multivariate_normal(mu, covm) x = np.linspace(-10, 10) y = np.linspace(-10, 10) X, Y = np.meshgrid(x, y) XY = np.dstack((X,Y)) plt.contour(X, Y, p.pdf(XY)) plt.show() D = p.rvs(100) plt.contour(X, Y, p.pdf(XY), cmap='binary', alpha=0.5) plt.scatter(D[:,0], D[:,1]) plt.show() mean_mle = sp.mean(D, axis=0); mean_mle cov_mle = 0 s = 0 for x in D: s += sp.outer(x - mean_mle, x - mean_mle) cov_mle = s / len(D) cov_mle sp.cov(D, rowvar=0, bias=0) p_mle = stats.multivariate_normal(mean_mle, cov_mle) plt.contour(X, Y, p_mle.pdf(XY)); plt.contour(X, Y, p.pdf(XY), cmap='binary', alpha=0.5) plt.scatter(D[:,0], D[:,1], c='gray', alpha=0.5) plt.contour(X, Y, p_mle.pdf(XY), cmap='Blues', linewidths=2); Explanation: MLE za multivarijatnu Gaussovu razdiobu \begin{align} \ln\mathcal{L}(\boldsymbol{\mu},\boldsymbol{\Sigma}|\mathcal{D}) &= \ln\prod_{i=1}^N p(\mathbf{x}^{(i)}|\boldsymbol{\mu},\boldsymbol{\Sigma})\ &= -\frac{n N}{2}\ln(2\pi)-\frac{N}{2}|\boldsymbol{\Sigma}| -\frac{1}{2}\sum_{i=1}^N(\boldsymbol{x}^{(i)}-\boldsymbol{\mu})^\mathrm{T}\boldsymbol{\Sigma}^{-1}(\mathbf{x}^{(i)}-\boldsymbol{\mu}) \end{align} \begin{align} \nabla\ln\mathcal{L}(\boldsymbol{\mu},\boldsymbol{\Sigma} | \mathcal{D})&=0\ \vdots\ \hat{\boldsymbol{\mu}}\mathrm{ML} &= \frac{1}{N}\sum{i=1}^N\mathbf{x}^{(i)}\ \hat{\boldsymbol{\Sigma}}\mathrm{ML} &= \frac{1}{N}\sum{i=1}^N (\mathbf{x}^{(i)}-\hat{\boldsymbol{\mu}}\mathrm{ML})(\mathbf{x}^{(i)}-\hat{\boldsymbol{\mu}}\mathrm{ML})^\mathrm{T} \end{align} End of explanation TODO xs = sp.linspace(0,1) beta = stats.beta(1,1) plt.plot(xs,stats.beta.pdf(xs,1,1), label='a=1,b=1') plt.plot(xs,stats.beta.pdf(xs,2,2), label='a=2,b=2') plt.plot(xs,stats.beta.pdf(xs,4,2), label='a=4,b=2') plt.plot(xs,stats.beta.pdf(xs,2,4), label='a=2,b=4') plt.legend() plt.show() Explanation: Procjenitelj MAP MLE lako dovodi do prenaučenosti modela Npr. za skup primjera za koji $\forall x^{(i)}\in \mathcal{D}. x^{(i)}=0$, procjena je $\hat{\mu}_\mathrm{ML}=0$ Ideja: nisu sve vrijednosti za $\mu$ jednako vjerojatne! Definiramo apriornu razdiobu parametra $p(\boldsymbol{\theta})$ i zatim maksimiziramo aposteriornu vjerojatnost: $$ p(\boldsymbol{\theta}|\mathcal{D}) = \frac{p(\mathcal{D}|\boldsymbol{\theta}) P(\boldsymbol{\theta})} {p(\mathcal{D})} $$ MLE: $$ \hat{\boldsymbol{\theta}} = \mathrm{argmax}_{\boldsymbol{\theta}}\ \mathcal{L}(\boldsymbol{\theta}|\mathcal{D}) $$ MAP: $$ \hat{\mathbf{\theta}}\mathrm{MAP} = \mathrm{argmax}{\boldsymbol{\theta}} \ p(\boldsymbol{\theta}|\mathcal{D}) = p(\mathcal{D}|\boldsymbol{\theta})\,\color{red}{p(\boldsymbol{\theta})} $$ Procjenitelj MAP za Bernoullijevu varijablu End of explanation
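The notebook stops at the idea of a MAP estimator, so here is a small added sketch of the standard closed form for a Bernoulli likelihood with a Beta(a, b) prior (beta-Bernoulli conjugacy). The posterior is Beta(m + a, N - m + b) and its mode, the MAP estimate, is (m + a - 1) / (N + a + b - 2), which shrinks the ML estimate toward the prior.

# Added sketch: MAP estimate for the coin example (m = 8 heads in N = 10 flips)
# with a Beta(a, b) prior on mu; a = b = 2 encodes a mild preference for mu around 0.5
m, N = 8, 10
a, b = 2, 2
mu_ml = m / float(N)
mu_map = (m + a - 1) / float(N + a + b - 2)
print("MLE : %.3f" % mu_ml)    # 0.800
print("MAP : %.3f" % mu_map)   # 0.750, pulled toward the prior mean 0.5
xs = sp.linspace(0, 1, 200)
posterior = stats.beta.pdf(xs, m + a, N - m + b)
plt.plot(xs, posterior)
plt.axvline(mu_map)
plt.show()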
8,224
Given the following text description, write Python code to implement the functionality described below step by step Description: Apply logistic regression to categorize whether a county had high mortality rate due to contamination 1. Import the necessary packages to read in the data, plot, and create a logistic regression model Step1: 2. Read in the hanford.csv file in the data/ folder Step2: <img src="../../images/hanford_variables.png"></img> 3. Calculate the basic descriptive statistics on the data Step3: 4. Find a reasonable threshold to say exposure is high and recode the data Step4: 5. Create a logistic regression model Step5: 6. Predict whether the mortality rate (Cancer per 100,000 man years) will be high at an exposure level of 50
Python Code: import pandas as pd import matplotlib.pyplot as plt %matplotlib inline import numpy as np from sklearn.linear_model import LogisticRegression Explanation: Apply logistic regression to categorize whether a county had high mortality rate due to contamination 1. Import the necessary packages to read in the data, plot, and create a logistic regression model End of explanation df = pd.read_csv('hanford.csv') Explanation: 2. Read in the hanford.csv file in the data/ folder End of explanation df.describe() iqr = df.quantile(q=0.75) - df.quantile(q=0.25) iqr ual = df.quantile(q=0.75) + (iqr * 1.5) ual lal = df.quantile(q=0.25) - (iqr * 1.5) lal df.plot(kind='scatter', x='Exposure', y='Mortality') Explanation: <img src="../../images/hanford_variables.png"></img> 3. Calculate the basic descriptive statistics on the data End of explanation for value in df['Exposure']: if value < ual['Exposure']: print(value) # Find new reasonable threshold! # Choosing 6 df['high_exposure'] = df['Exposure'].apply(lambda x:1 if x>6 else 0) df # dataset = df[['Mortality']].join([pd.get_dummies(df['Exposure'],prefix="Exposure"),df.high_exposure]) # dataset Explanation: 4. Find a reasonable threshold to say exposure is high and recode the data End of explanation from sklearn.linear_model import LogisticRegression lm = LogisticRegression() x = np.asarray(df[['Mortality']]) y = np.asarray(df['high_exposure']) lm = lm.fit(x,y) lm.score(x,y) lm.coef_ lm.intercept_ plt.plot(x,lm.coef_[0]*x+lm.intercept_[0]) Explanation: 5. Create a logistic regression model End of explanation df['high_mortality'] = df['Mortality'].apply(lambda x:1 if x>150 else 0) lm2 = LogisticRegression() x2 = np.asarray(df[['Exposure']]) y2 = np.asarray(df['high_mortality']) lm2 = lm2.fit(x2,y2) lm2.predict(50) # According to the prediction the mortality rate is high at an exposure level of 50. Explanation: 6. Predict whether the mortality rate (Cancer per 100,000 man years) will be high at an exposure level of 50 End of explanation
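A possible follow-up to the last cell, not in the original notebook: the hard class label hides how confident the model is, so it can help to also look at the predicted probability. Depending on the scikit-learn version, the input may need to be a 2-D array, hence the double brackets.

# Added: predicted probability of high mortality at an exposure level of 50
prob_high = lm2.predict_proba([[50]])[0][1]
print("P(high mortality | exposure = 50) = %.3f" % prob_high)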
8,225
Given the following text description, write Python code to implement the functionality described below step by step Description: 선형 회귀 분석의 기초 결정론적 모형은 그냥 함수를 찾는 것. 간단한 함수부터 시작을 한다. 간단한 함수는 선형식을 의미하는 듯 선형 회귀 분석은 부호, 크기, 관계 등을 알려주기 때문에 불안전하다는 단점에도 불구하고 잘 쓰이고 있다. 비선형회귀분석의 문제점으로는 overfitting 현상이 발생한다는 점. 그리고 방법도 너무 많다는 점 cross validation이란 x를 남겨두는 것. 함수 검사를 위해서. 진짜 시험을 남겨두는 것과 같은 원리. 적어도 3개 이상 남겨둔다. 회귀 분석(regression analysis)은 입력 자료(독립 변수) $x$와 이에 대응하는 출력 자료(종속 변수) $y$간의 관계를 정량화 하기 위한 작업이다. 회귀 분석에는 결정론적 모형(Deterministic Model)과 확률적 모형(Probabilistic Model)이 있다. 결정론적 모형은 단순히 독립 변수 $x$에 대해 대응하는 종속 변수 $y$를 계산하는 함수를 만드는 과정이다. $$ \hat{y} = f \left( x; { x_1, y_1, x_2, y_2, \cdots, x_N, y_N } \right) = f (x; D) = f(x) $$ 여기에서 $ { x_1, y_1, x_2, y_2, \cdots, x_N, y_N } $ 는 모형 계수 추정을 위한 과거 자료이다. 만약 함수가 선형 함수이면 선형 회귀 분석(linear regression analysis)이라고 한다. $$ \hat{y} = w_0 + w_1 x_1 + w_2 x_2 + \cdots + w_D x_D $$ Augmentation(증가 개념) 일반적으로 회귀 분석에 앞서 다음과 같이 상수항을 독립 변수에 포함하는 작업이 필요할 수 있다. 이를 feature augmentation이라고 한다. $$ x_i = \begin{bmatrix} x_{i1} \ x_{i2} \ \vdots \ x_{iD} \end{bmatrix} \rightarrow x_{i,a} = \begin{bmatrix} 1 \ x_{i1} \ x_{i2} \ \vdots \ x_{iD} \end{bmatrix} $$ augmentation을 하게 되면 모든 원소가 1인 벡터를 feature matrix 에 추가된다. $$ X = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1D} \ x_{21} & x_{22} & \cdots & x_{2D} \ \vdots & \vdots & \vdots & \vdots \ x_{N1} & x_{N2} & \cdots & x_{ND} \ \end{bmatrix} \rightarrow X_a = \begin{bmatrix} 1 & x_{11} & x_{12} & \cdots & x_{1D} \ 1 & x_{21} & x_{22} & \cdots & x_{2D} \ \vdots & \vdots & \vdots & \vdots & \vdots \ 1 & x_{N1} & x_{N2} & \cdots & x_{ND} \ \end{bmatrix} $$ augmentation을 하면 가중치 벡터(weight vector)도 차원이 증가하여 전체 수식이 다음과 같이 단순화 된다. $$ w_0 + w_1 x_1 + w_2 x_2 \begin{bmatrix} 1 & x_1 & x_2 \end{bmatrix} \begin{bmatrix} w_0 \ w_1 \ w_2 \end{bmatrix} = x_a^T w $$ Step1: OLS (Ordinary Least Squares) OLS는 가장 기본적인 결정론적 회귀 방법으로 Residual Sum of Squares(RSS)를 최소화하는 가중치 벡터 값을 미분을 통해 구한다. Residual 잔차 $$ e_i = {y}_i - x_i^T w $$ Stacking (Vector Form) $$ e = {y} - Xw $$ Residual Sum of Squares (RSS) $$\begin{eqnarray} \text{RSS} &=& \sum (y_i - \hat{y}_i)^2 \ &=& \sum e_i^2 = e^Te \ &=& (y - Xw)^T(y - Xw) \ &=& y^Ty - 2y^T X w + w^TX^TXw \end{eqnarray}$$ Minimize using Gradient $$ \dfrac{\partial \text{RSS}}{\partial w} = -2 X^T y + 2 X^TX w = 0 $$ $$ X^TX w = X^T y $$ $$ w = (X^TX)^{-1} X^T y $$ 여기에서 그레디언트를 나타내는 다음 식을 Normal equation 이라고 한다. $$ X^T y - X^TX w = 0 $$ Normal equation 에서 잔차에 대한 다음 특성을 알 수 있다. $$ X^T (y - X w ) = X^T e = 0 $$ bias는 상수항이자 y절편. 사이킷은 내부에 있고 stats모델에서는 명령어 하나 불러야 한다? Step2: scikit-learn 패키지를 사용한 선형 회귀 분석 sklearn 패키지를 사용하여 선형 회귀 분석을 하는 경우에는 linear_model 서브 패키지의 LinearRegression 클래스를 사용한다. http Step3: Boston Housing Price Step4: statsmodels 를 사용한 선형 회귀 분석 실제로는 선형회귀분석의 경우 이 모델을 사용한다. statsmodels 패키지에서는 OLS 클래스를 사용하여 선형 회귀 분석을 실시한다. http Step5: DF는 ndarray의 리스트 리스트 of 벡터 ndarray는 다 된다 list of list는 리스트 안에 리스트 Dep. Variable은 우리가 구할 값 target은 라벨 No. Observations은 샘플수 Df Model은 parameter-1 std err는 coef의 +- err 수치 가장 먼저 P>|t| 이거부터. 이게 중요. 0인지 아닌지. 0이면 살리고 아니면 죽일 가능성이 높다. Prob(Omnibus) = 0.471이면 그냥 정규분포다. Cond. No가 10000 이하면 괜찮아 그 다음 봐야 할 것은 coef. coef_ Step6: RegressionResults 클래스는 분석 결과를 다양한 속성에 저장해주므로 추후 사용자가 선택하여 활용할 수 있다. Step7: statsmodel는 다양한 회귀 분석 결과 플롯도 제공한다. plot_fit(results, exog_idx) Plot fit against one regressor. abline_plot([intercept, ...]) Plots a line given an intercept and slope. influence_plot(results[, ...]) Plot of influence in regression. plot_leverage_resid2(results) Plots leverage statistics vs. 
plot_partregress(endog, ...) Plot partial regression for a single regressor. plot_ccpr(results, exog_idx) Plot CCPR against one regressor. plot_regress_exog(results, ...) Plot regression results against one regressor.
Python Code: from sklearn.datasets import make_regression bias = 100 X0, y, coef = make_regression(n_samples=100, n_features=1, bias=bias, noise=10, coef=True, random_state=1) X = np.hstack([np.ones_like(X0), X0]) X[:5] Explanation: 선형 회귀 분석의 기초 결정론적 모형은 그냥 함수를 찾는 것. 간단한 함수부터 시작을 한다. 간단한 함수는 선형식을 의미하는 듯 선형 회귀 분석은 부호, 크기, 관계 등을 알려주기 때문에 불안전하다는 단점에도 불구하고 잘 쓰이고 있다. 비선형회귀분석의 문제점으로는 overfitting 현상이 발생한다는 점. 그리고 방법도 너무 많다는 점 cross validation이란 x를 남겨두는 것. 함수 검사를 위해서. 진짜 시험을 남겨두는 것과 같은 원리. 적어도 3개 이상 남겨둔다. 회귀 분석(regression analysis)은 입력 자료(독립 변수) $x$와 이에 대응하는 출력 자료(종속 변수) $y$간의 관계를 정량화 하기 위한 작업이다. 회귀 분석에는 결정론적 모형(Deterministic Model)과 확률적 모형(Probabilistic Model)이 있다. 결정론적 모형은 단순히 독립 변수 $x$에 대해 대응하는 종속 변수 $y$를 계산하는 함수를 만드는 과정이다. $$ \hat{y} = f \left( x; { x_1, y_1, x_2, y_2, \cdots, x_N, y_N } \right) = f (x; D) = f(x) $$ 여기에서 $ { x_1, y_1, x_2, y_2, \cdots, x_N, y_N } $ 는 모형 계수 추정을 위한 과거 자료이다. 만약 함수가 선형 함수이면 선형 회귀 분석(linear regression analysis)이라고 한다. $$ \hat{y} = w_0 + w_1 x_1 + w_2 x_2 + \cdots + w_D x_D $$ Augmentation(증가 개념) 일반적으로 회귀 분석에 앞서 다음과 같이 상수항을 독립 변수에 포함하는 작업이 필요할 수 있다. 이를 feature augmentation이라고 한다. $$ x_i = \begin{bmatrix} x_{i1} \ x_{i2} \ \vdots \ x_{iD} \end{bmatrix} \rightarrow x_{i,a} = \begin{bmatrix} 1 \ x_{i1} \ x_{i2} \ \vdots \ x_{iD} \end{bmatrix} $$ augmentation을 하게 되면 모든 원소가 1인 벡터를 feature matrix 에 추가된다. $$ X = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1D} \ x_{21} & x_{22} & \cdots & x_{2D} \ \vdots & \vdots & \vdots & \vdots \ x_{N1} & x_{N2} & \cdots & x_{ND} \ \end{bmatrix} \rightarrow X_a = \begin{bmatrix} 1 & x_{11} & x_{12} & \cdots & x_{1D} \ 1 & x_{21} & x_{22} & \cdots & x_{2D} \ \vdots & \vdots & \vdots & \vdots & \vdots \ 1 & x_{N1} & x_{N2} & \cdots & x_{ND} \ \end{bmatrix} $$ augmentation을 하면 가중치 벡터(weight vector)도 차원이 증가하여 전체 수식이 다음과 같이 단순화 된다. $$ w_0 + w_1 x_1 + w_2 x_2 \begin{bmatrix} 1 & x_1 & x_2 \end{bmatrix} \begin{bmatrix} w_0 \ w_1 \ w_2 \end{bmatrix} = x_a^T w $$ End of explanation y = y.reshape(len(y), 1) w = np.dot(np.dot(np.linalg.inv(np.dot(X.T, X)), X.T), y) print("bias:", bias) print("coef:", coef) print("w:\n", w) w = np.linalg.lstsq(X, y)[0] w xx = np.linspace(np.min(X0) - 1, np.max(X0) + 1, 1000) XX = np.vstack([np.ones(xx.shape[0]), xx.T]).T yy = np.dot(XX, w) plt.scatter(X0, y) plt.plot(xx, yy, 'r-') plt.show() Explanation: OLS (Ordinary Least Squares) OLS는 가장 기본적인 결정론적 회귀 방법으로 Residual Sum of Squares(RSS)를 최소화하는 가중치 벡터 값을 미분을 통해 구한다. Residual 잔차 $$ e_i = {y}_i - x_i^T w $$ Stacking (Vector Form) $$ e = {y} - Xw $$ Residual Sum of Squares (RSS) $$\begin{eqnarray} \text{RSS} &=& \sum (y_i - \hat{y}_i)^2 \ &=& \sum e_i^2 = e^Te \ &=& (y - Xw)^T(y - Xw) \ &=& y^Ty - 2y^T X w + w^TX^TXw \end{eqnarray}$$ Minimize using Gradient $$ \dfrac{\partial \text{RSS}}{\partial w} = -2 X^T y + 2 X^TX w = 0 $$ $$ X^TX w = X^T y $$ $$ w = (X^TX)^{-1} X^T y $$ 여기에서 그레디언트를 나타내는 다음 식을 Normal equation 이라고 한다. $$ X^T y - X^TX w = 0 $$ Normal equation 에서 잔차에 대한 다음 특성을 알 수 있다. $$ X^T (y - X w ) = X^T e = 0 $$ bias는 상수항이자 y절편. 사이킷은 내부에 있고 stats모델에서는 명령어 하나 불러야 한다? 
End of explanation from sklearn.datasets import load_diabetes diabetes = load_diabetes() dfX_diabetes = pd.DataFrame(diabetes.data, columns=["X%d" % (i+1) for i in range(np.shape(diabetes.data)[1])]) dfy_diabetes = pd.DataFrame(diabetes.target, columns=["target"]) df_diabetes0 = pd.concat([dfX_diabetes, dfy_diabetes], axis=1) df_diabetes0.tail(3) from sklearn.linear_model import LinearRegression model_diabets = LinearRegression().fit(diabetes.data, diabetes.target) print(model_diabets.coef_) print(model_diabets.intercept_) predictions = model_diabets.predict(diabetes.data) plt.scatter(diabetes.target, predictions) plt.xlabel("prediction") plt.ylabel("target") plt.show() mean_abs_error = (np.abs(((diabetes.target - predictions) / diabetes.target)*100)).mean() print("MAE: %.2f%%" % (mean_abs_error)) sk.metrics.median_absolute_error(diabetes.target, predictions) sk.metrics.mean_squared_error(diabetes.target, predictions) Explanation: scikit-learn 패키지를 사용한 선형 회귀 분석 sklearn 패키지를 사용하여 선형 회귀 분석을 하는 경우에는 linear_model 서브 패키지의 LinearRegression 클래스를 사용한다. http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html 입력 인수 fit_intercept : 불리언, 옵션 상수상 추가 여부 normalize : 불리언, 옵션 회귀 분석전에 정규화 여부 속성 coef_ : 추정된 가중치 벡터 intercept_ : 추정된 상수항 Diabetes Regression End of explanation from sklearn.datasets import load_boston boston = load_boston() dfX_boston = pd.DataFrame(boston.data, columns=boston.feature_names) dfy_boston = pd.DataFrame(boston.target, columns=["MEDV"]) df_boston0 = pd.concat([dfX_boston, dfy_boston], axis=1) df_boston0.tail(3) model_boston = LinearRegression().fit(boston.data, boston.target) print(model_boston.coef_) print(model_boston.intercept_) predictions = model_boston.predict(boston.data) plt.scatter(predictions, boston.target) plt.xlabel("prediction") plt.ylabel("target") plt.show() mean_abs_error = (np.abs(((boston.target - predictions) / boston.target)*100)).mean() print("MAE: %.2f%%" % (mean_abs_error)) sk.metrics.median_absolute_error(boston.target, predictions) sk.metrics.mean_squared_error(boston.target, predictions) Explanation: Boston Housing Price End of explanation df_diabetes = sm.add_constant(df_diabetes0) df_diabetes.tail(3) model_diabets2 = sm.OLS(df_diabetes.ix[:, -1], df_diabetes.ix[:, :-1]) result_diabetes2 = model_diabets2.fit() result_diabetes2 Explanation: statsmodels 를 사용한 선형 회귀 분석 실제로는 선형회귀분석의 경우 이 모델을 사용한다. statsmodels 패키지에서는 OLS 클래스를 사용하여 선형 회귀 분석을 실시한다. http://www.statsmodels.org/dev/generated/statsmodels.regression.linear_model.OLS.html statsmodels.regression.linear_model.OLS(endog, exog=None) 입력 인수 endog : 종속 변수. 1차원 배열 exog : 독립 변수, 2차원 배열. statsmodels 의 OLS 클래스는 자동으로 상수항을 만들어주지 않기 때문에 사용자가 add_constant 명령으로 상수항을 추가해야 한다. 모형 객체가 생성되면 fit, predict 메서드를 사용하여 추정 및 예측을 실시한다. 예측 결과는 RegressionResults 클래스 객체로 출력되면 summary 메서드로 결과 보고서를 볼 수 있다. End of explanation print(result_diabetes2.summary()) df_boston = sm.add_constant(df_boston0) model_boston2 = sm.OLS(df_boston.ix[:, -1], df_boston.ix[:, :-1]) result_boston2 = model_boston2.fit() print(result_boston2.summary()) Explanation: DF는 ndarray의 리스트 리스트 of 벡터 ndarray는 다 된다 list of list는 리스트 안에 리스트 Dep. Variable은 우리가 구할 값 target은 라벨 No. Observations은 샘플수 Df Model은 parameter-1 std err는 coef의 +- err 수치 가장 먼저 P>|t| 이거부터. 이게 중요. 0인지 아닌지. 0이면 살리고 아니면 죽일 가능성이 높다. Prob(Omnibus) = 0.471이면 그냥 정규분포다. Cond. No가 10000 이하면 괜찮아 그 다음 봐야 할 것은 coef. coef_ : 추정된 가중치 벡터. ‘-’이면 악영향 Result가 별도로 저장되는 것은 stats 모델의 특징. 
사이킷은 아니야 End of explanation dir(result_boston2) Explanation: RegressionResults 클래스는 분석 결과를 다양한 속성에 저장해주므로 추후 사용자가 선택하여 활용할 수 있다. End of explanation sm.graphics.plot_fit(result_boston2, "CRIM") plt.show() Explanation: statsmodel는 다양한 회귀 분석 결과 플롯도 제공한다. plot_fit(results, exog_idx) Plot fit against one regressor. abline_plot([intercept, ...]) Plots a line given an intercept and slope. influence_plot(results[, ...]) Plot of influence in regression. plot_leverage_resid2(results) Plots leverage statistics vs. plot_partregress(endog, ...) Plot partial regression for a single regressor. plot_ccpr(results, exog_idx) Plot CCPR against one regressor. plot_regress_exog(results, ...) Plot regression results against one regressor. End of explanation
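The notes at the top of this section mention holding data out to test the fitted function, but every score above is computed on the same data the model was fit on. Below is a minimal hold-out sketch (an addition, not part of the original notebook), assuming the diabetes dataset loaded above; note that in older scikit-learn versions train_test_split lives in sklearn.cross_validation rather than sklearn.model_selection.
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    diabetes.data, diabetes.target, test_size=0.2, random_state=0)
holdout_model = LinearRegression().fit(X_train, y_train)
print("train R^2:", holdout_model.score(X_train, y_train))  # fit quality on the data used for fitting
print("test R^2:", holdout_model.score(X_test, y_test))     # fit quality on the held-out samples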
8,226
Given the following text description, write Python code to implement the functionality described below step by step Description: STEM Introduction This notebook demonstrates how to do a basic STEM simulation using PyQSTEM with ASE. Step1: We create an orthorhombic unit cell of MoS2. The unit cell is repeated 3x3 times, in order to accomodate the size of the probe at all scan positions. We set a scan range that covers the central unit cell. Step2: We create a QSTEM object in STEM mode and set the atomic object. Step3: We build a (very bad) probe. Building the probe will also determine the resolution of the potential, when we build it. Step4: The potential is build and imported to python. Step5: We can view the extent of the potential using the .view() method of the PyQSTEM object. When the potential is build in this way, it is made to cover exactly the maximum probe extent. Step6: We add a couple of detectors and run qstem. Step7: After running we can extract the results from the detectors.
Python Code: from __future__ import print_function %matplotlib inline import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import make_axes_locatable import matplotlib as mpl from ase.io import read from pyqstem.util import atoms_plot from pyqstem import PyQSTEM from ase.build import mx2 mpl.rc('font',**{'size' : 13}) Explanation: STEM Introduction This notebook demonstrates how to do a basic STEM simulation using PyQSTEM with ASE. End of explanation atoms=mx2(formula='MoS2', kind='2H', a=3.18, thickness=3.19, size=(2, 2, 1), vacuum=2) cell=atoms.get_cell() cell[1,0]=0 atoms.set_cell(cell) atoms.wrap() # wrap atoms outside the unit cell atoms.center() # center the atoms in the unit cell atoms*=(3,3,1) scan_range=[[cell[0,0],2*cell[0,0],30], [cell[1,1],2*cell[1,1],30]] fig,ax=plt.subplots(figsize=(7,5)) atoms_plot(atoms,scan_range=scan_range,ax=ax,legend=True) Explanation: We create an orthorhombic unit cell of MoS2. The unit cell is repeated 3x3 times, in order to accomodate the size of the probe at all scan positions. We set a scan range that covers the central unit cell. End of explanation qstem = PyQSTEM('STEM') qstem.set_atoms(atoms) Explanation: We create a QSTEM object in STEM mode and set the atomic object. End of explanation resolution = (0.02,0.02) # resolution in x and y-direction [Angstrom] samples = (300,300) # samples in x and y-direction defocus = -50 # defocus [Angstrom] v0 = 300 # acceleration voltage [keV] alpha = 20 # convergence angle [mrad] astigmatism = 40 # astigmatism magnitude [Angstrom] astigmatism_angle = 100 # astigmatism angle [deg.] aberrations = {'a33': 3000, 'phi33': 120} # higher order aberrations [Angstrom] or [deg.] qstem.build_probe(v0,alpha,(300,300),resolution=(0.02,0.02),defocus=defocus,astig_mag=astigmatism, astig_angle=astigmatism_angle,aberrations=aberrations) wave=qstem.get_wave() wave.view(cmap='inferno') Explanation: We build a (very bad) probe. Building the probe will also determine the resolution of the potential, when we build it. End of explanation qstem.build_potential(5,scan_range=scan_range) potential=qstem.get_potential_or_transfunc() Explanation: The potential is build and imported to python. End of explanation fig,(ax1,ax2)=plt.subplots(1,2,figsize=(10,6)) qstem.view(ax=ax1) potential.view(ax=ax2,cmap='inferno',method='real') Explanation: We can view the extent of the potential using the .view() method of the PyQSTEM object. When the potential is build in this way, it is made to cover exactly the maximum probe extent. End of explanation detector1_radii=(70,200) # inner and outer radius of detector 1 detector2_radii=(0,70) # inner and outer radius of detector 2 qstem.add_detector('detector1',detector1_radii) qstem.add_detector('detector2',detector2_radii) qstem.run() Explanation: We add a couple of detectors and run qstem. 
End of explanation img1=np.array(qstem.read_detector('detector1')) img2=np.array(qstem.read_detector('detector2')) img1=np.tile(img1,(2,2)) img2=np.tile(img2,(2,2)) extent=[0,scan_range[0][1]*3-scan_range[0][0],0,scan_range[1][1]*3-scan_range[1][0]] fig,(ax1,ax2)=plt.subplots(1,2,figsize=(10,6)) ims1=ax1.imshow(img1.T,extent=extent,interpolation='nearest',cmap='gray') divider = make_axes_locatable(ax1) cax1 = divider.append_axes("right", size="5%", pad=0.05) plt.colorbar(ims1, cax=cax1) ax1.set_xlabel('x [Angstrom]') ax1.set_ylabel('y [Angstrom]') ims2=ax2.imshow(img2.T,extent=extent,interpolation='nearest',cmap='gray') divider = make_axes_locatable(ax2) cax2 = divider.append_axes("right", size="5%", pad=0.05) plt.colorbar(ims2, cax=cax2) ax2.set_xlabel('x [Angstrom]') ax2.set_ylabel('y [Angstrom]') plt.tight_layout() plt.show() Explanation: After running we can extract the results from the detectors. End of explanation
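As an optional post-processing sketch (not part of the original notebook), assuming the img1/img2 arrays and the extent list computed above, we can compare the two detector signals along a line through the scanned region; detector1, with its larger inner radius, behaves like an annular (dark-field-style) signal, while detector2 collects the inner (bright-field-like) part of the scattered intensity.
import numpy as np
import matplotlib.pyplot as plt

row = img1.shape[1] // 2                                   # middle scan row
x_positions = np.linspace(extent[0], extent[1], img1.shape[0])
plt.plot(x_positions, img1[:, row], label='detector1')
plt.plot(x_positions, img2[:, row], label='detector2')
plt.xlabel('x [Angstrom]')
plt.ylabel('integrated intensity')
plt.legend()
plt.show()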
8,227
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2018 The TensorFlow Authors. Step1: Regression Step2: The Auto MPG dataset The dataset is available from the UCI Machine Learning Repository. Get the data First download the dataset. Step3: Import it using pandas Step4: Clean the data The dataset contains a few unknown values. Step5: To keep this initial tutorial simple drop those rows. Step6: The "Origin" column is really categorical, not numeric. So convert that to a one-hot Step7: Split the data into train and test Now split the dataset into a training set and a test set. We will use the test set in the final evaluation of our model. Step8: Inspect the data Have a quick look at the joint distribution of a few pairs of columns from the training set. Step9: Also look at the overall statistics Step10: Split features from labels Separate the target value, or "label", from the features. This label is the value that you will train the model to predict. Step11: Normalize the data Look again at the train_stats block above and note how different the ranges of each feature are. It is good practice to normalize features that use different scales and ranges. Although the model might converge without feature normalization, it makes training more difficult, and it makes the resulting model dependent on the choice of units used in the input. Note Step12: This normalized data is what we will use to train the model. Caution Step13: Inspect the model Use the .summary method to print a simple description of the model Step14: Now try out the model. Take a batch of 10 examples from the training data and call model.predict on it. Step15: It seems to be working, and it produces a result of the expected shape and type. Train the model Train the model for 1000 epochs, and record the training and validation accuracy in the history object. Step16: Visualize the model's training progress using the stats stored in the history object. Step17: This graph shows little improvement, or even degradation in the validation error after about 100 epochs. Let's update the model.fit call to automatically stop training when the validation score doesn't improve. We'll use an EarlyStopping callback that tests a training condition for every epoch. If a set amount of epochs elapses without showing improvement, then automatically stop the training. You can learn more about this callback here. Step18: The graph shows that on the validation set, the average error is usually around +/- 2 MPG. Is this good? We'll leave that decision up to you. Let's see how well the model generalizes by using the test set, which we did not use when training the model. This tells us how well we can expect the model to predict when we use it in the real world. Step19: Make predictions Finally, predict MPG values using data in the testing set Step20: It looks like our model predicts reasonably well. Let's take a look at the error distribution.
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #@title MIT License # # Copyright (c) 2017 François Chollet # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the "Software"), # to deal in the Software without restriction, including without limitation # the rights to use, copy, modify, merge, publish, distribute, sublicense, # and/or sell copies of the Software, and to permit persons to whom the # Software is furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER # DEALINGS IN THE SOFTWARE. Explanation: Copyright 2018 The TensorFlow Authors. End of explanation # Use seaborn for pairplot !pip install seaborn import pathlib import matplotlib.pyplot as plt import pandas as pd import seaborn as sns import tensorflow.compat.v1 as tf from tensorflow import keras from tensorflow.keras import layers print(tf.__version__) Explanation: Regression: predict fuel efficiency <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r1/tutorials/keras/basic_regression.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r1/tutorials/keras/basic_regression.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table> Note: This is an archived TF1 notebook. These are configured to run in TF2's compatibility mode but will run in TF1 as well. To use TF1 in Colab, use the %tensorflow_version 1.x magic. In a regression problem, we aim to predict the output of a continuous value, like a price or a probability. Contrast this with a classification problem, where we aim to select a class from a list of classes (for example, where a picture contains an apple or an orange, recognizing which fruit is in the picture). This notebook uses the classic Auto MPG Dataset and builds a model to predict the fuel efficiency of late-1970s and early 1980s automobiles. To do this, we'll provide the model with a description of many automobiles from that time period. This description includes attributes like: cylinders, displacement, horsepower, and weight. This example uses the tf.keras API, see this guide for details. 
End of explanation dataset_path = keras.utils.get_file("auto-mpg.data", "http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data") dataset_path Explanation: The Auto MPG dataset The dataset is available from the UCI Machine Learning Repository. Get the data First download the dataset. End of explanation column_names = ['MPG','Cylinders','Displacement','Horsepower','Weight', 'Acceleration', 'Model Year', 'Origin'] raw_dataset = pd.read_csv(dataset_path, names=column_names, na_values = "?", comment='\t', sep=" ", skipinitialspace=True) dataset = raw_dataset.copy() dataset.tail() Explanation: Import it using pandas End of explanation dataset.isna().sum() Explanation: Clean the data The dataset contains a few unknown values. End of explanation dataset = dataset.dropna() Explanation: To keep this initial tutorial simple drop those rows. End of explanation origin = dataset.pop('Origin') dataset['USA'] = (origin == 1)*1.0 dataset['Europe'] = (origin == 2)*1.0 dataset['Japan'] = (origin == 3)*1.0 dataset.tail() Explanation: The "Origin" column is really categorical, not numeric. So convert that to a one-hot: End of explanation train_dataset = dataset.sample(frac=0.8,random_state=0) test_dataset = dataset.drop(train_dataset.index) Explanation: Split the data into train and test Now split the dataset into a training set and a test set. We will use the test set in the final evaluation of our model. End of explanation sns.pairplot(train_dataset[["MPG", "Cylinders", "Displacement", "Weight"]], diag_kind="kde") plt.show() Explanation: Inspect the data Have a quick look at the joint distribution of a few pairs of columns from the training set. End of explanation train_stats = train_dataset.describe() train_stats.pop("MPG") train_stats = train_stats.transpose() train_stats Explanation: Also look at the overall statistics: End of explanation train_labels = train_dataset.pop('MPG') test_labels = test_dataset.pop('MPG') Explanation: Split features from labels Separate the target value, or "label", from the features. This label is the value that you will train the model to predict. End of explanation def norm(x): return (x - train_stats['mean']) / train_stats['std'] normed_train_data = norm(train_dataset) normed_test_data = norm(test_dataset) Explanation: Normalize the data Look again at the train_stats block above and note how different the ranges of each feature are. It is good practice to normalize features that use different scales and ranges. Although the model might converge without feature normalization, it makes training more difficult, and it makes the resulting model dependent on the choice of units used in the input. Note: Although we intentionally generate these statistics from only the training dataset, these statistics will also be used to normalize the test dataset. We need to do that to project the test dataset into the same distribution that the model has been trained on. End of explanation def build_model(): model = keras.Sequential([ layers.Dense(64, activation=tf.nn.relu, input_shape=[len(train_dataset.keys())]), layers.Dense(64, activation=tf.nn.relu), layers.Dense(1) ]) optimizer = tf.keras.optimizers.RMSprop(0.001) model.compile(loss='mean_squared_error', optimizer=optimizer, metrics=['mean_absolute_error', 'mean_squared_error']) return model model = build_model() Explanation: This normalized data is what we will use to train the model. 
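An equivalent route (a sketch, not used in the tutorial) is to let scikit-learn hold the training statistics; the numbers can differ very slightly from the pandas helper above because of the standard-deviation convention, but the idea is the same: fit the scaler on the training set only and reuse it for any other data.
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler().fit(train_dataset)        # statistics come from the training set only
normed_train_alt = scaler.transform(train_dataset)
normed_test_alt = scaler.transform(test_dataset)    # reuse the same statistics for the test set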
Caution: The statistics used to normalize the inputs here (mean and standard deviation) need to be applied to any other data that is fed to the model, along with the one-hot encoding that we did earlier. That includes the test set as well as live data when the model is used in production. The model Build the model Let's build our model. Here, we'll use a Sequential model with two densely connected hidden layers, and an output layer that returns a single, continuous value. The model building steps are wrapped in a function, build_model, since we'll create a second model, later on. End of explanation model.summary() Explanation: Inspect the model Use the .summary method to print a simple description of the model End of explanation example_batch = normed_train_data[:10] example_result = model.predict(example_batch) example_result Explanation: Now try out the model. Take a batch of 10 examples from the training data and call model.predict on it. End of explanation # Display training progress by printing a single dot for each completed epoch class PrintDot(keras.callbacks.Callback): def on_epoch_end(self, epoch, logs): if epoch % 100 == 0: print('') print('.', end='') EPOCHS = 1000 history = model.fit( normed_train_data, train_labels, epochs=EPOCHS, validation_split = 0.2, verbose=0, callbacks=[PrintDot()]) Explanation: It seems to be working, and it produces a result of the expected shape and type. Train the model Train the model for 1000 epochs, and record the training and validation accuracy in the history object. End of explanation hist = pd.DataFrame(history.history) hist['epoch'] = history.epoch hist.tail() def plot_history(history): hist = pd.DataFrame(history.history) hist['epoch'] = history.epoch plt.figure() plt.xlabel('Epoch') plt.ylabel('Mean Abs Error [MPG]') plt.plot(hist['epoch'], hist['mean_absolute_error'], label='Train Error') plt.plot(hist['epoch'], hist['val_mean_absolute_error'], label = 'Val Error') plt.ylim([0,5]) plt.legend() plt.figure() plt.xlabel('Epoch') plt.ylabel('Mean Square Error [$MPG^2$]') plt.plot(hist['epoch'], hist['mean_squared_error'], label='Train Error') plt.plot(hist['epoch'], hist['val_mean_squared_error'], label = 'Val Error') plt.ylim([0,20]) plt.legend() plt.show() plot_history(history) Explanation: Visualize the model's training progress using the stats stored in the history object. End of explanation model = build_model() # The patience parameter is the amount of epochs to check for improvement early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10) history = model.fit(normed_train_data, train_labels, epochs=EPOCHS, validation_split = 0.2, verbose=0, callbacks=[early_stop, PrintDot()]) plot_history(history) Explanation: This graph shows little improvement, or even degradation in the validation error after about 100 epochs. Let's update the model.fit call to automatically stop training when the validation score doesn't improve. We'll use an EarlyStopping callback that tests a training condition for every epoch. If a set amount of epochs elapses without showing improvement, then automatically stop the training. You can learn more about this callback here. End of explanation loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=2) print("Testing set Mean Abs Error: {:5.2f} MPG".format(mae)) Explanation: The graph shows that on the validation set, the average error is usually around +/- 2 MPG. Is this good? We'll leave that decision up to you. 
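A small optional refinement of the callback above (a sketch; it assumes the installed Keras/TensorFlow build is recent enough to support the argument): EarlyStopping can also roll the model back to the best weights it saw, instead of keeping the weights from the final, slightly degraded epochs.
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10,
                                           restore_best_weights=True)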
Let's see how well the model generalizes by using the test set, which we did not use when training the model. This tells us how well we can expect the model to predict when we use it in the real world. End of explanation test_predictions = model.predict(normed_test_data).flatten() plt.scatter(test_labels, test_predictions) plt.xlabel('True Values [MPG]') plt.ylabel('Predictions [MPG]') plt.axis('equal') plt.axis('square') plt.xlim([0,plt.xlim()[1]]) plt.ylim([0,plt.ylim()[1]]) _ = plt.plot([-100, 100], [-100, 100]) plt.show() Explanation: Make predictions Finally, predict MPG values using data in the testing set: End of explanation error = test_predictions - test_labels plt.hist(error, bins = 25) plt.xlabel("Prediction Error [MPG]") _ = plt.ylabel("Count") plt.show() Explanation: It looks like our model predicts reasonably well. Let's take a look at the error distribution. End of explanation
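As a closing sketch (an addition, not part of the tutorial), here is how a single new, hypothetical car would be scored. The raw feature values below are made up; the point is that the same train_stats-based norm() helper must be applied before calling predict, and the columns must stay in the training order.
new_car = pd.DataFrame([{
    'Cylinders': 4, 'Displacement': 140.0, 'Horsepower': 90.0, 'Weight': 2400.0,
    'Acceleration': 15.5, 'Model Year': 80, 'USA': 1.0, 'Europe': 0.0, 'Japan': 0.0}])
new_car = new_car[train_dataset.columns]      # keep the training column order
print(model.predict(norm(new_car)))           # normalize with the training statistics, then predict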
8,228
Given the following text description, write Python code to implement the functionality described below step by step Description: Algorithms Exercise 2 Imports Step2: Peak finding Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should Step3: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following
Python Code: %matplotlib inline from matplotlib import pyplot as plt import seaborn as sns import numpy as np Explanation: Algorithms Exercise 2 Imports End of explanation s=[] i=0 def find_peaks(a): Find the indices of the local maxima in a sequence. # YOUR CODE HERE if a[0]>a[1]: #if the first number is bigger than the second number s.append(0) #add 0 as a peak for x in range (len(a)-1): # if a[x]>a[x-1] and a[x]>a[x+1] and x!=0: #if the current number is bigger than the one before it and the one after it # print (x) s.append(x) #add it to the list of peaks if a[-1]>a[-2]: #if the last number is bigger than the second to last one it is a peak # print (len(a)-1) s.append(len(a)-1) #add the location of the last number to the list of locations return s #below here is used for testing, not sure why assert tests are not working since my tests do # p2 = find_peaks(np.array([0,1,2,3])) # p2 p1 = find_peaks([2,0,1,0,2,0,1]) p1 # p3 = find_peaks([3,2,1,0]) # p3 # np.shape(p1) # y=np.array([0,2,4,6]) # np.shape(y) # print(s) p1 = find_peaks([2,0,1,0,2,0,1]) assert np.allclose(p1, np.array([0,2,4,6])) p2 = find_peaks(np.array([0,1,2,3])) assert np.allclose(p2, np.array([3])) p3 = find_peaks([3,2,1,0]) assert np.allclose(p3, np.array([0])) Explanation: Peak finding Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should: Properly handle local maxima at the endpoints of the input array. Return a Numpy array of integer indices. Handle any Python iterable as input. End of explanation from sympy import pi, N pi_digits_str = str(N(pi, 10001))[2:] # YOUR CODE HERE # num=[] # pi_digits_str[0] # for i in range(len(pi_digits_str)): # num[i]=pi_digits_str[i] f=plt.figure(figsize=(12,8)) plt.title("Histogram of Distances between Peaks in Pi") plt.ylabel("Number of Occurences") plt.xlabel("Distance from Previous Peak") plt.tick_params(direction='out') plt.box(True) plt.grid(False) test=np.array(list(pi_digits_str),dtype=np.int) peaks=find_peaks(test) dist=np.diff(peaks) plt.hist(dist,bins=range(15)); assert True # use this for grading the pi digits histogram Explanation: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following: Convert that string to a Numpy array of integers. Find the indices of the local maxima in the digits of $\pi$. Use np.diff to find the distances between consequtive local maxima. Visualize that distribution using an appropriately customized histogram. End of explanation
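One note on the implementation above, plus a hedged alternative sketch (not the exercise author's solution): find_peaks appends into the module-level list s, so a second call still contains the first call's peaks, which is exactly why the later asserts fail. The same logic with a local list:
import numpy as np

def find_peaks_local(a):
    # Same endpoint and interior-maximum rules as above, but with a fresh list per call.
    a = np.asarray(a)
    peaks = []
    if len(a) > 1 and a[0] > a[1]:
        peaks.append(0)
    for i in range(1, len(a) - 1):
        if a[i] > a[i - 1] and a[i] > a[i + 1]:
            peaks.append(i)
    if len(a) > 1 and a[-1] > a[-2]:
        peaks.append(len(a) - 1)
    return np.array(peaks, dtype=int)

print(find_peaks_local([2, 0, 1, 0, 2, 0, 1]))   # expected [0 2 4 6]
print(find_peaks_local(np.array([0, 1, 2, 3])))  # expected [3]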
8,229
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Toplevel MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Flux Correction 3. Key Properties --&gt; Genealogy 4. Key Properties --&gt; Software Properties 5. Key Properties --&gt; Coupling 6. Key Properties --&gt; Tuning Applied 7. Key Properties --&gt; Conservation --&gt; Heat 8. Key Properties --&gt; Conservation --&gt; Fresh Water 9. Key Properties --&gt; Conservation --&gt; Salt 10. Key Properties --&gt; Conservation --&gt; Momentum 11. Radiative Forcings 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect 24. Radiative Forcings --&gt; Aerosols --&gt; Dust 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt 28. Radiative Forcings --&gt; Other --&gt; Land Use 29. Radiative Forcings --&gt; Other --&gt; Solar 1. Key Properties Key properties of the model 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 2. Key Properties --&gt; Flux Correction Flux correction properties of the model 2.1. Details Is Required Step7: 3. Key Properties --&gt; Genealogy Genealogy and history of the model 3.1. Year Released Is Required Step8: 3.2. CMIP3 Parent Is Required Step9: 3.3. CMIP5 Parent Is Required Step10: 3.4. Previous Name Is Required Step11: 4. Key Properties --&gt; Software Properties Software properties of model 4.1. Repository Is Required Step12: 4.2. Code Version Is Required Step13: 4.3. Code Languages Is Required Step14: 4.4. Components Structure Is Required Step15: 4.5. Coupler Is Required Step16: 5. Key Properties --&gt; Coupling ** 5.1. Overview Is Required Step17: 5.2. Atmosphere Double Flux Is Required Step18: 5.3. Atmosphere Fluxes Calculation Grid Is Required Step19: 5.4. Atmosphere Relative Winds Is Required Step20: 6. Key Properties --&gt; Tuning Applied Tuning methodology for model 6.1. Description Is Required Step21: 6.2. Global Mean Metrics Used Is Required Step22: 6.3. Regional Metrics Used Is Required Step23: 6.4. Trend Metrics Used Is Required Step24: 6.5. Energy Balance Is Required Step25: 6.6. Fresh Water Balance Is Required Step26: 7. Key Properties --&gt; Conservation --&gt; Heat Global heat convervation properties of the model 7.1. Global Is Required Step27: 7.2. Atmos Ocean Interface Is Required Step28: 7.3. Atmos Land Interface Is Required Step29: 7.4. Atmos Sea-ice Interface Is Required Step30: 7.5. Ocean Seaice Interface Is Required Step31: 7.6. 
Land Ocean Interface Is Required Step32: 8. Key Properties --&gt; Conservation --&gt; Fresh Water Global fresh water convervation properties of the model 8.1. Global Is Required Step33: 8.2. Atmos Ocean Interface Is Required Step34: 8.3. Atmos Land Interface Is Required Step35: 8.4. Atmos Sea-ice Interface Is Required Step36: 8.5. Ocean Seaice Interface Is Required Step37: 8.6. Runoff Is Required Step38: 8.7. Iceberg Calving Is Required Step39: 8.8. Endoreic Basins Is Required Step40: 8.9. Snow Accumulation Is Required Step41: 9. Key Properties --&gt; Conservation --&gt; Salt Global salt convervation properties of the model 9.1. Ocean Seaice Interface Is Required Step42: 10. Key Properties --&gt; Conservation --&gt; Momentum Global momentum convervation properties of the model 10.1. Details Is Required Step43: 11. Radiative Forcings Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5) 11.1. Overview Is Required Step44: 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 Carbon dioxide forcing 12.1. Provision Is Required Step45: 12.2. Additional Information Is Required Step46: 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 Methane forcing 13.1. Provision Is Required Step47: 13.2. Additional Information Is Required Step48: 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O Nitrous oxide forcing 14.1. Provision Is Required Step49: 14.2. Additional Information Is Required Step50: 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 Troposheric ozone forcing 15.1. Provision Is Required Step51: 15.2. Additional Information Is Required Step52: 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 Stratospheric ozone forcing 16.1. Provision Is Required Step53: 16.2. Additional Information Is Required Step54: 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC Ozone-depleting and non-ozone-depleting fluorinated gases forcing 17.1. Provision Is Required Step55: 17.2. Equivalence Concentration Is Required Step56: 17.3. Additional Information Is Required Step57: 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 SO4 aerosol forcing 18.1. Provision Is Required Step58: 18.2. Additional Information Is Required Step59: 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon Black carbon aerosol forcing 19.1. Provision Is Required Step60: 19.2. Additional Information Is Required Step61: 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon Organic carbon aerosol forcing 20.1. Provision Is Required Step62: 20.2. Additional Information Is Required Step63: 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate Nitrate forcing 21.1. Provision Is Required Step64: 21.2. Additional Information Is Required Step65: 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect Cloud albedo effect forcing (RFaci) 22.1. Provision Is Required Step66: 22.2. Aerosol Effect On Ice Clouds Is Required Step67: 22.3. Additional Information Is Required Step68: 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect Cloud lifetime effect forcing (ERFaci) 23.1. Provision Is Required Step69: 23.2. Aerosol Effect On Ice Clouds Is Required Step70: 23.3. RFaci From Sulfate Only Is Required Step71: 23.4. Additional Information Is Required Step72: 24. Radiative Forcings --&gt; Aerosols --&gt; Dust Dust forcing 24.1. Provision Is Required Step73: 24.2. Additional Information Is Required Step74: 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic Tropospheric volcanic forcing 25.1. 
Provision Is Required Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation Is Required Step76: 25.3. Future Explosive Volcanic Aerosol Implementation Is Required Step77: 25.4. Additional Information Is Required Step78: 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic Stratospheric volcanic forcing 26.1. Provision Is Required Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation Is Required Step80: 26.3. Future Explosive Volcanic Aerosol Implementation Is Required Step81: 26.4. Additional Information Is Required Step82: 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt Sea salt forcing 27.1. Provision Is Required Step83: 27.2. Additional Information Is Required Step84: 28. Radiative Forcings --&gt; Other --&gt; Land Use Land use forcing 28.1. Provision Is Required Step85: 28.2. Crop Change Only Is Required Step86: 28.3. Additional Information Is Required Step87: 29. Radiative Forcings --&gt; Other --&gt; Solar Solar forcing 29.1. Provision Is Required Step88: 29.2. Additional Information Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'nerc', 'sandbox-1', 'toplevel') Explanation: ES-DOC CMIP6 Model Properties - Toplevel MIP Era: CMIP6 Institute: NERC Source ID: SANDBOX-1 Sub-Topics: Radiative Forcings. Properties: 85 (42 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:27 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Flux Correction 3. Key Properties --&gt; Genealogy 4. Key Properties --&gt; Software Properties 5. Key Properties --&gt; Coupling 6. Key Properties --&gt; Tuning Applied 7. Key Properties --&gt; Conservation --&gt; Heat 8. Key Properties --&gt; Conservation --&gt; Fresh Water 9. Key Properties --&gt; Conservation --&gt; Salt 10. Key Properties --&gt; Conservation --&gt; Momentum 11. Radiative Forcings 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect 24. Radiative Forcings --&gt; Aerosols --&gt; Dust 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt 28. Radiative Forcings --&gt; Other --&gt; Land Use 29. Radiative Forcings --&gt; Other --&gt; Solar 1. Key Properties Key properties of the model 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top level overview of coupled model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of coupled model. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Flux Correction Flux correction properties of the model 2.1. Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how flux corrections are applied in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Genealogy Genealogy and history of the model 3.1. Year Released Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Year the model was released End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.2. CMIP3 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP3 parent if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. CMIP5 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP5 parent if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.4. Previous Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Previously known as End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Software Properties Software properties of model 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.4. 
Components Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OASIS" # "OASIS3-MCT" # "ESMF" # "NUOPC" # "Bespoke" # "Unknown" # "None" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 4.5. Coupler Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Overarching coupling framework for model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Coupling ** 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of coupling in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.2. Atmosphere Double Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Atmosphere grid" # "Ocean grid" # "Specific coupler grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 5.3. Atmosphere Fluxes Calculation Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Where are the air-sea fluxes calculated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.4. Atmosphere Relative Winds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Tuning Applied Tuning methodology for model 6.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics/diagnostics of the global mean state used in tuning model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics/diagnostics used in tuning model/component (such as 20th century) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.5. Energy Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.6. Fresh Water Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Conservation --&gt; Heat Global heat convervation properties of the model 7.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved globally End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. 
Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved at the atmosphere/land coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.6. Land Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the land/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation --&gt; Fresh Water Global fresh water convervation properties of the model 8.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh_water is conserved globally End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh water is conserved at the atmosphere/land coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.6. Runoff Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how runoff is distributed and conserved End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.7. Iceberg Calving Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how iceberg calving is modeled and conserved End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.8. Endoreic Basins Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how endoreic basins (no ocean access) are treated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.9. Snow Accumulation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how snow accumulation over land and over sea-ice is treated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Key Properties --&gt; Conservation --&gt; Salt Global salt convervation properties of the model 9.1. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how salt is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10. Key Properties --&gt; Conservation --&gt; Momentum Global momentum convervation properties of the model 10.1. Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how momentum is conserved in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11. Radiative Forcings Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5) 11.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of radiative forcings (GHG and aerosols) implementation in model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 Carbon dioxide forcing 12.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 Methane forcing 13.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O Nitrous oxide forcing 14.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.2. 
Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 Troposheric ozone forcing 15.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 Stratospheric ozone forcing 16.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC Ozone-depleting and non-ozone-depleting fluorinated gases forcing 17.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "Option 1" # "Option 2" # "Option 3" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.2. Equivalence Concentration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of any equivalence concentrations used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 SO4 aerosol forcing 18.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon Black carbon aerosol forcing 19.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon Organic carbon aerosol forcing 20.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate Nitrate forcing 21.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect Cloud albedo effect forcing (RFaci) 22.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 22.2. Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect Cloud lifetime effect forcing (ERFaci) 23.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.2. Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.3. RFaci From Sulfate Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative forcing from aerosol cloud interactions from sulfate aerosol only? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24. Radiative Forcings --&gt; Aerosols --&gt; Dust Dust forcing 24.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 24.2. 
Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic Tropospheric volcanic forcing 25.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 25.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic Stratospheric volcanic forcing 26.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt Sea salt forcing 27.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 28. Radiative Forcings --&gt; Other --&gt; Land Use Land use forcing 28.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 28.2. Crop Change Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Land use change represented via crop change only? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "irradiance" # "proton" # "electron" # "cosmic ray" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 29. Radiative Forcings --&gt; Other --&gt; Solar Solar forcing 29.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How solar forcing is provided End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation
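The provision / additional-information pairs above repeat with the same shape for every forcing agent, so a worked example may help. The sketch below is illustrative only: it assumes the DOC.set_id / DOC.set_value calls behave exactly as in the cells above, and the chosen ENUM code and free-text note are placeholders rather than the documented model's actual configuration.
# Illustrative sketch - completing one provision / additional_information pair
# (placeholder values; choose the code that matches how the agent is really provided)
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
DOC.set_value("Y")  # one of the listed ENUM codes: "N/A", "M", "Y", "E", "ES", "C", or "Other: [Please specify]"
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
DOC.set_value("Placeholder note, e.g. citation or dataset used.")  # optional free-text STRING (cardinality 0.1)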
8,230
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Ocean MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Model Family Is Required Step7: 1.4. Basic Approximations Is Required Step8: 1.5. Prognostic Variables Is Required Step9: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required Step10: 2.2. Eos Functional Temp Is Required Step11: 2.3. Eos Functional Salt Is Required Step12: 2.4. Eos Functional Depth Is Required Step13: 2.5. Ocean Freezing Point Is Required Step14: 2.6. Ocean Specific Heat Is Required Step15: 2.7. Ocean Reference Density Is Required Step16: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required Step17: 3.2. Type Is Required Step18: 3.3. Ocean Smoothing Is Required Step19: 3.4. Source Is Required Step20: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required Step21: 4.2. River Mouth Is Required Step22: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required Step23: 5.2. Code Version Is Required Step24: 5.3. Code Languages Is Required Step25: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required Step26: 6.2. 
Canonical Horizontal Resolution Is Required Step27: 6.3. Range Horizontal Resolution Is Required Step28: 6.4. Number Of Horizontal Gridpoints Is Required Step29: 6.5. Number Of Vertical Levels Is Required Step30: 6.6. Is Adaptive Grid Is Required Step31: 6.7. Thickness Level 1 Is Required Step32: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required Step33: 7.2. Global Mean Metrics Used Is Required Step34: 7.3. Regional Metrics Used Is Required Step35: 7.4. Trend Metrics Used Is Required Step36: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required Step37: 8.2. Scheme Is Required Step38: 8.3. Consistency Properties Is Required Step39: 8.4. Corrected Conserved Prognostic Variables Is Required Step40: 8.5. Was Flux Correction Used Is Required Step41: 9. Grid Ocean grid 9.1. Overview Is Required Step42: 10. Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required Step43: 10.2. Partial Steps Is Required Step44: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required Step45: 11.2. Staggering Is Required Step46: 11.3. Scheme Is Required Step47: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required Step48: 12.2. Diurnal Cycle Is Required Step49: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. Scheme Is Required Step50: 13.2. Time Step Is Required Step51: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required Step52: 14.2. Scheme Is Required Step53: 14.3. Time Step Is Required Step54: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required Step55: 15.2. Time Step Is Required Step56: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required Step57: 17. Advection Ocean advection 17.1. Overview Is Required Step58: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required Step59: 18.2. Scheme Name Is Required Step60: 18.3. ALE Is Required Step61: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required Step62: 19.2. Flux Limiter Is Required Step63: 19.3. Effective Order Is Required Step64: 19.4. Name Is Required Step65: 19.5. Passive Tracers Is Required Step66: 19.6. Passive Tracers Advection Is Required Step67: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required Step68: 20.2. Flux Limiter Is Required Step69: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required Step70: 21.2. Scheme Is Required Step71: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required Step72: 22.2. Order Is Required Step73: 22.3. Discretisation Is Required Step74: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean 23.1. Type Is Required Step75: 23.2. Constant Coefficient Is Required Step76: 23.3. Variable Coefficient Is Required Step77: 23.4. Coeff Background Is Required Step78: 23.5. Coeff Backscatter Is Required Step79: 24. 
Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1. Mesoscale Closure Is Required Step80: 24.2. Submesoscale Mixing Is Required Step81: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required Step82: 25.2. Order Is Required Step83: 25.3. Discretisation Is Required Step84: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required Step85: 26.2. Constant Coefficient Is Required Step86: 26.3. Variable Coefficient Is Required Step87: 26.4. Coeff Background Is Required Step88: 26.5. Coeff Backscatter Is Required Step89: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required Step90: 27.2. Constant Val Is Required Step91: 27.3. Flux Type Is Required Step92: 27.4. Added Diffusivity Is Required Step93: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required Step94: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required Step95: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required Step96: 30.2. Closure Order Is Required Step97: 30.3. Constant Is Required Step98: 30.4. Background Is Required Step99: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required Step100: 31.2. Closure Order Is Required Step101: 31.3. Constant Is Required Step102: 31.4. Background Is Required Step103: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required Step104: 32.2. Tide Induced Mixing Is Required Step105: 32.3. Double Diffusion Is Required Step106: 32.4. Shear Mixing Is Required Step107: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required Step108: 33.2. Constant Is Required Step109: 33.3. Profile Is Required Step110: 33.4. Background Is Required Step111: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required Step112: 34.2. Constant Is Required Step113: 34.3. Profile Is Required Step114: 34.4. Background Is Required Step115: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required Step116: 35.2. Scheme Is Required Step117: 35.3. Embeded Seaice Is Required Step118: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required Step119: 36.2. Type Of Bbl Is Required Step120: 36.3. Lateral Mixing Coef Is Required Step121: 36.4. Sill Overflow Is Required Step122: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required Step123: 37.2. Surface Pressure Is Required Step124: 37.3. Momentum Flux Correction Is Required Step125: 37.4. Tracers Flux Correction Is Required Step126: 37.5. Wave Effects Is Required Step127: 37.6. River Runoff Budget Is Required Step128: 37.7. Geothermal Heating Is Required Step129: 38. 
Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required Step130: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required Step131: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required Step132: 40.2. Ocean Colour Is Required Step133: 40.3. Extinction Depth Is Required Step134: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required Step135: 41.2. From Sea Ice Is Required Step136: 41.3. Forced Mode Restoring Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'gfdl-cm4', 'ocean') Explanation: ES-DOC CMIP6 Model Properties - Ocean MIP Era: CMIP6 Institute: NOAA-GFDL Source ID: GFDL-CM4 Topic: Ocean Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. Properties: 133 (101 required) Model descriptions: Model description details Initialized From: CMIP5:GFDL-CM3 Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-20 15:02:34 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean model. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean model code (NEMO 3.6, MOM 5.0,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OGCM" # "slab ocean" # "mixed layer ocean" # "Other: [Please specify]" DOC.set_value("OGCM") Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Primitive equations" # "Non-hydrostatic" # "Boussinesq" # "Other: [Please specify]" DOC.set_value("Boussinesq") DOC.set_value("Primitive equations") Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the ocean. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # "Salinity" # "U-velocity" # "V-velocity" # "W-velocity" # "SSH" # "Other: [Please specify]" DOC.set_value("Potential temperature") DOC.set_value("SSH") DOC.set_value("Salinity") DOC.set_value("U-velocity") DOC.set_value("V-velocity") DOC.set_value("W-velocity") Explanation: 1.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the ocean component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Wright, 1997" # "Mc Dougall et al." # "Jackett et al. 2006" # "TEOS 2010" # "Other: [Please specify]" DOC.set_value("Jackett et al. 2006") Explanation: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # TODO - please enter value(s) Explanation: 2.2. Eos Functional Temp Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Temperature used in EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Practical salinity Sp" # "Absolute salinity Sa" # TODO - please enter value(s) Explanation: 2.3. Eos Functional Salt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Salinity used in EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pressure (dbars)" # "Depth (meters)" # TODO - please enter value(s) Explanation: 2.4. Eos Functional Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Depth or pressure used in EOS for sea water ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS 2010" # "Other: [Please specify]" DOC.set_value("Other: varying") Explanation: 2.5. Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.6. Ocean Specific Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specific heat in ocean (cpocean) in J/(kg K) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.7. Ocean Reference Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boussinesq reference density (rhozero) in kg / m3 End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Present day" # "21000 years BP" # "6000 years BP" # "LGM" # "Pliocene" # "Other: [Please specify]" DOC.set_value("Present day") Explanation: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date of bathymetry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False DOC.set_value(True) Explanation: 3.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the bathymetry fixed in time in the ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Ocean Smoothing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any smoothing or hand editing of bathymetry in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.source') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.4. Source Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe source of bathymetry in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") DOC.set_value("Yes") Explanation: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how isolated seas is performed End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") DOC.set_value("Mix over top 40 m") Explanation: 4.2. River Mouth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how river mouth mixing or estuaries specific treatment is performed End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.4. Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.5. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.6. Is Adaptive Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Default is False. Set true if grid resolution changes during execution. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.7. Thickness Level 1 Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Thickness of first surface ocean level (in meters) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Brief description of conservation methodology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Enstrophy" # "Salt" # "Volume of ocean" # "Momentum" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in the ocean by the numerical schemes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Consistency Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Corrected Conserved Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Set of variables which are conserved by more than the numerical scheme alone. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.5. Was Flux Correction Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Does conservation involve flux correction ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Grid Ocean grid 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of grid in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Z-coordinate" # "Z*-coordinate" # "S-coordinate" # "Isopycnic - sigma 0" # "Isopycnic - sigma 2" # "Isopycnic - sigma 4" # "Isopycnic - other" # "Hybrid / Z+S" # "Hybrid / Z+isopycnic" # "Hybrid / other" # "Pressure referenced (P)" # "P*" # "Z**" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. 
Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical coordinates in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 10.2. Partial Steps Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Using partial steps with Z or Z vertical coordinate in ocean ?* End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Lat-lon" # "Rotated north pole" # "Two north poles (ORCA-style)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa E-grid" # "N/a" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Staggering Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal grid staggering type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite difference" # "Finite volumes" # "Finite elements" # "Unstructured grid" # "Other: [Please specify]" DOC.set_value("Other: finite differences") Explanation: 11.3. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of time stepping in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Via coupling" # "Specific treatment" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.2. Diurnal Cycle Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Diurnal cycle type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" DOC.set_value("Forward-backward") Explanation: 13. 
Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time stepping scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Preconditioned conjugate gradient" # "Sub cyling" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.3. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Baroclinic time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "split explicit" # "implicit" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time splitting method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.2. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Barotropic time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of vertical time stepping in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.advection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17. Advection Ocean advection 17.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of advection in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flux form" # "Vector form" DOC.set_value("Flux form") Explanation: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of lateral momemtum advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") DOC.set_value("2nd order centered") Explanation: 18.2. Scheme Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean momemtum advection scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.ALE') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 18.3. ALE Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Using ALE for vertical advection ? (if vertical coordinates are sigma) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral tracer advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False DOC.set_value(True) Explanation: 19.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for lateral tracer advection scheme in ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19.3. Effective Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Effective order of limited lateral tracer advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") DOC.set_value("Sweby") Explanation: 19.4. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ideal age" # "CFC 11" # "CFC 12" # "SF6" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19.5. Passive Tracers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Passive tracers advected End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.6. Passive Tracers Advection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is advection of passive tracers different than active ? if so, describe. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False DOC.set_value(True) Explanation: 20.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for vertical tracer advection scheme in ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lateral physics in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Eddy active" # "Eddy admitting" # TODO - please enter value(s) Explanation: 21.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transient eddy representation in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" DOC.set_value("Iso-level") Explanation: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" DOC.set_value("Bi-harmonic") Explanation: 22.2. 
Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" DOC.set_value("Second order") Explanation: 22.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" DOC.set_value("Time + space varying (Smagorinsky)") Explanation: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean 23.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics momemtum eddy viscosity coeff type in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 23.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy viscosity coeff in lateral physics momemtum scheme (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.3. Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy viscosity coeff in lateral physics momemtum scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") DOC.set_value("1e05 m2/sec") Explanation: 23.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy viscosity coeff in lateral physics momemtum scheme (give values in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy viscosity coeff in lateral physics momemtum scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False DOC.set_value(True) Explanation: 24. Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1. 
Mesoscale Closure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a mesoscale closure in the lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 24.2. Submesoscale Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" DOC.set_value("Isoneutral") Explanation: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" DOC.set_value("Harmonic") Explanation: 25.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" DOC.set_value("Second order") Explanation: 25.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" DOC.set_value("Constant") Explanation: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics tracers eddy diffusity coeff type in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) DOC.set_value(600) Explanation: 26.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.3. 
Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) DOC.set_value(600) Explanation: 26.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 26.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "GM" # "Other: [Please specify]" DOC.set_value("GM") Explanation: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV in lateral physics tracers in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 27.2. Constant Val Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If EIV scheme for tracers is constant, specify coefficient value (M2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") DOC.set_value("Skew flux") Explanation: 27.3. Flux Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV flux (advective or skew) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") DOC.set_value("Flow dependent") Explanation: 27.4. Added Diffusivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV added diffusivity (constant, flow dependent or none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vertical physics in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there Langmuir cells mixing in upper ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" DOC.set_value("Turbulent closure - KPP") Explanation: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for tracers in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of tracers, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") DOC.set_value("Constant 10**-5 m**2/s") Explanation: 30.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" DOC.set_value("Turbulent closure - KPP") Explanation: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for momentum in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 31.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 31.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of momentum, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") DOC.set_value("1e-4 m**2/s") Explanation: 31.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Non-penetrative convective adjustment" # "Enhanced vertical diffusion" # "Included in turbulence closure" # "Other: [Please specify]" DOC.set_value("Enhanced vertical diffusion") Explanation: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical convection in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") DOC.set_value("Baroclinic tides, Barotropic tides") Explanation: 32.2. Tide Induced Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how tide induced mixing is modelled (barotropic, baroclinic, none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.3. Double Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there double diffusion End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.4. Shear Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there interior shear mixing End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for tracers in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 33.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of tracers, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") DOC.set_value("10**-5 m**2/s") Explanation: 33.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for momentum in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 34.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of momentum, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ? 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") DOC.set_value("1e-4 m**2/s") Explanation: 34.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of free surface in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear implicit" # "Linear filtered" # "Linear semi-explicit" # "Non-linear implicit" # "Non-linear filtered" # "Non-linear semi-explicit" # "Fully explicit" # "Other: [Please specify]" DOC.set_value("Non-linear semi-explicit") Explanation: 35.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Free surface scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 35.3. Embeded Seaice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the sea-ice embeded in the ocean model (instead of levitating) ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of bottom boundary layer in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diffusive" # "Acvective" # "Other: [Please specify]" DOC.set_value("Diffusive") Explanation: 36.2. Type Of Bbl Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of bottom boundary layer in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) DOC.set_value(100) Explanation: 36.3. Lateral Mixing Coef Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") DOC.set_value("Specific treatment") Explanation: 36.4. 
Sill Overflow Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any specific treatment of sill overflows End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of boundary forcing in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.2. Surface Pressure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") DOC.set_value("No") Explanation: 37.3. Momentum Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.4. Tracers Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.5. Wave Effects Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how wave effects are modelled at ocean surface. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.6. River Runoff Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river runoff from land surface is routed to ocean and any global adjustment done. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") DOC.set_value("Spatial varying") Explanation: 37.7. Geothermal Heating Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how geothermal heating is present at ocean bottom. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Non-linear" # "Non-linear (drag function of speed of tides)" # "Constant drag coefficient" # "None" # "Other: [Please specify]" DOC.set_value("Non-linear") Explanation: 38. 
Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum bottom friction in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Free-slip" # "No-slip" # "Other: [Please specify]" DOC.set_value("No-slip") Explanation: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum lateral friction in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "1 extinction depth" # "2 extinction depth" # "3 extinction depth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of sunlight penetration scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False DOC.set_value(True) Explanation: 40.2. Ocean Colour Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the ocean sunlight penetration scheme ocean colour dependent ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 40.3. Extinction Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe and list extinctions depths for sunlight penetration scheme (if applicable). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from atmos in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Real salt flux" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41.2. From Sea Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from sea-ice in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 41.3. Forced Mode Restoring Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface salinity restoring in forced mode (OMIP) End of explanation
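Every property cell in this section repeats the same two calls: DOC.set_id(...) to select the property and DOC.set_value(...) to record the answer, with the valid ENUM choices only present as comments. As a minimal sketch of how those comments could be enforced, assuming the DOC object behaves exactly as in the cells above; the helper name and the copied choices tuple are illustrative and not part of the ES-DOC notebook API:

# Hypothetical guard around the DOC calls used throughout this section.
# 'choices' must be copied by hand from the 'Valid Choices' comment of the cell.
def set_enum(doc, property_id, value, choices):
    # ENUM properties only accept the listed choices or an 'Other: ...' free-text entry.
    if value not in choices and not value.startswith('Other:'):
        raise ValueError('%r is not a valid choice for %s' % (value, property_id))
    doc.set_id(property_id)
    doc.set_value(value)

# Example with the bottom friction property documented above (choices copied from its comment):
bottom_friction_choices = ('Linear', 'Non-linear',
                           'Non-linear (drag function of speed of tides)',
                           'Constant drag coefficient', 'None')
set_enum(DOC, 'cmip6.ocean.boundary_forcing.momentum.bottom_friction.type',
         'Non-linear', bottom_friction_choices)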
8,231
Given the following text description, write Python code to implement the functionality described below step by step Description: A Decision Tree of Observable Operators Part 1 Step1: ...that was returned from a function called at subscribe-time Step2: ...that was returned from an Action, Callable, Runnable, or something of that sort, called at subscribe-time Step3: ...after a specified delay Step4: ...that emits a sequence of items repeatedly Step6: ...from scratch, with custom logic and cleanup (calling a function again and again) Step7: ...for each observer that subscribes OR according to a condition at subscription time Step8: ...that emits a sequence of integers Step9: ...at particular intervals of time Step10: ...after a specified delay (see timer) ...that completes without emitting items Step11: ...that does nothing at all Step12: ...that excepts
Python Code: reset_start_time(O.just) stream = O.just({'answer': rand()}) disposable = subs(stream) sleep(0.5) disposable = subs(stream) # same answer # all stream ops work, its a real stream: disposable = subs(stream.map(lambda x: x.get('answer', 0) * 2)) Explanation: A Decision Tree of Observable Operators Part 1: NEW Observables. source: http://reactivex.io/documentation/operators.html#tree. (transcribed to RxPY 1.5.7, Py2.7 / 2016-12, Gunther Klessinger, axiros) This tree can help you find the ReactiveX Observable operator you’re looking for. <h2 id="tocheading">Table of Contents</h2> <div id="toc"></div> Usage There are no configured behind the scenes imports or code except startup.py, which defines output helper functions, mainly: rst, reset_start_time: resets a global timer, in order to have use cases starting from 0. subs(observable): subscribes to an observable, printing notifications with time, thread, value All other code is explicitly given in the notebook. Since all initialisiation of tools is in the first cell, you always have to run the first cell after ipython kernel restarts. All other cells are autonmous. In the use case functions, in contrast to the official examples we simply use rand quite often (mapped to randint(0, 100)), to demonstrate when/how often observable sequences are generated and when their result is buffered for various subscribers. When in doubt then run the cell again, you might have been "lucky" and got the same random. RxJS The (bold printed) operator functions are linked to the official documentation and created roughly analogous to the RxJS examples. The rest of the TOC lines links to anchors within the notebooks. Output When the output is not in marble format we display it like so: ``` new subscription on stream 276507289 3.4 M [next] 1.4: {'answer': 42} 3.5 T1 [cmpl] 1.6: fin `` where the lines are syncronouslyprinted as they happen. "M" and "T1" would be thread names ("M" is main thread). For each use case inreset_start_time()(aliasrst`), a global timer is set to 0 and we show the offset to it, in milliseconds & with one decimal value and also the offset to the start of stream subscription. In the example 3.4, 3.5 are millis since global counter reset, while 1.4, 1.6 are offsets to start of subscription. I want to create a NEW Observable... ... that emits a particular item: just End of explanation print('There is a little API difference to RxJS, see Remarks:\n') rst(O.start) def f(): log('function called') return rand() stream = O.start(func=f) d = subs(stream) d = subs(stream) header("Exceptions are handled correctly (an observable should never except):") def breaking_f(): return 1 / 0 stream = O.start(func=breaking_f) d = subs(stream) d = subs(stream) # startasync: only in python3 and possibly here(?) http://www.tornadoweb.org/en/stable/concurrent.html#tornado.concurrent.Future #stream = O.start_async(f) #d = subs(stream) Explanation: ..that was returned from a function called at subscribe-time: start End of explanation rst(O.from_iterable) def f(): log('function called') return rand() # aliases: O.from_, O.from_list # 1.: From a tuple: stream = O.from_iterable((1,2,rand())) d = subs(stream) # d = subs(stream) # same result # 2. from a generator gen = (rand() for j in range(3)) stream = O.from_iterable(gen) d = subs(stream) rst(O.from_callback) # in my words: In the on_next of the subscriber you'll have the original arguments, # potentially objects, e.g. user original http requests. # i.e. 
you could merge those with the result stream of a backend call to # a webservice or db and send the request.response back to the user then. def g(f, a, b): f(a, b) log('called f') stream = O.from_callback(lambda a, b, f: g(f, a, b))('fu', 'bar') d = subs(stream.delay(200)) # d = subs(stream.delay(200)) # does NOT work Explanation: ..that was returned from an Action, Callable, Runnable, or something of that sort, called at subscribe-time: from End of explanation rst() # start a stream of 0, 1, 2, .. after 200 ms, with a delay of 100 ms: stream = O.timer(200, 100).time_interval()\ .map(lambda x: 'val:%s dt:%s' % (x.value, x.interval))\ .take(3) d = subs(stream, name='observer1') # intermix directly with another one d = subs(stream, name='observer2') Explanation: ...after a specified delay: timer End of explanation rst(O.repeat) # repeat is over *values*, not function calls. Use generate or create for function calls! subs(O.repeat({'rand': time.time()}, 3)) header('do while:') l = [] def condition(x): l.append(1) return True if len(l) < 2 else False stream = O.just(42).do_while(condition) d = subs(stream) Explanation: ...that emits a sequence of items repeatedly: repeat End of explanation rx = O.create rst(rx) def f(obs): # this function is called for every observer obs.on_next(rand()) obs.on_next(rand()) obs.on_completed() def cleanup(): log('cleaning up...') return cleanup stream = O.create(f).delay(200) # the delay causes the cleanup called before the subs gets the vals d = subs(stream) d = subs(stream) sleep(0.5) rst(title='Exceptions are handled nicely') l = [] def excepting_f(obs): for i in range(3): l.append(1) obs.on_next('%s %s (observer hash: %s)' % (i, 1. / (3 - len(l)), hash(obs) )) obs.on_completed() stream = O.create(excepting_f) d = subs(stream) d = subs(stream) rst(title='Feature or Bug?') print('(where are the first two values?)') l = [] def excepting_f(obs): for i in range(3): l.append(1) obs.on_next('%s %s (observer hash: %s)' % (i, 1. / (3 - len(l)), hash(obs) )) obs.on_completed() stream = O.create(excepting_f).delay(100) d = subs(stream) d = subs(stream) # I think its an (amazing) feature, preventing to process functions results of later(!) failing functions rx = O.generate rst(rx) The basic form of generate takes four parameters: the first item to emit a function to test an item to determine whether to emit it (true) or terminate the Observable (false) a function to generate the next item to test and emit based on the value of the previous item a function to transform items before emitting them def generator_based_on_previous(x): return x + 1.1 def doubler(x): return 2 * x d = subs(rx(0, lambda x: x < 4, generator_based_on_previous, doubler)) rx = O.generate_with_relative_time rst(rx) stream = rx(1, lambda x: x < 4, lambda x: x + 1, lambda x: x, lambda t: 100) d = subs(stream) Explanation: ...from scratch, with custom logic and cleanup (calling a function again and again): create End of explanation rst(O.defer) # plural! (unique per subscription) streams = O.defer(lambda: O.just(rand())) d = subs(streams) d = subs(streams) # gets other values - created by subscription! # evaluating a condition at subscription time in order to decide which of two streams to take. 
rst(O.if_then) cond = True def should_run(): return cond streams = O.if_then(should_run, O.return_value(43), O.return_value(56)) d = subs(streams) log('condition will now evaluate falsy:') cond = False streams = O.if_then(should_run, O.return_value(43), O.return_value(rand())) d = subs(streams) d = subs(streams) Explanation: ...for each observer that subscribes OR according to a condition at subscription time: defer / if_then End of explanation rst(O.range) d = subs(O.range(0, 3)) Explanation: ...that emits a sequence of integers: range End of explanation rst(O.interval) d = subs(O.interval(100).time_interval()\ .map(lambda x, v: '%(interval)s %(value)s' \ % ItemGetter(x)).take(3)) Explanation: ...at particular intervals of time: interval (you can .publish() it to get an easy "hot" observable) End of explanation rst(O.empty) d = subs(O.empty()) Explanation: ...after a specified delay (see timer) ...that completes without emitting items: empty End of explanation rst(O.never) d = subs(O.never()) Explanation: ...that does nothing at all: never End of explanation rst(O.throw) d = subs(O.throw(ZeroDivisionError)) Explanation: ...that excepts: throw End of explanation
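The interval example above mentions that .publish() gives an easy "hot" observable but does not show it. A minimal sketch, assuming the publish()/connect() pair of RxPY 1.5 and the same O alias, subs and sleep helpers used throughout this notebook; a late subscriber to the connected stream should only see the items emitted after it subscribes:

rst(O.interval)
published = O.interval(100).take(5).publish()   # ConnectableObservable: stays cold until connect()
d1 = subs(published, name='early observer')
published.connect()                             # start emitting to all current subscribers
sleep(0.25)
d2 = subs(published, name='late observer')      # hot now: misses the items already emitted

Compare this with the plain interval example above, where the cold observable restarts its sequence for each new subscription.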
8,232
Given the following text description, write Python code to implement the functionality described below step by step Description: Autoregressive Moving Average (ARMA) Step1: Generate some data from an ARMA process Step2: The conventions of the arma_generate function require that we specify a 1 for the zero-lag of the AR and MA parameters and that the AR parameters be negated. Step3: Now, optionally, we can add some dates information. For this example, we'll use a pandas time series.
Python Code: %matplotlib inline from __future__ import print_function import numpy as np import statsmodels.api as sm import pandas as pd from statsmodels.tsa.arima_process import arma_generate_sample np.random.seed(12345) Explanation: Autoregressive Moving Average (ARMA): Artificial data End of explanation arparams = np.array([.75, -.25]) maparams = np.array([.65, .35]) Explanation: Generate some data from an ARMA process: End of explanation arparams = np.r_[1, -arparams] maparams = np.r_[1, maparams] nobs = 250 y = arma_generate_sample(arparams, maparams, nobs) Explanation: The conventions of the arma_generate function require that we specify a 1 for the zero-lag of the AR and MA parameters and that the AR parameters be negated. End of explanation dates = sm.tsa.datetools.dates_from_range('1980m1', length=nobs) y = pd.TimeSeries(y, index=dates) arma_mod = sm.tsa.ARMA(y, order=(2,2)) arma_res = arma_mod.fit(trend='nc', disp=-1) print(arma_res.summary()) y.tail() import matplotlib.pyplot as plt fig, ax = plt.subplots(figsize=(10,8)) fig = arma_res.plot_predict(start='1999m6', end='2001m5', ax=ax) legend = ax.legend(loc='upper left') Explanation: Now, optionally, we can add some dates information. For this example, we'll use a pandas time series. End of explanation
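Because the generator conventions above negate the AR coefficients and prepend the zero-lag 1, it is easy to lose track of signs. A quick sanity check, assuming the fitted ARMAResults object exposes arparams and maparams as in this statsmodels version, is to compare the estimates with the values used to simulate the data:

# arparams/maparams currently hold the lag polynomials [1, -ar...] and [1, ma...],
# so the 'true' regression-style coefficients are recovered by undoing that convention.
print('true   AR:', -arparams[1:], ' MA:', maparams[1:])
print('fitted AR:', arma_res.arparams, ' MA:', arma_res.maparams)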
8,233
Given the following text description, write Python code to implement the functionality described below step by step Description: Basic Relative Permeability Example in 2D This example uses invasion percolation to invade air (non-wetting) into a water-filled (wetting) 2D network. Being in 2D helps with visualization of the results. Step1: Initialize Required Objects We'll use several pre-defined classes here to simplify the example, allowing us to focus on the actual problem of computing relative permeability. Step2: Using InvasionPercolation to Simulate Air Invasion The InvasionPercolation algorithm will be used to simulate the air invasion. We'll inject only from one face ('left'), and will otherwise use all the default behavior, such as no trapping. Step3: The plot_coordinates function in openpnm.topotools can be used to create a quick and simple plot of the invasion pattern. Note that the ip object possesses a method called results, which accepts as an argument the desired saturation. By passing in Snwp=0.1 this method returns a dictionary containing boolean arrays under the keys 'pore.occupancy' and 'throat.occupancy'. By catching this dictionary with the phase object (air.update) the phase now has arrays 'pore.occupancy' and 'throat.occupancy'. In the following lines, we visualize the invasion pattern by plotting only those pores that have been filled by invading air. Step4: Create StokesFlow Algorithm Now that the invasion pattern for this domain has been established using IP, we can start to see what effect the presence of each phase has on the effective permeability of the other. Step5: We can solve the flow problem on the network without altering the throat conductances, which will give us the maximum flow through the domain for single-phase flow. Step6: Next we will illustrate how to alter the hydraulic conductances of the water phase to account for the presence of air-filled pores and throats. Start by passing 'pore.occupancy' and 'throat.occupancy' to the air object at a specified saturation (0.1 in this case), then reach into the phys_water object and set the conductance of the air-filled throats to 1000x lower than the least conductive water-filled throat. Step7: We then re-run the flow problem, which will now utilize the altered hydraulic conductance values. The pressure field calculated by the StokesFlow algorithm (st) must be passed back to the Phase object (water). Step8: Finally we can visualize the pressure field quickly using OpenPNM's built-in plot_coordinates function. Note that we set all pores that are invaded with air to have a 0 pressure by multiplying the result by the inverse of the `air['pore.occupancy']` array, which sets the invaded pores to a dark color. Step9: Calculate Relative Permeability Curve The above illustration showed how to get the effective permeability at one saturation. We now put this logic into a for loop to obtain water flow rates through the partially air-invaded network at a variety of saturations.
Python Code: import warnings import scipy as sp import numpy as np import openpnm as op import matplotlib.pyplot as plt np.set_printoptions(precision=4) np.random.seed(10) %matplotlib inline ws = op.Workspace() ws.settings["loglevel"] = 40 Explanation: Basic Relative Permeability Example in 2D This example using invasion percolation to invade air (non-wetting) into a water-filled (wetting) 2D network. Being in 2D helps with visualization of the results End of explanation pn = op.network.Cubic(shape=[100, 100, 1]) geo = op.geometry.StickAndBall(network=pn, pores=pn.Ps, throats=pn.Ts) air = op.phases.Air(network=pn) water = op.phases.Water(network=pn) phys = op.physics.Standard(network=pn, phase=air, geometry=geo) phys_water = op.physics.Standard(network=pn, phase=water, geometry=geo) Explanation: Initialize Required Objects We'll use several pre-defined classes here to simplify the example, allowing us to focus on the actual problem of computing relative permeability. End of explanation ip = op.algorithms.InvasionPercolation(network=pn) ip.setup(phase=air) ip.set_inlets(pores=pn.pores(['left'])) ip.run() Explanation: Using InvasionPercolation to Simulate Air Invasion The InvasionPercolation algorithm will be used to simulaton the air invasion. We'll inject only from one face ('left'), and will otherwise use all the default behavior, such as no trapping. End of explanation #NBVAL_IGNORE_OUTPUT air.update(ip.results(Snwp=0.1)) fig = plt.figure(figsize=(6, 6)) fig = op.topotools.plot_coordinates(network=pn, fig=fig) fig = op.topotools.plot_coordinates(network=pn, fig=fig, pores=air['pore.occupancy'], color='grey') Explanation: The plot_coordinates function in openpnm.topotools can be used to create a quick and simple plot of the invasion pattern. Note that the ip object possesses a method called results, which accepts as an argument the desired saturation. By passing in Snwp=0.1 this method returns a dictionary containing boolean arrays under the keys 'pore.occupancy' and 'throat.occupancy'. By catching this dictionary with the phase object (air.update) the phase now has arrays 'pore.occupancy' and 'throat.occupancy'. In the following lines, we visualize the invasion pattern by plotting only those pore that have been filled by invading air. End of explanation st = op.algorithms.StokesFlow(network=pn) st.setup(phase=water) st.set_value_BC(pores=pn.pores('front'), values=1) st.set_value_BC(pores=pn.pores('back'), values=0) Explanation: Create StokesFlow Algorithm Now that the invasion pattern for this domain has been established using IP, we can start to see what effect the presence of each phase has on the effective permeability of the other. End of explanation st.run() Qmax = st.rate(pores=pn.pores('front')) print(Qmax) Explanation: We can solve the flow problem on the netowrk without altering the throat conductances, which will give us the maximum flow through the domain for single phase flow: End of explanation air.update(ip.results(Snwp=0.1)) val = np.amin(phys_water['throat.hydraulic_conductance'])/1000 phys_water['throat.hydraulic_conductance'][air['throat.occupancy']] = val Explanation: Next we will illustrate how to alter the hydraulic conductances of the water phase to account for the presence of air filled pores and throats. 
Start by passing 'pore.occupancy' and 'throat.occupancy' to the air object at a specified saturation (0.1 in this case), then reach into the phys2 object and set the conductance of the air filled throats to 1000x lower than the least conductive water filled throat End of explanation st.run() water.update(st.results()) Explanation: We then re-run the flow problem, which will now utilize the altered hydraulic conductance values. The pressure field calculated by the StokesFlow algorithm (st) must be passed back to the Phase object (water). End of explanation #NBVAL_IGNORE_OUTPUT fig = plt.figure(figsize=(6, 6)) fig = op.topotools.plot_coordinates(network=pn, c=water['pore.pressure']*~air['pore.occupancy'], fig=fig, s=50, marker='s') Explanation: Finally we can visualize the pressure field quickly using OpenPNM's build in plot_coordinates function. Note that we set all pores that are invaded with air to have a 0 pressure by multiplying the result by the inverse of the `air['pore.occupancy'] array`, which sets the invaded pores to a dark color. End of explanation #NBVAL_IGNORE_OUTPUT phys_water.regenerate_models() # Regenerate phys2 to reset any calculation done above data = [] # Initialize a list to hold data for s in np.arange(0, 1, 0.1): # Loop through saturations # 1: Update air object with occupancy at given saturation air.update(ip.results(Snwp=s)) # 2: Overwrite water's hydraulic conductance in air-filled locations phys_water['throat.hydraulic_conductance'][air['throat.occupancy']] = val # 3: Re-run flow problem st.run() # 4: Compute flow through inlet phase and append to data data.append([s, st.rate(pores=pn.pores('front'))[0]]) data = np.vstack(data).T # Convert data to numpy array # Plot relative permeability curve for water flow in partially air filled network plt.plot(*data, 'b-o') plt.show() Explanation: Calculate Relative Permeability Curve The above illustration showed how to get the effective permeability at one saturation. We now put this logic into a for loop to obtain water flow rates throat the partialy air-invaded network at a variety of saturations. End of explanation
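The curve above is the absolute inlet flow rate versus air saturation; dividing by the single-phase rate turns it into a conventional relative permeability for water. A minimal sketch, assuming Qmax from the unaltered StokesFlow run earlier in the example is still in scope (st.rate returns it as a one-element array):

krw = data[1] / Qmax[0]   # relative permeability of water, close to 1 at low air saturation
plt.plot(data[0], krw, 'b-o')
plt.xlabel('Air saturation, Snwp')
plt.ylabel('k_rw = Q(Snwp) / Q_single_phase')
plt.ylim([0, 1.05])
plt.show()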
8,234
Given the following text description, write Python code to implement the functionality described below step by step Description: <font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 5</font> Download Step1: Objects In Python, everything is an object!
Python Code: # Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version())
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 5</font>
Download: http://github.com/dsacademybr
End of explanation
# Creating a list
lst_num = ["Data", "Science", "Academy", "Nota", 10, 10]

# The list lst_num is an object, an instance of Python's list class
type(lst_num)
lst_num.count(10)

# We use the type function to check the type of an object
print(type(10))
print(type([]))
print(type(()))
print(type({}))
print(type('a'))

# Creating a new object type called Carro
class Carro(object):
    pass

# Instance of Carro
palio = Carro()
print(type(palio))

# Creating a class
class Estudantes:
    def __init__(self, nome, idade, nota):
        self.nome = nome
        self.idade = idade
        self.nota = nota

# Creating an object called Estudante1 from the Estudantes class
Estudante1 = Estudantes("Pele", 12, 9.5)

# Attribute of the Estudantes class, available on every object created from this class
Estudante1.nome

# Attribute of the Estudantes class, available on every object created from this class
Estudante1.idade

# Attribute of the Estudantes class, available on every object created from this class
Estudante1.nota

# Creating a class
class Funcionarios:
    def __init__(self, nome, salario):
        self.nome = nome
        self.salario = salario

    def listFunc(self):
        print("The employee's name is " + self.nome + " and the salary is R$" + str(self.salario))

# Creating an object called Func1 from the Funcionarios class
Func1 = Funcionarios("Obama", 20000)

# Using the class method
Func1.listFunc()

print("**** Using attributes *****")
hasattr(Func1, "nome")
hasattr(Func1, "salario")
setattr(Func1, "salario", 4500)
hasattr(Func1, "salario")
getattr(Func1, "salario")
delattr(Func1, "salario")
hasattr(Func1, "salario")
Explanation: Objects
In Python, everything is an object!
End of explanation
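As a small additional sketch reusing the Funcionarios class and the Func1 object created above: getattr also accepts a default value, which is a safe way to probe an attribute after delattr has removed it, and an instance's attributes can be inspected through its __dict__. Nothing here is part of the original notebook; it only extends the attribute functions already shown.
# getattr with a default avoids the AttributeError raised after delattr removed 'salario'
print(getattr(Func1, "salario", "no salary attribute"))
setattr(Func1, "salario", 5000)            # put the attribute back
print(getattr(Func1, "salario", "no salary attribute"))
# every instance attribute lives in the object's __dict__
print(Func1.__dict__)
# classes and their methods are objects too
print(type(Funcionarios), type(Funcionarios.listFunc))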
8,235
Given the following text description, write Python code to implement the functionality described below step by step Description: Reading the data Step1: Build clustering model Here we build a kmeans model, and select the "optimal" number of clusters. Here we see that the optimal number of clusters is 2. Step2: Build the optimal model and apply it Step3: Cluster Profiles Here, the optimal model has two clusters: cluster 0 with 399 cases, and cluster 1 with 537 cases. As this model is based on binary inputs, the best description of the clusters is the distribution of zeros and ones of each input (question). The figure below gives the cluster profiles of this model. Cluster 0 is on the left, cluster 1 on the right. The questions involved are the ones that differ most (highest bars)
Python Code: #contributions = pd.read_json(path_or_buf='../data/EGALITE4.brut.json', orient="columns") def loadContributions(file, withsexe=False): contributions = pd.read_json(path_or_buf=file, orient="columns") rows = []; rindex = []; for i in range(0, contributions.shape[0]): row = {}; row['id'] = contributions['id'][i] rindex.append(contributions['id'][i]) if (withsexe): if (contributions['sexe'][i] == 'Homme'): row['sexe'] = 0 else: row['sexe'] = 1 for question in contributions['questions'][i]: if (question.get('Reponse')): # and (question['texte'][0:5] != 'Savez') : row[question['titreQuestion']+' : '+question['texte']] = 1 for criteres in question.get('Reponse'): # print(criteres['critere'].keys()) row[question['titreQuestion']+'. (Réponse) '+question['texte']+' -> '+str(criteres['critere'].get('texte'))] = 1 rows.append(row) df = pd.DataFrame(data=rows) df.fillna(0, inplace=True) return df df = loadContributions('../data/EGALITE1.brut.json', True) df = df.merge(right=loadContributions('../data/EGALITE2.brut.json'), how='outer', right_on='id', left_on='id') df = df.merge(right=loadContributions('../data/EGALITE3.brut.json'), how='outer', right_on='id', left_on='id') df = df.merge(right=loadContributions('../data/EGALITE4.brut.json'), how='outer', right_on='id', left_on='id') df = df.merge(right=loadContributions('../data/EGALITE5.brut.json'), how='outer', right_on='id', left_on='id') df = df.merge(right=loadContributions('../data/EGALITE6.brut.json'), how='outer', right_on='id', left_on='id') df.fillna(0, inplace=True) df.index = df['id'] df.to_csv('consultation_an.csv', format='%d') #df.columns = ['Q_' + str(col+1) for col in range(len(df.columns) - 2)] + ['id' , 'sexe'] df.head() df = loadContributions('../data/EGALITE4.brut.json', True) Explanation: Reading the data End of explanation from sklearn.cluster import KMeans from sklearn import metrics import numpy as np X = df.drop('id', axis=1).values def train_kmeans(nb_clusters, X): kmeans = KMeans(n_clusters=nb_clusters, random_state=0).fit(X) return kmeans #print(kmeans.predict(X)) #kmeans.cluster_centers_ def select_nb_clusters(): perfs = {}; for nbclust in range(2,10): kmeans_model = train_kmeans(nbclust, X); labels = kmeans_model.labels_ # from http://scikit-learn.org/stable/modules/clustering.html#calinski-harabaz-index # we are in an unsupervised model. cannot get better! # perfs[nbclust] = metrics.calinski_harabaz_score(X, labels); perfs[nbclust] = metrics.silhouette_score(X, labels); print(perfs); return perfs; df['clusterindex'] = train_kmeans(4, X).predict(X) #df perfs = select_nb_clusters(); # result : # {2: 341.07570462155348, 3: 227.39963334619881, 4: 186.90438345452918, 5: 151.03979976346525, 6: 129.11214073405731, 7: 112.37235520885432, 8: 102.35994869157568, 9: 93.848315820675438} optimal_nb_clusters = max(perfs, key=perfs.get); print("optimal_nb_clusters" , optimal_nb_clusters); Explanation: Build clustering model Here we build a kmeans model , and select the "optimal" of clusters. Here we see that the optimal number of clusters is 2. 
End of explanation km_model = train_kmeans(optimal_nb_clusters, X); df['clusterindex'] = km_model.predict(X) lGroupBy = df.groupby(['clusterindex']).mean(); # km_model.__dict__ cluster_profile_counts = df.groupby(['clusterindex']).count(); cluster_profile_means = df.groupby(['clusterindex']).mean(); global_counts = df.count() global_means = df.mean() cluster_profile_counts.head() #cluster_profile_means.head() #df.info() df_profiles = pd.DataFrame(); nbclusters = cluster_profile_means.shape[0] df_profiles['clusterindex'] = range(nbclusters) for col in cluster_profile_means.columns: if(col != "clusterindex"): df_profiles[col] = np.zeros(nbclusters) for cluster in range(nbclusters): df_profiles[col][cluster] = cluster_profile_means[col][cluster] # row.append(df[col].mean()); df_profiles.head() #print(df_profiles.columns) intereseting_columns = {}; for col in df_profiles.columns: if(col != "clusterindex"): global_mean = df[col].mean() diff_means_global = abs(df_profiles[col] - global_mean). max(); # print(col , diff_means_global) if(diff_means_global > 0.1): intereseting_columns[col] = True #print(intereseting_columns) %matplotlib inline import matplotlib import numpy as np import matplotlib.pyplot as plt #cols = [ col for col in cluster_profile_counts.columns] #cluster_profile_means.ix[0].plot.bar() Explanation: Build the optimal model and apply it End of explanation interesting = list(intereseting_columns.keys()) df_profiles_sorted = df_profiles[interesting].sort_index(axis=1) df_profiles_sorted.plot.bar(figsize =(1, 1)) df_profiles_sorted.plot.bar(figsize =(16, 8), legend=False) df_profiles_sorted.T df_profiles.sort_index(axis=1).T Explanation: Cluster Profiles Here, the optimal model ihas two clusters , cluster 0 with 399 cases, and 1 with 537 cases. As this model is based on binary inputs. Given this, the best description of the clusters is by the distribution of zeros and ones of each input (question). The figure below gives the cluster profiles of this model. Cluster 0 on the left. 1 on the right. The questions invloved as different (highest bars) End of explanation
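A short follow-up sketch, assuming the df, km_model and df_profiles objects built above are still in memory: it counts the contributions per cluster and ranks the questions whose cluster means differ the most, which correspond to the highest bars in the profile plot.
# number of contributions assigned to each cluster
cluster_sizes = df['clusterindex'].value_counts().sort_index()
print(cluster_sizes)
# per-question spread between cluster means; the largest values are the most discriminating questions
profile_cols = df_profiles.drop('clusterindex', axis=1)
spread = profile_cols.max() - profile_cols.min()
print(spread.sort_values(ascending=False).head(10))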
8,236
Given the following text description, write Python code to implement the functionality described below step by step Description: What is Pandas? One of the best options for working with tabular data in Python is to use the Python Data Analysis Library (a.k.a. Pandas). The Pandas library provides data structures, produces high quality plots with matplotlib and integrates nicely with other libraries that use NumPy arrays. Like NumPy, Pandas introduces some new data types to our Python word, the most important of which is the DataFrame. Briefly, a DataFrame is a 2-dimensional data structure that can store data of different types (including characters, integers, floating point values, factors and more) in columns. It is similar to a spreadsheet or an SQL table or the data.frame in R. A DataFrame always has an index (0-based). An index refers to the position of an element in the data structure. Once we load our data into a DataFrame, pandas gives a host of analytical capabilities with the data. Among these are Step1: The DataFrame We'll begin by exploring the key elements of the DataFrame object. Some notions are self evident, i.e., data are stored in rows and columns, much like a spreadsheet. Others are more nuanced Step2: A few key points here Step3: While the order of columns has changed (they are alphabetized), the data are the same as the previous DataFrame even though it was created differently. This is just another way to think of the data underlying a DataFrame. Of note here is that each dictionary has the same number of elements in it and the order of the elements is important otherwise the values in one dictionary/column would lose its proper correspondence with the other elements. Eh, so what... What does this reveal? List of lists vs set of dictionaries? Well, it explains how you can extract elements from the DataFrame. Thinking of a DataFrame as a list of list, getting the value of the 2nd column, 3rd row is equivalent of getting data from the 2nd item in the 3rd list. We can get that value using the DataFrame's iloc function (short for intrinsic location), passing the row and column of the location we want. Step4: And if we hop over to thinking a DataFrame is a set of dictionaries, we can target a specific value by specifying the index of the value (row) from the dictionary column) we want. The row however, is referred to by the index we assigned, not it's implicit index generated by the order in which it was entered. Step5: We'll return to how we extract data from a DataFrame, but for now just soak in the fact that values in a DataFrame can be referenced by their implicit location (i.e. their row, column coordinates) and by their explicit column name and row index. Creating a dataframe from a CSV file More than likely we'll be reading in data vs entering it manually, so let's review how files are read into a Pandas Dataframe. Pandas can read many other formats Step6: Exploring our Species Survey Data Now, as we often do, let's look at the type of the object we just created. Step7: We can print the entire contents of the data frame by just calling the object. Remember that in Jupyter notebooks, we can toggle the output by clicking the lightly shaded area to the left of it... Step8: At the bottom of the [long] output above, we see that the data includes 33,549 rows and 9 columns. The first column is the index of the DataFrame. The index is used to identify the position of the data, but it is not an actual column of the DataFrame. 
It looks like the read_csv function in Pandas read our file properly. All the values in a column have the same type. For example, months have type int64, which is a kind of integer. Cells in the month column cannot have fractional values, but the weight and hindfoot_length columns can, because they have type float64. The object type doesn’t have a very helpful name, but in this case it represents strings (such as ‘M’ and ‘F’ in the case of sex). Exploring our Species Survey Data Now, as we often do, let's look at the type of the object we just created. Step9: As expected, it’s a DataFrame (or, to use the full name that Python uses to refer to it internally, a pandas.core.frame.DataFrame).<br> What kind of things does surveys_df contain? DataFrames have an attribute called dtypes that answers this Step10: All the values in a column have the same type. For example, months have type int64, which is a kind of integer. Cells in the month column cannot have fractional values, but the weight and hindfoot_length columns can, because they have type float64. The object type doesn’t have a very helpful name, but in this case it represents strings (such as ‘M’ and ‘F’ in the case of sex). Useful Ways to View DataFrame Objects in Python There are many ways to summarize and access the data stored in DataFrames, using attributes and methods provided by the DataFrame object. To access an <u>attribute</u>, use the DataFrame object name followed by the attribute name df_object.attribute. Using the DataFrame surveys_df and attribute columns, an index of all the column names in the DataFrame can be accessed with surveys_df.columns. <u>Methods</u> are called in a similar fashion using the syntax df_object.method(). As an example, surveys_df.head() gets the first few rows in the DataFrame surveys_df using the head() method. With a method, we can supply extra information in the parens to control behaviour. Let’s look at the data using these. <font color='red'>Challenge - DataFrames</font> Using our DataFrame surveys_df, try out the attributes & methods below to see what they return. 1. surveys_df.columns 1. surveys_df.shape Take note of the output of shape - what format does it return the shape of the DataFrame in? 1. surveys_df.head() Also, what does surveys_df.head(15) do? 1. surveys_df.tail() Use the boxes below to type in the above commands and see what they produce. Step11: Inspecting the Data in a Pandas DataFrame We’ve read our data into Python. Next, let’s perform some quick summary statistics to learn more about the data that we’re working with. We might want to know how many animals were collected in each plot, or how many of each species were caught. We can perform summary stats quickly using groups. But first we need to figure out what we want to group by. Let’s begin by exploring the data in our data frame Step12: We can extract one column of data into a new object by referencing that column as shown here Step13: Examining the type of this speciesIDs object reveals another Pandas data type Step14: A series object is a one-dimensional array, much like a NumPy array, with its own set of properties and functions. The values are indexed allowing us to extract values at a specific row (try Step15: <font color='red'>Challenge - Counts and Lists from Data </font> Create a list of unique plot ID’s found in the surveys data. Call it plot_names. How many unique plots are there in the data? How many unique species are in the data? 
What is the difference between len(plot_names) and surveys_df['plot_id'].nunique()? Step16: Grouping Data in Pandas We often want to calculate summary statistics grouped by subsets or attributes within fields of our data. For example, we might want to calculate the average weight of all individuals per plot. We can calculate basic statistics for all records in a single column using the syntax below Step17: We can also extract one specific metric if we wish Step18: But if we want to summarize by one or more variables, for example sex, we can use Pandas’ .groupby method. Once we’ve created a groupby DataFrame, we can quickly calculate summary statistics by a group of our choice.
Python Code: #Import the package import pandas as pd Explanation: What is Pandas? One of the best options for working with tabular data in Python is to use the Python Data Analysis Library (a.k.a. Pandas). The Pandas library provides data structures, produces high quality plots with matplotlib and integrates nicely with other libraries that use NumPy arrays. Like NumPy, Pandas introduces some new data types to our Python word, the most important of which is the DataFrame. Briefly, a DataFrame is a 2-dimensional data structure that can store data of different types (including characters, integers, floating point values, factors and more) in columns. It is similar to a spreadsheet or an SQL table or the data.frame in R. A DataFrame always has an index (0-based). An index refers to the position of an element in the data structure. Once we load our data into a DataFrame, pandas gives a host of analytical capabilities with the data. Among these are: - subset/select/query specific rows and or columns of data - handling missing data - grouping/aggregating data - sorting, transforming, pivoting, and "melting" data - computing descriptive and summary stats - combining data in different DataFrames - plotting data In short, Pandas and its DataFrame object is essentially a one-stop for anything data. Learning by Diving In! Here in this notebook, we'll dive in with some examples and then explain what's going on. The highlights here include: * Creating DataFrames programmatically and then by reading in data; and in doing so reviewing the structure and key elements of a DataFrame * Examining key properties and methods of a DataFrame that quickly reveal information about our DataFrame: the number of rows and columns, what types of data it stores, quick summary statistics, etc. * Summarizing the data in a DataFrame, including how to quickly count and list unique items in a column. * Examining the various ways to handle missing data in a DataFrame * Grouping data * Subsetting observations (rows) and variables (columns) It all starts by importing the package... End of explanation #Creating a simple data frame as a list of lists df = pd.DataFrame([['Joe',22,True],['Bob',25,False],['Sue',28,False],['Ken',24,True]], index = [10,20,40,30], columns = ['Name','Age','IsStudent'] ) #Display the resulting data frame df Explanation: The DataFrame We'll begin by exploring the key elements of the DataFrame object. Some notions are self evident, i.e., data are stored in rows and columns, much like a spreadsheet. Others are more nuanced: implicit and explicit indices, tables vs. views, and some others. Let's begin examining the components of DataFrames by examining two ways they can be created. DataFrame as a list of lists First, a DataFrame can be considered as a list of lists. Below we see an example where we have 4 sub-lists, each containing 3 items (e.g. the first list ['Joe',22,& True]). Each of these 4 sub-lists comprises a row in the resulting DataFrame, and each item in a given list becomes a column. End of explanation #Creating a data frame as dictionaries of lists df = pd.DataFrame({"Name":['Joe','Bob','Sue','Ken'], "Age":[22,25,28,24], "IsStudent":[True,False,False,True]}, index = [10,20,40,30] ) df Explanation: A few key points here: * First is that each of the sub-lists has the same number of elements (3) and the same data types as the other sub-lists. Otherwise we'd end up with missing data or "coerced" data types. * Second is that we also explicitly specify and index for the rows (index = [1,2,3,4]). 
The index allows us to identify a specific row. * Likewise, we explicitly set column names with columns = ['Name','Age','IsStudent'], and yes, these allow us to indentify specific columns in our DataFrame. Data frame as a collection of dictionaries Another way to build (and think of) a DataFrame as a set of dictionaries where each dictionary is a column of data, with the dictionary's key being the column name and it's value being a list of values: End of explanation #Get the 3rd item from the 2nd row; recalling Python is zero-based print df.iloc[1,2] Explanation: While the order of columns has changed (they are alphabetized), the data are the same as the previous DataFrame even though it was created differently. This is just another way to think of the data underlying a DataFrame. Of note here is that each dictionary has the same number of elements in it and the order of the elements is important otherwise the values in one dictionary/column would lose its proper correspondence with the other elements. Eh, so what... What does this reveal? List of lists vs set of dictionaries? Well, it explains how you can extract elements from the DataFrame. Thinking of a DataFrame as a list of list, getting the value of the 2nd column, 3rd row is equivalent of getting data from the 2nd item in the 3rd list. We can get that value using the DataFrame's iloc function (short for intrinsic location), passing the row and column of the location we want. End of explanation #Get the value in the 'Name' column corresponding to the row with an index of '20' print df['Name'][20] Explanation: And if we hop over to thinking a DataFrame is a set of dictionaries, we can target a specific value by specifying the index of the value (row) from the dictionary column) we want. The row however, is referred to by the index we assigned, not it's implicit index generated by the order in which it was entered. End of explanation #Read in the surveys.csv file surveys_df = pd.read_csv('../data/surveys.csv') Explanation: We'll return to how we extract data from a DataFrame, but for now just soak in the fact that values in a DataFrame can be referenced by their implicit location (i.e. their row, column coordinates) and by their explicit column name and row index. Creating a dataframe from a CSV file More than likely we'll be reading in data vs entering it manually, so let's review how files are read into a Pandas Dataframe. Pandas can read many other formats: Excel files, HTML tables, JSON, etc. But let's concentrate on the simplest one - the csv file - and discuss the key parameters involved. In the Data folder within our workspace is a file named surveys.csv which holds the data we'll use. If you're curious, this dataset is part of the Portal Teaching data, a subset of the data from Ernst et al Long-term monitoring and experimental manipulation of a Chihuahuan Desert ecosystem near Portal, Arizona, USA. 
The dataset is stored as a .csv file: each row holds information for a single animal, and the columns represent: | Column | Description | | :--- | :--- | |record_id | Unique id for the observation | |month| month of observation | |day | day of observation | |year | year of observation | |plot_id | ID of a particular plot | |species_id | 2-letter code | |sex | sex of animal (“M”, “F”) | |hindfoot_length | length of the hindfoot in mm | |weight | weight of the animal in grams | Below, we read in this file, saving the contents to the variabel surveys_df End of explanation #Show the object type of the object we just created type(surveys_df) Explanation: Exploring our Species Survey Data Now, as we often do, let's look at the type of the object we just created. End of explanation #Show the data frame's contents surveys_df Explanation: We can print the entire contents of the data frame by just calling the object. Remember that in Jupyter notebooks, we can toggle the output by clicking the lightly shaded area to the left of it... End of explanation #Show the object type of the object we just created type(surveys_df) Explanation: At the bottom of the [long] output above, we see that the data includes 33,549 rows and 9 columns. The first column is the index of the DataFrame. The index is used to identify the position of the data, but it is not an actual column of the DataFrame. It looks like the read_csv function in Pandas read our file properly. All the values in a column have the same type. For example, months have type int64, which is a kind of integer. Cells in the month column cannot have fractional values, but the weight and hindfoot_length columns can, because they have type float64. The object type doesn’t have a very helpful name, but in this case it represents strings (such as ‘M’ and ‘F’ in the case of sex). Exploring our Species Survey Data Now, as we often do, let's look at the type of the object we just created. End of explanation #Show the data types of the columns in our data frame surveys_df.dtypes Explanation: As expected, it’s a DataFrame (or, to use the full name that Python uses to refer to it internally, a pandas.core.frame.DataFrame).<br> What kind of things does surveys_df contain? DataFrames have an attribute called dtypes that answers this: End of explanation # Challenge 1 surveys_df. # Challenge 2 # Challenge 3 Explanation: All the values in a column have the same type. For example, months have type int64, which is a kind of integer. Cells in the month column cannot have fractional values, but the weight and hindfoot_length columns can, because they have type float64. The object type doesn’t have a very helpful name, but in this case it represents strings (such as ‘M’ and ‘F’ in the case of sex). Useful Ways to View DataFrame Objects in Python There are many ways to summarize and access the data stored in DataFrames, using attributes and methods provided by the DataFrame object. To access an <u>attribute</u>, use the DataFrame object name followed by the attribute name df_object.attribute. Using the DataFrame surveys_df and attribute columns, an index of all the column names in the DataFrame can be accessed with surveys_df.columns. <u>Methods</u> are called in a similar fashion using the syntax df_object.method(). As an example, surveys_df.head() gets the first few rows in the DataFrame surveys_df using the head() method. With a method, we can supply extra information in the parens to control behaviour. Let’s look at the data using these. 
<font color='red'>Challenge - DataFrames</font> Using our DataFrame surveys_df, try out the attributes & methods below to see what they return. 1. surveys_df.columns 1. surveys_df.shape Take note of the output of shape - what format does it return the shape of the DataFrame in? 1. surveys_df.head() Also, what does surveys_df.head(15) do? 1. surveys_df.tail() Use the boxes below to type in the above commands and see what they produce. End of explanation # Look at the column names surveys_df.columns Explanation: Inspecting the Data in a Pandas DataFrame We’ve read our data into Python. Next, let’s perform some quick summary statistics to learn more about the data that we’re working with. We might want to know how many animals were collected in each plot, or how many of each species were caught. We can perform summary stats quickly using groups. But first we need to figure out what we want to group by. Let’s begin by exploring the data in our data frame: First, examine the column names. (Yes,I know we just did that in the Challenge above...) End of explanation speciesIDs = surveys_df['species_id'] Explanation: We can extract one column of data into a new object by referencing that column as shown here: End of explanation type(speciesIDs) Explanation: Examining the type of this speciesIDs object reveals another Pandas data type: the Series which is slightly different than a DataFrame... End of explanation #Reveal how many unique species_ID values are in the table speciesIDs.nunique() #List the unique values speciesIDs.unique() Explanation: A series object is a one-dimensional array, much like a NumPy array, with its own set of properties and functions. The values are indexed allowing us to extract values at a specific row (try: speciesIDs[5]) or slice of rows (try: species[2:7]). We can also, using the series.nunique() and series.unique() functions, generate a count of unique values in the series and a list of unique values, respectively. End of explanation # Challenge 1 # Challenge 2 Explanation: <font color='red'>Challenge - Counts and Lists from Data </font> Create a list of unique plot ID’s found in the surveys data. Call it plot_names. How many unique plots are there in the data? How many unique species are in the data? What is the difference between len(plot_names) and surveys_df['plot_id'].nunique()? End of explanation surveys_df['weight'].describe() Explanation: Grouping Data in Pandas We often want to calculate summary statistics grouped by subsets or attributes within fields of our data. For example, we might want to calculate the average weight of all individuals per plot. We can calculate basic statistics for all records in a single column using the syntax below: End of explanation print" Min: ", surveys_df['weight'].min() print" Max: ", surveys_df['weight'].max() print" Mean: ", surveys_df['weight'].mean() print" Std Dev: ", surveys_df['weight'].std() print" Count: ", surveys_df['weight'].count() Explanation: We can also extract one specific metric if we wish: End of explanation # Group data by sex grouped_data = surveys_df.groupby('sex') # Show just the grouped means grouped_data.mean() # Or, use the describe function to reveal all summary stats for the grouped data grouped_data.describe() Explanation: But if we want to summarize by one or more variables, for example sex, we can use Pandas’ .groupby method. Once we’ve created a groupby DataFrame, we can quickly calculate summary statistics by a group of our choice. End of explanation
8,237
Given the following text description, write Python code to implement the functionality described below step by step Description: Option chains Step1: Suppose we want to find the options on the SPX, with the following conditions Step2: To avoid issues with market data permissions, we'll use delayed data Step3: Then get the ticker. Requesting a ticker can take up to 11 seconds. Step4: Take the current market value of the ticker Step5: The following request fetches a list of option chains Step6: These are four option chains that differ in exchange and tradingClass. The latter is 'SPX' for the monthly and 'SPXW' for the weekly options. Note that the weekly expiries are disjoint from the monthly ones, so when interested in the weekly options the monthly options can be added as well. In this case we're only interested in the monthly options trading on SMART Step7: What we have here is the full matrix of expirations x strikes. From this we can build all the option contracts that meet our conditions Step8: Now to get the market data for all options in one go Step9: The option greeks are available from the modelGreeks attribute, and if there is a bid, ask resp. last price available also from bidGreeks, askGreeks and lastGreeks. For streaming ticks the greek values will be kept up to date to the current market situation.
Python Code: from ib_insync import * util.startLoop() ib = IB() ib.connect('127.0.0.1', 7497, clientId=12) Explanation: Option chains End of explanation spx = Index('SPX', 'CBOE') ib.qualifyContracts(spx) Explanation: Suppose we want to find the options on the SPX, with the following conditions: Use the next three monthly expiries; Use strike prices within +- 20 dollar of the current SPX value; Use strike prices that are a multitude of 5 dollar. To get the current market value, first create a contract for the underlyer (the S&P 500 index): End of explanation ib.reqMarketDataType(4) Explanation: To avoid issues with market data permissions, we'll use delayed data: End of explanation [ticker] = ib.reqTickers(spx) ticker Explanation: Then get the ticker. Requesting a ticker can take up to 11 seconds. End of explanation spxValue = ticker.marketPrice() spxValue Explanation: Take the current market value of the ticker: End of explanation chains = ib.reqSecDefOptParams(spx.symbol, '', spx.secType, spx.conId) util.df(chains) Explanation: The following request fetches a list of option chains: End of explanation chain = next(c for c in chains if c.tradingClass == 'SPX' and c.exchange == 'SMART') chain Explanation: These are four option chains that differ in exchange and tradingClass. The latter is 'SPX' for the monthly and 'SPXW' for the weekly options. Note that the weekly expiries are disjoint from the monthly ones, so when interested in the weekly options the monthly options can be added as well. In this case we're only interested in the monthly options trading on SMART: End of explanation strikes = [strike for strike in chain.strikes if strike % 5 == 0 and spxValue - 20 < strike < spxValue + 20] expirations = sorted(exp for exp in chain.expirations)[:3] rights = ['P', 'C'] contracts = [Option('SPX', expiration, strike, right, 'SMART', tradingClass='SPX') for right in rights for expiration in expirations for strike in strikes] contracts = ib.qualifyContracts(*contracts) len(contracts) contracts[0] Explanation: What we have here is the full matrix of expirations x strikes. From this we can build all the option contracts that meet our conditions: End of explanation tickers = ib.reqTickers(*contracts) tickers[0] Explanation: Now to get the market data for all options in one go: End of explanation ib.disconnect() Explanation: The option greeks are available from the modelGreeks attribute, and if there is a bid, ask resp. last price available also from bidGreeks, askGreeks and lastGreeks. For streaming ticks the greek values will be kept up to date to the current market situation. End of explanation
8,238
Given the following text description, write Python code to implement the functionality described below step by step Description: HIV Methylation Age Advancement Step1: Looking at Predicted Time of Onset The idea of age acceleration, only really makes sense in this context as a person should age normally until the onset of the disease Step2: Interestingly a lot of the paitents off the diagnonal in the recently diagnosed group have detectable HIV rna in the blood plasma. Step3: Further inspection of age We see a relatively big trend of age advancement with patient age We have a high degree of correlation between these two variables in our dataset As can be seen below, these two variables age tighly correlated for the short term infected patients with a little more variablity for the chronic HIV patients It can also be seen that the patients with recent HIV infection were infected at a considerably older age than those with long term infection Step4: Adjust out the age effect in age advancement This could be an artifact of a number of things including the association of age with duration of HIV In general we don't have much evidence for an age dependent effect but there is a slight trend For now we will look at both the raw age advancment and the age adjusted advancement When age is not adjusted for its probably a good idea to use age as a covariate as we might get spurious correlations with are artifacts of this age trend Step5: Look at Confounders Data from the labs Step6: Cell composition from mixture model estimates Step7: While we see a significant effect of NK cell concentration with increasing age advancment, this does not seem to be specific to HIV+ patients. Step8: Multivariate modeling of confounders Here we are looking at biological age, MCV, and NK cell count. We contructed a similar model with monocyte count as well but found that it did not add to the model fit. Step9: Modeling residuals of aging model with HIV and cell composition Here we have to use the estimated cell counts as we do not have blood work for the controls Step10: Looking at residuals of model fit with cell composition This is very conservative
Python Code: import NotebookImport from IPython.display import clear_output from HIV_Age_Advancement import * from Setup.DX_Imports import * import statsmodels.api as sm import seaborn as sns sns.set_context("paper", font_scale=1.7, rc={"lines.linewidth": 2.5}) sns.set_style("white") Explanation: HIV Methylation Age Advancement: Confounders We know that there is a considerable amount of confounding going on with our HIV-associated aging signal and the patient's cellular composition. Patients with HIV inherently have different cellular compositions including lower CD4 T-cell counts and higher proportions of other cell types. In addition we know that in the normal aging process the composition of blood changes througout time. It is very hard to determine whether appearant age advancement is due to age associated blood composition chnages which happen as a direct consequence of HIV infection, or if HIV infection causes accelerated aging resulting in an adjustment of the blood makeup. End of explanation fig, ax = subplots(figsize=(5,4)) plot_regression(a2, p2, ax=ax) fig.tight_layout() Explanation: Looking at Predicted Time of Onset The idea of age acceleration, only really makes sense in this context as a person should age normally until the onset of the disease End of explanation fig, ax = subplots(figsize=(5,4)) plot_regression(a2.ix[ti(labs['LLQ PLASMA'] != '>LLQ')], p2, ax=ax) series_scatter(a2.ix[ti(labs['LLQ PLASMA'] == '>LLQ')], p2, color=colors[0], ax=ax, ann=None) fig.tight_layout() Explanation: Interestingly a lot of the paitents off the diagnonal in the recently diagnosed group have detectable HIV rna in the blood plasma. End of explanation fig, axs = subplots(1,3, figsize=(14,4), sharey=True) age_at_dx = (clinical['estimated duration hiv (months)'] / 12.) age_at_dx.name = 'age_at_dx' series_scatter(age, age_at_dx.ix[duration.index], ax=axs[0]) violin_plot_pandas(duration[duration != 'Control'], age, ax=axs[1]) violin_plot_pandas(duration, age_at_dx, ax=axs[2]) for ax in axs: prettify_ax(ax) fig, axs = subplots(1,3, figsize=(14,4), sharey=True) age_at_dx = age - (clinical['estimated duration hiv (months)'] / 12.) 
age_at_dx.name = 'age_at_dx' series_scatter(age, age_at_dx.ix[duration.index], ax=axs[0]) violin_plot_pandas(duration[duration != 'Control'], age, ax=axs[1]) violin_plot_pandas(duration, age_at_dx, ax=axs[2]) for ax in axs: prettify_ax(ax) Explanation: Further inspection of age We see a relatively big trend of age advancement with patient age We have a high degree of correlation between these two variables in our dataset As can be seen below, these two variables age tighly correlated for the short term infected patients with a little more variablity for the chronic HIV patients It can also be seen that the patients with recent HIV infection were infected at a considerably older age than those with long term infection End of explanation age_advancement = (p2 - a2).ix[duration.index].dropna() age_advancement.name = 'age_advancement' reg = linear_regression(age, age_advancement) age_adj = (age_advancement - age * reg['slope']).dropna() age_adj = age_adj - reg.intercept age_adj.name = 'age advancment (adjusted)' fig, axs = subplots(1,2, figsize=(10,4), sharey=True) series_scatter(age_advancement, age, ax=axs[0]) series_scatter(age_adj, age, ax=axs[1]) for ax in axs: prettify_ax(ax) fig.tight_layout() residual = (pred_c - age).ix[duration.index] residual.name = 'residual' reg = linear_regression(age, residual) resid_adj = (residual - age * reg['slope']).dropna() resid_adj = resid_adj - reg.intercept resid_adj.name = 'residual (adjusted)' fig, axs = subplots(1,2, figsize=(10,4), sharey=True) series_scatter(residual, age, ax=axs[0]) series_scatter(resid_adj, age, ax=axs[1]) for ax in axs: prettify_ax(ax) fig.tight_layout() #r = p2 - a2 a,b,c = residual.groupby(duration) sp.stats.bartlett(a[1].dropna(), c[1].dropna()) sp.stats.bartlett(a[1].dropna(), b[1].dropna(), c[1].dropna()) violin_plot_pandas(duration, p2 - a2) Explanation: Adjust out the age effect in age advancement This could be an artifact of a number of things including the association of age with duration of HIV In general we don't have much evidence for an age dependent effect but there is a slight trend For now we will look at both the raw age advancment and the age adjusted advancement When age is not adjusted for its probably a good idea to use age as a covariate as we might get spurious correlations with are artifacts of this age trend End of explanation l2 = (labs.ix[:, labs.dtypes.isin([dtype('int64'), dtype('float64')])] .dropna(1, how='all')) l3 = labs.ix[:, ti(labs.apply(lambda s: len(s.unique()), axis=0) < 6)] spearman_pandas(residual, np.log2(l2['CD4/CD8 ratio'])) pearson_pandas(residual, np.log2(l2['CD4/CD8 ratio'])) spearman_pandas(residual.ix[ti(duration=='HIV Long')], np.log2(l2['CD4/CD8 ratio'])) spearman_pandas(resid_adj.ix[ti(duration=='HIV Short')], np.log2(l2['CD4/CD8 ratio'])) spearman_pandas(resid_adj, np.log2(l2['CD4/CD8 ratio'])) series_scatter(residual, np.log2(l2['CD4/CD8 ratio'])) l2 = (labs.ix[:, labs.dtypes.isin([dtype('int64'), dtype('float64')])] .dropna(1, how='all')) l3 = labs.ix[:, ti(labs.apply(lambda s: len(s.unique()), axis=0) < 6)] keepers = labs.index.difference(['RG065','RG175','RG279','RA182','RM285']) keepers = keepers.intersection(duration.index) l2 = l2.ix[keepers] l3 = l3.ix[keepers] duration.name = 'duration' violin_plot_pandas(combine(labs['LLQ PLASMA'] == '>LLQ', duration=='HIV Long'), age, order=['neither','duration','both','LLQ PLASMA']) violin_plot_pandas(combine(labs['LLQ PLASMA'] == '>LLQ', duration=='HIV Long'), age_advancement, order=['neither','duration','both','LLQ PLASMA']) 
series_scatter(np.log(labs['rnvalue PLASMA'][labs['LLQ PLASMA'] == '>LLQ']), age_advancement) screen_feature(age_advancement, pearson_pandas, l2.T, align=False).head() bins = np.floor(age_advancement / 5.) bins = bins.clip(-1,2) spearman_pandas(bins, l2.MCV) fig, axs = subplots(1,2, figsize=(6,4)) bins = np.floor(age_advancement / 5.) bins = bins.clip(-1,2).map({-1: '< 0', 0:'0-5', 1:'5+', 2:'5+'}) box_plot_pandas(bins, l2.MCV, order=['< 0','0-5','5+'], ax=axs[0]) box_plot_pandas(bins, l2['age'], order=['< 0','0-5','5+'], ax=axs[1]) for ax in axs: prettify_ax(ax) fig.tight_layout() fig, ax = subplots(figsize=(5,4)) series_scatter(age_advancement, l2.MCV, ax=ax, color=colors[3], edgecolor='black') prettify_ax(ax) fig.tight_layout() fig.savefig(FIGDIR + 'mcv_age_advancement.png', dpi=300) Explanation: Look at Confounders Data from the labs End of explanation screen_feature(age_advancement, spearman_pandas, cell_counts.T, align=False) Explanation: Cell composition from mixture model estimates End of explanation fig, ax = subplots(1,1, figsize=(4,3)) rr = cell_counts.NK k = pred_c.index hiv = duration != 'Control' sns.regplot(*match_series(residual.ix[k], rr.ix[ti(hiv==0)]), ax=ax, label='HIV+') sns.regplot(*match_series(residual.ix[k], rr.ix[ti(hiv>0)]), ax=ax, label='Control') prettify_ax(ax) Explanation: While we see a significant effect of NK cell concentration with increasing age advancment, this does not seem to be specific to HIV+ patients. End of explanation age_adj.name = 'age_advancement' hiv = (duration != 'Control').astype(float) hiv.name = 'HIV' age.name = 'bio_age' duration_t = clinical['estimated duration hiv (months)'] / 12. duration.name = 'duration' monocytes = labs['Monocyte %'] monocytes.name = 'monocytes' df = process_factors([age_advancement, duration, age, age_at_dx, l2.MCV, l2.MCH, cell_counts.NK, cell_counts.CD4T, monocytes], standardize=True) fmla = robjects.Formula('age_advancement ~ bio_age + MCV + NK') m = robjects.r.lm(fmla, df) s = robjects.r.summary(m) print '\n\n'.join(str(s).split('\n\n')[-3:]) Explanation: Multivariate modeling of confounders Here we are looking at biological age, MCV, and NK cell count. We contructed a similar model with monocyte count as well but found that it did not add to the model fit. 
End of explanation hiv = (duration != 'Control').astype(float) hiv.name = 'HIV' age.name = 'chron_age' pred_c.name = 'bio_age' hiv = (duration != 'Control').astype(float) hiv.name = 'HIV' df = process_factors([residual, hiv, age, cell_counts.NK, cell_counts.CD4T, cell_counts.CD8T, cell_counts.Bcell, cell_counts.Mono, cell_counts.Gran], standardize=False) fmla = robjects.Formula('residual ~ chron_age + HIV + NK + CD4T + CD8T + ' 'Bcell + Mono + Gran') m = robjects.r.lm(fmla, df) s = robjects.r.summary(m) print '\n\n'.join(str(s).split('\n\n')[-3:]) hiv = (duration != 'Control').astype(float) hiv.name = 'HIV' df = process_factors([residual, hiv, pred_c, age, cell_counts.NK, cell_counts.CD4T, cell_counts.CD8T, cell_counts.Bcell, cell_counts.Mono, cell_counts.Gran], standardize=False) fmla = robjects.Formula('residual ~ bio_age + HIV + NK') m = robjects.r.lm(fmla, df) s = robjects.r.summary(m) print '\n\n'.join(str(s).split('\n\n')[-3:]) Explanation: Modeling residuals of aging model with HIV and cell composition Here we have to use the estimated cell counts as we do not have blood work for the controls End of explanation hiv = (duration != 'Control').astype(float) hiv.name = 'HIV' age.name = 'chron_age' pred_c.name = 'bio_age' df = process_factors([residual, hiv, pred_c, age, cell_counts.NK, cell_counts.CD4T, cell_counts.CD8T, cell_counts.Bcell, cell_counts.Mono, cell_counts.Gran]) fmla = robjects.Formula('bio_age ~ chron_age + NK + CD4T + CD8T + ' 'Bcell + Mono + Gran') m = robjects.r.lm(fmla, df) s = robjects.r.summary(m) print '\n\n'.join(str(s).split('\n\n')[-3:]) 1.4299 / 2.3176 hiv = (duration != 'Control').astype(float) hiv.name = 'HIV' age.name = 'chron_age' pred_c.name = 'bio_age' df = process_factors([residual, hiv, pred_c, age, cell_counts.NK, cell_counts.CD4T, cell_counts.CD8T, cell_counts.Bcell, cell_counts.Mono, cell_counts.Gran]) fmla = robjects.Formula('residual ~ chron_age + NK + CD4T + CD8T + ' 'Bcell + Mono + Gran') m = robjects.r.lm(fmla, df) s = robjects.r.summary(m) print '\n\n'.join(str(s).split('\n\n')[-3:]) rmse = lambda v: (v ** 2).mean() ** .5 v = robjects.r.residuals(m) r2 = pd.Series(pandas2ri.ri2py(v), index=list(v.names[0])) r2.name = 'residual' hiv = (duration != 'Control').astype(float) hiv.name = 'HIV' df = process_factors([r2, hiv, pred_c, cell_counts.NK, cell_counts.CD4T, cell_counts.CD8T, cell_counts.Bcell, cell_counts.Mono, cell_counts.Gran]) fmla = robjects.Formula('residual ~ HIV') m1 = robjects.r.lm(fmla, df) s = robjects.r.summary(m1) print '\n\n'.join(str(s).split('\n\n')[-3:]) Explanation: Looking at residuals of model fit with cell composition This is very conservative End of explanation
8,239
Given the following text description, write Python code to implement the functionality described below step by step Description: What is a Jupyter Notebook? From the Jupyter website (http Step1: Text can be entered into cells by designating it for markdown, allowing simple formatting. It is also possible to enter Latex formulae into markdown cells Step2: Sharing your notebook To allow others to interact with your notebook, create a github repository of the notebook, data, and dependencies. Then share using mybinder Widgets for easy interaction!
Python Code: # code goes into these boxes ("cells") # cells are executed consecutively, with the output printed immediately beneath the cell print("Welcome to SMARTFest 2018!") Explanation: What is a Jupyter Notebook? From the Jupyter website (http://jupyter.org): The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. Uses include: * data cleaning and transformation * numerical simulation * statistical modeling * data visualization * machine learning, and much more. Some helpful resources: Anaconda is a very easy way to get started with Jupyter notebooks for Windows, Mac and Linux environments A gallery of interesting notebooks includes tutorials, and notebooks published in academic journals MyBinder is a new way to share your notebooks interactively nbextensions offers a range of useful notebook extensions Jupyterlab is the next generation Jupyter notebook, with some very nice new features. It is still under development, but is worth getting familiar with A small Demo: End of explanation %matplotlib inline import matplotlib.pyplot as plt # import a practice dataset from sklearn import datasets diabetes = datasets.load_diabetes() bmi = diabetes.data[2] bp = diabetes.data[3] plt.scatter(bmi,bp); # add a line of best fit: import numpy as np z = np.polyfit(x=bmi, y=bp, deg=1) p = np.poly1d(z) trendline = p(sorted(bmi)) plt.scatter(bmi,bp) plt.plot(sorted(bmi),trendline); Explanation: Text can be entered into cells by designating it for markdown, allowing simple formatting. It is also possible to enter Latex formulae into markdown cells: $$ p_{obs} = \frac{(k+1)\cdot p_{linear}}{k \cdot p_{linear}+1} $$ Graphical output can also be incorporated: End of explanation from ipywidgets import interact, interactive, fixed, interact_manual import ipywidgets as widgets def f(x): return 3*x^2 - 2*x + 8 interact(f, x=10); def g(mu,sigma): s = np.random.normal(mu, sigma, 1000) count, bins, ignored = plt.hist(s, 30, normed=True) plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) * np.exp( - (bins - mu)**2 / (2 * sigma**2) ), linewidth=2, color='r') plt.xlim((-5,5)) plt.ylim((0,3)) plt.show() interact(g, mu=widgets.FloatSlider(min=-4,max=4,step=0.5,value=0), sigma=widgets.FloatSlider(min=0.1,max=3,step=0.1,value=0.1)); def random_plot(sigma=0.85): x = np.random.randint(0,50,100) s = sigma * np.random.randn(100) y = x + (10 * s) plt.scatter(x,y) random_plot(0.1) interact(random_plot, sigma=widgets.FloatSlider(min=0.1,max=1,step=0.1,value=0.5)); Explanation: Sharing your notebook To allow others to interact with your notebook, create a github repository of the notebook, data, and dependencies. Then share using mybinder Widgets for easy interaction! End of explanation
8,240
Given the following text description, write Python code to implement the functionality described below step by step Description: Illustrates numpy vs einsum In deep learning, we perform a lot of tensor operations. einsum simplifies and unifies the APIs for these operations. einsum can be found in numerical computation libraries and deep learning frameworks. Let us demonstrate how to import and use einsum in numpy, TensorFlow and PyTorch. Step1: Tensor multiplication with transpose in numpy and einsum Step2: Properties of square matrices in numpy and einsum We demonstrate diagonal. Step3: Trace. Step4: Sum along an axis. Step5: Let us demonstrate tensor transpose. We can also use w.T to transpose w in numpy. Step6: Dot, inner and outer products in numpy and einsum.
Python Code: import numpy as np from numpy import einsum w = np.arange(6).reshape(2,3).astype(np.float32) x = np.ones((3,1), dtype=np.float32) print("w:\n", w) print("x:\n", x) y = np.matmul(w, x) print("y:\n", y) y = einsum('ij,jk->ik', torch.from_numpy(w), torch.from_numpy(x)) print("y:\n", y) Explanation: Illustrates numpy vs einsum In deep learning, we perform a lot of tensor operations. einsum simplifies and unifies the APIs for these operations. einsum can be found in numerical computation libraries and deep learning frameworks. Let us demonstrate how to import and use einsum in numpy, TensorFlow and PyTorch. End of explanation w = np.arange(6).reshape(2,3).astype(np.float32) x = np.ones((1,3), dtype=np.float32) print("w:\n", w) print("x:\n", x) y = np.matmul(w, np.transpose(x)) print("y:\n", y) y = einsum('ij,kj->ik', w, x) print("y:\n", y) Explanation: Tensor multiplication with transpose in numpy and einsum End of explanation w = np.arange(9).reshape(3,3).astype(np.float32) d = np.diag(w) print("w:\n", w) print("d:\n", d) d = einsum('ii->i', w) print("d:\n", d) Explanation: Properties of square matrices in numpy and einsum We demonstrate diagonal. End of explanation t = np.trace(w) print("t:\n", t) t = einsum('ii->', w) print("t:\n", t) Explanation: Trace. End of explanation s = np.sum(w, axis=0) print("s:\n", s) s = einsum('ij->j', w) print("s:\n", s) Explanation: Sum along an axis. End of explanation t = np.transpose(w) print("t:\n", t) t = einsum("ij->ji", w) print("t:\n", t) Explanation: Let us demonstrate tensor transpose. We can also use w.T to transpose w in numpy. End of explanation a = np.ones((3,), dtype=np.float32) b = np.ones((3,), dtype=np.float32) * 2 print("a:\n", a) print("b:\n", b) d = np.dot(a,b) print("d:\n", d) d = einsum("i,i->", a, b) print("d:\n", d) i = np.inner(a, b) print("i:\n", i) i = einsum("i,i->", a, b) print("i:\n", i) o = np.outer(a,b) print("o:\n", o) o = einsum("i,j->ij", a, b) print("o:\n", o) Explanation: Dot, inner and outer products in numpy and einsum. End of explanation
8,241
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Language Translation In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. Step3: Explore the Data Play around with view_sentence_range to view different parts of the data. Step7: Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing Step9: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. Step11: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. Step13: Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU Step16: Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below Step19: Process Decoder Input Implement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch. Step22: Encoding Implement encoding_layer() to create a Encoder RNN layer Step25: Decoding - Training Create a training decoding layer Step28: Decoding - Inference Create inference decoder Step31: Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Embed the target sequences Construct the decoder LSTM cell (just like you constructed the encoder cell above) Create an output layer to map the outputs of the decoder to the elements of our vocabulary Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits. Note Step34: Build the Neural Network Apply the functions you implemented above to Step35: Neural Network Training Hyperparameters Tune the following parameters Step37: Build the Graph Build the graph using the neural network you implemented. Step41: Batch and pad the source and target sequences Step44: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem. Step46: Save Parameters Save the batch_size and save_path parameters for inference. Step48: Checkpoint Step51: Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. 
Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase Convert words into ids using vocab_to_int Convert words not in the vocabulary, to the &lt;UNK&gt; word id. Step53: Translate This will translate translate_sentence from English to French.
Python Code: DON'T MODIFY ANYTHING IN THIS CELL import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) Explanation: Language Translation In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. End of explanation view_sentence_range = (0, 10) DON'T MODIFY ANYTHING IN THIS CELL import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) Explanation: Explore the Data Play around with view_sentence_range to view different parts of the data. End of explanation def get_ids(input_text, vocab_to_int): Returns a list of word IDs for each word in the input :param input_text: Input string :param vocab_to_int: A mapping of word to wordID. :return: A list of [IDs] for each sentence in the input. return [[vocab_to_int[word] for word in sentence.split()] for sentence in input_text] def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. :param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) # TODO: Implement Function source_sentences = [sentence for sentence in source_text.split('\n')] target_sentences = [sentence + ' <EOS>' for sentence in target_text.split('\n')] source_id_text = get_ids(source_sentences, source_vocab_to_int) target_id_text = get_ids(target_sentences, target_vocab_to_int) return source_id_text, target_id_text DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_text_to_ids(text_to_ids) Explanation: Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing: python target_vocab_to_int['&lt;EOS&gt;'] You can get other word ids using source_vocab_to_int and target_vocab_to_int. 
End of explanation DON'T MODIFY ANYTHING IN THIS CELL helper.preprocess_and_save_data(source_path, target_path, text_to_ids) Explanation: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. End of explanation DON'T MODIFY ANYTHING IN THIS CELL import numpy as np import helper import problem_unittests as tests (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation DON'T MODIFY ANYTHING IN THIS CELL from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) Explanation: Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU End of explanation def model_inputs(): Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. :return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) # TODO: Implement Function inputs = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None], name='target') learning_rate = tf.placeholder(tf.float32, name='learning_rate') keep_prob = tf.placeholder(tf.float32, name='keep_prob') target_seq_length = tf.placeholder(tf.int32, [None], name='target_sequence_length') max_target_seq_length = tf.reduce_max(target_seq_length) source_seq_length = tf.placeholder(tf.int32, [None], name='source_sequence_length') return inputs, targets, learning_rate, keep_prob, target_seq_length, max_target_seq_length, source_seq_length DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_model_inputs(model_inputs) Explanation: Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below: - model_inputs - process_decoder_input - encoding_layer - decoding_layer_train - decoding_layer_infer - decoding_layer - seq2seq_model Input Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: Input text placeholder named "input" using the TF Placeholder name parameter with rank 2. Targets placeholder with rank 2. Learning rate placeholder with rank 0. Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0. Target sequence length placeholder named "target_sequence_length" with rank 1 Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0. 
Source sequence length placeholder named "source_sequence_length" with rank 1 Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) End of explanation def process_decoder_input(target_data, target_vocab_to_int, batch_size): Preprocess target data for encoding :param target_data: Target Placehoder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data # TODO: Implement Function end = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) return tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), end], 1) DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_process_encoding_input(process_decoder_input) Explanation: Process Decoder Input Implement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch. End of explanation from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output, RNN state) # TODO: Implement Function embed_encoder_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size) # Create an LSTM cell wrapped in a DropOutWrapper def create_lstm_cell(rnn_size): encoder_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=42)) return tf.contrib.rnn.DropoutWrapper(encoder_cell, output_keep_prob=keep_prob) encoder_cell = tf.contrib.rnn.MultiRNNCell([create_lstm_cell(rnn_size) for _ in range(num_layers)]) return tf.nn.dynamic_rnn(encoder_cell, embed_encoder_input, sequence_length=source_sequence_length, dtype=tf.float32) DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_encoding_layer(encoding_layer) Explanation: Encoding Implement encoding_layer() to create a Encoder RNN layer: * Embed the encoder input using tf.contrib.layers.embed_sequence * Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper * Pass cell and embedded input to tf.nn.dynamic_rnn() End of explanation def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id # TODO: Implement Function # Try if a dropout has to be added here for the Decoder RNN cell. 
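    # TrainingHelper feeds the embedded ground-truth target tokens to the decoder at each
    # time step (teacher forcing), so during training the decoder never consumes its own
    # predictions; the inference decoder defined below uses GreedyEmbeddingHelper instead.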
helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input, sequence_length=target_sequence_length, time_major=False) decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer) output = tf.contrib.seq2seq.dynamic_decode(decoder, impute_finished=True, maximum_iterations=max_summary_length)[0] return output DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer_train(decoding_layer_train) Explanation: Decoding - Training Create a training decoding layer: * Create a tf.contrib.seq2seq.TrainingHelper * Create a tf.contrib.seq2seq.BasicDecoder * Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode End of explanation def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param decoding_scope: TenorFlow Variable Scope for decoding :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id # TODO: Implement Function start_token = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_token') helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_token, end_of_sequence_id) decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer) decoder_output = tf.contrib.seq2seq.dynamic_decode(decoder, impute_finished=True, maximum_iterations=max_target_sequence_length)[0] return decoder_output DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer_infer(decoding_layer_infer) Explanation: Decoding - Inference Create inference decoder: * Create a tf.contrib.seq2seq.GreedyEmbeddingHelper * Create a tf.contrib.seq2seq.BasicDecoder * Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode End of explanation def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :param decoding_embedding_size: Decoding embedding size :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) # TODO: Implement Function decoded_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) decoded_embed_input = tf.nn.embedding_lookup(decoded_embeddings, dec_input) # Create an LSTM cell wrapped in a DropOutWrapper def create_lstm_cell(rnn_size): encoder_cell = tf.contrib.rnn.LSTMCell(rnn_size, 
initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=42)) return encoder_cell decoded_cell = tf.contrib.rnn.MultiRNNCell([create_lstm_cell(rnn_size) for _ in range(num_layers)]) output_layer = Dense(target_vocab_size, kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1)) with tf.variable_scope("decode"): training_logits = decoding_layer_train(encoder_state, decoded_cell, decoded_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) with tf.variable_scope("decode", reuse=True): inference_logits = decoding_layer_infer(encoder_state, decoded_cell, decoded_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], max_target_sequence_length, len(target_vocab_to_int), output_layer, batch_size, keep_prob) return training_logits, inference_logits DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer(decoding_layer) Explanation: Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Embed the target sequences Construct the decoder LSTM cell (just like you constructed the encoder cell above) Create an output layer to map the outputs of the decoder to the elements of our vocabulary Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits. Note: You'll need to use tf.variable_scope to share variables between training and inference. End of explanation def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Decoder embedding size :param dec_embedding_size: Encoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) # TODO: Implement Function _, encoding_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, enc_embedding_size) decoding_input = process_decoder_input(target_data, target_vocab_to_int, batch_size) training_dec_output, inference_dec_output = decoding_layer(decoding_input, encoding_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) return training_dec_output, inference_dec_output DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_seq2seq_model(seq2seq_model) Explanation: Build the Neural Network Apply the functions you implemented above to: Encode 
the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size). Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function. Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function. End of explanation # Number of Epochs epochs = 8 # Batch Size batch_size = 128 # RNN Size rnn_size = 256 # Number of Layers num_layers = 3 # Embedding Size encoding_embedding_size = 128 decoding_embedding_size = 128 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability keep_probability = 0.6 display_step = 100 Explanation: Neural Network Training Hyperparameters Tune the following parameters: Set epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set num_layers to the number of layers. Set encoding_embedding_size to the size of the embedding for the encoder. Set decoding_embedding_size to the size of the embedding for the decoder. Set learning_rate to the learning rate. Set keep_probability to the Dropout keep probability Set display_step to state how many steps between each debug output statement End of explanation DON'T MODIFY ANYTHING IN THIS CELL save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) Explanation: Build the Graph Build the graph using the neural network you implemented. 
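The masks tensor built above with tf.sequence_mask is what keeps the <PAD> positions from contributing to the loss. A quick self-contained sketch of what it produces (toy lengths, TF 1.x style):

import tensorflow as tf
with tf.Session() as sess:
    # two target sequences of true lengths 2 and 3, padded out to 4 time steps
    print(sess.run(tf.sequence_mask([2, 3], maxlen=4, dtype=tf.float32)))
    # [[1. 1. 0. 0.]
    #  [1. 1. 1. 0.]] -- sequence_loss multiplies each position's loss by these weights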
End of explanation DON'T MODIFY ANYTHING IN THIS CELL def pad_sentence_batch(sentence_batch, pad_int): Pad sentences with <PAD> so that each sentence of a batch has the same length max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): Batch targets, sources, and the lengths of their sentences together for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths Explanation: Batch and pad the source and target sequences End of explanation DON'T MODIFY ANYTHING IN THIS CELL def get_accuracy(target, logits): Calculate accuracy max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') Explanation: Train Train the neural network on the preprocessed 
data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem. End of explanation DON'T MODIFY ANYTHING IN THIS CELL # Save parameters for checkpoint helper.save_params(save_path) Explanation: Save Parameters Save the batch_size and save_path parameters for inference. End of explanation DON'T MODIFY ANYTHING IN THIS CELL import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() Explanation: Checkpoint End of explanation def sentence_to_seq(sentence, vocab_to_int): Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids # TODO: Implement Function return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()] DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_sentence_to_seq(sentence_to_seq) Explanation: Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase Convert words into ids using vocab_to_int Convert words not in the vocabulary, to the &lt;UNK&gt; word id. End of explanation translate_sentence = 'he saw a old yellow truck .' DON'T MODIFY ANYTHING IN THIS CELL translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) Explanation: Translate This will translate translate_sentence from English to French. End of explanation
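If you want to try sentences of your own, keep in mind that only words seen in the training vocabulary translate meaningfully; everything else falls back to <UNK>. A small optional check (a sketch that reuses the source_vocab_to_int loaded above) before translating:

custom_sentence = 'she drives a small red car .'
unknown_words = [w for w in custom_sentence.lower().split() if w not in source_vocab_to_int]
if unknown_words:
    print('These words will be treated as <UNK>:', unknown_words)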
8,242
Given the following text description, write Python code to implement the functionality described below step by step Description: Neural Network <img style="float Step1: Initialize Weights Let's start looking at some initial weights. All Zeros or Ones If you follow the principle of Occam's razor, you might think setting all the weights to 0 or 1 would be the best solution. This is not the case. With every weight the same, all the neurons at each layer are producing the same output. This makes it hard to decide which weights to adjust. Let's compare the loss with all ones and all zero weights using helper.compare_init_weights. This function will run two different initial weights on the neural network above for 2 epochs. It will plot the loss for the first 100 batches and print out stats after the 2 epochs (~860 batches). We plot the first 100 batches to better judge which weights performed better at the start. Run the cell below to see the difference between weights of all zeros against all ones. Step2: As you can see the accuracy is close to guessing for both zeros and ones, around 10%. The neural network is having a hard time determining which weights need to be changed, since the neurons have the same output for each layer. To avoid neurons with the same output, let's use unique weights. We can also randomly select these weights to avoid being stuck in a local minimum for each run. A good solution for getting these random weights is to sample from a uniform distribution. Uniform Distribution A [uniform distribution](https Step3: The histogram used 500 buckets for the 1000 values. Since the chance for any single bucket is the same, there should be around 2 values for each bucket. That's exactly what we see with the histogram. Some buckets have more and some have less, but they trend around 2. Now that you understand the tf.random_uniform function, let's apply it to some initial weights. Baseline Let's see how well the neural network trains using the default values for tf.random_uniform, where minval=0.0 and maxval=1.0. Step4: The loss graph is showing the neural network is learning, which it didn't with all zeros or all ones. We're headed in the right direction. General rule for setting weights The general rule for setting the weights in a neural network is to be close to zero without being too small. A good pracitce is to start your weights in the range of $[-y, y]$ where $y=1/\sqrt{n}$ ($n$ is the number of inputs to a given neuron). Let's see if this holds true, let's first center our range over zero. This will give us the range [-1, 1). Step5: We're going in the right direction, the accuracy and loss is better with [-1, 1). We still want smaller weights. How far can we go before it's too small? Too small Let's compare [-0.1, 0.1), [-0.01, 0.01), and [-0.001, 0.001) to see how small is too small. We'll also set plot_n_batches=None to show all the batches in the plot. Step6: Looks like anything [-0.01, 0.01) or smaller is too small. Let's compare this to our typical rule of using the range $y=1/\sqrt{n}$. Step7: The range we found and $y=1/\sqrt{n}$ are really close. Since the uniform distribution has the same chance to pick anything in the range, what if we used a distribution that had a higher chance of picking numbers closer to 0. Let's look at the normal distribution. Normal Distribution Unlike the uniform distribution, the normal distribution has a higher likelihood of picking number close to it's mean. 
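A quick numerical sanity check of that claim (a sketch with numpy rather than TensorFlow; the exact figure varies with the seed): roughly 68% of draws from a standard normal fall within one standard deviation of the mean, which is what concentrates the weights near zero.

import numpy as np
samples = np.random.normal(loc=0.0, scale=1.0, size=100000)
print((np.abs(samples) < 1.0).mean())  # ~0.68 for a standard normal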
To visualize it, let's plot values from TensorFlow's tf.random_normal function to a histogram. tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None) Outputs random values from a normal distribution. shape Step8: Let's compare the normal distribution against the previous uniform distribution. Step9: The normal distribution gave a slight increasse in accuracy and loss. Let's move closer to 0 and drop picked numbers that are x number of standard deviations away. This distribution is called Truncated Normal Distribution. Truncated Normal Distribution tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None) Outputs random values from a truncated normal distribution. The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked. shape Step10: Again, let's compare the previous results with the previous distribution. Step11: There's no difference between the two, but that's because the neural network we're using is too small. A larger neural network will pick more points on the normal distribution, increasing the likelihood it's choices are larger than 2 standard deviations. We've come a long way from the first set of weights we tested. Let's see the difference between the weights we used then and now.
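To make the earlier general rule concrete (a sketch; 784 is the number of MNIST input pixels used in this notebook, 256 and 128 are the hidden layer sizes):

import numpy as np
for n in (784, 256, 128):
    print(n, 1 / np.sqrt(n))  # ~0.036, ~0.0625, ~0.088 -- close to the [-0.1, 0.1) range found above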
Python Code: # Save the shapes of weights for each layer layer_1_weight_shape = (mnist.train.images.shape[1], 256) layer_2_weight_shape = (256, 128) layer_3_weight_shape = (128, mnist.train.labels.shape[1]) Explanation: Neural Network <img style="float: left" src="images/neural_network.png"/> For the neural network, we'll test on a 3 layer neural network with ReLU activations and an Adam optimizer. The lessons you learn apply to other neural networks, including different activations and optimizers. End of explanation all_zero_weights = [ tf.Variable(tf.zeros(layer_1_weight_shape)), tf.Variable(tf.zeros(layer_2_weight_shape)), tf.Variable(tf.zeros(layer_3_weight_shape)) ] all_one_weights = [ tf.Variable(tf.ones(layer_1_weight_shape)), tf.Variable(tf.ones(layer_2_weight_shape)), tf.Variable(tf.ones(layer_3_weight_shape)) ] helper.compare_init_weights( mnist, 'All Zeros vs All Ones', [ (all_zero_weights, 'All Zeros'), (all_one_weights, 'All Ones')]) Explanation: Initialize Weights Let's start looking at some initial weights. All Zeros or Ones If you follow the principle of Occam's razor, you might think setting all the weights to 0 or 1 would be the best solution. This is not the case. With every weight the same, all the neurons at each layer are producing the same output. This makes it hard to decide which weights to adjust. Let's compare the loss with all ones and all zero weights using helper.compare_init_weights. This function will run two different initial weights on the neural network above for 2 epochs. It will plot the loss for the first 100 batches and print out stats after the 2 epochs (~860 batches). We plot the first 100 batches to better judge which weights performed better at the start. Run the cell below to see the difference between weights of all zeros against all ones. End of explanation helper.hist_dist('Random Uniform (minval=-3, maxval=3)', tf.random_uniform([1000], -3, 3)) Explanation: As you can see the accuracy is close to guessing for both zeros and ones, around 10%. The neural network is having a hard time determining which weights need to be changed, since the neurons have the same output for each layer. To avoid neurons with the same output, let's use unique weights. We can also randomly select these weights to avoid being stuck in a local minimum for each run. A good solution for getting these random weights is to sample from a uniform distribution. Uniform Distribution A [uniform distribution](https://en.wikipedia.org/wiki/Uniform_distribution_(continuous%29) has the equal probability of picking any number from a set of numbers. We'll be picking from a continous distribution, so the chance of picking the same number is low. We'll use TensorFlow's tf.random_uniform function to pick random numbers from a uniform distribution. tf.random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None, name=None) Outputs random values from a uniform distribution. The generated values follow a uniform distribution in the range [minval, maxval). The lower bound minval is included in the range, while the upper bound maxval is excluded. shape: A 1-D integer Tensor or Python array. The shape of the output tensor. minval: A 0-D Tensor or Python value of type dtype. The lower bound on the range of random values to generate. Defaults to 0. maxval: A 0-D Tensor or Python value of type dtype. The upper bound on the range of random values to generate. Defaults to 1 if dtype is floating point. dtype: The type of the output: float32, float64, int32, or int64. 
seed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior. name: A name for the operation (optional). We can visualize the uniform distribution by using a histogram. Let's map the values from tf.random_uniform([1000], -3, 3) to a histogram using the helper.hist_dist function. This will be 1000 random float values from -3 to 3, excluding the value 3. End of explanation # Default for tf.random_uniform is minval=0 and maxval=1 basline_weights = [ tf.Variable(tf.random_uniform(layer_1_weight_shape)), tf.Variable(tf.random_uniform(layer_2_weight_shape)), tf.Variable(tf.random_uniform(layer_3_weight_shape)) ] helper.compare_init_weights( mnist, 'Baseline', [(basline_weights, 'tf.random_uniform [0, 1)')]) Explanation: The histogram used 500 buckets for the 1000 values. Since the chance for any single bucket is the same, there should be around 2 values for each bucket. That's exactly what we see with the histogram. Some buckets have more and some have less, but they trend around 2. Now that you understand the tf.random_uniform function, let's apply it to some initial weights. Baseline Let's see how well the neural network trains using the default values for tf.random_uniform, where minval=0.0 and maxval=1.0. End of explanation uniform_neg1to1_weights = [ tf.Variable(tf.random_uniform(layer_1_weight_shape, -1, 1)), tf.Variable(tf.random_uniform(layer_2_weight_shape, -1, 1)), tf.Variable(tf.random_uniform(layer_3_weight_shape, -1, 1)) ] helper.compare_init_weights( mnist, '[0, 1) vs [-1, 1)', [ (basline_weights, 'tf.random_uniform [0, 1)'), (uniform_neg1to1_weights, 'tf.random_uniform [-1, 1)')]) Explanation: The loss graph is showing the neural network is learning, which it didn't with all zeros or all ones. We're headed in the right direction. General rule for setting weights The general rule for setting the weights in a neural network is to be close to zero without being too small. A good pracitce is to start your weights in the range of $[-y, y]$ where $y=1/\sqrt{n}$ ($n$ is the number of inputs to a given neuron). Let's see if this holds true, let's first center our range over zero. This will give us the range [-1, 1). End of explanation uniform_neg01to01_weights = [ tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.1, 0.1)), tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.1, 0.1)), tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.1, 0.1)) ] uniform_neg001to001_weights = [ tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.01, 0.01)), tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.01, 0.01)), tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.01, 0.01)) ] uniform_neg0001to0001_weights = [ tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.001, 0.001)), tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.001, 0.001)), tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.001, 0.001)) ] helper.compare_init_weights( mnist, '[-1, 1) vs [-0.1, 0.1) vs [-0.01, 0.01) vs [-0.001, 0.001)', [ (uniform_neg1to1_weights, '[-1, 1)'), (uniform_neg01to01_weights, '[-0.1, 0.1)'), (uniform_neg001to001_weights, '[-0.01, 0.01)'), (uniform_neg0001to0001_weights, '[-0.001, 0.001)')], plot_n_batches=None) Explanation: We're going in the right direction, the accuracy and loss is better with [-1, 1). We still want smaller weights. How far can we go before it's too small? Too small Let's compare [-0.1, 0.1), [-0.01, 0.01), and [-0.001, 0.001) to see how small is too small. 
We'll also set plot_n_batches=None to show all the batches in the plot. End of explanation import numpy as np general_rule_weights = [ tf.Variable(tf.random_uniform(layer_1_weight_shape, -1/np.sqrt(layer_1_weight_shape[0]), 1/np.sqrt(layer_1_weight_shape[0]))), tf.Variable(tf.random_uniform(layer_2_weight_shape, -1/np.sqrt(layer_2_weight_shape[0]), 1/np.sqrt(layer_2_weight_shape[0]))), tf.Variable(tf.random_uniform(layer_3_weight_shape, -1/np.sqrt(layer_3_weight_shape[0]), 1/np.sqrt(layer_3_weight_shape[0]))) ] helper.compare_init_weights( mnist, '[-0.1, 0.1) vs General Rule', [ (uniform_neg01to01_weights, '[-0.1, 0.1)'), (general_rule_weights, 'General Rule')], plot_n_batches=None) Explanation: Looks like anything [-0.01, 0.01) or smaller is too small. Let's compare this to our typical rule of using the range $y=1/\sqrt{n}$. End of explanation helper.hist_dist('Random Normal (mean=0.0, stddev=1.0)', tf.random_normal([1000])) Explanation: The range we found and $y=1/\sqrt{n}$ are really close. Since the uniform distribution has the same chance to pick anything in the range, what if we used a distribution that had a higher chance of picking numbers closer to 0. Let's look at the normal distribution. Normal Distribution Unlike the uniform distribution, the normal distribution has a higher likelihood of picking number close to it's mean. To visualize it, let's plot values from TensorFlow's tf.random_normal function to a histogram. tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None) Outputs random values from a normal distribution. shape: A 1-D integer Tensor or Python array. The shape of the output tensor. mean: A 0-D Tensor or Python value of type dtype. The mean of the normal distribution. stddev: A 0-D Tensor or Python value of type dtype. The standard deviation of the normal distribution. dtype: The type of the output. seed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior. name: A name for the operation (optional). End of explanation normal_01_weights = [ tf.Variable(tf.random_normal(layer_1_weight_shape, stddev=0.1)), tf.Variable(tf.random_normal(layer_2_weight_shape, stddev=0.1)), tf.Variable(tf.random_normal(layer_3_weight_shape, stddev=0.1)) ] helper.compare_init_weights( mnist, 'Uniform [-0.1, 0.1) vs Normal stddev 0.1', [ (uniform_neg01to01_weights, 'Uniform [-0.1, 0.1)'), (normal_01_weights, 'Normal stddev 0.1')]) Explanation: Let's compare the normal distribution against the previous uniform distribution. End of explanation helper.hist_dist('Truncated Normal (mean=0.0, stddev=1.0)', tf.truncated_normal([1000])) Explanation: The normal distribution gave a slight increasse in accuracy and loss. Let's move closer to 0 and drop picked numbers that are x number of standard deviations away. This distribution is called Truncated Normal Distribution. Truncated Normal Distribution tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None) Outputs random values from a truncated normal distribution. The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked. shape: A 1-D integer Tensor or Python array. The shape of the output tensor. mean: A 0-D Tensor or Python value of type dtype. The mean of the truncated normal distribution. stddev: A 0-D Tensor or Python value of type dtype. 
The standard deviation of the truncated normal distribution. dtype: The type of the output. seed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior. name: A name for the operation (optional). End of explanation trunc_normal_01_weights = [ tf.Variable(tf.truncated_normal(layer_1_weight_shape, stddev=0.1)), tf.Variable(tf.truncated_normal(layer_2_weight_shape, stddev=0.1)), tf.Variable(tf.truncated_normal(layer_3_weight_shape, stddev=0.1)) ] helper.compare_init_weights( mnist, 'Normal vs Truncated Normal', [ (normal_01_weights, 'Normal'), (trunc_normal_01_weights, 'Truncated Normal')]) Explanation: Again, let's compare the previous results with the previous distribution. End of explanation helper.compare_init_weights( mnist, 'Baseline vs Truncated Normal', [ (basline_weights, 'Baseline'), (trunc_normal_01_weights, 'Truncated Normal')]) Explanation: There's no difference between the two, but that's because the neural network we're using is too small. A larger neural network will pick more points on the normal distribution, increasing the likelihood it's choices are larger than 2 standard deviations. We've come a long way from the first set of weights we tested. Let's see the difference between the weights we used then and now. End of explanation
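In newer code you usually don't hand-tune these ranges yourself: TensorFlow 1.x already ships initializers that encode the same ideas. A sketch using the layer shapes defined earlier in this notebook:

import tensorflow as tf
xavier_weights = tf.get_variable('xavier_layer_1', shape=layer_1_weight_shape,
                                 initializer=tf.contrib.layers.xavier_initializer())
trunc_weights = tf.get_variable('trunc_layer_1', shape=layer_1_weight_shape,
                                 initializer=tf.truncated_normal_initializer(stddev=0.1))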
8,243
Given the following text description, write Python code to implement the functionality described below step by step Description: iching is a packge developed by Cheng-Jun Wang. It employs the method of Shicao prediction to reproce the prediction of I Ching--the Book of Exchanges. The I Ching ([î tɕíŋ]; Chinese Step1: First Change Step2: Second Change Step3: Third Change Step4: Plot transitions
Python Code: from iching import iching from datetime import date today = date.today() today = str(today).replace('-', '') birthtoday = int('19850526' + today) iching.ichingDate(birthtoday) fixPred, changePred = iching.getPredict() print iching.ichingName(fixPred, changePred ), iching.ichingText(fixPred, iching) iching.ichingDate(1985052620150704) if changePred: print ching.ichingText(fixPred, iching) else: print None Explanation: iching is a packge developed by Cheng-Jun Wang. It employs the method of Shicao prediction to reproce the prediction of I Ching--the Book of Exchanges. The I Ching ([î tɕíŋ]; Chinese: 易經; pinyin: Yìjīng), also known as the Classic of Changes or Book of Changes in English, is an ancient divination text and the oldest of the Chinese classics. Fifty yarrow Achillea millefolium subsp. m. var. millefolium stalks, used for I Ching divination.The Zhou yi provided a guide to cleromancy that used the stalks of the yarrow plant, but it is not known how the yarrow stalks became numbers, or how specific lines were chosen from the line readings.[22] In the hexagrams, broken lines were used as shorthand for the numbers 6 (六) and 8 (八), and solid lines were shorthand for values of 7 (七) and 9 (九). The Great Commentary contains a late classic description of a process where various numerological operations are performed on a bundle of 50 stalks, leaving remainders of 6 to 9. 大衍之数五十,其用四十有九。分而为二以象两,挂一以象三,揲之以四以象四时,归奇于扐以象闰。五岁再闰,故再扐而后挂。天一,地二;天三,地四;天五,地六;天七,地八;天九,地十。天数五,地数五。五位相得而各有合,天数二十有五,地数三十,凡天地之数五十有五,此所以成变化而行鬼神也。乾之策二百一十有六,坤之策百四十有四,凡三百六十,当期之日。二篇之策,万有一千五百二十,当万物之数也。是故四营而成《易》,十有八变而成卦,八卦而小成。引而伸之,触类而长之,天下之能事毕矣。显道神德行,是故可与酬酢,可与祐神矣。子曰:“知变化之道者,其知神之所为乎。” Install pip install iching Import and Use End of explanation data = 50 - 1 sky, earth, firstChange, data = iching.getChange(data) print sky, '\n', earth, '\n',firstChange, '\n', data Explanation: First Change End of explanation sky, earth, secondChange, data = iching.getChange(data) print sky, '\n', earth, '\n',secondChange, '\n', data Explanation: Second Change End of explanation sky, earth, thirdChange, data = iching.getChange(data) print sky, '\n', earth, '\n',thirdChange, '\n', data Explanation: Third Change End of explanation %matplotlib inline iching.plotTransition(100, w = 50) %matplotlib inline import matplotlib.pyplot as plt fig = plt.figure(figsize=(15, 10),facecolor='white') plt.subplot(2, 2, 1) iching.plotTransition(1000, w = 50) plt.subplot(2, 2, 2) iching.plotTransition(1000, w = 50) plt.subplot(2, 2, 3) iching.plotTransition(1000, w = 50) plt.subplot(2, 2, 4) iching.plotTransition(1000, w = 50) Explanation: Plot transitions End of explanation
8,244
Given the following text description, write Python code to implement the functionality described below step by step Description: 2.0 - Evaluate RnR Performance Simple example which will Step1: Set RnR Cluster ID Refer to the sample notebook 1.0 - Create RnR Cluster & Train Ranker for help setting up a cluster. Step2: Split Labelled Data into Training & Validation Splits The InsuranceLibV2 actually provides separate training and validation splits. But for this demo, we pretend we only have access to the 2,000 question dev subset of labelled ground truth data. So we will have to split the data ourselves. This data is included in this repository already formatted in the relevance file format. However, if your ground truth was annotated using the RnR Web UI, you can export it and use the RnRToolingExportFileQueryStream to read the ground truth in directly from the export-questions.json that can be downloaded from the RnR Web UI. Step3: Evaluate Base Retrieve (Solr) On Each Fold The rows setting defines how many search results you wish to evaluate for relevance w.r.t. the ground truth annotations. As a starting point, you will likely want to set a large rows parameter so you can observe recall at varying depths in the search result. I primarily use NDCG as the metric of evaluation (see here for why); but you can choose an alternative metric that makes sense for the application. Step4: At this point, you can experiment with tweaking the solr schema and document ingestion process and assess performance with each tweak to see its impact on the performance of your retrieval system. It's important to tweak one thing at a time so that you can assess their impact in isolation of other changes. You would like to increase the likelihood of overlapping terms between the query and the correct answer documents. So some things to try include Step5: Evaluate Ranker On Each Fold Based on the above analysis, there is a sharp increase in recall until you reach a depth of ~80 at which point the increase in recall starts to level off (though ideally you might go higher to follow the trend). So, for now, we choose 90 as the rows setting for our ranker as a compromise between the number of results which should be. You can, of course, simply try a bunch of rows settings and evaluate overall ranker performance with each setting too. WARNING
Python Code: import sys from os import path, getcwd import json from tempfile import mkdtemp import glob sys.path.extend([path.abspath(path.join(getcwd(), path.pardir))]) from rnr_debug_helpers.utils.rnr_wrappers import RetrieveAndRankProxy, \ RankerProxy from rnr_debug_helpers.utils.io_helpers import load_config, smart_file_open, \ RankerRelevanceFileQueryStream, initialize_query_stream, insert_modifier_in_filename, PredictionReader from rnr_debug_helpers.create_cross_validation_splits import split_files_into_k_cv_folds from rnr_debug_helpers.generate_rnr_feature_file import generate_rnr_features from rnr_debug_helpers.compute_ranking_stats import compute_performance_stats from rnr_debug_helpers.calculate_recall_at_varying_k_on_base_display_order import compute_recall_stats, \ print_recall_stats_to_csv config_file_path = path.abspath(path.join(getcwd(), path.pardir, 'config', 'config.ini')) print('Using config from {}'.format(config_file_path)) config = load_config(config_file_path=config_file_path) insurance_lib_data_dir = path.abspath(path.join(getcwd(), path.pardir, 'resources', 'insurance_lib_v2')) print('Using data from {}'.format(insurance_lib_data_dir)) Explanation: 2.0 - Evaluate RnR Performance Simple example which will: Use a previously created RnR Cluster (see example 1.0) Split labelled data into training & validation splits Evaluate the performance of the base system (i.e. Solr) Use results from 4, to choose the appropriate rows parameter setting for training the ranker Train and evaluate the ranker; seeing the performance difference from using base Solr. To learn more about the data used in the experiment, see here: https://github.ibm.com/rchakravarti/rnr-debugging-scripts/tree/master/resources/insurance_lib_v2 Note: Ensure credentials have been updated in config/config.ini Import the necessary scripts and data End of explanation cluster_id = "sc40bbecbd_362a_4388_b61b_e3a90578d3b3" collection_id = 'TestCollection' bluemix_wrapper = RetrieveAndRankProxy(solr_cluster_id=cluster_id, config=config) if not bluemix_wrapper.collection_previously_created(collection_id): raise ValueError('Must specify one of the available collections: {}'. format(bluemix_wrapper.bluemix_connection.list_collections(self.solr_cluster_id))) Explanation: Set RnR Cluster ID Refer to the sample notebook 1.0 - Create RnR Cluster & Train Ranker for help setting up a cluster. End of explanation experimental_directory = mkdtemp() number_of_folds = 3 with smart_file_open(path.join(insurance_lib_data_dir, 'validation_gt_relevance_file.csv')) as infile: split_files_into_k_cv_folds(initialize_query_stream(infile, file_format='relevance_file'), experimental_directory, k=number_of_folds) print('\nCreated train and validation splits in directory: {}'.format(experimental_directory)) for filename in glob.glob('{}/*/*.csv'.format(experimental_directory), recursive=True): print(filename) Explanation: Split Labelled Data into Training & Validation Splits The InsuranceLibV2 actually provides separate training and validation splits. But for this demo, we pretend we only have access to the 2,000 question dev subset of labelled ground truth data. So we will have to split the data ourselves. This data is included in this repository already formatted in the relevance file format. However, if your ground truth was annotated using the RnR Web UI, you can export it and use the RnRToolingExportFileQueryStream to read the ground truth in directly from the export-questions.json that can be downloaded from the RnR Web UI. 
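Conceptually the split is just k-fold cross validation over the labelled queries. A stripped-down sketch of the idea (this is not the project's helper; the query list here is invented):

import random
queries = ['q{}'.format(i) for i in range(12)]  # stand-in for the labelled questions
random.shuffle(queries)
k = 3
folds = [queries[i::k] for i in range(k)]
for i, fold in enumerate(folds, start=1):
    train = [q for other in folds if other is not fold for q in other]
    print('Fold {}: {} train / {} validation queries'.format(i, len(train), len(fold)))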
End of explanation rows = 100 average_ndcg = 0.0 ndcg_evaluated_at = 50 for i in range(1, number_of_folds + 1): test_set = path.join(experimental_directory, 'Fold%d' % i, 'validation.relevance_file.csv') prediction_file = insert_modifier_in_filename(test_set,'fcselect_predictions','txt') with smart_file_open(test_set) as infile: # generate predictions labelled_test_questions = RankerRelevanceFileQueryStream(infile) json.dump(bluemix_wrapper.generate_fcselect_prediction_scores( test_questions=labelled_test_questions, num_rows=rows, prediction_file_location=prediction_file, collection_id=collection_id), sys.stdout, sort_keys=True, indent=4) # score them labelled_test_questions.reset() with smart_file_open(prediction_file) as preds_file: prediction_reader = PredictionReader(preds_file) stats_for_fold, _ = compute_performance_stats(prediction_reader=prediction_reader, ground_truth_query_stream=labelled_test_questions, k=ndcg_evaluated_at) print('\nPerformance on Fold %d' % i) json.dump(stats_for_fold, sys.stdout, sort_keys=True, indent=4) average_ndcg += stats_for_fold['ndcg@%d' % ndcg_evaluated_at] average_ndcg /= number_of_folds print('\nAverage NDCG@%d across folds: %.2f' % (ndcg_evaluated_at, average_ndcg)) Explanation: Evaluate Base Retrieve (Solr) On Each Fold The rows setting defines how many search results you wish to evaluate for relevance w.r.t. the ground truth annotations. As a starting point, you will likely want to set a large rows parameter so you can observe recall at varying depths in the search result. I primarily use NDCG as the metric of evaluation (see here for why); but you can choose an alternative metric that makes sense for the application. End of explanation average_recall_over_folds = None recall_settings = range(10, rows +1, 10) for i in range(1, number_of_folds + 1): print('\nComputing recall stats for fold %d' % i) test_set = path.join(experimental_directory, 'Fold%d' % i, 'validation.relevance_file.csv') prediction_file = insert_modifier_in_filename(test_set,'fcselect_predictions','txt') with smart_file_open(test_set) as infile: labelled_test_questions = RankerRelevanceFileQueryStream(infile) with smart_file_open(prediction_file) as preds_file: prediction_reader = PredictionReader(preds_file) recall_stats = compute_recall_stats(recall_settings, labelled_test_questions, prediction_reader) if average_recall_over_folds is None: average_recall_over_folds = recall_stats else: for k in recall_stats.keys(): average_recall_over_folds[k] += recall_stats[k] for k in average_recall_over_folds.keys(): average_recall_over_folds[k] /= float(number_of_folds) print_recall_stats_to_csv(average_recall_over_folds, sys.stdout) import matplotlib.pyplot as plt plt.plot([10, 20, 30, 40, 50, 60, 70, 80, 90, 100], [0.10499188404402464, 0.14970515053390282, 0.1888193396855732, 0.22338266958456865, 0.24548006683189091, 0.26680334746718226, 0.2805234226981687, 0.2932795904934835, 0.3027138998653241, 0.3110476959302546]) plt.show() Explanation: At this point, you can experiment with tweaking the solr schema and document ingestion process and assess performance with each tweak to see its impact on the performance of your retrieval system. It's important to tweak one thing at a time so that you can assess their impact in isolation of other changes. You would like to increase the likelihood of overlapping terms between the query and the correct answer documents. So some things to try include: - are you incorporating stopword removal / lowercasing / stemming etc into your index/query analyzers? 
- are all the appropriate text fields in the document being indexed for consumption by /fcselect? - are there domain specific synonyms which could be added (for query time expansion)? - is the similarity score appropriate for your use case? - could the documents themselves be enhanced with metadata fields that might make it easier for users to discover the right answer? E.g. if you have query logs, can you add old questions that have led users to click on the document as text in a new metadata field (Note: If you do this, make sure that the questions you collect for document augmentation are from a different time period than the time period from which you collected user queries for this training and evaluation...otherwise it's cheating and you'll end up with a false sense of system performance.) Choose a rows Setting for Ranker Training After you've tweaked the first pass search from base Solr, now we can train a ranker which will re-order the rows of the search result. As alluded to in the previous step, we can choose a reasonable rows setting by evaluating recall at varying depths in the search results. TODO: Show plot instead of listing a csv End of explanation rows=90 average_ndcg = 0.0 ndcg_evaluated_at = 50 for i in range(1, number_of_folds + 1): train_set = path.join(experimental_directory, 'Fold%d' % i, 'train.relevance_file.csv') test_set = path.join(experimental_directory, 'Fold%d' % i, 'validation.relevance_file.csv') # Step 1: Generate a feature file that can be used to train a ranker with smart_file_open(train_set) as infile: labelled_train_questions = RankerRelevanceFileQueryStream(infile) feature_file = insert_modifier_in_filename(train_set,'fcselect_features','txt') with smart_file_open(feature_file, mode='w') as outfile: stats = generate_rnr_features(collection_id=collection_id, cluster_id=cluster_id, num_rows=rows, in_query_stream=labelled_train_questions, outfile=outfile, config=config) # Step 2: Train a ranker ranker_api_wrapper = RankerProxy(config=config) ranker_name = 'TestRanker' ranker_id = ranker_api_wrapper.train_ranker(train_file_location=feature_file, train_file_has_answer_id=True, is_enabled_make_space=True, ranker_name=ranker_name) ranker_api_wrapper.wait_for_training_to_complete(ranker_id=ranker_id) # Step 3: Generate predictions using the ranker id with smart_file_open(test_set) as infile: prediction_file = insert_modifier_in_filename(test_set,'fcselect_with_ranker_predictions','txt') labelled_test_questions = RankerRelevanceFileQueryStream(infile) json.dump(bluemix_wrapper.generate_fcselect_prediction_scores( test_questions=labelled_test_questions, num_rows=rows, ranker_id=ranker_id, prediction_file_location=prediction_file, collection_id=collection_id), sys.stdout, sort_keys=True, indent=4) # Step 4: Evaluate labelled_test_questions.reset() with smart_file_open(prediction_file) as preds_file: prediction_reader = PredictionReader(preds_file, file_has_confidence_scores=True) stats_for_fold, _ = compute_performance_stats(prediction_reader=prediction_reader, ground_truth_query_stream=labelled_test_questions, k=ndcg_evaluated_at) print('\nPerformance on Fold %d' % i) json.dump(stats_for_fold, sys.stdout, sort_keys=True, indent=4) average_ndcg += stats_for_fold['ndcg@%d' % ndcg_evaluated_at] average_ndcg /= number_of_folds print('\nAverage NDCG@%d across folds: %.2f' % (ndcg_evaluated_at, average_ndcg)) Explanation: Evaluate Ranker On Each Fold Based on the above analysis, there is a sharp increase in recall until you reach a depth of ~80 at which point the 
increase in recall starts to level off (though ideally you might go higher to follow the trend). So, for now, we choose 90 as the rows setting for our ranker, as a compromise between recall and the number of results the ranker has to re-score. You can, of course, simply try a bunch of rows settings and evaluate overall ranker performance with each setting too. WARNING: Each set of credentials gives you 8 rankers. Since I experiment a lot, I have a convenience flag to delete rankers in case the quota is full. You obviously want to switch this flag off if you have rankers you don't want deleted. End of explanation
8,245
Given the following text description, write Python code to implement the functionality described below step by step Description: This shows how to have some data and update priors from posteriors as we get more data NOTE: this requires PyMC3 3.1 Updating priors In this notebook, I will show how it is possible to update the priors as new data becomes available. The example is a slightly modified version of the linear regression in the Getting started with PyMC3 notebook. Step1: Generating data Step2: Model specification Our initial beliefs about the parameters are quite informative (sd=1) and a bit off the true values. Step3: In order to update our beliefs about the parameters, we use the posterior distributions, which will be used as the prior distributions for the next inference. The data used for each inference iteration has to be independent from the previous iterations, otherwise the same (possibly wrong) belief is injected over and over in the system, amplifying the errors and misleading the inference. By ensuring the data is independent, the system should converge to the true parameter values. Because we draw samples from the posterior distribution (shown on the right in the figure above), we need to estimate their probability density (shown on the left in the figure above). Kernel density estimation (KDE) is a way to achieve this, and we will use this technique here. In any case, it is an empirical distribution that cannot be expressed analytically. Fortunately PyMC3 provides a way to use custom distributions, via the Interpolated class. Step4: Now we just need to generate more data and build our Bayesian model so that the prior distributions for the current iteration are the posterior distributions from the previous iteration. It is still possible to continue using the NUTS sampling method because the Interpolated class implements the calculation of gradients that are necessary for Hamiltonian Monte Carlo samplers. Step5: You can re-execute the last two cells to generate more updates. What is interesting to note is that the posterior distributions for our parameters tend to get centered on their true values (vertical lines), and the distributions get thinner and thinner. This means that we get more confident each time, and the (false) belief we had at the beginning gets flushed away by the new data we incorporate.
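The idea of feeding one round's posterior back in as the next round's prior is easiest to see in a conjugate toy case where the update is analytic (this Beta/Bernoulli sketch is only an analogy; the notebook below has to do it numerically with KDE because the regression posteriors have no closed form):

from scipy import stats
a, b = 1.0, 1.0  # flat Beta prior on a coin's bias
for heads, tosses in [(7, 10), (3, 10), (6, 10)]:
    a, b = a + heads, b + (tosses - heads)  # posterior becomes the next prior
    print('mean={:.3f}, sd={:.3f}'.format(stats.beta.mean(a, b), stats.beta.std(a, b)))

The standard deviation shrinks after every batch, which is the same behaviour the notebook demonstrates numerically.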
Python Code: # pymc3.distributions.DensityDist? import matplotlib.pyplot as plt import matplotlib as mpl from pymc3 import Model, Normal, Slice from pymc3 import sample from pymc3 import traceplot from pymc3.distributions import Interpolated from theano import as_op import theano.tensor as tt import numpy as np from scipy import stats %matplotlib inline %load_ext version_information %version_information pymc3 Explanation: This shows how to have some data and update priors form posterious as we get more data NOTE this requires Pymc3 3.1 Updating priors In this notebook, I will show how it is possible to update the priors as new data becomes available. The example is a slightly modified version of the linear regression in the Getting started with PyMC3 notebook. End of explanation # Initialize random number generator np.random.seed(123) # True parameter values alpha_true = 5 beta0_true = 7 beta1_true = 13 # Size of dataset size = 100 # Predictor variable X1 = np.random.randn(size) X2 = np.random.randn(size) * 0.2 # Simulate outcome variable Y = alpha_true + beta0_true * X1 + beta1_true * X2 + np.random.randn(size) Explanation: Generating data End of explanation basic_model = Model() with basic_model: # Priors for unknown model parameters alpha = Normal('alpha', mu=0, sd=1) beta0 = Normal('beta0', mu=12, sd=1) beta1 = Normal('beta1', mu=18, sd=1) # Expected value of outcome mu = alpha + beta0 * X1 + beta1 * X2 # Likelihood (sampling distribution) of observations Y_obs = Normal('Y_obs', mu=mu, sd=1, observed=Y) # draw 1000 posterior samples trace = sample(1000) traceplot(trace); Explanation: Model specification Our initial beliefs about the parameters are quite informative (sd=1) and a bit off the true values. End of explanation def from_posterior(param, samples): smin, smax = np.min(samples), np.max(samples) width = smax - smin x = np.linspace(smin, smax, 100) y = stats.gaussian_kde(samples)(x) # what was never sampled should have a small probability but not 0, # so we'll extend the domain and use linear approximation of density on it x = np.concatenate([[x[0] - 3 * width], x, [x[-1] + 3 * width]]) y = np.concatenate([[0], y, [0]]) return Interpolated(param, x, y) Explanation: In order to update our beliefs about the parameters, we use the posterior distributions, which will be used as the prior distributions for the next inference. The data used for each inference iteration has to be independent from the previous iterations, otherwise the same (possibly wrong) belief is injected over and over in the system, amplifying the errors and misleading the inference. By ensuring the data is independent, the system should converge to the true parameter values. Because we draw samples from the posterior distribution (shown on the right in the figure above), we need to estimate their probability density (shown on the left in the figure above). Kernel density estimation (KDE) is a way to achieve this, and we will use this technique here. In any case, it is an empirical distribution that cannot be expressed analytically. Fortunately PyMC3 provides a way to use custom distributions, via Interpolated class. 
End of explanation traces = [trace] for _ in range(10): # generate more data X1 = np.random.randn(size) X2 = np.random.randn(size) * 0.2 Y = alpha_true + beta0_true * X1 + beta1_true * X2 + np.random.randn(size) model = Model() with model: # Priors are posteriors from previous iteration alpha = from_posterior('alpha', trace['alpha']) beta0 = from_posterior('beta0', trace['beta0']) beta1 = from_posterior('beta1', trace['beta1']) # Expected value of outcome mu = alpha + beta0 * X1 + beta1 * X2 # Likelihood (sampling distribution) of observations Y_obs = Normal('Y_obs', mu=mu, sd=1, observed=Y) # draw 10000 posterior samples trace = sample(1000) traces.append(trace) print('Posterior distributions after ' + str(len(traces)) + ' iterations.') cmap = mpl.cm.autumn for param in ['alpha', 'beta0', 'beta1']: plt.figure(figsize=(8, 2)) for update_i, trace in enumerate(traces): samples = trace[param] smin, smax = np.min(samples), np.max(samples) x = np.linspace(smin, smax, 100) y = stats.gaussian_kde(samples)(x) plt.plot(x, y, color=cmap(1 - update_i / len(traces))) plt.axvline({'alpha': alpha_true, 'beta0': beta0_true, 'beta1': beta1_true}[param], c='k') plt.ylabel('Frequency') plt.title(param) plt.show() Explanation: Now we just need to generate more data and build our Bayesian model so that the prior distributions for the current iteration are the posterior distributions from the previous iteration. It is still possible to continue using NUTS sampling method because Interpolated class implements calculation of gradients that are necessary for Hamiltonian Monte Carlo samplers. End of explanation for _ in range(10): # generate more data X1 = np.random.randn(size) X2 = np.random.randn(size) * 0.2 Y = alpha_true + beta0_true * X1 + beta1_true * X2 + np.random.randn(size) model = Model() with model: # Priors are posteriors from previous iteration alpha = from_posterior('alpha', trace['alpha']) beta0 = from_posterior('beta0', trace['beta0']) beta1 = from_posterior('beta1', trace['beta1']) # Expected value of outcome mu = alpha + beta0 * X1 + beta1 * X2 # Likelihood (sampling distribution) of observations Y_obs = Normal('Y_obs', mu=mu, sd=1, observed=Y) # draw 10000 posterior samples trace = sample(1000) traces.append(trace) print('Posterior distributions after ' + str(len(traces)) + ' iterations.') cmap = mpl.cm.autumn for param in ['alpha', 'beta0', 'beta1']: plt.figure(figsize=(8, 2)) for update_i, trace in enumerate(traces): samples = trace[param] smin, smax = np.min(samples), np.max(samples) x = np.linspace(smin, smax, 100) y = stats.gaussian_kde(samples)(x) plt.plot(x, y, color=cmap(1 - update_i / len(traces))) plt.axvline({'alpha': alpha_true, 'beta0': beta0_true, 'beta1': beta1_true}[param], c='k') plt.ylabel('Frequency') plt.title(param) plt.show() for _ in range(10): # generate more data X1 = np.random.randn(size) X2 = np.random.randn(size) * 0.2 Y = alpha_true + beta0_true * X1 + beta1_true * X2 + np.random.randn(size) model = Model() with model: # Priors are posteriors from previous iteration alpha = from_posterior('alpha', trace['alpha']) beta0 = from_posterior('beta0', trace['beta0']) beta1 = from_posterior('beta1', trace['beta1']) # Expected value of outcome mu = alpha + beta0 * X1 + beta1 * X2 # Likelihood (sampling distribution) of observations Y_obs = Normal('Y_obs', mu=mu, sd=1, observed=Y) # draw 10000 posterior samples trace = sample(1000) traces.append(trace) print('Posterior distributions after ' + str(len(traces)) + ' iterations.') cmap = mpl.cm.autumn for param in ['alpha', 
'beta0', 'beta1']: plt.figure(figsize=(8, 2)) for update_i, trace in enumerate(traces): samples = trace[param] smin, smax = np.min(samples), np.max(samples) x = np.linspace(smin, smax, 100) y = stats.gaussian_kde(samples)(x) plt.plot(x, y, color=cmap(1 - update_i / len(traces))) plt.axvline({'alpha': alpha_true, 'beta0': beta0_true, 'beta1': beta1_true}[param], c='k') plt.ylabel('Frequency') plt.title(param) plt.show() Explanation: You can re-execute the last two cells to generate more updates. What is interesting to note is that the posterior distributions for our parameters tend to get centered on their true value (vertical lines), and the distribution gets thinner and thinner. This means that we get more confident each time, and the (false) belief we had at the beginning gets flushed away by the new data we incorporate. End of explanation
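As a small numeric companion to the plots above (a sketch added here, reusing the traces list built in the loops), the posterior standard deviation of each parameter should shrink with every batch of independent data, which is exactly the "thinner and thinner" behaviour described:

for param in ['alpha', 'beta0', 'beta1']:
    sds = [trace[param].std() for trace in traces]
    print(param, ['%.3f' % sd for sd in sds])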
8,246
Given the following text description, write Python code to implement the functionality described below step by step Description: Logistic Regression with a Neural Network mindset Welcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning. Instructions Step1: 2 - Overview of the Problem set Problem Statement Step2: We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing). Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the index value and re-run to see other images. Step3: Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs. Exercise Step4: Expected Output for m_train, m_test and num_px Step5: Expected Output Step7: <font color='blue'> What you need to remember Step9: Expected Output Step11: Expected Output Step13: Expected Output Step14: Expected Output Step16: Expected Output Step17: Run the following cell to train your model. Step18: Expected Output Step19: Let's also plot the cost function and the gradients. Step20: Interpretation Step21: Interpretation
Python Code: import numpy as np import matplotlib.pyplot as plt import h5py import scipy from PIL import Image from scipy import ndimage from lr_utils import load_dataset %matplotlib inline Explanation: Logistic Regression with a Neural Network mindset Welcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning. Instructions: - Do not use loops (for/while) in your code, unless the instructions explicitly ask you to do so. You will learn to: - Build the general architecture of a learning algorithm, including: - Initializing parameters - Calculating the cost function and its gradient - Using an optimization algorithm (gradient descent) - Gather all three functions above into a main model function, in the right order. 1 - Packages First, let's run the cell below to import all the packages that you will need during this assignment. - numpy is the fundamental package for scientific computing with Python. - h5py is a common package to interact with a dataset that is stored on an H5 file. - matplotlib is a famous library to plot graphs in Python. - PIL and scipy are used here to test your model with your own picture at the end. End of explanation # Loading the data (cat/non-cat) train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset() Explanation: 2 - Overview of the Problem set Problem Statement: You are given a dataset ("data.h5") containing: - a training set of m_train images labeled as cat (y=1) or non-cat (y=0) - a test set of m_test images labeled as cat or non-cat - each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px). You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat. Let's get more familiar with the dataset. Load the data by running the following code. End of explanation # Example of a picture index = 23 plt.imshow(train_set_x_orig[index]) print ("y = " + str(train_set_y[:,index]) + ", it's a '" + classes[np.squeeze(train_set_y[:,index])].decode("utf-8") + "' picture.") Explanation: We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing). Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the index value and re-run to see other images. End of explanation ### START CODE HERE ### (≈ 3 lines of code) m_train = train_set_y.shape[1] m_test = test_set_y.shape[1] num_px = train_set_x_orig[0].shape[0] ### END CODE HERE ### print ("Number of training examples: m_train = " + str(m_train)) print ("Number of testing examples: m_test = " + str(m_test)) print ("Height/Width of each image: num_px = " + str(num_px)) print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)") print ("train_set_x shape: " + str(train_set_x_orig.shape)) print ("train_set_y shape: " + str(train_set_y.shape)) print ("test_set_x shape: " + str(test_set_x_orig.shape)) print ("test_set_y shape: " + str(test_set_y.shape)) Explanation: Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. 
If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs. Exercise: Find the values for: - m_train (number of training examples) - m_test (number of test examples) - num_px (= height = width of a training image) Remember that train_set_x_orig is a numpy-array of shape (m_train, num_px, num_px, 3). For instance, you can access m_train by writing train_set_x_orig.shape[0]. End of explanation # Reshape the training and test examples ### START CODE HERE ### (≈ 2 lines of code) train_set_x_flatten = train_set_x_orig.reshape(m_train, -1).T test_set_x_flatten = test_set_x_orig.reshape(m_test, -1).T ### END CODE HERE ### print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape)) print ("train_set_y shape: " + str(train_set_y.shape)) print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape)) print ("test_set_y shape: " + str(test_set_y.shape)) print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0])) Explanation: Expected Output for m_train, m_test and num_px: <table style="width:15%"> <tr> <td>**m_train**</td> <td> 209 </td> </tr> <tr> <td>**m_test**</td> <td> 50 </td> </tr> <tr> <td>**num_px**</td> <td> 64 </td> </tr> </table> For convenience, you should now reshape images of shape (num_px, num_px, 3) in a numpy-array of shape (num_px $$ num_px $$ 3, 1). After this, our training (and test) dataset is a numpy-array where each column represents a flattened image. There should be m_train (respectively m_test) columns. Exercise: Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num_px $$ num_px $$ 3, 1). A trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b$$c$$d, a) is to use: python X_flatten = X.reshape(X.shape[0], -1).T # X.T is the transpose of X End of explanation train_set_x = train_set_x_flatten / 255. test_set_x = test_set_x_flatten / 255. Explanation: Expected Output: <table style="width:35%"> <tr> <td>**train_set_x_flatten shape**</td> <td> (12288, 209)</td> </tr> <tr> <td>**train_set_y shape**</td> <td>(1, 209)</td> </tr> <tr> <td>**test_set_x_flatten shape**</td> <td>(12288, 50)</td> </tr> <tr> <td>**test_set_y shape**</td> <td>(1, 50)</td> </tr> <tr> <td>**sanity check after reshaping**</td> <td>[17 31 56 22 33]</td> </tr> </table> To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255. One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you substract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler and more convenient and works almost as well to just divide every row of the dataset by 255 (the maximum value of a pixel channel). <!-- During the training of your model, you're going to multiply weights and add biases to some initial inputs in order to observe neuron activations. Then you backpropogate with the gradients to train the model. But, it is extremely important for each feature to have a similar range such that our gradients don't explode. You will see that more in detail later in the lectures. !--> Let's standardize our dataset. End of explanation # GRADED FUNCTION: sigmoid def sigmoid(z): Compute the sigmoid of z Arguments: x -- A scalar or numpy array of any size. 
Return: s -- sigmoid(z) ### START CODE HERE ### (≈ 1 line of code) s = 1.0 / (1 + np.exp(-z)) ### END CODE HERE ### return s print ("sigmoid(0) = " + str(sigmoid(0))) print ("sigmoid(9.2) = " + str(sigmoid(9.2))) Explanation: <font color='blue'> What you need to remember: Common steps for pre-processing a new dataset are: - Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...) - Reshape the datasets such that each example is now a vector of size (num_px * num_px * 3, 1) - "Standardize" the data 3 - General Architecture of the learning algorithm It's time to design a simple algorithm to distinguish cat images from non-cat images. You will build a Logistic Regression, using a Neural Network mindset. The following Figure explains why Logistic Regression is actually a very simple Neural Network! <img src="images/LogReg_kiank.png" style="width:650px;height:400px;"> Mathematical expression of the algorithm: For one example $x^{(i)}$: $$z^{(i)} = w^T x^{(i)} + b \tag{1}$$ $$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\tag{2}$$ $$ \mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)} ) \log(1-a^{(i)})\tag{3}$$ The cost is then computed by summing over all training examples: $$ J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(a^{(i)}, y^{(i)})\tag{6}$$ Key steps: In this exercise, you will carry out the following steps: - Initialize the parameters of the model - Learn the parameters for the model by minimizing the cost - Use the learned parameters to make predictions (on the test set) - Analyse the results and conclude 4 - Building the parts of our algorithm ## The main steps for building a Neural Network are: 1. Define the model structure (such as number of input features) 2. Initialize the model's parameters 3. Loop: - Calculate current loss (forward propagation) - Calculate current gradient (backward propagation) - Update parameters (gradient descent) You often build 1-3 separately and integrate them into one function we call model(). 4.1 - Helper functions Exercise: Using your code from "Python Basics", implement sigmoid(). As you've seen in the figure above, you need to compute $sigmoid( w^T x + b)$ to make predictions. End of explanation # GRADED FUNCTION: initialize_with_zeros def initialize_with_zeros(dim): This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0. Argument: dim -- size of the w vector we want (or number of parameters in this case) Returns: w -- initialized vector of shape (dim, 1) b -- initialized scalar (corresponds to the bias) ### START CODE HERE ### (≈ 1 line of code) w, b = np.zeros((dim, 1)), 0 ### END CODE HERE ### assert(w.shape == (dim, 1)) assert(isinstance(b, float) or isinstance(b, int)) return w, b dim = 2 w, b = initialize_with_zeros(dim) print ("w = " + str(w)) print ("b = " + str(b)) Explanation: Expected Output: <table style="width:20%"> <tr> <td>**sigmoid(0)**</td> <td> 0.5</td> </tr> <tr> <td>**sigmoid(9.2)**</td> <td> 0.999898970806 </td> </tr> </table> 4.2 - Initializing parameters Exercise: Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation. 
End of explanation # GRADED FUNCTION: propagate def propagate(w, b, X, Y): Implement the cost function and its gradient for the propagation explained above Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of size (num_px * num_px * 3, number of examples) Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples) Return: cost -- negative log-likelihood cost for logistic regression dw -- gradient of the loss with respect to w, thus same shape as w db -- gradient of the loss with respect to b, thus same shape as b Tips: - Write your code step by step for the propagation m = X.shape[1] # FORWARD PROPAGATION (FROM X TO COST) ### START CODE HERE ### (≈ 2 lines of code) A = sigmoid(np.dot(w.T, X) + b) cost = -1.0 / m * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) ### END CODE HERE ### # BACKWARD PROPAGATION (TO FIND GRAD) ### START CODE HERE ### (≈ 2 lines of code) dw = 1.0 / m * np.dot(X, (A - Y).T) db = 1.0 / m * np.sum(A - Y) ### END CODE HERE ### assert(dw.shape == w.shape) assert(db.dtype == float) cost = np.squeeze(cost) assert(cost.shape == ()) grads = {"dw": dw, "db": db} return grads, cost w, b, X, Y = np.array([[1], [2]]), 2, np.array([[1,2], [3,4]]), np.array([[1, 0]]) grads, cost = propagate(w, b, X, Y) print ("dw = " + str(grads["dw"])) print ("db = " + str(grads["db"])) print ("cost = " + str(cost)) Explanation: Expected Output: <table style="width:15%"> <tr> <td> ** w ** </td> <td> [[ 0.] [ 0.]] </td> </tr> <tr> <td> ** b ** </td> <td> 0 </td> </tr> </table> For image inputs, w will be of shape (num_px $\times$ num_px $\times$ 3, 1). 4.3 - Forward and Backward propagation Now that your parameters are initialized, you can do the "forward" and "backward" propagation steps for learning the parameters. Exercise: Implement a function propagate() that computes the cost function and its gradient. Hints: Forward Propagation: - You get X - You compute $A = \sigma(w^T X + b) = (a^{(0)}, a^{(1)}, ..., a^{(m-1)}, a^{(m)})$ - You calculate the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})$ Here are the two formulas you will be using: $$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$ $$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$ End of explanation # GRADED FUNCTION: optimize def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False): This function optimizes w and b by running a gradient descent algorithm Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of shape (num_px * num_px * 3, number of examples) Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples) num_iterations -- number of iterations of the optimization loop learning_rate -- learning rate of the gradient descent update rule print_cost -- True to print the loss every 100 steps Returns: params -- dictionary containing the weights w and bias b grads -- dictionary containing the gradients of the weights and bias with respect to the cost function costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve. Tips: You basically need to write down two steps and iterate through them: 1) Calculate the cost and the gradient for the current parameters. Use propagate(). 2) Update the parameters using gradient descent rule for w and b. 
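A quick, optional way to convince yourself that the formulas for dw and db given above are consistent with the cost is a finite-difference check. This is only a sketch (it is not part of the graded assignment) and assumes the propagate() implemented above plus numpy:

import numpy as np

w_chk, b_chk = np.array([[1.], [2.]]), 2.
X_chk, Y_chk = np.array([[1., 2.], [3., 4.]]), np.array([[1, 0]])
eps = 1e-6

grads_chk, _ = propagate(w_chk, b_chk, X_chk, Y_chk)
_, cost_plus = propagate(w_chk, b_chk + eps, X_chk, Y_chk)
_, cost_minus = propagate(w_chk, b_chk - eps, X_chk, Y_chk)

print("analytic db:", grads_chk["db"])
print("numeric  db:", (cost_plus - cost_minus) / (2 * eps))   # should agree closely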
costs = [] for i in range(num_iterations): # Cost and gradient calculation (≈ 1-4 lines of code) ### START CODE HERE ### grads, cost = propagate(w, b, X, Y) ### END CODE HERE ### # Retrieve derivatives from grads dw = grads["dw"] db = grads["db"] # update rule (≈ 2 lines of code) ### START CODE HERE ### w = w - learning_rate * dw b = b - learning_rate * db ### END CODE HERE ### # Record the costs if i % 100 == 0: costs.append(cost) # Print the cost every 100 training examples if print_cost and i % 100 == 0: print ("Cost after iteration %i: %f" % (i, cost)) params = {"w": w, "b": b} grads = {"dw": dw, "db": db} return params, grads, costs params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False) print ("w = " + str(params["w"])) print ("b = " + str(params["b"])) print ("dw = " + str(grads["dw"])) print ("db = " + str(grads["db"])) Explanation: Expected Output: <table style="width:50%"> <tr> <td> ** dw ** </td> <td> [[ 0.99993216] [ 1.99980262]]</td> </tr> <tr> <td> ** db ** </td> <td> 0.499935230625 </td> </tr> <tr> <td> ** cost ** </td> <td> 6.000064773192205</td> </tr> </table> d) Optimization You have initialized your parameters. You are also able to compute a cost function and its gradient. Now, you want to update the parameters using gradient descent. Exercise: Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. For a parameter $\theta$, the update rule is $ \theta = \theta - \alpha \text{ } d\theta$, where $\alpha$ is the learning rate. End of explanation # GRADED FUNCTION: predict def predict(w, b, X): ''' Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b) Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of size (num_px * num_px * 3, number of examples) Returns: Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X ''' m = X.shape[1] Y_prediction = np.zeros((1, m)) w = w.reshape(X.shape[0], 1) # Compute vector "A" predicting the probabilities of a cat being present in the picture ### START CODE HERE ### (≈ 1 line of code) A = sigmoid(np.dot(w.T, X) + b) ### END CODE HERE ### # Y_prediction[A >= 0.5] = int(1) # Y_prediction[A < 0.5] = int(0) for i in range(A.shape[1]): # Convert probabilities a[0,i] to actual predictions p[0,i] ### START CODE HERE ### (≈ 4 lines of code) if A[0][i] > 0.5: Y_prediction[0][i] = 1 else: Y_prediction[0][i] = 0 ### END CODE HERE ### assert(Y_prediction.shape == (1, m)) return Y_prediction.astype(int) print("predictions = " + str(predict(w, b, X))) Explanation: Expected Output: <table style="width:40%"> <tr> <td> **w** </td> <td>[[ 0.1124579 ] [ 0.23106775]] </td> </tr> <tr> <td> **b** </td> <td> 1.55930492484 </td> </tr> <tr> <td> **dw** </td> <td> [[ 0.90158428] [ 1.76250842]] </td> </tr> <tr> <td> **db** </td> <td> 0.430462071679 </td> </tr> </table> Exercise: The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the predict() function. There is two steps to computing predictions: Calculate $\hat{Y} = A = \sigma(w^T X + b)$ Convert the entries of a into 0 (if activation <= 0.5) or 1 (if activation > 0.5), stores the predictions in a vector Y_prediction. If you wish, you can use an if/else statement in a for loop (though there is also a way to vectorize this). 
End of explanation # GRADED FUNCTION: model def model(X_train, Y_train, X_test, Y_test, num_iterations=2000, learning_rate=0.5, print_cost=False): Builds the logistic regression model by calling the function you've implemented previously Arguments: X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train) Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train) X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test) Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test) num_iterations -- hyperparameter representing the number of iterations to optimize the parameters learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize() print_cost -- Set to true to print the cost every 100 iterations Returns: d -- dictionary containing information about the model. ### START CODE HERE ### dim = X_train.shape[0] w, b = initialize_with_zeros(dim) params, grads, costs = optimize(w, b, X_train, Y_train, num_iterations = num_iterations, learning_rate = learning_rate, print_cost = print_cost) w = params["w"] b = params["b"] Y_prediction_test = predict(w, b, X_test) Y_prediction_train = predict(w, b, X_train) ### END CODE HERE ### # Print train/test Errors print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100)) print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100)) d = {"costs": costs, "Y_prediction_test": Y_prediction_test, "Y_prediction_train" : Y_prediction_train, "w" : w, "b" : b, "learning_rate" : learning_rate, "num_iterations": num_iterations} return d Explanation: Expected Output: <table style="width:30%"> <tr> <td> **predictions** </td> <td> [[ 1. 1.]] </td> </tr> </table> <font color='blue'> What to remember: You've implemented several functions that: - Initialize (w,b) - Optimize the loss iteratively to learn parameters (w,b): - computing the cost and its gradient - updating the parameters using gradient descent - Use the learned (w,b) to predict the labels for a given set of examples 5 - Merge all functions into a model You will now see how the overall model is structured by putting together all the building blocks (functions implemented in the previous parts) together, in the right order. Exercise: Implement the model function. Use the following notation: - Y_prediction for your predictions on the test set - Y_prediction_train for your predictions on the train set - w, costs, grads for the outputs of optimize() End of explanation d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True) Explanation: Run the following cell to train your model. End of explanation # Example of a picture that was wrongly classified. index = 5 plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3))) print(d["Y_prediction_test"][0, index]) print ("y = " + str(test_set_y[0, index]) + ", you predicted that it is a \"" + classes[d["Y_prediction_test"][0, index]].decode("utf-8") + "\" picture.") Explanation: Expected Output: <table style="width:40%"> <tr> <td> **Train Accuracy** </td> <td> 99.04306220095694 % </td> </tr> <tr> <td>**Test Accuracy** </td> <td> 70.0 % </td> </tr> </table> Comment: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test error is 68%. 
It is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. But no worries, you'll build an even better classifier next week! Also, you see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the index variable) you can look at predictions on pictures of the test set. End of explanation # Plot learning curve (with costs) costs = np.squeeze(d['costs']) plt.plot(costs) plt.ylabel('cost') plt.xlabel('iterations (per hundreds)') plt.title("Learning rate =" + str(d["learning_rate"])) plt.show() Explanation: Let's also plot the cost function and the gradients. End of explanation learning_rates = [0.01, 0.001, 0.0001] models = {} for i in learning_rates: print ("learning rate is: " + str(i)) models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False) print ('\n' + "-------------------------------------------------------" + '\n') for i in learning_rates: plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"])) plt.ylabel('cost') plt.xlabel('iterations') legend = plt.legend(loc='upper center', shadow=True) frame = legend.get_frame() frame.set_facecolor('0.90') plt.show() Explanation: Interpretation: You can see the cost decreasing. It shows that the parameters are being learned. However, you see that you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the test set accuracy goes down. This is called overfitting. 6 - Further analysis (optional/ungraded exercise) Congratulations on building your first image classification model. Let's analyze it further, and examine possible choices for the learning rate $\alpha$. Choice of learning rate Reminder: In order for Gradient Descent to work you must choose the learning rate wisely. The learning rate $\alpha$ determines how rapidly we update the parameters. If the learning rate is too large we may "overshoot" the optimal value. Similarly, if it is too small we will need too many iterations to converge to the best values. That's why it is crucial to use a well-tuned learning rate. Let's compare the learning curve of our model with several choices of learning rates. Run the cell below. This should take about 1 minute. Feel free also to try different values than the three we have initialized the learning_rates variable to contain, and see what happens. End of explanation ## START CODE HERE ## (PUT YOUR IMAGE NAME) ## END CODE HERE ## # We preprocess the image to fit your algorithm. fname = "images/" + my_image image = np.array(ndimage.imread(fname, flatten=False)) my_image = scipy.misc.imresize(image, size=(num_px, num_px)).reshape((1, num_px * num_px * 3)).T my_predicted_image = predict(d["w"], d["b"], my_image) plt.imshow(image) print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.") Explanation: Interpretation: - Different learning rates give different costs and thus different predictions results. - If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost). 
- A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy. - In deep learning, we usually recommend that you: - Choose the learning rate that better minimizes the cost function. - If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos.) 7 - Test with your own image (optional/ungraded exercise) Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Change your image's name in the following code 4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)! End of explanation
8,247
Given the following text description, write Python code to implement the functionality described below step by step Description: MD Top 15 violations by total revenue (revenue and total) Step1: Areas to explore Failure to secure dc tags --- huge revenue maker Residential Parking beyond permit period Park at Expired Meter MD Top 15 violations by total tickets (revenue and total)
Python Code: dc_df = df[(df.rp_plate_state.isin(['MD']))] dc_fines = dc_df.groupby(['violation_code']).fine.sum().reset_index('violation_code') fine_codes_15 = dc_fines.sort_values(by='fine', ascending=False)[:15] top_codes = dc_df[dc_df.violation_code.isin(fine_codes_15.violation_code)] top_violation_by_state = top_codes.groupby(['violation_description']).fine.sum() ax = top_violation_by_state.plot.barh() ax.xaxis.set_major_formatter(plt.FormatStrFormatter('%.0f')) plt.draw() top_violation_by_state = top_codes.groupby(['violation_description']).counter.sum() ax = top_violation_by_state.plot.barh() ax.xaxis.set_major_formatter(plt.FormatStrFormatter('%.0f')) plt.draw() Explanation: MD Top 15 violations by total revenue (revenue and total) End of explanation dc_df = df[(df.rp_plate_state.isin(['MD']))] dc_fines = dc_df.groupby(['violation_code']).counter.sum().reset_index('violation_code') fine_codes_15 = dc_fines.sort_values(by='counter', ascending=False)[:15] top_codes = dc_df[dc_df.violation_code.isin(fine_codes_15.violation_code)] top_violation_by_state = top_codes.groupby(['violation_description']).fine.sum() ax = top_violation_by_state.plot.barh() ax.xaxis.set_major_formatter(plt.FormatStrFormatter('%.0f')) plt.draw() top_violation_by_state = top_codes.groupby(['violation_description']).counter.sum() ax = top_violation_by_state.plot.barh() ax.xaxis.set_major_formatter(plt.FormatStrFormatter('%.0f')) plt.draw() Explanation: Areas to explore Failure to secure dc tags --- huge revenue maker Residential Parking beyond permit period Park at Expired Meter MD Top 15 violations by total tickets (revenue and total) End of explanation
8,248
Given the following text description, write Python code to implement the functionality described below step by step Description: Python Analysis Evaluation Author Step1: Process AQS for evaluation Download annual zip file(s) Unzip Use sed, grep, or awk to get spatial/temporal subset Reshape data Missing data should be masked Dimensions (time, point) CHECK POINT What do you think the dimensions should be for AQS observations? What meta-data should be present? Getting AQS Observations Get AQS observations Raw outputs from AQS website Representational State Transfer - good for small amounts of data REST format was having problems due to its transition. Raw outputs Download directly or download inline Step2: wktpolygon This option is most relevant for regional extractions. Step3: CHECK POINT What should the bounding box be as a WKT Polygon? ANSWERS Hidden <div style="visibility Step4: Review Output Step5: Extract GEOS-Chem at AQS Step6: Reproduced in Python Read in files Get variables Calculate MB, RMSE Modify Repeat except for month specific results Repeat except for site specific results Reproduction Provided
Python Code: # Prepare my slides %pylab inline %cd working Explanation: Python Analysis Evaluation Author: Barron H. Henderson End of explanation !pncaqsraw4pnceval.py --help Explanation: Process AQS for evaluation Download annual zip file(s) Unzip Use sed, grep, or awk to get spatial/temporal subset Reshape data Missing data should be masked Dimensions (time, point) CHECK POINT What do you think the dimensions should be for AQS observations? What meta-data should be present? Getting AQS Observations Get AQS observations Raw outputs from AQS website Representational State Transfer - good for small amounts of data REST format was having problems due to its transition. Raw outputs Download directly or download inline End of explanation from shapely.wkt import loads geom = loads("POLYGON ((30 10, 40 35, 20 40, 10 20, 30 10))") x, y = geom.exterior.xy plt.plot(x, y, ls = '-', marker = 'o') Explanation: wktpolygon This option is most relevant for regional extractions. End of explanation !pncaqsraw4pnceval.py -O --timeresolution=daily \ --start-date 2013-05-01 --end-date 2013-07-01 \ --wktpolygon "POLYGON ((-181.25 0, 178.75 0, 178.75 90, -181.25 90, -181.25 0))" %ls -l AQS_DATA_20130501-20130701.nc Explanation: CHECK POINT What should the bounding box be as a WKT Polygon? ANSWERS Hidden <div style="visibility: hidden"> "POLYGON ((llcrnrlon llcrnrlat, lrcrnrlon lrcrnrlat, urcrnrlon urcrnrlat, ulcrnrlon ulcrnrlat, llcrnrlon llcrnrlat))" </div> Download and Process End of explanation !pncdump.py --header AQS_DATA_20130501-20130701.nc Explanation: Review Output End of explanation !pncgen -O -f "bpch,vertgrid='GEOS-5-NATIVE',nogroup=('IJ-AVG-$',)" \ --extract-file AQS_DATA_20130501-20130701.nc --stack=time -v O3 -s layer72,0 \ bpch/ctm.bpch.v10-01-public-Run0.2013050100 \ bpch/ctm.bpch.v10-01-public-Run0.2013050100 \ bpch_aqs_extract.nc !pncdump.py --header bpch_aqs_extract.nc !pnceval.py --help %%bash pnceval.py --funcs NO,NP,NOP,MO,MP,MB,RMSE,IOA,AC -v O3 \ --pnc " --expr O3=Ozone*1000;O3.units=\'ppb\' -r time,mean AQS_DATA_20130501-20130701.nc" \ --pnc " -r time,mean bpch_aqs_extract.nc" from PseudoNetCDF import pnceval help(pnceval) Explanation: Extract GEOS-Chem at AQS End of explanation from PseudoNetCDF import PNC, pnceval aqs = PNC("--reduce=time,mean", "--expr=O3=Ozone*1000", "AQS_DATA_20130501-20130701.nc") geos = PNC("--reduce=time,mean", "bpch_aqs_extract.nc") aqso3 = aqs.ifiles[0].variables['O3'] geoso3 = geos.ifiles[0].variables['O3'] print(aqso3.shape) print(geoso3.shape) print(pnceval.RMSE(aqso3, geoso3)) Explanation: Reproduced in Python Read in files Get variables Calculate MB, RMSE Modify Repeat except for month specific results Repeat except for site specific results Reproduction Provided End of explanation
8,249
Given the following text description, write Python code to implement the functionality described below step by step Description: Catalog Step1: Setup OpenCGA Variables Once we have defined a variable with the client configuration and credentials, we can access to all the methods defined for the client. These methods implement calls to query different data models in OpenCGA. Over the user case addressed in this notebook we will be performing queries to the users, projects, studies, samples, individuals and cohorts<br> OpenCGA data models. Use Cases In this seciton we are going to show how to work with some of the most common scenarios.<br> - The user-cases addresed here constute a high-level introduction aimed to provide a basis for the user to make their own explorations. - The examples can be adapted to each individual user-case. Exploring User Account Step2: Using the REST response API Step3: User Projects Step4: The fqn owner@project shows the owner of the project/s; this owner has granted permission to our user to the projects above. User Studies Step5: Obtain the studies for the project_id defined above with the result_iterator() method Step6: For the rest of the notebook, we will use a specific study to query catalog information. The study is defined in the variable study_id. Step7: 2. Checking Groups and Permissions [NOTE] Step8: User Groups If we want to check in which groups is our user included Step9: Independently of the groups defined for a study, our user always belongs to the group members, which is one of the default groups in OpenCGA. User Permissions We might be wondering which specific permissions our user has. We can check this using the client.acl() method (acl = access control list) Step10: Default Groups in OpenCGA The default groups in OpenCGA are Step11: We can see that the number of samples in the study is given by #Num matches by using the parameter count=True. Execise Step12: Individuals Step13: We might be interested in knowing when the individuals were added to OpenCGA, or the individuals sex. Since pyopencga 2.0.1.1 it is possible to export the results to a pandas dataframe object with the function to_data_frame() Step14: 2. Exploring Files Files in a study We can start by exploring the number of files in the study, and retrieveing information about one file as an example of which kind of data is stored in the file data model of OpenCGA. Step15: File Specific Info There is plenty of useful information contained in the file data model like the file format, the stats, size of the file. If we want to look for more concrete information about one specific file Step16: Files with a specific sample We can also be interested in knowing the number of files for a specific sample Step17: 3. Exploring Cohorts One powerful feature of OpenCGA is the possibility of define cohorts that include individuals with common traits of interest, like a phenotype, nationality etc. The cohorts are defined at the study level. OpenCGA creates a default cohort ALL, which includes all the individuals of the study. We can explore which cohorts are defined in the study by
Python Code: ## Step 1. Import pyopencga dependecies from pyopencga.opencga_config import ClientConfiguration # import configuration module from pyopencga.opencga_client import OpencgaClient # import client module from pprint import pprint from IPython.display import JSON import matplotlib.pyplot as plt import seaborn as sns ## Step 2. OpenCGA host host = 'https://ws.opencb.org/opencga-prod' ## Step 3. User credentials user = 'demouser' passwd = 'demouser' ## you can skip this, see below. ## Step 4. Create the ClientConfiguration dict config_dict = {'rest': { 'host': host } } ## Step 5. Create the ClientConfiguration and OpenCGA client config = ClientConfiguration(config_dict) oc = OpencgaClient(config) ## Step 6. Login to OpenCGA using the OpenCGA client ## Option1: Pass the credentials to the client # (here we put only the user in order to be asked for the password interactively) # oc.login(user) ## Option2: you can pass the user and passwd oc.login(user, passwd) print('Logged succesfuly to {}, your token is: {} well done!'.format(host, oc.token)) Explanation: Catalog: Overview What is Catalog? Catalog is the OpenCGA component that takes care of clinical data, metadata and permissions. This notebook is intended to provide guidance for querying an OpenCGA server through pyopencga to explore: - Studies which the user has access to - Clinical data provided in the study (Samples, Individuals Genotypes etc.) - Other types of metadata, like permissions. A good first step when start working with OpenCGA is to retrieve information about our user, which projects and studies are we allowed to see.<br> It is also recommended to get a taste of the clinical data we are encountering in the study: How many samples and individuals does the study have? Is there any defined cohorts? Can we get some statistics about the genotypes of the samples in the Sudy? For guidance on how to loggin and get started with opencga you can refer to : pyopencga_first_steps.ipynb [NOTE] The server methods used by pyopencga client are defined in the following swagger URL: - https://ws.opencb.org/opencga-prod/webservices/ Setup the Client and Login into pyopencga Configuration and Credentials Let's assume we already have pyopencga installed in our python setup (all the steps described on pyopencga_first_steps.ipynb). You need to provide at least a host server URL in the standard configuration format for OpenCGA as a python dictionary or in a json file. End of explanation ## Getting user information ## [NOTE] User needs the quey_id string directly --> (user) #Print using the print_results() function: user_info = oc.users.info(user) user_info.print_results( title='User info with print_results() function:') # metadata=False ## Uncomment next line to display an interactive JSON viewer # JSON(user_info.get_results()) Explanation: Setup OpenCGA Variables Once we have defined a variable with the client configuration and credentials, we can access to all the methods defined for the client. These methods implement calls to query different data models in OpenCGA. Over the user case addressed in this notebook we will be performing queries to the users, projects, studies, samples, individuals and cohorts<br> OpenCGA data models. Use Cases In this seciton we are going to show how to work with some of the most common scenarios.<br> - The user-cases addresed here constute a high-level introduction aimed to provide a basis for the user to make their own explorations. - The examples can be adapted to each individual user-case. 
Exploring User Account: Permissios, Projects and Studies In this use case we cover retrieving information for our user. In OpenCGA, all the user permissions are established at a study level. One project contains at least one study, although it may contain several. Full Qualified Name (fqn) of Studies It is also very important to understand that in OpenCGA, the projects and studies have a full qualified name (fqn) with the format:<br> [[owner]@[project]]:[study] We cannot be sure if there might be other studies with the same name contained in other projects.<br> (E.g: the study platinium might be defined in two different projects: GRch37_project and GRch38_project) Because of that that, it is recomended to use the fqn when referencing studies. 1. Exploring Projects and Studies with our user Users: owner and members Depending on the permissions granted, a user can be the owner of a study or just have access to some studies owned by other users.<br>We can retrieve information about our user and its permissions by: - Using the print_results() function End of explanation # Using REST response API: print("\n User info using REST response API: \n") user_info = oc.users.info(user).get_result(0) user_projects = user_info['projects'] # Define projects owned by our user print('id:{}\taccount_type: {}\t projects_owned: {}'.format(user, user_info['account']['type'], len(user_projects))) print('\nWe can appreciate that our user: {} has {} projects from its own: {}'.format(user, len(user_projects), user_projects)) Explanation: Using the REST response API End of explanation ## Getting user projects ## [NOTE] Client specific methods have the query_id as a key:value (i.e (user=user_id)) projects_info = oc.projects.search() projects_info.print_results(fields='id,name,organism.scientificName,organism.assembly,fqn', title='Projects our user ({}) has access to:'.format(user), metadata=False) Explanation: User Projects: Although an user doesn't own any project, it might has been granted access to projects created by other users. Let's see how to find this out. We can list our user's projects using project client search() function. End of explanation # First we define one projectId project_info = oc.projects.search().get_result(0) project_id = project_info['id'] print('For this user-case, we can use project:{}'.format(project_id)) Explanation: The fqn owner@project shows the owner of the project/s; this owner has granted permission to our user to the projects above. User Studies: Let's see which studies do we have access within the project. - Define a random project for the use case within the ones available for demouser. End of explanation studies = oc.studies.search(project_id) ## Print the studies using the result_iterator() method print('Our user [{}] has access to 2 different studies within the [{}] project\n'.format(user, project_id)) for study in studies.result_iterator(): print("project:{}\t study_id:{}\t study_fqn:{} ".format(project_id, study['id'], study['fqn'])) Explanation: Obtain the studies for the project_id defined above with the result_iterator() method: End of explanation # Define the study we are going to work with study_info = oc.studies.search(project_id).get_result(0) study_id = study_info['id'] study_fqn = study_info['fqn'] print("Let's use the study: [{}] with fqn: [{}]".format(study_id, study_fqn)) Explanation: For the rest of the notebook, we will use a specific study to query catalog information. The study is defined in the variable study_id. 
End of explanation # # Query to the study web service # groups = oc.studies.groups(study_fqn) # study_groups = [] # Define an empty list for the groups # ## This will give us the whole list of groups existing in the studya # for group in groups.result_iterator(): # study_groups.append(group['id']) # print("group_id: {}".format(group['id'])) # print('\nThere are 3 groups in the study {}: {}'.format(study_fqn, study_groups)) Explanation: 2. Checking Groups and Permissions [NOTE]: This can ONLY be done by an admin or the owner. If your user is not any of these, skipt this section. Now we can assume that we want to check to which groups our user belongs to and which permisions pur user has been granted for the study (remember that all the permissions are established at the study level). Groups in the Study OpenCGA define the permissions (for both groups and users) at the Study level. The first step might be check which groups exist within the study. End of explanation # user_groups = [] # Define an empty list # ## This will give us only the groups our user belongs to # for group in groups.result_iterator(): # if user_id in group['userIds']: # user_groups.append(group['id']) # print("group_id: {}".format(group['id'])) # print('\nOur user {} belongs to group/s: {}'.format(user_id, user_groups)) Explanation: User Groups If we want to check in which groups is our user included End of explanation # Permissions granted directly to user: acls = oc.studies.acl(study_id, member=user_id).get_result(0) print('The user',user_id,' has the following permissions:\n\n', acls[user_id]) Explanation: Independently of the groups defined for a study, our user always belongs to the group members, which is one of the default groups in OpenCGA. User Permissions We might be wondering which specific permissions our user has. We can check this using the client.acl() method (acl = access control list): End of explanation ## Call to the sample web endpoint samples = oc.samples.search(study=study_fqn, includeIndividual=True, count=True, limit = 5) ## other possible params, count=False, id='NA12880,NA12881' samples.print_results(fields='id,creationDate,somatic,phenotypes.id,phenotypes.name,individualId', title='Info from 5 samples from study {}'.format(study_fqn)) ## Uncomment next line to display an interactive JSON viewer # JSON(samples.get_results()) Explanation: Default Groups in OpenCGA The default groups in OpenCGA are: members and admins. Intuitively, the group members is the basic group and has any default permissions. On the other hand, users in the group admins have permission to see and edit the study information. For more information about user and group permissions, check the official OpenCGA documentation: Catalog and Security - Users and Permissions Exploring Catalog Clinical Metadata A genomic data analysis platform need to keep track of different resources such as: Clinical Data: information about individuals, samples from those individuals etc. Files Metadata: information about files contained in the platform, such as VCFs and BAMs. OpenCGA Catalog is the component that assumes this role by storing this kind of information 1. Exploring Samples and Individuals Once we know the studies our user has access to, we can explore the samples within the study.<br> To fetch samples you need to use the sample client built in pyopencga. 
Remember that it is recomended to use the fqn when referencing studies.<br> Samples: Let's imagine we want to know how many samples are in the study stored in the study_fqn variable, and list information about the first two samples: End of explanation sample_ids = [] # Define an empty list # Define a new sample query without limit samples = oc.samples.search(study=study_fqn, count=True) for sample in samples.result_iterator(): sample_ids.append(sample['id']) print('There are {} samples with ids:\n {}\n'.format(len(sample_ids), sample_ids)) Explanation: We can see that the number of samples in the study is given by #Num matches by using the parameter count=True. Execise: How to get all the sample ids? Above, we have used the parameter limit to restrict the number of samples the query returns. We can get all the samples ids by: End of explanation ## Using the individuals search web service individuals = oc.individuals.search(study=study_fqn, count=True, limit=5) ## other possible params, count=False, id='NA12880,NA12881' individuals.print_results( title='Information about 5 individuals in the study{}'.format(study_fqn)) ## Uncomment next line to display an interactive JSON viewer # JSON(individuals.get_results()) Explanation: Individuals: Now, we can repite the same process for check the number of individuals in the study . The difference is that now we will be making a call to the individuals web service: End of explanation ## Using the individuals search web service without limit param individuals = oc.individuals.search(study=study_fqn) ## Using the new function to_data_frame() individuals_df = individuals.to_data_frame() print(individuals_df[['id', 'sex', 'uuid', 'creationDate']].head()) ## Retrieve metrics individuals_df.describe() Explanation: We might be interested in knowing when the individuals were added to OpenCGA, or the individuals sex. Since pyopencga 2.0.1.1 it is possible to export the results to a pandas dataframe object with the function to_data_frame(): End of explanation ## Using the files web service files = oc.files.search(study=study_fqn, count=True, type='FILE', limit=5, exclude='attributes') ## other possible params, count=False, id='NA12880,NA12881' files.print_results(fields='id,format,size,software', title='Information about files in study {}'.format(study_fqn)) ## Uncomment next line to display an interactive JSON viewer #JSON(files.get_results()) Explanation: 2. Exploring Files Files in a study We can start by exploring the number of files in the study, and retrieveing information about one file as an example of which kind of data is stored in the file data model of OpenCGA. End of explanation my_vcf = files.get_result(1) print('The study {} contains a {} file with id: {},\ncreated on: {}'.format(study_fqn, my_vcf['format'], my_vcf['id'], my_vcf['creationDate'])) Explanation: File Specific Info There is plenty of useful information contained in the file data model like the file format, the stats, size of the file. 
If we want to look for more concrete information about one specific file: End of explanation ## Using the samples info web service sample_of_interest = sample_ids[0] ## List the files for a concrete sample sample = oc.samples.info(study=study_fqn, samples=sample_of_interest) ## other possible params, count=False, id='NA12880,NA12881' sample_files = sample.get_result(0)['fileIds'] print('The sample {} has file/s: {}'.format(sample_of_interest, sample_files)) Explanation: Files with a specific sample We can also be interested in knowing the number of files for a specific sample: End of explanation ## Using the cohorts search web service cohorts = oc.cohorts.search(study=study_fqn, count=True, exclude='samples') ## other possible params, count=False, id='NA12880,NA12881' cohorts.print_results(fields='id,type,description,numSamples', title='Information about cohorts in study {}'.format(study_fqn)) ## Uncomment next line to display an interactive JSON viewer #JSON(cohorts.get_results()) Explanation: 3. Exploring Cohorts One powerful feature of OpenCGA is the possibility of defining cohorts that include individuals with common traits of interest, such as a phenotype, nationality, etc. The cohorts are defined at the study level. OpenCGA creates a default cohort ALL, which includes all the individuals of the study. We can explore which cohorts are defined in the study by: End of explanation
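As an additional illustration (a minimal sketch that is not part of the original notebook; the dataframe column names selected below are assumptions based on the fields used in the cohort query above), the cohort results can be iterated and exported to a pandas dataframe in the same way as shown earlier for samples and individuals:
## Illustrative sketch only: collect the cohort ids with result_iterator()
## and export the same response to a dataframe with to_data_frame()
cohort_ids = []  # Define an empty list
cohorts = oc.cohorts.search(study=study_fqn, exclude='samples')
for cohort in cohorts.result_iterator():
    cohort_ids.append(cohort['id'])
print('Cohorts defined in study {}: {}'.format(study_fqn, cohort_ids))
## Assumes pyopencga >= 2.0.1.1 for to_data_frame(); 'id' and 'type' are assumed column names
cohorts_df = cohorts.to_data_frame()
print(cohorts_df[['id', 'type']].head())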
8,250
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Atmos MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties --&gt; Overview 2. Key Properties --&gt; Resolution 3. Key Properties --&gt; Timestepping 4. Key Properties --&gt; Orography 5. Grid --&gt; Discretisation 6. Grid --&gt; Discretisation --&gt; Horizontal 7. Grid --&gt; Discretisation --&gt; Vertical 8. Dynamical Core 9. Dynamical Core --&gt; Top Boundary 10. Dynamical Core --&gt; Lateral Boundary 11. Dynamical Core --&gt; Diffusion Horizontal 12. Dynamical Core --&gt; Advection Tracers 13. Dynamical Core --&gt; Advection Momentum 14. Radiation 15. Radiation --&gt; Shortwave Radiation 16. Radiation --&gt; Shortwave GHG 17. Radiation --&gt; Shortwave Cloud Ice 18. Radiation --&gt; Shortwave Cloud Liquid 19. Radiation --&gt; Shortwave Cloud Inhomogeneity 20. Radiation --&gt; Shortwave Aerosols 21. Radiation --&gt; Shortwave Gases 22. Radiation --&gt; Longwave Radiation 23. Radiation --&gt; Longwave GHG 24. Radiation --&gt; Longwave Cloud Ice 25. Radiation --&gt; Longwave Cloud Liquid 26. Radiation --&gt; Longwave Cloud Inhomogeneity 27. Radiation --&gt; Longwave Aerosols 28. Radiation --&gt; Longwave Gases 29. Turbulence Convection 30. Turbulence Convection --&gt; Boundary Layer Turbulence 31. Turbulence Convection --&gt; Deep Convection 32. Turbulence Convection --&gt; Shallow Convection 33. Microphysics Precipitation 34. Microphysics Precipitation --&gt; Large Scale Precipitation 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics 36. Cloud Scheme 37. Cloud Scheme --&gt; Optical Cloud Properties 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution 40. Observation Simulation 41. Observation Simulation --&gt; Isscp Attributes 42. Observation Simulation --&gt; Cosp Attributes 43. Observation Simulation --&gt; Radar Inputs 44. Observation Simulation --&gt; Lidar Inputs 45. Gravity Waves 46. Gravity Waves --&gt; Orographic Gravity Waves 47. Gravity Waves --&gt; Non Orographic Gravity Waves 48. Solar 49. Solar --&gt; Solar Pathways 50. Solar --&gt; Solar Constant 51. Solar --&gt; Orbital Parameters 52. Solar --&gt; Insolation Ozone 53. Volcanos 54. Volcanos --&gt; Volcanoes Treatment 1. Key Properties --&gt; Overview Top level key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Model Family Is Required Step7: 1.4. Basic Approximations Is Required Step8: 2. Key Properties --&gt; Resolution Characteristics of the model resolution 2.1. Horizontal Resolution Name Is Required Step9: 2.2. Canonical Horizontal Resolution Is Required Step10: 2.3. Range Horizontal Resolution Is Required Step11: 2.4. Number Of Vertical Levels Is Required Step12: 2.5. High Top Is Required Step13: 3. Key Properties --&gt; Timestepping Characteristics of the atmosphere model time stepping 3.1. Timestep Dynamics Is Required Step14: 3.2. Timestep Shortwave Radiative Transfer Is Required Step15: 3.3. Timestep Longwave Radiative Transfer Is Required Step16: 4. Key Properties --&gt; Orography Characteristics of the model orography 4.1. Type Is Required Step17: 4.2. Changes Is Required Step18: 5. Grid --&gt; Discretisation Atmosphere grid discretisation 5.1. 
Overview Is Required Step19: 6. Grid --&gt; Discretisation --&gt; Horizontal Atmosphere discretisation in the horizontal 6.1. Scheme Type Is Required Step20: 6.2. Scheme Method Is Required Step21: 6.3. Scheme Order Is Required Step22: 6.4. Horizontal Pole Is Required Step23: 6.5. Grid Type Is Required Step24: 7. Grid --&gt; Discretisation --&gt; Vertical Atmosphere discretisation in the vertical 7.1. Coordinate Type Is Required Step25: 8. Dynamical Core Characteristics of the dynamical core 8.1. Overview Is Required Step26: 8.2. Name Is Required Step27: 8.3. Timestepping Type Is Required Step28: 8.4. Prognostic Variables Is Required Step29: 9. Dynamical Core --&gt; Top Boundary Type of boundary layer at the top of the model 9.1. Top Boundary Condition Is Required Step30: 9.2. Top Heat Is Required Step31: 9.3. Top Wind Is Required Step32: 10. Dynamical Core --&gt; Lateral Boundary Type of lateral boundary condition (if the model is a regional model) 10.1. Condition Is Required Step33: 11. Dynamical Core --&gt; Diffusion Horizontal Horizontal diffusion scheme 11.1. Scheme Name Is Required Step34: 11.2. Scheme Method Is Required Step35: 12. Dynamical Core --&gt; Advection Tracers Tracer advection scheme 12.1. Scheme Name Is Required Step36: 12.2. Scheme Characteristics Is Required Step37: 12.3. Conserved Quantities Is Required Step38: 12.4. Conservation Method Is Required Step39: 13. Dynamical Core --&gt; Advection Momentum Momentum advection scheme 13.1. Scheme Name Is Required Step40: 13.2. Scheme Characteristics Is Required Step41: 13.3. Scheme Staggering Type Is Required Step42: 13.4. Conserved Quantities Is Required Step43: 13.5. Conservation Method Is Required Step44: 14. Radiation Characteristics of the atmosphere radiation process 14.1. Aerosols Is Required Step45: 15. Radiation --&gt; Shortwave Radiation Properties of the shortwave radiation scheme 15.1. Overview Is Required Step46: 15.2. Name Is Required Step47: 15.3. Spectral Integration Is Required Step48: 15.4. Transport Calculation Is Required Step49: 15.5. Spectral Intervals Is Required Step50: 16. Radiation --&gt; Shortwave GHG Representation of greenhouse gases in the shortwave radiation scheme 16.1. Greenhouse Gas Complexity Is Required Step51: 16.2. ODS Is Required Step52: 16.3. Other Flourinated Gases Is Required Step53: 17. Radiation --&gt; Shortwave Cloud Ice Shortwave radiative properties of ice crystals in clouds 17.1. General Interactions Is Required Step54: 17.2. Physical Representation Is Required Step55: 17.3. Optical Methods Is Required Step56: 18. Radiation --&gt; Shortwave Cloud Liquid Shortwave radiative properties of liquid droplets in clouds 18.1. General Interactions Is Required Step57: 18.2. Physical Representation Is Required Step58: 18.3. Optical Methods Is Required Step59: 19. Radiation --&gt; Shortwave Cloud Inhomogeneity Cloud inhomogeneity in the shortwave radiation scheme 19.1. Cloud Inhomogeneity Is Required Step60: 20. Radiation --&gt; Shortwave Aerosols Shortwave radiative properties of aerosols 20.1. General Interactions Is Required Step61: 20.2. Physical Representation Is Required Step62: 20.3. Optical Methods Is Required Step63: 21. Radiation --&gt; Shortwave Gases Shortwave radiative properties of gases 21.1. General Interactions Is Required Step64: 22. Radiation --&gt; Longwave Radiation Properties of the longwave radiation scheme 22.1. Overview Is Required Step65: 22.2. Name Is Required Step66: 22.3. Spectral Integration Is Required Step67: 22.4. 
Transport Calculation Is Required Step68: 22.5. Spectral Intervals Is Required Step69: 23. Radiation --&gt; Longwave GHG Representation of greenhouse gases in the longwave radiation scheme 23.1. Greenhouse Gas Complexity Is Required Step70: 23.2. ODS Is Required Step71: 23.3. Other Flourinated Gases Is Required Step72: 24. Radiation --&gt; Longwave Cloud Ice Longwave radiative properties of ice crystals in clouds 24.1. General Interactions Is Required Step73: 24.2. Physical Reprenstation Is Required Step74: 24.3. Optical Methods Is Required Step75: 25. Radiation --&gt; Longwave Cloud Liquid Longwave radiative properties of liquid droplets in clouds 25.1. General Interactions Is Required Step76: 25.2. Physical Representation Is Required Step77: 25.3. Optical Methods Is Required Step78: 26. Radiation --&gt; Longwave Cloud Inhomogeneity Cloud inhomogeneity in the longwave radiation scheme 26.1. Cloud Inhomogeneity Is Required Step79: 27. Radiation --&gt; Longwave Aerosols Longwave radiative properties of aerosols 27.1. General Interactions Is Required Step80: 27.2. Physical Representation Is Required Step81: 27.3. Optical Methods Is Required Step82: 28. Radiation --&gt; Longwave Gases Longwave radiative properties of gases 28.1. General Interactions Is Required Step83: 29. Turbulence Convection Atmosphere Convective Turbulence and Clouds 29.1. Overview Is Required Step84: 30. Turbulence Convection --&gt; Boundary Layer Turbulence Properties of the boundary layer turbulence scheme 30.1. Scheme Name Is Required Step85: 30.2. Scheme Type Is Required Step86: 30.3. Closure Order Is Required Step87: 30.4. Counter Gradient Is Required Step88: 31. Turbulence Convection --&gt; Deep Convection Properties of the deep convection scheme 31.1. Scheme Name Is Required Step89: 31.2. Scheme Type Is Required Step90: 31.3. Scheme Method Is Required Step91: 31.4. Processes Is Required Step92: 31.5. Microphysics Is Required Step93: 32. Turbulence Convection --&gt; Shallow Convection Properties of the shallow convection scheme 32.1. Scheme Name Is Required Step94: 32.2. Scheme Type Is Required Step95: 32.3. Scheme Method Is Required Step96: 32.4. Processes Is Required Step97: 32.5. Microphysics Is Required Step98: 33. Microphysics Precipitation Large Scale Cloud Microphysics and Precipitation 33.1. Overview Is Required Step99: 34. Microphysics Precipitation --&gt; Large Scale Precipitation Properties of the large scale precipitation scheme 34.1. Scheme Name Is Required Step100: 34.2. Hydrometeors Is Required Step101: 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics Properties of the large scale cloud microphysics scheme 35.1. Scheme Name Is Required Step102: 35.2. Processes Is Required Step103: 36. Cloud Scheme Characteristics of the cloud scheme 36.1. Overview Is Required Step104: 36.2. Name Is Required Step105: 36.3. Atmos Coupling Is Required Step106: 36.4. Uses Separate Treatment Is Required Step107: 36.5. Processes Is Required Step108: 36.6. Prognostic Scheme Is Required Step109: 36.7. Diagnostic Scheme Is Required Step110: 36.8. Prognostic Variables Is Required Step111: 37. Cloud Scheme --&gt; Optical Cloud Properties Optical cloud properties 37.1. Cloud Overlap Method Is Required Step112: 37.2. Cloud Inhomogeneity Is Required Step113: 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution Sub-grid scale water distribution 38.1. Type Is Required Step114: 38.2. Function Name Is Required Step115: 38.3. Function Order Is Required Step116: 38.4. Convection Coupling Is Required Step117: 39. 
Cloud Scheme --&gt; Sub Grid Scale Ice Distribution Sub-grid scale ice distribution 39.1. Type Is Required Step118: 39.2. Function Name Is Required Step119: 39.3. Function Order Is Required Step120: 39.4. Convection Coupling Is Required Step121: 40. Observation Simulation Characteristics of observation simulation 40.1. Overview Is Required Step122: 41. Observation Simulation --&gt; Isscp Attributes ISSCP Characteristics 41.1. Top Height Estimation Method Is Required Step123: 41.2. Top Height Direction Is Required Step124: 42. Observation Simulation --&gt; Cosp Attributes CFMIP Observational Simulator Package attributes 42.1. Run Configuration Is Required Step125: 42.2. Number Of Grid Points Is Required Step126: 42.3. Number Of Sub Columns Is Required Step127: 42.4. Number Of Levels Is Required Step128: 43. Observation Simulation --&gt; Radar Inputs Characteristics of the cloud radar simulator 43.1. Frequency Is Required Step129: 43.2. Type Is Required Step130: 43.3. Gas Absorption Is Required Step131: 43.4. Effective Radius Is Required Step132: 44. Observation Simulation --&gt; Lidar Inputs Characteristics of the cloud lidar simulator 44.1. Ice Types Is Required Step133: 44.2. Overlap Is Required Step134: 45. Gravity Waves Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources. 45.1. Overview Is Required Step135: 45.2. Sponge Layer Is Required Step136: 45.3. Background Is Required Step137: 45.4. Subgrid Scale Orography Is Required Step138: 46. Gravity Waves --&gt; Orographic Gravity Waves Gravity waves generated due to the presence of orography 46.1. Name Is Required Step139: 46.2. Source Mechanisms Is Required Step140: 46.3. Calculation Method Is Required Step141: 46.4. Propagation Scheme Is Required Step142: 46.5. Dissipation Scheme Is Required Step143: 47. Gravity Waves --&gt; Non Orographic Gravity Waves Gravity waves generated by non-orographic processes. 47.1. Name Is Required Step144: 47.2. Source Mechanisms Is Required Step145: 47.3. Calculation Method Is Required Step146: 47.4. Propagation Scheme Is Required Step147: 47.5. Dissipation Scheme Is Required Step148: 48. Solar Top of atmosphere solar insolation characteristics 48.1. Overview Is Required Step149: 49. Solar --&gt; Solar Pathways Pathways for solar forcing of the atmosphere 49.1. Pathways Is Required Step150: 50. Solar --&gt; Solar Constant Solar constant and top of atmosphere insolation characteristics 50.1. Type Is Required Step151: 50.2. Fixed Value Is Required Step152: 50.3. Transient Characteristics Is Required Step153: 51. Solar --&gt; Orbital Parameters Orbital parameters and top of atmosphere insolation characteristics 51.1. Type Is Required Step154: 51.2. Fixed Reference Date Is Required Step155: 51.3. Transient Method Is Required Step156: 51.4. Computation Method Is Required Step157: 52. Solar --&gt; Insolation Ozone Impact of solar insolation on stratospheric ozone 52.1. Solar Ozone Impact Is Required Step158: 53. Volcanos Characteristics of the implementation of volcanoes 53.1. Overview Is Required Step159: 54. Volcanos --&gt; Volcanoes Treatment Treatment of volcanoes in the atmosphere 54.1. Volcanoes Implementation Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'inpe', 'sandbox-1', 'atmos') Explanation: ES-DOC CMIP6 Model Properties - Atmos MIP Era: CMIP6 Institute: INPE Source ID: SANDBOX-1 Topic: Atmos Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. Properties: 156 (127 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:06 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties --&gt; Overview 2. Key Properties --&gt; Resolution 3. Key Properties --&gt; Timestepping 4. Key Properties --&gt; Orography 5. Grid --&gt; Discretisation 6. Grid --&gt; Discretisation --&gt; Horizontal 7. Grid --&gt; Discretisation --&gt; Vertical 8. Dynamical Core 9. Dynamical Core --&gt; Top Boundary 10. Dynamical Core --&gt; Lateral Boundary 11. Dynamical Core --&gt; Diffusion Horizontal 12. Dynamical Core --&gt; Advection Tracers 13. Dynamical Core --&gt; Advection Momentum 14. Radiation 15. Radiation --&gt; Shortwave Radiation 16. Radiation --&gt; Shortwave GHG 17. Radiation --&gt; Shortwave Cloud Ice 18. Radiation --&gt; Shortwave Cloud Liquid 19. Radiation --&gt; Shortwave Cloud Inhomogeneity 20. Radiation --&gt; Shortwave Aerosols 21. Radiation --&gt; Shortwave Gases 22. Radiation --&gt; Longwave Radiation 23. Radiation --&gt; Longwave GHG 24. Radiation --&gt; Longwave Cloud Ice 25. Radiation --&gt; Longwave Cloud Liquid 26. Radiation --&gt; Longwave Cloud Inhomogeneity 27. Radiation --&gt; Longwave Aerosols 28. Radiation --&gt; Longwave Gases 29. Turbulence Convection 30. Turbulence Convection --&gt; Boundary Layer Turbulence 31. Turbulence Convection --&gt; Deep Convection 32. Turbulence Convection --&gt; Shallow Convection 33. Microphysics Precipitation 34. Microphysics Precipitation --&gt; Large Scale Precipitation 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics 36. Cloud Scheme 37. Cloud Scheme --&gt; Optical Cloud Properties 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution 40. Observation Simulation 41. Observation Simulation --&gt; Isscp Attributes 42. Observation Simulation --&gt; Cosp Attributes 43. Observation Simulation --&gt; Radar Inputs 44. Observation Simulation --&gt; Lidar Inputs 45. Gravity Waves 46. Gravity Waves --&gt; Orographic Gravity Waves 47. Gravity Waves --&gt; Non Orographic Gravity Waves 48. Solar 49. Solar --&gt; Solar Pathways 50. Solar --&gt; Solar Constant 51. Solar --&gt; Orbital Parameters 52. Solar --&gt; Insolation Ozone 53. Volcanos 54. 
Volcanos --&gt; Volcanoes Treatment 1. Key Properties --&gt; Overview Top level key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "AGCM" # "ARCM" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of atmospheric model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "primitive equations" # "non-hydrostatic" # "anelastic" # "Boussinesq" # "hydrostatic" # "quasi-hydrostatic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the atmosphere. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Resolution Characteristics of the model resolution 2.1. Horizontal Resolution Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.4. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on the computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 2.5. High Top Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Timestepping Characteristics of the atmosphere model time stepping 3.1. Timestep Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the dynamics, e.g. 30 min. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.2. Timestep Shortwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the shortwave radiative transfer, e.g. 1.5 hours. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Timestep Longwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the longwave radiative transfer, e.g. 3 hours. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "modified" # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Orography Characteristics of the model orography 4.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the orography. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.changes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "related to ice sheets" # "related to tectonics" # "modified mean" # "modified variance if taken into account in model (cf gravity waves)" # TODO - please enter value(s) Explanation: 4.2. Changes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N If the orography type is modified describe the time adaptation changes. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Grid --&gt; Discretisation Atmosphere grid discretisation 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of grid discretisation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "spectral" # "fixed grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6. 
Grid --&gt; Discretisation --&gt; Horizontal Atmosphere discretisation in the horizontal 6.1. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "finite elements" # "finite volumes" # "finite difference" # "centered finite difference" # TODO - please enter value(s) Explanation: 6.2. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "second" # "third" # "fourth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.3. Scheme Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation function order End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "filter" # "pole rotation" # "artificial island" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.4. Horizontal Pole Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal discretisation pole singularity treatment End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Gaussian" # "Latitude-Longitude" # "Cubed-Sphere" # "Icosahedral" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.5. Grid Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "isobaric" # "sigma" # "hybrid sigma-pressure" # "hybrid pressure" # "vertically lagrangian" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 7. Grid --&gt; Discretisation --&gt; Vertical Atmosphere discretisation in the vertical 7.1. Coordinate Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Type of vertical coordinate system End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Dynamical Core Characteristics of the dynamical core 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere dynamical core End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the dynamical core of the model. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Adams-Bashforth" # "explicit" # "implicit" # "semi-implicit" # "leap frog" # "multi-step" # "Runge Kutta fifth order" # "Runge Kutta second order" # "Runge Kutta third order" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.3. Timestepping Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestepping framework type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "surface pressure" # "wind components" # "divergence/curl" # "temperature" # "potential temperature" # "total water" # "water vapour" # "water liquid" # "water ice" # "total water moments" # "clouds" # "radiation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of the model prognostic variables End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9. Dynamical Core --&gt; Top Boundary Type of boundary layer at the top of the model 9.1. Top Boundary Condition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary condition End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.2. Top Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary heat treatment End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.3. Top Wind Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary wind treatment End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. Dynamical Core --&gt; Lateral Boundary Type of lateral boundary condition (if the model is a regional model) 10.1. Condition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Type of lateral boundary condition End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11. Dynamical Core --&gt; Diffusion Horizontal Horizontal diffusion scheme 11.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal diffusion scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "iterated Laplacian" # "bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal diffusion scheme method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heun" # "Roe and VanLeer" # "Roe and Superbee" # "Prather" # "UTOPIA" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12. Dynamical Core --&gt; Advection Tracers Tracer advection scheme 12.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Tracer advection scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Eulerian" # "modified Euler" # "Lagrangian" # "semi-Lagrangian" # "cubic semi-Lagrangian" # "quintic semi-Lagrangian" # "mass-conserving" # "finite volume" # "flux-corrected" # "linear" # "quadratic" # "quartic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.2. Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme characteristics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "dry mass" # "tracer mass" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.3. Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme conserved quantities End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Priestley algorithm" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.4. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracer advection scheme conservation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "VanLeer" # "Janjic" # "SUPG (Streamline Upwind Petrov-Galerkin)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Dynamical Core --&gt; Advection Momentum Momentum advection scheme 13.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Momentum advection schemes name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "2nd order" # "4th order" # "cell-centred" # "staggered grid" # "semi-staggered grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. 
Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme characteristics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa D-grid" # "Arakawa E-grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.3. Scheme Staggering Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme staggering type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Angular momentum" # "Horizontal momentum" # "Enstrophy" # "Mass" # "Total energy" # "Vorticity" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.4. Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme conserved quantities End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.5. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme conservation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.aerosols') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "sulphate" # "nitrate" # "sea salt" # "dust" # "ice" # "organic" # "BC (black carbon / soot)" # "SOA (secondary organic aerosols)" # "POM (particulate organic matter)" # "polar stratospheric ice" # "NAT (nitric acid trihydrate)" # "NAD (nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particle)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Radiation Characteristics of the atmosphere radiation process 14.1. Aerosols Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Aerosols whose radiative effect is taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Radiation --&gt; Shortwave Radiation Properties of the shortwave radiation scheme 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of shortwave radiation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme spectral integration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Shortwave radiation transport calculation methods End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme number of spectral intervals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16. Radiation --&gt; Shortwave GHG Representation of greenhouse gases in the shortwave radiation scheme 16.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.2. ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.3. 
Other Flourinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17. Radiation --&gt; Shortwave Cloud Ice Shortwave radiative properties of ice crystals in clouds 17.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud ice crystals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18. Radiation --&gt; Shortwave Cloud Liquid Shortwave radiative properties of liquid droplets in clouds 18.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud liquid droplets End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19. Radiation --&gt; Shortwave Cloud Inhomogeneity Cloud inhomogeneity in the shortwave radiation scheme 19.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20. Radiation --&gt; Shortwave Aerosols Shortwave radiative properties of aerosols 20.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with aerosols End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21. Radiation --&gt; Shortwave Gases Shortwave radiative properties of gases 21.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with gases End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22. Radiation --&gt; Longwave Radiation Properties of the longwave radiation scheme 22.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of longwave radiation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the longwave radiation scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme spectral integration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Longwave radiation transport calculation methods End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 22.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme number of spectral intervals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Radiation --&gt; Longwave GHG Representation of greenhouse gases in the longwave radiation scheme 23.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.2. 
ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.3. Other Flourinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24. Radiation --&gt; Longwave Cloud Ice Longwave radiative properties of ice crystals in clouds 24.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud ice crystals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24.2. Physical Reprenstation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25. Radiation --&gt; Longwave Cloud Liquid Longwave radiative properties of liquid droplets in clouds 25.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud liquid droplets End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26. Radiation --&gt; Longwave Cloud Inhomogeneity Cloud inhomogeneity in the longwave radiation scheme 26.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27. Radiation --&gt; Longwave Aerosols Longwave radiative properties of aerosols 27.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with aerosols End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 28. Radiation --&gt; Longwave Gases Longwave radiative properties of gases 28.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with gases End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29. Turbulence Convection Atmosphere Convective Turbulence and Clouds 29.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere convection and turbulence End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Mellor-Yamada" # "Holtslag-Boville" # "EDMF" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30. Turbulence Convection --&gt; Boundary Layer Turbulence Properties of the boundary layer turbulence scheme 30.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Boundary layer turbulence scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "TKE prognostic" # "TKE diagnostic" # "TKE coupled with water" # "vertical profile of Kz" # "non-local diffusion" # "Monin-Obukhov similarity" # "Coastal Buddy Scheme" # "Coupled with convection" # "Coupled with gravity waves" # "Depth capped at cloud base" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Boundary layer turbulence scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Closure Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boundary layer turbulence scheme closure order End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 30.4. Counter Gradient Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Uses boundary layer turbulence scheme counter gradient End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 31. Turbulence Convection --&gt; Deep Convection Properties of the deep convection scheme 31.1. 
Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Deep convection scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "adjustment" # "plume ensemble" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CAPE" # "bulk" # "ensemble" # "CAPE/WFN based" # "TKE/CIN based" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vertical momentum transport" # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "updrafts" # "downdrafts" # "radiative effect of anvils" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of deep convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.5. Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32. Turbulence Convection --&gt; Shallow Convection Properties of the shallow convection scheme 32.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Shallow convection scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "cumulus-capped boundary layer" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N shallow convection scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "same as deep (unified)" # "included in boundary layer turbulence" # "separate diagnosis" # TODO - please enter value(s) Explanation: 32.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 shallow convection scheme method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of shallow convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.5. Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for shallow convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33. Microphysics Precipitation Large Scale Cloud Microphysics and Precipitation 33.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of large scale cloud microphysics and precipitation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34. Microphysics Precipitation --&gt; Large Scale Precipitation Properties of the large scale precipitation scheme 34.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the large scale precipitation parameterisation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "liquid rain" # "snow" # "hail" # "graupel" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 34.2. Hydrometeors Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Precipitating hydrometeors taken into account in the large scale precipitation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics Properties of the large scale cloud microphysics scheme 35.1. 
Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the microphysics parameterisation scheme used for large scale clouds. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mixed phase" # "cloud droplets" # "cloud ice" # "ice nucleation" # "water vapour deposition" # "effect of raindrops" # "effect of snow" # "effect of graupel" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 35.2. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Large scale cloud microphysics processes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36. Cloud Scheme Characteristics of the cloud scheme 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the atmosphere cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "atmosphere_radiation" # "atmosphere_microphysics_precipitation" # "atmosphere_turbulence_convection" # "atmosphere_gravity_waves" # "atmosphere_solar" # "atmosphere_volcano" # "atmosphere_cloud_simulator" # TODO - please enter value(s) Explanation: 36.3. Atmos Coupling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Atmosphere components that are linked to the cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 36.4. Uses Separate Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "entrainment" # "detrainment" # "bulk cloud" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 36.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Processes included in the cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 36.6. Prognostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a prognostic scheme? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 36.7. Diagnostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a diagnostic scheme? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud amount" # "liquid" # "ice" # "rain" # "snow" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 36.8. Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List the prognostic variables used by the cloud scheme, if applicable. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "random" # "maximum" # "maximum-random" # "exponential" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 37. Cloud Scheme --&gt; Optical Cloud Properties Optical cloud properties 37.1. Cloud Overlap Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account overlapping of cloud layers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.2. Cloud Inhomogeneity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account cloud inhomogeneity End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) Explanation: 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution Sub-grid scale water distribution 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 38.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 38.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) Explanation: 38.4. 
Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale water distribution coupling with convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) Explanation: 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution Sub-grid scale ice distribution 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 39.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 39.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) Explanation: 39.4. Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale ice distribution coupling with convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 40. Observation Simulation Characteristics of observation simulation 40.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of observation simulator characteristics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "no adjustment" # "IR brightness" # "visible optical depth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41. Observation Simulation --&gt; Isscp Attributes ISSCP Characteristics 41.1. Top Height Estimation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator ISSCP top height estimation methodUo End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "lowest altitude level" # "highest altitude level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41.2. 
Top Height Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator ISSCP top height direction End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Inline" # "Offline" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 42. Observation Simulation --&gt; Cosp Attributes CFMIP Observational Simulator Package attributes 42.1. Run Configuration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP run configuration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 42.2. Number Of Grid Points Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of grid points End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 42.3. Number Of Sub Columns Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of sub-cloumns used to simulate sub-grid variability End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 42.4. Number Of Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of levels End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 43. Observation Simulation --&gt; Radar Inputs Characteristics of the cloud radar simulator 43.1. Frequency Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar frequency (Hz) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "surface" # "space borne" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 43.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 43.3. Gas Absorption Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses gas absorption End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 43.4. 
Effective Radius Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses effective radius End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "ice spheres" # "ice non-spherical" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 44. Observation Simulation --&gt; Lidar Inputs Characteristics of the cloud lidar simulator 44.1. Ice Types Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator lidar ice type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "max" # "random" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 44.2. Overlap Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator lidar overlap End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 45. Gravity Waves Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources. 45.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of gravity wave parameterisation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Rayleigh friction" # "Diffusive sponge layer" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 45.2. Sponge Layer Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sponge layer in the upper levels in order to avoid gravity wave reflection at the top. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "continuous spectrum" # "discrete spectrum" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 45.3. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background wave distribution End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "effect on drag" # "effect on lifting" # "enhanced topography" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 45.4. Subgrid Scale Orography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Subgrid scale orography effects taken into account. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 46. Gravity Waves --&gt; Orographic Gravity Waves Gravity waves generated due to the presence of orography 46.1. 
Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the orographic gravity wave scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "linear mountain waves" # "hydraulic jump" # "envelope orography" # "low level flow blocking" # "statistical sub-grid scale variance" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 46.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave source mechanisms End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "non-linear calculation" # "more than two cardinal directions" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 46.3. Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave calculation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "includes boundary layer ducting" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 46.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Orographic gravity wave propogation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 46.5. Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Orographic gravity wave dissipation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 47. Gravity Waves --&gt; Non Orographic Gravity Waves Gravity waves generated by non-orographic processes. 47.1. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the non-orographic gravity wave scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convection" # "precipitation" # "background spectrum" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 47.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave source mechanisms End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "spatially dependent" # "temporally dependent" # TODO - please enter value(s) Explanation: 47.3. Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave calculation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 47.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave propogation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 47.5. Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave dissipation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 48. Solar Top of atmosphere solar insolation characteristics 48.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of solar insolation of the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "SW radiation" # "precipitating energetic particles" # "cosmic rays" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 49. Solar --&gt; Solar Pathways Pathways for solar forcing of the atmosphere 49.1. Pathways Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Pathways for the solar forcing of the atmosphere model domain End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) Explanation: 50. Solar --&gt; Solar Constant Solar constant and top of atmosphere insolation characteristics 50.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the solar constant. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 50.2. Fixed Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the solar constant is fixed, enter the value of the solar constant (W m-2). End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 50.3. Transient Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 solar constant transient characteristics (W m-2) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) Explanation: 51. Solar --&gt; Orbital Parameters Orbital parameters and top of atmosphere insolation characteristics 51.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of orbital parameters End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 51.2. Fixed Reference Date Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date for fixed orbital parameters (yyyy) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 51.3. Transient Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of transient orbital parameters End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Berger 1978" # "Laskar 2004" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 51.4. Computation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used for computing orbital parameters. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 52. Solar --&gt; Insolation Ozone Impact of solar insolation on stratospheric ozone 52.1. Solar Ozone Impact Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does top of atmosphere insolation impact on stratospheric ozone? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 53. Volcanos Characteristics of the implementation of volcanoes 53.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the implementation of volcanic effects in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "high frequency solar constant anomaly" # "stratospheric aerosols optical thickness" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 54. Volcanos --&gt; Volcanoes Treatment Treatment of volcanoes in the atmosphere 54.1. 
Volcanoes Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How volcanic effects are modeled in the atmosphere. End of explanation
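As a purely illustrative aside (not part of the official questionnaire above), the fill-in pattern used by every cell in this notebook can be completed as sketched below, assuming the DOC helper behaves exactly as the template comments describe: set_id selects the property, set_value records an answer, and a 1.N property takes one set_value call per selected choice. The chosen answers are hypothetical examples, not a description of any real model.
# Hypothetical completion of three of the longwave radiation properties shown earlier.
# The selected values are illustrative assumptions only.
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
DOC.set_value("correlated-k")      # ENUM, cardinality 1.1: exactly one valid choice

DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
DOC.set_value("two-stream")        # ENUM, cardinality 1.N: repeat set_value once per selected choice

DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
DOC.set_value(16)                  # INTEGER, cardinality 1.1 (hypothetical count)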
8,251
Given the following text description, write Python code to implement the functionality described below step by step Description: Video Codec Unit (VCU) Demo Example Step1: Run the Demo Step2: Insert input file path and host IP Step3: Output Format Step4: Advanced options
Python Code: from IPython.display import HTML HTML('''<script> code_show=true; function code_toggle() { if (code_show){ $('div.input').hide(); } else { $('div.input').show(); } code_show = !code_show } $( document ).ready(code_toggle); </script> <form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''') Explanation: Video Codec Unit (VCU) Demo Example: TRANSCODE -> STREAMOUT Introduction The Video Codec Unit (VCU) in the ZynqMP SoC is capable of encoding and decoding AVC/HEVC compressed video streams in real time. This notebook shows a file streaming use case using two ZCU106 boards, wherein board-1 (acting as the server) transcodes the video and streams the encoded data over Ethernet, and board-2 (acting as the client) receives the data, decodes it and renders it on a DP/HDMI monitor. Implementation Details <img src="pictures/block-diagram-transcode-streamout.png" align="center" alt="Drawing" style="width: 1000px; height: 200px"/> This example requires two boards: board-1 is used for transcode and stream-out (as a server) and board-2 is used for stream-in and decode (as a client); alternatively, a VLC player on the host machine can be used as the client instead of board-2 (more details regarding the test setup for board-2 can be found in the stream-in → Decode Example). Note: This notebook needs to be run along with "vcu-demo-streamin-decode-display.ipynb". The configuration settings below are for the server-side pipeline. Board Setup Board-1 is used for transcode and stream-out (as a server) 1. Connect a serial cable to monitor logs on the serial console. 2. If the board is connected to a private network, then export proxy settings in the /home/root/.bashrc file on the board as below, - create/open a bashrc file using "vi ~/.bashrc" - Insert the lines below into the bashrc file - export http_proxy="< private network proxy address >" - export https_proxy="< private network proxy address >" - Save and close the bashrc file. 3. Make sure the input video file is copied to board-1 for streaming. 4. Connect the two boards to the same network so that they can access each other by IP address. 5. Check the server IP. - root@zcu106-zynqmp:~#ifconfig 6. Check the client IP on the client board. 7. Check connectivity between board-1 & board-2. - root@zcu106-zynqmp:~#ping <board-2's IP> 8. Provide the client board's IP as the Client IP parameter. 9.
Run Transcode → stream-out on board-1 End of explanation from ipywidgets import interact import ipywidgets as widgets from common import common_vcu_demo_transcode_to_streamout import os from ipywidgets import HBox, VBox, Text, Layout Explanation: Run the Demo End of explanation input_path=widgets.Text(value='', placeholder='Insert file path', description='Input File:', #style={'description_width': 'initial'}, disabled=False) address_path=widgets.Text(value='', placeholder='192.168.1.101 ', description='Client IP:', disabled=False) port_number=widgets.Text(value='', placeholder='(optional) 50000', description='Port No:', disabled=False) HBox([input_path, address_path, port_number]) Explanation: Insert input file path and host IP End of explanation codec_type=widgets.RadioButtons( options=['avc', 'hevc'], description='Video Codec:', disabled=False) codec_type Explanation: Output Format End of explanation periodicity_idr=widgets.Text(value='', placeholder='(optional) 30, 40, 50', description='Periodicity Idr:', style={'description_width': 'initial'}, #layout=Layout(width='35%', height='30px'), disabled=False) cpb_size=widgets.Text(value='', placeholder='(optional) 1000,2000', description='CPB Size:', #style={'description_width': 'initial'}, #layout=Layout(width='35%', height='30px'), disabled=False) HBox([periodicity_idr, cpb_size]) gop_length=widgets.Text(value='', placeholder='(optional) 30, 60', description='Gop Length:', disabled=False) bit_rate=widgets.Text(value='', placeholder='(optional) 1000, 20000', description='Bit Rate(Kbps):', style={'description_width': 'initial'}, disabled=False) HBox([bit_rate, gop_length]) entropy_buffers=widgets.Dropdown( options=['2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15'], value='5', description='Entropy Buffers Nos:', style={'description_width': 'initial'}, disabled=False,) entropy_buffers HBox([entropy_buffers]) #HBox([port_number, optional]) from IPython.display import clear_output from IPython.display import Javascript def run_all(ev): display(Javascript('IPython.notebook.execute_cells_below()')) def clear_op(event): clear_output(wait=True) return button1 = widgets.Button( description='Clear Output', style= {'button_color':'lightgreen'}, #style= {'button_color':'lightgreen', 'description_width': 'initial'}, layout={'width': '300px'} ) button2 = widgets.Button( description='', style= {'button_color':'white'}, #style= {'button_color':'lightgreen', 'description_width': 'initial'}, layout={'width': '82px'}, disabled=True ) button1.on_click(run_all) button1.on_click(clear_op) def start_demo(event): #clear_output(wait=True) arg = common_vcu_demo_transcode_to_streamout.cmd_line_args_generator(input_path.value, bit_rate.value, codec_type.value, address_path.value, port_number.value, entropy_buffers.value, gop_length.value, periodicity_idr.value, cpb_size.value); #!sh vcu-demo-transcode-to-streamout.sh $arg > logs.txt 2>&1 !sh vcu-demo-transcode-to-streamout.sh $arg return button = widgets.Button( description='click to start vcu-transcode-to-streamout demo', style= {'button_color':'lightgreen'}, #style= {'button_color':'lightgreen', 'description_width': 'initial'}, layout={'width': '300px'} ) button.on_click(start_demo) HBox([button, button2, button1]) Explanation: Advanced options: End of explanation
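Note that the widgets above only collect parameters; the actual transcode and stream-out pipeline is assembled inside vcu-demo-transcode-to-streamout.sh, which is not shown in this notebook. Purely as a sketch of what such a server-side pipeline typically looks like, the command could be built and launched from Python as below. The GStreamer element names, properties and default values here are assumptions for illustration, not the script's real contents.
import subprocess

def sketch_transcode_streamout(input_file, client_ip, port="50000", codec="avc", bitrate_kbps="1000"):
    # Hypothetical pipeline: decode the input file, re-encode on the VCU,
    # packetise as RTP and send over UDP to the client board.
    if codec == "avc":
        enc, parse, pay = "omxh264enc", "h264parse", "rtph264pay"
    else:  # "hevc"
        enc, parse, pay = "omxh265enc", "h265parse", "rtph265pay"
    cmd = (f"gst-launch-1.0 uridecodebin uri=file://{input_file} ! "
           f"{enc} target-bitrate={bitrate_kbps} ! {parse} ! {pay} ! "
           f"udpsink host={client_ip} port={port}")
    print(cmd)
    # subprocess.run(cmd, shell=True)  # uncomment to actually launch the sketch

# Hypothetical usage: sketch_transcode_streamout('/home/root/test.mp4', '192.168.1.101')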
8,252
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2018 The TensorFlow Authors. Step1: Running TFLite models <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: Create a basic model of the form y = mx + c Step3: Generate a SavedModel Step4: Convert the SavedModel to TFLite Step5: Initialize the TFLite interpreter to try it out Step6: Visualize the model Step7: Download the TFLite model file
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2018 The TensorFlow Authors. End of explanation import tensorflow as tf import pathlib import numpy as np import matplotlib.pyplot as plt from tensorflow.keras.models import Model from tensorflow.keras.layers import Input Explanation: Running TFLite models <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_lite/tflite_c01_linear_regression.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_lite/tflite_c01_linear_regression.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a> </td> </table> Setup End of explanation # Create a simple Keras model. x = [-1, 0, 1, 2, 3, 4] y = [-3, -1, 1, 3, 5, 7] model = tf.keras.models.Sequential([ tf.keras.layers.Dense(units=1, input_shape=[1]) ]) model.compile(optimizer='sgd', loss='mean_squared_error') model.fit(x, y, epochs=200, verbose=1) Explanation: Create a basic model of the form y = mx + c End of explanation export_dir = 'saved_model/1' tf.saved_model.save(model, export_dir) Explanation: Generate a SavedModel End of explanation # Convert the model. converter = tf.lite.TFLiteConverter.from_saved_model(export_dir) tflite_model = converter.convert() tflite_model_file = pathlib.Path('model.tflite') tflite_model_file.write_bytes(tflite_model) Explanation: Convert the SavedModel to TFLite End of explanation # Load TFLite model and allocate tensors. interpreter = tf.lite.Interpreter(model_content=tflite_model) interpreter.allocate_tensors() # Get input and output tensors. input_details = interpreter.get_input_details() output_details = interpreter.get_output_details() # Test the TensorFlow Lite model on random input data. input_shape = input_details[0]['shape'] inputs, outputs = [], [] for _ in range(100): input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32) interpreter.set_tensor(input_details[0]['index'], input_data) interpreter.invoke() tflite_results = interpreter.get_tensor(output_details[0]['index']) # Test the TensorFlow model on random input data. tf_results = model(tf.constant(input_data)) output_data = np.array(tf_results) inputs.append(input_data[0][0]) outputs.append(output_data[0][0]) Explanation: Initialize the TFLite interpreter to try it out End of explanation plt.plot(inputs, outputs, 'r') plt.show() Explanation: Visualize the model End of explanation try: from google.colab import files files.download(tflite_model_file) except: pass Explanation: Download the TFLite model file End of explanation
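As an optional sanity check (an addition, not part of the original notebook), the interpreter can also be rebuilt from the model.tflite file that was just written, to confirm the saved artifact behaves like the in-memory model; the trained line is approximately y = 2x - 1, so an input of 3 should give roughly 5. This assumes tf, np and tflite_model_file from the cells above.
# Optional check: load the interpreter from the saved file instead of the in-memory bytes.
file_interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))
file_interpreter.allocate_tensors()
inp = file_interpreter.get_input_details()[0]
out = file_interpreter.get_output_details()[0]
sample = np.array([[3.0]], dtype=np.float32)   # expect a value close to 2*3 - 1 = 5
file_interpreter.set_tensor(inp['index'], sample)
file_interpreter.invoke()
print(file_interpreter.get_tensor(out['index']))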
8,253
Given the following text description, write Python code to implement the functionality described below step by step Description: Blind Source Separation with the Shogun Machine Learning Toolbox By Kevin Hughes This notebook illustrates <a href="http Step1: Next we're going to need a way to play the audio files we're working with (otherwise this wouldn't be very exciting at all would it?). In the next bit of code I've defined a wavPlayer class that takes the signal and the sample rate and then creates a nice HTML5 webplayer right inline with the notebook. Step2: Now that we can load and play wav files we actually need some wav files! I found the sounds from Starcraft to be a great source of wav files because they're short, interesting and remind me of my childhood. You can download Starcraft wav files here Step3: Now let's load a second audio clip Step4: and a third audio clip Step5: Now we've got our audio files loaded up into our example program. The next thing we need to do is mix them together! First another nuance - what if the audio clips aren't the same lenth? The solution I came up with for this was to simply resize them all to the length of the longest signal, the extra length will just be filled with zeros so it won't affect the sound. The signals are mixed by creating a mixing matrix $A$ and taking the dot product of $A$ with the signals $S$. Afterwards I plot the mixed signals and create the wavPlayers, have a listen! Step6: Now before we can work on separating these signals we need to get the data ready for Shogun, thankfully this is pretty easy! Step7: Now lets unmix those signals! In this example I'm going to use an Independent Component Analysis (ICA) algorithm called JADE. JADE is one of the ICA algorithms available in Shogun and it works by performing Aproximate Joint Diagonalization (AJD) on a 4th order cumulant tensor. I'm not going to go into a lot of detail on how JADE works behind the scenes but here is the reference for the original paper Step8: Thats all there is to it! Check out how nicely those signals have been separated and have a listen!
Python Code: import numpy as np import os SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data') from scipy.io import wavfile from scipy.signal import resample def load_wav(filename,samplerate=44100): # load file rate, data = wavfile.read(filename) # convert stereo to mono if len(data.shape) > 1: data = data[:,0]/2 + data[:,1]/2 # re-interpolate samplerate ratio = float(samplerate) / float(rate) data = resample(data, int(len(data) * ratio)) return samplerate, data.astype(np.int16) Explanation: Blind Source Separation with the Shogun Machine Learning Toolbox By Kevin Hughes This notebook illustrates <a href="http://en.wikipedia.org/wiki/Blind_signal_separation">Blind Source Seperation</a>(BSS) on audio signals using <a href="http://en.wikipedia.org/wiki/Independent_component_analysis">Independent Component Analysis</a> (ICA) in Shogun. We generate a mixed signal and try to seperate it out using Shogun's implementation of ICA & BSS called <a href="http://www.shogun-toolbox.org/doc/en/3.0.0/classshogun_1_1CJade.html">JADE</a>. My favorite example of this problem is known as the cocktail party problem where a number of people are talking simultaneously and we want to separate each persons speech so we can listen to it separately. Now the caveat with this type of approach is that we need as many mixtures as we have source signals or in terms of the cocktail party problem we need as many microphones as people talking in the room. Let's get started, this example is going to be in python and the first thing we are going to need to do is load some audio files. To make things a bit easier further on in this example I'm going to wrap the basic scipy wav file reader and add some additional functionality. First I added a case to handle converting stereo wav files back into mono wav files and secondly this loader takes a desired sample rate and resamples the input to match. This is important because when we mix the two audio signals they need to have the same sample rate. End of explanation from IPython.display import Audio from IPython.display import display def wavPlayer(data, rate): display(Audio(data, rate=rate)) Explanation: Next we're going to need a way to play the audio files we're working with (otherwise this wouldn't be very exciting at all would it?). In the next bit of code I've defined a wavPlayer class that takes the signal and the sample rate and then creates a nice HTML5 webplayer right inline with the notebook. End of explanation # change to the shogun-data directory import os os.chdir(os.path.join(SHOGUN_DATA_DIR, 'ica')) %matplotlib inline import pylab as pl # load fs1,s1 = load_wav('tbawht02.wav') # Terran Battlecruiser - "Good day, commander." # plot pl.figure(figsize=(6.75,2)) pl.plot(s1) pl.title('Signal 1') pl.show() # player wavPlayer(s1, fs1) Explanation: Now that we can load and play wav files we actually need some wav files! I found the sounds from Starcraft to be a great source of wav files because they're short, interesting and remind me of my childhood. You can download Starcraft wav files here: http://wavs.unclebubby.com/computer/starcraft/ among other places on the web or from your Starcraft install directory (come on I know its still there). Another good source of data (although lets be honest less cool) is ICA central and various other more academic data sets: http://perso.telecom-paristech.fr/~cardoso/icacentral/base_multi.html. Note that for lots of these data sets the data will be mixed already so you'll be able to skip the next few steps. 
Okay lets load up an audio file. I chose the Terran Battlecruiser saying "Good Day Commander". In addition to the creating a wavPlayer I also plotted the data using Matplotlib (and tried my best to have the graph length match the HTML player length). Have a listen! End of explanation # load fs2,s2 = load_wav('TMaRdy00.wav') # Terran Marine - "You want a piece of me, boy?" # plot pl.figure(figsize=(6.75,2)) pl.plot(s2) pl.title('Signal 2') pl.show() # player wavPlayer(s2, fs2) Explanation: Now let's load a second audio clip: End of explanation # load fs3,s3 = load_wav('PZeRdy00.wav') # Protoss Zealot - "My life for Aiur!" # plot pl.figure(figsize=(6.75,2)) pl.plot(s3) pl.title('Signal 3') pl.show() # player wavPlayer(s3, fs3) Explanation: and a third audio clip: End of explanation # Adjust for different clip lengths fs = fs1 length = max([len(s1), len(s2), len(s3)]) s1 = np.resize(s1, (length,1)) s2 = np.resize(s2, (length,1)) s3 = np.resize(s3, (length,1)) S = (np.c_[s1, s2, s3]).T # Mixing Matrix #A = np.random.uniform(size=(3,3)) #A = A / A.sum(axis=0) A = np.array([[1, 0.5, 0.5], [0.5, 1, 0.5], [0.5, 0.5, 1]]) print('Mixing Matrix:') print(A.round(2)) # Mix Signals X = np.dot(A,S) # Mixed Signal i for i in range(X.shape[0]): pl.figure(figsize=(6.75,2)) pl.plot((X[i]).astype(np.int16)) pl.title('Mixed Signal %d' % (i+1)) pl.show() wavPlayer((X[i]).astype(np.int16), fs) Explanation: Now we've got our audio files loaded up into our example program. The next thing we need to do is mix them together! First another nuance - what if the audio clips aren't the same lenth? The solution I came up with for this was to simply resize them all to the length of the longest signal, the extra length will just be filled with zeros so it won't affect the sound. The signals are mixed by creating a mixing matrix $A$ and taking the dot product of $A$ with the signals $S$. Afterwards I plot the mixed signals and create the wavPlayers, have a listen! End of explanation from shogun import features # Convert to features for shogun mixed_signals = features((X).astype(np.float64)) Explanation: Now before we can work on separating these signals we need to get the data ready for Shogun, thankfully this is pretty easy! End of explanation from shogun import Jade # Separating with JADE jade = Jade() signals = jade.apply(mixed_signals) S_ = signals.get_real_matrix('feature_matrix') A_ = jade.get_real_matrix('mixing_matrix') A_ = A_ / A_.sum(axis=0) print('Estimated Mixing Matrix:') print(A_) Explanation: Now lets unmix those signals! In this example I'm going to use an Independent Component Analysis (ICA) algorithm called JADE. JADE is one of the ICA algorithms available in Shogun and it works by performing Aproximate Joint Diagonalization (AJD) on a 4th order cumulant tensor. I'm not going to go into a lot of detail on how JADE works behind the scenes but here is the reference for the original paper: Cardoso, J. F., & Souloumiac, A. (1993). Blind beamforming for non-Gaussian signals. In IEE Proceedings F (Radar and Signal Processing) (Vol. 140, No. 6, pp. 362-370). IET Digital Library. Shogun also has several other ICA algorithms including the Second Order Blind Identification (SOBI) algorithm, FFSep, JediSep, UWedgeSep and FastICA. All of the algorithms inherit from the ICAConverter base class and share some common methods for setting an intial guess for the mixing matrix, retrieving the final mixing matrix and getting/setting the number of iterations to run and the desired convergence tolerance. 
Some of the algorithms have additional getters for intermediate calculations, for example Jade has a method for returning the 4th order cumulant tensor while the "Sep" algorithms have a getter for the time lagged covariance matrices. Check out the source code on GitHub (https://github.com/shogun-toolbox/shogun) or the Shogun docs (http://www.shogun-toolbox.org/doc/en/latest/annotated.html) for more details! End of explanation # Show separation results # Separated Signal i gain = 4000 for i in range(S_.shape[0]): pl.figure(figsize=(6.75,2)) pl.plot((gain*S_[i]).astype(np.int16)) pl.title('Separated Signal %d' % (i+1)) pl.show() wavPlayer((gain*S_[i]).astype(np.int16), fs) Explanation: Thats all there is to it! Check out how nicely those signals have been separated and have a listen! End of explanation
8,254
Given the following text description, write Python code to implement the functionality described below step by step Description: <small><i>This notebook was prepared by mrb00l34n. Source and license info is on GitHub.</i></small> Challenge Notebook Problem Step1: Unit Test The following unit test is expected to fail until you solve the challenge.
Python Code: def change_ways(n, coins): # TODO: Implement me return n Explanation: <small><i>This notebook was prepared by mrb00l34n. Source and license info is on GitHub.</i></small> Challenge Notebook Problem: Counting Ways of Making Change Explanation Test Cases Algorithm Code Unit Test Solution Notebook Explanation How many ways are there of making change for n, given an array of distinct coins? For example: Input: n = 4, coins = [1, 2] Output: 3. 1+1+1+1, 1+2+1, 2+2, would be the ways of making change. Note that a coin can be used any number of times, and we are counting unique combinations. Test Cases Input: n = 0, coins = [1, 2] -> Output: 0 Input: n = 100, coins = [1, 2, 3] -> Output: 884 Input: n = 1000, coins = [1, 2, 3...99, 100] -> Output: 15658181104580771094597751280645 Algorithm Refer to the Solution Notebook. If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code End of explanation # %load test_coin_change_ways.py from nose.tools import assert_equal class Challenge(object): def test_coin_change_ways(self,solution): assert_equal(solution(0, [1, 2]), 0) assert_equal(solution(100, [1, 2, 3]), 884) assert_equal(solution(1000, range(1, 101)), 15658181104580771094597751280645) print('Success: test_coin_change_ways') def main(): test = Challenge() test.test_coin_change_ways(change_ways) if __name__ == '__main__': main() Explanation: Unit Test The following unit test is expected to fail until you solve the challenge. End of explanation
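One possible way to fill in the stub (a sketch, not the official solution notebook's code) is a bottom-up dynamic programming count over amounts; iterating coins in the outer loop counts combinations rather than ordered sequences, and n = 0 is treated as having no ways so it agrees with the unit test in this notebook:
def change_ways_dp(n, coins):
    # ways[v] counts the combinations of coins that sum to v
    if n <= 0:
        return 0
    ways = [1] + [0] * n
    for coin in coins:
        for amount in range(coin, n + 1):
            ways[amount] += ways[amount - coin]
    return ways[n]

print(change_ways_dp(4, [1, 2]))            # 3
print(change_ways_dp(100, [1, 2, 3]))       # 884
print(change_ways_dp(1000, range(1, 101)))  # 15658181104580771094597751280645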
8,255
Given the following text description, write Python code to implement the functionality described below step by step Description: Tarea 1 Step1: Ejercicio1 Escribe los metodos repr y str para la clase Array de forma que se imprima legiblemente como en numpy arrays. Step2: Ejercicio2 Escribe el metodo setitem para que el codigo A[i,j] = new_value cambie el valor de la entrada (i,j) del array. Step3: Ejercicio3 Implementa una funcion de zeros para crear arrays "vacios" Step4: Implementa una funcion eye(n) que crea la matriz identidad de nxn Step5: Ejercicio4 Implementa la funcion de transposicion Step6: Ejercicio5 Investiga como implementar el metodo de clase radd Step7: Implementa el metodo de clase sub similar a la suma para poder calcular expresiones como A - B para A y B arrays o numeros. Step8: Ejercicio6 Implementa las funciones mul y rmul para hacer multiplicacion matricial (y por un escalar). Step10: Ejercicio7 Implementa una funcion forward_subs que resuelva sistemas de ecuaciones de la forma $Lx = y$ con $L$ triangular inferior y $y$ cualquier Vector o Array de una columna. Step12: Ejercicio8 Implementa una funcion backward_subs que resuelva sistemas $Ux = y$ con $U$ triangular superior y y Vector o Array de una columna. Step14: Ejercicio9 Implementa una funcion LU que reciba un Array $A$ y devuelva 3 arrays $L$,$U$ y $P$ tales que $PA = LU$ con $L$ trangular inferior, $U$ triangular superior y $P$ matriz de permutacion. Step15: Ejercicio10 Implementa una funcion lu_linsolve que resuelva cualquier sistema de ecuaciones Ax = y con A un Array y y un Vector o Array de una columna.
Python Code: class Array: "Una clase minima para algebra lineal" def __init__(self, list_of_rows): "Constructor y validador" # obtener dimensiones self.data = list_of_rows nrow = len(list_of_rows) # ___caso vector: redimensionar correctamente if not isinstance(list_of_rows[0], list): nrow = 1 self.data = [[x] for x in list_of_rows] # ahora las columnas deben estar bien aunque sea un vector ncol = len(self.data[0]) self.shape = (nrow, ncol) # validar tamano correcto de filas if any([len(r) != ncol for r in self.data]): raise Exception("Las filas deben ser del mismo tamano") for a in range(nrow): for b in range(len(list_of_rows[a])): if not type(list_of_rows[a][b]) == int and not type(list_of_rows[a][b]) == float: raise Exception("Hay datos no numericos") def __repr__(self): #Ejercicio1 "Funcion que se manda llamar cuando solo pones el dato en terminal >> dato" return self.printdata(self.data) def printdata(self, data): #Ejercicio1 "Funcion que da formato a la impresion de matrices y vectores" rows = len(data) cadena = "[" for a in range(rows): if a>0: cadena+=str("\n ") cadena+=str(data[a]) cadena+="]" return cadena def __str__(self): #Ejercicio1 "Funcion que se manda llamar cuando solo pones llamas a imprimir el dato en terminal >>print(dato)" return self.printdata(self.data) def __getitem__(self, idx): #Ejercicio2 "Metodo para poder obtener datos del objeto mediante un indice" return self.data[idx[0]][idx[1]] def __setitem__(self, idx, new_value): #Ejercicio2 "Metodo por el cual podemos asignar valores al objeto mediante indices" self.data[idx[0]][idx[1]] = new_value def transpose(self): #Ejercicio4 "Metodo para obtener la matriz transpuesta del objeto Array" rows = self.shape[0] cols = self.shape[1] transpuesta = [[self.data[x][y] for x in range(rows)] for y in range(cols)] return Array(transpuesta) #return self.printdata(transpuesta) def __add__(self, other): #Ejercicio5 "Hora de sumar" if isinstance(other, Array): if self.shape != other.shape: raise Exception("Las dimensiones son distintas!") rows, cols = self.shape newArray = Array([[0. for c in range(cols)] for r in range(rows)]) for r in range(rows): for c in range(cols): newArray.data[r][c] = self.data[r][c] + other.data[r][c] return newArray elif isinstance(other, (int, float, complex)): # en caso de que el lado derecho sea solo un numero rows, cols = self.shape newArray = Array([[0. for c in range(cols)] for r in range(rows)]) for r in range(rows): for c in range(cols): newArray.data[r][c] = self.data[r][c] + other return newArray else: raise Exception("Data must be either Array class or scalar") # es un tipo de error particular usado en estos metodos __radd__ = __add__ #Ejercicio5 def __sub__(self, other): #Ejercicio5 "Hora de restar" "Hora de sumar" if isinstance(other, Array): if self.shape != other.shape: raise Exception("Las dimensiones son distintas!") rows, cols = self.shape newArray = Array([[0. for c in range(cols)] for r in range(rows)]) for r in range(rows): for c in range(cols): newArray.data[r][c] = self.data[r][c] - other.data[r][c] return newArray elif isinstance(other, (int, float, complex)): # en caso de que el lado derecho sea solo un numero rows, cols = self.shape newArray = Array([[0. 
for c in range(cols)] for r in range(rows)]) for r in range(rows): for c in range(cols): newArray.data[r][c] = self.data[r][c] - other return newArray else: raise Exception("Data must be either Array class or scalar") # es un tipo de error particular usado en estos metodos __rsub__ = __sub__ #Ejercicio5 def __mul__(self,other): #Ejercicio6 "Metodo para multiplicar una matriz ya sea con un escalar, u otra matriz(vectores tambien ya que son matrices de 1xn o nx1" #if len(other[0]) if isinstance(other, Array): compatible = False # if self.shape[1] == other.shape[0]: compatible = True else: print("Incompatible matrices") if compatible: rows, cols = self.shape matmult = [] for a in range(rows): vec = [] for b in range(other.shape[1]): vec.append(self.multAdd(self.data[a],[other.data[i][b] for i in range(other.shape[0])])) matmult.append(vec) return self.printdata(matmult) elif isinstance(other, (int, float)): rows, cols = self.shape newArray = Array([[0. for c in range(cols)] for r in range(rows)]) newArray.data = [[self.data[y][x]*other for x in range(cols)] for y in range(rows)] return newArray def multAdd(self,parama, paramb): #Ejercicio6 (auxiliar) "Metodo para calcular producto punto" sum = 0 for x in range(len(parama)): sum+=(parama[x]*paramb[x]) return sum __rmul__ = __mul__ #Ejercicio6 Explanation: Tarea 1: Alumno: Arturo Gonzalez Bencomo Clase Array completa La clase array completa se implemento en los ejercicios 1-6 con comentarios en el codigo indicando que metodo se implemento en que ejercicio, posteriormente se comprueba cada seccion correspondiente. Los ejercicios 7-10 no implicaron modificacion de clase array sino que se agregaron funciones nuevas externas a la clase End of explanation dato = Array([[1,2,3],[4,5,6],[8,9,10]]) print(dato) dato Explanation: Ejercicio1 Escribe los metodos repr y str para la clase Array de forma que se imprima legiblemente como en numpy arrays. End of explanation dato = Array([[1,2,3],[4,5,6],[8,9,10]]) print("Original: ") print(dato) print("Acceder a elemento por indices dato[1,2]") print(dato[1,2]) dato[0,0] = 10 print("Asignamos elemento por indices dato[0,0] = 10") print(dato) Explanation: Ejercicio2 Escribe el metodo setitem para que el codigo A[i,j] = new_value cambie el valor de la entrada (i,j) del array. 
End of explanation def zeros(filas, columnas): "Clase que crea un objeto de clase Array inicializado con valores en ceros sintaxis matriz_ceros(2,3)" return Array([[0 for x in range(columnas)] for y in range(filas)]) zeros(5,5) Explanation: Ejercicio3 Implementa una funcion de zeros para crear arrays "vacios" End of explanation def evaluate(a,b): "Evalua si son iguales y regresa 1, si no 0" if a == b: return 1 else: return 0 def eye(n): "Crea matriz diagonal de dim = n x n" return Array([[evaluate(x,y) for x in range(n)] for y in range(n)]) eye(5) Explanation: Implementa una funcion eye(n) que crea la matriz identidad de nxn End of explanation dato = Array([[1,2,3,6],[4,5,6,11],[8,9,10,49],[9,12,67,1]]) print("Original: ") print(dato) print("Transpuesta") print(dato.transpose()) dato1 = Array([[1,2,3,6],[4,5,6,11],[8,9,10,49],[9,12,67,1]]).transpose() print("Transpuesta: ") print(dato1) Explanation: Ejercicio4 Implementa la funcion de transposicion End of explanation dato = Array([[1,2,3],[4,5,6],[8,9,10]]) dato1 = Array([[1,2,71],[14,15,16],[28,29,20]]) print("elemento1" ) print(dato) print("elemento2" ) print(dato1) print("suma de ambos") print(dato+dato1) print("suma de Array y escalar") print(dato+1) print("suma de escalar y Array") print(1+dato) Explanation: Ejercicio5 Investiga como implementar el metodo de clase radd End of explanation print("Resta de ambos") print(dato-dato1) print("Resta de Array y escalar") print(dato-1) print("Resta de escalar y Array") print(1-dato) Explanation: Implementa el metodo de clase sub similar a la suma para poder calcular expresiones como A - B para A y B arrays o numeros. End of explanation dato = Array([[1,2,3],[4,5,6],[1,2,3],[4,5,6]]) dato1 = Array([[1,2,8,78],[14,15,12,21],[28,29,78,90]]) print("primer elemento") print(dato) print("Segundo elemento") print(dato1) print ("multiplicacion") print(dato*dato1) print("multiplicacion de primer elemento por escalar 5") print(dato * 5) print("multiplicacion de escalar 5 por primer elemento") print(5*dato) Explanation: Ejercicio6 Implementa las funciones mul y rmul para hacer multiplicacion matricial (y por un escalar). End of explanation def forward_subs(matrix,vector): "Resuelve matrices de la forma Lx = b" Codigo de validacion, tamanios, matriz L flag = False y = None if type(matrix) == Array and type(vector) == Array: if matrix.shape[0] == vector.shape[0]: if matrix.shape[0] == matrix.shape[1]: for y in range(matrix.shape[0]): for x in range(matrix.shape[1]): if x>y: if not matrix[y,x] == 0: raise Exception("No es matriz L") else: flag = True else: print("No es matriz cuadrada") else: print("No concuerdan las dimensiones de la matriz y del vector") if flag: print("Es matriz L") y = [] for a in range(vector.shape[0]): y.append(vector[a,0]) for b in range(a): y[a]-=matrix[a,b]*y[b] y[a] /= matrix[a,a] return y dato = Array([[1,0,0,0],[4,2,0,0],[1,2,1,0],[3,4,5,2]]) print("Matriz: ") print(dato) print("vector") vector = Array([[1],[2],[3],[6]]) print(vector) print("solucion de resolucion de Lx = b") print(forward_subs(dato,vector)) Explanation: Ejercicio7 Implementa una funcion forward_subs que resuelva sistemas de ecuaciones de la forma $Lx = y$ con $L$ triangular inferior y $y$ cualquier Vector o Array de una columna. 
End of explanation def backward_subs(matrix, vector): "Resuelve matrices de la forma Lx = b" Codigo de validacion, tamanios, matriz L flag = False y = None if type(matrix) == Array and type(vector) == Array: if matrix.shape[0] == vector.shape[0]: if matrix.shape[0] == matrix.shape[1]: for y in range(matrix.shape[0]): for x in range(matrix.shape[1]): if x<y: if not matrix[y,x] == 0: raise Exception("No es matriz U") else: flag = True else: print("No es matriz cuadrada") else: print("No concuerdan las dimensiones de la matriz y del vector") if flag: print("Es matriz U") y = [0] * vector.shape[0] for a in range(vector.shape[0]-1,-1,-1): y[a] = vector[a,0] for b in range(a+1,vector.shape[0]): y[a]-=matrix[a,b]*y[b] y[a] /= matrix[a,a] return y dato = Array([[1,1],[0,2]]) print("Matriz: ") print(dato) print("vector") vector = Array([[12],[20]]) print(vector) print("solucion de resolucion de Ux = b") print(backward_subs(dato,vector)) Explanation: Ejercicio8 Implementa una funcion backward_subs que resuelva sistemas $Ux = y$ con $U$ triangular superior y y Vector o Array de una columna. End of explanation def LU(A): "Funcion para realizar descomposicion de matriz en PA = LU" if type(A == Array): L = [[0 for x in range(A.shape[1])] for y in range(A.shape[0])] for a in range(A.shape[0]): L[a][a] = 1 L = Array(L) Matriz de permutacion list_tuples = [(0,0) for x in range(A.shape[0])] for a in range(A.shape[0]): list_tuples[a] = (a,A.data[a]) data = [] data.append((0,A.data[0])) del list_tuples[0] if len(list_tuples) == 1: starting = 0 else: starting = 1 for row in range(starting,len(list_tuples)): temp = sorted(list_tuples, key=lambda value: value[1][row], reverse=True) data += temp U = [] for row in data: U.append(row[1]) U = Array(U) permutation_matrix = [[0 for x in range(A.shape[0])] for y in range(A.shape[0])] permutation_matrix = Array(permutation_matrix) for row in range(len(data)): permutation_matrix[row, data[row][0]] = 1 n = A.shape[0] for a in range(n): for b in range(a+1,n): multiplicador = float((U[b,a]/U[a,a])) L[b,a] = multiplicador multiplicador= multiplicador*(-1) for c in range(n): U[b,c] += U[a,c]*multiplicador return [L,U,permutation_matrix] A = Array([[1,1,-1],[1,-2,3],[2,3,1]]) #A = Array([[1,7,-1],[1,6,8],[2,3,7]]) print("Matriz original") print(A) L,U,P = LU(A) print("Matrices de Lower, Upper y Permutacion son: ") print("L") print(L) print("U") print(U) print("P") print(P) Explanation: Ejercicio9 Implementa una funcion LU que reciba un Array $A$ y devuelva 3 arrays $L$,$U$ y $P$ tales que $PA = LU$ con $L$ trangular inferior, $U$ triangular superior y $P$ matriz de permutacion. 
End of explanation def lu_linsolve(A,y): "Funcion para resolver sistemas de ecuaciones basado en la descomposicion LU" solucion = None if type(A) == Array and type(y) == Array: if A.shape[0] == y.shape[0]: L,U,P = LU(A) new_vector = [] for x in range(P.shape[0]): for z in range(P.shape[1]): if(P[x,z] == 1): new_vector.append([y[z,0]]) new_vector = Array(new_vector) x = Array([forward_subs(L,new_vector)]) solucion = backward_subs(U, x.transpose()) else: print("No son compatibles Array y vector") print("Forma de matriz ") print(A.shape) print("Forma de vector") print(y.shape) else: print("No son estructuras de datos Array") return solucion matriz = Array([[1,1,3],[2,4,4],[2,2,2]]) print("La matriz es: ") print(matriz) vector = Array([[30],[68],[36]]) print("El vector es: ") print(vector) solucion = lu_linsolve(matriz,vector) print("La solucion al sistema de ecuaciones es") print(solucion) Explanation: Ejercicio10 Implementa una funcion lu_linsolve que resuelva cualquier sistema de ecuaciones Ax = y con A un Array y y un Vector o Array de una columna. End of explanation
8,256
Given the following text description, write Python code to implement the functionality described below step by step Description: Building your Deep Neural Network Step2: 2 - Outline of the Assignment To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will Step4: Expected output Step6: Expected output Step8: Expected output Step10: Expected output Step12: <table style="width Step14: Expected Output Step16: Expected Output Step18: Expected output with sigmoid Step20: Expected Output <table style="width
Python Code: import numpy as np import h5py import matplotlib.pyplot as plt from testCases import * from dnn_utils import sigmoid, sigmoid_backward, relu, relu_backward %matplotlib inline plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' %load_ext autoreload %autoreload 2 np.random.seed(1) Explanation: Building your Deep Neural Network: Step by Step Welcome to your week 4 assignment (part 1 of 2)! You have previously trained a 2-layer Neural Network (with a single hidden layer). This week, you will build a deep neural network, with as many layers as you want! In this notebook, you will implement all the functions required to build a deep neural network. In the next assignment, you will use these functions to build a deep neural network for image classification. After this assignment you will be able to: - Use non-linear units like ReLU to improve your model - Build a deeper neural network (with more than 1 hidden layer) - Implement an easy-to-use neural network class Notation: - Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer. - Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters. - Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example. - Example: $x^{(i)}$ is the $i^{th}$ training example. - Lowerscript $i$ denotes the $i^{th}$ entry of a vector. - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations). Let's get started! 1 - Packages Let's first import all the packages that you will need during this assignment. - numpy is the main package for scientific computing with Python. - matplotlib is a library to plot graphs in Python. - dnn_utils provides some necessary functions for this notebook. - testCases provides some test cases to assess the correctness of your functions - np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don't change the seed. End of explanation # GRADED FUNCTION: initialize_parameters def initialize_parameters(n_x, n_h, n_y): Argument: n_x -- size of the input layer n_h -- size of the hidden layer n_y -- size of the output layer Returns: parameters -- python dictionary containing your parameters: W1 -- weight matrix of shape (n_h, n_x) b1 -- bias vector of shape (n_h, 1) W2 -- weight matrix of shape (n_y, n_h) b2 -- bias vector of shape (n_y, 1) np.random.seed(1) ### START CODE HERE ### (≈ 4 lines of code) W1 = np.random.randn(n_h, n_x) * 0.01 b1 = np.zeros(shape=(n_h, 1)) W2 = np.random.randn(n_y, n_h) * 0.01 b2 = np.zeros(shape=(n_y, 1)) ### END CODE HERE ### assert(W1.shape == (n_h, n_x)) assert(b1.shape == (n_h, 1)) assert(W2.shape == (n_y, n_h)) assert(b2.shape == (n_y, 1)) parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2} return parameters parameters = initialize_parameters(2,2,1) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) Explanation: 2 - Outline of the Assignment To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. 
Here is an outline of this assignment, you will: Initialize the parameters for a two-layer network and for an $L$-layer neural network. Implement the forward propagation module (shown in purple in the figure below). Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$). We give you the ACTIVATION function (relu/sigmoid). Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function. Stack the [LINEAR->RELU] forward function L-1 time (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function. Compute the loss. Implement the backward propagation module (denoted in red in the figure below). Complete the LINEAR part of a layer's backward propagation step. We give you the gradient of the ACTIVATE function (relu_backward/sigmoid_backward) Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function. Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function Finally update the parameters. <img src="images/final outline.png" style="width:800px;height:500px;"> <caption><center> Figure 1</center></caption><br> Note that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps. 3 - Initialization You will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one will generalize this initialization process to $L$ layers. 3.1 - 2-layer Neural Network Exercise: Create and initialize the parameters of the 2-layer neural network. Instructions: - The model's structure is: LINEAR -> RELU -> LINEAR -> SIGMOID. - Use random initialization for the weight matrices. Use np.random.randn(shape)*0.01 with the correct shape. - Use zero initialization for the biases. Use np.zeros(shape). End of explanation # GRADED FUNCTION: initialize_parameters_deep def initialize_parameters_deep(layer_dims): Arguments: layer_dims -- python array (list) containing the dimensions of each layer in our network Returns: parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL": Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1]) bl -- bias vector of shape (layer_dims[l], 1) np.random.seed(3) parameters = {} L = len(layer_dims) # number of layers in the network for l in range(1, L): ### START CODE HERE ### (≈ 2 lines of code) parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1]) * 0.01 parameters['b' + str(l)] = np.zeros((layer_dims[l], 1)) ### END CODE HERE ### assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l - 1])) assert(parameters['b' + str(l)].shape == (layer_dims[l], 1)) return parameters parameters = initialize_parameters_deep([5,4,3]) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) Explanation: Expected output: <table style="width:80%"> <tr> <td> **W1** </td> <td> [[ 0.01624345 -0.00611756] [-0.00528172 -0.01072969]] </td> </tr> <tr> <td> **b1**</td> <td>[[ 0.] 
[ 0.]]</td> </tr> <tr> <td>**W2**</td> <td> [[ 0.00865408 -0.02301539]]</td> </tr> <tr> <td> **b2** </td> <td> [[ 0.]] </td> </tr> </table> 3.2 - L-layer Neural Network The initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the initialize_parameters_deep, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then: <table style="width:100%"> <tr> <td> </td> <td> **Shape of W** </td> <td> **Shape of b** </td> <td> **Activation** </td> <td> **Shape of Activation** </td> <tr> <tr> <td> **Layer 1** </td> <td> $(n^{[1]},12288)$ </td> <td> $(n^{[1]},1)$ </td> <td> $Z^{[1]} = W^{[1]} X + b^{[1]} $ </td> <td> $(n^{[1]},209)$ </td> <tr> <tr> <td> **Layer 2** </td> <td> $(n^{[2]}, n^{[1]})$ </td> <td> $(n^{[2]},1)$ </td> <td>$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td> <td> $(n^{[2]}, 209)$ </td> <tr> <tr> <td> $\vdots$ </td> <td> $\vdots$ </td> <td> $\vdots$ </td> <td> $\vdots$</td> <td> $\vdots$ </td> <tr> <tr> <td> **Layer L-1** </td> <td> $(n^{[L-1]}, n^{[L-2]})$ </td> <td> $(n^{[L-1]}, 1)$ </td> <td>$Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td> <td> $(n^{[L-1]}, 209)$ </td> <tr> <tr> <td> **Layer L** </td> <td> $(n^{[L]}, n^{[L-1]})$ </td> <td> $(n^{[L]}, 1)$ </td> <td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$</td> <td> $(n^{[L]}, 209)$ </td> <tr> </table> Remember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if: $$ W = \begin{bmatrix} j & k & l\ m & n & o \ p & q & r \end{bmatrix}\;\;\; X = \begin{bmatrix} a & b & c\ d & e & f \ g & h & i \end{bmatrix} \;\;\; b =\begin{bmatrix} s \ t \ u \end{bmatrix}\tag{2}$$ Then $WX + b$ will be: $$ WX + b = \begin{bmatrix} (ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\ (ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\ (pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u \end{bmatrix}\tag{3} $$ Exercise: Implement initialization for an L-layer Neural Network. Instructions: - The model's structure is [LINEAR -> RELU] $ \times$ (L-1) -> LINEAR -> SIGMOID. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function. - Use random initialization for the weight matrices. Use np.random.rand(shape) * 0.01. - Use zeros initialization for the biases. Use np.zeros(shape). - We will store $n^{[l]}$, the number of units in different layers, in a variable layer_dims. For example, the layer_dims for the "Planar Data classification model" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. Thus means W1's shape was (4,2), b1 was (4,1), W2 was (1,4) and b2 was (1,1). Now you will generalize this to $L$ layers! - Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network). python if L == 1: parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01 parameters["b" + str(L)] = np.zeros((layer_dims[1], 1)) End of explanation # GRADED FUNCTION: linear_forward def linear_forward(A, W, b): Implement the linear part of a layer's forward propagation. 
Arguments: A -- activations from previous layer (or input data): (size of previous layer, number of examples) W -- weights matrix: numpy array of shape (size of current layer, size of previous layer) b -- bias vector, numpy array of shape (size of the current layer, 1) Returns: Z -- the input of the activation function, also called pre-activation parameter cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently ### START CODE HERE ### (≈ 1 line of code) Z = np.dot(W, A) + b ### END CODE HERE ### assert(Z.shape == (W.shape[0], A.shape[1])) cache = (A, W, b) return Z, cache A, W, b = linear_forward_test_case() Z, linear_cache = linear_forward(A, W, b) print("Z = " + str(Z)) Explanation: Expected output: <table style="width:80%"> <tr> <td> **W1** </td> <td>[[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388] [-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218] [-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034] [-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]</td> </tr> <tr> <td>**b1** </td> <td>[[ 0.] [ 0.] [ 0.] [ 0.]]</td> </tr> <tr> <td>**W2** </td> <td>[[-0.01185047 -0.0020565 0.01486148 0.00236716] [-0.01023785 -0.00712993 0.00625245 -0.00160513] [-0.00768836 -0.00230031 0.00745056 0.01976111]]</td> </tr> <tr> <td>**b2** </td> <td>[[ 0.] [ 0.] [ 0.]]</td> </tr> </table> 4 - Forward propagation module 4.1 - Linear Forward Now that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order: LINEAR LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid. [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (whole model) The linear forward module (vectorized over all the examples) computes the following equations: $$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\tag{4}$$ where $A^{[0]} = X$. Exercise: Build the linear part of forward propagation. Reminder: The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find np.dot() useful. If your dimensions don't match, printing W.shape may help. End of explanation # GRADED FUNCTION: linear_activation_forward def linear_activation_forward(A_prev, W, b, activation): Implement the forward propagation for the LINEAR->ACTIVATION layer Arguments: A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples) W -- weights matrix: numpy array of shape (size of current layer, size of previous layer) b -- bias vector, numpy array of shape (size of the current layer, 1) activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu" Returns: A -- the output of the activation function, also called the post-activation value cache -- a python dictionary containing "linear_cache" and "activation_cache"; stored for computing the backward pass efficiently if activation == "sigmoid": # Inputs: "A_prev, W, b". Outputs: "A, activation_cache". ### START CODE HERE ### (≈ 2 lines of code) Z, linear_cache = linear_forward(A_prev, W, b) A, activation_cache = sigmoid(Z) ### END CODE HERE ### elif activation == "relu": # Inputs: "A_prev, W, b". Outputs: "A, activation_cache". 
### START CODE HERE ### (≈ 2 lines of code) Z, linear_cache = linear_forward(A_prev, W, b) A, activation_cache = relu(Z) ### END CODE HERE ### assert (A.shape == (W.shape[0], A_prev.shape[1])) cache = (linear_cache, activation_cache) return A, cache A_prev, W, b = linear_activation_forward_test_case() A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid") print("With sigmoid: A = " + str(A)) A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu") print("With ReLU: A = " + str(A)) Explanation: Expected output: <table style="width:35%"> <tr> <td> **Z** </td> <td> [[ 3.1980455 7.85763489]] </td> </tr> </table> 4.2 - Linear-Activation Forward In this notebook, you will use two activation functions: Sigmoid: $\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the sigmoid function. This function returns two items: the activation value "a" and a "cache" that contains "Z" (it's what we will feed in to the corresponding backward function). To use it you could just call: python A, activation_cache = sigmoid(Z) ReLU: The mathematical formula for ReLu is $A = RELU(Z) = max(0, Z)$. We have provided you with the relu function. This function returns two items: the activation value "A" and a "cache" that contains "Z" (it's what we will feed in to the corresponding backward function). To use it you could just call: python A, activation_cache = relu(Z) For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step. Exercise: Implement the forward propagation of the LINEAR->ACTIVATION layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation "g" can be sigmoid() or relu(). Use linear_forward() and the correct activation function. End of explanation # GRADED FUNCTION: L_model_forward def L_model_forward(X, parameters): Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation Arguments: X -- data, numpy array of shape (input size, number of examples) parameters -- output of initialize_parameters_deep() Returns: AL -- last post-activation value caches -- list of caches containing: every cache of linear_relu_forward() (there are L-1 of them, indexed from 0 to L-2) the cache of linear_sigmoid_forward() (there is one, indexed L-1) caches = [] A = X L = len(parameters) // 2 # number of layers in the neural network # Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list. for l in range(1, L): A_prev = A ### START CODE HERE ### (≈ 2 lines of code) A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)], parameters['b' + str(l)], activation='relu') caches.append(cache) ### END CODE HERE ### # Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list. 
### START CODE HERE ### (≈ 2 lines of code) AL, cache = linear_activation_forward(A, parameters['W' + str(L)], parameters['b' + str(L)], activation='sigmoid') caches.append(cache) ### END CODE HERE ### assert(AL.shape == (1, X.shape[1])) return AL, caches X, parameters = L_model_forward_test_case() AL, caches = L_model_forward(X, parameters) print("AL = " + str(AL)) print("Length of caches list = " + str(len(caches))) Explanation: Expected output: <table style="width:35%"> <tr> <td> **With sigmoid: A ** </td> <td > [[ 0.96076066 0.99961336]]</td> </tr> <tr> <td> **With ReLU: A ** </td> <td > [[ 3.1980455 7.85763489]]</td> </tr> </table> Note: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a single layer in the neural network, not two layers. d) L-Layer Model For even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (linear_activation_forward with RELU) $L-1$ times, then follows that with one linear_activation_forward with SIGMOID. <img src="images/model_architecture_kiank.png" style="width:600px;height:300px;"> <caption><center> Figure 2 : [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID model</center></caption><br> Exercise: Implement the forward propagation of the above model. Instruction: In the code below, the variable AL will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called Yhat, i.e., this is $\hat{Y}$.) Tips: - Use the functions you had previously written - Use a for loop to replicate [LINEAR->RELU] (L-1) times - Don't forget to keep track of the caches in the "caches" list. To add a new value c to a list, you can use list.append(c). End of explanation # GRADED FUNCTION: compute_cost def compute_cost(AL, Y): Implement the cost function defined by equation (7). Arguments: AL -- probability vector corresponding to your label predictions, shape (1, number of examples) Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples) Returns: cost -- cross-entropy cost m = Y.shape[1] # Compute loss from aL and y. ### START CODE HERE ### (≈ 1 lines of code) cost = (-1 / m) * np.sum(np.multiply(Y, np.log(AL)) + np.multiply(1 - Y, np.log(1 - AL))) ### END CODE HERE ### cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17). assert(cost.shape == ()) return cost Y, AL = compute_cost_test_case() print("cost = " + str(compute_cost(AL, Y))) Explanation: <table style="width:40%"> <tr> <td> **AL** </td> <td > [[ 0.0844367 0.92356858]]</td> </tr> <tr> <td> **Length of caches list ** </td> <td > 2</td> </tr> </table> Great! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in "caches". Using $A^{[L]}$, you can compute the cost of your predictions. 5 - Cost function Now you will implement forward and backward propagation. You need to compute the cost, because you want to check if your model is actually learning. 
Exercise: Compute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} (y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{L}\right)) \tag{7}$$ End of explanation # GRADED FUNCTION: linear_backward def linear_backward(dZ, cache): Implement the linear portion of backward propagation for a single layer (layer l) Arguments: dZ -- Gradient of the cost with respect to the linear output (of current layer l) cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer Returns: dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev dW -- Gradient of the cost with respect to W (current layer l), same shape as W db -- Gradient of the cost with respect to b (current layer l), same shape as b A_prev, W, b = cache m = A_prev.shape[1] ### START CODE HERE ### (≈ 3 lines of code) dW = np.dot(dZ, cache[0].T) / m db = np.squeeze(np.sum(dZ, axis=1, keepdims=True)) / m dA_prev = np.dot(cache[1].T, dZ) ### END CODE HERE ### assert (dA_prev.shape == A_prev.shape) assert (dW.shape == W.shape) assert (isinstance(db, float)) return dA_prev, dW, db # Set up some test inputs dZ, linear_cache = linear_backward_test_case() dA_prev, dW, db = linear_backward(dZ, linear_cache) print ("dA_prev = "+ str(dA_prev)) print ("dW = " + str(dW)) print ("db = " + str(db)) Explanation: Expected Output: <table> <tr> <td>**cost** </td> <td> 0.41493159961539694</td> </tr> </table> 6 - Backward propagation module Just like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters. Reminder: <img src="images/backprop_kiank.png" style="width:650px;height:250px;"> <caption><center> Figure 3 : Forward and Backward propagation for LINEAR->RELU->LINEAR->SIGMOID <br> The purple blocks represent the forward propagation, and the red blocks represent the backward propagation. </center></caption> <!-- For those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows: $$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$ In order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted. Equivalently, in order to calculate the gradient $db^{[1]} = \frac{\partial L}{\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial b^{[1]}}$. This is why we talk about **backpropagation**. 
!--> Now, similar to forward propagation, you are going to build the backward propagation in three steps: - LINEAR backward - LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation - [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model) 6.1 - Linear backward For layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation). Suppose you have already calculated the derivative $dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]} dA^{[l-1]})$. <img src="images/linearback_kiank.png" style="width:250px;height:300px;"> <caption><center> Figure 4 </center></caption> The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l]})$ are computed using the input $dZ^{[l]}$.Here are the formulas you need: $$ dW^{[l]} = \frac{\partial \mathcal{L} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$ $$ db^{[l]} = \frac{\partial \mathcal{L} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{l}\tag{9}$$ $$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$ Exercise: Use the 3 formulas above to implement linear_backward(). End of explanation # GRADED FUNCTION: linear_activation_backward def linear_activation_backward(dA, cache, activation): Implement the backward propagation for the LINEAR->ACTIVATION layer. Arguments: dA -- post-activation gradient for current layer l cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu" Returns: dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev dW -- Gradient of the cost with respect to W (current layer l), same shape as W db -- Gradient of the cost with respect to b (current layer l), same shape as b linear_cache, activation_cache = cache if activation == "relu": ### START CODE HERE ### (≈ 2 lines of code) dZ = relu_backward(dA, activation_cache) ### END CODE HERE ### elif activation == "sigmoid": ### START CODE HERE ### (≈ 2 lines of code) dZ = sigmoid_backward(dA, activation_cache) ### END CODE HERE ### # Shorten the code dA_prev, dW, db = linear_backward(dZ, linear_cache) return dA_prev, dW, db AL, linear_activation_cache = linear_activation_backward_test_case() dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "sigmoid") print ("sigmoid:") print ("dA_prev = "+ str(dA_prev)) print ("dW = " + str(dW)) print ("db = " + str(db) + "\n") dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "relu") print ("relu:") print ("dA_prev = "+ str(dA_prev)) print ("dW = " + str(dW)) print ("db = " + str(db)) Explanation: Expected Output: <table style="width:90%"> <tr> <td> **dA_prev** </td> <td > [[ 2.38272385 5.85438014] [ 6.31969219 15.52755701] [ -3.97876302 -9.77586689]] </td> </tr> <tr> <td> **dW** </td> <td > [[ 2.77870358 -0.05500058 -5.13144969]] </td> </tr> <tr> <td> **db** </td> <td> 5.527840195 </td> </tr> </table> 6.2 - Linear-Activation backward Next, you will create a function that merges the two helper functions: linear_backward and the backward step for the activation linear_activation_backward. To help you implement linear_activation_backward, we provided two backward functions: - sigmoid_backward: Implements the backward propagation for SIGMOID unit. 
You can call it as follows: python dZ = sigmoid_backward(dA, activation_cache) relu_backward: Implements the backward propagation for RELU unit. You can call it as follows: python dZ = relu_backward(dA, activation_cache) If $g(.)$ is the activation function, sigmoid_backward and relu_backward compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \tag{11}$$. Exercise: Implement the backpropagation for the LINEAR->ACTIVATION layer. End of explanation # GRADED FUNCTION: L_model_backward def L_model_backward(AL, Y, caches): Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group Arguments: AL -- probability vector, output of the forward propagation (L_model_forward()) Y -- true "label" vector (containing 0 if non-cat, 1 if cat) caches -- list of caches containing: every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2) the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1]) Returns: grads -- A dictionary with the gradients grads["dA" + str(l)] = ... grads["dW" + str(l)] = ... grads["db" + str(l)] = ... grads = {} L = len(caches) # the number of layers m = AL.shape[1] Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL # Initializing the backpropagation ### START CODE HERE ### (1 line of code) dAL = dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) ### END CODE HERE ### # Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "AL, Y, caches". Outputs: "grads["dAL"], grads["dWL"], grads["dbL"] ### START CODE HERE ### (approx. 2 lines) current_cache = caches[-1] grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)] = linear_backward(sigmoid_backward(dAL, current_cache[1]), current_cache[0]) ### END CODE HERE ### for l in reversed(range(L-1)): # lth layer: (RELU -> LINEAR) gradients. # Inputs: "grads["dA" + str(l + 2)], caches". Outputs: "grads["dA" + str(l + 1)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)] ### START CODE HERE ### (approx. 5 lines) current_cache = caches[l] dA_prev_temp, dW_temp, db_temp = linear_backward(sigmoid_backward(dAL, caches[1]), caches[0]) grads["dA" + str(l + 1)] = dA_prev_temp grads["dW" + str(l + 1)] = dW_temp grads["db" + str(l + 1)] = db_temp ### END CODE HERE ### return grads X_assess, Y_assess, AL, caches = L_model_backward_test_case() grads = L_model_backward(AL, Y_assess, caches) print ("dW1 = "+ str(grads["dW1"])) print ("db1 = "+ str(grads["db1"])) print ("dA1 = "+ str(grads["dA1"])) Explanation: Expected output with sigmoid: <table style="width:100%"> <tr> <td > dA_prev </td> <td >[[ 0.08982777 0.00226265] [ 0.23824996 0.00600122] [-0.14999783 -0.00377826]] </td> </tr> <tr> <td > dW </td> <td > [[-0.06001514 -0.09687383 -0.10598695]] </td> </tr> <tr> <td > db </td> <td > 0.061800984273 </td> </tr> </table> Expected output with relu <table style="width:100%"> <tr> <td > dA_prev </td> <td > [[ 2.38272385 5.85438014] [ 6.31969219 15.52755701] [ -3.97876302 -9.77586689]] </td> </tr> <tr> <td > dW </td> <td > [[ 2.77870358 -0.05500058 -5.13144969]] </td> </tr> <tr> <td > db </td> <td > 5.527840195 </td> </tr> </table> 6.3 - L-Model Backward Now you will implement the backward function for the whole network. Recall that when you implemented the L_model_forward function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you will use those variables to compute the gradients. 
Therefore, in the L_model_backward function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass. <img src="images/mn_backward.png" style="width:450px;height:300px;"> <caption><center> Figure 5 : Backward pass </center></caption> Initializing backpropagation: To backpropagate through this network, we know that the output is, $A^{[L]} = \sigma(Z^{[L]})$. Your code thus needs to compute dAL $= \frac{\partial \mathcal{L}}{\partial A^{[L]}}$. To do so, use this formula (derived using calculus which you don't need in-depth knowledge of): python dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL You can then use this post-activation gradient dAL to keep going backward. As seen in Figure 5, you can now feed in dAL into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a for loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula : $$grads["dW" + str(l)] = dW^{[l]}\tag{15} $$ For example, for $l=3$ this would store $dW^{[l]}$ in grads["dW3"]. Exercise: Implement backpropagation for the [LINEAR->RELU] $\times$ (L-1) -> LINEAR -> SIGMOID model. End of explanation # GRADED FUNCTION: update_parameters def update_parameters(parameters, grads, learning_rate): Update parameters using gradient descent Arguments: parameters -- python dictionary containing your parameters grads -- python dictionary containing your gradients, output of L_model_backward Returns: parameters -- python dictionary containing your updated parameters parameters["W" + str(l)] = ... parameters["b" + str(l)] = ... L = len(parameters) // 2 # number of layers in the neural network # Update rule for each parameter. Use a for loop. ### START CODE HERE ### (≈ 3 lines of code) for l in range(L): parameters["W" + str(l + 1)] = parameters["W" + str(l + 1)] - learning_rate * grads["dW" + str(l + 1)] parameters["b" + str(l + 1)] = parameters["b" + str(l + 1)] - learning_rate * grads["db" + str(l + 1)] ### END CODE HERE ### return parameters parameters, grads = update_parameters_test_case() parameters = update_parameters(parameters, grads, 0.1) print ("W1 = " + str(parameters["W1"])) print ("b1 = " + str(parameters["b1"])) print ("W2 = " + str(parameters["W2"])) print ("b2 = " + str(parameters["b2"])) print ("W3 = " + str(parameters["W3"])) print ("b3 = " + str(parameters["b3"])) Explanation: Expected Output <table style="width:60%"> <tr> <td > dW1 </td> <td > [[-0.09686122 -0.04840482 -0.11864308]] </td> </tr> <tr> <td > db1 </td> <td > -0.262594998379 </td> </tr> <tr> <td > dA1 </td> <td > [[-0.71011462 -0.22925516] [-0.17330152 -0.05594909] [-0.03831107 -0.01236844]] </td> </tr> </table> 6.4 - Update Parameters In this section you will update the parameters of the model, using gradient descent: $$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$ $$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$ where $\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary. Exercise: Implement update_parameters() to update your parameters using gradient descent. 
Instructions: Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$. End of explanation
8,257
Given the following text description, write Python code to implement the functionality described below step by step Description: The Forward Euler method for first order differential equations $$$$ The Euler method (also called forward Euler method) is a first-order numerical procedure for solving ordinary differential equations (ODEs) with a given initial value. Step1: The forward Euler method $\frac{dy}{dx}=f(x,y) \quad y=y_0 \ when \ x=x_0 \quad x_0<x<X$ The forward Euler method calculates the values $y_1, y_2, y_3,... ,y_{N−1}, y_{N}$ using the formula $y_i - y_{i−1} = \frac{f(x_{i-1},\ y_{i-1})}{h} \quad i = 1, 2, 3, . . . , N$ This may be written as the explicit formula $y_i = y_{i-1} + hf(x_{i-1},\ y_{i-1}) \quad i = 1, 2, 3,... , N$ Methods where an explicit expression for $y_i$ may be written down are known as explicit methods. Step2: Example(s)
Python Code: import math Explanation: The Forward Euler method for first order differential equations $$$$ The Euler method (also called forward Euler method) is a first-order numerical procedure for solving ordinary differential equations (ODEs) with a given initial value. End of explanation def forward_euler(f, x0, y0, X, N): h = (X - x0) / N ys = [y0] for i in range(1, N + 1): ys.append(ys[-1] + h * f((i - 1) * h, ys[-1])) return ys Explanation: The forward Euler method $\frac{dy}{dx}=f(x,y) \quad y=y_0 \ when \ x=x_0 \quad x_0<x<X$ The forward Euler method calculates the values $y_1, y_2, y_3,... ,y_{N−1}, y_{N}$ using the formula $y_i - y_{i−1} = \frac{f(x_{i-1},\ y_{i-1})}{h} \quad i = 1, 2, 3, . . . , N$ This may be written as the explicit formula $y_i = y_{i-1} + hf(x_{i-1},\ y_{i-1}) \quad i = 1, 2, 3,... , N$ Methods where an explicit expression for $y_i$ may be written down are known as explicit methods. End of explanation print(forward_euler(lambda x, y: y + pow(math.e, x), 0, 1, 1, 4)) print(forward_euler(lambda x, y: y + pow(math.e, x), 0, 1, 1, 40)[-1]) e_approx = forward_euler(lambda x, y: pow(math.e, x), 0, 1, 1, 1000) for it, e in enumerate(e_approx): print("Iteration:", it, math.e, e, abs(e - math.e)) Explanation: Example(s) End of explanation
8,258
Given the following text description, write Python code to implement the functionality described below step by step Description: These are the URLs for the JSON data powering the ESRI/ArcGIS maps. Step1: We need a way to easily extract the actual data points from the JSON. The data will actually contain multiple layers (really, one layer per operationalLayer, but multiple operationalLayers) so, if we pass a title, we should return the operationalLayer corresponding to that title; otherwise, just return the first one. Step2: Now we need to filter out the bad points from few_crashes - the ones with 0 given as the lat/lon. Step3: Now let's build a dictionary of all the cameras, so we can merge all their info. Step4: Set the 'Few crashes' flag to True for those intersections that show up in filtered_few_crashes. Step5: Set the 'To be removed' flag to True for those intersections that show up in removed_cameras. Step6: How many camera locations have few crashes and were slated to be removed? Step7: How does this list compare to the one currently published on the Chicago Data Portal? Step8: Now we need to compute how much money has been generated at each intersection - assuming $100 fine for each violation. In order to do that, we need to make the violation data line up with the camera location data. Then, we'll add 3 fields Step9: Now it's time to ask some specific questions. First Step10: Since 12/22/2014, how much money has been generated by low-crash intersections? Step11: How about since 3/6/2015? Step12: Now let's generate a CSV of the cameras data for export.
Python Code: few_crashes_url = 'http://www.arcgis.com/sharing/rest/content/items/5a8841f92e4a42999c73e9a07aca0c23/data?f=json&token=lddNjwpwjOibZcyrhJiogNmyjIZmzh-pulx7jPD9c559e05tWo6Qr8eTcP7Deqw_CIDPwZasbNOCSBHfthynf-8WRMmguxHbIFptbZQvnpRupJHSY8Abrz__xUteBS93MitgvoU6AqSN5eDVKRYiUg..' removed_url = 'http://www.arcgis.com/sharing/rest/content/items/1e01ac5dc4d54dc186502316feab156e/data?f=json&token=lddNjwpwjOibZcyrhJiogNmyjIZmzh-pulx7jPD9c559e05tWo6Qr8eTcP7Deqw_CIDPwZasbNOCSBHfthynf-8WRMmguxHbIFptbZQvnpRupJHSY8Abrz__xUteBS93MitgvoU6AqSN5eDVKRYiUg..' Explanation: These are the URLs for the JSON data powering the ESRI/ArcGIS maps. End of explanation import requests def extract_features(url, title=None): r = requests.get(url) idx = 0 found = False if title: while idx < len(r.json()['operationalLayers']): for item in r.json()['operationalLayers'][idx].items(): if item[0] == 'title' and item[1] == title: found = True break if found: break idx += 1 try: return r.json()['operationalLayers'][idx]['featureCollection']['layers'][0]['featureSet']['features'] except IndexError, e: return {} few_crashes = extract_features(few_crashes_url) all_cameras = extract_features(removed_url, 'All Chicago red light cameras') removed_cameras = extract_features(removed_url, 'red-light-cams') print 'Found %d data points for few-crash intersections, %d total cameras and %d removed camera locations' % ( len(few_crashes), len(all_cameras), len(removed_cameras)) Explanation: We need a way to easily extract the actual data points from the JSON. The data will actually contain multiple layers (really, one layer per operationalLayer, but multiple operationalLayers) so, if we pass a title, we should return the operationalLayer corresponding to that title; otherwise, just return the first one. End of explanation filtered_few_crashes = [ point for point in few_crashes if point['attributes']['LONG_X'] != 0 and point['attributes']['LAT_Y'] != 0] Explanation: Now we need to filter out the bad points from few_crashes - the ones with 0 given as the lat/lon. End of explanation cameras = {} for point in all_cameras: label = point['attributes']['LABEL'] if label not in cameras: cameras[label] = point cameras[label]['attributes']['Few crashes'] = False cameras[label]['attributes']['To be removed'] = False Explanation: Now let's build a dictionary of all the cameras, so we can merge all their info. End of explanation for point in filtered_few_crashes: label = point['attributes']['LABEL'] if label not in cameras: print 'Missing label %s' % label else: cameras[label]['attributes']['Few crashes'] = True Explanation: Set the 'Few crashes' flag to True for those intersections that show up in filtered_few_crashes. End of explanation for point in removed_cameras: label = point['attributes']['displaylabel'].replace(' and ', '-') if label not in cameras: print 'Missing label %s' % label else: cameras[label]['attributes']['To be removed'] = True Explanation: Set the 'To be removed' flag to True for those intersections that show up in removed_cameras. 
End of explanation counter = { 'both': { 'names': [], 'count': 0 }, 'crashes only': { 'names': [], 'count': 0 }, 'removed only': { 'names': [], 'count': 0 } } for camera in cameras: if cameras[camera]['attributes']['Few crashes']: if cameras[camera]['attributes']['To be removed']: counter['both']['count'] += 1 counter['both']['names'].append(camera) else: counter['crashes only']['count'] += 1 counter['crashes only']['names'].append(camera) elif cameras[camera]['attributes']['To be removed']: counter['removed only']['count'] += 1 counter['removed only']['names'].append(camera) print '%d locations had few crashes and were slated to be removed: %s\n' % ( counter['both']['count'], '; '.join(counter['both']['names'])) print '%d locations had few crashes but were not slated to be removed: %s\n' % ( counter['crashes only']['count'], '; '.join(counter['crashes only']['names'])) print '%d locations were slated to be removed despite having reasonable numbers of crashes: %s' % ( counter['removed only']['count'], '; '.join(counter['removed only']['names'])) Explanation: How many camera locations have few crashes and were slated to be removed? End of explanation from csv import DictReader from StringIO import StringIO data_portal_url = 'https://data.cityofchicago.org/api/views/thvf-6diy/rows.csv?accessType=DOWNLOAD' r = requests.get(data_portal_url) fh = StringIO(r.text) reader = DictReader(fh) def cleaner(str): filters = [ ('Stony?Island', 'Stony Island'), ('Van?Buren', 'Van Buren'), (' (SOUTH INTERSECTION)', '') ] for filter in filters: str = str.replace(filter[0], filter[1]) return str for line in reader: line['INTERSECTION'] = cleaner(line['INTERSECTION']) cameras[line['INTERSECTION']]['attributes']['current'] = line counter = { 'not current': [], 'current': [], 'not current and slated for removal': [], 'not current and not slated for removal': [], 'current and slated for removal': [] } for camera in cameras: if 'current' not in cameras[camera]['attributes']: counter['not current'].append(camera) if cameras[camera]['attributes']['To be removed']: counter['not current and slated for removal'].append(camera) else: counter['not current and not slated for removal'].append(camera) else: counter['current'].append(camera) if cameras[camera]['attributes']['To be removed']: counter['current and slated for removal'].append(camera) for key in counter: print key, len(counter[key]) print '; '.join(counter[key]), '\n' Explanation: How does this list compare to the one currently published on the Chicago Data Portal? 
End of explanation import requests from csv import DictReader from datetime import datetime from StringIO import StringIO data_portal_url = 'https://data.cityofchicago.org/api/views/spqx-js37/rows.csv?accessType=DOWNLOAD' r = requests.get(data_portal_url) fh = StringIO(r.text) reader = DictReader(fh) def violation_cleaner(str): filters = [ (' AND ', '-'), (' and ', '-'), ('/', '-'), # These are streets spelled one way in ticket data, another way in location data ('STONEY ISLAND', 'STONY ISLAND'), ('CORNELL DRIVE', 'CORNELL'), ('NORTHWEST HWY', 'NORTHWEST HIGHWAY'), ('CICERO-I55', 'CICERO-STEVENSON NB'), ('31ST ST-MARTIN LUTHER KING DRIVE', 'DR MARTIN LUTHER KING-31ST'), ('4700 WESTERN', 'WESTERN-47TH'), ('LAKE SHORE DR-BELMONT', 'LAKE SHORE-BELMONT'), # These are 3-street intersections where the ticket data has 2 streets, location data has 2 other streets ('KIMBALL-DIVERSEY', 'MILWAUKEE-DIVERSEY'), ('PULASKI-ARCHER', 'PULASKI-ARCHER-50TH'), ('KOSTNER-NORTH', 'KOSTNER-GRAND-NORTH'), ('79TH-KEDZIE', 'KEDZIE-79TH-COLUMBUS'), ('LINCOLN-MCCORMICK', 'KIMBALL-LINCOLN-MCCORMICK'), ('KIMBALL-LINCOLN', 'KIMBALL-LINCOLN-MCCORMICK'), ('DIVERSEY-WESTERN', 'WESTERN-DIVERSEY-ELSTON'), ('HALSTED-FULLERTON', 'HALSTED-FULLERTON-LINCOLN'), ('COTTAGE GROVE-71ST', 'COTTAGE GROVE-71ST-SOUTH CHICAGO'), ('DAMEN-FULLERTON', 'DAMEN-FULLERTON-ELSTON'), ('DAMEN-DIVERSEY', 'DAMEN-DIVERSEY-CLYBOURN'), ('ELSTON-FOSTER', 'ELSTON-LAPORTE-FOSTER'), ('STONY ISLAND-79TH', 'STONY ISLAND-79TH-SOUTH CHICAGO'), # This last one is an artifact of the filter application process ('KIMBALL-LINCOLN-MCCORMICK-MCCORMICK', 'KIMBALL-LINCOLN-MCCORMICK') ] for filter in filters: str = str.replace(filter[0], filter[1]) return str def intersection_is_reversed(key, intersection): split_key = key.upper().split('-') split_intersection = intersection.upper().split('-') if len(split_key) != len(split_intersection): return False for k in split_key: if k not in split_intersection: return False for k in split_intersection: if k not in split_key: return False return True missing_intersections = set() for idx, line in enumerate(reader): line['INTERSECTION'] = violation_cleaner(line['INTERSECTION']) found = False for key in cameras: if key.lower() == line['INTERSECTION'].lower() or intersection_is_reversed(key, line['INTERSECTION']): found = True if 'total tickets' not in cameras[key]['attributes']: cameras[key]['attributes']['total tickets'] = 0 cameras[key]['attributes']['tickets since 12/22/2014'] = 0 cameras[key]['attributes']['tickets since 3/6/2015'] = 0 cameras[key]['attributes']['last ticket date'] = line['VIOLATION DATE'] else: cameras[key]['attributes']['total tickets'] += int(line['VIOLATIONS']) dt = datetime.strptime(line['VIOLATION DATE'], '%m/%d/%Y') if dt >= datetime.strptime('12/22/2014', '%m/%d/%Y'): cameras[key]['attributes']['tickets since 12/22/2014'] += int(line['VIOLATIONS']) if dt >= datetime.strptime('3/6/2015', '%m/%d/%Y'): cameras[key]['attributes']['tickets since 3/6/2015'] += int(line['VIOLATIONS']) if not found: missing_intersections.add(line['INTERSECTION']) print 'Missing %d intersections' % len(missing_intersections), missing_intersections Explanation: Now we need to compute how much money has been generated at each intersection - assuming $100 fine for each violation. In order to do that, we need to make the violation data line up with the camera location data. Then, we'll add 3 fields: number of violations overall; number on/after 12/22/2014; number on/after 3/6/2015. 
End of explanation import locale locale.setlocale( locale.LC_ALL, '' ) total = 0 missing_tickets = [] for camera in cameras: try: total += cameras[camera]['attributes']['total tickets'] except KeyError: missing_tickets.append(camera) print '%d tickets have been issued since 7/1/2014, raising %s' % (total, locale.currency(total * 100, grouping=True)) print 'The following %d intersections appear to never have issued a ticket in that time: %s' % ( len(missing_tickets), '; '.join(missing_tickets)) Explanation: Now it's time to ask some specific questions. First: how much money has the program raised overall? (Note that this data only goes back to 7/1/2014, several years after the program began.) End of explanation total = 0 low_crash_total = 0 for camera in cameras: try: total += cameras[camera]['attributes']['tickets since 12/22/2014'] if cameras[camera]['attributes']['Few crashes']: low_crash_total += cameras[camera]['attributes']['tickets since 12/22/2014'] except KeyError: continue print '%d tickets have been issued at low-crash intersections since 12/22/2014, raising %s' % ( low_crash_total, locale.currency(low_crash_total * 100, grouping=True)) print '%d tickets have been issued overall since 12/22/2014, raising %s' % ( total, locale.currency(total * 100, grouping=True)) Explanation: Since 12/22/2014, how much money has been generated by low-crash intersections? End of explanation total = 0 low_crash_total = 0 slated_for_closure_total = 0 for camera in cameras: try: total += cameras[camera]['attributes']['tickets since 3/6/2015'] if cameras[camera]['attributes']['Few crashes']: low_crash_total += cameras[camera]['attributes']['tickets since 3/6/2015'] if cameras[camera]['attributes']['To be removed']: slated_for_closure_total += cameras[camera]['attributes']['tickets since 3/6/2015'] except KeyError: continue print '%d tickets have been issued at low-crash intersections since 3/6/2015, raising %s' % ( low_crash_total, locale.currency(low_crash_total * 100, grouping=True)) print '%d tickets have been issued overall since 3/6/2015, raising %s' % ( total, locale.currency(total * 100, grouping=True)) print '%d tickets have been issued at cameras that were supposed to be closed since 3/6/2015, raising %s' % ( slated_for_closure_total, locale.currency(slated_for_closure_total * 100, grouping=True)) Explanation: How about since 3/6/2015? End of explanation from csv import DictWriter output = [] for camera in cameras: data = { 'intersection': camera, 'last ticket date': cameras[camera]['attributes'].get('last ticket date', ''), 'tickets since 7/1/2014': cameras[camera]['attributes'].get('total tickets', 0), 'revenue since 7/1/2014': cameras[camera]['attributes'].get('total tickets', 0) * 100, 'tickets since 12/22/2014': cameras[camera]['attributes'].get('tickets since 12/22/2014', 0), 'revenue since 12/22/2014': cameras[camera]['attributes'].get('tickets since 12/22/2014', 0) * 100, 'was slated for removal': cameras[camera]['attributes'].get('To be removed', False), 'had few crashes': cameras[camera]['attributes'].get('Few crashes', False), 'is currently active': True if 'current' in cameras[camera]['attributes'] else False, 'latitude': cameras[camera]['attributes'].get('LAT', 0), 'longitude': cameras[camera]['attributes'].get('LNG', 0) } output.append(data) with open('/tmp/red_light_intersections.csv', 'w+') as fh: writer = DictWriter(fh, sorted(output[0].keys())) writer.writeheader() writer.writerows(output) Explanation: Now let's generate a CSV of the cameras data for export. 
End of explanation
8,259
Given the following text description, write Python code to implement the functionality described below step by step Description: Tables to Networks, Networks to Tables Networks can be represented in a tabular form in two ways Step1: At this point, we have our stations and trips data loaded into memory. How we construct the graph depends on the kind of questions we want to answer, which makes the definition of the "unit of consideration" (or the entities for which we are trying to model their relationships) is extremely important. Let's try to answer the question Step2: Then, let's iterate over the stations DataFrame, and add in the node attributes. Step3: In order to answer the question of "which stations are important", we need to specify things a bit more. Perhaps a measure such as betweenness centrality or degree centrality may be appropriate here. The naive way would be to iterate over all the rows. Go ahead and try it at your own risk - it may take a long time Step4: Exercise Flex your memory muscles Step5: Exercise Create a new graph, and filter out the edges such that only those with more than 100 trips taken (i.e. count &gt;= 100) are left. (3 min.) Step6: Let's now try drawing the graph. Exercise Use nx.draw_kamada_kawai(my_graph) to draw the filtered graph to screen. This uses a force-directed layout. (1 min.) Step7: Finally, let's visualize this as a GIS person might see it, taking advantage of the latitude and longitude data. Step8: Exercise Try visualizing the graph using a CircosPlot. Order the nodes by their connectivity in the original graph, but plot only the filtered graph edges. (3 min.) You may have to first annotate the connectivity of each node, as given by the number of neighbors that any node is connected to. Step9: In this visual, nodes are sorted from highest connectivity to lowest connectivity in the unfiltered graph. Edges represent only trips that were taken >100 times between those two nodes. Some things should be quite evident here. There are lots of trips between the highly connected nodes and other nodes, but there are local "high traffic" connections between stations of low connectivity as well (nodes in the top-right quadrant). Saving NetworkX Graph Files NetworkX's API offers many formats for storing graphs to disk. If you intend to work exclusively with NetworkX, then pickling the file to disk is probably the easiest way. To write to disk
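The cells in this entry rely on a handful of imports that are not shown in this excerpt; a minimal setup sketch (my assumption about the environment, with the CircosPlot import assuming an older nxviz release that still exposes it):
# Assumed setup for the cells below (not shown in the original excerpt)
import os
import numpy as np
import pandas as pd
import networkx as nx
import matplotlib.pyplot as plt
from nxviz import CircosPlot  # used in the CircosPlot exercise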
Python Code: import zipfile # This block of code checks to make sure that a particular directory is present. if "divvy_2013" not in os.listdir('datasets/'): print('Unzipping the divvy_2013.zip file in the datasets folder.') with zipfile.ZipFile("datasets/divvy_2013.zip","r") as zip_ref: zip_ref.extractall('datasets') stations = pd.read_csv('datasets/divvy_2013/Divvy_Stations_2013.csv', parse_dates=['online date'], encoding='utf-8') stations.head(10) trips = pd.read_csv('datasets/divvy_2013/Divvy_Trips_2013.csv', parse_dates=['starttime', 'stoptime']) trips.head(10) Explanation: Tables to Networks, Networks to Tables Networks can be represented in a tabular form in two ways: As an adjacency list with edge attributes stored as columnar values, and as a node list with node attributes stored as columnar values. Storing the network data as a single massive adjacency table, with node attributes repeated on each row, can get unwieldy, especially if the graph is large, or grows to be so. One way to get around this is to store two files: one with node data and node attributes, and one with edge data and edge attributes. The Divvy bike sharing dataset is one such example of a network data set that has been stored as such. Loading Node Lists and Adjacency Lists Let's use the Divvy bike sharing data set as a starting point. The Divvy data set is comprised of the following data: Stations and metadata (like a node list with attributes saved) Trips and metadata (like an edge list with attributes saved) The README.txt file in the Divvy directory should help orient you around the data. End of explanation G = nx.DiGraph() Explanation: At this point, we have our stations and trips data loaded into memory. How we construct the graph depends on the kind of questions we want to answer, which makes the definition of the "unit of consideration" (or the entities for which we are trying to model their relationships) is extremely important. Let's try to answer the question: "What are the most popular trip paths?" In this case, the bike station is a reasonable "unit of consideration", so we will use the bike stations as the nodes. To start, let's initialize an directed graph G. End of explanation for d in stations.to_dict('records'): # each row is a dictionary node_id = d.pop('id') G.add_node(node_id, attr_dict=d) Explanation: Then, let's iterate over the stations DataFrame, and add in the node attributes. End of explanation # # Run the following code at your own risk :) # for r, d in trips.iterrows(): # start = d['from_station_id'] # end = d['to_station_id'] # if (start, end) not in G.edges(): # G.add_edge(start, end, count=1) # else: # G.edge[start][end]['count'] += 1 counts = trips.groupby(['from_station_id', 'to_station_id'])['trip_id'].count().reset_index() for d in counts.to_dict('records'): G.add_edge(d['from_station_id'], d['to_station_id'], count=d['trip_id']) Explanation: In order to answer the question of "which stations are important", we need to specify things a bit more. Perhaps a measure such as betweenness centrality or degree centrality may be appropriate here. The naive way would be to iterate over all the rows. Go ahead and try it at your own risk - it may take a long time :-). Alternatively, I would suggest doing a pandas groupby. End of explanation from collections import Counter # Count the number of edges that have x trips recorded on them. 
trip_count_distr = ______________________________
# Then plot the distribution of these
plt.scatter(_______________, _______________, alpha=0.1)
plt.yscale('log')
plt.xlabel('num. of trips')
plt.ylabel('num. of edges')
Explanation: Exercise Flex your memory muscles: can you make a scatter plot of the distribution of the number of edges that have a certain number of trips? (3 min.) The x-value is the number of trips taken between two stations, and the y-value is the number of edges that have that number of trips.
End of explanation
# Filter the edges to just those with more than 100 trips.
G_filtered = G.copy()
for u, v, d in G.edges(data=True):
    # Fill in your code here.
len(G_filtered.edges())
Explanation: Exercise Create a new graph, and filter out the edges such that only those with more than 100 trips taken (i.e. count >= 100) are left. (3 min.)
End of explanation
# Fill in your code here.
Explanation: Let's now try drawing the graph. Exercise Use nx.draw_kamada_kawai(my_graph) to draw the filtered graph to screen. This uses a force-directed layout. (1 min.)
End of explanation
locs = {n: np.array([d['latitude'], d['longitude']]) for n, d in G_filtered.nodes(data=True)}
# for n, d in G_filtered.nodes(data=True):
#     print(n, d.keys())
nx.draw_networkx_nodes(G_filtered, pos=locs, node_size=3)
nx.draw_networkx_edges(G_filtered, pos=locs)
plt.show()
Explanation: Finally, let's visualize this as a GIS person might see it, taking advantage of the latitude and longitude data.
End of explanation
for n in G_filtered.nodes():
    ____________
c = CircosPlot(__________)
__________
plt.savefig('images/divvy.png', dpi=300)
Explanation: Exercise Try visualizing the graph using a CircosPlot. Order the nodes by their connectivity in the original graph, but plot only the filtered graph edges. (3 min.) You may have to first annotate the connectivity of each node, as given by the number of neighbors that any node is connected to.
End of explanation
nx.write_gpickle(G, 'datasets/divvy_2013/divvy_graph.pkl')
G = nx.read_gpickle('datasets/divvy_2013/divvy_graph.pkl')
Explanation: In this visual, nodes are sorted from highest connectivity to lowest connectivity in the unfiltered graph. Edges represent only trips that were taken >100 times between those two nodes. Some things should be quite evident here. There are lots of trips between the highly connected nodes and other nodes, but there are local "high traffic" connections between stations of low connectivity as well (nodes in the top-right quadrant). Saving NetworkX Graph Files NetworkX's API offers many formats for storing graphs to disk. If you intend to work exclusively with NetworkX, then pickling the file to disk is probably the easiest way. To write to disk: nx.write_gpickle(G, handle) To load from disk: G = nx.read_gpickle(handle)
End of explanation
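One possible way to fill in the exercise blanks above (a sketch, not the official solutions; node-attribute access and the CircosPlot constructor shown here assume a networkx 2.x / older object-based nxviz API and may need adjusting for other versions):
# Scatter exercise: distribution of trips per edge
trip_count_distr = Counter([d['count'] for _, _, d in G.edges(data=True)])
plt.scatter(list(trip_count_distr.keys()), list(trip_count_distr.values()), alpha=0.1)
plt.yscale('log')
plt.xlabel('num. of trips')
plt.ylabel('num. of edges')
# Filtering exercise: drop edges with fewer than 100 trips
G_filtered = G.copy()
for u, v, d in G.edges(data=True):
    if d['count'] < 100:
        G_filtered.remove_edge(u, v)
# Drawing exercise: force-directed layout
nx.draw_kamada_kawai(G_filtered)
# CircosPlot exercise: annotate connectivity from the unfiltered graph, then plot
for n in G_filtered.nodes():
    G_filtered.nodes[n]['connectivity'] = len(list(G.neighbors(n)))
c = CircosPlot(G_filtered, node_order='connectivity', node_color='connectivity')
c.draw()
plt.savefig('images/divvy.png', dpi=300)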
8,260
Given the following text description, write Python code to implement the functionality described below step by step Description: Performance optimization overview The purpose of this tutorial is twofold Illustrate the performance optimizations applied to the code generated by an Operator. Describe the options Devito provides to users to steer the optimization process. As we shall see, most optimizations are automatically applied as they're known to systematically improve performance. Others, whose impact varies across different Operator's, are instead to be enabled through specific flags. An Operator has several preset optimization levels; the fundamental ones are noop and advanced. With noop, no performance optimizations are introduced. With advanced, several flop-reducing and data locality optimization passes are applied. Examples of flop-reducing optimization passes are common sub-expressions elimination and factorization, while examples of data locality optimization passes are loop fusion and cache blocking. Optimization levels in Devito are conceptually akin to the -O2, -O3, ... flags in classic C/C++/Fortran compilers. An optimization pass may provide knobs, or options, for fine-grained tuning. As explained in the next sections, some of these options are given at compile-time, others at run-time. ** Remark ** Parallelism -- both shared-memory (e.g., OpenMP) and distributed-memory (MPI) -- is by default disabled and is not controlled via the optimization level. In this tutorial we will also show how to enable OpenMP parallelism (you'll see it's trivial!). Another mini-guide about parallelism in Devito and related aspects is available here. **** Outline API Default values Running example OpenMP parallelism The advanced mode The advanced-fsg mode API The optimization level may be changed in various ways Step1: The following cell is only needed for Continuous Integration. But actually it's also an example of how "programmatic takes precedence over global" (see API section). Step2: Running example Throughout the notebook we will generate Operator's for the following time-marching Eq. Step3: Despite its simplicity, this Eq is all we need to showcase the key components of the Devito optimization engine. OpenMP parallelism There are several ways to enable OpenMP parallelism. The one we use here consists of supplying an option to an Operator. The next cell illustrates the difference between two Operator's generated with the noop optimization level, but with OpenMP enabled on the latter one. Step4: The OpenMP-ized op0_omp Operator includes Step5: The advanced mode The default optimization level in Devito is advanced. This mode performs several compilation passes to optimize the Operator for computation (number of flops), working set size, and data locality. In the next paragraphs we'll dissect the advanced mode to analyze, one by one, some of its key passes. Loop blocking The next cell creates a new Operator that adds loop blocking to what we had in op0_omp. Step6: ** Remark ** 'blocking' is not an optimization level -- it rather identifies a specific compilation pass. In other words, the advanced mode defines an ordered sequence of passes, and blocking is one such pass. **** The blocking pass creates additional loops over blocks. In this simple Operator there's just one loop nest, so only a pair of additional loops are created. 
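Since the optimization options introduced in this entry are freely composable, a single Operator can combine several of them at once; a sketch (it assumes the eq of the running example defined below, and the particular combination of options and apply arguments is illustrative rather than taken from the original notebook):
# Added illustration: composing several options on one Operator
op = Operator(eq, opt=('advanced', {'openmp': True, 'blocklevels': 2,
                                    'min-storage': True, 'cire-maxpar': True}))
op.apply(time_M=0, nthreads=4, autotune='aggressive')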
In more complex Operator's, several loop nests may individually be blocked, whereas others may be left unblocked -- this is decided by the Devito compiler according to certain heuristics. The size of a block is represented by the symbols x0_blk0_size and y0_blk0_size, which are runtime parameters akin to nthreads. By default, Devito applies 2D blocking and sets the default block shape to 8x8. There are two ways to set a different block shape Step7: SIMD vectorization Devito enforces SIMD vectorization through OpenMP pragmas. Step8: Code motion The advanced mode has a code motion pass. In explicit PDE solvers, this is most commonly used to lift expensive time-invariant sub-expressions out of the inner loops. The pass is however quite general in that it is not restricted to the concept of time-invariance -- any sub-expression invariant with respect to a subset of Dimensions is a code motion candidate. In our running example, sin(f) gets hoisted out of the inner loops since it is determined to be an expensive invariant sub-expression. In other words, the compiler trades the redundant computation of sin(f) for additional storage (the r0[...] array). Step9: Basic flop-reducing transformations Among the simpler flop-reducing transformations applied by the advanced mode we find Step10: The factorization pass makes sure r0 is collected to reduce the number of multiplications. Step11: Finally, opt-pows turns costly pow calls into multiplications. Step12: Cross-iteration redundancy elimination (CIRE) This is perhaps the most advanced among the optimization passes in Devito. CIRE [1] searches for redundancies across consecutive loop iterations. These are often induced by a mixture of nested, high-order, partial derivatives. The underlying idea is very simple. Consider Step13: We note that since there are no redundancies along x, the compiler is smart to figure out that r0 and u can safely be computed within the same x loop. This is nice -- not only is the reuse distance decreased, but also a grid-sized temporary avoided. The min-storage option Let's now consider a variant of our running example Step14: As expected, there are now two temporaries, one stemming from u.dy.dy and the other from u.dx.dx. A key difference with respect to op7_omp here is that both are grid-size temporaries. This might seem odd at first -- why should the u.dy.dy temporary, that is r1, now be a three-dimensional temporary when we know already it could be a two-dimensional temporary? This is merely a compiler heuristic Step15: The cire-mincost-sops option So far so good -- we've seen that Devito can capture and schedule cross-iteration redundancies. But what if we actually do not want certain redundancies to be captured? There are a few reasons we may like that way, for example we're allocating too much extra memory for the tensor temporaries, and we rather prefer to avoid that. For this, we can tell Devito what the minimum cost of a sub-expression should be in order to be a CIRE candidate. The cost is an integer number based on the operation count and the type of operations Step16: We observe how setting cire-min-cost=31 makes the tensor temporary produced by op7_omp disappear. 30 is indeed the minimum cost such that the targeted sub-expression becomes an optimization candidate. 
8.33333333e-2F*u[t0][x + 4][y + 2][z + 4]/h_y - 6.66666667e-1F*u[t0][x + 4][y + 3][z + 4]/h_y + 6.66666667e-1F*u[t0][x + 4][y + 5][z + 4]/h_y - 8.33333333e-2F*u[t0][x + 4][y + 6][z + 4]/h_y The cost of integer arithmetic for array indexing is always zero => 0. Three + (or -) => 3 Four / by h_y, a constant symbol => 4 Four two-way * with operands (1/h_y, u) => 8 So in total we have 15 operations. For a sub-expression to be optimizable away by CIRE, the resulting saving in operation count must be at least twice the cire-mincost-sops value OR there must be no increase in working set size. Here there is an increase in working set size -- the new tensor temporary -- and at the same time the threshold is set to 31, so the compiler decides not to optimize away the sub-expression. In short, the rational is that the saving in operation count does not justify the introduction of a new tensor temporary. Next, we try again with a smaller cire-mincost-sops. Step17: The cire-mincost-inv option Akin to sum-of-products, cross-iteration redundancies may be searched across dimension-invariant sub-expressions, typically time-invariants. So, analogously to what we've seen before Step18: For convenience, the lift pass triggers CIRE for dimension-invariant sub-expressions. As seen before, this leads to producing one tensor temporary. By setting a larger value for cire-mincost-inv, we avoid a grid-size temporary to be allocated, in exchange for a trascendental function, sin, to be computed at each iteration Step19: The cire-maxpar option Sometimes it's possible to trade storage for parallelism (i.e., for more parallel dimensions). For this, Devito provides the cire-maxpar option which is by default set to Step20: The generated code uses a three-dimensional temporary that gets written and subsequently read in two separate x-y-z loop nests. Now, both loops can safely be openmp-collapsed, which is vital when running on GPUs. Impact of CIRE in the advanced mode The advanced mode triggers all of the passes we've seen so far... and in fact, many more! Some of them, however, aren't visible in our running example (e.g., all of the MPI-related optimizations). These will be treated in a future notebook. Obviously, all of the optimization options (e.g., min-cost, cire-mincost-sops, blocklevels) are applicable and composable in any arbitrary way. Step21: A crucial observation here is that CIRE is applied on top of loop blocking -- the r1 temporary is computed within the same block as u, which in turn requires an additional iteration at the block edge along y to be performed (the first y loop starts at y0_blk0 - 2, while the second one at y0_blk0). Further, the size of r1 is now a function of the block shape. Altogether, this implements what in the literature is often referred to as "overlapped tiling" (or "overlapped blocking") Step22: There are two key things to notice here Step23: Within the y loop there are several iteration variables, some of which (yr0, yr1, ...) employ modulo increment to cyclically produce the indices 0 and 1. In essence, with cire-rotate, instead of computing an entire slice of y values, at each y iteration we only keep track of the values that are strictly necessary to evaluate u at y -- only two values in this case. This results in a working set reduction, at the price of turning one parallel loop (y) into a sequential one. The cire-maxalias option Let's consider the following variation of our running example, in which the outer y derivative now comprises several terms, other than just u.dy. 
Step24: By paying close attention to the generated code, we see that the r0 temporary only stores the u.dy component, while the 2*f*f term is left intact in the loop nest computing u. This is due to a heuristic applied by the Devito compiler Step25: Now we have a "fatter" temporary, and that's good -- but if, instead, we had had an Operator such as the one below, the gain from a reduced operation count might be outweighed by the presence of more temporaries, which means larger working set and increased memory traffic. Step26: The advanced-fsg mode The alternative advanced-fsg optimization level applies the same passes as advanced, but in a different order. The key difference is that -fsg does not generate overlapped blocking code across CIRE-generated loop nests. Step27: The x loop here is still shared by the two loop nests, but the y one isn't. Analogously, if we consider the alternative eq already used in op7_b0_omp, we get two completely separate, and therefore individually blocked, loop nests.
Python Code: from examples.performance import unidiff_output, print_kernel Explanation: Performance optimization overview The purpose of this tutorial is twofold Illustrate the performance optimizations applied to the code generated by an Operator. Describe the options Devito provides to users to steer the optimization process. As we shall see, most optimizations are automatically applied as they're known to systematically improve performance. Others, whose impact varies across different Operator's, are instead to be enabled through specific flags. An Operator has several preset optimization levels; the fundamental ones are noop and advanced. With noop, no performance optimizations are introduced. With advanced, several flop-reducing and data locality optimization passes are applied. Examples of flop-reducing optimization passes are common sub-expressions elimination and factorization, while examples of data locality optimization passes are loop fusion and cache blocking. Optimization levels in Devito are conceptually akin to the -O2, -O3, ... flags in classic C/C++/Fortran compilers. An optimization pass may provide knobs, or options, for fine-grained tuning. As explained in the next sections, some of these options are given at compile-time, others at run-time. ** Remark ** Parallelism -- both shared-memory (e.g., OpenMP) and distributed-memory (MPI) -- is by default disabled and is not controlled via the optimization level. In this tutorial we will also show how to enable OpenMP parallelism (you'll see it's trivial!). Another mini-guide about parallelism in Devito and related aspects is available here. **** Outline API Default values Running example OpenMP parallelism The advanced mode The advanced-fsg mode API The optimization level may be changed in various ways: globally, through the DEVITO_OPT environment variable. For example, to disable all optimizations on all Operator's, one could run with DEVITO_OPT=noop python ... programmatically, adding the following lines to a program from devito import configuration configuration['opt'] = 'noop' locally, as an Operator argument Operator(..., opt='noop') Local takes precedence over programmatic, and programmatic takes precedence over global. The optimization options, instead, may only be changed locally. The syntax to specify an option is Operator(..., opt=('advanced', {&lt;optimization options&gt;}) A concrete example (you can ignore the meaning for now) is Operator(..., opt=('advanced', {'blocklevels': 2}) That is, options are to be specified together with the optimization level (advanced). Default values By default, all Operator's are run with the optimization level set to advanced. So this Operator(Eq(...)) is equivalent to Operator(Eq(...), opt='advanced') and obviously also to Operator(Eq(...), opt=('advanced', {})) In virtually all scenarios, regardless of application and underlying architecture, this ensures very good performance -- but not necessarily the very best. Misc The following functions will be used throughout the notebook for printing generated code. End of explanation from devito import configuration configuration['language'] = 'C' configuration['platform'] = 'bdw' # Optimize for an Intel Broadwell configuration['opt-options']['par-collapse-ncores'] = 1 # Maximize use loop collapsing Explanation: The following cell is only needed for Continuous Integration. But actually it's also an example of how "programmatic takes precedence over global" (see API section). 
End of explanation from devito import Eq, Grid, Operator, Function, TimeFunction, sin grid = Grid(shape=(80, 80, 80)) f = Function(name='f', grid=grid) u = TimeFunction(name='u', grid=grid, space_order=4) eq = Eq(u.forward, f**2*sin(f)*u.dy.dy) Explanation: Running example Throughout the notebook we will generate Operator's for the following time-marching Eq. End of explanation op0 = Operator(eq, opt=('noop')) op0_omp = Operator(eq, opt=('noop', {'openmp': True})) # print(op0) # print(unidiff_output(str(op0), str(op0_omp))) # Uncomment to print out the diff only print_kernel(op0_omp) Explanation: Despite its simplicity, this Eq is all we need to showcase the key components of the Devito optimization engine. OpenMP parallelism There are several ways to enable OpenMP parallelism. The one we use here consists of supplying an option to an Operator. The next cell illustrates the difference between two Operator's generated with the noop optimization level, but with OpenMP enabled on the latter one. End of explanation op0_b0_omp = Operator(eq, opt=('noop', {'openmp': True, 'par-dynamic-work': 100})) print_kernel(op0_b0_omp) Explanation: The OpenMP-ized op0_omp Operator includes: the header file "omp.h" a #pragma omp parallel num_threads(nthreads) directive a #pragma omp for collapse(...) schedule(dynamic,1) directive More complex Operator's will have more directives, more types of directives, different iteration scheduling strategies based on heuristics and empirical tuning (e.g., static instead of dynamic), etc. The reason for collapse(1), rather than collapse(3), boils down to using opt=('noop', ...); if the default advanced mode had been used instead, we would see the latter clause. We note how the OpenMP pass introduces a new symbol, nthreads. This allows users to explicitly control the number of threads with which an Operator is run. op0_omp.apply(time_M=0) # Picks up `nthreads` from the standard environment variable OMP_NUM_THREADS op0_omp.apply(time_M=0, nthreads=2) # Runs with 2 threads per parallel loop A few optimization options are available for this pass (but not on all platforms, see here), though in our experience the default values do a fine job: par-collapse-ncores: use a collapse clause only if the number of available physical cores is greater than this value (default=4). par-collapse-work: use a collapse clause only if the trip count of the collapsable loops is statically known to exceed this value (default=100). par-chunk-nonaffine: a coefficient to adjust the chunk size in non-affine parallel loops. The larger the coefficient, the smaller the chunk size (default=3). par-dynamic-work: use dynamic scheduling if the operation count per iteration exceeds this value. Otherwise, use static scheduling (default=10). par-nested: use nested parallelism if the number of hyperthreads per core is greater than this value (default=2). So, for instance, we could switch to static scheduling by constructing the following Operator End of explanation op1_omp = Operator(eq, opt=('blocking', {'openmp': True})) # print(op1_omp) # Uncomment to see the *whole* generated code print_kernel(op1_omp) Explanation: The advanced mode The default optimization level in Devito is advanced. This mode performs several compilation passes to optimize the Operator for computation (number of flops), working set size, and data locality. In the next paragraphs we'll dissect the advanced mode to analyze, one by one, some of its key passes. 
Loop blocking The next cell creates a new Operator that adds loop blocking to what we had in op0_omp. End of explanation op1_omp_6D = Operator(eq, opt=('blocking', {'blockinner': True, 'blocklevels': 2, 'openmp': True})) # print(op1_omp_6D) # Uncomment to see the *whole* generated code print_kernel(op1_omp_6D) Explanation: ** Remark ** 'blocking' is not an optimization level -- it rather identifies a specific compilation pass. In other words, the advanced mode defines an ordered sequence of passes, and blocking is one such pass. **** The blocking pass creates additional loops over blocks. In this simple Operator there's just one loop nest, so only a pair of additional loops are created. In more complex Operator's, several loop nests may individually be blocked, whereas others may be left unblocked -- this is decided by the Devito compiler according to certain heuristics. The size of a block is represented by the symbols x0_blk0_size and y0_blk0_size, which are runtime parameters akin to nthreads. By default, Devito applies 2D blocking and sets the default block shape to 8x8. There are two ways to set a different block shape: passing an explicit value. For instance, below we run with a 24x8 block shape op1_omp.apply(..., x0_blk0_size=24) letting the autotuner pick up a better block shape for us. There are several autotuning modes. A short summary is available here op1_omp.apply(..., autotune='aggressive') Loop blocking also provides two optimization options: blockinner={False, True} -- to enable 3D (or any nD, n>2) blocking blocklevels={int} -- to enable hierarchical blocking, to exploit multiple levels of the cache hierarchy In the example below, we construct an Operator with six-dimensional loop blocking: the first three loops represent outer blocks, whereas the second three loops represent inner blocks within an outer block. End of explanation op2_omp = Operator(eq, opt=('blocking', 'simd', {'openmp': True})) # print(op2_omp) # Uncomment to see the generated code # print(unidiff_output(str(op1_omp), str(op2_omp))) # Uncomment to print out the diff only Explanation: SIMD vectorization Devito enforces SIMD vectorization through OpenMP pragmas. End of explanation op3_omp = Operator(eq, opt=('lift', {'openmp': True})) print_kernel(op3_omp) Explanation: Code motion The advanced mode has a code motion pass. In explicit PDE solvers, this is most commonly used to lift expensive time-invariant sub-expressions out of the inner loops. The pass is however quite general in that it is not restricted to the concept of time-invariance -- any sub-expression invariant with respect to a subset of Dimensions is a code motion candidate. In our running example, sin(f) gets hoisted out of the inner loops since it is determined to be an expensive invariant sub-expression. In other words, the compiler trades the redundant computation of sin(f) for additional storage (the r0[...] array). End of explanation op4_omp = Operator(eq, opt=('cse', {'openmp': True})) print(unidiff_output(str(op0_omp), str(op4_omp))) Explanation: Basic flop-reducing transformations Among the simpler flop-reducing transformations applied by the advanced mode we find: "classic" common sub-expressions elimination (CSE), factorization, optimization of powers The cell below shows how the computation of u changes by incrementally applying these three passes. First of all, we observe how the symbolic spacing h_y gets assigned to a temporary, r0, as it appears in several sub-expressions. This is the effect of CSE. 
End of explanation op5_omp = Operator(eq, opt=('cse', 'factorize', {'openmp': True})) print(unidiff_output(str(op4_omp), str(op5_omp))) Explanation: The factorization pass makes sure r0 is collected to reduce the number of multiplications. End of explanation op6_omp = Operator(eq, opt=('cse', 'factorize', 'opt-pows', {'openmp': True})) print(unidiff_output(str(op5_omp), str(op6_omp))) Explanation: Finally, opt-pows turns costly pow calls into multiplications. End of explanation op7_omp = Operator(eq, opt=('cire-sops', {'openmp': True})) print_kernel(op7_omp) # print(unidiff_output(str(op7_omp), str(op0_omp))) # Uncomment to print out the diff only Explanation: Cross-iteration redundancy elimination (CIRE) This is perhaps the most advanced among the optimization passes in Devito. CIRE [1] searches for redundancies across consecutive loop iterations. These are often induced by a mixture of nested, high-order, partial derivatives. The underlying idea is very simple. Consider: r0 = a[i-1] + a[i] + a[i+1] at i=1, we have r0 = a[0] + a[1] + a[2] at i=2, we have r0 = a[1] + a[2] + a[3] So the sub-expression a[1] + a[2] is computed twice, by two consecutive iterations. What makes CIRE complicated is the generalization to arbitrary expressions, the presence of multiple dimensions, the scheduling strategy due to the trade-off between redundant compute and working set, and the co-existance with other optimizations (e.g., blocking, vectorization). All these aspects won't be treated here. What instead we will show is the effect of CIRE in our running example and the optimization options at our disposal to drive the detection and scheduling of the captured redundancies. In our running example, some cross-iteration redundancies are induced by the nested first-order derivatives along y. As we see below, these redundancies are captured and assigned to the two-dimensional temporary r0. Note: the name cire-sops means "Apply CIRE to sum-of-product expressions". A sum-of-product is what taking a derivative via finite differences produces. End of explanation eq = Eq(u.forward, f**2*sin(f)*(u.dy.dy + u.dx.dx)) op7_b0_omp = Operator(eq, opt=('cire-sops', {'openmp': True})) print_kernel(op7_b0_omp) Explanation: We note that since there are no redundancies along x, the compiler is smart to figure out that r0 and u can safely be computed within the same x loop. This is nice -- not only is the reuse distance decreased, but also a grid-sized temporary avoided. The min-storage option Let's now consider a variant of our running example End of explanation op7_b1_omp = Operator(eq, opt=('cire-sops', {'openmp': True, 'min-storage': True})) print_kernel(op7_b1_omp) Explanation: As expected, there are now two temporaries, one stemming from u.dy.dy and the other from u.dx.dx. A key difference with respect to op7_omp here is that both are grid-size temporaries. This might seem odd at first -- why should the u.dy.dy temporary, that is r1, now be a three-dimensional temporary when we know already it could be a two-dimensional temporary? This is merely a compiler heuristic: by adding an extra dimension to r1, both temporaries can be scheduled within the same loop nest, thus augmenting data reuse and potentially enabling further cross-expression optimizations. We can disable this heuristic through the min-storage option. 
End of explanation eq = Eq(u.forward, f**2*sin(f)*u.dy.dy) # Back to original running example op8_omp = Operator(eq, opt=('cire-sops', {'openmp': True, 'cire-mincost-sops': 31})) print_kernel(op8_omp) Explanation: The cire-mincost-sops option So far so good -- we've seen that Devito can capture and schedule cross-iteration redundancies. But what if we actually do not want certain redundancies to be captured? There are a few reasons we may like that way, for example we're allocating too much extra memory for the tensor temporaries, and we rather prefer to avoid that. For this, we can tell Devito what the minimum cost of a sub-expression should be in order to be a CIRE candidate. The cost is an integer number based on the operation count and the type of operations: A basic arithmetic operation such as + and * has a cost of 1. A / whose divisor is a constant expression has a cost of 1. A / whose divisor is not a constant expression has a cost of 25. A power with non-integer exponent has a cost of 50. A power with non-negative integer exponent n has a cost of n-1 (i.e., the number of * it will be converted into). A trascendental function (sin, cos, etc.) has a cost of 100. The cire-mincost-sops option can be used to control the minimum cost of CIRE candidates. End of explanation op8_b1_omp = Operator(eq, opt=('cire-sops', {'openmp': True, 'cire-mincost-sops': 30})) print_kernel(op8_b1_omp) Explanation: We observe how setting cire-min-cost=31 makes the tensor temporary produced by op7_omp disappear. 30 is indeed the minimum cost such that the targeted sub-expression becomes an optimization candidate. 8.33333333e-2F*u[t0][x + 4][y + 2][z + 4]/h_y - 6.66666667e-1F*u[t0][x + 4][y + 3][z + 4]/h_y + 6.66666667e-1F*u[t0][x + 4][y + 5][z + 4]/h_y - 8.33333333e-2F*u[t0][x + 4][y + 6][z + 4]/h_y The cost of integer arithmetic for array indexing is always zero => 0. Three + (or -) => 3 Four / by h_y, a constant symbol => 4 Four two-way * with operands (1/h_y, u) => 8 So in total we have 15 operations. For a sub-expression to be optimizable away by CIRE, the resulting saving in operation count must be at least twice the cire-mincost-sops value OR there must be no increase in working set size. Here there is an increase in working set size -- the new tensor temporary -- and at the same time the threshold is set to 31, so the compiler decides not to optimize away the sub-expression. In short, the rational is that the saving in operation count does not justify the introduction of a new tensor temporary. Next, we try again with a smaller cire-mincost-sops. End of explanation eq = Eq(u.forward, f**2*sin(f)*u.dy.dy) # Back to original running example op11_omp = Operator(eq, opt=('lift', {'openmp': True})) # print_kernel(op11_omp) # Uncomment to see the generated code Explanation: The cire-mincost-inv option Akin to sum-of-products, cross-iteration redundancies may be searched across dimension-invariant sub-expressions, typically time-invariants. So, analogously to what we've seen before: End of explanation op12_omp = Operator(eq, opt=('lift', {'openmp': True, 'cire-mincost-inv': 51})) print_kernel(op12_omp) Explanation: For convenience, the lift pass triggers CIRE for dimension-invariant sub-expressions. As seen before, this leads to producing one tensor temporary. 
By setting a larger value for cire-mincost-inv, we avoid a grid-size temporary to be allocated, in exchange for a trascendental function, sin, to be computed at each iteration End of explanation op13_omp = Operator(eq, opt=('cire-sops', {'openmp': True, 'cire-maxpar': True})) print_kernel(op13_omp) Explanation: The cire-maxpar option Sometimes it's possible to trade storage for parallelism (i.e., for more parallel dimensions). For this, Devito provides the cire-maxpar option which is by default set to: False on CPU backends True on GPU backends Let's see what happens when we switch it on End of explanation op14_omp = Operator(eq, openmp=True) print(op14_omp) # op14_b0_omp = Operator(eq, opt=('advanced', {'min-storage': True})) Explanation: The generated code uses a three-dimensional temporary that gets written and subsequently read in two separate x-y-z loop nests. Now, both loops can safely be openmp-collapsed, which is vital when running on GPUs. Impact of CIRE in the advanced mode The advanced mode triggers all of the passes we've seen so far... and in fact, many more! Some of them, however, aren't visible in our running example (e.g., all of the MPI-related optimizations). These will be treated in a future notebook. Obviously, all of the optimization options (e.g., min-cost, cire-mincost-sops, blocklevels) are applicable and composable in any arbitrary way. End of explanation op15_omp = Operator(eq, opt=('advanced', {'openmp': True, 'cire-rotate': True})) print_kernel(op15_omp) Explanation: A crucial observation here is that CIRE is applied on top of loop blocking -- the r1 temporary is computed within the same block as u, which in turn requires an additional iteration at the block edge along y to be performed (the first y loop starts at y0_blk0 - 2, while the second one at y0_blk0). Further, the size of r1 is now a function of the block shape. Altogether, this implements what in the literature is often referred to as "overlapped tiling" (or "overlapped blocking"): data reuse across consecutive loop nests is obtained by cross-loop blocking, which in turn requires a certain degree of redundant computation at the block edge. Clearly, there's a tension between the block shape and the amount of redundant computation. For example, a small block shape guarantees a small(er) working set, and thus better data reuse, but also requires more redundant computation. The cire-rotate option So far we've seen two ways to compute the tensor temporaries: The temporary dimensions span the whole grid; The temporary dimensions span a block created by the loop blocking pass. There are a few other ways, and in particular there's a third way supported in Devito, enabled through the cire-rotate option: The temporary outermost-dimension is a function of the stencil radius; all other temporary dimensions are a function of the loop blocking shape. Let's jump straight into an example End of explanation print(op15_omp.body[1].header[4]) Explanation: There are two key things to notice here: The r1 temporary is a pointer to a two-dimensional array of size [2][z_size]. It's obtained via casting of pr1[tid], which in turn is defined as End of explanation eq = Eq(u.forward, (2*f*f*u.dy).dy) op16_omp = Operator(eq, opt=('advanced', {'openmp': True})) print_kernel(op16_omp) Explanation: Within the y loop there are several iteration variables, some of which (yr0, yr1, ...) employ modulo increment to cyclically produce the indices 0 and 1. 
In essence, with cire-rotate, instead of computing an entire slice of y values, at each y iteration we only keep track of the values that are strictly necessary to evaluate u at y -- only two values in this case. This results in a working set reduction, at the price of turning one parallel loop (y) into a sequential one. The cire-maxalias option Let's consider the following variation of our running example, in which the outer y derivative now comprises several terms, other than just u.dy. End of explanation op16_b0_omp = Operator(eq, opt=('advanced', {'openmp': True, 'cire-maxalias': True})) print_kernel(op16_b0_omp) Explanation: By paying close attention to the generated code, we see that the r0 temporary only stores the u.dy component, while the 2*f*f term is left intact in the loop nest computing u. This is due to a heuristic applied by the Devito compiler: in a derivative, only the sub-expressions representing additions -- and therefore inner derivatives -- are used as CIRE candidates. The logic behind this heuristic is that of minimizing the number of required temporaries at the price of potentially leaving some redundancies on the table. One can disable this heuristic via the cire-maxalias option. End of explanation eq = Eq(u.forward, (2*f*f*u.dy).dy + (3*f*u.dy).dy) op16_b1_omp = Operator(eq, opt=('advanced', {'openmp': True})) op16_b2_omp = Operator(eq, opt=('advanced', {'openmp': True, 'cire-maxalias': True})) # print_kernel(op16_b1_omp) # Uncomment to see generated code with one temporary but more flops # print_kernel(op16_b2_omp) # Uncomment to see generated code with two temporaries but fewer flops Explanation: Now we have a "fatter" temporary, and that's good -- but if, instead, we had had an Operator such as the one below, the gain from a reduced operation count might be outweighed by the presence of more temporaries, which means larger working set and increased memory traffic. End of explanation eq = Eq(u.forward, f**2*sin(f)*u.dy.dy) # Back to original running example op17_omp = Operator(eq, opt=('advanced-fsg', {'openmp': True})) print(op17_omp) Explanation: The advanced-fsg mode The alternative advanced-fsg optimization level applies the same passes as advanced, but in a different order. The key difference is that -fsg does not generate overlapped blocking code across CIRE-generated loop nests. End of explanation eq = Eq(u.forward, f**2*sin(f)*(u.dy.dy + u.dx.dx)) op17_b0 = Operator(eq, opt=('advanced-fsg', {'openmp': True})) print(op17_b0) Explanation: The x loop here is still shared by the two loop nests, but the y one isn't. Analogously, if we consider the alternative eq already used in op7_b0_omp, we get two completely separate, and therefore individually blocked, loop nests. End of explanation
8,261
Given the following text description, write Python code to implement the functionality described below step by step Description: This notebook is meant to test the functions written in python to analyze AFiNES simulation output. It's going to be messy, but so it goes... Step1: First a test of the readData function that reads in .txt files output by afines. Right now (9/12/2017) this only works for actins.txt Step2: Now calculate the velocity divergence. This is essentially stolen from here
Python Code: import os import numpy as np import matplotlib.pyplot as plt import matplotlib as mpl %matplotlib notebook import pandas as pd import h5py import afinesanalysis.afinesanalysis as aa Explanation: This notebook is meant to test the functions written in python to analyze AFiNES simulation output. It's going to be messy, but so it goes... End of explanation # Change this line if on a different system dataFolder = '/media/daniel/storage1/local_LLM_Danny/rupture/a-1.0-p-1.0/txt_stack/' filename = 'actins.txt' dt = 1 domainSize = 50 nbins = 10 minpts = 10 dr = 1 rbfFunc = 'gaussian' rbfEps = 5 savestuff = False nGridPts = np.ceil(domainSize/dr) + 1 txy = aa.readData(dataFolder, filename) # Load in hdf5 file actinData = h5py.File(os.path.join(dataFolder, 'actinsParsed.hdf5'), 'a') actins = actinData.create_dataset('actinData', data=txy) del actinData Explanation: First a test of the readData function that reads in .txt files output by afines. Right now (9/12/2017) this only works for actins.txt End of explanation # Don't include the id data when getting the divergence [xx, yy, uut, vvt] = aa.interpolateVelocity(txy[:,:,:2], dt, domainSize, nbins, minpts, dr, rbfFunc, rbfEps, savestuff) divV = aa.massWeightedVelocityDivergence(txy[:,:,:2], uut, vvt, dt) interpedXVel = actinData.create_dataset('interpedXVel', data=uut) interpedYVel = actinData.create_dataset('interpedYVel', data=vvt) xGrid = actinData.create_dataset('xGrid', data=xx) yGrid = actinData.create_dataset('yGrid', data=yy) actinData.close() divV[::10].shape fig, ax = plt.subplots() ax.plot(np.linspace(1,99,100), divV[::10]) ax.set_xlabel('time (s)') ax.set_ylabel(r'$\int \rho \langle \nabla \cdot \vec{v} \rangle$') fig.savefig(os.path.join(dataFolder, 'divergence.png')) fig, ax = plt.subplots() ax.pcolor(xx,yy,massWeight.T) ax.set_aspect('equal') fig, ax = plt.subplots() ax.quiver(xx, yy ,uut[-10,...], vvt[-10,...]) divergence = np.gradient(uut)[2] + np.gradient(vvt)[1] curl = np.gradient(uut)[2] * np.gradient(vvt)[1] - np.gradient(uut)[1] + np.gradient(vvt)[2] np.sum(divergence[200,...]) for frame, div in enumerate(divergence[::10,...]): fig, ax = plt.subplots() cax1 = ax.pcolor(xx,yy, div, cmap='cool', vmax=0.3, vmin=-0.3) ax.set_aspect('equal') ax.quiver(xx,yy, uut[frame,...], vvt[frame,...]) cbar1 = fig.colorbar(cax1, ax=ax) ax.set_xlabel('x ($\mu$m)') ax.set_ylabel('y ($\mu$m)') ax.set_title('Divergence Field, sum=%2f' % np.sum(div)) fig.tight_layout() fig.savefig(os.path.join(dataFolder, 'frame{0}.png'.format(frame))) fig, ax = plt.subplots() divergenceSeries = -np.nansum(np.nansum(divergence, axis=2), axis=1) plt.plot(divergenceSeries) #fig, ax = plt.subplots() #cax1 = ax.pcolor(xx, yy, divergence, cmap='inferno') #ax.set_aspect('equal') #ax.quiver(xx,yy, uu, vv) #cbar1 = fig.colorbar(cax1, ax=ax) #ax.set_xlabel('x ($\mu$m)') #ax.set_ylabel('y ($\mu$m)') #ax.set_title('Divergence Field, sum=%2f' % np.sum(divergence)) #fig.tight_layout() #fig, ax = plt.subplots() #cax2 = ax.pcolor(xx, yy, curl, cmap='cool') #ax.quiver(xx,yy,uu,vv) #ax.set_aspect('equal') #cbar2 = fig.colorbar(cax2, ax=ax) #ax.set_xlabel('x ($\mu$m)') #ax.set_ylabel('y ($\mu$m)') #ax.set_title('Curl Field, sum=%2f' % np.sum(curl)) #fig.tight_layout() Explanation: Now calculate the velocity divergence. This is essentially stolen from here End of explanation
8,262
Given the following text description, write Python code to implement the functionality described below step by step Description: ```python R = 40. H = 60 x_w = 0. y_w = 60. f = lambda x,y Step1: Monte-Carlo integration Step2: Cartessian Gausss-Legendre quadrature Step3: integrating before in y Step5: Radial Gauss-Leg
Python Code: R = 40. H = 60 x_w = 30. y_w = 90. Rw = 20 seed = 1 def f(x,y): np.random.seed(seed) return - 3.*np.exp( -((x-x_w)**2. + (y-y_w)**2.)/(Rw**2.) ) + \ - 3.*np.exp( -((x-30)**2. + (y-40)**2.)/(20**2.) ) + \ - 1.5*np.exp( -((x-np.random.uniform(-60,60))**2. + (y-np.random.uniform(40,70))**2.)/(60.**2.) ) #+ 4.*(y/H)**0.0 Explanation: ```python R = 40. H = 60 x_w = 0. y_w = 60. f = lambda x,y: - 4.np.exp( -((x-x_w)2. + (y-y_w)2.)/(30.2.) ) + \ - 4.np.exp( -((x+70.)2. + (y-60.)2.)/(60.2.) ) #+ 15.*(y/H)0.05 ``` End of explanation x_MC = np.random.uniform(-R,R,9e6) y_MC = np.random.uniform(-R,R,9e6) + H r = np.sqrt(x_MC**2. + (y_MC-H)**2.) ind_in = np.where(r <= R) u_eq_MC = np.mean(f(x_MC[ind_in],y_MC[ind_in])) u_eq_MC x_mesh,y_mesh = np.meshgrid(np.arange(-50.,100.,0.1),np.arange(0.,120.,0.1)) F = f(x_mesh,y_mesh) plt.contourf(x_mesh,y_mesh,F,20) cb = plt.colorbar() circ = plt.Circle([0.,H],R,ec='k',fill=False) ax = plt.gca() ax.add_patch(circ) plt.plot(x_MC[ind_in],y_MC[ind_in],'.k',alpha=0.005) x_plot = np.arange(-50.,50.1,0.1) Explanation: Monte-Carlo integration End of explanation N_x = 5 N_y = 5 root_x_1D, weight_x = np.polynomial.legendre.leggauss(N_x) root_y_1D, weight_y = np.polynomial.legendre.leggauss(N_y) root_x,root_y = np.meshgrid(root_x_1D,root_y_1D) root_x,root_y = root_x.flatten(), root_y.flatten() print root_x print root_y print print root_x_1D print root_y_1D print weight_x print weight_y y1 = lambda x: np.sqrt(R**2. - x**2.) + H y2 = lambda x: -np.sqrt(R**2. - x**2.) + H xi_1D = R*root_x_1D xi = R*root_x yij = 0.5*(y1(xi) - y2(xi))*root_y + 0.5*(y1(xi) + y2(xi)) fij = f(xi,yij) fij.reshape(N_x,N_y) np.dot(fij.reshape(N_x,N_y),weight_y) G_xi = 0.5*(y1(xi_1D) - y2(xi_1D)) * np.dot(fij.reshape(N_x,N_y),weight_y) I = np.dot(R*G_xi,weight_x)/(np.pi*R**2.) print I print 100*(I-u_eq_MC)/u_eq_MC, '%' plt.contourf(x_mesh,y_mesh,F,20) plt.colorbar() circ = plt.Circle([0.,H],R,ec='k',fill=False) ax = plt.gca() ax.add_patch(circ) x_plot = np.arange(-50.,50.2,0.1) plt.plot(x_plot,y1(x_plot)) plt.plot(x_plot,y2(x_plot)) plt.plot(xi,yij,'ok') Explanation: Cartessian Gausss-Legendre quadrature End of explanation N_x = 5 N_y = 5 root_x_1D, weight_x = np.polynomial.legendre.leggauss(N_x) root_y_1D, weight_y = np.polynomial.legendre.leggauss(N_y) root_x,root_y = np.meshgrid(root_x_1D,root_y_1D) root_x,root_y = root_x.flatten(), root_y.flatten() x1 = lambda y: np.sqrt(R**2. - (y-H)**2.) x2 = lambda y: -np.sqrt(R**2. - (y-H)**2.) yi_1D = R*root_y_1D+H yi = R*root_y+H xij = 0.5*(x1(yi) - x2(yi))*root_x + 0.5*(x1(yi) + x2(yi)) fij = f(xij,yi) fij.reshape(N_x,N_y) G_yi = 0.5*(x1(yi_1D) - x2(yi_1D)) * np.dot(weight_x,fij.reshape(N_x,N_y)) I = np.dot(R*G_yi,weight_y)/(np.pi*R**2.) print I print 100*(I-u_eq_MC)/u_eq_MC, '%' plt.contourf(x_mesh,y_mesh,F,20) plt.colorbar() circ = plt.Circle([0.,H],R,ec='k',fill=False) ax = plt.gca() ax.add_patch(circ) y_plot = np.arange(-50.,50.2,0.1)+H plt.plot(x1(y_plot),y_plot) plt.plot(x2(y_plot),y_plot) plt.plot(xij,yi,'ok') Explanation: integrating before in y End of explanation f_r = lambda r,th: f(-r*np.sin(th),r*np.cos(th)+H) def gaussN(R, func, varargin, NGr=4, NGth=4): Calculate numerically the gauss integration. [1] eq. 
38 Inputs ---------- R (float): Wind turbine radius [m] func (function): Wind speed function varargin: Other arguments for the function besides [r,te] NG (int): Number of Ga Outputs ---------- Ua (float): A = np.pi*R**2 #coefficients if (NGr==4)&(NGth==4): #for speed give the full values rt = np.array([[ -0.339981043584856, -0.861136311594053, 0.339981043584856, 0.861136311594053]]) te = rt.T w = np.array([[0.652145154862546, 0.347854845137454, 0.652145154862546, 0.347854845137454]]) wt=w else: rt,w = np.polynomial.legendre.leggauss(NGr) rt = np.array([rt]) #te = rt.T w = np.array([w]) te,wt = np.polynomial.legendre.leggauss(NGr) te = np.array([te]).T wt = np.array([wt]) return np.sum((np.pi/4.0)*(R**2./A)*w*wt.T*func(R*(rt+1.0)/2.0, np.pi*(te+1.0),*varargin)*(rt+1.0)) N_r = 4 N_th = 4 I = gaussN(R,f_r,[],N_r,N_th) print I print 100*(I-u_eq_MC)/u_eq_MC, '%' plt.contourf(x_mesh,y_mesh,F,20) plt.colorbar() circ = plt.Circle([0.,H],R,ec='k',fill=False) ax = plt.gca() ax.add_patch(circ) rt,w = np.polynomial.legendre.leggauss(N_r) rt = np.array([rt]) te,wt = np.polynomial.legendre.leggauss(N_th) te = np.array([te]) re = R*(rt+1.0)/2.0 te = np.pi*(te.T+1.0) plt.plot(-re*np.sin(te),re*np.cos(te)+H,'ok') Explanation: Radial Gauss-Leg End of explanation
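Because the radial rule in gaussN packs the polar Jacobian and the change of variables into its weights, it is easy to drop a factor of R/2 or pi. The sketch below is an independent sanity check written for illustration: it rebuilds a small polar Gauss-Legendre average from scratch and applies it to a test function whose disk average is known in closed form (R**2/2). Only R and H are taken from the notebook; the helper and the test function are assumptions made purely for this check.
```python
# Independent sanity check of a polar Gauss-Legendre disk average (illustrative only).
import numpy as np

R, H = 40.0, 60.0

def f_test(x, y):
    # Radially symmetric about the disk centre (0, H); its exact average over
    # the disk of radius R is R**2 / 2.
    return x**2 + (y - H)**2

def disk_average(func, R, H, n_r=8, n_th=8):
    r_nodes, r_w = np.polynomial.legendre.leggauss(n_r)
    t_nodes, t_w = np.polynomial.legendre.leggauss(n_th)
    r = 0.5 * R * (r_nodes + 1.0)            # map [-1, 1] -> [0, R]
    t = np.pi * (t_nodes + 1.0)              # map [-1, 1] -> [0, 2*pi]
    rr, tt = np.meshgrid(r, t)
    ww = np.outer(t_w, r_w) * rr             # polar Jacobian r
    integral = 0.5 * R * np.pi * np.sum(ww * func(rr * np.cos(tt), rr * np.sin(tt) + H))
    return integral / (np.pi * R**2)

print(disk_average(f_test, R, H))            # ~ 800.0 == R**2 / 2
```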
8,263
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: SVM Hyperparameter Tuning - Using GridSearchCV to Tune a SVC Model
Python Code:: from sklearn.svm import SVC from sklearn.metrics import classification_report from sklearn.model_selection import GridSearchCV # declare parameter ranges to try params = {'C':[1, 2, 3], 'kernel':['linear', 'poly', 'rbf']} # initialise estimator svm_classifier = SVC(class_weight='balanced') # initialise grid search model model = GridSearchCV(estimator=svm_classifier, param_grid=params, scoring='accuracy', n_jobs=-1) model.fit(X_train, y_train) y_pred = model.predict(X_test) print(model.best_params_) print(classification_report(y_test, y_pred))
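One natural follow-up to the grid search above is to look past best_params_ at the rest of the search results. The short sketch below assumes the fitted GridSearchCV object model from the snippet above is still in scope; the column selection is just one reasonable way to summarise cv_results_.
```python
# Inspecting the fitted grid search (assumes `model` from the snippet above).
import pandas as pd

print(model.best_score_)        # mean cross-validated accuracy of the winning combination
print(model.best_estimator_)    # SVC refit on the full training set with the best parameters

results = pd.DataFrame(model.cv_results_)   # one row per parameter combination
cols = ['param_C', 'param_kernel', 'mean_test_score', 'std_test_score', 'rank_test_score']
print(results[cols].sort_values('rank_test_score').head())
```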
8,264
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1 align="center">Theanets</h1> <h4 align="center">Korepanova Natalia</h4> <h5 align="center">Moscow, 2015</h5> The theanets is a deep learning and neural network toolkit. Written in Python to iteroperate with numpy and scikit-learn theano is used to accelerate computations when possible using GPU Simple API for building and training common types of neural network models Easy-to-read code Quite detailed user guide and documentation Under th hood, a fully expressive graph computation framework Installation The easiest way is by using pip Step1: 1. Creating a Model All network models theanets are instances of the base class Network , which maintain two pieces of information Step2: In general, layers argument must be a sequence of values each of which specifies the configuration of a single layer in the model Step3: Layer Attributes size Step4: If there is a string in the tuple that names a registered layer type (e.g., 'tied', 'rnn', etc.), then this type of layer will be created. If there is a string in the tuple and it does not name a registered layer type, the string is assumed to name an activation function—for example, 'logistic', 'relu+norm Step5: If a layer configuration value is a dictionary, its keyword arguments are passed directly to theanets.Layer.build() to construct a new layer instance. The dictionary must contain a size key. It can additionally contain any other keyword arguments that you wish to use when constructing the layer. Step6: sparsity Step7: 1.2 Specifying a Loss All of the predefined models in theanets are created by default with one loss function appropriate for that type of model. * Autoencoder Step8: Multiple Loss Let’s say that you want to optimize a model using both the mean absolute and the mean squared error. You could first create a regular regression model Step9: You can specify the relative weight of the two losses by manipulating the weight attribute of each loss instance. For instance, if you want the MAE loss to be twice as strong as the MSE loss Step10: Weighted Targets Include weighted=True when you create a model. Example Step11: The training and validation datasets require an additional component Step12: 1.3 Recurrent Models Time is an explicit part of the model Step13: 2. Training a Model 2.1 Specifying a Trainer The easiest way train a model with theanets is to invoke the train() method Step14: Here, a classifier model is being trained using Nesterov’s accelerated gradient, with a learning rate of 0.01 and momentum of 0.9. Multiple calls to train() are possible and can be used to implement things like custom annealing schedules (e.g., the “newbob” training strategy) Step15: Trainers In theanets the most of the trainers are provided by downhill package which provides algorithms for minimizing scalar loss functions that are defined using theano. * sgd Step16: 2.3 Specifying Regularizers Regularizers in theanets are specified during training, in calls to Network.train(), or during use, in calls to Network.predict(). Built-in Regularizers Decay Step17: Custom Regularizers To create a custom regularizer in theanets, you need to create a custom subclass of the theanets.Regularizer class, and then provide this regularizer when you run your model. Example Step18: 2.4 Training as Iteration Step19: 2.5 Saving Progress The theanets.Network base class can snapshot your model automatically during training. 
When you call theanets.Network.train(), you can provide the following keyword arguments Step20: Regardless of the model, you pass to predict() a numpy array containing data examples along the rows, and the method returns an array containing one row of output predictions for each row of input data. You can also compute the activations of all layer outputs in the network using the theanets.Network.feed_forward() method Step21: This method returns a dictionary that maps layer output names to their corresponding values for the given input. Like predict(), each output array contains one row for every row of input data. 3.2 Inspecting Parameters The parameters in each layer of the model are available using theanets.Network.find(). The first query term finds a layer in the network, and the second finds a parameter within that layer. The find() method returns a theano shared variable. To get a numpy array of the current values of the variable, call get_value() on the result from find(), like so Step22: Computation Graphs In theanets it is also possible to create network graphs that have arbitrary, acyclic connections among layers. Creating a nonlinear network graph requires using the inputs keyword argument when creating a layer. Step23: Examples Step24: MNIST Autoencoder (PCA) Many extremely common dimensionality reduction techniques can be expressed as autoencoders. For instance, Principal Component Analysis (PCA) can be expressed as a model with two tied, linear layers Step25: MNIST Classifier Step26: In this example, the weights in layer 1 connect the inputs to the first hidden layer; these weights have one column of 784 values for each hidden node in the network, so we can iterate over the transpose and put each column—properly reshaped—into a giant image.
Python Code: import theanets # 1. create a model -- here, a regression model. net = theanets.Regressor([10, 100, 2]) # optional: set up additional losses. net.add_loss('mae', weight=0.1) # 2. train the model. net.train( training_data, validation_data, algo='rmsprop', hidden_l1=0.01, # apply a regularizer. ) # 3. use the trained model. net.predict(test_data) Explanation: <h1 align="center">Theanets</h1> <h4 align="center">Korepanova Natalia</h4> <h5 align="center">Moscow, 2015</h5> The theanets is a deep learning and neural network toolkit. Written in Python to iteroperate with numpy and scikit-learn theano is used to accelerate computations when possible using GPU Simple API for building and training common types of neural network models Easy-to-read code Quite detailed user guide and documentation Under th hood, a fully expressive graph computation framework Installation The easiest way is by using pip: This command will install all the dependdeces of theanets including numpy and theano. You can also download package from https://github.com/lmjohns3/theanets and run the code from your local copy: Package overview Three basin steps in theanets: 1. Define the structure of the model * Classification (theanets.Classifier) * Regression (theanets.Regressor) * Parametric mapping to the input space (theanets.Autoencoder) * Recurrent modesl (theanets.reccurent module) 2. Train the model with respect to some task or cost function 3. Use the model for making predictions and/or visualizing the learned features The usual sceleton of the code: End of explanation net = theanets.Regressor(layers=[10, 20, 3]) Explanation: 1. Creating a Model All network models theanets are instances of the base class Network , which maintain two pieces of information: 1. a list of layers 2. a list of (possibly regularized) loss functions 1.1 Specifying Layers The easiest and the most common architecture to create is a network with a single “chain” of layers. Example: End of explanation net = theanets.Regressor([A, B, ..., Z]) Explanation: In general, layers argument must be a sequence of values each of which specifies the configuration of a single layer in the model: End of explanation net = theanets.Regressor([4, 5, 6, 2]) Explanation: Layer Attributes size: The number of “neurons” in the layer. form: A string specifying the type of layer to use. This defaults to “feedforward” name: A string name for the layer. The default names for the first and last layers - 'in' and 'out', the layers in between are assigned the name “hidN” where N is the number of existing layers. activation: A string describing the activation function to use for the layer. This defaults to 'relu' - $max(0,z)$. etc. Activation Functions Some possible values: * relu: $g(z) = \max(0,z)$ * linear: $g(z) = z$ * logistic, sigmoid: $g(z) = (1 + \exp(-z))^{-1}$ * tanh: $g(z) = \tanh(z)$ * softmax: $g(z) = \exp(z)/\sum_v(\exp(v))$ * norm:z: $g(z) = (z - \bar{z})/\mathbb{E}[(z - \bar{z})^2]$ * etc. Activation functions can also be composed by concatenating multiple function names togather using a +. Examples: End of explanation net = theanets.Regressor([4, (5, 'sigmoid'), (6, 'softmax')]) net = theanets.Regressor([10, (10, 'tanh+norm:z'), 10]) Explanation: If there is a string in the tuple that names a registered layer type (e.g., 'tied', 'rnn', etc.), then this type of layer will be created. 
If there is a string in the tuple and it does not name a registered layer type, the string is assumed to name an activation function—for example, 'logistic', 'relu+norm:z', and so on. End of explanation net = theanets.Regressor([4, dict(size=5, activation='tanh'), 2]) net = theanets.Regressor([4, dict(size=5, sparsity=0.9), 2]) Explanation: If a layer configuration value is a dictionary, its keyword arguments are passed directly to theanets.Layer.build() to construct a new layer instance. The dictionary must contain a size key. It can additionally contain any other keyword arguments that you wish to use when constructing the layer. End of explanation import theanets import theano.tensor as TT class NoBias(theanets.Layer): # Transform the inputs for this layer into an output for the layer. def transform(self, inputs): return TT.dot(inputs, self.find('w')) #Helper method to create a new weight matrix. def setup(self): self.add_weights('w', nin=self.input_size, nout=self.size) layer = theanets.Layer.build('nobias', size=4) net = theanets.Autoencoder(layers=[4, (3, 'nobias', 'linear'), (4, 'tied', 'linear')]) Explanation: sparsity: A float giving the proportion of parameter values in the layer that should be initialized to zero. Nonzero values in the parameters will be drawn from a Gaussian distribution and then an appropriate number of these parameter values will randomly be reset to zero to make the parameter “sparse.” Custom Layers To create a custom layer, just create a subclass of theanets.Layer and give it the functionality you want. An example of a normal feedforward layer but without a bias term: End of explanation net = theanets.Regressor([4, 5, 2], loss='mae') Explanation: 1.2 Specifying a Loss All of the predefined models in theanets are created by default with one loss function appropriate for that type of model. * Autoencoder: MSE between network's output and input $$\mathcal{L}(X,\theta) = \frac{1}{mn}\sum_{i=1}^m\|F_\theta(x_i)-x_i\|2^2, \quad X \in \mathbb{R}^{m\times n}$$ * Regressor: MSE between true and predicted values of the target $$\mathcal{L}(X, Y,\theta) = \frac{1}{mn}\sum{i=1}^m\|F_\theta(x_i)-y_i\|2^2, \quad X \in \mathbb{R}^{m\times n}, Y \in \mathbb{R}^{m\times o}$$ * Classifier: Cross-entropy between the network output and the true target labels $$\mathcal{L}(X, Y,\theta) = \frac{1}{m}\sum{i=1}^m \sum_{j=1}^{k}\delta_{j, y_i}\log F_\theta(x_i)_j, \quad X \in \mathbb{R}^{m\times n}, Y \in {1,...,k}^m$$ For example, to use a mean-absolute error instead of the default mean-squared error for a regression model: End of explanation net = theanets.Regressor([10, 20, 3]) net.add_loss('mae', weight=0.1) Explanation: Multiple Loss Let’s say that you want to optimize a model using both the mean absolute and the mean squared error. You could first create a regular regression model: End of explanation net.losses[1].weight = 2 Explanation: You can specify the relative weight of the two losses by manipulating the weight attribute of each loss instance. For instance, if you want the MAE loss to be twice as strong as the MSE loss: End of explanation net = theanets.recurrent.Autoencoder([3, (10, 'rnn'), 3], weighted=True) Explanation: Weighted Targets Include weighted=True when you create a model. 
Example: End of explanation class Step(theanets.Loss): def __call__(self, outputs): step = outputs[self.output_name] > 0 if self._weights: return (self._weights * step).sum() / self._weights.sum() else: return step.mean() net = theanets.Regressor([5, 6, 7], loss='step', weighted=True) Explanation: The training and validation datasets require an additional component: an array of floating-point values with the same shape as the expected output of the model, so that the training and validation datasets would each have three pieces: sample, label, and weight. Each value in the weight array is used as the weight for the corresponding error when computing the loss. Custom Losses Create a new theanets.Loss subclass and specify its name when you create your model. For example, to create a regression model that uses a step function averaged over all of the model inputs with weighted outputs: End of explanation class Autoencoder(theanets.Network): def __init__(self, layers=(), loss='mse', weighted=False): super(Autoencoder, self).__init__( layers=layers, loss=loss, weighted=weighted) Explanation: 1.3 Recurrent Models Time is an explicit part of the model: * all the data shapes are one dimension larger than the corresponding shapes for a feedforward network * the extra dimension represents time * the extra dimension is located on: * the first (0) axis in theanets versions through 0.6 * the second (1) axis in theanets versions 0.7 and up. Recurrent versions of the three types of models: * theanets.recurrent.Autoencoder: takes as input $X \in \mathbb{R}^{m\times t \times n}$ and recreates the same data at the output under squared-error loss * theanets.recurrent.Regressor: input data $X \in \mathbb{R}^{m\times t \times n}$ and output data $Y \in \mathbb{R}^{m\times t \times o}$, fit the output under squared-error loss * theanets.recurrent.Classifier: input data $X \in \mathbb{R}^{m\times t \times n}$ and set of integer labels $Y \in \mathbb{Z}^{m \times t}$, the default error is cross-enthropy 1.4 Custom Models To create a custom model, just define a new subclass of theanets.Network. For instance, the feedforward autoencoder model is defined basically like this: End of explanation net = theanets.Classifier(layers=[10, 5, 2]) net.train(training_data, validation_data, algo='nag', learning_rate=0.01, momentum=0.9) Explanation: 2. Training a Model 2.1 Specifying a Trainer The easiest way train a model with theanets is to invoke the train() method: End of explanation net = theanets.Classifier(layers=[10, 5, 2]) for e in (-2, -3, -4): net.train(training_data, validation_data, algo='nag', learning_rate=10 ** e, momentum=1 - 10 ** (e + 1)) Explanation: Here, a classifier model is being trained using Nesterov’s accelerated gradient, with a learning rate of 0.01 and momentum of 0.9. Multiple calls to train() are possible and can be used to implement things like custom annealing schedules (e.g., the “newbob” training strategy): End of explanation SOURCES = 'foo.npy', 'bar.npy', 'baz.npy' BATCH_SIZE = 64 def batch(): X = np.load(np.random.choice(SOURCES), mmap_mode='r') i = np.random.randint(len(X)) return X[i:i+BATCH_SIZE] net = theanets.Regressor(layers=[10, 5, 2]) net.train(train=batch, ...) Explanation: Trainers In theanets the most of the trainers are provided by downhill package which provides algorithms for minimizing scalar loss functions that are defined using theano. 
* sgd: Stochastic gradient descent * nag: Nesterov’s accelerated gradient * rprop: Resilient backpropagation * rmsprop: RMSProp * adadelta: ADADELTA * esgd: Equilibrated SGD * adam: Adam Also theanets defines a few algorithms which are more specific to neural networks: * sampler: This trainer sets model parameters directly to samples drawn from the training data. This is a very fast “training” algorithm since all updates take place at once; however, often features derived directly from the training data require further tuning to perform well. * layerwise: Greedy supervised layerwise pre-training: This trainer applies RMSProp to each layer sequentially. * pretrain: Greedy unsupervised layerwise pre-training: This trainer applies RMSProp to a tied-weights “shadow” autoencoder using an unlabeled dataset, and then transfers the learned autoencoder weights to the model being trained. 2.2 Providing Data There are two ways of passing data: using numpy arrays and callables. Instead of an array of data, you can provide a callable for a Dataset. This callable must take no arguments and must return a list of numpy arrays of the proper shape for your loss. For example, this code defines a batch() helper that could be used for a loss that needs one input. The callable chooses a random dataset and a random offset for each batch: End of explanation net.train(..., weight_l2=1e-4) # Decay net.train(..., weight_l1=1e-4) # Sparcity net.train(..., hidden_l1=0.1) # Hidden representations net.train(..., input_noise=0.1) # Zero-mean Gaussian noise with std=0.1 added to the input net.train(..., hidden_noise=0.1) # Zero-mean Gaussian noise with std=0.1 added to the hidden layers net.train(..., input_dropout=0.3) # Binary noise with 0.3 probability of being set to zero added to the inputs net.train(..., hidden_dropout=0.3) # Binary noise with 0.3 probability of being set to zero added # to the hidden paraeters Explanation: 2.3 Specifying Regularizers Regularizers in theanets are specified during training, in calls to Network.train(), or during use, in calls to Network.predict(). Built-in Regularizers Decay: $$\mathcal{L}(\cdot)=...+\lambda\|\theta\|_2^2$$ Sparcity: $$\mathcal{L}(\cdot)=...+\lambda\|\theta\|_1$$ Hidden representations: $$\mathcal{L}(\cdot)=...+\lambda \sum_{i=2}^{N-1}\|f_i(x)\|_1$$ where $f_i(x)$ - activation function of i-th hidden layer Zero-mean Gaussian noise is added to input data or hidden layers Multiplicative dropout (binary) noise set some inputs or hidden layers parameters to zero End of explanation class WeightInverse(theanets.Regularizer): def loss(self, layers, outputs): return sum((1 / (p * p).sum(axis=0)).sum() for l in layers for p in l.params if p.ndim == 2) net = theanets.Autoencoder([4, (8, 'linear'), (4, 'tied')]) net.train(..., weightinverse=0.001) Explanation: Custom Regularizers To create a custom regularizer in theanets, you need to create a custom subclass of the theanets.Regularizer class, and then provide this regularizer when you run your model. Example: End of explanation for train, valid in net.itertrain(train_data, valid_data, **kwargs): print('training loss:', train['loss']) print('most recent validation loss:', valid['loss']) Explanation: 2.4 Training as Iteration End of explanation results = net.predict(new_dataset) Explanation: 2.5 Saving Progress The theanets.Network base class can snapshot your model automatically during training. 
When you call theanets.Network.train(), you can provide the following keyword arguments: save_progress: a string containing a filename where the model should be saved include an empty format string {} to format filename with the UTC Unix timestamp at the moment the model is saved. save_every: a numeric value specifying how often the model should be saved during training integer value: the number of training iterations between checkpoints float value: the number of minutes that are allowed to elapse between checkpoints. Mannualy: * theanets.Network.save() * theanets.Network.load() 3. Using a Model 3.1 Predicting New Data End of explanation for name, value in net.feed_forward(new_dataset).items(): print(abs(value).sum(axis=1)) Explanation: Regardless of the model, you pass to predict() a numpy array containing data examples along the rows, and the method returns an array containing one row of output predictions for each row of input data. You can also compute the activations of all layer outputs in the network using the theanets.Network.feed_forward() method: End of explanation param = net.find('hid1', 'w') values = param.get_value() Explanation: This method returns a dictionary that maps layer output names to their corresponding values for the given input. Like predict(), each output array contains one row for every row of input data. 3.2 Inspecting Parameters The parameters in each layer of the model are available using theanets.Network.find(). The first query term finds a layer in the network, and the second finds a parameter within that layer. The find() method returns a theano shared variable. To get a numpy array of the current values of the variable, call get_value() on the result from find(), like so: End of explanation theanets.Classifier(( 784, dict(size=100, name='a'), dict(size=100, name='b'), dict(size=100, name='c'), dict(size=10, inputs=('a', 'b', 'c')), )) Explanation: Computation Graphs In theanets it is also possible to create network graphs that have arbitrary, acyclic connections among layers. Creating a nonlinear network graph requires using the inputs keyword argument when creating a layer. End of explanation import theanets import numpy as np from matplotlib import pyplot as plt %matplotlib inline mnist = np.loadtxt("/home/natalia/ML/mnist_train.csv", delimiter=",", skiprows=1) train_X = (mnist[:30000, 1:]/255.).astype('f') train_y = mnist[:30000, 0].astype('i') valid_X = (mnist[30000:, 1:]/255.).astype('f') valid_y = mnist[30000:, 0].astype('i') print(train_X.shape, train_y.shape, valid_X.shape, valid_y.shape) Explanation: Examples End of explanation pca = theanets.Autoencoder([784, (10, 'linear'), (784, 'tied')]) pca.train(train_X, valid_X) from utils import plot_images # from https://github.com/lmjohns3/theanets/tree/master/examples v = valid_X[:100,:] plt.figure(figsize = (10,10)) plot_images(v, 121, 'Sample data') plt.tight_layout() plot_images(pca.predict(v), 122, 'Reconstructed data') plt.tight_layout() plt.show() Explanation: MNIST Autoencoder (PCA) Many extremely common dimensionality reduction techniques can be expressed as autoencoders. 
For instance, Principal Component Analysis (PCA) can be expressed as a model with two tied, linear layers: End of explanation net = theanets.Classifier(layers=[784, 100, 10]) train = [train_X, train_y] valid = [valid_X, valid_y] net.train(train, valid, algo='nag', learning_rate=1e-3, momentum=0.9) Explanation: MNIST Classifier End of explanation img = np.zeros((28 * 10, 28 * 10), dtype='f') plt.figure(figsize = (8,8)) for i, pix in enumerate(net.find('hid1', 'w').get_value().T): r, c = divmod(i, 10) img[r * 28:(r+1) * 28, c * 28:(c+1) * 28] = pix.reshape((28, 28)) plt.imshow(img, cmap=plt.cm.gray) plt.show() net.train(train, valid, algo='nag', learning_rate=1e-3, momentum=0.9, weight_l1=1e-4) for train, valid in net.itertrain([train_X, train_y], [valid_X, valid_y], algo='sgd', learning_rate=1e-2, momentum=0.9, min_improvement=0.1, patience=1): print('training loss:', train['loss']) print('most recent validation loss:', valid['loss']) Explanation: In this example, the weights in layer 1 connect the inputs to the first hidden layer; these weights have one column of 784 values for each hidden node in the network, so we can iterate over the transpose and put each column—properly reshaped—into a giant image. End of explanation
8,265
Given the following text description, write Python code to implement the functionality described below step by step Description: Fetch the data from NewsroomDB NewsroomDB is the Tribune's proprietary database for tracking data that needs to be manually entered and validated rather than something that can be ingested from an official source. It's mostly used to track shooting victims and homicides. As far as I know, CPD doesn't provide granular data on shooting victims and the definition of homicide can be tricky (and vary from source to source). We'll grab shooting victims from the shootings collection. Step1: Filter to only year-to-date shooting victims Step2: Group shooting victims by year Step3: Count the victims by year Step4: Spot-check our numbers
Python Code: import os import requests # A big object to hold all our data between steps data = {} def get_table_url(table_name, base_url=os.environ['NEWSROOMDB_URL']): return '{}table/json/{}'.format(os.environ['NEWSROOMDB_URL'], table_name) def get_table_data(table_name): url = get_table_url(table_name) try: r = requests.get(url) return r.json() except: print("Request failed. Probably because the response is huge. We should fix this.") return get_table_data(table_name) data['shooting_victims'] = get_table_data('shootings') print("Loaded {} shooting victims".format(len(data['shooting_victims']))) Explanation: Fetch the data from NewsroomDB NewsroomDB is the Tribune's proprietary database for tracking data that needs to be manually entered and validated rather than something that can be ingested from an official source. It's mostly used to track shooting victims and homicides. As far as I know, CPD doesn't provide granular data on shooting victims and the definition of homicide can be tricky (and vary from source to source). We'll grab shooting victims from the shootings collection. End of explanation from datetime import date, datetime def get_shooting_date(shooting_victim): return datetime.strptime(shooting_victim['Date'], '%Y-%m-%d') def shooting_is_ytd(shooting_victim, today): try: shooting_date = get_shooting_date(shooting_victim) except ValueError: if shooting_victim['RD Number']: msg = "Could not parse date for shooting victim with RD Number {}".format( shooting_victim['RD Number']) else: msg = "Could not parse date for shooting victim with record ID {}".format( shooting_victim['_id']) print(msg) return False return (shooting_date.month <= today.month and shooting_date.day <= today.day) today = date(2016, 3, 30) #today = date.today() # Use a list comprehension to filter the shooting victims to ones that # occured on or before today's month and day. 
# Also sort by date because it makes it easier to group by year data['shooting_victims_ytd'] = sorted([sv for sv in data['shooting_victims'] if shooting_is_ytd(sv, today)], key=get_shooting_date) Explanation: Filter to only year-to-date shooting victims End of explanation import itertools def get_shooting_year(shooting_victim): shooting_date = get_shooting_date(shooting_victim) return shooting_date.year data['shooting_victims_ytd_by_year'] = [] for year, grp in itertools.groupby(data['shooting_victims_ytd'], key=get_shooting_year): data['shooting_victims_ytd_by_year'].append((year, list(grp))) Explanation: Group shooting victims by year End of explanation data['shooting_victims_ytd_by_year_totals'] = [(year, len(shooting_victims)) for year, shooting_victims in data['shooting_victims_ytd_by_year']] import csv import sys writer = csv.writer(sys.stdout) writer.writerow(['year', 'num_shooting_victims']) for year, num_shooting_victims in data['shooting_victims_ytd_by_year_totals']: writer.writerow([year, num_shooting_victims]) Explanation: Count the victims by year End of explanation shooting_victims_2016 = next(shooting_victims for year, shooting_victims in data['shooting_victims_ytd_by_year'] if year == 2016) num_shooting_victims_2016 = next(num_shooting_victims for year, num_shooting_victims in data['shooting_victims_ytd_by_year_totals'] if year == 2016) today = date.today() num_shootings = 0 for shooting_victim in shooting_victims_2016: num_shootings += 1 shooting_date = get_shooting_date(shooting_victim) assert shooting_date.year == 2016 assert shooting_date.month <= today.month assert shooting_date.day <= today.day assert num_shootings == num_shooting_victims_2016 Explanation: Spot-check our numbers End of explanation
8,266
Given the following text description, write Python code to implement the functionality described below step by step Description: INDIA, G-20 AND THE WORLD - Statistical Year Book India 2016 Navigation Path Step1: Data Cleanup Step2: Experiments Step3: Ideas Show top 5 countries Show only comparable countries Step4: IRIS Dataset Step5: Random Forest Step6: SVM
Python Code: %%sh # ls -l ~/Downloads/G20*csv # mv ~/Downloads/G20*csv G20.csv Explanation: INDIA, G-20 AND THE WORLD - Statistical Year Book India 2016 Navigation Path: Home > Statistical Year Book India 2016 > INDIA, G-20 AND THE WORLD The G20 (or G-20 or Group of Twenty) is an international forum for the governments and central bank governors from 20 major economies. It was founded in 1999 with the aim of studying, reviewing, and promoting high-level discussion of policy issues pertaining to the promotion of international financial stability.[3] It seeks to address issues that go beyond the responsibilities of any one organization.[3] The G20 heads of government or heads of state have periodically conferred at summits since their initial meeting in 2008, and the group also hosts separate meetings of finance ministers and central bank governors. The members include 19 individual countries and along with the European Union (EU). The EU is represented by the European Commission and by the European Central Bank. Collectively, the G20 economies account for around 85% of the gross world product (GWP), 80% of world trade (or, if excluding EU intra-trade, 75%), and two-thirds of the world population.[2] Data Source: * http://mospi.nic.in/statistical-year-book-india/2016/170 References: * Wikipedia G20 Data Gathering wget http://mospi.nic.in/statistical-year-book-india/2016/170 Country Area Population (Millions) GDP Billions (USD) Gross Domestic Product Per Capita Income at Current Price (USD) Gross domestic product based on Purchasing-Power-Parity (PPP) valuation of Country GDP in Billions ( Current International Dollar) wget https://docs.google.com/a/imaginea.com/spreadsheets/d/1jbwyZsHy_SsJ-ANWlNVgMKOl5PkoMMcqkMiMJRXDXms/edit?usp=sharing End of explanation data = pd.read_csv('G20.csv') cols = ['Area', 'Population_2010', 'Population_2011', 'Population_2012', 'Population_2013', 'Population_2014', 'Population_2015', 'GDP_2010', 'GDP_2011', 'GDP_2012', 'GDP_2013', 'GDP_2014', 'GDP_2015', 'GDP_PCI_2010', 'GDP_PCI_2011', 'GDP_PCI_2012', 'GDP_PCI_2013', 'GDP_PCI_2014', 'GDP_PCI_2015', 'GDP_PPP_2010', 'GDP_PPP_2011', 'GDP_PPP_2012', 'GDP_PPP_2013', 'GDP_PPP_2014', 'GDP_PPP_2015'] data[cols] = data[cols].applymap(lambda x: float(str(x).replace(',', ''))) all_countries = sorted(data.Country.unique()) country_labler = all_countries.index # country_labler('India') # data.Country = data.Country.map(country_labler) sorted(data.columns.tolist()) cols1 = ['GDP_2010', 'GDP_2011', 'GDP_2012', 'GDP_2013', 'GDP_2014', 'GDP_2015',] cols2 = [ 'GDP_PPP_2010', 'GDP_PPP_2011', 'GDP_PPP_2012', 'GDP_PPP_2013', 'GDP_PPP_2014', 'GDP_PPP_2015'] cols3 = [] data1 = data[['Area', 'Country', 'GDP_2010', 'GDP_2011', 'GDP_2012', 'GDP_2013', 'GDP_2014', 'GDP_2015',]].copy() data2 = data[['Area', 'Country', 'GDP_PPP_2010', 'GDP_PPP_2011', 'GDP_PPP_2012', 'GDP_PPP_2013', 'GDP_PPP_2014', 'GDP_PPP_2015',]].copy() data3 = data[['Area', 'Country', 'GDP_PCI_2010', 'GDP_PCI_2011', 'GDP_PCI_2012', 'GDP_PCI_2013', 'GDP_PCI_2014', 'GDP_PCI_2015',]].copy() data4 = data[['Area', 'Country', 'Population_2010', 'Population_2011', 'Population_2012', 'Population_2013', 'Population_2014', 'Population_2015']].copy() Explanation: Data Cleanup End of explanation import sklearn.cluster clf = sklearn.cluster.AgglomerativeClustering(5) pred = clf.fit_predict(data1['GDP_2010 GDP_2011 GDP_2012 GDP_2013 GDP_2014 GDP_2015'.split()]) pred new_data.metric.unique() new_data.head(20).copy(deep=True) # segregating year & param new_data['year'] = 
new_data.metric.map(lambda x: int(x.rsplit('_')[-1])) new_data['param'] = new_data.metric.map(lambda x: ''.join(x.rsplit('_')[:-1])) # drop metric column new_data.drop('metric', axis=1, inplace=True) # converting data into integers # Key values to check how the world print('Country', new_data.country.unique()) print('Country', new_data.param.unique()) temp = new_data[(new_data.country == 'USA') & (new_data.param == 'GDP')].copy(deep=True) temp X_Label = 'USA' Y_Label = 'GDP' plt.figure(figsize=(15, 5)) temp = new_data[(new_data.country == X_Label) & (new_data.param == Y_Label)].copy(deep=True) _x, _y = temp.year.values, temp.value.values plt.plot(_x, _y) plt.xticks(_x, map(str, _x)) X_Label = 'European Union' Y_Label = 'GDP' plt.figure(figsize=(15, 5)) temp = new_data[(new_data.country == X_Label) & (new_data.param == Y_Label)].copy(deep=True) _x, _y = temp.year.values, temp.value.values plt.plot(_x, _y) plt.xticks(_x, map(str, _x)) X_Label = 'USA' Y_Label = 'GDP' plt.figure(figsize=(15, 5)) temp = new_data[(new_data.country == X_Label) & (new_data.param == Y_Label)].copy(deep=True) _x, _y = temp.year.values, temp.value.values plt.plot(_x, _y) plt.xticks(_x, map(str, _x)) _y _y - _y.min() Y_Label = 'Population' plt.figure(figsize=(15, 8)) all_countries = new_data.country.unique()[:5] for X_Label in all_countries: temp = new_data[(new_data.country == X_Label) & (new_data.param == Y_Label)].copy(deep=True) _x, _y = temp.year.values, temp.value.values _y = _y - _y.min() plt.plot(_x, _y) plt.xticks(_x, map(str, _x)) plt.legend(all_countries) Explanation: Experiments End of explanation country_codes = {'Argentina': 'ARG', 'Australia': 'AUS', 'Brazil': 'BRA', 'Canada': 'CAN', 'China': 'CHN', 'European Union': 'USA', 'France': 'FRA', 'Germany': 'DEU', 'India': 'IND', 'Indonesia': 'IDN', 'Italy': 'ITA', 'Japan': 'JPN', 'Mexico': 'MEX', 'Republic of Korea': 'USA', 'Russia': 'RUS', 'Saudi Arabia': 'SAU', 'South Africa': 'ZAF', 'Turkey': 'TUR', 'USA': 'USA', 'United Kingdom': 'GBR'} chart_colors = ["rgb(0,0,0)", "rgb(255,255,255)", "rgb(255,0,0)", "rgb(0,255,0)", "rgb(0,0,255)", "rgb(255,255,0)", "rgb(0,255,255)", "rgb(255,0,255)", "rgb(192,192,192)", "rgb(128,128,128)", "rgb(128,0,0)", "rgb(128,128,0)", "rgb(0,128,0)", "rgb(128,0,128)", "rgb(0,128,128)", "rgb(0,0,128)",] chart_colors += chart_colors chart_colors = chart_colors[:len(country_codes)] data1['Country_Codes'] = data1['Country'].map(lambda x: country_codes[x]) import sklearn.cluster clf = sklearn.cluster.AgglomerativeClustering(5) pred = clf.fit_predict(data1['GDP_2010 GDP_2011 GDP_2012 GDP_2013 GDP_2014 GDP_2015'.split()]) pred data1['cluster'] = pred data1['text'] = 'Cluster ID' + data1.cluser data1.head() import plotly.plotly as py import pandas as pd # df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/2014_world_gdp_with_codes.csv') data = [ dict( type = 'choropleth', locations = data1['Country_Codes'], z = data1['cluser'], text = data1['Country_Codes'], # colorscale = [[0,"rgb(5, 10, 172)"],[0.35,"rgb(40, 60, 190)"],[0.5,"rgb(70, 100, 245)"],\ # [0.6,"rgb(90, 120, 245)"],[0.7,"rgb(106, 137, 247)"],[1,"rgb(220, 220, 220)"]], # autocolorscale = True, # reversescale = True, # marker = dict( # line = dict ( # color = 'rgb(180,180,180)', # width = 0.5 # ) ), colorbar = dict( autotick = False, tickprefix = '$', title = 'GDP<br>Billions US$'), ) ] layout = dict( title = 'G-20"s GDP', geo = dict( showframe = False, showcoastlines = False, projection = dict( type = 'Mercator' ) ) ) fig = dict(data=data, layout=layout) 
# py.iplot( fig, validate=False, filename='d3-world-map' ) plot( fig, validate=False, filename='d3-world-map') fig = { 'data': [ { 'x': df2007.gdpPercap, 'y': df2007.lifeExp, 'text': df2007.country, 'mode': 'markers', 'name': '2007'}, { 'x': df1952.gdpPercap, 'y': df1952.lifeExp, 'text': df1952.country, 'mode': 'markers', 'name': '1952'} ], 'layout': { 'xaxis': {'title': 'GDP per Capita', 'type': 'log'}, 'yaxis': {'title': "Life Expectancy"} } } data = [] year = 'GDP_2015' data.append({ 'x': data1[year], 'y': data1['cluster'], 'mode': 'markers', 'text': data1['Country'], 'name': year, 'colors': chart_colors }) fig = dict(data=data, layout=layout) # py.iplot( fig, validate=False, filename='d3-world-map' ) plot( fig, validate=False, filename='d3-world-map') Explanation: Ideas Show top 5 countries Show only comparable countries End of explanation from sklearn import datasets # import some data to play with iris = datasets.load_iris() X = iris.data # [:, :2] # we only take the first two features. Y = iris.target X[:5] from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.25, random_state=0) print(X_train.shape, y_train.shape, X_test.shape, y_test.shape) from sklearn.metrics import accuracy_score Explanation: IRIS Dataset End of explanation from sklearn.ensemble import RandomForestClassifier clf = RandomForestClassifier() clf = clf.fit(X_train, y_train) accuracy_score(clf.predict(X_train), y_train) accuracy_score(clf.predict(X_test), y_test) accuracy_score(clf.predict(X), Y) Explanation: Random Forest End of explanation from sklearn import svm clf = svm.SVC(kernel='linear', C=2) clf = clf.fit(X_train, y_train) accuracy_score(clf.predict(X_train), y_train) accuracy_score(clf.predict(X_test), y_test) accuracy_score(clf.predict(X), Y) Explanation: SVM End of explanation
8,267
Given the following text description, write Python code to implement the functionality described below step by step Description: Retrain a CNN, part 2.2, using bottleneck features https Step1: This script goes along the blog post "Building powerful image classification models using very little data" from blog.keras.io. It uses data that can be downloaded at Step2: Next step is to use those saved bottleneck feature activations and train our own, very simple fc layer
Python Code: import warnings warnings.filterwarnings('ignore') %matplotlib inline %pylab inline import matplotlib.pylab as plt import numpy as np from distutils.version import StrictVersion import sklearn print(sklearn.__version__) assert StrictVersion(sklearn.__version__ ) >= StrictVersion('0.18.1') import tensorflow as tf tf.logging.set_verbosity(tf.logging.ERROR) print(tf.__version__) assert StrictVersion(tf.__version__) >= StrictVersion('1.1.0') import keras print(keras.__version__) assert StrictVersion(keras.__version__) >= StrictVersion('2.0.0') Explanation: Retrain a CNN, part 2.2, using bottleneck features https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html End of explanation !ls -lh data import numpy as np from keras.preprocessing.image import ImageDataGenerator from keras.models import Sequential from keras.layers import Dropout, Flatten, Dense from keras import applications # dimensions of our images. img_width, img_height = 150, 150 train_data_dir = 'data/train' validation_data_dir = 'data/validation' nb_train_samples = 2000 nb_validation_samples = 800 epochs = 50 batch_size = 16 # build the VGG16 network model = applications.VGG16(include_top=False, weights='imagenet') Explanation: This script goes along the blog post "Building powerful image classification models using very little data" from blog.keras.io. It uses data that can be downloaded at: https://www.kaggle.com/c/dogs-vs-cats/data In our setup, we: - created a data/ folder - created train/ and validation/ subfolders inside data/ - created cats/ and dogs/ subfolders inside train/ and validation/ - put the cat pictures index 0-999 in data/train/cats - put the cat pictures index 1000-1400 in data/validation/cats - put the dogs pictures index 12500-13499 in data/train/dogs - put the dog pictures index 13500-13900 in data/validation/dogs So that we have 1000 training examples for each class, and 400 validation examples for each class. In summary, this is our directory structure: data/ train/ dogs/ dog001.jpg dog002.jpg ... cats/ cat001.jpg cat002.jpg ... validation/ dogs/ dog001.jpg dog002.jpg ... cats/ cat001.jpg cat002.jpg ... End of explanation train_data = np.load(open('bottleneck_features_train.npy', 'rb')) train_data.shape[1:] # first half of data is dog (0), second half is cat (1) train_labels = np.array( [0] * (nb_train_samples // 2) + [1] * (nb_train_samples // 2)) # same for validation validation_data = np.load(open('bottleneck_features_validation.npy', 'rb')) validation_labels = np.array( [0] * (nb_validation_samples // 2) + [1] * (nb_validation_samples // 2)) model = Sequential() model.add(Flatten(input_shape=train_data.shape[1:])) model.add(Dense(256, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(1, activation='sigmoid')) model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy']) model.summary() model.fit(train_data, train_labels, epochs=epochs, batch_size=batch_size, validation_data=(validation_data, validation_labels)) top_model_weights_path = 'bottleneck_fc_model.h5' model.save_weights(top_model_weights_path) Explanation: Next step is to use those saved bottleneck feature activations and train our own, very simple fc layer End of explanation
8,268
Given the following text description, write Python code to implement the functionality described below step by step Description: Bins Mark This Mark is essentially the same as the Hist Mark from a user point of view, but is actually a Bars instance that bins sample data. The difference with Hist is that the binning is done in the backend, so it will work better for large data as it does not have to ship the whole data back and forth to the frontend. Step1: Give the Hist mark the data you want to perform as the sample argument, and also give 'x' and 'y' scales. Step2: The midpoints of the resulting bins and their number of elements can be recovered via the read-only traits x and y Step3: Tuning the bins Under the hood, the Bins mark is really a Bars mark, with some additional magic to control the binning. The data in sample is binned into equal-width bins. The parameters controlling the binning are the following traits Step4: Histogram Styling The styling of Hist is identical to the one of Bars
Python Code: # Create a sample of Gaussian draws np.random.seed(0) x_data = np.random.randn(1000) Explanation: Bins Mark This Mark is essentially the same as the Hist Mark from a user point of view, but is actually a Bars instance that bins sample data. The difference with Hist is that the binning is done in the backend, so it will work better for large data as it does not have to ship the whole data back and forth to the frontend. End of explanation x_sc = LinearScale() y_sc = LinearScale() hist = Bins(sample=x_data, scales={'x': x_sc, 'y': y_sc}, padding=0,) ax_x = Axis(scale=x_sc, tick_format='0.2f') ax_y = Axis(scale=y_sc, orientation='vertical') Figure(marks=[hist], axes=[ax_x, ax_y], padding_y=0) Explanation: Give the Hist mark the data you want to perform as the sample argument, and also give 'x' and 'y' scales. End of explanation hist.x, hist.y Explanation: The midpoints of the resulting bins and their number of elements can be recovered via the read-only traits x and y: End of explanation x_sc = LinearScale() y_sc = LinearScale() hist = Bins(sample=x_data, scales={'x': x_sc, 'y': y_sc}, padding=0,) ax_x = Axis(scale=x_sc, tick_format='0.2f') ax_y = Axis(scale=y_sc, orientation='vertical') Figure(marks=[hist], axes=[ax_x, ax_y], padding_y=0) # Changing the number of bins hist.bins = 'sqrt' # Changing the range hist.min = 0 Explanation: Tuning the bins Under the hood, the Bins mark is really a Bars mark, with some additional magic to control the binning. The data in sample is binned into equal-width bins. The parameters controlling the binning are the following traits: bins sets the number of bins. It is either a fixed integer (10 by default), or the name of a method to determine the number of bins in a smart way ('auto', 'fd', 'doane', 'scott', 'rice', 'sturges' or 'sqrt'). min and max set the range of the data (sample) to be binned density, if set to True, normalizes the heights of the bars. For more information, see the documentation of numpy's histogram End of explanation # Normalizing the count x_sc = LinearScale() y_sc = LinearScale() hist = Bins(sample=x_data, scales={'x': x_sc, 'y': y_sc}, density=True) ax_x = Axis(scale=x_sc, tick_format='0.2f') ax_y = Axis(scale=y_sc, orientation='vertical') Figure(marks=[hist], axes=[ax_x, ax_y], padding_y=0) # changing the color hist.colors=['orangered'] # stroke and opacity update hist.stroke = 'orange' hist.opacities = [0.5] * len(hist.x) # Laying the histogram on its side hist.orientation = 'horizontal' ax_x.orientation = 'vertical' ax_y.orientation = 'horizontal' Explanation: Histogram Styling The styling of Hist is identical to the one of Bars End of explanation
8,269
Given the following text description, write Python code to implement the functionality described below step by step Description: The Alien Blaster problem This notebook presents solutions to exercises in Think Bayes. Copyright 2016 Allen B. Downey MIT License Step1: Part One In preparation for an alien invasion, the Earth Defense League has been working on new missiles to shoot down space invaders. Of course, some missile designs are better than others; let's assume that each design has some probability of hitting an alien ship, $x$. Based on previous tests, the distribution of $x$ in the population of designs is well-modeled by a beta distribution with parameters $\alpha=2$ and $\beta=3$. What is the average missile's probability of shooting down an alien? Step2: In its first test, the new Alien Blaster 9000 takes 10 shots and hits 2 targets. Taking into account this data, what is the posterior distribution of $x$ for this missile? What is the value in the posterior with the highest probability, also known as the MAP? Step4: Now suppose the new ultra-secret Alien Blaster 10K is being tested. In a press conference, an EDF general reports that the new design has been tested twice, taking two shots during each test. The results of the test are confidential, so the general won't say how many targets were hit, but they report Step5: If we start with a uniform prior, we can see what the likelihood function looks like Step6: A tie is most likely if they are both terrible shots or both very good. Is this data good or bad; that is, does it increase or decrease your estimate of $x$ for the Alien Blaster 10K? Now let's run it with the specified prior and see what happens when we multiply the convex prior and the concave posterior Step7: The posterior mean and MAP are lower than in the prior. Step8: So if we learn that the new design is "consistent", it is more likely to be consistently bad (in this case). Part Two Suppose we have we have a stockpile of 3 Alien Blaster 9000s and 7 Alien Blaster 10Ks. After extensive testing, we have concluded that the AB9000 hits the target 30% of the time, precisely, and the AB10K hits the target 40% of the time. If I grab a random weapon from the stockpile and shoot at 10 targets, what is the probability of hitting exactly 3? Again, you can write a number, mathematical expression, or Python code. Step9: The answer is a value drawn from the mixture of the two distributions. Continuing the previous problem, let's estimate the distribution of k, the number of successful shots out of 10. Write a few lines of Python code to simulate choosing a random weapon and firing it. Write a loop that simulates the scenario and generates random values of k 1000 times. Store the values of k you generate and plot their distribution. Step10: Here's what the distribution looks like. Step11: The mean should be near 3.7. We can run this simulation more efficiently using NumPy. First we generate a sample of xs Step12: Then for each x we generate a k Step13: And the results look similar. Step14: One more way to do the same thing is to make a meta-Pmf, which contains the two binomial Pmf objects Step15: Here's how we can draw samples from the meta-Pmf Step16: And here are the results, one more time Step17: This result, which we have estimated three ways, is a predictive distribution, based on our uncertainty about x. We can compute the mixture analtically using thinkbayes2.MakeMixture
Python Code: from __future__ import print_function, division % matplotlib inline import warnings warnings.filterwarnings('ignore') import numpy as np from thinkbayes2 import Hist, Pmf, Cdf, Suite, Beta import thinkplot Explanation: The Alien Blaster problem This notebook presents solutions to exercises in Think Bayes. Copyright 2016 Allen B. Downey MIT License: https://opensource.org/licenses/MIT End of explanation prior = Beta(2, 3) thinkplot.Pdf(prior.MakePmf()) prior.Mean() Explanation: Part One In preparation for an alien invasion, the Earth Defense League has been working on new missiles to shoot down space invaders. Of course, some missile designs are better than others; let's assume that each design has some probability of hitting an alien ship, $x$. Based on previous tests, the distribution of $x$ in the population of designs is well-modeled by a beta distribution with parameters $\alpha=2$ and $\beta=3$. What is the average missile's probability of shooting down an alien? End of explanation posterior = Beta(3, 2) posterior.Update((2, 8)) posterior.MAP() Explanation: In its first test, the new Alien Blaster 9000 takes 10 shots and hits 2 targets. Taking into account this data, what is the posterior distribution of $x$ for this missile? What is the value in the posterior with the highest probability, also known as the MAP? End of explanation from scipy import stats class AlienBlaster(Suite): def Likelihood(self, data, hypo): Computes the likeliood of data under hypo. data: number of shots they took hypo: probability of a hit, p n = data x = hypo # specific version for n=2 shots likes = [x**4, (1-x)**4, (2*x*(1-x))**2] # general version for any n shots likes = [stats.binom.pmf(k, n, x)**2 for k in range(n+1)] return np.sum(likes) Explanation: Now suppose the new ultra-secret Alien Blaster 10K is being tested. In a press conference, an EDF general reports that the new design has been tested twice, taking two shots during each test. The results of the test are confidential, so the general won't say how many targets were hit, but they report: "The same number of targets were hit in the two tests, so we have reason to think this new design is consistent." Write a class called AlienBlaster that inherits from Suite and provides a likelihood function that takes this data -- two shots and a tie -- and computes the likelihood of the data for each hypothetical value of $x$. If you would like a challenge, write a version that works for any number of shots. End of explanation pmf = Beta(1, 1).MakePmf() blaster = AlienBlaster(pmf) blaster.Update(2) thinkplot.Pdf(blaster) Explanation: If we start with a uniform prior, we can see what the likelihood function looks like: End of explanation pmf = Beta(2, 3).MakePmf() blaster = AlienBlaster(pmf) blaster.Update(2) thinkplot.Pdf(blaster) Explanation: A tie is most likely if they are both terrible shots or both very good. Is this data good or bad; that is, does it increase or decrease your estimate of $x$ for the Alien Blaster 10K? Now let's run it with the specified prior and see what happens when we multiply the convex prior and the concave posterior: End of explanation prior.Mean(), blaster.Mean() prior.MAP(), blaster.MAP() Explanation: The posterior mean and MAP are lower than in the prior. End of explanation k = 3 n = 10 x1 = 0.3 x2 = 0.4 0.3 * stats.binom.pmf(k, n, x1) + 0.7 * stats.binom.pmf(k, n, x2) Explanation: So if we learn that the new design is "consistent", it is more likely to be consistently bad (in this case). 
Part Two Suppose we have we have a stockpile of 3 Alien Blaster 9000s and 7 Alien Blaster 10Ks. After extensive testing, we have concluded that the AB9000 hits the target 30% of the time, precisely, and the AB10K hits the target 40% of the time. If I grab a random weapon from the stockpile and shoot at 10 targets, what is the probability of hitting exactly 3? Again, you can write a number, mathematical expression, or Python code. End of explanation def flip(p): return np.random.random() < p def simulate_shots(n, p): return np.random.binomial(n, p) ks = [] for i in range(1000): if flip(0.3): k = simulate_shots(n, x1) else: k = simulate_shots(n, x2) ks.append(k) Explanation: The answer is a value drawn from the mixture of the two distributions. Continuing the previous problem, let's estimate the distribution of k, the number of successful shots out of 10. Write a few lines of Python code to simulate choosing a random weapon and firing it. Write a loop that simulates the scenario and generates random values of k 1000 times. Store the values of k you generate and plot their distribution. End of explanation pmf = Pmf(ks) thinkplot.Hist(pmf) len(ks), np.mean(ks) Explanation: Here's what the distribution looks like. End of explanation xs = np.random.choice(a=[x1, x2], p=[0.3, 0.7], size=1000) Hist(xs) Explanation: The mean should be near 3.7. We can run this simulation more efficiently using NumPy. First we generate a sample of xs: End of explanation ks = np.random.binomial(n, xs) Explanation: Then for each x we generate a k: End of explanation pmf = Pmf(ks) thinkplot.Hist(pmf) np.mean(ks) Explanation: And the results look similar. End of explanation from thinkbayes2 import MakeBinomialPmf pmf1 = MakeBinomialPmf(n, x1) pmf2 = MakeBinomialPmf(n, x2) metapmf = Pmf({pmf1:0.3, pmf2:0.7}) metapmf.Print() Explanation: One more way to do the same thing is to make a meta-Pmf, which contains the two binomial Pmf objects: End of explanation ks = [metapmf.Random().Random() for _ in range(1000)] Explanation: Here's how we can draw samples from the meta-Pmf: End of explanation pmf = Pmf(ks) thinkplot.Hist(pmf) np.mean(ks) Explanation: And here are the results, one more time: End of explanation from thinkbayes2 import MakeMixture mix = MakeMixture(metapmf) thinkplot.Hist(mix) mix.Mean() Explanation: This result, which we have estimated three ways, is a predictive distribution, based on our uncertainty about x. We can compute the mixture analtically using thinkbayes2.MakeMixture: def MakeMixture(metapmf, label='mix'): Make a mixture distribution. Args: metapmf: Pmf that maps from Pmfs to probs. label: string label for the new Pmf. Returns: Pmf object. mix = Pmf(label=label) for pmf, p1 in metapmf.Items(): for k, p2 in pmf.Items(): mix[k] += p1 * p2 return mix The outer loop iterates through the Pmfs; the inner loop iterates through the items. So p1 is the probability of choosing a particular Pmf; p2 is the probability of choosing a value from the Pmf. In the example, each Pmf is associated with a value of x (probability of hitting a target). The inner loop enumerates the values of k (number of targets hit after 10 shots). End of explanation
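The same predictive mixture can be cross-checked without the thinkbayes2 helpers by expanding it directly with scipy; this is a sketch using the stockpile weights (0.3 / 0.7) and hit rates (0.3 / 0.4) quoted above:

from scipy import stats
import numpy as np

n = 10
weapons = {0.3: 0.3, 0.4: 0.7}    # hit probability x -> probability of drawing that weapon
ks = np.arange(n + 1)
mix = sum(w * stats.binom.pmf(ks, n, x) for x, w in weapons.items())
print(mix.sum())                  # close to 1.0
print((ks * mix).sum())           # predictive mean, 0.3*3 + 0.7*4 = 3.7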
8,270
Given the following text description, write Python code to implement the functionality described below step by step Description: Variational Autoencoder Tutorial in TensorFlow David Zoltowski The tutorial is organized in the following manner Step1: When defining the model we will define many sets of weight and bias parameters (variables). I have copied this function from the TensorFlow website to define normal distributed weight variables, truncated at 2 standard deviations, and constant bias variables. From the TensorFlow website Step2: The final part of the set-up is to define the size of different parts of our network. We specifiy the input size to be the length of a image stacked into a vector, the number of hidden units in each of our hidden layers, and the dimensionality of the latent space. Step3: 2. Building the computational graph. We use the placeholder construct in TensorFlow to parametrize the graph to accept inputs (TensorFlow.org). The placeholders fixes a node in the computational graph with no value, but a value which we will specify later. We define the placeholder with the type of input, here a floating point tf.float32, and size of input. Our input to placeholders will be a mini-batch of images. We specify the size following the TensorFlow webiste Step4: The next stage of the VAE is to build the decoder network, also called the inference network or approximate posterior. This network will map the image vector to two vectors of size "latent_dim" Step5: Now we need to read out the second hidden layer to the mean vector and log variance vector. I introduce two more sets of weights and biases corresponding to each output vector. The output vectors are not passed through a nonlinearity. I chose to output the log variance so that I did not have to worry about positivity constraints. Step6: The preceding layers will output a mean vector and log variance vector for every input image in the batch, corresponding to the mean and variance parameters of the approximate posterior for each image. We will train the parameters of the variatonal autoencoder - both the parameters of the approximate posterior q(z|x) and generative model p(x|z) which we have yet to define - using stochastic estimates of the evidence lower bound (ELBO) Step7: Now that we have a sample z from the approximate posterior q(z|x), we pass z through a decoder network to reconstruct the image. I defined the decoder network similarly to the same way as the encoder network, with two hidden layers. However, this time the input to the network is of dimensionality "latent_dim" and the output of dimension 784 x 1, the image size, is passed through a sigmoidal function to be in [0,1]. Step8: The last part of the computational graph that we need to specify is the training objective. The objective consists of two parts Step9: A loss that I have observed being used is the cross entropy of the output reconstruction and the true image value, which is mathematically equivalent to the Bernoulli log likelihood of each pixel of image x given each parameter x_hat. However, we are confused as to why this loss is being use, as the pixel values are not binary. Step10: The KL divergence between two normal distributions q(z|x) and p(z) has an exact form. Step11: The training objective "loss" is the mean of the reconstruction loss and KL divergence loss across the images in the mini-batch. We will train the variables in our model to minimize the loss, using the tf.train API. 
I have defined a training step below which will use the Adam optimizer. Step12: 3. Training the model. The training code was modified from https://jmetzen.github.io/2015-11-27/vae.html. Step13: Visualization This piece of code takes a test image from MNIST (x_sample) and evaluates the reconstruction x_hat when x_sample is fed into the VAE.
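Before the full TensorFlow implementation below, the reparametrization step at the heart of the tutorial can be sketched in plain NumPy; the values here are made up purely for illustration:

import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.5, -1.0])             # stand-in for the encoder's mean output
log_sigma_sqr = np.array([0.1, -0.3])  # stand-in for the encoder's log-variance output
eps = rng.standard_normal(mu.shape)    # eps ~ N(0, I)
z = mu + np.sqrt(np.exp(log_sigma_sqr)) * eps   # a sample from N(mu, sigma^2), differentiable in mu and log_sigma_sqr
print(z)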
Python Code: import numpy as np import tensorflow as tf # import MNIST data from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data', one_hot=True) Explanation: Variational Autoencoder Tutorial in TensorFlow David Zoltowski The tutorial is organized in the following manner: 1. Set-up: importing packages, data, defining useful functions, and parameter settings 2. Building the computational graph: defining the series of operations we perform on an input. This is where we build the encoder and decoder networks and implement the reparametrization trick. Additionally, we define the loss function used to optimize the parameters of the graph. 3. Training the model: we train the model using stochastic gradient descent on mini-batches of data, optimizing the loss function. 4. Visualization: visualize the ability of the trained models to reconstruct test images and sampling new images from the prior. 1. Set-up. First, import the necessary packages, numpy and tensorflow, and the data on which we will train the model, MNIST. End of explanation # define functions to create weight and bias variables, from TensorFlow.org def weight_variable(shape): initial = tf.truncated_normal(shape, stddev=0.1) return tf.Variable(initial) def bias_variable(shape): initial = tf.constant(0.1, shape=shape) return tf.Variable(initial) Explanation: When defining the model we will define many sets of weight and bias parameters (variables). I have copied this function from the TensorFlow website to define normal distributed weight variables, truncated at 2 standard deviations, and constant bias variables. From the TensorFlow website: "Variables allow us to add trainable parameters to a graph. They are constructed with a type and initial value" e.g. tf.Variable([0.0],dtype=tf.float32). End of explanation latent_dim = 20; # size of latent space input_size = 784; # size of input image vector hidden_size = 500; # size of hidden layers in neural network Explanation: The final part of the set-up is to define the size of different parts of our network. We specifiy the input size to be the length of a image stacked into a vector, the number of hidden units in each of our hidden layers, and the dimensionality of the latent space. End of explanation x = tf.placeholder(tf.float32, shape=[None, 784]) Explanation: 2. Building the computational graph. We use the placeholder construct in TensorFlow to parametrize the graph to accept inputs (TensorFlow.org). The placeholders fixes a node in the computational graph with no value, but a value which we will specify later. We define the placeholder with the type of input, here a floating point tf.float32, and size of input. Our input to placeholders will be a mini-batch of images. We specify the size following the TensorFlow webiste: "We want to be able to input any number of MNIST images, each flattened into a 784-dimensional vector. We represent this as a 2-D tensor of floating-point numbers, with a shape [None, 784]. (Here None means that a dimension can be of any length.)" End of explanation # encoder network W1 = weight_variable([input_size,hidden_size]) b1 = bias_variable([hidden_size]) h1 = tf.nn.sigmoid(tf.matmul(x, W1) + b1) W2 = weight_variable([hidden_size,hidden_size]) b2 = bias_variable([hidden_size]) h2 = tf.nn.sigmoid(tf.matmul(h1,W2) + b2) Explanation: The next stage of the VAE is to build the decoder network, also called the inference network or approximate posterior. 
This network will map the image vector to two vectors of size "latent_dim": a mean vector and a log variance vector. I created a network with two hidden layers. I defined two sets of hidden layer weights, W1 and W2, and two sets of hidden layer biases, b1 and b2. The weights are matrices with dimensionality "input layer x current layer" and the biases are vectors with dimensionality "current layer". For each layer, I used the "tf.matmul" function to multiply the preceding input and the weight matrices, and then added the biases. Finally, I passed the sum W * x + b through a sigmoidal nonlinearity using "tf.nn.sigmoid". End of explanation # get mean W_hidden_mean = weight_variable([hidden_size, latent_dim]) b_hidden_mean = bias_variable([latent_dim]) hidden_mean = tf.matmul(h2, W_hidden_mean) + b_hidden_mean # get sigma - log variances W_hidden_sigma = weight_variable([hidden_size, latent_dim]) b_hidden_sigma = bias_variable([latent_dim]) hidden_log_sigma_sqr = tf.matmul(h2, W_hidden_sigma) + b_hidden_sigma Explanation: Now we need to read out the second hidden layer to the mean vector and log variance vector. I introduce two more sets of weights and biases corresponding to each output vector. The output vectors are not passed through a nonlinearity. I chose to output the log variance so that I did not have to worry about positivity constraints. End of explanation # sample noise with same shape as log variance vectors eps = tf.random_normal(tf.shape(hidden_log_sigma_sqr), 0, 1, dtype=tf.float32) # get a sample from the approximate posterior for each input # add mean and the std (sqrt of the exponentiated log variances) pointwise times noise hidden_sample = hidden_mean + tf.multiply(tf.sqrt(tf.exp(hidden_log_sigma_sqr)),eps) Explanation: The preceding layers will output a mean vector and log variance vector for every input image in the batch, corresponding to the mean and variance parameters of the approximate posterior for each image. We will train the parameters of the variatonal autoencoder - both the parameters of the approximate posterior q(z|x) and generative model p(x|z) which we have yet to define - using stochastic estimates of the evidence lower bound (ELBO): E_q(z|x)[log p(x|z)] - D_KL[q(z|x)||p(z)], where p(z) is N(0,I). We will estimate the ELBO expectation using one sample from q(z|x) for each data point. This is where we use the reparametrization trick obtain lower variance gradients. Instead, we will draw noise eps from a zero mean, identity covariance Gaussian and map it to a sample from each q(z|x) using the mean and log variance vectors: z = mu + eps.* variance. We draw a noise vector eps and map it to a sample from the posterior q(z|x) for each input image x in the mini-batch with the following lines of code. End of explanation # decoder network - map the hidden sample to an output of size = image size W3 = weight_variable([latent_dim,hidden_size]) b3 = bias_variable([hidden_size]) h3 = tf.nn.sigmoid(tf.matmul(hidden_sample, W3) + b3) W4 = weight_variable([hidden_size,hidden_size]) b4 = bias_variable([hidden_size]) h4 = tf.nn.sigmoid(tf.matmul(h3,W4) + b4) # output x_hat, the reconstruction mean W_out = weight_variable([hidden_size,input_size]) b_out = bias_variable([input_size]) x_hat = tf.nn.sigmoid(tf.matmul(h4, W_out) + b_out) Explanation: Now that we have a sample z from the approximate posterior q(z|x), we pass z through a decoder network to reconstruct the image. I defined the decoder network similarly to the same way as the encoder network, with two hidden layers. 
However, this time the input to the network is of dimensionality "latent_dim" and the output of dimension 784 x 1, the image size, is passed through a sigmoidal function to be in [0,1]. End of explanation # reconstruction loss is squared error between reconstruction and image (MLE in N(mu(x),sigma^2 I)) reconstruction_loss = tf.reduce_sum(tf.square(x-x_hat)/0.5,1) Explanation: The last part of the computational graph that we need to specify is the training objective. The objective consists of two parts: the reconstruction loss - corresponding go log p(x|z) - and the KL divergence D_KL( q(z|x) || p(z) ). I will set the reconstruction loss to be the sum of squared errors between each pixel of the input image x and output of the network x_hat. This corresponds to log p(x|z) up to an additive constant when p(x|z) is N(mu(z),I). End of explanation # another loss people use is the cross entropy of the output reconstruction and true image value # reconstruction loss is Bernoulli log likelihood of each pixel of image x given x hat (output of decoder) #reconstruction_loss = -tf.reduce_sum(x * tf.log(1e-10 + x_hat) + (1-x) * tf.log(1e-10 + 1 - x_hat),1) Explanation: A loss that I have observed being used is the cross entropy of the output reconstruction and the true image value, which is mathematically equivalent to the Bernoulli log likelihood of each pixel of image x given each parameter x_hat. However, we are confused as to why this loss is being use, as the pixel values are not binary. End of explanation # KL divegence between the approximate posterior and prior kl_divergence = -0.5 * tf.reduce_sum(1 + hidden_log_sigma_sqr - tf.square(hidden_mean) - tf.exp(hidden_log_sigma_sqr), 1) Explanation: The KL divergence between two normal distributions q(z|x) and p(z) has an exact form. End of explanation # avg_loss is the mean across images x in the batch loss = tf.reduce_mean(reconstruction_loss + kl_divergence); # train step train_step = tf.train.AdamOptimizer(1e-3).minimize(loss) Explanation: The training objective "loss" is the mean of the reconstruction loss and KL divergence loss across the images in the mini-batch. We will train the variables in our model to minimize the loss, using the tf.train API. I have defined a training step below which will use the Adam optimizer. End of explanation # saver to save model saver = tf.train.Saver() # train network sess = tf.InteractiveSession() sess.run(tf.global_variables_initializer()) n_samples = mnist.train.num_examples # Training cycle training_epochs = 50 batch_size = 100 display_step = 1 for epoch in range(training_epochs): avg_loss = 0. total_batch = int(n_samples / batch_size) # Loop over all batches for i in range(total_batch): x_batch = mnist.train.next_batch(batch_size) current_loss = loss.eval(feed_dict={x:x_batch[0]}) #/ n_samples * batch_size # Compute average loss avg_loss += current_loss / n_samples * batch_size # Fit training using batch data train_step.run(feed_dict={x: x_batch[0]}) # Display logs per epoch step if epoch % display_step == 0: print("Epoch:", '%04d' % (epoch+1), "loss=", "{:.9f}".format(avg_loss)) #saver.save(sess, 'davidz_vae_150_epochs') #saver = tf.train.import_meta_graph('davidz_vae_150_epochs.meta') #saver.restore(sess,tf.train.latest_checkpoint('./')) Explanation: 3. Training the model. The training code was modified from https://jmetzen.github.io/2015-11-27/vae.html. 
End of explanation # reconstruct import matplotlib.pyplot as plt %matplotlib inline x_sample = mnist.test.next_batch(1)[0] x_reconstruct = x_hat.eval(feed_dict={x:x_sample}) plt.figure plt.subplot(1, 2, 1) plt.imshow(x_sample.reshape(28, 28), vmin=0, vmax=1, cmap="gray") plt.title("Test input") plt.colorbar() plt.subplot(1, 2, 2) plt.imshow(x_reconstruct.reshape(28, 28), vmin=0, vmax=1, cmap="gray") plt.title("Reconstruction") plt.colorbar() plt.tight_layout() # generate eps = np.random.normal(0,1,size=latent_dim) eps = np.array(eps).reshape(1,latent_dim) import matplotlib.pyplot as plt %matplotlib inline x_reconstruct = x_hat.eval(feed_dict={hidden_sample:eps}) plt.figure plt.imshow(x_reconstruct.reshape(28, 28), vmin=0, vmax=1, cmap="gray") plt.title("Test input") plt.colorbar() plt.tight_layout() # to add normalizing flows, need to estimate entropy of log posterior rather than compute KL explicitly, # compute probability of latent variable explicitly rather than in KL, # compute Jacobian determinant terms for penalizing ELBO # ensure flows are parametrized in a good way Explanation: Visualization This piece of code takes a test image from MNIST (x_sample) and evaluates the reconstruction x_hat when x_sample is fed into the VAE. End of explanation
8,271
Given the following text description, write Python code to implement the functionality described below step by step Description: Extract HRRR data using Unidata's Siphon package and Xarray (Unidata Python Workshop) Step1: Try reading extracted data with Xarray Step2: Try plotting the LambertConformal data with Cartopy
Python Code: import matplotlib.pyplot as plt import numpy as np %matplotlib inline # Resolve the latest HRRR dataset from siphon.catalog import get_latest_access_url hrrr_catalog = "http://thredds.ucar.edu/thredds/catalog/grib/NCEP/HRRR/CONUS_2p5km/catalog.xml" latest_hrrr_ncss = get_latest_access_url(hrrr_catalog, "NetcdfSubset") # Set up access via NCSS from siphon.ncss import NCSS ncss = NCSS(latest_hrrr_ncss) # Create a query to ask for all times in netcdf4 format for # the Temperature_surface variable, with a bounding box query = ncss.query() query.all_times().accept('netcdf4').variables('Temperature_height_above_ground') query.lonlat_box(north=45, south=41, east=-67, west=-77) # Get the raw bytes and write to a file. data = ncss.get_data_raw(query) with open('test.nc', 'wb') as outf: outf.write(data) Explanation: <div style="width:1000 px"> <div style="float:right; width:98 px; height:98px;"> <img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;"> </div> <h1>Extract HRRR data using Unidata's Siphon package and Xarray</h1> <h3>Unidata Python Workshop</h3> <div style="clear:both"></div> </div> <hr style="height:2px;"> End of explanation import xarray as xr nc = xr.open_dataset('test.nc') nc var='Temperature_height_above_ground' ncvar = nc[var] ncvar grid = nc[ncvar.grid_mapping] grid lon0 = grid.longitude_of_central_meridian lat0 = grid.latitude_of_projection_origin lat1 = grid.standard_parallel earth_radius = grid.earth_radius Explanation: Try reading extracted data with Xarray End of explanation import cartopy import cartopy.crs as ccrs #cartopy wants meters, not km x = ncvar.x.data*1000. y = ncvar.y.data*1000. #globe = ccrs.Globe(ellipse='WGS84') #default globe = ccrs.Globe(ellipse='sphere', semimajor_axis=grid.earth_radius) crs = ccrs.LambertConformal(central_longitude=lon0, central_latitude=lat0, standard_parallels=(lat0,lat1), globe=globe) print(ncvar.x.data.shape) print(ncvar.y.data.shape) print(ncvar.data.shape) # find the correct time dimension name for d in ncvar.dims: if "time" in d: timevar = d nc[timevar].data[6] istep = 6 fig = plt.figure(figsize=(12,8)) ax = plt.axes(projection=ccrs.PlateCarree()) mesh = ax.pcolormesh(x,y,ncvar[istep,::].data.squeeze(), transform=crs,zorder=0) ax.coastlines(resolution='10m',color='black',zorder=1) gl = ax.gridlines(draw_labels=True) gl.xlabels_top = False gl.ylabels_right = False plt.title(nc[timevar].data[istep]); Explanation: Try plotting the LambertConformal data with Cartopy End of explanation
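Because the notebook above depends on a live THREDDS server, a useful offline variant is to exercise just the projection handling; the sketch below builds a Lambert Conformal CRS from hand-written parameters (stand-ins for the grid_mapping attributes read above) and plots a dummy field:

import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs

globe = ccrs.Globe(ellipse='sphere', semimajor_axis=6371229.0)   # illustrative sphere radius
crs = ccrs.LambertConformal(central_longitude=-97.5, central_latitude=38.5,
                            standard_parallels=(38.5, 38.5), globe=globe)
x = np.linspace(1.8e6, 2.4e6, 50)          # projected coordinates in metres (made up)
y = np.linspace(-1.0e6, -0.4e6, 40)
data = np.random.rand(y.size, x.size)      # dummy stand-in for the temperature field
ax = plt.axes(projection=ccrs.PlateCarree())
ax.pcolormesh(x, y, data, transform=crs)
ax.coastlines(resolution='50m')
plt.show()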
8,272
Given the following text description, write Python code to implement the functionality described below step by step Description: Variable Names Before moving on, let's say a few things about variable names. If you want to name some object in Python you have to abide by a few 'concrete' rules Step1: In addition to the above ‘concrete’ there are also a few guidelines that describe "best practices" Step2: As for using underscores and captial letters in names, well, thats to do with classes, which I do not cover in this lecture series. All you need to know right now is that underscores and capital letters are best avoided. Anyway, here are a few more examples Step3: So in the above examples we saw a few terrible names and two reasonable ones. “sales_tax” is my top-pick because the name tells use something about our data and at just two words long it is concise. In C# the usual naming convention is to use snake case. So a variable in C# should be called 'salesTax'. In Python we tend to use underscores to split up words. And thats another reason why 'sales_tax' is my top-pick. The last thing I would like to say about variable names is that very good names can negate the need for comments. Pep8 (the main style guideline for Python) recommends that we use in-line comments sparingly and comments should also avoid stating the obvious; needless comments only serve as a distraction. Don't get it twisted; we should write comments, but they should be good ones. The real point here is that good code should explain itself. Here, let me show you can example. Step4: So this bit of code isn't actually that bad; the variable name "price" is a reasonably good one (short, readable, descriptive) and the comment is helpful. All in all, this code is decent, but that doesn’t mean we cannot do even better Step5: All we did here is change our variable name to be a bit more descriptive. And now since our variable name states that this figure includes tax we don't need the comment! Show, don't tell. The point of today was to make you aware that programming is not just about writing code that machines can understand; Good code is understood by both man AND machine. Homework Below is three lines of python code, the only thing you have seen yet is the '*' symbol which means multiplication. The Syntax
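One possible cleanup of the homework snippet just described, assuming the intent really is a circle calculation (3.14 reads as pi, and pi times a diameter is a circumference); treat it as an illustrative sketch rather than the official answer:

PI = 3.14                        # a constant, so ALLCAPS by convention
diameter = 2
circumference = PI * diameter    # replaces belt_size; the name now documents the formula
print(circumference)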
Python Code: print = 10.20 # Rule 1: "print" is a reserved word in Python and therefore we cannot use it. 0sales = 10.20 # Rule 2: cannot start a name with a number. this has spaces = 10.20 # Rule 2: space character is punctation. thisnamehasa+init = 10.20 # Rule 3: Fails because "+" is a special character in python (addition) Explanation: Variable Names Before moving on, let's say a few things about variable names. If you want to name some object in Python you have to abide by a few 'concrete' rules: The name should not be already be in use (unless you want to 'delete' the previous value). The name should not start with a number or punctuation marks (with the notable exception of the underscore "_" character) The name cannot contain any "special characters" or spaces. Below are a few examples of the above three rules in action. End of explanation golden_ratio = 1.61803398874989484820 # okay name GOLDEN_RATIO = 1.61803398874989484820 # better name Explanation: In addition to the above ‘concrete’ there are also a few guidelines that describe "best practices": Variable names should be concise yet relevant, descriptive too. The best names are not just relevant and descriptive, there are also elegant. The best names are easy to read and type. Try to avoid possible confusion; "Print" is a legal name, but easily mistaken for the "print" command. Don’t use ALL CAPS (unless you know what you are doing). Don't start a name with an underscore character or double underscore (unless you know what you are doing). Don't start a name with a capital letter (unless you know what you are doing). For the last three bullet points I said "unless you know what you are doing", let me explain that quickly. By convention, variable names in ALLCAPS are supposed to have a special meaning, and that is these are values that should NOT be changed by the program at runtime. Mathematical constants are a pretty good example; for most applications you probably don't want to change these values while the program is running. For example, under what circumstances would you want to change the value of PI while the code is running? In python we denote constants by naming them in ALLCAPS, and I would recommend you follow this already agreed upon practice; after all, if everyone uses the same naming conventions then all code is that little bit easier to read. End of explanation # superbad (nonsense name, mixed case, starts with underscore): _QWeRTy = 10.20 # bad (still nonsense): qwerty = 10.20 # poor (descriptive, but mixed case is very annoying): SaLestAX = 10.20 # better (most boxes ticked!): salestax = 10.20 # even better: sales_tax = 10.20 # WAAAYYYY over the top (descriptive doesn't mean "write a novel!") this_is_the_sales_tax_for_the_beer_I_bought_on_a_summers_day_in_1965_or_was_it_1967_I_cant_remember = 10.20 Explanation: As for using underscores and captial letters in names, well, thats to do with classes, which I do not cover in this lecture series. All you need to know right now is that underscores and capital letters are best avoided. Anyway, here are a few more examples: End of explanation price = 1000 # including sales tax Explanation: So in the above examples we saw a few terrible names and two reasonable ones. “sales_tax” is my top-pick because the name tells use something about our data and at just two words long it is concise. In C# the usual naming convention is to use snake case. So a variable in C# should be called 'salesTax'. In Python we tend to use underscores to split up words. 
And thats another reason why 'sales_tax' is my top-pick. The last thing I would like to say about variable names is that very good names can negate the need for comments. Pep8 (the main style guideline for Python) recommends that we use in-line comments sparingly and comments should also avoid stating the obvious; needless comments only serve as a distraction. Don't get it twisted; we should write comments, but they should be good ones. The real point here is that good code should explain itself. Here, let me show you can example. End of explanation price_after_tax = 1000 Explanation: So this bit of code isn't actually that bad; the variable name "price" is a reasonably good one (short, readable, descriptive) and the comment is helpful. All in all, this code is decent, but that doesn’t mean we cannot do even better: End of explanation # WTF does this do? Place your bets now... cake = 3.14 diameter = 2 belt_size = cake * diameter Explanation: All we did here is change our variable name to be a bit more descriptive. And now since our variable name states that this figure includes tax we don't need the comment! Show, don't tell. The point of today was to make you aware that programming is not just about writing code that machines can understand; Good code is understood by both man AND machine. Homework Below is three lines of python code, the only thing you have seen yet is the '*' symbol which means multiplication. The Syntax: {number}*{number} Your task comes in three parts: Figure out what the code is supposed to do. (this might take some googling. Hint: think circles!) Add a comment or two, explaining what it does. change the name "belt_size" and "cake" to something more meaningful. After you have renamed the variables, do you still need the comments in order to understand the code? End of explanation
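A further hypothetical illustration of the ALLCAPS-constant guideline discussed above (the names here are invented for the example, not taken from the lecture):

SPEED_OF_LIGHT = 299_792_458                 # metres per second; a value the program should never change at runtime

def light_travel_time(distance_in_metres):
    return distance_in_metres / SPEED_OF_LIGHT

print(light_travel_time(384_400_000))        # roughly 1.28 seconds, about the Moon-to-Earth distance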
8,273
Given the following text description, write Python code to implement the functionality described below step by step Description: Лабораторные работы DES DES - блочный алгоритм для симметричного шифрования. - Работает на блоках данных по 64 бита - Размер ключа - 64 бита (56 бит + 8 проверочных (parity bits)) - Использует 16 раундов шифрования сетью Фейстеля, для каждого раунда генерируется свой подключ - Если нужно зашифровать данные, размерном больше 64-х бит, используются слудующие режими работы (mode of operation) Step1: Работа алгоритма Step2: Хеш-функция - 8-ми байтная - Переменные a, b, c и d после генерации складываются в шестнадцатеричном виде в порядке d, c, a, b. Код функции Step3: Демонстрация работы Step4: Коллизия Поиск 4-х байтных коллизий для хеш-функции Step5: Кривая вероятности коллизии на выбранном интервале Расчет вероятностей коллизии для хеш-выборок заданного размера с использованием принципа парадокса дней рождения. Step6: Протокол Диффи-Хеллмана Позволяет двум или более сторонам получить общий секретный ключ, используя незащищенный от прослушивания канал связи. Полученный ключ используется для шифрования дальнейшего обмена с помощью алгоритмов симметричного шифрования. Использует операции возведения в степень по модулю и симметричность модульной арифметики. Код алгоритма Step7: Работа алгоритма Step8: Алгоритм RSA RSA - ассиметричный криптографический алгоритм, использующий открытые и закрытые ключи. - Открытый ключ - пара (e, N), где e - открытая экспонента, N - результат выполнения функции Эйлера - Закрытый ключ - пара (d, N), где d - число, мультипликативно обратное открытой экспоненте, N - результат выполнения функции Эйлера - Данные шифруются открытым ключем, а расшифровываются закрытым. - Функция Эйлера - выражение вида (p - 1) * (q - 1), где p и q - простые случайные числа - Шифрование происходит за счет операций возведения в степень по модулю с открытыми данными и ключем Код алгоритма Step9: Работа алгоритма Step10: ЭЦП Реализация упрощенной электронной цифровой подписи. $$H_i = (H_{i-1} + M_i)^2 \mod n,H_0 = 0$$ $$S = H ^ d \mod n$$ $$H' = S^e \mod n$$ Код алгоритма Step11: Демонстрация работы ЭЦП Step12: Практические занятия Шифр Цезаря Шифрование Step13: Расшифровывание Step14: Демонстрация работы алгоритма Step15: Криптоанализ Взломать шифр Цезаря можно, используя частоты букв алфавита и вычисление наименьшей энтропии Код для взлома шифров на английском языке Step16: Демонстрация работы кода Step17: Шифр Виженера Шифр Виженера — метод полиалфавитного шифрования буквенного текста с использованием ключевого слова. Код алгоритма Step18: Tabula Recta - таблицы Виженера Step19: Демонстрация работы алгоритма Step20: Шифр Вернама Алгоритм существует в нескольких вариантах, например, оригинальный вариант с исключающим ИЛИ по сообщению и секретному ключу, а также шифр Вернама по модулю m, в котором знаки открытого текста, шифрованного текста и ключа принимают значения из кольца вычетов множества знаков алфавита. Код алгоритма Step21: Демонстрация работы
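Before the lab code below, the PKCS#7 padding mentioned in the DES description can be sketched on its own; this is the standard byte-oriented form, while the DES class below uses its own character-based variant:

def pkcs7_pad(data: bytes, block_size: int = 8) -> bytes:
    # append n copies of the byte n so the length becomes a multiple of block_size
    n = block_size - len(data) % block_size
    return data + bytes([n]) * n

def pkcs7_unpad(data: bytes) -> bytes:
    # the last byte says how many padding bytes to strip
    return data[:-data[-1]]

padded = pkcs7_pad(b'hello world!')   # 12 bytes -> 16 bytes, four 0x04 bytes appended
print(padded, pkcs7_unpad(padded))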
Python Code: import bitarray import itertools from collections import deque class DES(object): _initial_permutation = [ 58, 50, 42, 34, 26, 18, 10, 2, 60, 52, 44, 36, 28, 20, 12, 4, 62, 54, 46, 38, 30, 22, 14, 6, 64, 56, 48, 40, 32, 24, 16, 8, 57, 49, 41, 33, 25, 17, 9, 1, 59, 51, 43, 35, 27, 19, 11, 3, 61, 53, 45, 37, 29, 21, 13, 5, 63, 55, 47, 39, 31, 23, 15, 7 ] _final_permutation = [ 40, 8, 48, 16, 56, 24, 64, 32, 39, 7, 47, 15, 55, 23, 63, 31, 38, 6, 46, 14, 54, 22, 62, 30, 37, 5, 45, 13, 53, 21, 61, 29, 36, 4, 44, 12, 52, 20, 60, 28, 35, 3, 43, 11, 51, 19, 59, 27, 34, 2, 42, 10, 50, 18, 58, 26, 33, 1, 41, 9, 49, 17, 57, 25 ] _expansion_function = [ 32, 1, 2, 3, 4, 5, 4, 5, 6, 7, 8, 9, 8, 9, 10, 11, 12, 13, 12, 13, 14, 15, 16, 17, 16, 17, 18, 19, 20, 21, 20, 21, 22, 23, 24, 25, 24, 25, 26, 27, 28, 29, 28, 29, 30, 31, 32, 1 ] _permutation = [ 16, 7, 20, 21, 29, 12, 28, 17, 1, 15, 23, 26, 5, 18, 31, 10, 2, 8, 24, 14, 32, 27, 3, 9, 19, 13, 30, 6, 22, 11, 4, 25 ] _pc1 = [ 57, 49, 41, 33, 25, 17, 9, 1, 58, 50, 42, 34, 26, 18, 10, 2, 59, 51, 43, 35, 27, 19, 11, 3, 60, 52, 44, 36, 63, 55, 47, 39, 31, 23, 15, 7, 62, 54, 46, 38, 30, 22, 14, 6, 61, 53, 45, 37, 29, 21, 13, 5, 28, 20, 12, 4 ] _pc2 = [ 14, 17, 11, 24, 1, 5, 3, 28, 15, 6, 21, 10, 23, 19, 12, 4, 26, 8, 16, 7, 27, 20, 13, 2, 41, 52, 31, 37, 47, 55, 30, 40, 51, 45, 33, 48, 44, 49, 39, 56, 34, 53, 46, 42, 50, 36, 29, 32 ] _left_rotations = [ 1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1 ] _sbox = [ # S1 [ [14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7], [0, 15, 7, 4, 14, 2, 13, 1, 10, 6, 12, 11, 9, 5, 3, 8], [4, 1, 14, 8, 13, 6, 2, 11, 15, 12, 9, 7, 3, 10, 5, 0], [15, 12, 8, 2, 4, 9, 1, 7, 5, 11, 3, 14, 10, 0, 6, 13] ], # S2 [ [15, 1, 8, 14, 6, 11, 3, 4, 9, 7, 2, 13, 12, 0, 5, 10], [3, 13, 4, 7, 15, 2, 8, 14, 12, 0, 1, 10, 6, 9, 11, 5], [0, 14, 7, 11, 10, 4, 13, 1, 5, 8, 12, 6, 9, 3, 2, 15], [13, 8, 10, 1, 3, 15, 4, 2, 11, 6, 7, 12, 0, 5, 14, 9] ], # S3 [ [10, 0, 9, 14, 6, 3, 15, 5, 1, 13, 12, 7, 11, 4, 2, 8], [13, 7, 0, 9, 3, 4, 6, 10, 2, 8, 5, 14, 12, 11, 15, 1], [13, 6, 4, 9, 8, 15, 3, 0, 11, 1, 2, 12, 5, 10, 14, 7], [1, 10, 13, 0, 6, 9, 8, 7, 4, 15, 14, 3, 11, 5, 2, 12] ], # S4 [ [7, 13, 14, 3, 0, 6, 9, 10, 1, 2, 8, 5, 11, 12, 4, 15], [13, 8, 11, 5, 6, 15, 0, 3, 4, 7, 2, 12, 1, 10, 14, 9], [10, 6, 9, 0, 12, 11, 7, 13, 15, 1, 3, 14, 5, 2, 8, 4], [3, 15, 0, 6, 10, 1, 13, 8, 9, 4, 5, 11, 12, 7, 2, 14] ], # S5 [ [2, 12, 4, 1, 7, 10, 11, 6, 8, 5, 3, 15, 13, 0, 14, 9], [14, 11, 2, 12, 4, 7, 13, 1, 5, 0, 15, 10, 3, 9, 8, 6], [4, 2, 1, 11, 10, 13, 7, 8, 15, 9, 12, 5, 6, 3, 0, 14], [11, 8, 12, 7, 1, 14, 2, 13, 6, 15, 0, 9, 10, 4, 5, 3] ], # S6 [ [12, 1, 10, 15, 9, 2, 6, 8, 0, 13, 3, 4, 14, 7, 5, 11], [10, 15, 4, 2, 7, 12, 9, 5, 6, 1, 13, 14, 0, 11, 3, 8], [9, 14, 15, 5, 2, 8, 12, 3, 7, 0, 4, 10, 1, 13, 11, 6], [4, 3, 2, 12, 9, 5, 15, 10, 11, 14, 1, 7, 6, 0, 8, 13] ], # S7 [ [4, 11, 2, 14, 15, 0, 8, 13, 3, 12, 9, 7, 5, 10, 6, 1], [13, 0, 11, 7, 4, 9, 1, 10, 14, 3, 5, 12, 2, 15, 8, 6], [1, 4, 11, 13, 12, 3, 7, 14, 10, 15, 6, 8, 0, 5, 9, 2], [6, 11, 13, 8, 1, 4, 10, 7, 9, 5, 0, 15, 14, 2, 3, 12] ], # S8 [ [13, 2, 8, 4, 6, 15, 11, 1, 10, 9, 3, 14, 5, 0, 12, 7], [1, 15, 13, 8, 10, 3, 7, 4, 12, 5, 6, 11, 0, 14, 9, 2], [7, 11, 4, 1, 9, 12, 14, 2, 0, 6, 10, 13, 15, 3, 5, 8], [2, 1, 14, 7, 4, 10, 8, 13, 15, 12, 9, 0, 3, 5, 6, 11] ] ] def __init__(self, key): self.key = key def encrypt(self, message): padded = self.pkcs7_padding(message, pad=True) result = [] for block in padded: result += self.encrypt_64bit(''.join(str(i) for i in block)) 
return result def decrypt(self, message, msg_in_bits=False): bits_array_msg = [] if msg_in_bits: bits_array_msg = message else: bits_array_msg = self._string_to_bitsarray(message) if len(bits_array_msg) % 64 != 0: raise ValueError('Ciphered code must be a multiple of 64') blocks_lst = [ bits_array_msg[i:i + 64] for i in range(0, len(bits_array_msg), 64) ] result = [] for block in blocks_lst: decrypted = self.decrypt_64bit(block, msg_in_bits=True) bl = list( ''.join(chr(int( ''.join( map(str, decrypted[i:i + 8])), 2)) for i in range(0, len(decrypted), 8))) bl = self._unpad(bl) result += bl return ''.join(result) def encrypt_64bit(self, message): return self.crypt(message, encrypt=True) def decrypt_64bit(self, message, msg_in_bits=False): return self.crypt(message, encrypt=False, msg_in_bits=msg_in_bits) def crypt(self, message, encrypt=True, msg_in_bits=False): bits_array_msg = [] if msg_in_bits: bits_array_msg = message else: bits_array_msg = self._string_to_bitsarray(message) bits_array_key = self._string_to_bitsarray(self.key) if len(bits_array_msg) != 64: raise ValueError('Message must be 64 bit!') if len(bits_array_key) != 64: raise ValueError('Key must be 64 bit!') # Compute 16 48-bit subkeys subkeys = self.get_subkeys(bits_array_key) # Convert the message using the initial permutation block msg = [bits_array_msg[i - 1] for i in self._initial_permutation] L, R = msg[:32], msg[32:] if encrypt: for i in range(16): prev_r = R r_feistel = self.feistel_function(R, subkeys[i]) R = [L[i] ^ r_feistel[i] for i in range(32)] L = prev_r else: for i in reversed(range(16)): prev_l = L l_feistel = self.feistel_function(L, subkeys[i]) L = [R[i] ^ l_feistel[i] for i in range(32)] R = prev_l before_final_permute = L + R return [before_final_permute[i - 1] for i in self._final_permutation] def pkcs7_padding(self, message, block_size=8, pad=True): msg = list(message) blocks_lst = [ msg[i:i + block_size] for i in range(0, len(msg), block_size) ] s = block_size return [ self._pad(b, s) if len(b) < block_size else b for b in blocks_lst ] if pad else blocks_lst def feistel_function(self, r_32bit, subkey_48bit): r_48bit = [r_32bit[i - 1] for i in self._expansion_function] subkey_xor_r = [r_48bit[i] ^ subkey_48bit[i] for i in range(48)] # Divide subkey_xor_r into 8 6-bit blocks for computing s-boxes b_6_bit_blocks = [subkey_xor_r[i:i + 6] for i in range(0, 48, 6)] # Compute 8 s-boxes and concatenate them into 32-bit vector after_sboxes_32bit = list(itertools.chain(*[ self.compute_s_box( self._sbox[i], b_6_bit_blocks[i]) for i in range(8) ])) # Compute the permutation and return the 32-bit block return [int(after_sboxes_32bit[i - 1]) for i in self._permutation] def compute_s_box(self, sbox, b_6_bit): row = int(''.join(str(x) for x in [b_6_bit[0], b_6_bit[5]]), 2) col = int(''.join(str(x) for x in b_6_bit[1:5]), 2) s_box_res = sbox[row][col] return list('{0:04b}'.format(s_box_res)) def get_subkeys(self, bits_array_key): # Extract 8 parity bits from the key (8, 16, 24, 32, 40, 48, 56, 64) # key_56bit = bits_array_key # del key_56bit[7::8] # Compute Permuted Choice 1 on the key key_56bit = [bits_array_key[i - 1] for i in self._pc1] # Split the key into two 28-bit subkeys key_56_left, key_56_right = [ key_56bit[i:i + 28] for i in range(0, 56, 28) ] # Compute 16 48-bit keys using left rotations and permuted choice 2 subkeys_48bit = [] C, D = key_56_left, key_56_right for i in range(16): C_deque, D_deque = deque(C), deque(D) C_deque.rotate(-self._left_rotations[i]) D_deque.rotate(-self._left_rotations[i]) C, D = 
list(C_deque), list(D_deque) CD = C + D subkeys_48bit.append([CD[i - 1] for i in self._pc2]) return subkeys_48bit def _string_to_bitsarray(self, string): ba = bitarray.bitarray() ba.fromstring(string) return [1 if i else 0 for i in ba.tolist()] def _pad(self, arr, block_size): z = block_size - len(arr) return arr + [z] * z def _unpad(self, arr): if str(arr[-1]).isdigit(): arr_str = ''.join(str(i) for i in arr) i = j = int(arr[-1]) while arr_str[-1] == str(j) and i > 0: arr_str = arr_str[:-1] i -= 1 return list(arr_str) else: return arr Explanation: Лабораторные работы DES DES - блочный алгоритм для симметричного шифрования. - Работает на блоках данных по 64 бита - Размер ключа - 64 бита (56 бит + 8 проверочных (parity bits)) - Использует 16 раундов шифрования сетью Фейстеля, для каждого раунда генерируется свой подключ - Если нужно зашифровать данные, размерном больше 64-х бит, используются слудующие режими работы (mode of operation): - ECB (electronic code book) - шифрование 64-битных по-порядку, не зависимо друг от друга - CBC (cipher block chaining) - каждый 64-битный блок открытого текста (кроме первого) складывается по модулю 2 с предыдущим результатом шифрования - CFB (cipher feed back) / OFB (output feed back) - схожы с CBC, но используют другие похожие схемы с xor - Если блок данных меньше 64-х бит, используется паддинг, например, PKCS5 или, в обобщенном виде, PKCS7 Алгоритм End of explanation d = DES('qwertyui') cipher = d.encrypt('hello world!') print("Ciphered bits:\n", cipher) deciphered = d.decrypt(cipher, msg_in_bits=True) print("Deciphered text:\n", deciphered) Explanation: Работа алгоритма End of explanation def hash_function(s=b''): a, b, c, d = 0xa0, 0xb1, 0x11, 0x4d for byte in bytearray(s): a ^= byte b = b ^ a ^ 0x55 c = b ^ 0x94 d = c ^ byte ^ 0x74 return format(d << 24 | c << 16 | a << 8 | b, '08x') Explanation: Хеш-функция - 8-ми байтная - Переменные a, b, c и d после генерации складываются в шестнадцатеричном виде в порядке d, c, a, b. Код функции: End of explanation str1 = 'hello' str2 = 'hello!' str3 = 'Hello World' print('Hash for %s:\t\t' % str1, hash_function(bytes(str1, 'ascii'))) print('Hash for %s:\t' % str2, hash_function(bytes(str2, 'ascii'))) print('Hash for %s:\t' % str3, hash_function(bytes(str3, 'ascii'))) Explanation: Демонстрация работы: End of explanation from random import choice, seed ascii = ''.join([chr(i) for i in range(33, 127)]) seed(37) found = {} for j in range(5000): # Build a 4 byte random string s = bytes(''.join([choice(ascii) for _ in range(4)]), 'ascii') h = hash_function(s) if h in found: v = found[h] if v == s: # Same hash, but from the same source string continue print(h, found[h], s) found[h] = s Explanation: Коллизия Поиск 4-х байтных коллизий для хеш-функции: End of explanation %matplotlib inline import math import numpy as np import matplotlib.pyplot as plt # Calculate the probability of collision among 10000 keys using Birthday Paradox Principle n = num_of_all_hashes = 8 ** 8 # 16777216 keys = 10000 probUnique = 1.0 keys_arr = np.array(range(1, keys)) coll_probs = [] for k in range(1, keys): probUnique = probUnique * (n - (k - 1)) / n coll_probs.append((1 - math.exp(-0.5 * k * (k - 1) / n))) plt.plot(keys_arr, coll_probs) plt.show() Explanation: Кривая вероятности коллизии на выбранном интервале Расчет вероятностей коллизии для хеш-выборок заданного размера с использованием принципа парадокса дней рождения. 
End of explanation from hashes.hash_function import hash_function from binascii import hexlify try: import ssl random_function = ssl.RAND_bytes random_provider = "Python SSL" except (AttributeError, ImportError): import OpenSSL random_function = OpenSSL.rand.bytes random_provider = "OpenSSL" class DiffieHellman(object): def __init__(self, generator=2, prime=11, key_length=540, private_key=None): self.generator = generator self.key_length = key_length self.prime = prime if private_key: self.private_key = private_key else: self.private_key = self.gen_private_key(self.key_length) self.public_key = self.gen_public_key() def get_random(self, bits): _rand = 0 _bytes = bits while _rand.bit_length() < bits: _rand = int.from_bytes(random_function(_bytes), byteorder='big') return _rand def gen_private_key(self, bits): return self.get_random(bits) def gen_public_key(self): return pow(self.generator, self.private_key, self.prime) def gen_secret(self, private_key, other_key): return pow(other_key, private_key, self.prime) def gen_key(self, other_key): self.shared_secret = self.gen_secret(self.private_key, other_key) try: _shared_secret_bytes = self.shared_secret.to_bytes( self.shared_secret.bit_length() // 8 + 1, byteorder='big' ) except AttributeError: _shared_secret_bytes = str(self.shared_secret) self.key = hash_function(_shared_secret_bytes) def get_key(self): return self.key def get_shared_secret(self): return self.shared_secret def show_params(self): print('Parameters:') print('Prime [{0}]: {1}'.format(self.prime.bit_length(), self.prime)) print( 'Generator [{0}]: {1}\n' .format(self.generator.bit_length(), self.generator)) print( 'Private key [{0}]: {1}\n' .format(self.private_key.bit_length(), self.private_key)) print( 'Public key [{0}]: {1}' .format(self.public_key.bit_length(), self.public_key)) def show_results(self): print('Results:') print( 'Shared secret [{0}]: {1}' .format(self.shared_secret.bit_length(), self.shared_secret)) print( 'Shared key [{0}]: {1}'.format(len(self.key), hexlify(bytes(self.key, 'ascii')))) Explanation: Протокол Диффи-Хеллмана Позволяет двум или более сторонам получить общий секретный ключ, используя незащищенный от прослушивания канал связи. Полученный ключ используется для шифрования дальнейшего обмена с помощью алгоритмов симметричного шифрования. Использует операции возведения в степень по модулю и симметричность модульной арифметики. 
Код алгоритма End of explanation p = 11 g = 2 d1 = DiffieHellman(generator=g, prime=p, key_length=3) d2 = DiffieHellman(generator=g, prime=p, key_length=3) d1.gen_key(d2.public_key) d2.gen_key(d1.public_key) d1.show_params() d1.show_results() d2.show_params() d2.show_results() if d1.get_key() == d2.get_key(): print('Shared keys match!') print('Key: ', hexlify(bytes(d1.key, 'ascii'))) print('Hashed key: ', d1.get_key()) else: print("Shared secrets didn't match!") print("Shared secret A: ", d1.gen_secret(d2.public_key)) print("Shared secret B: ", d2.gen_secret(d1.public_key)) Explanation: Работа алгоритма: End of explanation import json from random import randint from base64 import b64encode, b64decode class RSA(object): def __init__(self): self.p, self.q = self.gen_p_q() self.N = self.p * self.q self.phi = self.euler_function(self.p, self.q) self.public_key_pair = (self.gen_public_exponent(), self.N) self.private_key_pair = (self.gen_private_exponent(), self.N) def encrypt(self, message, public_key_pair): return self.crypt(message, public_key_pair, encrypt=True) def decrypt(self, cipher, private_key_pair): return self.crypt(cipher, private_key_pair, encrypt=False) def crypt(self, message, key_pair, encrypt=True): if encrypt: msg = [ord(c) for c in message] r = [pow(i, key_pair[0], key_pair[1]) for i in msg] return b64encode(bytes(json.dumps(r), 'utf8')).decode('utf8'), r else: msg = json.loads(b64decode(message).decode('utf8')) r = [pow(i, key_pair[0], key_pair[1]) for i in msg] return ''.join(chr(i) for i in r), r def get_public_key_pair(self): return self.public_key_pair def get_private_key_pair(self): return self.private_key_pair def get_phi(self): return self.phi def get_p_q(self): return self.p, self.q def gen_public_exponent(self): for e in reversed(range(self.phi)): if self.fermat_primality(e): self.e = e return e def gen_private_exponent(self): self.d = self.mod_multiplicative_inv(self.e, self.phi)[0] return self.d def euler_function(self, p, q): return (p - 1) * (q - 1) def gen_p_q(self): p_c, q_c = randint(2, 100000), randint(2, 100000) p = q = None gen1 = gen2 = self.eratosthenes_sieve() bigger = max(p_c, q_c) for i in range(bigger): if p_c > 0: p = next(gen1) if q_c > 0: q = next(gen2) p_c -= 1 q_c -= 1 return p, q def fermat_primality(self, n): if n == 2: return True if not n & 1: return False return pow(2, n - 1, n) == 1 def extended_euclide(self, b, n): # u*a + v*b = gcd(a, b) # return g, u, v x0, x1, y0, y1 = 1, 0, 0, 1 while n != 0: q, b, n = b // n, n, b % n x0, x1 = x1, x0 - q * x1 y0, y1 = y1, y0 - q * y1 return b, x0, y0 def mod_multiplicative_inv(self, a, b): g, u, v = self.extended_euclide(a, b) return b + u, a - v def eratosthenes_sieve(self): D = {} q = 2 while True: if q not in D: yield q D[q * q] = [q] else: for p in D[q]: D.setdefault(p + q, []).append(p) del D[q] q += 1 def print_info(self): print('p = %d\nq = %d' % (self.p, self.q)) print('N = %d\nphi = %d' % (self.N, self.phi)) print('e = %d\nd = %d' % (self.e, self.d)) Explanation: Алгоритм RSA RSA - ассиметричный криптографический алгоритм, использующий открытые и закрытые ключи. - Открытый ключ - пара (e, N), где e - открытая экспонента, N - результат выполнения функции Эйлера - Закрытый ключ - пара (d, N), где d - число, мультипликативно обратное открытой экспоненте, N - результат выполнения функции Эйлера - Данные шифруются открытым ключем, а расшифровываются закрытым. 
- Функция Эйлера - выражение вида (p - 1) * (q - 1), где p и q - простые случайные числа - Шифрование происходит за счет операций возведения в степень по модулю с открытыми данными и ключем Код алгоритма: End of explanation rsa = RSA() message = 'hello!' cipher = rsa.encrypt(message, rsa.get_public_key_pair()) print('Cipher:\n', cipher[0]) deciphered = rsa.decrypt(cipher[0], rsa.get_private_key_pair()) print('Deciphered text:\n', deciphered[0]) rsa.print_info() Explanation: Работа алгоритма: End of explanation import re class Signature(object): def __init__(self, public_key, private_key, n): self.public_key = public_key self.private_key = private_key self.n = n def sign(self, message, private_key): H = self.hash_function(message) signature = pow(H, private_key, self.n) return '@' + str(signature) + '@' + message def verify(self, message, public_key): regex = re.compile('@\d+@') match = regex.search(message) if match: signature = int(match.group(0).strip('@')) msg = regex.sub('', message) H1 = pow(signature, public_key, self.n) H2 = self.hash_function(msg) return H1 == H2 def hash_function(self, message): H = 0 for c in [ord(c) for c in message]: H = (H + c)**2 % self.n return H def get_public_key(self): return self.public_key def get_private_key(self): return self.private_key Explanation: ЭЦП Реализация упрощенной электронной цифровой подписи. $$H_i = (H_{i-1} + M_i)^2 \mod n,H_0 = 0$$ $$S = H ^ d \mod n$$ $$H' = S^e \mod n$$ Код алгоритма: End of explanation public_key = 5 private_key = 29 n = 91 signature = Signature(public_key, private_key, n) message = 'hello!' signed_message = signature.sign(message, signature.get_private_key()) # sign the message print('Initial message: ', message) print('Signed message: ', signed_message) if signature.verify(signed_message, public_key): print('Verification successful! Message was not modified.\n') else: print('Verification error! Message was modified.\n') modified_message = signed_message + ' hi!' # modify signed message print('Modified message: ', modified_message) if signature.verify(modified_message, public_key): print('Verification successful! Message was not modified.') else: print('Verification error! Message was modified') Explanation: Демонстрация работы ЭЦП: End of explanation def cipher(text, alphabet='abcdefghijklmnopqrstuvwxyz', key=0): result = "" alphabet = alphabet.lower() n = len(alphabet) for char in text: if char.isalpha(): new_char = alphabet[(alphabet.find(char.lower()) + key) % n] result += new_char if char.islower() else new_char.upper() else: result += char return result Explanation: Практические занятия Шифр Цезаря Шифрование: End of explanation def decipher(text, alphabet='abcdefghijklmnopqrstuvwxyz', key=0): result = "" alphabet = alphabet.lower() n = len(alphabet) for char in text: if char.isalpha(): new_char = alphabet[(alphabet.find(char.lower()) - key + n) % n] result += new_char if char.islower() else new_char.upper() else: result += char return result Explanation: Расшифровывание: End of explanation alphabet = 'АБВГДЕЁЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯ' text = 'Съешь же ещё этих мягких французских булок, да выпей чаю.' 
key = 3 ciphered_phrase = cipher(text, alphabet, key) print(ciphered_phrase) deciphered_phrase = decipher(ciphered_phrase, alphabet, key) print(deciphered_phrase) Explanation: Демонстрация работы алгоритма: End of explanation import numpy as np ENGLISH_FREQS = [ 0.08167, 0.01492, 0.02782, 0.04253, 0.12702, 0.02228, 0.02015, 0.06094, 0.06966, 0.00153, 0.00772, 0.04025, 0.02406, 0.06749, 0.07507, 0.01929, 0.00095, 0.05987, 0.06327, 0.09056, 0.02758, 0.00978, 0.02360, 0.00150, 0.01974, 0.00074 ] # Returns the cross-entropy of the given string with respect to # the English unigram frequencies, which is a positive # floating-point number. def get_entropy(str): sum, ignored = 0, 0 for c in str: if c.isalpha(): sum += np.log(ENGLISH_FREQS[ord(c.lower()) - 97]) else: ignored += 1 return -sum / np.log(2) / (len(str) - ignored) # Returns the entropies when the given string is decrypted with # all 26 possible shifts, where the result is an array of tuples # (int shift, float enptroy) - # e.g. [(0, 2.01), (1, 4.95), ..., (25, 3.73)]. def get_all_entropies(str): result = [] for i in range(0, 26): result.append((i, get_entropy(decipher(str, key=i)))) return result def cmp_to_key(mycmp): 'Convert a cmp= function into a key= function' class K(object): def __init__(self, obj, *args): self.obj = obj def __lt__(self, other): return mycmp(self.obj, other.obj) < 0 def __gt__(self, other): return mycmp(self.obj, other.obj) > 0 def __eq__(self, other): return mycmp(self.obj, other.obj) == 0 def __le__(self, other): return mycmp(self.obj, other.obj) <= 0 def __ge__(self, other): return mycmp(self.obj, other.obj) >= 0 def __ne__(self, other): return mycmp(self.obj, other.obj) != 0 return K def comparator(x, y): if x[1] < y[1]: return -1 elif x[1] > y[1]: return 1 elif x[0] < y[0]: return -1 elif x[0] > y[0]: return 1 else: return 0 Explanation: Криптоанализ Взломать шифр Цезаря можно, используя частоты букв алфавита и вычисление наименьшей энтропии Код для взлома шифров на английском языке: End of explanation text = 'hello' ciphered = cipher(text, alphabet='abcdefghijklmnopqrstuvwxyz', key=14) print('Initial text: ', text) print('Ciphered text: ', ciphered) entropies = get_all_entropies(ciphered) entropies.sort(key=cmp_to_key(comparator)) best_shift = entropies[0][0] cracked_val = decipher(ciphered, key=best_shift) print('\nBest guess:') print('%d rotations\nDeiphered text: %s\n' % (best_shift, cracked_val)) print('=========\nFull circle:') for i in range(0, 26): print('%d -\t%s' % (i, decipher(ciphered, key=i))) Explanation: Демонстрация работы кода: End of explanation import string class Vigenere(object): def __init__(self, key): self.key = key self.tabula_recta = self.generate_tabula_recta() def encrypt(self, msg): msg_l = len(msg) key = self.adjust_key(self.key, msg_l) return ''.join(self.tabula_recta[msg[i]][key[i]] for i in range(msg_l)) def decrypt(self, msg): msg_l, t = len(msg), self.tabula_recta key = self.adjust_key(self.key, msg_l) return ''.join( list(t[key[i]].keys())[ list(t[key[i]].values()).index(msg[i]) ] for i in range(msg_l) ) def generate_tabula_recta(self): alphabet = a = list(string.ascii_uppercase) n = len(alphabet) tabula_recta = dict() for index, row_c in enumerate(alphabet): tabula_recta[row_c] = dict() for col_c in alphabet: tabula_recta[row_c][col_c] = a[(a.index(col_c) + index) % n] return tabula_recta def adjust_key(self, key, length): key_len = len(key) return ''.join([key[(i + key_len) % key_len] for i in range(length)]) def get_key(self): return self.key def 
get_tabula_recta(self): return self.tabula_recta def pretty_print(self, d, space=3, fill='-'): strs = ''.join('{{{0}:^{1}}}'.format( str(i), str(space)) for i in range(len(d) + 1) ) std = sorted(d) print(strs.format(" ", *std)) for x in std: print(strs.format(x, *(d[x].get(y, fill) for y in std))) Explanation: Шифр Виженера Шифр Виженера — метод полиалфавитного шифрования буквенного текста с использованием ключевого слова. Код алгоритма: End of explanation v = Vigenere('TEST') v.pretty_print(v.get_tabula_recta(), space=3) Explanation: Tabula Recta - таблицы Виженера: End of explanation key = 'LEMON' message = 'ATTACKATDAWN' vigenere = Vigenere(key) cipher = vigenere.encrypt(message) deciphered = vigenere.decrypt(cipher) print('Key:\t\t', key) print('Message:\t', message) print('Ciphered:\t', cipher) print('Deciphered:\t', deciphered) Explanation: Демонстрация работы алгоритма: End of explanation import string import random class OneTimePad(object): def __init__(self, key=None, alphabet=string.ascii_uppercase): self.alphabet = alphabet self.key = key def crypt(self, msg, mode='original', encrypt=True): if not self.key: self.key = self.generate_secure_key(len(msg)) if mode == 'original': return self.crypt_original(msg) else: return self.crypt_by_mod(msg, encrypt) def crypt_original(self, msg): return ''.join( chr(ord(msg[i]) ^ ord(self.key[i])) for i in range(len(msg)) ) def crypt_by_mod(self, msg, encrypt=True): if encrypt: return self.encrypt_by_mod(msg) else: return self.decrypt_by_mod(msg) def encrypt_by_mod(self, msg): a, l, a_l = self.alphabet, len(msg), len(self.alphabet) return ''.join( a[(a.index(msg[i]) + a.index(self.key[i])) % a_l] for i in range(l) ) def decrypt_by_mod(self, msg): a, l, a_l = self.alphabet, len(msg), len(self.alphabet) return ''.join( a[(a.index(msg[i]) - a.index(self.key[i])) % a_l] for i in range(l) ) def generate_secure_key(self, length): return ''.join( random.SystemRandom().choice(self.alphabet) for _ in range(length) ) Explanation: Шифр Вернама Алгоритм существует в нескольких вариантах, например, оригинальный вариант с исключающим ИЛИ по сообщению и секретному ключу, а также шифр Вернама по модулю m, в котором знаки открытого текста, шифрованного текста и ключа принимают значения из кольца вычетов множества знаков алфавита. Код алгоритма: End of explanation print('===== The original XOR version =====') message = 'ALLSWELLTHATENDSWELL' key = 'EVTIQWXQVVOPMCXREPYZ' pad = OneTimePad(key) cipher = pad.crypt(message) deciphered = pad.crypt(cipher) print('Original:\t', message) print('The key:\t', key) print('Cipher\t\t', [ord(c) for c in cipher]) print('Deciphered:\t', deciphered) print('\n======= The modulo version =======') message = 'HELLO' pad = OneTimePad() cipher = pad.crypt('HELLO', mode='mod', encrypt=True) deciphered = pad.crypt(cipher, mode='mod', encrypt=False) print('Original:\t', message) print('The key:\t', pad.key) print('Cipher\t\t', cipher) print('Deciphered:\t', deciphered) Explanation: Демонстрация работы: End of explanation
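The modular-exponentiation core shared by the Diffie-Hellman and RSA labs above fits in a few lines with Python's built-in pow; the parameters here are small textbook values, not the ones used in the lab:

p, g = 23, 5                        # public modulus and generator
a, b = 6, 15                        # the two parties' private keys
A, B = pow(g, a, p), pow(g, b, p)   # public values exchanged over the open channel
assert pow(B, a, p) == pow(A, b, p) # both sides derive the same shared secret
print(pow(B, a, p))                 # 2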
8,274
Given the following text description, write Python code to implement the functionality described below step by step Description: SVHN Preprocessing This notebook implements SVHN pre-processing. The key steps are Step1: The following code reads the images and crops according to the steps above. Then it encodes the label for each of these pieces of data. Most of this code migrated to the python script file. Step2: Let's visualize the training and test data sets. I will pull out four random images from each and apply the cropping function. Step3: Repeat for the test data too. Step4: The following code tests my extraction and cropping routines for the entire dataset.
Python Code: import scipy.ndimage as img import scipy.misc as misc import h5py import numpy as np import matplotlib.pyplot as plt import random as rnd import os import sklearn.preprocessing as skproc from sklearn.preprocessing import OneHotEncoder from sklearn.preprocessing import scale import DigitStructFile import cPickle as pkl %matplotlib inline train_struct_loc = '/Users/pjmartin/Documents/Udacity/MachineLearningProgram/Project5/udacity-mle-project5/data/train/digitStruct.mat' test_struct_loc = '/Users/pjmartin/Documents/Udacity/MachineLearningProgram/Project5/udacity-mle-project5/data/test/digitStruct.mat' train_loc_root = '/Users/pjmartin/Documents/Udacity/MachineLearningProgram/Project5/udacity-mle-project5/data/train/' test_loc_root = '/Users/pjmartin/Documents/Udacity/MachineLearningProgram/Project5/udacity-mle-project5/data/test/' img_loc1 = train_loc_root + '1.png' img_loc1_test = test_loc_root + '1.png' # read in as B/W # img1 = img.imread(img_loc1) img1 = img.imread(img_loc1,mode='L') img1_test = img.imread(img_loc1_test,mode='L') plt.imshow(img1) print np.shape(img1) print img1[0][0] plt.imshow(img1_test) print np.shape(img1_test) train_digitstruct = DigitStructFile(train_struct_loc) test_digitstruct = DigitStructFile(test_struct_loc) train1_dict = train_digitstruct.getDigitStructure(0) print train1_dict test1_dict = test_digitstruct.getDigitStructure(0) print test1_dict # Now, lets mess with the 1 image to crop to the bounding box. def crop_dims(img_dict): mintop = int(np.min(img_dict['top'])) top = int(mintop - np.ceil(0.1*mintop)) minleft = int(np.min(img_dict['left'])) left = int(minleft - np.ceil(0.1*minleft)) total_height = int(np.max(img_dict['height'])) height = int(total_height + np.ceil(0.05*total_height)) total_width = int(np.sum(img_dict['width'])) width = int(total_width + np.ceil(0.15*total_width)) bottom = mintop + height right = minleft + width return [top, left, bottom, right] # Tries to crop a square... 
def square_dims(img_dict): mintop = np.min(img_dict['top']) minleft = np.min(img_dict['left']) height = np.max(img_dict['height']) width = np.sum(img_dict['width']) center_from_left = minleft + np.floor(width / 2.0) center_from_top = mintop + np.floor(height / 2.0) max_dim = max([height, width]) + 0.1*(max([height, width])) new_left = int(max([0, center_from_left - np.floor(max_dim/2.0)])) new_top = int(max([0, center_from_top - np.floor(max_dim/2.0)])) return [new_top, new_left, int(new_top + max_dim), int(new_left + max_dim)] train1_dims = crop_dims(train1_dict) test1_dims = crop_dims(test1_dict) print train1_dims print test1_dims train1_square_dims = square_dims(train1_dict) test1_square_dims = square_dims(test1_dict) print train1_square_dims print test1_square_dims # img1_crop = img1[min_top1:bottom1,min_left1:right1] img1_crop = img1[train1_dims[0]:train1_dims[2],train1_dims[1]:train1_dims[3]] img1_crop_square = img1[train1_square_dims[0]:train1_square_dims[2],train1_square_dims[1]:train1_square_dims[3]] img1_test_crop = img1_test[test1_dims[0]:test1_dims[2],test1_dims[1]:test1_dims[3]] img1_test_crop_square = img1_test[test1_square_dims[0]:test1_square_dims[2],test1_square_dims[1]:test1_square_dims[3]] plt.imshow(img1_crop) plt.imshow(img1_crop_square) plt.imshow(misc.imresize(img1_crop_square, (32,32))) # Crop to 32x32 # img1_rs = misc.imresize(img1_crop, (32,32)) # plt.imshow(img1_rs) plt.imshow(img1_test_crop) plt.imshow(img1_test_crop_square) img1_test_rs = misc.imresize(img1_test_crop, (32,32)) plt.imshow(img1_test_rs) Explanation: SVHN Preprocessing This notebook implements SVHN pre-processing. The key steps are: Read in some sample images and play with them. Pull in full data (a) Extract labels and dimensions for each bounding box (height, width, left, top); (b) Resize and crop around the bounding box for each image; (c) Store in something more useful for python, such as a pickle file. End of explanation # This function extracts and crops the indexed image. def extract_and_crop(img_loc, img_dict, resize): curr_img = img.imread(img_loc + img_dict['name'],mode='L') img_shape = np.shape(curr_img) bb_top = np.min(img_dict['top']) bb_left = np.min(img_dict['left']) bb_height = np.max(img_dict['height']) bb_twidth = np.sum(img_dict['width']) # Add some pixel buffer before cropping. min_top = int( bb_top - 0.1*bb_top ) min_left = int( bb_left - 0.1*bb_left ) if min_left < 0: min_left = 0 if min_top < 0: min_top = 0 # ... a little less on the height total_height = int( bb_height + 0.05*bb_height ) total_width = int( bb_twidth + 0.15*bb_twidth ) curr_img_crop = curr_img[min_top:min_top+total_height,min_left:min_left+total_width] img_rs = misc.imresize(curr_img_crop, (resize,resize)) return img_rs # This function takes a digit struct and creates a one hot encoding of the data. def extract_label(img_dict, encoder, max_len): # Build the label data with one hot encoding. street_label = np.array(img_dict['label']).astype(int) # Replace any instances of 10 with 0 - needed for one-hot encoding. 
street_label[street_label == 10] = 0 curr_len = np.shape(street_label)[0] len_onehot = encoder.fit_transform(curr_len) y_onehot = np.concatenate((len_onehot, encoder.fit_transform(street_label.reshape(-1,1))),axis=0) # Create the padding for MAX_LENGTH - curr_len if max_len - curr_len > 0: nodigit_padding = np.array([10 for i in range(max_len-curr_len)]) padding_onehot = encoder.fit_transform(nodigit_padding.reshape(-1,1)) y_onehot = np.concatenate((y_onehot, padding_onehot), axis=0) return y_onehot def generate_svhn_dataset(file_loc, n_vals, n_labels, crop_size, max_len): # Load from the digitstruct mat file. fname = os.path.join(file_loc, "digitStruct.mat") digitstruct = DigitStructFile(fname) data_len = len(digitstruct.digitStructName) X = np.zeros((data_len, crop_size, crop_size)) y = np.zeros((data_len, n_labels, n_vals)) invalid_idxs = [] # Encoder for label generation. enc = skproc.OneHotEncoder(n_values=n_vals,sparse=False) for i in range(data_len): curr_dict = digitstruct.getDigitStructure(i) street_num_len = len(np.array(curr_dict['label'])) if i % 1000 == 0: print "Processed through imgae " + str(i) if street_num_len <= 5 and street_num_len > 0: # Extract the label curr_y = extract_label(curr_dict, enc, max_len) curr_X = extract_and_crop(file_loc, curr_dict, crop_size) y[i,:,:] = curr_y X[i,:,:] = curr_X else: invalid_idxs.append(i) print "Invalid number! Index = " + str(i) X = np.delete(X,invalid_idxs,axis=0) y = np.delete(y,invalid_idxs,axis=0) return { 'data' : X, 'labels' : y } def pickle_svhn(name, dataset): fname = "svhn_" + name + ".pkl" svhn_pkl_file = open(fname, 'wb') pkl.dump(dataset, svhn_pkl_file, -1) svhn_pkl_file.close() def load_svhn_pkl(fname): svhn_pkl_file = open(fname, 'rb') loaded_dataset = pkl.load(svhn_pkl_file) svhn_pkl_file.close() return loaded_dataset Explanation: The following code reads the images and crops according to the steps above. Then it encodes the label for each of these pieces of data. Most of this code migrated to the python script file. End of explanation ds_train_fname = os.path.join(train_loc_root, "digitStruct.mat") train_digitstruct = DigitStructFile(ds_train_fname) train_len = len(train_digitstruct.digitStructName) n_samples = 4 train_image_idxs = rnd.sample(range(train_len), n_samples) print "Indices: " + str(train_image_idxs) X_train_sample = np.zeros((n_samples, 32, 32)) y_train_sample = np.zeros((n_samples, 6, 11)) label_enc = skproc.OneHotEncoder(n_values=11,sparse=False) idx = 0 for i in train_image_idxs: train_dict = train_digitstruct.getDigitStructure(i) street_num_len = len(np.array(train_dict['label'])) if street_num_len <= 5 and street_num_len > 0: curr_y = extract_label(train_dict, label_enc, 5) curr_X = extract_and_crop(train_loc_root, train_dict, 32) y_train_sample[idx,:,:] = curr_y X_train_sample[idx,:,:] = curr_X idx = idx + 1 plt.imshow(X_train_sample[1]) y_train_sample[1] train_img_loc = train_loc_root + str(train_image_idxs[0]+1) + '.png' train_img = img.imread(train_img_loc) plt.imshow(train_img) Explanation: Let's visualize the training and test data sets. I will pull out four random images from each and apply the cropping function. 
End of explanation ds_test_fname = os.path.join(test_loc_root, "digitStruct.mat") test_digitstruct = DigitStructFile(ds_test_fname) test_len = len(test_digitstruct.digitStructName) test_image_idxs = rnd.sample(range(test_len), 4) print "Indices: " + str(test_image_idxs) X_test_sample = np.zeros((n_samples, 32, 32)) y_test_sample = np.zeros((n_samples, 6, 11)) label_enc = skproc.OneHotEncoder(n_values=11,sparse=False) idx = 0 for i in test_image_idxs: test_dict = test_digitstruct.getDigitStructure(i) street_num_len = len(np.array(test_dict['label'])) if street_num_len <= 5 and street_num_len > 0: curr_y = extract_label(test_dict, label_enc, 5) curr_X = extract_and_crop(test_loc_root, test_dict, 32) y_test_sample[idx,:,:] = curr_y X_test_sample[idx,:,:] = curr_X idx = idx + 1 plt.imshow(X_test_sample[3]) y_test_sample[3] test_img_loc = test_loc_root + str(test_image_idxs[3]+1) + '.png' test_img = img.imread(test_img_loc) plt.imshow(test_img) Explanation: Repeat for the test data too. End of explanation # Load and process the training data! train_dataset = generate_svhn_dataset(train_loc_root, 11, 6, 32, 5) print np.shape(train_dataset['data']) print np.shape(train_dataset['labels']) plt.imshow(train_dataset['data'][0]) # Send this data set to pkl pickle_svhn("train", train_dataset) # Now the test data! test_dataset = generate_svhn_dataset(test_loc_root, 11, 6, 32, 5) print np.shape(test_dataset['data']) print np.shape(test_dataset['labels']) plt.imshow(test_dataset['data'][0]) pickle_svhn("test", test_dataset) Explanation: The following code tests my extraction and cropping routines for the entire dataset. End of explanation
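Since each label built by `extract_label` is a 6×11 one-hot matrix — one row for the sequence length, five digit rows, and class 10 reserved for "no digit" padding — the encoding can be sanity-checked by decoding a matrix back into a street number. The decoder below is an illustrative sketch consistent with that layout, not part of the original pipeline.

```python
import numpy as np

def decode_label(y_onehot):
    # Row 0 holds the sequence length; rows 1..5 hold digit slots (class 10 = "no digit").
    classes = np.argmax(y_onehot, axis=1)
    length = int(classes[0])
    digits = ''.join(str(d) for d in classes[1:1 + length] if d != 10)
    return length, digits

# Hand-built example: the street number "25", padded out to the 5 digit slots.
example = np.zeros((6, 11))
example[0, 2] = 1      # length = 2
example[1, 2] = 1      # first digit is 2
example[2, 5] = 1      # second digit is 5
example[3:, 10] = 1    # remaining slots carry the "no digit" class
print(decode_label(example))   # expected: (2, '25')
```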
8,275
Given the following text description, write Python code to implement the functionality described below step by step Description: Multi-task recommenders Learning Objectives 1. Training a model which focuses on ratings. 2. Training a model which focuses on retrieval. 3. Training a joint model that assigns positive weights to both ratings & retrieval models. Introduction In the basic retrieval notebook we built a retrieval system using movie watches as positive interaction signals. In many applications, however, there are multiple rich sources of feedback to draw upon. For example, an e-commerce site may record user visits to product pages (abundant, but relatively low signal), image clicks, adding to cart, and, finally, purchases. It may even record post-purchase signals such as reviews and returns. Integrating all these different forms of feedback is critical to building systems that users love to use, and that do not optimize for any one metric at the expense of overall performance. In addition, building a joint model for multiple tasks may produce better results than building a number of task-specific models. This is especially true where some data is abundant (for example, clicks), and some data is sparse (purchases, returns, manual reviews). In those scenarios, a joint model may be able to use representations learned from the abundant task to improve its predictions on the sparse task via a phenomenon known as transfer learning. For example, this paper shows that a model predicting explicit user ratings from sparse user surveys can be substantially improved by adding an auxiliary task that uses abundant click log data. In this jupyter notebook, we are going to build a multi-objective recommender for Movielens, using both implicit (movie watches) and explicit signals (ratings). Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook. Imports Let's first get our imports out of the way. Step1: NOTE Step2: Preparing the dataset We're going to use the Movielens 100K dataset. Step3: And repeat our preparations for building vocabularies and splitting the data into a train and a test set Step4: A multi-task model There are two critical parts to multi-task recommenders Step5: Rating-specialized model Depending on the weights we assign, the model will encode a different balance of the tasks. Let's start with a model that only considers ratings. Step6: The model does OK on predicting ratings (with an RMSE of around 1.11), but performs poorly at predicting which movies will be watched or not Step7: We get the opposite result
Python Code: # Installing the necessary libraries. !pip install -q tensorflow-recommenders !pip install -q --upgrade tensorflow-datasets Explanation: Multi-task recommenders Learning Objectives 1. Training a model which focuses on ratings. 2. Training a model which focuses on retrieval. 3. Training a joint model that assigns positive weights to both ratings & retrieval models. Introduction In the basic retrieval notebook we built a retrieval system using movie watches as positive interaction signals. In many applications, however, there are multiple rich sources of feedback to draw upon. For example, an e-commerce site may record user visits to product pages (abundant, but relatively low signal), image clicks, adding to cart, and, finally, purchases. It may even record post-purchase signals such as reviews and returns. Integrating all these different forms of feedback is critical to building systems that users love to use, and that do not optimize for any one metric at the expense of overall performance. In addition, building a joint model for multiple tasks may produce better results than building a number of task-specific models. This is especially true where some data is abundant (for example, clicks), and some data is sparse (purchases, returns, manual reviews). In those scenarios, a joint model may be able to use representations learned from the abundant task to improve its predictions on the sparse task via a phenomenon known as transfer learning. For example, this paper shows that a model predicting explicit user ratings from sparse user surveys can be substantially improved by adding an auxiliary task that uses abundant click log data. In this jupyter notebook, we are going to build a multi-objective recommender for Movielens, using both implicit (movie watches) and explicit signals (ratings). Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook. Imports Let's first get our imports out of the way. End of explanation # Importing the necessary modules import os import pprint import tempfile from typing import Dict, Text import numpy as np import tensorflow as tf import tensorflow_datasets as tfds import tensorflow_recommenders as tfrs Explanation: NOTE: Please ignore any incompatibility warnings and errors and re-run the above cell before proceeding. End of explanation ratings = tfds.load('movielens/100k-ratings', split="train") movies = tfds.load('movielens/100k-movies', split="train") # Select the basic features. ratings = ratings.map(lambda x: { "movie_title": x["movie_title"], "user_id": x["user_id"], "user_rating": x["user_rating"], }) movies = movies.map(lambda x: x["movie_title"]) Explanation: Preparing the dataset We're going to use the Movielens 100K dataset. End of explanation # Randomly shuffle data and split between train and test. 
tf.random.set_seed(42) shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False) train = shuffled.take(80_000) test = shuffled.skip(80_000).take(20_000) movie_titles = movies.batch(1_000) user_ids = ratings.batch(1_000_000).map(lambda x: x["user_id"]) unique_movie_titles = np.unique(np.concatenate(list(movie_titles))) unique_user_ids = np.unique(np.concatenate(list(user_ids))) Explanation: And repeat our preparations for building vocabularies and splitting the data into a train and a test set: End of explanation class MovielensModel(tfrs.models.Model): def __init__(self, rating_weight: float, retrieval_weight: float) -> None: # We take the loss weights in the constructor: this allows us to instantiate # several model objects with different loss weights. super().__init__() embedding_dimension = 32 # User and movie models. self.movie_model: tf.keras.layers.Layer = tf.keras.Sequential([ tf.keras.layers.experimental.preprocessing.StringLookup( vocabulary=unique_movie_titles, mask_token=None), tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dimension) ]) self.user_model: tf.keras.layers.Layer = tf.keras.Sequential([ tf.keras.layers.experimental.preprocessing.StringLookup( vocabulary=unique_user_ids, mask_token=None), tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension) ]) # A small model to take in user and movie embeddings and predict ratings. # We can make this as complicated as we want as long as we output a scalar # as our prediction. self.rating_model = tf.keras.Sequential([ tf.keras.layers.Dense(256, activation="relu"), tf.keras.layers.Dense(128, activation="relu"), tf.keras.layers.Dense(1), ]) # The tasks. self.rating_task: tf.keras.layers.Layer = tfrs.tasks.Ranking( loss=tf.keras.losses.MeanSquaredError(), metrics=[tf.keras.metrics.RootMeanSquaredError()], ) self.retrieval_task: tf.keras.layers.Layer = tfrs.tasks.Retrieval( metrics=tfrs.metrics.FactorizedTopK( candidates=movies.batch(128).map(self.movie_model) ) ) # The loss weights. self.rating_weight = rating_weight self.retrieval_weight = retrieval_weight def call(self, features: Dict[Text, tf.Tensor]) -> tf.Tensor: # We pick out the user features and pass them into the user model. user_embeddings = self.user_model(features["user_id"]) # And pick out the movie features and pass them into the movie model. movie_embeddings = self.movie_model(features["movie_title"]) return ( user_embeddings, movie_embeddings, # We apply the multi-layered rating model to a concatentation of # user and movie embeddings. self.rating_model( tf.concat([user_embeddings, movie_embeddings], axis=1) ), ) def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor: ratings = features.pop("user_rating") user_embeddings, movie_embeddings, rating_predictions = self(features) # We compute the loss for each task. rating_loss = self.rating_task( labels=ratings, predictions=rating_predictions, ) retrieval_loss = self.retrieval_task(user_embeddings, movie_embeddings) # And combine them using the loss weights. return (self.rating_weight * rating_loss + self.retrieval_weight * retrieval_loss) Explanation: A multi-task model There are two critical parts to multi-task recommenders: They optimize for two or more objectives, and so have two or more losses. They share variables between the tasks, allowing for transfer learning. 
In this jupyter notebook, we will define our models as before, but instead of having a single task, we will have two tasks: one that predicts ratings, and one that predicts movie watches. The user and movie models are as before: ```python user_model = tf.keras.Sequential([ tf.keras.layers.experimental.preprocessing.StringLookup( vocabulary=unique_user_ids, mask_token=None), # We add 1 to account for the unknown token. tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension) ]) movie_model = tf.keras.Sequential([ tf.keras.layers.experimental.preprocessing.StringLookup( vocabulary=unique_movie_titles, mask_token=None), tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dimension) ]) ``` However, now we will have two tasks. The first is the rating task: python tfrs.tasks.Ranking( loss=tf.keras.losses.MeanSquaredError(), metrics=[tf.keras.metrics.RootMeanSquaredError()], ) Its goal is to predict the ratings as accurately as possible. The second is the retrieval task: python tfrs.tasks.Retrieval( metrics=tfrs.metrics.FactorizedTopK( candidates=movies.batch(128) ) ) As before, this task's goal is to predict which movies the user will or will not watch. Putting it together We put it all together in a model class. The new component here is that - since we have two tasks and two losses - we need to decide on how important each loss is. We can do this by giving each of the losses a weight, and treating these weights as hyperparameters. If we assign a large loss weight to the rating task, our model is going to focus on predicting ratings (but still use some information from the retrieval task); if we assign a large loss weight to the retrieval task, it will focus on retrieval instead. End of explanation # Here, configuring the model with losses and metrics. # TODO 1: Here is your code. model = MovielensModel(rating_weight=1.0, retrieval_weight=0.0) model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1)) cached_train = train.shuffle(100_000).batch(8192).cache() cached_test = test.batch(4096).cache() # Training the ratings model. model.fit(cached_train, epochs=3) metrics = model.evaluate(cached_test, return_dict=True) print(f"Retrieval top-100 accuracy: {metrics['factorized_top_k/top_100_categorical_accuracy']:.3f}.") print(f"Ranking RMSE: {metrics['root_mean_squared_error']:.3f}.") Explanation: Rating-specialized model Depending on the weights we assign, the model will encode a different balance of the tasks. Let's start with a model that only considers ratings. End of explanation # Here, configuring the model with losses and metrics. # TODO 2: Here is your code. model = MovielensModel(rating_weight=0.0, retrieval_weight=1.0) model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1)) # Training the retrieval model. model.fit(cached_train, epochs=3) metrics = model.evaluate(cached_test, return_dict=True) print(f"Retrieval top-100 accuracy: {metrics['factorized_top_k/top_100_categorical_accuracy']:.3f}.") print(f"Ranking RMSE: {metrics['root_mean_squared_error']:.3f}.") Explanation: The model does OK on predicting ratings (with an RMSE of around 1.11), but performs poorly at predicting which movies will be watched or not: its accuracy at 100 is almost 4 times worse than a model trained solely to predict watches. Retrieval-specialized model Let's now try a model that focuses on retrieval only. End of explanation # Here, configuring the model with losses and metrics. # TODO 3: Here is your code. 
model = MovielensModel(rating_weight=1.0, retrieval_weight=1.0) model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1)) # Training the joint model. model.fit(cached_train, epochs=3) metrics = model.evaluate(cached_test, return_dict=True) print(f"Retrieval top-100 accuracy: {metrics['factorized_top_k/top_100_categorical_accuracy']:.3f}.") print(f"Ranking RMSE: {metrics['root_mean_squared_error']:.3f}.") Explanation: We get the opposite result: a model that does well on retrieval, but poorly on predicting ratings. Joint model Let's now train a model that assigns positive weights to both tasks. End of explanation
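Because the two loss weights are ordinary hyperparameters, the three configurations above can also be produced by a small sweep, which makes the rating-versus-retrieval trade-off easy to tabulate. This is a minimal sketch that reuses the `MovielensModel` class and the cached datasets defined earlier in this notebook.

```python
import tensorflow as tf

# Sweep a few (rating_weight, retrieval_weight) pairs and collect both metrics.
for rating_w, retrieval_w in [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]:
    model = MovielensModel(rating_weight=rating_w, retrieval_weight=retrieval_w)
    model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
    model.fit(cached_train, epochs=3, verbose=0)
    metrics = model.evaluate(cached_test, return_dict=True, verbose=0)
    print(f"weights=({rating_w}, {retrieval_w})  "
          f"top-100 accuracy={metrics['factorized_top_k/top_100_categorical_accuracy']:.3f}  "
          f"RMSE={metrics['root_mean_squared_error']:.3f}")
```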
8,276
Given the following text description, write Python code to implement the functionality described below step by step Description: Red bipartita de usuarios y palabras Autor Step2: Número de nodos y enlaces Código para leer el archivo de texto con la red y a partir del mismo obtener un arreglo con los enlaces y sus respectivos pesos, un arreglo con el total de nodos, un arreglo con los nodos de usuarios y un arreglo con los nodos de palabras. Step3: Proyecciones Código propio para hallar el total de enlaces de las proyecciones y validar con las obtenidas por medio de la librería NetworkX Step4: Proyecciones usando NetworkX Código para hallar las proyecciones con NetworkX. Se puede validar con el código anterior que se obtienen el mismo número de enlaces. Step5: Distribución de grados para la red bipartita completa Se grafica la distribución de grados de la red bipartita completa para validar que tenga un comportamiento cercano a la realidad donde los nodos con grados más altos son pocos y los nodos de grado bajo forman un gran pico, es decir, que se acerque a una distribución power law. Step6: <img src="imagenes/distribucion_total_network_gephi.png"> Distribución de grados para la red proyectada de usuarios Step7: <img src="imagenes/distribucion_users_network_gephi.png"> Distribución de grados para la red proyectada de palabras Step8: <img src="imagenes/distribucion_words_network_gephi.png"> Power Law Distribution para la red bipartita completa (Log) Step9: Power Law Distribution para la red proyectada de usuarios (Log) Step10: Power Law Distribution para la red proyectada de palabras (Log) Step11: Se puede observar que tanto la red original como las proyecciones tienen distribuciones que se acercan a power law, lo cual es un buen indicio de que las redes pueden entragar datos interesantes y reales. Componentes de las redes A continuación se hallan los componentes gigantes de las redes, es decir, de la red bipartita completa y sus proyecciones. Las comunidades y preguntas se responderán a partir de los componentes gigantes que son los que tienen información relevante en estas redes. Muchas palabras usadas una sóla vez por uno o pocos usuarios quedan apartadas formando pequeños componentes que no son relevantes. Además varios de estos componentes pueden ser palabras que sólo un usario uso, por lo que se vuelven nodos de grado cero en la proyección de palabras. Se debe mencionar que durante este estudio se miraron los tamaños de los componentes encontrados y la diferencia en tamaño con el componente gigante justificaba su omisión. Step12: Coeficiente de Clustering Step13: Los coeficientes de clustering obtenidos anteriormente tienen sentido y dan un buen indicio para encontrar comunidades. Es normal que el coeficiente sea cero en la red bipartita pues esta no tiene triangulos, y este coeficiente tiende a medir los triangulos que se pueden formar en una red. Para las proyecciones es un buen valor pues si este número es significativamente mayor que el coeficiente de clustering ofrecido por una red aleatoria con las mismas caracteristicas, significa que hay un buen indice de encontrar comunidades. Archivos con las redes Los siguientes metodos sirven para guardar las redes en formatos de archivo que Gephi pueda leer. Step14: Comunidades A continuación se hallan comunidades en la proyección de palabras con el fin de responder las preguntas planteadas. Se comparan las comunidades encontradas con gephi y con el metodo de Louvain, las cuales usan el mismo método. 
Esto se hace con el fin de validar que las comunidades encontradas si tengan sentido o se parezcan, ya que el metodo puede arrojar diferentes resultados en las corridas (optimos locales). No se hallan comunidades en la proyección de usuarios porque visualmente las palabras brindan más información del tipo de comunidad y a partir de ellas se pueden obtener los usuarios que las usaron, es decir, a la hora de segmentar el mercado, la relación de las palabras, reflejará la relación de los usuarios.
Python Code: # Librerías necesarias para correr el proyecto import networkx as nx from networkx.algorithms import bipartite import matplotlib.pyplot as plt import seaborn as sns import numpy as np import operator import community from scipy.stats import powerlaw %matplotlib inline sns.set() Explanation: Red bipartita de usuarios y palabras Autor: Camilo Torres Botero Profesor: Sergio Pulido Tamayo Red bipartita de palabras obtenidas a partir de tweets, compuesta de dos tipos de nodos: palabras y usuarios. El objetivo de esta red es encontrar relaciones entre usuarios a partir de las palabras, dando peso a las palabras más usadas. Se esperan encontrar comunidades en las palabras que puedan clasificarse en tópicos, de los cuales se pueden sacar grupos de personas con gustos similares o temas comunes y afines. Los grupos obtenidos pueden generar interés en empresas que buscan clientes potenciales, lanzamientos de nuevos productos o segmentación del mercado. Las siguientes son las preguntas que se pretende responder con este estudio: <div class="alert alert-warning" role="alert">¿Qué comunidades se pueden encontrar entre usuarios a partir del lenguaje que usan?</div> <div class="alert alert-warning" role="alert">¿Qué temas se pueden identificar como más comentados por los usuarios?</div> <div class="alert alert-warning" role="alert">¿Es posible identificar algún segmento de mercado en las comunidades obtenidas?</div> <div class="alert alert-warning" role="alert">¿Es posible pensar en un producto que se pueda ofrecer a alguna de las comunidades obtenidas?</div> Limpieza de la base de datos La base de datos se encuenta almacenada en MongoDB debido a que cada tweet está en formato json. A través de un script en python se realiza la conexión a la base de datos con el fin de crear en un archivo de texto la red bipartita de usuarios y palabras. El script se encuentra en la carpeta del proyecto y se llama "get_network_data.py". Dentro de la base de datos existe una colección llamada "tweets_users" que contiene los usuarios y por cada usuario las palabras que han usado en los tweets. Las palabras seleccionadas para cada usuario fueron sustantivos, adjetivos y emojis. Esto se realizó por medio de una herramienta de Standford llamada POS Tagger (Part-Of-Speech Tagger) con la cual se analizaba cada texto y se hacía la respectiva separación. Se puede ver información y documentación de la herramienta en el siguiente enlace: https://nlp.stanford.edu/software/tagger.shtml Cada usuario puede usar una palabra o un emoji más de una vez lo cual se usa como el peso en los enlaces de la red, igualmente este peso es tenido en cuenta en le momento de realizar las proyecciones de la red. 
A continuación se muestra un ejemplo de un registro de la colección "tweets_users": <div class="alert alert-info" role="alert"> <small> <pre> { "_id" : ObjectId("58e6c881a9f8a04c22c5a237"), "id" : "1070154709", "name" : "Daniela Fuentes✨", "text_emojis_nouns" : [ "JAJAJAJJAJAJAJAJAJAJJAJAJJA", "❤", "mejor", "Feliz", "cumpleaños", "bendiciones", "😆", "💜", "💜", "✨", "✨", "día", "@villalobossebas", "❤", "Justo", "tantoo", "tengas", "lindo", "❤" ], "emojis" : [ "❤", "😆", "💜", "💜", "✨", "❤" ], "text_types" : { "nouns" : [ "JAJAJAJJAJAJAJAJAJAJJAJAJJA", "Feliz", "cumpleaños", "bendiciones", "día", "@villalobossebas" ], "determiners" : [ "El", "un" ], "adjectives" : [ "mejor", "✨", "Justo", "tantoo", "tengas", "lindo", "❤" ], "prepositions" : [ "de", "por", "de" ], "pronouns" : [ "todos", "❤", "esto", "te", "que" ], "adverbs" : [ "siempre", "@JoseGaleanoR" ], "punctuation" : [ ",", ",", ",", "," ], "conjunctions" : [ "cuando" ], "verbs" : [ "estoy", "apuntó", "dormir", "llega", "amo" ] } } </pre> </small> </div> Dentro del campo "text_emojis_nouns" se encuentran ya separados los sustantivos, adjetivos y emojis necesarios para armar la red. Como se puede ver en el json, hay palabras que deben ser limpiadas de los registros debido a que el POS tagger las clasificó como adjetivos o sustantivos de manera erronea. Por ejemplo los "jajaja" o algunos adverbios o verbos mal clasificados. La limpieza de todos estos caracteres ruidosos para la red se realizó en varias fases: El primer paso es hacer una limpieza en el momento de la creación del archivo de texto con los enlaces, es decir, en el archivo "get_network_data.py" remover la mayor cantidad de palabras ruidosas que se pueda. Para esto se usó un libería de python llamada "Natural Language Toolkit" (NLTK) con la cual se pueden hacer diferentes procesamientos para trabajar con lenguaje natural en python. Entre estos procesos está la detección de stopwords o palabras de parada que comunmente son preposiciones o conectores. Además de remover las palabras de parada, se quitaron también las palabras que comenzaban por '@' debido a que son usuarios mencionados en tweets y que fueron clasificados como sustantivos o adjetivos. Se puede obtener información de la librería NLTK en el siguiente enlace: http://www.nltk.org/ El segundo paso de la limpieza fue crear una red con NetworkX utilizando el archivo entregado por el paso anterior. A partir de esta red se creó la proyección de palabras y se analizó cada una de ellas organizadas por grado de mayor a menor. Con este proceso se encontraron palabras y caracteres que eran ruidosos para la red. Este fue un proceso manual y dió como resultado dos arreglos, uno de palabras y otro de caracteres a remover. 
A continuación se muestran los arreglos obtenidos: remove_words: ['vez', 'día', 'cosas', 'dia', 'asi', '🏻', 'días', 'parte', 'man', 'fin', 'necesito', 'aqui', 'ser', 'pra', '🏼', 'sera', 'hey', 'fav', "'ll", 'aja', 'qlo', 'sdds', 'dejaste', 'sas', 'fiz', 'dan', 'heauheauehae', 'lok', 'ami', 'hablando', 'jfghdf', 'dfkjgd', '12am', "'re", 'aya', 'u.u', '🏽', '▪', '#jct', 'jasldjaslñfjasdñlasjfasd', '2016/6/25', 'xxi', 'ehh', 'aww', 'comí', 'mrc', 'prrrra', 'aksjsks', 'háblame', 'rts', '・', 'pasatela', 'hahsahsa', 'años', 'tweets', 'twitter', 'twitiir', 'twittera', '#twitteroff', 'retwits', '#twitter', '#cd', 'twiter', 'twits', 'twitt', 'twitero', 'tweetdeck', 'retweet', 'twets', 'coisas', 'delosmismoscreadoresdeperreopagomelo', 'yo-yo-yo'] remove_chars: ['jaja', 'jeje', 'kkk', 'haha', 'aaa', 'eee', 'ooo', 'iii', 'yyy', 'uff', 'ddd', 'jiji', 'zzz', 'sss', 'rrr', 'uuu', '???', 'nnn'] Por último se volvió a correr el script "get_network_data.py" agregando los arreglos obtenidos en el punto anterior y validando en la creación de los enlaces que la palabra correspondiente no se econtrara dentro del arreglo "remove_words" o que no contuviera alguna de las cadenas de caracteres del arreglo "remove_chars". Los pasos mencionados en esta limpieza se pueden observar en el script contenido en la carpeta del proyecto. Con esta limpieza se borraron en total 8332 palabras ruidosas para la red. El resultado de este proceso es un archivo de texto llamado "edges_file_total_network_weights_undirected.txt" a partir del cual se puede crear la red en python y el cual también puede ser utilizado para graficar la red en bipartita completa en gephi. Librerías usadas End of explanation En el arreglo all_edges se guardan los enlaces de la red con pesos. El arreglo all_edges_aux sirve para contar el número de veces que aparece un enlace y así agregarlo como peso del enlace en all_edges all_edges = [] all_edges_aux = [] # Archivo de texto que contiene los enlaces de la red (se encuenta en la carpeta del proyecto) with open('edges_file_total_network_weights_undirected.txt', encoding="utf8") as f: next(f) for line in f: edge = line.replace('\n', '').split('\t') # Condicional para validar si ya existe un enlace y agregarlo como peso if (edge[0], edge[1]) in all_edges_aux: edge_index = all_edges_aux.index((edge[0], edge[1])) # Se suma 1 al peso anterioir que tenía el enlace all_edges[edge_index] = (edge[0], edge[1], all_edges[edge_index][2] + 1) else: # Se agrega el nuevo enlace al auxiliar y al original. En el arreglo original de enlaces se inicia # con un peso de 1 all_edges.append((edge[0], edge[1],1)) all_edges_aux.append((edge[0], edge[1])) # Total de enlaces de la red total_count_edges = len(all_edges) # Arreglos para almacenar nodos all_nodes = set() users_nodes = set() words_nodes = set() # Proceso para obtener los nodos de la red for x,y,w in all_edges: users_nodes.add(x) words_nodes.add(y) all_nodes.add(x) all_nodes.add(y) print("El tamaño de la red está definido por los siguientes valores:") print("Total de enlaces:", len(all_edges)) print("Total de nodos:", len(all_nodes)) print("Total de nodos usuario:", len(users_nodes)) print("Total de nodos de palabras:", len(words_nodes)) Explanation: Número de nodos y enlaces Código para leer el archivo de texto con la red y a partir del mismo obtener un arreglo con los enlaces y sus respectivos pesos, un arreglo con el total de nodos, un arreglo con los nodos de usuarios y un arreglo con los nodos de palabras. 
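The filtering rule described above — discard a word if it appears in remove_words or if it contains any of the substrings in remove_chars — can be captured in a small helper. The sketch below only illustrates that rule with a subset of both lists; it is not the actual code from "get_network_data.py".

```python
# Illustrative subsets of the two lists shown above.
REMOVE_WORDS = {'vez', 'día', 'cosas', 'twitter'}
REMOVE_CHARS = ['jaja', 'jeje', 'kkk', 'haha', 'zzz']

def is_noisy(word):
    # A word is dropped if it is blacklisted outright or contains a noisy substring.
    w = word.lower()
    return w in REMOVE_WORDS or any(chunk in w for chunk in REMOVE_CHARS)

print([w for w in ['jajajaja', 'fútbol', 'twitter', 'música'] if not is_noisy(w)])
# expected: ['fútbol', 'música']
```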
End of explanation # Sets para guardar los enlaces de la proyección de usuarios y palabras respectivamente projection_users = set() projection_words = set() # Proyección de usuarios # Se recorren todos los nodos de palabras para sacar los usuarios que las usan for word in words_nodes: possible_nodes = set() # Si la palabra está en un enlace se agrega el usuario que la usó en un set de posibles nodos for edge in all_edges: if word in edge: possible_nodes.add(edge[0]) # Se recorren los posibles nodos obtenidos anteriormente, se valida que no existan los enlaces y se agregan for node1 in possible_nodes: for node2 in possible_nodes: if (node1, node2) not in projection_users and (node2, node1) not in projection_users: projection_users.add((node1, node2)) if (node1 != node2) else 0 # Proyección de palabras # Se recorren todos los nodos de usuarios para sacar las palabras que usan for user in users_nodes: possible_nodes = set() # Si el usuario está en un enlace se agrega la palabra que usó en un set de posibles nodos for edge in all_edges: if user in edge: possible_nodes.add(edge[1]) # Se recorren las posibles palabras, se valida que no existan los enlaces y se argegan for node1 in possible_nodes: for node2 in possible_nodes: if (node1, node2) not in projection_words and (node2, node1) not in projection_words: projection_words.add((node1, node2)) if (node1 != node2) else 0 # Resultado de las proyecciones print("Proyeccion usuarios total enlaces:",len(projection_users)) print("Proyeccion palabras total enlaces:",len(projection_words)) Explanation: Proyecciones Código propio para hallar el total de enlaces de las proyecciones y validar con las obtenidas por medio de la librería NetworkX End of explanation # Se crea el grafo bipartito con NetworkX tweets_users_words_graph = nx.Graph() # Se agregan los nodos para ambas particiones, es decir, usuarios y palabras # Para esto es necesario importar el módulo bipartite de NetworkX tweets_users_words_graph.add_nodes_from(users_nodes, bipartite=0) tweets_users_words_graph.add_nodes_from(words_nodes, bipartite=1) # Se agregan los enlaces obtenidos anteriormente con sus respectivos pesos tweets_users_words_graph.add_weighted_edges_from(all_edges) # Función usada para sumar los pesos de la red bipartita en las proyecciones. # Esta función es enviada en los parametros del metodo generic_weighted_projected_graph del módulo bipartite # Tomada de la documentación: # https://networkx.github.io/documentation/networkx-1.10/reference/generated/networkx.algorithms.bipartite.projection.generic_weighted_projected_graph.html def my_weight(G, u, v, weight='weight'): w = 0 for nbr in set(G[u]) & set(G[v]): w += G.edge[u][nbr].get(weight, 1) + G.edge[v][nbr].get(weight, 1) return w # Se hallan las proyecciones de usuarios y palabras projected_graph_users = bipartite.generic_weighted_projected_graph(tweets_users_words_graph, users_nodes, weight_function=my_weight) projected_graph_words = bipartite.generic_weighted_projected_graph(tweets_users_words_graph, words_nodes, weight_function=my_weight) # Resultados de las proyecciones. Se imprimen los primeros 10 enlaces de cada proyeccion para validar que tengan # los pesos. Se imprimen también el número de enlaces de cada proyección para validar con el punto anterior. 
print("Proyección grafo usuarios total enlaces:",projected_graph_users.edges(data=True)[0:10]) print() print("Proyección grafo palabras total enlaces:",projected_graph_words.edges(data=True)[0:10]) print() print("Proyección grafo usuarios total enlaces:",len(projected_graph_users.edges())) print() print("Proyección grafo palabras total enlaces:",len(projected_graph_words.edges())) Explanation: Proyecciones usando NetworkX Código para hallar las proyecciones con NetworkX. Se puede validar con el código anterior que se obtienen el mismo número de enlaces. End of explanation # Gráfica de la distribución de grados en python degrees_tweets_users_words_graph = list(tweets_users_words_graph.degree().values()) sns.distplot(degrees_tweets_users_words_graph) Explanation: Distribución de grados para la red bipartita completa Se grafica la distribución de grados de la red bipartita completa para validar que tenga un comportamiento cercano a la realidad donde los nodos con grados más altos son pocos y los nodos de grado bajo forman un gran pico, es decir, que se acerque a una distribución power law. End of explanation degrees_projected_graph_users = list(projected_graph_users.degree().values()) degrees_projected_graph_users = np.array(degrees_projected_graph_users) degrees_projected_graph_users = degrees_projected_graph_users[np.nonzero(degrees_projected_graph_users)] sns.distplot(degrees_projected_graph_users) Explanation: <img src="imagenes/distribucion_total_network_gephi.png"> Distribución de grados para la red proyectada de usuarios End of explanation degrees_projected_graph_words = list(projected_graph_words.degree().values()) degrees_projected_graph_words = np.array(degrees_projected_graph_words) degrees_projected_graph_words = degrees_projected_graph_words[np.nonzero(degrees_projected_graph_words)] sns.distplot(degrees_projected_graph_words) Explanation: <img src="imagenes/distribucion_users_network_gephi.png"> Distribución de grados para la red proyectada de palabras End of explanation logs_tweets_users_words_graph = np.log(degrees_tweets_users_words_graph) plt.hist(logs_tweets_users_words_graph, log=True) e, l, s = powerlaw.fit(degrees_tweets_users_words_graph) e, l, s Explanation: <img src="imagenes/distribucion_words_network_gephi.png"> Power Law Distribution para la red bipartita completa (Log) End of explanation logs_projected_graph_users = np.log(degrees_projected_graph_users) plt.hist(logs_projected_graph_users, log=True) e_users, l_users, s_users = powerlaw.fit(degrees_projected_graph_users) e_users, l_users, s_users Explanation: Power Law Distribution para la red proyectada de usuarios (Log) End of explanation logs_projected_graph_words = np.log(degrees_projected_graph_words) plt.hist(logs_projected_graph_words, log=True) e_words, l_words, s_words = powerlaw.fit(degrees_projected_graph_words) e_words, l_words, s_words Explanation: Power Law Distribution para la red proyectada de palabras (Log) End of explanation # Código para hallar el componente gigante de la red de usuarios y palabras tweets_users_words_components = sorted(nx.connected_component_subgraphs(tweets_users_words_graph), key = len, reverse=True) giant_component = tweets_users_words_components[0] giant_component_edges = giant_component.edges() print("Nodos del componente gigante de la red de usuarios y palabras:", len(giant_component.nodes())) print("Enlaces del componente gigante de la red de usuarios y palabras:", len(giant_component_edges)) print() # Código para hallar el componente gigante de la red de usuarios 
projected_graph_users_components = sorted(nx.connected_component_subgraphs(projected_graph_users), key = len, reverse=True) giant_component_users = projected_graph_users_components[0] giant_component_users_edges = giant_component_users.edges() print("Nodos del componente gigante de la red de usuarios:", len(giant_component_users.nodes())) print("Enlaces del componente gigante de la red de usuarios:", len(giant_component_users_edges)) print() # Código para hallar el componente gigante de la red de palabras projected_graph_words_components = sorted(nx.connected_component_subgraphs(projected_graph_words), key = len, reverse=True) giant_component_words = projected_graph_words_components[0] giant_component_words_edges = giant_component_words.edges() print("Nodos del componente gigante de la red de usuarios:", len(giant_component_words.nodes())) print("Enlaces del componente gigante de la red de palabras:", len(giant_component_words_edges)) # Este fragmento de código comentado fue usado para el proceso de limpieza, ordenando las palabras por grado # degrees_giant_component_words = sorted(giant_component_words.degree().items(), key=operator.itemgetter(1), reverse=True) # words_file = open("palabras_ordenadas_por_grado.txt", 'w', encoding="utf8") # for k in degrees_giant_component_words: # print(k, file=words_file) Explanation: Se puede observar que tanto la red original como las proyecciones tienen distribuciones que se acercan a power law, lo cual es un buen indicio de que las redes pueden entragar datos interesantes y reales. Componentes de las redes A continuación se hallan los componentes gigantes de las redes, es decir, de la red bipartita completa y sus proyecciones. Las comunidades y preguntas se responderán a partir de los componentes gigantes que son los que tienen información relevante en estas redes. Muchas palabras usadas una sóla vez por uno o pocos usuarios quedan apartadas formando pequeños componentes que no son relevantes. Además varios de estos componentes pueden ser palabras que sólo un usario uso, por lo que se vuelven nodos de grado cero en la proyección de palabras. Se debe mencionar que durante este estudio se miraron los tamaños de los componentes encontrados y la diferencia en tamaño con el componente gigante justificaba su omisión. 
End of explanation avg_clustering_tweets_users_words_graph = nx.average_clustering(tweets_users_words_graph) print("Promedio del coeficiente de clustering de la red de usuarios y palabras:", avg_clustering_tweets_users_words_graph) avg_clustering_projected_graph_users = nx.average_clustering(projected_graph_users) print("Promedio del coeficiente de clustering de la red de usuarios:", avg_clustering_projected_graph_users) N_users = len(projected_graph_users.nodes()) edges_user = projected_graph_users.edges() p_users = (2*len(edges_user))/(N_users*(N_users-1)) random_graph_users = nx.gnp_random_graph(N_users,p_users) avg_clustering_random_graph_users = nx.average_clustering(random_graph_users) print("Promedio del coeficiente de clustering de la red aleatoria de usuarios:", avg_clustering_random_graph_users) avg_clustering_projected_graph_words = nx.average_clustering(projected_graph_words) print("Promedio del coeficiente de clustering de la red de palabras:", avg_clustering_projected_graph_words) N_words = len(projected_graph_words.nodes()) edges_words = projected_graph_words.edges() p_words = (2*len(edges_words))/(N_words*(N_words-1)) random_graph_words = nx.gnp_random_graph(N_words,p_words) avg_clustering_random_graph_words = nx.average_clustering(random_graph_words) print("Promedio del coeficiente de clustering de la red aleatoria de palabras:", avg_clustering_random_graph_words) Explanation: Coeficiente de Clustering End of explanation def export_network_file(network_edges, file_name): network_edges_file = open(file_name, 'w') print('Source\tTarget\tType', file=network_edges_file) for k, v in sorted(network_edges): print(k+'\t'+v+'\tUndirected', file=network_edges_file) # export_network_file(giant_component_users_edges, 'giant_component_users_weights_edges_file.txt') # export_network_file(giant_component_words_edges, 'giant_component_words_weights_edges_file.txt') def export_to_gephi(G, file_name): nx.write_gexf(G, file_name) # export_to_gephi(giant_component, "giant_component_gephi.gexf") # export_to_gephi(giant_component_words, "giant_component_words_gephi.gexf") # export_to_gephi(giant_component_users, "giant_component_users_gephi.gexf") Explanation: Los coeficientes de clustering obtenidos anteriormente tienen sentido y dan un buen indicio para encontrar comunidades. Es normal que el coeficiente sea cero en la red bipartita pues esta no tiene triangulos, y este coeficiente tiende a medir los triangulos que se pueden formar en una red. Para las proyecciones es un buen valor pues si este número es significativamente mayor que el coeficiente de clustering ofrecido por una red aleatoria con las mismas caracteristicas, significa que hay un buen indice de encontrar comunidades. Archivos con las redes Los siguientes metodos sirven para guardar las redes en formatos de archivo que Gephi pueda leer. End of explanation # Código para hallar las comunidades usando NetworkX y el módulo community partition = community.best_partition(giant_component_words) # Se agregan las comunidades escogidas como atributos de los nodos # Esto se hace con el fin de pintar las comunidades obtenidas con NetworkX en Gephi nx.set_node_attributes(giant_component_words, 'louvain', partition) # El código que está comentado a continuación exporta el archivo con las comunidades obtenidas # export_to_gephi(giant_component_words, "giant_component_words_gephi_louvain.gexf") Explanation: Comunidades A continuación se hallan comunidades en la proyección de palabras con el fin de responder las preguntas planteadas. 
Se comparan las comunidades encontradas con gephi y con el metodo de Louvain, las cuales usan el mismo método. Esto se hace con el fin de validar que las comunidades encontradas si tengan sentido o se parezcan, ya que el metodo puede arrojar diferentes resultados en las corridas (optimos locales). No se hallan comunidades en la proyección de usuarios porque visualmente las palabras brindan más información del tipo de comunidad y a partir de ellas se pueden obtener los usuarios que las usaron, es decir, a la hora de segmentar el mercado, la relación de las palabras, reflejará la relación de los usuarios. End of explanation
8,277
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Atmos MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties --&gt; Overview 2. Key Properties --&gt; Resolution 3. Key Properties --&gt; Timestepping 4. Key Properties --&gt; Orography 5. Grid --&gt; Discretisation 6. Grid --&gt; Discretisation --&gt; Horizontal 7. Grid --&gt; Discretisation --&gt; Vertical 8. Dynamical Core 9. Dynamical Core --&gt; Top Boundary 10. Dynamical Core --&gt; Lateral Boundary 11. Dynamical Core --&gt; Diffusion Horizontal 12. Dynamical Core --&gt; Advection Tracers 13. Dynamical Core --&gt; Advection Momentum 14. Radiation 15. Radiation --&gt; Shortwave Radiation 16. Radiation --&gt; Shortwave GHG 17. Radiation --&gt; Shortwave Cloud Ice 18. Radiation --&gt; Shortwave Cloud Liquid 19. Radiation --&gt; Shortwave Cloud Inhomogeneity 20. Radiation --&gt; Shortwave Aerosols 21. Radiation --&gt; Shortwave Gases 22. Radiation --&gt; Longwave Radiation 23. Radiation --&gt; Longwave GHG 24. Radiation --&gt; Longwave Cloud Ice 25. Radiation --&gt; Longwave Cloud Liquid 26. Radiation --&gt; Longwave Cloud Inhomogeneity 27. Radiation --&gt; Longwave Aerosols 28. Radiation --&gt; Longwave Gases 29. Turbulence Convection 30. Turbulence Convection --&gt; Boundary Layer Turbulence 31. Turbulence Convection --&gt; Deep Convection 32. Turbulence Convection --&gt; Shallow Convection 33. Microphysics Precipitation 34. Microphysics Precipitation --&gt; Large Scale Precipitation 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics 36. Cloud Scheme 37. Cloud Scheme --&gt; Optical Cloud Properties 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution 40. Observation Simulation 41. Observation Simulation --&gt; Isscp Attributes 42. Observation Simulation --&gt; Cosp Attributes 43. Observation Simulation --&gt; Radar Inputs 44. Observation Simulation --&gt; Lidar Inputs 45. Gravity Waves 46. Gravity Waves --&gt; Orographic Gravity Waves 47. Gravity Waves --&gt; Non Orographic Gravity Waves 48. Solar 49. Solar --&gt; Solar Pathways 50. Solar --&gt; Solar Constant 51. Solar --&gt; Orbital Parameters 52. Solar --&gt; Insolation Ozone 53. Volcanos 54. Volcanos --&gt; Volcanoes Treatment 1. Key Properties --&gt; Overview Top level key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Model Family Is Required Step7: 1.4. Basic Approximations Is Required Step8: 2. Key Properties --&gt; Resolution Characteristics of the model resolution 2.1. Horizontal Resolution Name Is Required Step9: 2.2. Canonical Horizontal Resolution Is Required Step10: 2.3. Range Horizontal Resolution Is Required Step11: 2.4. Number Of Vertical Levels Is Required Step12: 2.5. High Top Is Required Step13: 3. Key Properties --&gt; Timestepping Characteristics of the atmosphere model time stepping 3.1. Timestep Dynamics Is Required Step14: 3.2. Timestep Shortwave Radiative Transfer Is Required Step15: 3.3. Timestep Longwave Radiative Transfer Is Required Step16: 4. Key Properties --&gt; Orography Characteristics of the model orography 4.1. Type Is Required Step17: 4.2. Changes Is Required Step18: 5. Grid --&gt; Discretisation Atmosphere grid discretisation 5.1. 
Overview Is Required Step19: 6. Grid --&gt; Discretisation --&gt; Horizontal Atmosphere discretisation in the horizontal 6.1. Scheme Type Is Required Step20: 6.2. Scheme Method Is Required Step21: 6.3. Scheme Order Is Required Step22: 6.4. Horizontal Pole Is Required Step23: 6.5. Grid Type Is Required Step24: 7. Grid --&gt; Discretisation --&gt; Vertical Atmosphere discretisation in the vertical 7.1. Coordinate Type Is Required Step25: 8. Dynamical Core Characteristics of the dynamical core 8.1. Overview Is Required Step26: 8.2. Name Is Required Step27: 8.3. Timestepping Type Is Required Step28: 8.4. Prognostic Variables Is Required Step29: 9. Dynamical Core --&gt; Top Boundary Type of boundary layer at the top of the model 9.1. Top Boundary Condition Is Required Step30: 9.2. Top Heat Is Required Step31: 9.3. Top Wind Is Required Step32: 10. Dynamical Core --&gt; Lateral Boundary Type of lateral boundary condition (if the model is a regional model) 10.1. Condition Is Required Step33: 11. Dynamical Core --&gt; Diffusion Horizontal Horizontal diffusion scheme 11.1. Scheme Name Is Required Step34: 11.2. Scheme Method Is Required Step35: 12. Dynamical Core --&gt; Advection Tracers Tracer advection scheme 12.1. Scheme Name Is Required Step36: 12.2. Scheme Characteristics Is Required Step37: 12.3. Conserved Quantities Is Required Step38: 12.4. Conservation Method Is Required Step39: 13. Dynamical Core --&gt; Advection Momentum Momentum advection scheme 13.1. Scheme Name Is Required Step40: 13.2. Scheme Characteristics Is Required Step41: 13.3. Scheme Staggering Type Is Required Step42: 13.4. Conserved Quantities Is Required Step43: 13.5. Conservation Method Is Required Step44: 14. Radiation Characteristics of the atmosphere radiation process 14.1. Aerosols Is Required Step45: 15. Radiation --&gt; Shortwave Radiation Properties of the shortwave radiation scheme 15.1. Overview Is Required Step46: 15.2. Name Is Required Step47: 15.3. Spectral Integration Is Required Step48: 15.4. Transport Calculation Is Required Step49: 15.5. Spectral Intervals Is Required Step50: 16. Radiation --&gt; Shortwave GHG Representation of greenhouse gases in the shortwave radiation scheme 16.1. Greenhouse Gas Complexity Is Required Step51: 16.2. ODS Is Required Step52: 16.3. Other Flourinated Gases Is Required Step53: 17. Radiation --&gt; Shortwave Cloud Ice Shortwave radiative properties of ice crystals in clouds 17.1. General Interactions Is Required Step54: 17.2. Physical Representation Is Required Step55: 17.3. Optical Methods Is Required Step56: 18. Radiation --&gt; Shortwave Cloud Liquid Shortwave radiative properties of liquid droplets in clouds 18.1. General Interactions Is Required Step57: 18.2. Physical Representation Is Required Step58: 18.3. Optical Methods Is Required Step59: 19. Radiation --&gt; Shortwave Cloud Inhomogeneity Cloud inhomogeneity in the shortwave radiation scheme 19.1. Cloud Inhomogeneity Is Required Step60: 20. Radiation --&gt; Shortwave Aerosols Shortwave radiative properties of aerosols 20.1. General Interactions Is Required Step61: 20.2. Physical Representation Is Required Step62: 20.3. Optical Methods Is Required Step63: 21. Radiation --&gt; Shortwave Gases Shortwave radiative properties of gases 21.1. General Interactions Is Required Step64: 22. Radiation --&gt; Longwave Radiation Properties of the longwave radiation scheme 22.1. Overview Is Required Step65: 22.2. Name Is Required Step66: 22.3. Spectral Integration Is Required Step67: 22.4. 
Transport Calculation Is Required Step68: 22.5. Spectral Intervals Is Required Step69: 23. Radiation --&gt; Longwave GHG Representation of greenhouse gases in the longwave radiation scheme 23.1. Greenhouse Gas Complexity Is Required Step70: 23.2. ODS Is Required Step71: 23.3. Other Flourinated Gases Is Required Step72: 24. Radiation --&gt; Longwave Cloud Ice Longwave radiative properties of ice crystals in clouds 24.1. General Interactions Is Required Step73: 24.2. Physical Reprenstation Is Required Step74: 24.3. Optical Methods Is Required Step75: 25. Radiation --&gt; Longwave Cloud Liquid Longwave radiative properties of liquid droplets in clouds 25.1. General Interactions Is Required Step76: 25.2. Physical Representation Is Required Step77: 25.3. Optical Methods Is Required Step78: 26. Radiation --&gt; Longwave Cloud Inhomogeneity Cloud inhomogeneity in the longwave radiation scheme 26.1. Cloud Inhomogeneity Is Required Step79: 27. Radiation --&gt; Longwave Aerosols Longwave radiative properties of aerosols 27.1. General Interactions Is Required Step80: 27.2. Physical Representation Is Required Step81: 27.3. Optical Methods Is Required Step82: 28. Radiation --&gt; Longwave Gases Longwave radiative properties of gases 28.1. General Interactions Is Required Step83: 29. Turbulence Convection Atmosphere Convective Turbulence and Clouds 29.1. Overview Is Required Step84: 30. Turbulence Convection --&gt; Boundary Layer Turbulence Properties of the boundary layer turbulence scheme 30.1. Scheme Name Is Required Step85: 30.2. Scheme Type Is Required Step86: 30.3. Closure Order Is Required Step87: 30.4. Counter Gradient Is Required Step88: 31. Turbulence Convection --&gt; Deep Convection Properties of the deep convection scheme 31.1. Scheme Name Is Required Step89: 31.2. Scheme Type Is Required Step90: 31.3. Scheme Method Is Required Step91: 31.4. Processes Is Required Step92: 31.5. Microphysics Is Required Step93: 32. Turbulence Convection --&gt; Shallow Convection Properties of the shallow convection scheme 32.1. Scheme Name Is Required Step94: 32.2. Scheme Type Is Required Step95: 32.3. Scheme Method Is Required Step96: 32.4. Processes Is Required Step97: 32.5. Microphysics Is Required Step98: 33. Microphysics Precipitation Large Scale Cloud Microphysics and Precipitation 33.1. Overview Is Required Step99: 34. Microphysics Precipitation --&gt; Large Scale Precipitation Properties of the large scale precipitation scheme 34.1. Scheme Name Is Required Step100: 34.2. Hydrometeors Is Required Step101: 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics Properties of the large scale cloud microphysics scheme 35.1. Scheme Name Is Required Step102: 35.2. Processes Is Required Step103: 36. Cloud Scheme Characteristics of the cloud scheme 36.1. Overview Is Required Step104: 36.2. Name Is Required Step105: 36.3. Atmos Coupling Is Required Step106: 36.4. Uses Separate Treatment Is Required Step107: 36.5. Processes Is Required Step108: 36.6. Prognostic Scheme Is Required Step109: 36.7. Diagnostic Scheme Is Required Step110: 36.8. Prognostic Variables Is Required Step111: 37. Cloud Scheme --&gt; Optical Cloud Properties Optical cloud properties 37.1. Cloud Overlap Method Is Required Step112: 37.2. Cloud Inhomogeneity Is Required Step113: 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution Sub-grid scale water distribution 38.1. Type Is Required Step114: 38.2. Function Name Is Required Step115: 38.3. Function Order Is Required Step116: 38.4. Convection Coupling Is Required Step117: 39. 
Cloud Scheme --&gt; Sub Grid Scale Ice Distribution Sub-grid scale ice distribution 39.1. Type Is Required Step118: 39.2. Function Name Is Required Step119: 39.3. Function Order Is Required Step120: 39.4. Convection Coupling Is Required Step121: 40. Observation Simulation Characteristics of observation simulation 40.1. Overview Is Required Step122: 41. Observation Simulation --&gt; Isscp Attributes ISSCP Characteristics 41.1. Top Height Estimation Method Is Required Step123: 41.2. Top Height Direction Is Required Step124: 42. Observation Simulation --&gt; Cosp Attributes CFMIP Observational Simulator Package attributes 42.1. Run Configuration Is Required Step125: 42.2. Number Of Grid Points Is Required Step126: 42.3. Number Of Sub Columns Is Required Step127: 42.4. Number Of Levels Is Required Step128: 43. Observation Simulation --&gt; Radar Inputs Characteristics of the cloud radar simulator 43.1. Frequency Is Required Step129: 43.2. Type Is Required Step130: 43.3. Gas Absorption Is Required Step131: 43.4. Effective Radius Is Required Step132: 44. Observation Simulation --&gt; Lidar Inputs Characteristics of the cloud lidar simulator 44.1. Ice Types Is Required Step133: 44.2. Overlap Is Required Step134: 45. Gravity Waves Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources. 45.1. Overview Is Required Step135: 45.2. Sponge Layer Is Required Step136: 45.3. Background Is Required Step137: 45.4. Subgrid Scale Orography Is Required Step138: 46. Gravity Waves --&gt; Orographic Gravity Waves Gravity waves generated due to the presence of orography 46.1. Name Is Required Step139: 46.2. Source Mechanisms Is Required Step140: 46.3. Calculation Method Is Required Step141: 46.4. Propagation Scheme Is Required Step142: 46.5. Dissipation Scheme Is Required Step143: 47. Gravity Waves --&gt; Non Orographic Gravity Waves Gravity waves generated by non-orographic processes. 47.1. Name Is Required Step144: 47.2. Source Mechanisms Is Required Step145: 47.3. Calculation Method Is Required Step146: 47.4. Propagation Scheme Is Required Step147: 47.5. Dissipation Scheme Is Required Step148: 48. Solar Top of atmosphere solar insolation characteristics 48.1. Overview Is Required Step149: 49. Solar --&gt; Solar Pathways Pathways for solar forcing of the atmosphere 49.1. Pathways Is Required Step150: 50. Solar --&gt; Solar Constant Solar constant and top of atmosphere insolation characteristics 50.1. Type Is Required Step151: 50.2. Fixed Value Is Required Step152: 50.3. Transient Characteristics Is Required Step153: 51. Solar --&gt; Orbital Parameters Orbital parameters and top of atmosphere insolation characteristics 51.1. Type Is Required Step154: 51.2. Fixed Reference Date Is Required Step155: 51.3. Transient Method Is Required Step156: 51.4. Computation Method Is Required Step157: 52. Solar --&gt; Insolation Ozone Impact of solar insolation on stratospheric ozone 52.1. Solar Ozone Impact Is Required Step158: 53. Volcanos Characteristics of the implementation of volcanoes 53.1. Overview Is Required Step159: 54. Volcanos --&gt; Volcanoes Treatment Treatment of volcanoes in the atmosphere 54.1. Volcanoes Implementation Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'bcc', 'bcc-esm1', 'atmos') Explanation: ES-DOC CMIP6 Model Properties - Atmos MIP Era: CMIP6 Institute: BCC Source ID: BCC-ESM1 Topic: Atmos Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. Properties: 156 (127 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:39 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties --&gt; Overview 2. Key Properties --&gt; Resolution 3. Key Properties --&gt; Timestepping 4. Key Properties --&gt; Orography 5. Grid --&gt; Discretisation 6. Grid --&gt; Discretisation --&gt; Horizontal 7. Grid --&gt; Discretisation --&gt; Vertical 8. Dynamical Core 9. Dynamical Core --&gt; Top Boundary 10. Dynamical Core --&gt; Lateral Boundary 11. Dynamical Core --&gt; Diffusion Horizontal 12. Dynamical Core --&gt; Advection Tracers 13. Dynamical Core --&gt; Advection Momentum 14. Radiation 15. Radiation --&gt; Shortwave Radiation 16. Radiation --&gt; Shortwave GHG 17. Radiation --&gt; Shortwave Cloud Ice 18. Radiation --&gt; Shortwave Cloud Liquid 19. Radiation --&gt; Shortwave Cloud Inhomogeneity 20. Radiation --&gt; Shortwave Aerosols 21. Radiation --&gt; Shortwave Gases 22. Radiation --&gt; Longwave Radiation 23. Radiation --&gt; Longwave GHG 24. Radiation --&gt; Longwave Cloud Ice 25. Radiation --&gt; Longwave Cloud Liquid 26. Radiation --&gt; Longwave Cloud Inhomogeneity 27. Radiation --&gt; Longwave Aerosols 28. Radiation --&gt; Longwave Gases 29. Turbulence Convection 30. Turbulence Convection --&gt; Boundary Layer Turbulence 31. Turbulence Convection --&gt; Deep Convection 32. Turbulence Convection --&gt; Shallow Convection 33. Microphysics Precipitation 34. Microphysics Precipitation --&gt; Large Scale Precipitation 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics 36. Cloud Scheme 37. Cloud Scheme --&gt; Optical Cloud Properties 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution 40. Observation Simulation 41. Observation Simulation --&gt; Isscp Attributes 42. Observation Simulation --&gt; Cosp Attributes 43. Observation Simulation --&gt; Radar Inputs 44. Observation Simulation --&gt; Lidar Inputs 45. Gravity Waves 46. Gravity Waves --&gt; Orographic Gravity Waves 47. Gravity Waves --&gt; Non Orographic Gravity Waves 48. Solar 49. Solar --&gt; Solar Pathways 50. Solar --&gt; Solar Constant 51. Solar --&gt; Orbital Parameters 52. Solar --&gt; Insolation Ozone 53. Volcanos 54. 
Volcanos --&gt; Volcanoes Treatment 1. Key Properties --&gt; Overview Top level key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "AGCM" # "ARCM" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of atmospheric model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "primitive equations" # "non-hydrostatic" # "anelastic" # "Boussinesq" # "hydrostatic" # "quasi-hydrostatic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the atmosphere. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Resolution Characteristics of the model resolution 2.1. Horizontal Resolution Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.4. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on the computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! 
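# Illustrative note (not part of the auto-generated template): BOOLEAN properties
# such as 2.5 (high_top) below take a Python boolean rather than a string; a
# hypothetical completed cell would therefore end with something like
#     DOC.set_value(True)
# The value shown is an example of the call pattern only, not a statement about
# the model being documented.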
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 2.5. High Top Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Timestepping Characteristics of the atmosphere model time stepping 3.1. Timestep Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the dynamics, e.g. 30 min. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.2. Timestep Shortwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the shortwave radiative transfer, e.g. 1.5 hours. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Timestep Longwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the longwave radiative transfer, e.g. 3 hours. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "modified" # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Orography Characteristics of the model orography 4.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the orography. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.changes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "related to ice sheets" # "related to tectonics" # "modified mean" # "modified variance if taken into account in model (cf gravity waves)" # TODO - please enter value(s) Explanation: 4.2. Changes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N If the orography type is modified describe the time adaptation changes. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Grid --&gt; Discretisation Atmosphere grid discretisation 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of grid discretisation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "spectral" # "fixed grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6. 
Grid --&gt; Discretisation --&gt; Horizontal Atmosphere discretisation in the horizontal 6.1. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "finite elements" # "finite volumes" # "finite difference" # "centered finite difference" # TODO - please enter value(s) Explanation: 6.2. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "second" # "third" # "fourth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.3. Scheme Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation function order End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "filter" # "pole rotation" # "artificial island" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.4. Horizontal Pole Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal discretisation pole singularity treatment End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Gaussian" # "Latitude-Longitude" # "Cubed-Sphere" # "Icosahedral" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.5. Grid Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "isobaric" # "sigma" # "hybrid sigma-pressure" # "hybrid pressure" # "vertically lagrangian" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 7. Grid --&gt; Discretisation --&gt; Vertical Atmosphere discretisation in the vertical 7.1. Coordinate Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Type of vertical coordinate system End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Dynamical Core Characteristics of the dynamical core 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere dynamical core End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the dynamical core of the model. End of explanation # PROPERTY ID - DO NOT EDIT ! 
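# Illustrative note (not part of the auto-generated template): for ENUM properties
# with cardinality 1.1, such as 8.3 (timestepping_type) below, exactly one of the
# strings listed under "Valid Choices" is passed verbatim, e.g.
#     DOC.set_value("semi-implicit")
# "semi-implicit" is a hypothetical selection used only to show the call pattern.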
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Adams-Bashforth" # "explicit" # "implicit" # "semi-implicit" # "leap frog" # "multi-step" # "Runge Kutta fifth order" # "Runge Kutta second order" # "Runge Kutta third order" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.3. Timestepping Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestepping framework type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "surface pressure" # "wind components" # "divergence/curl" # "temperature" # "potential temperature" # "total water" # "water vapour" # "water liquid" # "water ice" # "total water moments" # "clouds" # "radiation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of the model prognostic variables End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9. Dynamical Core --&gt; Top Boundary Type of boundary layer at the top of the model 9.1. Top Boundary Condition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary condition End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.2. Top Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary heat treatment End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.3. Top Wind Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary wind treatment End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. Dynamical Core --&gt; Lateral Boundary Type of lateral boundary condition (if the model is a regional model) 10.1. Condition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Type of lateral boundary condition End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11. Dynamical Core --&gt; Diffusion Horizontal Horizontal diffusion scheme 11.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal diffusion scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! 
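# Illustrative note (not part of the auto-generated template): where an ENUM lists
# "Other: [Please specify]" (as 11.2 below does), the usual completion appears to be
# a free-text string prefixed with "Other: ", e.g.
#     DOC.set_value("Other: <brief description of the scheme>")
# The exact format expected for "Other" entries is an assumption here; confirm it
# against the notebook help page referenced in the document header.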
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "iterated Laplacian" # "bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal diffusion scheme method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heun" # "Roe and VanLeer" # "Roe and Superbee" # "Prather" # "UTOPIA" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12. Dynamical Core --&gt; Advection Tracers Tracer advection scheme 12.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Tracer advection scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Eulerian" # "modified Euler" # "Lagrangian" # "semi-Lagrangian" # "cubic semi-Lagrangian" # "quintic semi-Lagrangian" # "mass-conserving" # "finite volume" # "flux-corrected" # "linear" # "quadratic" # "quartic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.2. Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme characteristics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "dry mass" # "tracer mass" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.3. Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme conserved quantities End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Priestley algorithm" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.4. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracer advection scheme conservation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "VanLeer" # "Janjic" # "SUPG (Streamline Upwind Petrov-Galerkin)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Dynamical Core --&gt; Advection Momentum Momentum advection scheme 13.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Momentum advection schemes name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "2nd order" # "4th order" # "cell-centred" # "staggered grid" # "semi-staggered grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. 
Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme characteristics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa D-grid" # "Arakawa E-grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.3. Scheme Staggering Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme staggering type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Angular momentum" # "Horizontal momentum" # "Enstrophy" # "Mass" # "Total energy" # "Vorticity" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.4. Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme conserved quantities End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.5. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme conservation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.aerosols') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "sulphate" # "nitrate" # "sea salt" # "dust" # "ice" # "organic" # "BC (black carbon / soot)" # "SOA (secondary organic aerosols)" # "POM (particulate organic matter)" # "polar stratospheric ice" # "NAT (nitric acid trihydrate)" # "NAD (nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particle)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Radiation Characteristics of the atmosphere radiation process 14.1. Aerosols Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Aerosols whose radiative effect is taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Radiation --&gt; Shortwave Radiation Properties of the shortwave radiation scheme 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of shortwave radiation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! 
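# Illustrative note (not part of the auto-generated template): properties marked
# "Is Required: TRUE" (cardinality 1.1 or 1.N) need at least one DOC.set_value(...)
# call before the document is published, whereas optional properties marked
# "Is Required: FALSE" (cardinality 0.1 or 0.N), such as the scheme "Name" field
# above, may simply be left unset.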
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme spectral integration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Shortwave radiation transport calculation methods End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme number of spectral intervals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16. Radiation --&gt; Shortwave GHG Representation of greenhouse gases in the shortwave radiation scheme 16.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.2. ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.3. 
Other Flourinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17. Radiation --&gt; Shortwave Cloud Ice Shortwave radiative properties of ice crystals in clouds 17.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud ice crystals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18. Radiation --&gt; Shortwave Cloud Liquid Shortwave radiative properties of liquid droplets in clouds 18.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud liquid droplets End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! 
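# Illustrative note (not part of the auto-generated template): for ENUM properties
# with cardinality 1.N (flagged "PROPERTY VALUE(S)"), the template wording suggests
# one DOC.set_value(...) call per selected item, e.g. for 18.3 below
#     DOC.set_value("Mie theory")
#     DOC.set_value("geometric optics")
# Both selections are hypothetical, and the one-call-per-value convention should be
# checked against the ES-DOC notebook help before being relied on.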
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19. Radiation --&gt; Shortwave Cloud Inhomogeneity Cloud inhomogeneity in the shortwave radiation scheme 19.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20. Radiation --&gt; Shortwave Aerosols Shortwave radiative properties of aerosols 20.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with aerosols End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21. Radiation --&gt; Shortwave Gases Shortwave radiative properties of gases 21.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with gases End of explanation # PROPERTY ID - DO NOT EDIT ! 
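# Illustrative note (not part of the auto-generated template): STRING properties
# such as 22.1 (longwave radiation overview) below take free text; a one- or
# two-sentence description is typical, e.g.
#     DOC.set_value("Longwave radiative transfer is computed with <scheme>, ...")
# The text shown is a placeholder, not a description of the actual model.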
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22. Radiation --&gt; Longwave Radiation Properties of the longwave radiation scheme 22.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of longwave radiation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the longwave radiation scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme spectral integration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Longwave radiation transport calculation methods End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 22.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme number of spectral intervals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Radiation --&gt; Longwave GHG Representation of greenhouse gases in the longwave radiation scheme 23.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.2. 
ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.3. Other Flourinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24. Radiation --&gt; Longwave Cloud Ice Longwave radiative properties of ice crystals in clouds 24.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud ice crystals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24.2. Physical Reprenstation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25. Radiation --&gt; Longwave Cloud Liquid Longwave radiative properties of liquid droplets in clouds 25.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud liquid droplets End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26. Radiation --&gt; Longwave Cloud Inhomogeneity Cloud inhomogeneity in the longwave radiation scheme 26.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27. Radiation --&gt; Longwave Aerosols Longwave radiative properties of aerosols 27.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with aerosols End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 28. Radiation --&gt; Longwave Gases Longwave radiative properties of gases 28.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with gases End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29. Turbulence Convection Atmosphere Convective Turbulence and Clouds 29.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere convection and turbulence End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Mellor-Yamada" # "Holtslag-Boville" # "EDMF" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30. Turbulence Convection --&gt; Boundary Layer Turbulence Properties of the boundary layer turbulence scheme 30.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Boundary layer turbulence scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "TKE prognostic" # "TKE diagnostic" # "TKE coupled with water" # "vertical profile of Kz" # "non-local diffusion" # "Monin-Obukhov similarity" # "Coastal Buddy Scheme" # "Coupled with convection" # "Coupled with gravity waves" # "Depth capped at cloud base" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Boundary layer turbulence scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Closure Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boundary layer turbulence scheme closure order End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 30.4. Counter Gradient Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Uses boundary layer turbulence scheme counter gradient End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 31. Turbulence Convection --&gt; Deep Convection Properties of the deep convection scheme 31.1. 
Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Deep convection scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "adjustment" # "plume ensemble" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CAPE" # "bulk" # "ensemble" # "CAPE/WFN based" # "TKE/CIN based" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vertical momentum transport" # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "updrafts" # "downdrafts" # "radiative effect of anvils" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of deep convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.5. Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32. Turbulence Convection --&gt; Shallow Convection Properties of the shallow convection scheme 32.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Shallow convection scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "cumulus-capped boundary layer" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N shallow convection scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! 
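# Illustrative note (not part of the auto-generated template): the ENUM for 32.3
# (shallow convection scheme_method) below lists no "Other: [Please specify]"
# escape value, so one of its three listed strings must be chosen verbatim, e.g.
#     DOC.set_value("same as deep (unified)")
# The selection shown is hypothetical.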
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "same as deep (unified)" # "included in boundary layer turbulence" # "separate diagnosis" # TODO - please enter value(s) Explanation: 32.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 shallow convection scheme method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of shallow convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.5. Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for shallow convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33. Microphysics Precipitation Large Scale Cloud Microphysics and Precipitation 33.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of large scale cloud microphysics and precipitation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34. Microphysics Precipitation --&gt; Large Scale Precipitation Properties of the large scale precipitation scheme 34.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the large scale precipitation parameterisation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "liquid rain" # "snow" # "hail" # "graupel" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 34.2. Hydrometeors Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Precipitating hydrometeors taken into account in the large scale precipitation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics Properties of the large scale cloud microphysics scheme 35.1. 
Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the microphysics parameterisation scheme used for large scale clouds. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mixed phase" # "cloud droplets" # "cloud ice" # "ice nucleation" # "water vapour deposition" # "effect of raindrops" # "effect of snow" # "effect of graupel" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 35.2. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Large scale cloud microphysics processes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36. Cloud Scheme Characteristics of the cloud scheme 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the atmosphere cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "atmosphere_radiation" # "atmosphere_microphysics_precipitation" # "atmosphere_turbulence_convection" # "atmosphere_gravity_waves" # "atmosphere_solar" # "atmosphere_volcano" # "atmosphere_cloud_simulator" # TODO - please enter value(s) Explanation: 36.3. Atmos Coupling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Atmosphere components that are linked to the cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 36.4. Uses Separate Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "entrainment" # "detrainment" # "bulk cloud" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 36.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Processes included in the cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 36.6. Prognostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a prognostic scheme? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 36.7. Diagnostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a diagnostic scheme? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud amount" # "liquid" # "ice" # "rain" # "snow" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 36.8. Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List the prognostic variables used by the cloud scheme, if applicable. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "random" # "maximum" # "maximum-random" # "exponential" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 37. Cloud Scheme --&gt; Optical Cloud Properties Optical cloud properties 37.1. Cloud Overlap Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account overlapping of cloud layers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.2. Cloud Inhomogeneity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account cloud inhomogeneity End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) Explanation: 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution Sub-grid scale water distribution 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 38.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 38.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) Explanation: 38.4. 
Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale water distribution coupling with convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) Explanation: 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution Sub-grid scale ice distribution 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 39.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 39.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) Explanation: 39.4. Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale ice distribution coupling with convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 40. Observation Simulation Characteristics of observation simulation 40.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of observation simulator characteristics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "no adjustment" # "IR brightness" # "visible optical depth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41. Observation Simulation --&gt; Isscp Attributes ISSCP Characteristics 41.1. Top Height Estimation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator ISSCP top height estimation methodUo End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "lowest altitude level" # "highest altitude level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41.2. 
Top Height Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator ISSCP top height direction End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Inline" # "Offline" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 42. Observation Simulation --&gt; Cosp Attributes CFMIP Observational Simulator Package attributes 42.1. Run Configuration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP run configuration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 42.2. Number Of Grid Points Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of grid points End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 42.3. Number Of Sub Columns Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of sub-cloumns used to simulate sub-grid variability End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 42.4. Number Of Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of levels End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 43. Observation Simulation --&gt; Radar Inputs Characteristics of the cloud radar simulator 43.1. Frequency Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar frequency (Hz) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "surface" # "space borne" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 43.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 43.3. Gas Absorption Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses gas absorption End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 43.4. 
Effective Radius Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses effective radius End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "ice spheres" # "ice non-spherical" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 44. Observation Simulation --&gt; Lidar Inputs Characteristics of the cloud lidar simulator 44.1. Ice Types Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator lidar ice type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "max" # "random" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 44.2. Overlap Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator lidar overlap End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 45. Gravity Waves Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources. 45.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of gravity wave parameterisation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Rayleigh friction" # "Diffusive sponge layer" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 45.2. Sponge Layer Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sponge layer in the upper levels in order to avoid gravity wave reflection at the top. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "continuous spectrum" # "discrete spectrum" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 45.3. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background wave distribution End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "effect on drag" # "effect on lifting" # "enhanced topography" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 45.4. Subgrid Scale Orography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Subgrid scale orography effects taken into account. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 46. Gravity Waves --&gt; Orographic Gravity Waves Gravity waves generated due to the presence of orography 46.1. 
Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the orographic gravity wave scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "linear mountain waves" # "hydraulic jump" # "envelope orography" # "low level flow blocking" # "statistical sub-grid scale variance" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 46.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave source mechanisms End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "non-linear calculation" # "more than two cardinal directions" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 46.3. Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave calculation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "includes boundary layer ducting" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 46.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Orographic gravity wave propogation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 46.5. Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Orographic gravity wave dissipation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 47. Gravity Waves --&gt; Non Orographic Gravity Waves Gravity waves generated by non-orographic processes. 47.1. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the non-orographic gravity wave scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convection" # "precipitation" # "background spectrum" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 47.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave source mechanisms End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "spatially dependent" # "temporally dependent" # TODO - please enter value(s) Explanation: 47.3. Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave calculation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 47.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave propogation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 47.5. Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave dissipation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 48. Solar Top of atmosphere solar insolation characteristics 48.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of solar insolation of the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "SW radiation" # "precipitating energetic particles" # "cosmic rays" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 49. Solar --&gt; Solar Pathways Pathways for solar forcing of the atmosphere 49.1. Pathways Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Pathways for the solar forcing of the atmosphere model domain End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) Explanation: 50. Solar --&gt; Solar Constant Solar constant and top of atmosphere insolation characteristics 50.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the solar constant. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 50.2. Fixed Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the solar constant is fixed, enter the value of the solar constant (W m-2). End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 50.3. Transient Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 solar constant transient characteristics (W m-2) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) Explanation: 51. Solar --&gt; Orbital Parameters Orbital parameters and top of atmosphere insolation characteristics 51.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of orbital parameters End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 51.2. Fixed Reference Date Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date for fixed orbital parameters (yyyy) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 51.3. Transient Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of transient orbital parameters End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Berger 1978" # "Laskar 2004" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 51.4. Computation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used for computing orbital parameters. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 52. Solar --&gt; Insolation Ozone Impact of solar insolation on stratospheric ozone 52.1. Solar Ozone Impact Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does top of atmosphere insolation impact on stratospheric ozone? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 53. Volcanos Characteristics of the implementation of volcanoes 53.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the implementation of volcanic effects in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "high frequency solar constant anomaly" # "stratospheric aerosols optical thickness" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 54. Volcanos --&gt; Volcanoes Treatment Treatment of volcanoes in the atmosphere 54.1. 
Volcanoes Implementation Is Required: TRUE    Type: ENUM    Cardinality: 1.1 How volcanic effects are modeled in the atmosphere. End of explanation
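For illustration, a hypothetical completed property cell for the volcanoes treatment above might look like the sketch below; the property ID and the value shown are taken from the valid choices listed in that cell, and the real entry depends entirely on the model being documented.
# Hypothetical completed ES-DOC cell (illustration only, not part of the original notebook).
# The property ID and the choice "stratospheric aerosols optical thickness" come from the
# valid options listed above; a different model may require another choice or "Other: [...]".
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
DOC.set_value("stratospheric aerosols optical thickness")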
8,278
Given the following text description, write Python code to implement the functionality described below step by step Description: Compute site visits This notebook computes the number of visits at each sample site from an excel file exported from an online database (arcgis online) of site visits. Required packages: pandas Step1: Import statements Step2: Compute site visits Step3: Export to csv
Python Code: excel_filepath = "" csv_output_filepath = "" Explanation: Compute site visits This notebook computes the number of visits at each sample site from an excel file exported from an online database (arcgis online) of site visits. Required pacakges <a href="https://github.com/pydata/pandas">pandas</a> Variable declarations excel_filepath – path to excel file with site visit records <br /> csv_output_filepath – path to export csv file End of explanation import pandas Explanation: Import statements End of explanation file = pandas.read_excel(excel_filepath) file = file.rename_axis({'NAME':'Name', 'TIME1':'Time', 'WEATHER1':'Weather', 'TEMPERATURE1':'Temperature'}, 1) data = file[['ID', 'Name', 'Time', 'Weather', 'Temperature']].sort_values(by=['ID', 'Name', 'Time']) data['ID'] = data['ID'].map('{:g}'.format) data['Name'] = [n.strip() for n in data['Name']] counts = data['ID'].value_counts().sort_index() names = data[['ID', 'Name']].drop_duplicates().set_index('ID') count = names.join(counts).rename_axis({'ID':'count'}, 1) count data Explanation: Compute site visits End of explanation data.to_csv(csv_output_filepath) Explanation: Export to csv End of explanation
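A small optional variant of the export step, under the assumption that the per-site visit totals (the count table built above) are what should end up in the CSV rather than the raw visit records; the groupby line is an equivalent way to produce the same counts when each ID maps to a single site name.
# Sketch only — assumes the 'data' DataFrame and 'csv_output_filepath' defined above.
# Equivalent per-site visit count using groupby (one row per ID/Name pair):
count = data.groupby(['ID', 'Name']).size().rename('count').reset_index()
# Export the aggregated counts instead of the raw records, if that is what is wanted:
count.to_csv(csv_output_filepath, index=False)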
8,279
Given the following text description, write Python code to implement the functionality described below step by step Description: MNIST in Keras with Tensorboard This sample trains an "MNIST" handwritten digit recognition model on a GPU or TPU backend using a Keras model. Data are handled using the tf.data.Datset API. This is a very simple sample provided for educational purposes. Do not expect outstanding TPU performance on a dataset as small as MNIST. Parameters Step1: Imports Step3: TPU/GPU detection Step4: Colab-only auth for this notebook and the TPU Step5: tf.data.Dataset Step6: Let's have a look at the data Step7: Keras model Step8: Train and validate the model Step9: Visualize predictions
Python Code: BATCH_SIZE = 64 LEARNING_RATE = 0.002 # GCS bucket for training logs and for saving the trained model # You can leave this empty for local saving, unless you are using a TPU. # TPUs do not have access to your local instance and can only write to GCS. BUCKET="" # a valid bucket name must start with gs:// training_images_file = 'gs://mnist-public/train-images-idx3-ubyte' training_labels_file = 'gs://mnist-public/train-labels-idx1-ubyte' validation_images_file = 'gs://mnist-public/t10k-images-idx3-ubyte' validation_labels_file = 'gs://mnist-public/t10k-labels-idx1-ubyte' Explanation: MNIST in Keras with Tensorboard This sample trains an "MNIST" handwritten digit recognition model on a GPU or TPU backend using a Keras model. Data are handled using the tf.data.Datset API. This is a very simple sample provided for educational purposes. Do not expect outstanding TPU performance on a dataset as small as MNIST. Parameters End of explanation import os, re, math, json, time import PIL.Image, PIL.ImageFont, PIL.ImageDraw import numpy as np import tensorflow as tf from matplotlib import pyplot as plt from tensorflow.python.platform import tf_logging print("Tensorflow version " + tf.__version__) Explanation: Imports End of explanation try: # detect TPUs tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection tf.config.experimental_connect_to_cluster(tpu) tf.tpu.experimental.initialize_tpu_system(tpu) strategy = tf.distribute.experimental.TPUStrategy(tpu) except ValueError: # detect GPUs strategy = tf.distribute.MirroredStrategy() # for GPU or multi-GPU machines #strategy = tf.distribute.get_strategy() # default strategy that works on CPU and single GPU #strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() # for clusters of multi-GPU machines print("Number of accelerators: ", strategy.num_replicas_in_sync) # adjust batch size and learning rate for distributed computing global_batch_size = BATCH_SIZE * strategy.num_replicas_in_sync # num replcas is 8 on a single TPU or N when runing on N GPUs. learning_rate = LEARNING_RATE * strategy.num_replicas_in_sync #@title visualization utilities [RUN ME] This cell contains helper functions used for visualization and downloads only. You can skip reading it. There is very little useful Keras/Tensorflow code here. # Matplotlib config plt.rc('image', cmap='gray_r') plt.rc('grid', linewidth=0) plt.rc('xtick', top=False, bottom=False, labelsize='large') plt.rc('ytick', left=False, right=False, labelsize='large') plt.rc('axes', facecolor='F8F8F8', titlesize="large", edgecolor='white') plt.rc('text', color='a8151a') plt.rc('figure', facecolor='F0F0F0')# Matplotlib fonts MATPLOTLIB_FONT_DIR = os.path.join(os.path.dirname(plt.__file__), "mpl-data/fonts/ttf") # pull a batch from the datasets. 
This code is not very nice, it gets much better in eager mode (TODO) def dataset_to_numpy_util(training_dataset, validation_dataset, N): # get one batch from each: 10000 validation digits, N training digits unbatched_train_ds = training_dataset.unbatch() # This is the TF 2.0 "eager execution" way of iterating through a tf.data.Dataset for v_images, v_labels in validation_dataset: break for t_images, t_labels in unbatched_train_ds.batch(N): break validation_digits = v_images.numpy() validation_labels = v_labels.numpy() training_digits = t_images.numpy() training_labels = t_labels.numpy() # these were one-hot encoded in the dataset validation_labels = np.argmax(validation_labels, axis=1) training_labels = np.argmax(training_labels, axis=1) return (training_digits, training_labels, validation_digits, validation_labels) # create digits from local fonts for testing def create_digits_from_local_fonts(n): font_labels = [] img = PIL.Image.new('LA', (28*n, 28), color = (0,255)) # format 'LA': black in channel 0, alpha in channel 1 font1 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'DejaVuSansMono-Oblique.ttf'), 25) font2 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'STIXGeneral.ttf'), 25) d = PIL.ImageDraw.Draw(img) for i in range(n): font_labels.append(i%10) d.text((7+i*28,0 if i<10 else -4), str(i%10), fill=(255,255), font=font1 if i<10 else font2) font_digits = np.array(img.getdata(), np.float32)[:,0] / 255.0 # black in channel 0, alpha in channel 1 (discarded) font_digits = np.reshape(np.stack(np.split(np.reshape(font_digits, [28, 28*n]), n, axis=1), axis=0), [n, 28*28]) return font_digits, font_labels # utility to display a row of digits with their predictions def display_digits(digits, predictions, labels, title, n): plt.figure(figsize=(13,3)) digits = np.reshape(digits, [n, 28, 28]) digits = np.swapaxes(digits, 0, 1) digits = np.reshape(digits, [28, 28*n]) plt.yticks([]) plt.xticks([28*x+14 for x in range(n)], predictions) for i,t in enumerate(plt.gca().xaxis.get_ticklabels()): if predictions[i] != labels[i]: t.set_color('red') # bad predictions in red plt.imshow(digits) plt.grid(None) plt.title(title) # utility to display multiple rows of digits, sorted by unrecognized/recognized status def display_top_unrecognized(digits, predictions, labels, n, lines): idx = np.argsort(predictions==labels) # sort order: unrecognized first for i in range(lines): display_digits(digits[idx][i*n:(i+1)*n], predictions[idx][i*n:(i+1)*n], labels[idx][i*n:(i+1)*n], "{} sample validation digits out of {} with bad predictions in red and sorted first".format(n*lines, len(digits)) if i==0 else "", n) Explanation: TPU/GPU detection End of explanation #IS_COLAB_BACKEND = 'COLAB_GPU' in os.environ # this is always set on Colab, the value is 0 or 1 depending on GPU presence #if IS_COLAB_BACKEND: # from google.colab import auth # auth.authenticate_user() # Authenticates the backend and also the TPU using your credentials so that they can access your private GCS buckets Explanation: Colab-only auth for this notebook and the TPU End of explanation def read_label(tf_bytestring): label = tf.io.decode_raw(tf_bytestring, tf.uint8) label = tf.reshape(label, []) label = tf.one_hot(label, 10) return label def read_image(tf_bytestring): image = tf.io.decode_raw(tf_bytestring, tf.uint8) image = tf.cast(image, tf.float32)/256.0 image = tf.reshape(image, [28*28]) return image def load_dataset(image_file, label_file): imagedataset = tf.data.FixedLengthRecordDataset(image_file, 28*28, header_bytes=16) 
imagedataset = imagedataset.map(read_image, num_parallel_calls=16) labelsdataset = tf.data.FixedLengthRecordDataset(label_file, 1, header_bytes=8) labelsdataset = labelsdataset.map(read_label, num_parallel_calls=16) dataset = tf.data.Dataset.zip((imagedataset, labelsdataset)) return dataset def get_training_dataset(image_file, label_file, batch_size): dataset = load_dataset(image_file, label_file) dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset dataset = dataset.shuffle(5000, reshuffle_each_iteration=True) dataset = dataset.repeat() # Mandatory for Keras for now dataset = dataset.batch(batch_size, drop_remainder=True) # drop_remainder is important on TPU, batch size must be fixed dataset = dataset.prefetch(-1) # fetch next batches while training on the current one (-1: autotune prefetch buffer size) return dataset def get_validation_dataset(image_file, label_file): dataset = load_dataset(image_file, label_file) dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset dataset = dataset.repeat() # Mandatory for Keras for now dataset = dataset.batch(10000, drop_remainder=True) # 10000 items in eval dataset, all in one batch return dataset # instantiate the datasets training_dataset = get_training_dataset(training_images_file, training_labels_file, global_batch_size) validation_dataset = get_validation_dataset(validation_images_file, validation_labels_file) Explanation: tf.data.Dataset: parse files and prepare training and validation datasets Please read the best practices for building input pipelines with tf.data.Dataset End of explanation N = 24 (training_digits, training_labels, validation_digits, validation_labels) = dataset_to_numpy_util(training_dataset, validation_dataset, N) display_digits(training_digits, training_labels, training_labels, "training digits and their labels", N) display_digits(validation_digits[:N], validation_labels[:N], validation_labels[:N], "validation digits and their labels", N) font_digits, font_labels = create_digits_from_local_fonts(N) Explanation: Let's have a look at the data End of explanation # This model trains to 99.4%— sometimes 99.5%— accuracy in 10 epochs (with a batch size of 64) def make_model(): model = tf.keras.Sequential( [ tf.keras.layers.Reshape(input_shape=(28*28,), target_shape=(28, 28, 1)), tf.keras.layers.Conv2D(filters=6, kernel_size=3, padding='same', use_bias=True, activation='relu'), tf.keras.layers.Conv2D(filters=12, kernel_size=6, padding='same', use_bias=True, activation='relu', strides=2), tf.keras.layers.Conv2D(filters=24, kernel_size=6, padding='same', use_bias=True, activation='relu', strides=2), tf.keras.layers.Flatten(), tf.keras.layers.Dense(200, use_bias=True, activation='relu'), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', # learning rate will be set by LearningRateScheduler loss='categorical_crossentropy', metrics=['accuracy']) return model with strategy.scope(): # the new way of handling distribution strategies in Tensorflow 1.14+ model = make_model() # print model layers model.summary() # set up Tensorboard logs timestamp = time.strftime("%Y-%m-%d-%H-%M-%S") log_dir=os.path.join(BUCKET, 'mnist-logs', timestamp) tb_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, update_freq=50*global_batch_size) print("Tensorboard loggs written to: ", log_dir) Explanation: Keras model: 3 
convolutional layers, 2 dense layers End of explanation EPOCHS = 10 steps_per_epoch = 60000//global_batch_size # 60,000 items in this dataset print("Step (batches) per epoch: ", steps_per_epoch) history = model.fit(training_dataset, steps_per_epoch=steps_per_epoch, epochs=EPOCHS, validation_data=validation_dataset, validation_steps=1, callbacks=[tb_callback], verbose=1) Explanation: Train and validate the model End of explanation # recognize digits from local fonts probabilities = model.predict(font_digits, steps=1) predicted_labels = np.argmax(probabilities, axis=1) display_digits(font_digits, predicted_labels, font_labels, "predictions from local fonts (bad predictions in red)", N) # recognize validation digits probabilities = model.predict(validation_digits, steps=1) predicted_labels = np.argmax(probabilities, axis=1) display_top_unrecognized(validation_digits, predicted_labels, validation_labels, N, 7) Explanation: Visualize predictions End of explanation
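One gap worth noting in the training code above: model.compile is annotated with "learning rate will be set by LearningRateScheduler", but the model.fit call shown only passes the TensorBoard callback. A minimal sketch of what that scheduler callback could look like follows; the exponential decay factor 0.9 is an illustrative assumption, not a value from the original notebook.
# Sketch only — assumes the 'learning_rate' variable and 'tb_callback' defined above.
lr_callback = tf.keras.callbacks.LearningRateScheduler(
    lambda epoch: learning_rate * 0.9 ** epoch, verbose=1)
# It would then be passed to model.fit together with the TensorBoard callback:
# history = model.fit(..., callbacks=[tb_callback, lr_callback], ...)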
8,280
Given the following text description, write Python code to implement the functionality described below step by step Description: 01 - Raw Scraping What we do here is the scraping of the Swiss Parliament website from the hierarchy described in the metadata we were given. We were able to create a typical URL from which we start our queries for the different fields, as we will describe below. This URL is saved in the base_url.txt and has the form https Step1: 1. Key Parsing Functions Before starting, we just check whether the path we will use as our data folder exists. If it does, we store it there. Step3: parse_data is the first parsing function that we use. Given a completed url, it fetches a dictionary which contains everything related to a field, and formats it into a DataFrame. For instance, given a get query associated to the LegislativePeriod, it will get all the fields in it (e.g. ID, Language, LegislativePeriodNumber, ...), then format it in a DataFrame where the ID will be the row, and the other fields the column. Important note Step5: save_data is our most important function here. It allows us to form the url with which we want to make our query, then calls the parse_data function and finally stores the resuting DataFrame to the corresponding directory. TODO Step6: 2. Actual Parsing of the Data Now that our parsing functions were defined, we can use them to retrieve the data we need from the website. Every cell will perform the query for one specific field, and the data we retrieve will allow us to go further down into the data tree (cf. the visualisation of the hierarchy of the data with XOData) and retrieve the fields we are interested with. We especially need to get the Transcripts and Voting fields, as the first describes all the discussions that happen during a session at the Parliament and the second all the results of the votes that happen during a session. Those are the two essential components to the machine learning we'll do later on. We process from the highest "branch" of the tree of data, namely the Legislative Period. This is the year during which the Parliament met, which each time two sessions. 2.1. Saving Legislative Data Every cell will roughly work the same, we give first the specific link to access the field if it requires it (otherwise we use the base_url that we described before). We set the directory in which we save our data and then retrieve it. We need to retrieve the IDs from the Legislative Period in order to be able to keep on querying, as querying with an invalid Legislative Period ID will lead to the crashing of the request. Step7: We see that the IDs are continuous, ranging from 37 to 50. We will then query all the data which have their LegislativePeriodID between those two bounds. The first line gives us the link we build to make the query. 2.2 Saving Vote Data A close field that we can access from the LegislativePeriod is the Vote field. It is the interface between an object that is voted at the parliament at a specific time and the results of this vote, which are available in the Voting field. We will then query all the votes that happened for the given Legislative Periods we're considering. We see below that the IDs range from 1 to 17983, which isn't unique as an object will be voted several times, go between the National Council and the State Council iteratively until the issue is accepted. Step8: 2.3. Saving Session Data The Session field helps us identify precisely when an object was voted. 
It is due to the fact that, for a given Legislative Period, which basically is a year, there are several Sessions, usually a winter and a summer one, and there might be some special ones as well. Step9: 2.4 Saving Voting Data. The routine below is not very efficient and is operated manually, being ran several times in order to obtain all the Voting items. This is why it shouldn't be ran just in its current state. The complications come from the fact that are the Voting IDs aren't contiguous, and that querying an unexisting ID will make the query crash. Moreover, we can only query the IDs two by two, we would otherwise receive a timeout as a response. This is why we need to do the following. It is due to the fact that each ID encapsulates a lot of Data. Step10: 2.5 Saving Meeting Data. This field depends on the session, and records every Meeting that each chambers of the parliament has during a session. It is necessary for us to have it in order to be able to access the Transcript field later, which is the transcription of every Subject that is discussed during a Meeting of any of the chambers during a Session of a Legislative Period. Nothing very surprising here. We access it from the Session field, that's why we need to record the session_id. Step11: 2.6 Saving Subject Data. The field, as we described above, contains all the Subjects which are discussed during a single Meeting, and we hence need the meeting_id list to be able to query it. It is the last step before being able to access to the much desired Transcript data and retrieve it in a coherent way.. Step12: 2.7. Saving Transcript Data Now that we have the list of the Subject_id, we are finally able to get the Transcript field, a record that everything discussed at the parliament, on which we will base our Natural Language Processing Analysis later on. The query is a bit complicated. TODO
Python Code: from bs4 import BeautifulSoup import urllib.request import pandas as pd import html5lib from lxml import * import numpy as np import xmljson from xmljson import badgerfish as bf from json import * import xml.etree.ElementTree as ET from io import StringIO import webbrowser import requests import os as os Explanation: 01 - Raw Scraping What we do here is the scraping of the Swiss Parliament website from the hierarchy described in the metadata we were given. We were able to create a typical URL from which we start our queries for the different fields, as we will describe below. This URL is saved in the base_url.txt and has the form https://ws.parlament.ch/odata.svc/[]?$filter=()%20gt%20{0}L%20and%20()%20lt%20{1}L%20and%20Language%20eq%20%27FR%27 with a lot of missing fields that will be completed for a specific query further down. 0. Usual Imports A lot of libraries are required to successfully scrap the data and put them to csv files. To install pip install xmljson End of explanation if not os.path.exists("../datas"): os.makedirs("../datas") if not os.path.exists("../datas/scrap"): os.makedirs("../datas/scrap") directory = '../datas/scrap' Explanation: 1. Key Parsing Functions Before starting, we just check whether the path we will use as our data folder exists. If it does, we store it there. End of explanation def parse_data(base_url) : Fetches and parses data from the base_url given as parameter. Then formats it into a DataFrame which is returned. The quadruple for loop is due to the particular form of the website. @param base_url : the precise get request we want to formulate. @return data : a DataFrame which contains the formatted result from the query. with urllib.request.urlopen(base_url) as url: s = url.read() root = ET.fromstring(s) dict_ = {} base = "{http://www.w3.org/2005/Atom}" for child in root.iter(base+'entry'): for children in child.iter(base+'content') : for properties in children : for subject in properties : #print(subject.text) s = subject.tag.split('}') if s[1] in dict_ : dict_[s[1]].append(subject.text) else : dict_[s[1]] = [subject.text] data = pd.DataFrame(dict_) return data Explanation: parse_data is the first parsing function that we use. Given a completed url, it fetches a dictionary which contains everything related to a field, and formats it into a DataFrame. For instance, given a get query associated to the LegislativePeriod, it will get all the fields in it (e.g. ID, Language, LegislativePeriodNumber, ...), then format it in a DataFrame where the ID will be the row, and the other fields the column. Important note : The Language field, present in every field, will systematically be filtered with the French entries (FR), because the only thing that it changes is for instance the name of the partys that will be in German instead of french and so on. The same goes for the other languages. End of explanation def save_data(parent_id, directory, id_name, parent_name = None, subject = None, url = None) : Forms the correct url to use to query the website. Then, fetches the data and parses them into a usable csv file. @param parent_id : the ID of the parent field (i.e. the last one we parsed and from which we got the ID -> necessary to make the query on the correct IDs) @param directory : the directory in which the parsing will be saved @param id_name : the name of the parent field for exporting purposes @param parent_name : the name of the parent field formatted for the query @param subject : The topic we're currently parsing. 
@param url : the specific url for the topic we're treating, if we need a special one (otherwise we load the base_url) @return index : the indices on which our data range @return data : the formatted DataFrame containing all the infos that were scraped. if url == None : with open('base_url.txt', 'r') as myfile: url=myfile.read() if subject != None : url = url.replace('[]',subject) if parent_name != None : url = url.replace('()',parent_name) url = url.replace('{0}',str(np.maximum(min(parent_id)-1,0))) url = url.replace('{1}',str(max(parent_id)+1)) print(url) data = parse_data(url) # The website might return empty data. In the case where that happens, we return nothing. # In the case where useful data is returned, we save it to a specific location with a name given # by the parameters we passed. if not data.empty : if not os.path.exists(directory): os.makedirs(directory) index = list(map(int, data['ID'].unique().tolist())) data.to_csv(directory+'/'+id_name+ 'id_'+str(min(parent_id))+'-'+str(max(parent_id))+'.csv') return index ,data else : return None Explanation: save_data is our most important function here. It allows us to form the url with which we want to make our query, then calls the parse_data function and finally stores the resuting DataFrame to the corresponding directory. TODO : describe why the completing is done like that TODO : understand the id_ "game". End of explanation legislative_url ="https://ws.parlament.ch/odata.svc/LegislativePeriod?$filter=LegislativePeriodNumber%20gt%20{0}%20and%20LegislativePeriodNumber%20lt%20{1}" base_legi_directory = directory+ "/legi" legi_periode_id, _ = save_data([0,100], base_legi_directory,'legi',None,None,legislative_url) print(legi_periode_id) Explanation: 2. Actual Parsing of the Data Now that our parsing functions were defined, we can use them to retrieve the data we need from the website. Every cell will perform the query for one specific field, and the data we retrieve will allow us to go further down into the data tree (cf. the visualisation of the hierarchy of the data with XOData) and retrieve the fields we are interested with. We especially need to get the Transcripts and Voting fields, as the first describes all the discussions that happen during a session at the Parliament and the second all the results of the votes that happen during a session. Those are the two essential components to the machine learning we'll do later on. We process from the highest "branch" of the tree of data, namely the Legislative Period. This is the year during which the Parliament met, which each time two sessions. 2.1. Saving Legislative Data Every cell will roughly work the same, we give first the specific link to access the field if it requires it (otherwise we use the base_url that we described before). We set the directory in which we save our data and then retrieve it. We need to retrieve the IDs from the Legislative Period in order to be able to keep on querying, as querying with an invalid Legislative Period ID will lead to the crashing of the request. End of explanation base_vote_directory= directory + "/Vote" vote_id , _ = save_data(legi_periode_id,base_vote_directory,'legi','IdLegislativePeriod','Vote') print(str(min(vote_id))+' '+str(max(vote_id))) Explanation: We see that the IDs are continuous, ranging from 37 to 50. We will then query all the data which have their LegislativePeriodID between those two bounds. The first line gives us the link we build to make the query. 
2.2 Saving Vote Data A close field that we can access from the LegislativePeriod is the Vote field. It is the interface between an object that is voted at the parliament at a specific time and the results of this vote, which are available in the Voting field. We will then query all the votes that happened for the given Legislative Periods we're considering. We see below that the IDs range from 1 to 17983, which isn't unique as an object will be voted several times, go between the National Council and the State Council iteratively until the issue is accepted. End of explanation base_session_directory= directory+ "/Session" session_id, _ = save_data(legi_periode_id,base_session_directory,'Legi','LegislativePeriodNumber','Session') print(str(min(session_id))+' '+str(max(session_id))) Explanation: 2.3. Saving Session Data The Session field helps us identify precisely when an object was voted. It is due to the fact that, for a given Legislative Period, which basically is a year, there are several Sessions, usually a winter and a summer one, and there might be some special ones as well. End of explanation base_voting_directory= directory+ "/Voting" # Iterate over some specific range ot data for i in range(np.int16((5005-4811)/2)+1) : # Particular URL to get the Voting field from. url ="https://ws.parlament.ch/odata.svc/Voting/$count?$filter=Language%20eq%20%27FR%27%20and%20IdSession%20ge%20[1]%20and%20IdSession%20le%20[2]" session_id.sort() # ID of the Voting we query id_ = 4811+2*i # Query items two by two take_id = [id_,id_+1] url = url.replace('[1]',str(min(take_id))) url = url.replace('[2]',str(max(take_id))) with urllib.request.urlopen(url) as url: s = url.read() print("count equals ====>" + str(s)) vote_id , _ = save_data(take_id,base_voting_directory,'Session','IdSession','Voting') print(str(min(vote_id))+' '+str(max(vote_id))) Explanation: 2.4 Saving Voting Data. The routine below is not very efficient and is operated manually, being ran several times in order to obtain all the Voting items. This is why it shouldn't be ran just in its current state. The complications come from the fact that are the Voting IDs aren't contiguous, and that querying an unexisting ID will make the query crash. Moreover, we can only query the IDs two by two, we would otherwise receive a timeout as a response. This is why we need to do the following. It is due to the fact that each ID encapsulates a lot of Data. End of explanation base_transcript_directory= directory+ "/Meeting" meeting_id,_ = save_data(session_id,base_transcript_directory,'Session','IdSession','Meeting') print(str(min(meeting_id))+' '+str(max(meeting_id))) Explanation: 2.5 Saving Meeting Data. This field depends on the session, and records every Meeting that each chambers of the parliament has during a session. It is necessary for us to have it in order to be able to access the Transcript field later, which is the transcription of every Subject that is discussed during a Meeting of any of the chambers during a Session of a Legislative Period. Nothing very surprising here. We access it from the Session field, that's why we need to record the session_id. End of explanation base_subject_directory= directory+ "/Subject" subject_id, _ = save_data(meeting_id,base_subject_directory,'Meeting','IdMeeting','Subject') print(str(min(Subject_id))+' '+str(max(Subject_id))) Explanation: 2.6 Saving Subject Data. 
The field, as we described above, contains all the Subjects which are discussed during a single Meeting, and we hence need the meeting_id list to be able to query it. It is the last step before being able to access to the much desired Transcript data and retrieve it in a coherent way.. End of explanation base_transcript_directory= directory+ "/Transcript" max_transcript_id = 206649 transcript_id = [0] while max(transcript) < max_transcript_id : transcript, transcript = save_data(subject_id,base_transcript_directory,'Subject','IdSubject','Transcript') max_id = max(list(map(int,transcript['IdSubject']))) subject_id = [i for i in subject_id if i > max_id] print(str(min(transcript))+' '+str(max(transcript))) legislative_url ="https://ws.parlament.ch/odata.svc/[]?$filter=()%20gt%20{0}L%20and%20()%20lt%20{1}L%20and%20Language%20eq%20%27FR%27" base_legi_directory = directory+ "/MemberCouncil" legi_periode_id =[5000] for i in range(10): legi_periode_id, _ = save_data([max(legi_periode_id),max(legi_periode_id)+1000], base_legi_directory,'MemberCouncil','ID','MemberCouncil',legislative_url) #print(legi_periode_id) Explanation: 2.7. Saving Transcript Data Now that we have the list of the Subject_id, we are finally able to get the Transcript field, a record that everything discussed at the parliament, on which we will base our Natural Language Processing Analysis later on. The query is a bit complicated. TODO : Explain why End of explanation
8,281
Given the following text description, write Python code to implement the functionality described below step by step Description: Save this file as studentid1_studentid2_lab#.ipynb (Your student-id is the number shown on your student card.) E.g. if you work with 3 people, the notebook should be named Step1: Lab 3 Step2: Part 1 Step3: 1. Sampling from the Gaussian process prior (30 points) We will implement Gaussian process regression using the kernel function in Bishop Eqn. 6.63. 1.1 k_n_m( xn, xm, thetas ) (5 points) To start, implement function k_n_m(xn, xm, thetas) that takes scalars $x_n$ and $x_m$, and a vector of $4$ thetas, and computes the kernel function Bishop Eqn. 6.63 (10 points). NB Step4: 1.2 computeK( X1, X2, thetas ) (10 points) Eqn 6.60 is the marginal distribution of mean output of $N$ data vectors Step5: 1.3 Plot function samples (15 points) Now sample mean functions at the x_test locations for the theta values in Bishop Figure 6.5, make a figure with a 2 by 3 subplot and make sure the title reflects the theta values (make sure everything is legible). In other words, sample $\by_i \sim \mathcal{N}(0, \mathbf{K}_{\theta})$. Make use of numpy.random.multivariate_normal(). On your plots include the expected value of $\by$ with a dashed line and fill_between 2 standard deviations of the uncertainty due to $\mathbf{K}$ (the diagonal of $\mathbf{K}$ is the variance of the model uncertainty) (15 points). Step6: 2. Predictive distribution (35 points) So far we have sampled mean functions from the prior. We can draw actual data $\bt$ two ways. The first way is generatively, by first sampling $\by | \mathbf{K}$, then sampling $\bt | \by, \beta$ (Eqns 6.60 followed by 6.59). The second way is to integrate over $\by$ (the mean draw) and directly sample $\bt | \mathbf{K}, \beta$ using Eqn 6.61. This is the generative process for $\bt$. Note that we have not specified a distribution over inputs $\bx$; this is because Gaussian processes are conditional models. Because of this we are free to generate locations $\bx$ when playing around with the GP; obviously a dataset will give us input-output pairs. Once we have data, we are interested in the predictive distribution (note Step7: 2.2 gp_log_likelihood(...) (10 points) To learn the hyperparameters, we would need to compute the log-likelihood of the of the training data. Implicitly, this is conditioned on the value setting for $\mathbf{\theta}$. Write a function gp_log_likelihood(x_train, t_train, theta, C=None, invC=None, beta=None), where C and invC can be stored and reused. It should return the log-likelihood, C and invC (10 points) Step8: 2.3 Plotting (10 points) Repeat the 6 plots above, but this time conditioned on the training points. Use the periodic data generator to create 2 training points where x is sampled uniformly between $-1$ and $1$. For these plots, feel free to use the provided function "gp_plot". Make sure you put the parameters in the title and this time also the log-likelihood. Try to understand the two types of uncertainty! If you do not use gp_plot(...), please add a fill between for the model and target noise. (10 points) Step9: 2.4 More plotting (5 points) Repeat the 6 plots above, but this time conditioned a new set of 10 training points. (5 points) Step10: Part 2 Step11: b) (10 points) In the next step we will combine the two datasets X_1, X_2 and generate a vector t containing the labels. Write a function create_X_and_t(X1, X2) it should return the combined data set X and the corresponding target vector t. 
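One possible sketch of such a function (illustrative only; the -1/+1 labels simply follow the usual SVM convention) stacks the two subsets and builds a matching label vector:
import numpy as np

def create_X_and_t_sketch(X1, X2):
    # Combine both subsets into one design matrix and label them -1 (first class) and +1 (second class).
    X = np.vstack((X1, X2))
    t = np.hstack((-np.ones(len(X1)), np.ones(len(X2))))
    return X, t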
Step12: 2.2 Finding the support vectors (15 points) Finally we going to use a SVM to obtain the decision boundary for which the margin is maximized. We have to solve the optimization problem \begin{align} \arg \min_{\bw, b} \frac{1}{2} \lVert \bw \rVert^2, \end{align} subject to the constraints \begin{align} t_n(\bw^T \phi(\bx_n) + b) \geq 1, n = 1,...,N. \end{align} In order to solve this constrained optimization problem, we introduce Lagrange multipliers $a_n \geq 0$. We obtain the dual representation of the maximum margin problem in which we maximize \begin{align} \sum_{n=1}^N a_n - \frac{1}{2}\sum_{n=1}^N\sum_{m=1}^N a_n a_m t_n t_m k(\bx_n, \bx_m), \end{align} with respect to a subject to the constraints \begin{align} a_n &\geq 0, n=1,...,N,\ \sum_{n=1}^N a_n t_n &= 0. \end{align} This takes the form of a quadratic programming problem in which we optimize a quadratic function of a subject to a set of inequality constraints. a) (5 points) In this example we will use a linear kernel $k(\bx, \bx') = \bx^T\bx'$. Write a function computeK(X) that computes the kernel matrix $K$ for the 2D dataset $X$. Step13: Next, we will rewrite the dual representation so that we can make use of computationally efficient vector-matrix multiplication. The objective becomes \begin{align} \min_{\ba} \frac{1}{2} \ba^T K' \ba - 1^T\ba, \end{align} subject to the constraints \begin{align} a_n &\geq 0, n=1,...,N,\ \bt^T\ba &= 0. \end{align} Where \begin{align} K'{nm} = t_n t_m k(\bx_n, \bx_m), \end{align} and in the special case of a linear kernel function, \begin{align} K'{nm} = t_n t_m k(\bx_n, \bx_m) = k(t_n \bx_n, t_m \bx_m). \end{align} To solve the quadratic programming problem we will use a python module called cvxopt. You first have to install the module in your virtual environment (you have to activate it first), using the following command Step14: 2.3 Plot support vectors (5 points) Now that we have obtained the lagrangian multipliers $\ba$, we use them to find our support vectors. Repeat the plot from 2.1, this time use a third color to indicate which samples are the support vectors. Step15: 2.4 Plot the decision boundary (10 Points) The decision boundary is fully specified by a (usually very small) subset of training samples, the support vectors. Make use of \begin{align} \bw &= \sum_{n=1}^N a_n t_n \mathbf{\phi}(\bx_n)\ b &= \frac{1}{N_S}\sum_{n \in S} (t_n - \sum_{m \in S} a_m t_m k(\bx_n, \bx_m)), \end{align} where $S$ denotes the set of indices of the support vectors, to calculate the slope and intercept of the decision boundary. Generate a last plot that contains the two subsets, support vectors and decision boundary.
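Concretely, in this two-dimensional linear case the decision boundary is the set of points satisfying $\bw^T\bx + b = 0$; writing out the components $w_1, w_2$ of $\bw$ and rearranging gives
\begin{align}
x_2 = -\frac{w_1 x_1 + b}{w_2},
\end{align}
so the slope of the plotted line is $-w_1/w_2$ and its intercept is $-b/w_2$.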
Python Code: NAME = "Michelle Appel" NAME2 = "Verna Dankers" NAME3 = "Yves van Montfort" EMAIL = "michelle.appel@student.uva.nl" EMAIL2 = "verna.dankers@student.uva.nl" EMAIL3 = "yves.vanmontfort@student.uva.nl" Explanation: Save this file as studentid1_studentid2_lab#.ipynb (Your student-id is the number shown on your student card.) E.g. if you work with 3 people, the notebook should be named: 12301230_3434343_1238938934_lab1.ipynb. This will be parsed by a regexp, so please double check your filename. Before you turn this problem in, please make sure everything runs correctly. First, restart the kernel (in the menubar, select Kernel$\rightarrow$Restart) and then run all cells (in the menubar, select Cell$\rightarrow$Run All). Make sure you fill in any place that says YOUR CODE HERE or "YOUR ANSWER HERE", as well as your names and email adresses below. End of explanation %pylab inline plt.rcParams["figure.figsize"] = [20,10] Explanation: Lab 3: Gaussian Processes and Support Vector Machines Machine Learning 1, September 2017 Notes on implementation: You should write your code and answers in this IPython Notebook: http://ipython.org/notebook.html. If you have problems, please contact your teaching assistant. Please write your answers right below the questions. Among the first lines of your notebook should be "%pylab inline". This imports all required modules, and your plots will appear inline. Refer to last week's lab notes, i.e. http://docs.scipy.org/doc/, if you are unsure about what function to use. There are different correct ways to implement each problem! use the provided test boxes to check if your answers are correct End of explanation def true_mean_function(x): return np.cos(2*pi*(x+1)) def add_noise(y, sigma): return y + sigma*np.random.randn(len(y)) def generate_t(x, sigma): return add_noise(true_mean_function(x), sigma) sigma = 0.2 beta = 1.0 / pow(sigma, 2) N_test = 100 x_test = np.linspace(-1, 1, N_test) mu_test = np.zeros(N_test) y_test = true_mean_function(x_test) t_test = add_noise(y_test, sigma) plt.plot( x_test, y_test, 'b-', lw=2) plt.plot( x_test, t_test, 'go') plt.show() Explanation: Part 1: Gaussian Processes For part 1 we will be refer to Bishop sections 6.4.2 and 6.4.3. You may also want to refer to Rasmussen's Gaussian Process text which is available online at http://www.gaussianprocess.org/gpml/chapters/ and especially to the project found at https://www.automaticstatistician.com/index/ by Ghahramani for some intuition in GP. To understand Gaussian processes, it is highly recommended understand how marginal, partitioned Gaussian distributions can be converted into conditional Gaussian distributions. This is covered in Bishop 2.3 and summarized in Eqns 2.94-2.98. $\newcommand{\bt}{\mathbf{t}}$ $\newcommand{\bx}{\mathbf{x}}$ $\newcommand{\by}{\mathbf{y}}$ $\newcommand{\bw}{\mathbf{w}}$ $\newcommand{\ba}{\mathbf{a}}$ Periodic Data We will use the same data generating function that we used previously for regression. End of explanation def k_n_m(xn, xm, thetas): theta0, theta1, theta2, theta3 = thetas # Unpack thetas if(xn == xm): k = theta0 + theta2 + theta3*xn**2 else: k = theta0 * np.exp(-(theta1/2)*(xn-xm)**2) + theta2 + theta3*xn*xm return k Explanation: 1. Sampling from the Gaussian process prior (30 points) We will implement Gaussian process regression using the kernel function in Bishop Eqn. 6.63. 
1.1 k_n_m( xn, xm, thetas ) (5 points) To start, implement function k_n_m(xn, xm, thetas) that takes scalars $x_n$ and $x_m$, and a vector of $4$ thetas, and computes the kernel function Bishop Eqn. 6.63 (10 points). NB: usually the kernel function will take $D$ by $1$ vectors, but since we are using a univariate problem, this makes things easier. End of explanation def computeK(x1, x2, thetas): K = np.zeros(shape=(len(x1), len(x2))) # Create empty array for xn, row in zip(x1, range(len(x1))): # Iterate over x1 for xm, column in zip(x2, range(len(x2))): # Iterate over x2 K[row, column] = k_n_m(xn, xm, thetas) # Add kernel to matrix return K x1 = [0, 0, 1] x2 = [0, 0, 1] thetas = [1, 2, 3, 1] K = computeK(x1, x2, thetas) ### Test your function x1 = [0, 1, 2] x2 = [1, 2, 3, 4] thetas = [1, 2, 3, 4] K = computeK(x1, x2, thetas) assert K.shape == (len(x1), len(x2)), "the shape of K is incorrect" Explanation: 1.2 computeK( X1, X2, thetas ) (10 points) Eqn 6.60 is the marginal distribution of mean output of $N$ data vectors: $p(\mathbf{y}) = \mathcal{N}(0, \mathbf{K})$. Notice that the expected mean function is $0$ at all locations, and that the covariance is a $N$ by $N$ kernel matrix $\mathbf{K}$. Write a function computeK(x1, x2, thetas) that computes the kernel matrix. Use k_n_m as part of an inner loop (of course, there are more efficient ways of computing the kernel function making better use of vectorization, but that is not necessary) (5 points). End of explanation import matplotlib.pyplot as plt # The thetas thetas0 = [1, 4, 0, 0] thetas1 = [9, 4, 0, 0] thetas2 = [1, 64, 0, 0] thetas3 = [1, 0.25, 0, 0] thetas4 = [1, 4, 10, 0] thetas5 = [1, 4, 0, 5] f, ((ax1, ax2, ax3), (ax4, ax5, ax6)) = plt.subplots(2, 3, sharex='col', sharey='row') # Subplot setup all_thetas = [thetas0, thetas1, thetas2, thetas3, thetas4, thetas5] # List of all thetas all_plots = [ax1, ax2, ax3, ax4, ax5, ax6] # List of all plots for subplot_, theta_ in zip(all_plots, all_thetas): # Iterate over all plots and thetas K = computeK(x_test, x_test, theta_) # Compute K # Fix numerical error on eigenvalues 0 that are slightly negative (<e^15) min_eig = np.min(np.real(np.linalg.eigvals(K))) if min_eig < 0: K -= 10*min_eig * np.eye(*K.shape) mean = np.zeros(shape=len(K)) # Generate Means random = numpy.random.multivariate_normal(mean, K) # Generate random datapoints from multivariate normal uncertainties = [] for i in range(len(K)): uncertainties.append(K[i,i]) subplot_.plot(random) # Plot random generated x = np.arange(0, 100) # 100 Steps subplot_.fill_between(x, random, random + uncertainties, alpha=0.3, color='pink') # Plot uncertainty subplot_.fill_between(x, random, random - uncertainties, alpha=0.3, color='pink') # Plot uncertainty subplot_.plot(y_test, 'g--') # Plot ground truth subplot_.legend(['predicted', 'ground truth']) # Add legend subplot_.set_title(theta_) # Set title plt.show() Explanation: 1.3 Plot function samples (15 points) Now sample mean functions at the x_test locations for the theta values in Bishop Figure 6.5, make a figure with a 2 by 3 subplot and make sure the title reflects the theta values (make sure everything is legible). In other words, sample $\by_i \sim \mathcal{N}(0, \mathbf{K}_{\theta})$. Make use of numpy.random.multivariate_normal(). On your plots include the expected value of $\by$ with a dashed line and fill_between 2 standard deviations of the uncertainty due to $\mathbf{K}$ (the diagonal of $\mathbf{K}$ is the variance of the model uncertainty) (15 points). 
End of explanation def gp_predictive_distribution(x_train, t_train, x_test, theta, beta, C=None): # YOUR CODE HERE raise NotImplementedError() return mean_test, var_test, C ### Test your function N = 2 train_x = np.linspace(-1, 1, N) train_t = 2*train_x test_N = 3 test_x = np.linspace(-1, 1, test_N) theta = [1, 2, 3, 4] beta = 25 test_mean, test_var, C = gp_predictive_distribution(train_x, train_t, test_x, theta, beta, C=None) assert test_mean.shape == (test_N,), "the shape of mean is incorrect" assert test_var.shape == (test_N, test_N), "the shape of var is incorrect" assert C.shape == (N, N), "the shape of C is incorrect" C_in = np.array([[0.804, -0.098168436], [-0.098168436, 0.804]]) _, _, C_out = gp_predictive_distribution(train_x, train_t, test_x, theta, beta, C=C_in) assert np.allclose(C_in, C_out), "C is not reused!" Explanation: 2. Predictive distribution (35 points) So far we have sampled mean functions from the prior. We can draw actual data $\bt$ two ways. The first way is generatively, by first sampling $\by | \mathbf{K}$, then sampling $\bt | \by, \beta$ (Eqns 6.60 followed by 6.59). The second way is to integrate over $\by$ (the mean draw) and directly sample $\bt | \mathbf{K}, \beta$ using Eqn 6.61. This is the generative process for $\bt$. Note that we have not specified a distribution over inputs $\bx$; this is because Gaussian processes are conditional models. Because of this we are free to generate locations $\bx$ when playing around with the GP; obviously a dataset will give us input-output pairs. Once we have data, we are interested in the predictive distribution (note: the prior is the predictive distribution when there is no data). Consider the joint distribution for $N+1$ targets, given by Eqn 6.64. Its covariance matrix is composed of block components $C_N$, $\mathbf{k}$, and $c$. The covariance matrix $C_N$ for $\bt_N$ is $C_N = \mathbf{K}_N + \beta^{-1}\mathbf{I}_N$. We have just made explicit the size $N$ of the matrix; $N$ is the number of training points. The kernel vector $\mathbf{k}$ is a $N$ by $1$ vector of kernel function evaluations between the training input data and the test input vector. The scalar $c$ is a kernel evaluation at the test input. 2.1 gp_predictive_distribution(...) (10 points) Write a function gp_predictive_distribution(x_train, t_train, x_test, theta, beta, C=None) that computes Eqns 6.66 and 6.67, except allow for an arbitrary number of test points (not just one) and now the kernel matrix is for training data. By having C as an optional parameter, we can avoid computing it more than once (for this problem it is unimportant, but for real problems this is an issue). The function should compute $\mathbf{C}$, $\mathbf{k}$, and return the mean, variance and $\mathbf{C}$. Do not forget: the computeK function computes $\mathbf{K}$, not $\mathbf{C}$! 
(10 points) End of explanation def gp_log_likelihood(x_train, t_train, theta, beta, C=None, invC=None): # YOUR CODE HERE raise NotImplementedError() return lp, C, invC ### Test your function N = 2 train_x = np.linspace(-1, 1, N) train_t = 2 * train_x theta = [1, 2, 3, 4] beta = 25 lp, C, invC = gp_log_likelihood(train_x, train_t, theta, beta, C=None, invC=None) assert lp < 0, "the log-likelihood should smaller than 0" assert C.shape == (N, N), "the shape of var is incorrect" assert invC.shape == (N, N), "the shape of C is incorrect" C_in = np.array([[0.804, -0.098168436], [-0.098168436, 0.804]]) _, C_out, _ = gp_log_likelihood(train_x, train_t, theta, beta, C=C_in, invC=None) assert np.allclose(C_in, C_out), "C is not reused!" invC_in = np.array([[1.26260453, 0.15416407], [0.15416407, 1.26260453]]) _, _, invC_out = gp_log_likelihood(train_x, train_t, theta, beta, C=None, invC=invC_in) assert np.allclose(invC_in, invC_out), "invC is not reused!" Explanation: 2.2 gp_log_likelihood(...) (10 points) To learn the hyperparameters, we would need to compute the log-likelihood of the of the training data. Implicitly, this is conditioned on the value setting for $\mathbf{\theta}$. Write a function gp_log_likelihood(x_train, t_train, theta, C=None, invC=None, beta=None), where C and invC can be stored and reused. It should return the log-likelihood, C and invC (10 points) End of explanation def gp_plot( x_test, y_test, mean_test, var_test, x_train, t_train, theta, beta ): # x_test: # y_test: the true function at x_test # mean_test: predictive mean at x_test # var_test: predictive covariance at x_test # t_train: the training values # theta: the kernel parameters # beta: the precision (known) # the reason for the manipulation is to allow plots separating model and data stddevs. std_total = np.sqrt(np.diag(var_test)) # includes all uncertainty, model and target noise std_model = np.sqrt(std_total**2 - 1.0/beta) # remove data noise to get model uncertainty in stddev std_combo = std_model + np.sqrt(1.0/beta) # add stddev (note: not the same as full) plt.plot(x_test, y_test, 'b', lw=3) plt.plot(x_test, mean_test, 'k--', lw=2) plt.fill_between(x_test, mean_test+2*std_combo,mean_test-2*std_combo, color='k', alpha=0.25) plt.fill_between(x_test, mean_test+2*std_model,mean_test-2*std_model, color='r', alpha=0.25) plt.plot(x_train, t_train, 'ro', ms=10) # YOUR CODE HERE raise NotImplementedError() Explanation: 2.3 Plotting (10 points) Repeat the 6 plots above, but this time conditioned on the training points. Use the periodic data generator to create 2 training points where x is sampled uniformly between $-1$ and $1$. For these plots, feel free to use the provided function "gp_plot". Make sure you put the parameters in the title and this time also the log-likelihood. Try to understand the two types of uncertainty! If you do not use gp_plot(...), please add a fill between for the model and target noise. (10 points) End of explanation # YOUR CODE HERE raise NotImplementedError() Explanation: 2.4 More plotting (5 points) Repeat the 6 plots above, but this time conditioned a new set of 10 training points. (5 points) End of explanation # YOUR CODE HERE # raise NotImplementedError() np.random.seed(1) plt.rcParams["figure.figsize"] = [10,10] # Cov should be diagonal (independency) and have the same values (identical), i.e. a*I. 
def create_X(mean, sig, N): return np.random.multivariate_normal(mean, sig * np.identity(2), N) m1 = [1, 1]; m2 = [3, 3] s1 = 1/2; s2 = 1/2 N1 = 20; N2 = 30 X1 = create_X(m1, s1, N1) X2 = create_X(m2, s2, N2) plt.figure() plt.axis('equal') plt.scatter(X1[:, 0], X1[:, 1], c='b', marker='o') plt.scatter(X2[:, 0], X2[:, 1], c='r', marker='o') plt.show() Explanation: Part 2: Support Vector Machines (45 points) As seen in Part 1: Gaussian Processes, one of the significant limitations of many such algorithms is that the kernel function $k(\bx_n , \bx_m)$ must be evaluated for all possible pairs $\bx_n$ and $\bx_m$ of training points, which can be computationally infeasible during training and can lead to excessive computation times when making predictions for new data points. In Part 2: Support Vector Machines, we shall look at kernel-based algorithms that have sparse solutions, so that predictions for new inputs depend only on the kernel function evaluated at a subset of the training data points. 2.1 Generating a linearly separable dataset (15 points) a) (5 points) First of all, we are going to create our own 2D toy dataset $X$. The dataset will consists of two i.i.d. subsets $X_1$ and $X_2$, each of the subsets will be sampled from a multivariate Gaussian distribution, \begin{align} X_1 \sim &\mathcal{N}(\mu_1, \Sigma_1)\ &\text{ and }\ X_2 \sim &\mathcal{N}(\mu_2, \Sigma_2). \end{align} In the following, $X_1$ will have $N_1=20$ samples and a mean $\mu_1=(1,1)$. $X_2$ will have $N_2=30$ samples and a mean $\mu_2=(3,3)$. Plot the two subsets in one figure, choose two colors to indicate which sample belongs to which subset. In addition you should choose, $\Sigma_1$ and $\Sigma_2$ in a way that the two subsets become linearly separable. (Hint: Which form has the covariance matrix for a i.i.d. dataset?) End of explanation def create_X_and_t(X1, X2): # YOUR CODE HERE # raise NotImplementedError() X1_len = X1.shape[0] X2_len = X2.shape[0] X = np.vstack((X1, X2)) t = np.hstack((-np.ones(X1_len), np.ones(X2_len))) # Shuffle data? indices = np.arange(X1_len + X2_len) np.random.shuffle(indices) return X[indices], t[indices] ### Test your function dim = 2 N1_test = 2 N2_test = 3 X1_test = np.arange(4).reshape((N1_test, dim)) X2_test = np.arange(6).reshape((N2_test, dim)) X_test, t_test = create_X_and_t(X1_test, X2_test) assert X_test.shape == (N1_test + N2_test, dim), "the shape of X is incorrect" assert t_test.shape == (N1_test + N2_test,), "the shape of t is incorrect" Explanation: b) (10 points) In the next step we will combine the two datasets X_1, X_2 and generate a vector t containing the labels. Write a function create_X_and_t(X1, X2) it should return the combined data set X and the corresponding target vector t. End of explanation def computeK(X): # YOUR CODE HERE # raise NotImplementedError() K = np.dot(X, X.T).astype('float') return K dim = 2 N_test = 3 X_test = np.arange(6).reshape((N_test, dim)) K_test = computeK(X_test) assert K_test.shape == (N_test, N_test) Explanation: 2.2 Finding the support vectors (15 points) Finally we going to use a SVM to obtain the decision boundary for which the margin is maximized. We have to solve the optimization problem \begin{align} \arg \min_{\bw, b} \frac{1}{2} \lVert \bw \rVert^2, \end{align} subject to the constraints \begin{align} t_n(\bw^T \phi(\bx_n) + b) \geq 1, n = 1,...,N. \end{align} In order to solve this constrained optimization problem, we introduce Lagrange multipliers $a_n \geq 0$. 
We obtain the dual representation of the maximum margin problem in which we maximize \begin{align} \sum_{n=1}^N a_n - \frac{1}{2}\sum_{n=1}^N\sum_{m=1}^N a_n a_m t_n t_m k(\bx_n, \bx_m), \end{align} with respect to a subject to the constraints \begin{align} a_n &\geq 0, n=1,...,N,\ \sum_{n=1}^N a_n t_n &= 0. \end{align} This takes the form of a quadratic programming problem in which we optimize a quadratic function of a subject to a set of inequality constraints. a) (5 points) In this example we will use a linear kernel $k(\bx, \bx') = \bx^T\bx'$. Write a function computeK(X) that computes the kernel matrix $K$ for the 2D dataset $X$. End of explanation import cvxopt def compute_multipliers(X, t): # YOUR CODE HERE # raise NotImplementedError() K = computeK(np.dot(np.diag(t), X)) q = cvxopt.matrix(-np.ones_like(t, dtype='float')) G = cvxopt.matrix(np.diag(-np.ones_like(t, dtype='float'))) A = cvxopt.matrix(t).T h = cvxopt.matrix(np.zeros_like(t, dtype='float')) b = cvxopt.matrix(0.0) P = cvxopt.matrix(K) sol = cvxopt.solvers.qp(P, q, G, h, A, b) a = np.array(sol['x']) return a ### Test your function dim = 2 N_test = 3 X_test = np.arange(6).reshape((N_test, dim)) t_test = np.array([-1., 1., 1.]) a_test = compute_multipliers(X_test, t_test) assert a_test.shape == (N_test, 1) Explanation: Next, we will rewrite the dual representation so that we can make use of computationally efficient vector-matrix multiplication. The objective becomes \begin{align} \min_{\ba} \frac{1}{2} \ba^T K' \ba - 1^T\ba, \end{align} subject to the constraints \begin{align} a_n &\geq 0, n=1,...,N,\ \bt^T\ba &= 0. \end{align} Where \begin{align} K'{nm} = t_n t_m k(\bx_n, \bx_m), \end{align} and in the special case of a linear kernel function, \begin{align} K'{nm} = t_n t_m k(\bx_n, \bx_m) = k(t_n \bx_n, t_m \bx_m). \end{align} To solve the quadratic programming problem we will use a python module called cvxopt. You first have to install the module in your virtual environment (you have to activate it first), using the following command: conda install -c conda-forge cvxopt The quadratic programming solver can be called as cvxopt.solvers.qp(P, q[, G, h[, A, b[, solver[, initvals]]]]) This solves the following problem, \begin{align} \min_{\bx} \frac{1}{2} \bx^T P \bx + q^T\bx, \end{align} subject to the constraints, \begin{align} G\bx &\leq h,\ A\bx &= b. \end{align} All we need to do is to map our formulation to the cvxopt interface. b) (10 points) Write a function compute_multipliers(X, t) that solves the quadratic programming problem using the cvxopt module and returns the lagrangian multiplier for every sample in the dataset. End of explanation # YOUR CODE HERE # raise NotImplementedError() np.random.seed(420) X, t = create_X_and_t(X1, X2) a_opt = compute_multipliers(X, t) sv_ind = np.nonzero(np.around(a_opt[:, 0])) X_sv = X[sv_ind] t_sv = t[sv_ind] a_sv = a_opt[sv_ind] plt.figure() plt.axis('equal') plt.scatter(X1[:, 0], X1[:, 1], c='b', marker='o') plt.scatter(X2[:, 0], X2[:, 1], c='r', marker='o') plt.scatter(X_sv[:, 0], X_sv[:, 1], s=200, facecolors='none', edgecolors='lime', linewidth='3') plt.show() Explanation: 2.3 Plot support vectors (5 points) Now that we have obtained the lagrangian multipliers $\ba$, we use them to find our support vectors. Repeat the plot from 2.1, this time use a third color to indicate which samples are the support vectors. 
End of explanation # YOUR CODE HERE # raise NotImplementedError() w_opt = np.squeeze(np.dot(a_opt.T, np.dot(np.diag(t), X))) K_sv = computeK(X_sv) N_sv = size(sv_ind) atk_sv = np.dot(a_sv.T * t_sv, K_sv) b = np.sum(t_sv - atk_sv)/N_sv x_lim = np.array([1, 4]) y_lim = (-w_opt[0] * x_lim - b) / w_opt[1] plt.figure() plt.axis('equal') plt.scatter(X1[:, 0], X1[:, 1], c='b', marker='o') plt.scatter(X2[:, 0], X2[:, 1], c='r', marker='o') plt.scatter(X_sv[:, 0], X_sv[:, 1], s=200, facecolors='none', edgecolors='lime', linewidth='3') plt.plot(x_lim, y_lim, c='black') plt.show() Explanation: 2.4 Plot the decision boundary (10 Points) The decision boundary is fully specified by a (usually very small) subset of training samples, the support vectors. Make use of \begin{align} \bw &= \sum_{n=1}^N a_n t_n \mathbf{\phi}(\bx_n)\ b &= \frac{1}{N_S}\sum_{n \in S} (t_n - \sum_{m \in S} a_m t_m k(\bx_n, \bx_m)), \end{align} where $S$ denotes the set of indices of the support vectors, to calculate the slope and intercept of the decision boundary. Generate a last plot that contains the two subsets, support vectors and decision boundary. End of explanation
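As a quick sanity check on the result (a sketch reusing X, t, w_opt, b and sv_ind from the cells above), every training sample should satisfy the constraint $t_n(\bw^T\bx_n + b) \geq 1$ up to numerical tolerance, with the support vectors sitting approximately on the margin itself:
margins = t * (X.dot(w_opt) + b)
print(margins.min())      # smallest margin over the training set, expected to be about 1
print(margins[sv_ind])    # margins of the support vectors, expected to be about 1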
8,282
Given the following text description, write Python code to implement the functionality described below step by step Description: In this Notebook Gaussian Naive Bayes is used on the Wisconsin cancer dataset to classify whether it is Malignant or Benign In the following, pandas is used for showing our dataset. We are going to download the csv file and load it into a pandas dataframe Step1: The segment of df is given below, which shows the upper 5 rows of our data Step2: So from the df.head() function you can see that column 1 contains the label, which denotes whether it is benign or malignant cancer. Columns 2 to 31 contain the features. So we are going to prepare our training set in the following lines. X will contain the featuresets and y will contain the label of each row Step3: After that we have to encode the labels of y for our training purpose. Before encoding Step4: After encoding, M = 1 and B = 0. Step5: In the following segment I'm going to split the dataset into a Training and a Test set with an 80:20 ratio Step6: And don't forget to standardize your featuresets Step7: So here we are. Time for fitting our estimator with the training data. Step8: y_pred holds the predicted labels of your test set. Finally, time to see the accuracy of our estimator.
Python Code: import pandas as pd df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/wdbc.data', header=None) Explanation: In this Notebook Gaussian Naive Bayes is used on wisconsin cancer dataset to classify if it is Malignant or Benign In the following pandas is used for showing our dataset. We are going to download the csv file and load it to pandas dataframe End of explanation df.head() Explanation: The segment of df is given below. Which shows Upper 5 rows of our data End of explanation X = df.loc[:, 2:].values y = df.loc[:, 1].values Explanation: So from the df.head() function you can see that column 1 contains the label which denotes if it is benign or malignant cancer. From column 2 to column 31 contains the features. So we are going to prepare our training set in the following lines. X will contain featuresets and y will contain labels of each row End of explanation y from sklearn.preprocessing import LabelEncoder le = LabelEncoder() y = le.fit_transform(y) Explanation: After that we have to encode labels of y for our training purpose Before encoding End of explanation y Explanation: After encoding M = 1 and B = 0. End of explanation from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.20, random_state=1) Explanation: In the following segment i'm going to split the dataset into Training and Test set with 80:20 ratio End of explanation from sklearn.preprocessing import StandardScaler stdsc = StandardScaler() X_train_std = stdsc.fit_transform(X_train) X_test_std = stdsc.transform(X_test) Explanation: And don't forget to standardize your featuresets End of explanation from sklearn.naive_bayes import GaussianNB clf = GaussianNB() clf.fit(X_train_std, y_train) y_pred = clf.predict(X_test_std) Explanation: So here we are. Time for fitting our estimator with the training data. End of explanation from sklearn.metrics import accuracy_score accuracy_score(y_true=y_test, y_pred=y_pred) Explanation: y_pred holds the predicted label of your test set. Finally time to see the accuracy of our estimator. End of explanation
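accuracy_score here is simply the fraction of test labels that were predicted correctly; as a short sketch (reusing y_test and y_pred from above), the same number can be reproduced directly with numpy:
import numpy as np
manual_accuracy = np.mean(y_pred == y_test)   # fraction of correctly classified test samples
print(manual_accuracy)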
8,283
Given the following text description, write Python code to implement the functionality described below step by step Description: <div class="alert alert-block alert-info" style="margin-top Step1: <a id="ref0"></a> <h2 align=center>Get Some Data </h2> Create polynomial dataset objects Step2: Create a dataset object Step3: Get some validation data Step4: <a id="ref1"></a> <h2 align=center>Create the Model, Optimizer, and Total Loss Function (cost)</h2> Step5: Create a three-layer neural network <code>model</code> with a ReLU() activation function for regression. All the appropriate layers should be 300 units. Double-click here for the solution. <!-- Your answer is below Step6: Train the model by using the Adam optimizer. See the unit on other optimizers. Use the mean square loss Step7: Initialize a dictionary that stores the training and validation loss for each model Step8: Run 500 iterations of batch gradient decent Step9: Make a prediction by using the test set assign <code>model</code> to yhat and <code>model_drop</code> to yhat_drop. Double-click here for the solution. <!-- Your answer is below Step10: You can see that the model using dropout does better at tracking the function that generated the data. Plot out the loss for training and validation data on both models
Python Code: import torch import matplotlib.pyplot as plt import torch.nn as nn import numpy as np Explanation: <div class="alert alert-block alert-info" style="margin-top: 20px"> <a href="http://cocl.us/pytorch_link_top"><img src = "http://cocl.us/Pytorch_top" width = 950, align = "center"></a> <img src = "https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png" width = 200, align = "center"> <h1 align=center><font size = 5>Using Dropout in Regression Assignment </font></h1> # Table of Contents in this lab, you will see how adding dropout to your model will decrease overfitting with <code>nn.Sequential</code>. <div class="alert alert-block alert-info" style="margin-top: 20px"> <li><a href="#ref0">Make Some Data </a></li> <li><a href="#ref1">Create the Model and Cost Function the Pytorch way</a></li> <li><a href="#ref2">Batch Gradient Descent</a></li> <br> <p></p> Estimated Time Needed: <strong>20 min</strong> </div> <hr> Import all the libraries you need for this lab: End of explanation from torch.utils.data import Dataset, DataLoader class Data(Dataset): def __init__(self,N_SAMPLES = 40,noise_std=1,train=True): self.x = torch.linspace(-1, 1, N_SAMPLES).view(-1,1) self.f=self.x**2 if train!=True: torch.manual_seed(1) self.y = self.f+noise_std*torch.randn(self.f.size()) self.y=self.y.view(-1,1) torch.manual_seed(0) else: self.y = self.f+noise_std*torch.randn(self.f.size()) self.y=self.y.view(-1,1) def __getitem__(self,index): return self.x[index],self.y[index] def __len__(self): return self.len def plot(self): plt.figure(figsize=(6.1, 10)) plt.scatter(self.x.numpy(), self.y.numpy(), label="Samples") plt.plot(self.x.numpy(), self.f.numpy() ,label="True function",color='orange') plt.xlabel("x") plt.ylabel("y") plt.xlim((-1, 1)) plt.ylim((-2, 2.5)) plt.legend(loc="best") plt.show() Explanation: <a id="ref0"></a> <h2 align=center>Get Some Data </h2> Create polynomial dataset objects: End of explanation data_set=Data() data_set.plot() Explanation: Create a dataset object: End of explanation torch.manual_seed(0) validation_set=Data(train=False) Explanation: Get some validation data: End of explanation torch.manual_seed(4) Explanation: <a id="ref1"></a> <h2 align=center>Create the Model, Optimizer, and Total Loss Function (cost)</h2> End of explanation model_drop.train() Explanation: Create a three-layer neural network <code>model</code> with a ReLU() activation function for regression. All the appropriate layers should be 300 units. Double-click here for the solution. <!-- Your answer is below: n_hidden = 30 model= torch.nn.Sequential( torch.nn.Linear(1, n_hidden), torch.nn.ReLU(), torch.nn.Linear(n_hidden, n_hidden), torch.nn.ReLU(), torch.nn.Linear(n_hidden, 1), ) --> Create a three-layer neural network <code>model_drop</code> with a ReLU() activation function for regression. All the appropriate layers should be 300 units. Apply dropout to all but the last layer and make the probability of dropout is 50%. Double-click here for the solution. <!-- Your answer is below: n_hidden = 300 model_drop= torch.nn.Sequential( torch.nn.Linear(1, n_hidden), torch.nn.Dropout(0.5), torch.nn.ReLU(), torch.nn.Linear(n_hidden, n_hidden), torch.nn.Dropout(0.5), torch.nn.ReLU(), torch.nn.Linear(n_hidden, 1), ) --> <a id="ref2"></a> <h2 align=center>Train the Model via Mini-Batch Gradient Descent </h2> Set the model using dropout to training mode; this is the default mode, but it's a good practice. 
End of explanation optimizer_ofit = torch.optim.Adam(model.parameters(), lr=0.01) optimizer_drop = torch.optim.Adam(model_drop.parameters(), lr=0.01) criterion = torch.nn.MSELoss() Explanation: Train the model by using the Adam optimizer. See the unit on other optimizers. Use the mean square loss: End of explanation LOSS={} LOSS['training data no dropout']=[] LOSS['validation data no dropout']=[] LOSS['training data dropout']=[] LOSS['validation data dropout']=[] Explanation: Initialize a dictionary that stores the training and validation loss for each model: End of explanation epochs=500 for epoch in range(epochs): #make a prediction for both models yhat = model(data_set.x) yhat_drop = model_drop(data_set.x) #calculate the lossf or both models loss = criterion(yhat, data_set.y) loss_drop = criterion(yhat_drop, data_set.y) #store the loss for both the training and validation data for both models LOSS['training data no dropout'].append(loss.item()) LOSS['validation data no dropout'].append(criterion(model(validation_set.x), validation_set.y).item()) LOSS['training data dropout'].append(loss_drop.item()) model_drop.eval() LOSS['validation data dropout'].append(criterion(model_drop(validation_set.x), validation_set.y).item()) model_drop.train() #clear gradient optimizer_ofit.zero_grad() optimizer_drop.zero_grad() #Backward pass: compute gradient of the loss with respect to all the learnable parameters loss.backward() loss_drop.backward() #the step function on an Optimizer makes an update to its parameters optimizer_ofit.step() optimizer_drop.step() Explanation: Run 500 iterations of batch gradient decent: End of explanation plt.figure(figsize=(6.1, 10)) plt.scatter(data_set.x.numpy(), data_set.y.numpy(), label="Samples") plt.plot(data_set.x.numpy(), data_set.f.numpy() ,label="True function",color='orange') plt.plot(data_set.x.numpy(),yhat.detach().numpy(),label='no dropout',c='r') plt.plot(data_set.x.numpy(),yhat_drop.detach().numpy(),label="dropout",c='g') plt.xlabel("x") plt.ylabel("y") plt.xlim((-1, 1)) plt.ylim((-2, 2.5)) plt.legend(loc="best") plt.show() Explanation: Make a prediction by using the test set assign <code>model</code> to yhat and <code>model_drop</code> to yhat_drop. Double-click here for the solution. <!-- Your answer is below: yhat=model(data_set.x) model_drop.eval() yhat_drop=model_drop(data_set.x), ) --> Plot predictions of both models. Compare them to the training points and the true function: End of explanation plt.figure(figsize=(6.1, 10)) for key, value in LOSS.items(): plt.plot(np.log(np.array(value)),label=key) plt.legend() plt.xlabel("iterations") plt.ylabel("Log of cost or total loss") Explanation: You can see that the model using dropout does better at tracking the function that generated the data. Plot out the loss for training and validation data on both models: End of explanation
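The train()/eval() switch used throughout the loop is what turns the dropout layers on and off. A minimal sketch of the difference (assuming model_drop and data_set from the cells above): in eval mode the dropout layers act as the identity, so repeated forward passes give identical outputs, while in train mode units are randomly zeroed and two passes generally differ.
model_drop.eval()
with torch.no_grad():
    out1 = model_drop(data_set.x)
    out2 = model_drop(data_set.x)
print(torch.allclose(out1, out2))   # True: eval-mode predictions are deterministic
model_drop.train()                  # dropout active again for any further training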
8,284
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: I have the following dataframe:
Problem:
import pandas as pd

# Example frame: a grouping key and a categorical value column.
df = pd.DataFrame({'key1': ['a', 'a', 'b', 'b', 'a', 'c'],
                   'key2': ['one', 'two', 'one', 'two', 'one', 'two']})

def g(df):
    # For every key1 group, count how many rows have key2 equal to 'one'.
    return df.groupby('key1')['key2'].apply(lambda x: (x=='one').sum()).reset_index(name='count')

result = g(df.copy())
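An equivalent formulation with the same df (shown only as a cross-check) turns the comparison into a boolean Series first and sums it per group, which avoids the lambda inside apply:
alt = (df['key2'].eq('one')
         .groupby(df['key1'])
         .sum()
         .reset_index(name='count'))
# alt contains the same per-key1 counts as result above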
8,285
Given the following text description, write Python code to implement the functionality described below step by step Description: Let's read the data Step1: Let's check what's inside these files... Step2: Only proper rows with posts Step3: Let's parse this mess... Step4: Better Step5: Let's compute tag counts! Step6: Taking long? go to Step7: Shout if you're the first one here! Congrats! Puzzles
Python Code: ! gsutil ls gs://pyspark-workshop/so-posts lines = sc.textFile("gs://pyspark-workshop/so-posts/*") # or a smaller piece of them lines = sc.textFile("gs://pyspark-workshop/so-posts/Posts.xml-*a") Explanation: Let's read the data End of explanation lines.take(5) Explanation: Let's check what's inside these files... End of explanation rows = lines.filter(lambda x: x.lstrip().startswith('<row')) Explanation: Only proper rows with posts End of explanation import xml.etree.ElementTree as ET parsed = lines.map(lambda x: x.lstrip()).filter(lambda x: x.startswith('<row')).map(lambda x: ET.fromstring(x)) from pprint import pprint pprint(parsed.take(2)) Explanation: Let's parse this mess... End of explanation pprint(parsed.map(lambda x: x.attrib).take(3)) Explanation: Better: End of explanation def parse_tags(x): return x[1:-1].split("><") tags = parsed.map(lambda x: parse_tags(x.attrib['Tags']) if 'Tags' in x.attrib else []) tags.take(5) counts = tags.flatMap(lambda x: x).groupBy(lambda x: x).map(lambda x: (x[0], len(x[1]))) Explanation: Let's compute tag counts! End of explanation counts.sortBy(lambda x: x[1], ascending=False).take(10) Explanation: Taking long? go to: http://cluster-1-m:8088 and explore it (if you're using default cluster name). Did you know flatMap?? If yes, rewrite the statement before to use flatMap. End of explanation # if you hate xml (you do), then save it as json on hdfs! import json parsed.map(lambda x: json.dumps(x.attrib)).saveAsTextFile("posts.jsons") Explanation: Shout if you're the first one here! Congrats! Puzzles: Can you compute how many times someone asked about Python this month (you can compute posts with python tag only)? Can you measure Pythons monthly popularity over last year? Can you plot it? Can you do the same but only for main posts (questions)? (*) Can you find the question that has the most posts attached?? Do the same but use ranking by total score of subposts. End of explanation
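As the flatMap hint above suggests, the tag counting can also be written in one chain (a sketch reusing parsed and parse_tags from above) by emitting (tag, 1) pairs and reducing by key:
tag_counts = (parsed
              .flatMap(lambda x: parse_tags(x.attrib['Tags']) if 'Tags' in x.attrib else [])
              .map(lambda tag: (tag, 1))
              .reduceByKey(lambda a, b: a + b))
tag_counts.sortBy(lambda x: x[1], ascending=False).take(10)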
8,286
Given the following text description, write Python code to implement the functionality described below step by step Description: Task In this assignment, you will implement simple algorithmic trading policies and modify a very simple backtester. In the following, you will find a Backtester1 function that gets as an input the historical price series of a single stock. At each time interval, the backtester calls for a new order (by calling the placeOrder function), along with the current opening price. Depending upon the current position of the customer (number of owned stocks and current cash), the customer decides upon a new investment. This is currently a random decision, merely deciding the percentage of capital to leave on a stocks. Consequently, the PlaceOrder function will return one of the following orders, to realize the decided position Step1: A reference implementation Step2: A cleaner implementation with the use of class constructs
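For Task 5, one possible (purely illustrative) policy is a small moving-average trend follower: hold the stock when the short-term average of recent prices is above the long-term one, otherwise stay in cash. A sketch, assuming a plain list of recent prices is kept by the customer:
def trend_target_position(recent_prices, short=5, long=20):
    # Fraction of capital to hold in the stock, based on a moving-average crossover.
    if len(recent_prices) < long:
        return 0.0                       # not enough history yet, stay in cash
    short_avg = sum(recent_prices[-short:]) / short
    long_avg = sum(recent_prices[-long:]) / long
    return 1.0 if short_avg > long_avg else 0.0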
Python Code: import pandas as pd import pandas.io.data as web import numpy as np import datetime msft = pd.read_csv("msft.csv", index_col=0, parse_dates=True) aapl = pd.read_csv("aapl.csv", index_col=0, parse_dates=True) msft['2012-01'] Explanation: Task In this assignment, you will implement simple algorithmic trading policies and modify a very simple backtester. In the following, you will find a Backtester1 function that gets as an input the historical price series of a single stock. At each time interval, the backtester calls for a new order (by calling the placeOrder function), along with the current opening price. Depending upon the current position of the customer (number of owned stocks and current cash), the customer decides upon a new investment. This is currently a random decision, merely deciding the percentage of capital to leave on a stocks. Consequently, the PlaceOrder function will return one of the following orders, to realize the decided position: Buy <number of stocks> Sell <number of stocks> PlaceOrder assumes that stocks can be traded only as integer multiples. Task 1 Modify the program to create plots of the total wealth over time, along with the value of the stocks and cash at each time point. Task 2 Add the following type of orders that a customer can issue * AddCapital <amount> * WithdrawCapital <amount> The backtester should always keep track of the position of the trader and make appropriate checks, such as only allowing buying stocks allowed by the current capital. Task 3 Add an interest_rate such that at the beginning of each trading day, the cash earns a fixed interest. Also, include transaction costs (a fixed percentage of the trade) to be deduced from each transaction. Task 4 Modify the system such that it allows for open selling, that is selling without actually owning any stock. At the end of the trading day, any open positions should be cleared by the closing price. Task 5 Think and implement a trading policy of your imagination, such as estimating the trend in the last few days, and coming up with a smarter decision than random. Compare your policy with the random policy in terms of earnings or losses. Task 6 Modify the program such that you allow for pairs trading. Modify the backtester such that you input a pair of stocks. Now the generated investment decisions must be portfolio. Repeat task 5 for the pairs case. 
Read some example Data End of explanation InitialCash = 1000 Position = {'Cash': InitialCash, 'Stocks': 0.0} def DecideTargetPosition(): ## Randomly decide a portfolio output percentage of capital to put into stocks return np.random.choice([0.0, 0.5, 1.0]) def Capital(price): return Position['Cash'] + Position['Stocks']*price def PlaceOrder(price): p = DecideTargetPosition() capital = Capital(price) numLots = np.floor(capital*p/price) TargetPosition = {'Cash': capital-numLots*price, 'Stocks': numLots} if TargetPosition['Stocks'] > Position['Stocks']: # Buy order = ('Buy', TargetPosition['Stocks']-Position['Stocks']) return order elif TargetPosition['Stocks'] < Position['Stocks']: # Sell order = ('Sell', -TargetPosition['Stocks']+Position['Stocks']) return order else: # Do nothing None def UpdatePosition(deltaCash, deltaStock): Position['Cash'] += deltaCash Position['Stocks'] += deltaStock return def BackTester1(series, interest_rate): openPrice = series['Open'] closePrice = series['Close'] for k in openPrice.keys(): price = openPrice[k] order = PlaceOrder(price) if order is None: continue print order if order[0]=='Buy': deltaCash = -price*order[1] deltaStock = order[1] UpdatePosition(deltaCash, deltaStock) elif order[0]=='Sell': deltaCash = price*order[1] deltaStock = -order[1] UpdatePosition(deltaCash, deltaStock) else: None price = closePrice[k] print k, Capital(price) InitialCash = 1000 Position = {'Cash': InitialCash, 'Stocks': 0.0} BackTester1(msft['2012-01'], 0.05) Explanation: A reference implementation End of explanation class Customer: def __init__(self, cash=10000): self.Position = {'Cash': cash, 'Stocks': 0.0} def DecideTargetPosition(self): ## Randomly decide a portfolio output percentage of capital to put into stocks return np.random.choice([0.0, 0.5, 1.0]) def Capital(self, price): return self.Position['Cash'] + self.Position['Stocks']*price def PlaceOrder(self, price): p = self.DecideTargetPosition() capital = self.Capital(price) numLots = np.floor(capital*p/price) TargetPosition = {'Cash': capital-numLots*price, 'Stocks': numLots} if TargetPosition['Stocks']>self.Position['Stocks']: # Buy return ('Buy', TargetPosition['Stocks']-self.Position['Stocks']) elif TargetPosition['Stocks']<self.Position['Stocks']: # Sell return ('Sell', -TargetPosition['Stocks']+self.Position['Stocks']) else: # Do nothing None def GetPosition(self): return self.Position def UpdatePosition(self, deltaCash, deltaStock): self.Position['Cash'] += deltaCash self.Position['Stocks'] += deltaStock return def BackTester(series, customer, interest_rate): openPrice = series['Open'] closePrice = series['Close'] for k in openPrice.keys(): price = openPrice[k] order = customer.PlaceOrder(price) if order is None: continue print order if order[0]=='Buy': deltaCash = -price*order[1] deltaStock = order[1] customer.UpdatePosition(deltaCash, deltaStock) elif order[0]=='Sell': deltaCash = price*order[1] deltaStock = -order[1] customer.UpdatePosition(deltaCash, deltaStock) else: None price = closePrice[k] print k, customer.Capital(price) Cash = 1000 cust = Customer(cash=Cash) BackTester(msft['2012-01'], cust, 0.05) Explanation: A cleaner implementation with the use of class constructs End of explanation
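For Task 3, one way the backtester loop could charge interest and transaction costs (a sketch only, reusing the Customer class and its Position dictionary from above; the fee rate is an assumed example value) is to grow the cash at the start of each day and subtract a fixed percentage of every trade:
def apply_daily_interest(customer, interest_rate):
    # Cash earns a fixed interest at the beginning of each trading day.
    customer.Position['Cash'] *= (1.0 + interest_rate)

def transaction_cost(price, n_stocks, fee_rate=0.001):
    # A fixed percentage of the traded value, to be subtracted from cash on every Buy or Sell.
    return fee_rate * price * n_stocks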
8,287
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Land MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Conservation Properties 3. Key Properties --&gt; Timestepping Framework 4. Key Properties --&gt; Software Properties 5. Grid 6. Grid --&gt; Horizontal 7. Grid --&gt; Vertical 8. Soil 9. Soil --&gt; Soil Map 10. Soil --&gt; Snow Free Albedo 11. Soil --&gt; Hydrology 12. Soil --&gt; Hydrology --&gt; Freezing 13. Soil --&gt; Hydrology --&gt; Drainage 14. Soil --&gt; Heat Treatment 15. Snow 16. Snow --&gt; Snow Albedo 17. Vegetation 18. Energy Balance 19. Carbon Cycle 20. Carbon Cycle --&gt; Vegetation 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality 26. Carbon Cycle --&gt; Litter 27. Carbon Cycle --&gt; Soil 28. Carbon Cycle --&gt; Permafrost Carbon 29. Nitrogen Cycle 30. River Routing 31. River Routing --&gt; Oceanic Discharge 32. Lakes 33. Lakes --&gt; Method 34. Lakes --&gt; Wetlands 1. Key Properties Land surface key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Description Is Required Step7: 1.4. Land Atmosphere Flux Exchanges Is Required Step8: 1.5. Atmospheric Coupling Treatment Is Required Step9: 1.6. Land Cover Is Required Step10: 1.7. Land Cover Change Is Required Step11: 1.8. Tiling Is Required Step12: 2. Key Properties --&gt; Conservation Properties TODO 2.1. Energy Is Required Step13: 2.2. Water Is Required Step14: 2.3. Carbon Is Required Step15: 3. Key Properties --&gt; Timestepping Framework TODO 3.1. Timestep Dependent On Atmosphere Is Required Step16: 3.2. Time Step Is Required Step17: 3.3. Timestepping Method Is Required Step18: 4. Key Properties --&gt; Software Properties Software properties of land surface code 4.1. Repository Is Required Step19: 4.2. Code Version Is Required Step20: 4.3. Code Languages Is Required Step21: 5. Grid Land surface grid 5.1. Overview Is Required Step22: 6. Grid --&gt; Horizontal The horizontal grid in the land surface 6.1. Description Is Required Step23: 6.2. Matches Atmosphere Grid Is Required Step24: 7. Grid --&gt; Vertical The vertical grid in the soil 7.1. Description Is Required Step25: 7.2. Total Depth Is Required Step26: 8. Soil Land surface soil 8.1. Overview Is Required Step27: 8.2. Heat Water Coupling Is Required Step28: 8.3. Number Of Soil layers Is Required Step29: 8.4. Prognostic Variables Is Required Step30: 9. Soil --&gt; Soil Map Key properties of the land surface soil map 9.1. Description Is Required Step31: 9.2. Structure Is Required Step32: 9.3. Texture Is Required Step33: 9.4. Organic Matter Is Required Step34: 9.5. Albedo Is Required Step35: 9.6. Water Table Is Required Step36: 9.7. Continuously Varying Soil Depth Is Required Step37: 9.8. Soil Depth Is Required Step38: 10. Soil --&gt; Snow Free Albedo TODO 10.1. Prognostic Is Required Step39: 10.2. Functions Is Required Step40: 10.3. Direct Diffuse Is Required Step41: 10.4. Number Of Wavelength Bands Is Required Step42: 11. 
Soil --&gt; Hydrology Key properties of the land surface soil hydrology 11.1. Description Is Required Step43: 11.2. Time Step Is Required Step44: 11.3. Tiling Is Required Step45: 11.4. Vertical Discretisation Is Required Step46: 11.5. Number Of Ground Water Layers Is Required Step47: 11.6. Lateral Connectivity Is Required Step48: 11.7. Method Is Required Step49: 12. Soil --&gt; Hydrology --&gt; Freezing TODO 12.1. Number Of Ground Ice Layers Is Required Step50: 12.2. Ice Storage Method Is Required Step51: 12.3. Permafrost Is Required Step52: 13. Soil --&gt; Hydrology --&gt; Drainage TODO 13.1. Description Is Required Step53: 13.2. Types Is Required Step54: 14. Soil --&gt; Heat Treatment TODO 14.1. Description Is Required Step55: 14.2. Time Step Is Required Step56: 14.3. Tiling Is Required Step57: 14.4. Vertical Discretisation Is Required Step58: 14.5. Heat Storage Is Required Step59: 14.6. Processes Is Required Step60: 15. Snow Land surface snow 15.1. Overview Is Required Step61: 15.2. Tiling Is Required Step62: 15.3. Number Of Snow Layers Is Required Step63: 15.4. Density Is Required Step64: 15.5. Water Equivalent Is Required Step65: 15.6. Heat Content Is Required Step66: 15.7. Temperature Is Required Step67: 15.8. Liquid Water Content Is Required Step68: 15.9. Snow Cover Fractions Is Required Step69: 15.10. Processes Is Required Step70: 15.11. Prognostic Variables Is Required Step71: 16. Snow --&gt; Snow Albedo TODO 16.1. Type Is Required Step72: 16.2. Functions Is Required Step73: 17. Vegetation Land surface vegetation 17.1. Overview Is Required Step74: 17.2. Time Step Is Required Step75: 17.3. Dynamic Vegetation Is Required Step76: 17.4. Tiling Is Required Step77: 17.5. Vegetation Representation Is Required Step78: 17.6. Vegetation Types Is Required Step79: 17.7. Biome Types Is Required Step80: 17.8. Vegetation Time Variation Is Required Step81: 17.9. Vegetation Map Is Required Step82: 17.10. Interception Is Required Step83: 17.11. Phenology Is Required Step84: 17.12. Phenology Description Is Required Step85: 17.13. Leaf Area Index Is Required Step86: 17.14. Leaf Area Index Description Is Required Step87: 17.15. Biomass Is Required Step88: 17.16. Biomass Description Is Required Step89: 17.17. Biogeography Is Required Step90: 17.18. Biogeography Description Is Required Step91: 17.19. Stomatal Resistance Is Required Step92: 17.20. Stomatal Resistance Description Is Required Step93: 17.21. Prognostic Variables Is Required Step94: 18. Energy Balance Land surface energy balance 18.1. Overview Is Required Step95: 18.2. Tiling Is Required Step96: 18.3. Number Of Surface Temperatures Is Required Step97: 18.4. Evaporation Is Required Step98: 18.5. Processes Is Required Step99: 19. Carbon Cycle Land surface carbon cycle 19.1. Overview Is Required Step100: 19.2. Tiling Is Required Step101: 19.3. Time Step Is Required Step102: 19.4. Anthropogenic Carbon Is Required Step103: 19.5. Prognostic Variables Is Required Step104: 20. Carbon Cycle --&gt; Vegetation TODO 20.1. Number Of Carbon Pools Is Required Step105: 20.2. Carbon Pools Is Required Step106: 20.3. Forest Stand Dynamics Is Required Step107: 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis TODO 21.1. Method Is Required Step108: 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration TODO 22.1. Maintainance Respiration Is Required Step109: 22.2. Growth Respiration Is Required Step110: 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation TODO 23.1. Method Is Required Step111: 23.2. Allocation Bins Is Required Step112: 23.3. 
Allocation Fractions Is Required Step113: 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology TODO 24.1. Method Is Required Step114: 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality TODO 25.1. Method Is Required Step115: 26. Carbon Cycle --&gt; Litter TODO 26.1. Number Of Carbon Pools Is Required Step116: 26.2. Carbon Pools Is Required Step117: 26.3. Decomposition Is Required Step118: 26.4. Method Is Required Step119: 27. Carbon Cycle --&gt; Soil TODO 27.1. Number Of Carbon Pools Is Required Step120: 27.2. Carbon Pools Is Required Step121: 27.3. Decomposition Is Required Step122: 27.4. Method Is Required Step123: 28. Carbon Cycle --&gt; Permafrost Carbon TODO 28.1. Is Permafrost Included Is Required Step124: 28.2. Emitted Greenhouse Gases Is Required Step125: 28.3. Decomposition Is Required Step126: 28.4. Impact On Soil Properties Is Required Step127: 29. Nitrogen Cycle Land surface nitrogen cycle 29.1. Overview Is Required Step128: 29.2. Tiling Is Required Step129: 29.3. Time Step Is Required Step130: 29.4. Prognostic Variables Is Required Step131: 30. River Routing Land surface river routing 30.1. Overview Is Required Step132: 30.2. Tiling Is Required Step133: 30.3. Time Step Is Required Step134: 30.4. Grid Inherited From Land Surface Is Required Step135: 30.5. Grid Description Is Required Step136: 30.6. Number Of Reservoirs Is Required Step137: 30.7. Water Re Evaporation Is Required Step138: 30.8. Coupled To Atmosphere Is Required Step139: 30.9. Coupled To Land Is Required Step140: 30.10. Quantities Exchanged With Atmosphere Is Required Step141: 30.11. Basin Flow Direction Map Is Required Step142: 30.12. Flooding Is Required Step143: 30.13. Prognostic Variables Is Required Step144: 31. River Routing --&gt; Oceanic Discharge TODO 31.1. Discharge Type Is Required Step145: 31.2. Quantities Transported Is Required Step146: 32. Lakes Land surface lakes 32.1. Overview Is Required Step147: 32.2. Coupling With Rivers Is Required Step148: 32.3. Time Step Is Required Step149: 32.4. Quantities Exchanged With Rivers Is Required Step150: 32.5. Vertical Grid Is Required Step151: 32.6. Prognostic Variables Is Required Step152: 33. Lakes --&gt; Method TODO 33.1. Ice Treatment Is Required Step153: 33.2. Albedo Is Required Step154: 33.3. Dynamics Is Required Step155: 33.4. Dynamic Lake Extent Is Required Step156: 33.5. Endorheic Basins Is Required Step157: 34. Lakes --&gt; Wetlands TODO 34.1. Description Is Required
Python Code: # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'nicam16-7s', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: MIROC
Source ID: NICAM16-7S
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:40
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE    Type: STRING    Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE    Type: STRING    Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE    Type: STRING    Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
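# --- Editorial note (not part of the ES-DOC template): a minimal sketch of how
# the setup and property cells above are typically completed. All values below
# are hypothetical placeholders for illustration, not MIROC's actual entries.
#   DOC.set_author("Jane Doe", "jane.doe@example.org")
#   DOC.set_contributor("John Roe", "john.roe@example.org")
#   DOC.set_publication_status(0)   # 0 = do not publish, 1 = publish
#   DOC.set_id('cmip6.land.key_properties.model_overview')
#   DOC.set_value("Free-text overview of the land surface component (STRING, cardinality 1.1).")
# Every cell that follows repeats this pattern: the pre-set DOC.set_id() selects
# the CMIP6 property and DOC.set_value() records the answer; for ENUM properties
# the value must be one of the listed "Valid Choices".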
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "water" # "energy" # "carbon" # "nitrogen" # "phospherous" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.4. Land Atmosphere Flux Exchanges Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Fluxes exchanged with the atmopshere. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.5. Atmospheric Coupling Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_cover') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bare soil" # "urban" # "lake" # "land ice" # "lake ice" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.6. Land Cover Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Types of land cover defined in the land surface model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_cover_change') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.7. Land Cover Change Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how land cover change is managed (e.g. the use of net or gross transitions) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.8. Tiling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.energy') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Conservation Properties TODO 2.1. Energy Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.water') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Water Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how water is conserved globally and to what level (e.g. within X [units]/year) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. 
Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Timestepping Framework TODO 3.1. Timestep Dependent On Atmosphere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a time step dependent on the frequency of atmosphere coupling? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overall timestep of land surface model (i.e. time between calls) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Timestepping Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of time stepping method and associated time step(s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Software Properties Software properties of land surface code 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Grid Land surface grid 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the grid in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Grid --&gt; Horizontal The horizontal grid in the land surface 6.1. 
Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the horizontal grid (not including any tiling) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the horizontal grid match the atmosphere? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Grid --&gt; Vertical The vertical grid in the soil 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the vertical grid in the soil (not including any tiling) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.total_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 7.2. Total Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The total depth of the soil (in metres) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Soil Land surface soil 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of soil in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_water_coupling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Heat Water Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the coupling between heat and water in the soil End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.number_of_soil layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 8.3. Number Of Soil layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the soil scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Soil --&gt; Soil Map Key properties of the land surface soil map 9.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of soil map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.2. 
Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil structure map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.texture') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.3. Texture Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil texture map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.organic_matter') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.4. Organic Matter Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil organic matter map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.5. Albedo Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil albedo map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.water_table') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.6. Water Table Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil water table map, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 9.7. Continuously Varying Soil Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the soil properties vary continuously with depth? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.soil_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.8. Soil Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil depth map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 10. Soil --&gt; Snow Free Albedo TODO 10.1. Prognostic Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is snow free albedo prognostic? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "soil humidity" # "vegetation state" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If prognostic, describe the dependancies on snow free albedo calculations End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "distinction between direct and diffuse albedo" # "no distinction between direct and diffuse albedo" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.3. Direct Diffuse Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, describe the distinction between direct and diffuse albedo End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 10.4. Number Of Wavelength Bands Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, enter the number of wavelength bands used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11. Soil --&gt; Hydrology Key properties of the land surface soil hydrology 11.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of the soil hydrological model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river soil hydrology in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.3. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil hydrology tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11.5. Number Of Ground Water Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers that may contain water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "perfect connectivity" # "Darcian flow" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.6. Lateral Connectivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe the lateral connectivity between tiles End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.hydrology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Bucket" # "Force-restore" # "Choisnel" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.7. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The hydrological dynamics scheme in the land surface model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 12. Soil --&gt; Hydrology --&gt; Freezing TODO 12.1. Number Of Ground Ice Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How many soil layers may contain ground ice End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.2. Ice Storage Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of ice storage End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.3. Permafrost Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of permafrost, if any, within the land surface scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.drainage.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13. Soil --&gt; Hydrology --&gt; Drainage TODO 13.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General describe how drainage is included in the land surface scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.drainage.types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Gravity drainage" # "Horton mechanism" # "topmodel-based" # "Dunne mechanism" # "Lateral subsurface flow" # "Baseflow from groundwater" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Different types of runoff represented by the land surface model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14. Soil --&gt; Heat Treatment TODO 14.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of how heat treatment properties are defined End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of soil heat scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.heat_treatment.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.3. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil heat treatment tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Force-restore" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.5. Heat Storage Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the method of heat storage End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "soil moisture freeze-thaw" # "coupling with snow temperature" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.6. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe processes included in the treatment of soil heat End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Snow Land surface snow 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of snow in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.number_of_snow_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.3. Number Of Snow Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of snow levels used in the land surface scheme/model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.density') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.4. Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow density End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.water_equivalent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.5. 
Water Equivalent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the snow water equivalent End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.heat_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.6. Heat Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the heat content of snow End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.temperature') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.7. Temperature Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow temperature End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.liquid_water_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.8. Liquid Water Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow liquid water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_cover_fractions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "ground snow fraction" # "vegetation snow fraction" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.9. Snow Cover Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify cover fractions used in the surface snow scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "snow interception" # "snow melting" # "snow freezing" # "blowing snow" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.10. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Snow related processes in the land surface scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.11. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the snow scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_albedo.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "prescribed" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16. Snow --&gt; Snow Albedo TODO 16.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of snow-covered land albedo End of explanation # PROPERTY ID - DO NOT EDIT ! 
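# --- Editorial note (illustrative only): for ENUM properties such as the snow
# albedo functions below, the answer is chosen from the "Valid Choices" listed in
# the cell; with cardinality 0.N several choices may apply. A purely hypothetical
# selection, assuming one DOC.set_value() call per chosen value:
#   DOC.set_value("snow age")
#   DOC.set_value("snow density")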
DOC.set_id('cmip6.land.snow.snow_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "snow age" # "snow density" # "snow grain type" # "aerosol deposition" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N *If prognostic, * End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17. Vegetation Land surface vegetation 17.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vegetation in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 17.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of vegetation scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.dynamic_vegetation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 17.3. Dynamic Vegetation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there dynamic evolution of vegetation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.4. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vegetation tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation types" # "biome types" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.5. Vegetation Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Vegetation classification used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "broadleaf tree" # "needleleaf tree" # "C3 grass" # "C4 grass" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.6. Vegetation Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of vegetation types in the classification, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biome_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "evergreen needleleaf forest" # "evergreen broadleaf forest" # "deciduous needleleaf forest" # "deciduous broadleaf forest" # "mixed forest" # "woodland" # "wooded grassland" # "closed shrubland" # "opne shrubland" # "grassland" # "cropland" # "wetlands" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.7. 
Biome Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of biome types in the classification, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_time_variation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed (not varying)" # "prescribed (varying from files)" # "dynamical (varying from simulation)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.8. Vegetation Time Variation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How the vegetation fractions in each tile are varying with time End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.9. Vegetation Map Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.interception') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 17.10. Interception Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is vegetation interception of rainwater represented? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic (vegetation map)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.11. Phenology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation phenology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.12. Phenology Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation phenology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prescribed" # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.13. Leaf Area Index Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation leaf area index End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.14. Leaf Area Index Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of leaf area index End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biomass') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.15. 
Biomass Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Treatment of vegetation biomass * End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biomass_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.16. Biomass Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biomass End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.17. Biogeography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation biogeography End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.18. Biogeography Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biogeography End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "light" # "temperature" # "water availability" # "CO2" # "O3" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.19. Stomatal Resistance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify what the vegetation stomatal resistance depends on End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.20. Stomatal Resistance Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation stomatal resistance End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.21. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the vegetation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18. Energy Balance Land surface energy balance 18.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of energy balance in land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the energy balance tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 18.3. Number Of Surface Temperatures Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "alpha" # "beta" # "combined" # "Monteith potential evaporation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.4. Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify the formulation method for land surface evaporation, from soil and vegetation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "transpiration" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe which processes are included in the energy balance scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19. Carbon Cycle Land surface carbon cycle 19.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of carbon cycle in land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the carbon cycle tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of carbon cycle in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "grand slam protocol" # "residence time" # "decay time" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19.4. Anthropogenic Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Describe the treament of the anthropogenic carbon pool End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the carbon scheme End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE    Type: INTEGER    Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE    Type: STRING    Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE    Type: STRING    Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE    Type: STRING    Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE    Type: STRING    Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE    Type: STRING    Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE    Type: STRING    Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2.
Allocation Bins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify distinct carbon bins used in allocation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "function of vegetation type" # "function of plant allometry" # "explicitly calculated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.3. Allocation Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the fractions of allocation are calculated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology TODO 24.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the phenology scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality TODO 25.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the mortality scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26. Carbon Cycle --&gt; Litter TODO 26.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 27. Carbon Cycle --&gt; Soil TODO 27.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 28. Carbon Cycle --&gt; Permafrost Carbon TODO 28.1. Is Permafrost Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is permafrost included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.2. Emitted Greenhouse Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the GHGs emitted End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.4. Impact On Soil Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the impact of permafrost on soil properties End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29. Nitrogen Cycle Land surface nitrogen cycle 29.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the nitrogen cycle in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the notrogen cycle tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.nitrogen_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 29.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of nitrogen cycle in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the nitrogen scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30. River Routing Land surface river routing 30.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of river routing in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the river routing, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river routing scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 30.4. Grid Inherited From Land Surface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the grid inherited from land surface? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.5. Grid Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of grid, if not inherited from land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.number_of_reservoirs') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.6. Number Of Reservoirs Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of reservoirs End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.water_re_evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "flood plains" # "irrigation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.7. Water Re Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N TODO End of explanation # PROPERTY ID - DO NOT EDIT ! 
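# --- Editorial note (illustrative only): BOOLEAN and INTEGER properties, such as
# the coupling flag below or the time steps above, take unquoted Python values,
# e.g. DOC.set_value(True) or DOC.set_value(1800), in contrast to the quoted
# strings used for STRING and ENUM properties; these example values are hypothetical.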
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 30.8. Coupled To Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is river routing coupled to the atmosphere model component? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.coupled_to_land') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.9. Coupled To Land Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the coupling between land and rivers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.10. Quantities Exchanged With Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "adapted for other periods" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.11. Basin Flow Direction Map Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of basin flow direction map is being used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.flooding') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.12. Flooding Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the representation of flooding, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.13. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the river routing End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "direct (large rivers)" # "diffuse" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31. River Routing --&gt; Oceanic Discharge TODO 31.1. Discharge Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify how rivers are discharged to the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.2. 
Quantities Transported Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Quantities that are exchanged from river-routing to the ocean model component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32. Lakes Land surface lakes 32.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lakes in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.coupling_with_rivers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.2. Coupling With Rivers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are lakes coupled to the river routing model component? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 32.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of lake scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.4. Quantities Exchanged With Rivers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If coupling with rivers, which quantities are exchanged between the lakes and rivers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.vertical_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.5. Vertical Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vertical grid of lakes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the lake scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.ice_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 33. Lakes --&gt; Method TODO 33.1. Ice Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is lake ice included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33.2. Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of lake albedo End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.lakes.method.dynamics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "No lake dynamics" # "vertical" # "horizontal" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33.3. Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which dynamics of lakes are treated? horizontal, vertical, etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 33.4. Dynamic Lake Extent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a dynamic lake extent scheme included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.endorheic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 33.5. Endorheic Basins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basins not flowing to ocean included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.wetlands.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34. Lakes --&gt; Wetlands TODO 34.1. Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of wetlands, if any End of explanation
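Every property cell above leaves DOC.set_value as a TODO placeholder. As a quick illustration of how a completed cell reads, here is a sketch that fills in two of the lake properties documented above, assuming the DOC object initialised at the top of this notebook; the concrete values are placeholders drawn from the listed valid choices, not a description of any real model.
# Illustrative completion of two cells from the lake section above.
# The property ids and the "prognostic" choice are copied from the cells above;
# the 3600 s time step is a made-up placeholder value.
DOC.set_id('cmip6.land.lakes.time_step')
DOC.set_value(3600)          # 32.3 Time Step: INTEGER, seconds

DOC.set_id('cmip6.land.lakes.method.albedo')
DOC.set_value("prognostic")  # 33.2 Albedo: one of the listed ENUM choices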
8,288
Given the following text description, write Python code to implement the functionality described below step by step Description: Authorization Following the Nest authorization documentation. Setup Get the values of Client ID and Client secret from the clients page and set them in the environment before running this IPython Notebook. The environment variable names should be DEN_CLIENT_ID and DEN_CLIENT_SECRET, respectively. Step2: Get Authorization URL Available per client. For Den it is Step5: Create Authorization URL Helper Step6: Get Authorization Code get_auth_url() returns a URL that should be visited in the browser to get an authorization code. For Den, this authorization code will be a PIN. Step7: Cut and paste that PIN here Step9: Get Access Token Use the pin code to request an access token. https Step10: POST to that URL to get a response containing an access token Step11: It seems like the access token can only be created once and has a 10 year expiration time.
Python Code: import os DEN_CLIENT_ID = os.environ["DEN_CLIENT_ID"] DEN_CLIENT_SECRET = os.environ["DEN_CLIENT_SECRET"] Explanation: Authorization Following the Nest authorization documentation. Setup Get the values of Client ID and Client secret from the clients page and set them in the environment before running this IPython Notebook. The environment variable names should be DEN_CLIENT_ID and DEN_CLIENT_SECRET, respectively. End of explanation import uuid def _get_state(): Get a unique id string. return str(uuid.uuid1()) _get_state() Explanation: Get Authorization URL Available per client. For Den it is: https://home.nest.com/login/oauth2?client_id=54033edb-04e0-4fc7-8306-5ed6cb7d7b1d&state=STATE Where STATE should be a value that is: Used to protect against cross-site request forgery attacks Format: any unguessable string We strongly recommend that you use a new, unique value for each call Create STATE helper End of explanation API_PROTOCOL = "https" API_LOCATION = "home.nest.com" from urlparse import SplitResult, urlunsplit from urllib import urlencode def _get_url(path, query, netloc=API_LOCATION): Get a URL for the given path and query. split = SplitResult(scheme=API_PROTOCOL, netloc=netloc, path=path, query=query, fragment="") return urlunsplit(split) def get_auth_url(client_id=DEN_CLIENT_ID): Get an authorization URL for the given client id. path = "login/oauth2" query = urlencode({"client_id": client_id, "state": _get_state()}) return _get_url(path, query) get_auth_url() Explanation: Create Authorization URL Helper End of explanation !open "{get_auth_url()}" Explanation: Get Authorization Code get_auth_url() returns a URL that should be visited in the browser to get an authorization code. For Den, this authorization code will be a PIN. End of explanation pin = "" Explanation: Cut and paste that PIN here: End of explanation def get_access_token_url(client_id=DEN_CLIENT_ID, client_secret=DEN_CLIENT_SECRET, code=pin): Get an access token URL for the given client id. path = "oauth2/access_token" query = urlencode({"client_id": client_id, "client_secret": client_secret, "code": code, "grant_type": "authorization_code"}) return _get_url(path, query, "api." + API_LOCATION) get_access_token_url() Explanation: Get Access Token Use the pin code to request an access token. https://developer.nest.com/documentation/cloud/authorization-reference/ End of explanation import requests r = requests.post(get_access_token_url()) print r.status_code assert r.status_code == requests.codes.OK r.json() Explanation: POST to that URL to get a response containing an access token: End of explanation access_token = r.json()["access_token"] access_token Explanation: It seems like the access token can only be created once and has a 10 year expiration time. End of explanation
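The helper cells above are written for Python 2 (the urlparse/urllib module layout and print statements). A minimal Python 3 sketch of the same URL helpers — keeping the endpoint paths and parameter names exactly as they appear above, and making no claim about the current Nest API beyond that — might look like:
# Python 3 re-statement of the URL helpers shown above (sketch only).
import os
import uuid
from urllib.parse import SplitResult, urlencode, urlunsplit

API_PROTOCOL = "https"
API_LOCATION = "home.nest.com"

def get_state():
    # Unguessable per-request token used to guard against CSRF.
    return str(uuid.uuid1())

def get_url(path, query, netloc=API_LOCATION):
    # Assemble scheme://netloc/path?query with an empty fragment.
    return urlunsplit(SplitResult(scheme=API_PROTOCOL, netloc=netloc,
                                  path=path, query=query, fragment=""))

def get_auth_url(client_id=os.environ.get("DEN_CLIENT_ID", "")):
    # Authorization URL the user opens in a browser to obtain the PIN.
    query = urlencode({"client_id": client_id, "state": get_state()})
    return get_url("login/oauth2", query)

def get_access_token_url(client_id, client_secret, code):
    # POST target for exchanging the PIN for an access token,
    # mirroring the parameters used in the cell above.
    query = urlencode({"client_id": client_id,
                       "client_secret": client_secret,
                       "code": code,
                       "grant_type": "authorization_code"})
    return get_url("oauth2/access_token", query, netloc="api." + API_LOCATION)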
8,289
Given the following text description, write Python code to implement the functionality described below step by step Description: Modeling and Simulation in Python Chapter 10 Example Step1: Pendulum This notebook solves the Spider-Man problem from spiderman.ipynb, demonstrating a different development process for physical simulations. In pendulum_sympy, we derive the equations of motion for a springy pendulum without drag, yielding Step3: Now here's a version of make_system that takes a Condition object as a parameter. make_system uses the given value of v_term to compute the drag coefficient C_d. Step4: Let's make a System Step6: To write the slope function, we can get the expressions for ax and ay directly from SymPy and plug them in. Step7: As always, let's test the slope function with the initial conditions. Step8: And then run the simulation. Step9: Visualizing the results We can extract the x and y components as Series objects. Step10: The simplest way to visualize the results is to plot x and y as functions of time. Step11: We can plot the velocities the same way. Step12: Another way to visualize the results is to plot y versus x. The result is the trajectory through the plane of motion. Step13: We can also animate the trajectory. If there's an error in the simulation, we can sometimes spot it by looking at animations. Step15: Here's a function that encapsulates that code and runs the animation in (approximately) real time.
Python Code: # If you want the figures to appear in the notebook, # and you want to interact with them, use # %matplotlib notebook # If you want the figures to appear in the notebook, # and you don't want to interact with them, use # %matplotlib inline # If you want the figures to appear in separate windows, use # %matplotlib qt5 # to switch from one to another, you have to select Kernel->Restart %matplotlib notebook from modsim import * Explanation: Modeling and Simulation in Python Chapter 10 Example: Springy Pendulum Copyright 2017 Allen Downey License: Creative Commons Attribution 4.0 International End of explanation condition = Condition(g = 9.8, m = 75, area = 1, rho = 1.2, v_term = 60, duration = 30, length0 = 100, angle = (270 - 45), k = 20) Explanation: Pendulum This notebook solves the Spider-Man problem from spiderman.ipynb, demonstrating a different development process for physical simulations. In pendulum_sympy, we derive the equations of motion for a springy pendulum without drag, yielding: $ \ddot{x} = \frac{k length_{0} x}{m \sqrt{x^{2} + y^{2}}} - \frac{k x}{m} $ $ \ddot{y} = - g + \frac{k length_{0} y}{m \sqrt{x^{2} + y^{2}}} - \frac{k y}{m} $ We'll use the same conditions we saw in spiderman.ipynb End of explanation def make_system(condition): Makes a System object for the given conditions. condition: Condition with height, g, m, diameter, rho, v_term, and duration returns: System with init, g, m, rho, C_d, area, and ts unpack(condition) theta = np.deg2rad(angle) x, y = pol2cart(theta, length0) P = Vector(x, y) V = Vector(0, 0) init = State(x=P.x, y=P.y, vx=V.x, vy=V.y) C_d = 2 * m * g / (rho * area * v_term**2) ts = linspace(0, duration, 501) return System(init=init, g=g, m=m, rho=rho, C_d=C_d, area=area, length0=length0, k=k, ts=ts) Explanation: Now here's a version of make_system that takes a Condition object as a parameter. make_system uses the given value of v_term to compute the drag coefficient C_d. End of explanation system = make_system(condition) system system.init Explanation: Let's make a System End of explanation def slope_func(state, t, system): Computes derivatives of the state variables. state: State (x, y, x velocity, y velocity) t: time system: System object with length0, m, k returns: sequence (vx, vy, ax, ay) x, y, vx, vy = state unpack(system) ax = k*length0*x/(m*sqrt(x**2 + y**2)) - k*x/m ay = -g + k*length0*y/(m*sqrt(x**2 + y**2)) - k*y/m return vx, vy, ax, ay Explanation: To write the slope function, we can get the expressions for ax and ay directly from SymPy and plug them in. End of explanation slope_func(system.init, 0, system) Explanation: As always, let's test the slope function with the initial conditions. End of explanation %time run_odeint(system, slope_func) Explanation: And then run the simulation. End of explanation xs = system.results.x ys = system.results.y Explanation: Visualizing the results We can extract the x and y components as Series objects. End of explanation newfig() plot(xs, label='x') plot(ys, label='y') decorate(xlabel='Time (s)', ylabel='Position (m)') Explanation: The simplest way to visualize the results is to plot x and y as functions of time. End of explanation vxs = system.results.vx vys = system.results.vy newfig() plot(vxs, label='vx') plot(vys, label='vy') decorate(xlabel='Time (s)', ylabel='Velocity (m/s)') Explanation: We can plot the velocities the same way. 
End of explanation newfig() plot(xs, ys, label='trajectory') decorate(xlabel='x position (m)', ylabel='y position (m)') Explanation: Another way to visualize the results is to plot y versus x. The result is the trajectory through the plane of motion. End of explanation newfig() decorate(xlabel='x position (m)', ylabel='y position (m)', xlim=[-100, 100], ylim=[-200, -50], legend=False) for x, y in zip(xs, ys): plot(x, y, 'bo', update=True) sleep(0.01) Explanation: We can also animate the trajectory. If there's an error in the simulation, we can sometimes spot it by looking at animations. End of explanation def animate2d(xs, ys, speedup=1): Animate the results of a projectile simulation. xs: x position as a function of time ys: y position as a function of time speedup: how much to divide `dt` by # get the time intervals between elements ts = xs.index dts = np.diff(ts) dts = np.append(dts, 0) # decorate the plot newfig() decorate(xlabel='x position (m)', ylabel='y position (m)', xlim=[xs.min(), xs.max()], ylim=[ys.min(), ys.max()], legend=False) # loop through the values for x, y, dt in zip(xs, ys, dts): plot(x, y, 'bo', update=True) sleep(dt / speedup) animate2d(system.results.x, system.results.y) Explanation: Here's a function that encapsulates that code and runs the animation in (approximately) real time. End of explanation
8,290
Given the following text description, write Python code to implement the functionality described below step by step Description: author Step1: QC-filtered samples Step2: Per-study endemism Objective Step3: Per-sample endemism Step4: Abundance vs. prevalence Step5: Subset 2k
Python Code: import pandas as pd import numpy as np import locale import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline locale.setlocale(locale.LC_ALL, 'en_US') def list_otu_studies(df, index): return(set([x.split('.')[0] for x in df.loc[index]['list_samples'].split(',')])) locale.format("%d", 1255000, grouping=True) Explanation: author: lukethompson@gmail.com<br> date: 8 Oct 2017<br> language: Python 3.5<br> license: BSD3<br> sequence_prevalence.ipynb End of explanation path_otus = '../../data/sequence-lookup/otu_summary.emp_deblur_90bp.qc_filtered.rare_5000.tsv' # gunzip first num_samples = '24,910' num_studies = '96' df_otus = pd.read_csv(path_otus, sep='\t', index_col=0) df_otus['studies'] = [list_otu_studies(df_otus, index) for index in df_otus.index] df_otus['num_studies'] = [len(x) for x in df_otus.studies] df_otus.shape df_otus.num_samples.max() (df_otus.num_samples > 100).value_counts() / 307572 (df_otus.num_studies > 10).value_counts() / 307572 df_otus.num_studies.max() Explanation: QC-filtered samples End of explanation df_otus.num_studies.max() fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12,4)) ax[0].hist(df_otus.num_studies, bins=np.concatenate(([], np.arange(1, 110, 1)))) ax[0].set_xlim([0, 98]) ax[0].set_xticks(np.concatenate((np.array([1.5]), np.arange(10.5, 92, 10)))) ax[0].set_xticklabels(['1', '10', '20', '30', '40', '50', '60', '70', '80', '90']); ax[1].hist(df_otus.num_studies, bins=np.concatenate(([], np.arange(1, 110, 1)))) ax[1].set_yscale('log') ax[1].set_ylim([5e-1, 1e6]) ax[1].set_xlim([0, 98]) ax[1].set_xticks(np.concatenate((np.array([1.5]), np.arange(10.5, 92, 10)))) ax[1].set_xticklabels(['1', '10', '20', '30', '40', '50', '60', '70', '80', '90']); fig.text(0.5, 0.0, 'Number of studies a tag sequence was observed in (out of %s)' % num_studies, ha='center', va='center') fig.text(0.0, 0.5, 'Number of tag sequences (out of %s)' % locale.format("%d", df_otus.shape[0], grouping=True), ha='center', va='center', rotation='vertical') exactly1 = df_otus.num_studies.value_counts()[1] num_otus = df_otus.shape[0] # fig.text(0.3, 0.51, '%s tag sequences (%.1f%%) found in only a single study\n\n\n\n\n\n%s tag sequences (%.1f%%) found in >1 study' % # (locale.format("%d", exactly1, grouping=True), # (exactly1/num_otus*100), # locale.format("%d", num_otus-exactly1, grouping=True), # ((num_otus-exactly1)/num_otus*100)), # ha='center', va='center', fontsize=10) plt.tight_layout() plt.savefig('hist_endemism_90bp_qcfiltered.pdf', bbox_inches='tight') Explanation: Per-study endemism Objective: Determine the number of OTUs that are study-dependent (or EMPO-dependent). For a given OTU, is it found in only one study's samples or in multiple studies (Venn diagram)? 
End of explanation fig = plt.figure(figsize=(12,4)) plt.subplot(121) mybins = np.concatenate(([], np.arange(1, 110, 1))) n, bins, patches = plt.hist(df_otus.num_samples, bins=mybins) plt.axis([0, 92, 0, 4.5e4]) plt.xticks(np.concatenate((np.array([1.5]), np.arange(10.5, 92, 10))), ['1', '10', '20', '30', '40', '50', '60', '70', '80', '90']); plt.subplot(122) mybins = np.concatenate(([], np.arange(1, max(df_otus.num_samples)+100, 100))) n, bins, patches = plt.hist(df_otus.num_samples, bins=mybins) plt.yscale('log') plt.axis([-100, 9200, 5e-1, 10e5]) plt.xticks([50, 1050, 2050, 3050, 4050, 5050, 6050, 7050, 8050, 9050], ['1-100', '1001-1100', '2001-2100', '3001-3100', '4001-4100', '5001-5100', '6001-6100', '7001-7100', '8001-8100', '9001-9100'], rotation=45, ha='right'); fig.text(0.5, 0.0, 'Number of samples a tag sequence was observed in (out of %s)' % num_samples, ha='center', va='center') fig.text(0.0, 0.6, 'Number of tag sequences (out of %s)' % locale.format("%d", df_otus.shape[0], grouping=True), ha='center', va='center', rotation='vertical') exactly1 = df_otus.num_samples.value_counts()[1] num_otus = df_otus.shape[0] # fig.text(0.3, 0.6, '%s sequences (%.1f%%) found in only a single sample\n\n\n\n\n\n%s sequences (%.1f%%) found in >1 sample' % # (locale.format("%d", exactly1, grouping=True), # (exactly1/num_otus*100), # locale.format("%d", num_otus-exactly1, grouping=True), # ((num_otus-exactly1)/num_otus*100)), # ha='center', va='center', fontsize=10) plt.tight_layout() plt.savefig('hist_otus_90bp_qcfiltered.pdf', bbox_inches='tight') Explanation: Per-sample endemism End of explanation plt.scatter(df_otus.num_samples, df_otus.total_obs, alpha=0.1) plt.xscale('log') plt.yscale('log') plt.xlabel('Number of samples a tag sequence was observed in (out of %s)' % num_samples) plt.ylabel('Total number of tag sequence observations') plt.savefig('scatter_otus_90bp_qcfiltered.png') Explanation: Abundance vs. 
prevalence End of explanation path_otus = '../../data/sequence-lookup/otu_summary.emp_deblur_90bp.subset_2k.rare_5000.tsv' # gunzip first num_samples = '2000' df_otus = pd.read_csv(path_otus, sep='\t', index_col=0) df_otus['studies'] = [list_otu_studies(df_otus, index) for index in df_otus.index] df_otus['num_studies'] = [len(x) for x in df_otus.studies] df_otus.num_samples.max() df_otus.num_samples.value_counts().head() plt.figure(figsize=(12,3)) mybins = np.concatenate(([-8.99, 1.01], np.arange(10, max(df_otus.num_samples), 10))) n, bins, patches = plt.hist(df_otus.num_samples, bins=mybins) plt.yscale('log') plt.axis([-10, 600, 5e-1, 1e6]) plt.xticks([-4, 5.5, 15.5, 104.5, 204.5, 304.5, 404.5, 474.5, 574.5], ['exactly 1', '2-9', '10-19', '100-109', '200-209', '300-309', '400-409', '470-479', '570-579'], rotation=45, ha='right', fontsize=9); plt.xlabel('Number of samples OTU observed in (out of %s)' % num_samples) plt.ylabel('Number of OTUs (out of %s)' % df_otus.shape[0]) plt.savefig('hist_otus_90bp_subset2k.pdf') plt.scatter(df_otus.num_samples, df_otus.total_obs, alpha=0.1) plt.xscale('log') plt.yscale('log') plt.xlabel('Number of samples OTU observed in (out of %s)' % num_samples) plt.ylabel('Total number of OTU observations') plt.savefig('scatter_otus_90bp_subset2k.png') fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12,4)) ax[0].hist(df_otus.num_studies, bins=df_otus.num_studies.max()) ax[1].hist(df_otus.num_studies, bins=df_otus.num_studies.max()) ax[1].set_yscale('log') ax[1].set_ylim([5e-1, 1e6]) fig.text(0.5, 0.03, 'Number of studies OTU found in (out of %s)' % num_samples, ha='center', va='center') fig.text(0.07, 0.5, 'Number of OTUs', ha='center', va='center', rotation='vertical') exactly1 = df_otus.num_studies.value_counts()[1] num_otus = df_otus.shape[0] fig.text(0.3, 0.5, '%s OTUs (%.1f%%) found in only a single study\n\n\n\n\n\n\n\n\n%s OTUs (%.1f%%) found in >1 study' % (locale.format("%d", exactly1, grouping=True), (exactly1/num_otus*100), locale.format("%d", num_otus-exactly1, grouping=True), ((num_otus-exactly1)/num_otus*100)), ha='center', va='center', fontsize=10) plt.savefig('hist_endemism_90bp_subset2k.pdf') Explanation: Subset 2k End of explanation
8,291
Given the following text description, write Python code to implement the functionality described below step by step Description: Project (Option 2) - PageRank Authors Step3: Simulation time! We now want to empirically test what we solved above by modeling a random user hopping along those webpages. We will start the user at "1.html" and behave as per the Markov chain above. In the code below, we simulate this and keep track of the average amount of time a user spends in each state. We will expect that after enough iterations, the fraction of time spent in each state should approach the stationary distribution. We use the parse_links() method to parse all hyperlinks in a page. We use the library <a href="http Step4: <font color=blue>$\mathcal{Q}$2. Simulating a Random Walk</font> In the following code block, use the above functions to surf the web pages described by the Markov chain above. This code block may take a while to run. If it is taking more than a couple of minutes, maybe try reducing num_of_visits in order to at least get results. Also, running your code while connected to AirBears may help if you have a slow internet connection at home. Step5: Print your results Step6: Does this approximately match the invariant distribution you expected? Generalizing to the Web The toy websites given above conveniently form an irreducible Markov chain (look up what this means if you do not remember from class), but most of the web will not look like this. There will be fringes of the internet containing only self-loops, or some web pages which do not link to others at all. In order to account for such pathologies in the web, we need to make a more intelligent surfer. The simplest idea would be to just jump back to the starting page if there are no links found on the page you are on, and to always return to a "good" starting point with probability $p$ on every page. This is a very naive scheme, and there are many more intelligent methods by which you can sample from the distribution of the Internet, accounting for its pathologies and all. Ranking Berkeley Professors The following code is a (weak) attempt to rank the Berkeley faculty based on a crawler which begins on the EECS research homepage. Step7: <font color=blue>$\mathcal{Q}$3. Try your hand at applying the above idea to a website you personally visit (somewhat) frequently! Do a simple crawl (similar to the above) and see if you can figure out something interesting. (Keep it simple.)</font>
Python Code: import numpy as np from __future__ import division P = np.matrix([[0, 1/5, 1/5, 1/5, 1/5, 0, 0, 1/5], [1/2, 0, 0, 0, 1/2, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0, 0], [0, 1/3, 1/3, 1/3, 0, 0, 0, 0], [1/4,0,1/4,0,0,0,1/4,1/4], [1/4, 1/4, 0, 0, 1/4, 1/4, 0, 0], [1/5, 1/5, 1/5, 1/5, 0, 0, 1/5, 0]]) P_transpose = P.T print P # Your code here #I know that the steady state exists because the markov chain is aperiodic, ttma v, w = np.linalg.eig(P.T) steadyState = -w.T[np.isclose(np.absolute(v),1)] print steadyState np.isclose(steadyState.dot(P),steadyState) Explanation: Project (Option 2) - PageRank Authors: v1.0 (2014 Fall) Rishi Sharma *, Sahaana Suri *, Kangwook Lee **, Kannan Ramchandran **<br /> v1.1 (2015 Fall) Kabir Chandrasekher *, Max Kanwal *, Kangwook Lee **, Kannan Ramchandran **<br /> v1.2 (2016 Spring) Kabir Chandrasekher, Tony Duan, David Marn, Ashvin Nair, Kangwook Lee, Kannan Ramchandran <br /> v1.3 (2017 Spring) Tavor Baharav, Kabir Chandrasekhar, Sinho Chewi, Andrew Liu, Kamil Nar, David Wang, and Kannan Ramchandran Introduction From Wikipedia: PageRank is an algorithm used by Google Search to rank websites in their search engine results. PageRank was named after Larry Page, one of the founders of Google. PageRank is a way of measuring the importance of website pages. According to Google: PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites. There are four common frameworks by which academics view Google's PageRank algorithm. The first looks at the social impact, both positive and negative, of immediate access to previously unimaginable knowledge through one centralized terminal. The second, and most mathematical, sees PageRank as a computation of the Singular Value Decomposition (<a href="http://en.wikipedia.org/wiki/Singular_value_decomposition">SVD</a>) of the adjacency matrix of the graph formed by the internet, with particular emphasis paid to the first few singular vectors. The third, and most far-reaching, practical technical implication of Google's work is the implementation of algorithms and computation at enormous scale. Much of the computing infrastructure which operates at a global scale deployed today can trace its origins to Google's need to perform SVD on an object as enormous as the Internet. Finally, a more intuitive way to look at the PageRank algorithm is through the lens of a web crawler (or many web crawler) acting as an agent (or agents) in a Markov Chain the size of the web. We will investigate this viewpoint. This crawler is searching for an approximate "invariant" distribution (why does a true invariant distribution almost certainly not exist?) and will rank pages based on their "probability" in this generated distribution. In order to do so, our crawler chooses to follow a link uniformly at random from the page it is on in order to arrive at a new page, keeping tally of how many times it has visited each page. If this crawler runs for a really, really long time, the fraction of time it has spent on each webpage will approximately be the probability of being on that page (assuming we account for pathologies in the Markov chain which we will discuss soon). We then rank pages in decreasing order of probability. Alright, great! Let's do stuff. 
First, visit the following webpage, and see how many web pages can be reached by clicking the links on each page. http://www.eecs.berkeley.edu/~kw1jjang/ee126/1.html There are total of $8$ pages, and they are connected as follows. <img src="http://i.imgur.com/hdnUIJB.png" width=400px> Since we choose a link at uniform from each page, the probability of going between pages $x$ and $y$ is $\Large \frac{\text{# of pages from x to y}}{\text{# of pages leaving x}}$ Thus the Markov chain generated by the web pages above is <img src="http://i.imgur.com/esTlo1R.png" width=400px> and the transition matrix of the Markov chain is $$ \left( \begin{array}{cccccccc} 0 & \frac{1}{5} & \frac{1}{5} & \frac{1}{5} & \frac{1}{5} & 0 & 0 & \frac{1}{5} \ \frac{1}{2} & 0 & 0 & 0 & \frac{1}{2} & 0 & 0 & 0 \ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \ 0 & \frac{1}{3} & \frac{1}{3} & \frac{1}{3} & 0 & 0 & 0 & 0 \ \frac{1}{4} & 0 & \frac{1}{4} & 0 & 0 & 0 & \frac{1}{4} & \frac{1}{4} \ \frac{1}{4} & \frac{1}{4} & 0 & 0 & \frac{1}{4} & \frac{1}{4} & 0 & 0 \ \frac{1}{5} & \frac{1}{5} & \frac{1}{5} & \frac{1}{5} & 0 & 0 & \frac{1}{5} & 0 \end{array} \right) $$ <font color=blue>$\mathcal{Q}$1. Find the steady-state (invariant/stationary) distribution $\pi$ of the Markov chain above. How do you know that it exists? </font> <font color=blue> The Markov matrix is copied in code below. This might make your computation easier, but you can solve this in any way you wish. (Note: don't forget about the difference between right and left eigenvectors) </font> End of explanation import re import sys import urllib import urlparse import random from bs4 import BeautifulSoup class MyOpener(urllib.FancyURLopener): version = 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.15) Gecko/20110303 Firefox/3.6.15' def domain(url): Parse a url to give you the domain. # urlparse breaks down the url passed it, and you split the hostname up # ex: hostname="www.google.com" becomes ['www', 'google', 'com'] hostname = urlparse.urlparse(url).hostname.split(".") hostname = ".".join(len(hostname[-2]) < 4 and hostname[-3:] or hostname[-2:]) return hostname def parse_links(url, url_start): Return all the URLs on a page and return the start URL if there is an error or no URLS. url_list = [] myopener = MyOpener() try: # open, read, and parse the text using beautiful soup page = myopener.open(url) text = page.read() page.close() soup = BeautifulSoup(text, "html") # find all hyperlinks using beautiful soup for tag in soup.findAll('a', href=True): # concatenate the base url with the path from the hyperlink tmp = urlparse.urljoin(url, tag['href']) # we want to stay in the berkeley EECS domain (more relevant later)... if domain(tmp).endswith('berkeley.edu') and 'eecs' in tmp: url_list.append(tmp) if len(url_list) == 0: return [url_start] return url_list except: return [url_start] url_start = "http://www.eecs.berkeley.edu/~kw1jjang/ee126/1.html" parse_links(url_start,url_start) Explanation: Simulation time! We now want to empirically test what we solved above by modeling a random user hopping along those webpages. We will start the user at "1.html" and behave as per the Markov chain above. In the code below, we simulate this and keep track of the average amount of time a user spends in each state. We will expect that after enough iterations, the fraction of time spent in each state should approach the stationary distribution. We use the parse_links() method to parse all hyperlinks in a page. 
We use the library <a href="http://www.crummy.com/software/BeautifulSoup/">Beautiful Soup</a> in order to complete this portion of the lab in order to easiliy parse pages. Once you download the latest release, you must build and install setup.py. Alternatively, use pip or easy_install (<a href="http://www.crummy.com/software/BeautifulSoup/bs4/doc/">help</a>). Note that this code relies on having Python 2 installed -- it will not work with Python 3. End of explanation import random # the url we want to begin with url_start = "http://www.eecs.berkeley.edu/~kw1jjang/ee126/1.html" current_url = url_start # parameter to set the number of transitions you make/different pages you visit num_of_visits = 1000 # dictionary of pages visited so far visit_history = {} # initialize dictionary since we know exactly where we'll end up for i in range(1, 9): page = "http://www.eecs.berkeley.edu/~kw1jjang/ee126/" + str(i) + ".html" visit_history[page] = 0 for i in range(num_of_visits): # Your code here if i % 10 == 0: print i visit_history[current_url] += 1 current_url = np.random.choice(parse_links(current_url,url_start)) Explanation: <font color=blue>$\mathcal{Q}$2. Simulating a Random Walk</font> In the following code block, use the above functions to surf the web pages described by the Markov chain above. This code block may take a while to run. If it is taking more than a couple of minutes, maybe try reducing num_of_visits in order to at least get results. Also, running your code while connected to AirBears may help if you have a slow internet connection at home. End of explanation for i in range(1, 9): page = "http://www.eecs.berkeley.edu/~kw1jjang/ee126/" + str(i) + ".html" print 'Fraction of time staying on page %d is %f' % (i, float(visit_history[page])/num_of_visits) print steadyState/np.sum(steadyState) Explanation: Print your results: End of explanation url_start = "http://www.eecs.berkeley.edu/Research/" current_url = url_start num_of_visits = 200 #List of professors obtained from the EECS page profs = ['Abbeel','Agrawala','Alon','Anantharam','Arcak','Arias','Asanović','Bachrach','Bajcsy','Bodik','Bokor','Boser','Brewer','Canny','Chang-Hasnain','Culler','Darrell','Demmel','Fearing','Fox','Franklin','Garcia','Goldberg','Hartmann','Harvey','Hellerstein','Javey','Joseph','Katz','Keutzer','Liu','Klein','Kubiatowicz','Lee','Lustig','Maharbiz','Malik','Nguyen','Niknejad','Nikolic',"O'Brien",'Parekh','Patterson','Paxson','Pisano','Rabaey','Ramchandran','Roychowdhury','Russell','Sahai','Salahuddin','Sanders','Sangiovanni-Vincentelli','Sastry','Sen','Seshia','Shenker','Song','Song','Spanos','Stoica','Stojanovic','Tomlin','Tygar','Walrand','Wawrzynek','Wu','Yablonovitch','Yelick','Zakhor'] #Bad URLs help take care of some pathologies that ruin our surfing bad_urls = ['http://www.erso.berkeley.edu/','http://www.eecs.berkeley.edu/Rosters/roster.name.nostudentee.html','http://www.eecs.berkeley.edu/Resguide/admin.shtml#aliases','http://www.eecs.berkeley.edu/department/EECSbrochure/c1-s3.html'] #Creating a dictionary to keep track of how often we come across a professor profdict = {} for i in profs: profdict[i] = 0 for i in range(num_of_visits): print i , ' Visiting... ', current_url if random.random() < 0.95: #follow a link! 
url_list = parse_links(current_url, url_start) updated = False while not updated: current_url = random.choice(url_list) updated = True if current_url in bad_urls or "iris" in current_url or "Deptonly" in current_url or "anchor" in current_url or "erso" in current_url: #dealing with more pathologies: updated = False myopener = MyOpener() page = myopener.open(current_url) text = page.read() page.close() #Figuring out which professor is mentioned on a page. for p in profs: profdict[p]+= 1 if " " + p + " " in text else 0 #can use regex re.findall(i,text), but it's overkill else: #click the "home" button! current_url = url_start prof_ranks = [pair[0] for pair in sorted(profdict.items(), key = lambda item: item[1], reverse=True)] top_score = profdict[prof_ranks[0]] for i in range(len(prof_ranks)): print "%d %f: %s" % (i+1,profdict[prof_ranks[i]]/top_score, prof_ranks[i]) print 'Top score: ', top_score Explanation: Does this approximately match the invariant distribution you expected? Generalizing to the Web The toy websites given above conveniently form an irreducible Markov chain (look up what this means if you do not remember from class), but most of the web will not look like this. There will be fringes of the internet containing only self-loops, or some web pages which do not link to others at all. In order to account for such pathologies in the web, we need to make a more intelligent surfer. The simplest idea would be to just jump back to the starting page if there are no links found on the page you are on, and to always return to a "good" starting point with probability $p$ on every page. This is a very naive scheme, and there are many more intelligent methods by which you can sample from the distribution of the Internet, accounting for its pathologies and all. Ranking Berkeley Professors The following code is a (weak) attempt to rank the Berkeley faculty based on a crawler which begins on the EECS research homepage. End of explanation # Your code here Explanation: <font color=blue>$\mathcal{Q}$3. Try your hand at applying the above idea to a website you personally visit (somewhat) frequently! Do a simple crawl (similar to the above) and see if you can figure out something interesting. (Keep it simple.)</font> End of explanation
8,292
Given the following text description, write Python code to implement the functionality described below step by step Description: Example to demonstrate optimized backdoor variable search for Causal Identification This notebook compares the performance between causal identification using vanilla backdoor search and the optimized backdoor search and demonstrates the performance gains obtained by using the latter. Step1: Create Random Graph In this section, we create a random graph with the designated number of nodes (10 in this case). Step2: Testing optimized backdoor search In this section, we compare the runtimes for causal identification using vanilla backdoor search and the optimized backdoor search.
Python Code: import time import random from networkx.linalg.graphmatrix import adjacency_matrix import numpy as np import pandas as pd import networkx as nx import dowhy from dowhy import CausalModel from dowhy.utils import graph_operations import dowhy.datasets Explanation: Example to demonstrate optimized backdoor variable search for Causal Identification This notebook compares the performance between causal identification using vanilla backdoor search and the optimized backdoor search and demonstrates the performance gains obtained by using the latter. End of explanation n = 10 p = 0.5 G = nx.generators.random_graphs.fast_gnp_random_graph(n, p, directed=True) graph = nx.DiGraph([(u,v) for (u,v) in G.edges() if u<v]) nodes = [] for i in graph.nodes: nodes.append(str(i)) adjacency_matrix = np.asarray(nx.to_numpy_matrix(graph)) graph_dot = graph_operations.adjacency_matrix_to_graph(adjacency_matrix, nodes) graph_dot = graph_operations.str_to_dot(graph_dot.source) print("Graph Generated.") df = pd.DataFrame(columns=nodes) print("Dataframe Generated.") Explanation: Create Random Graph In this section, we create a random graph with the designated number of nodes (10 in this case). End of explanation start = time.time() # I. Create a causal model from the data and given graph. model = CausalModel(data=df,treatment=str(random.randint(0,n-1)),outcome=str(random.randint(0,n-1)),graph=graph_dot) time1 = time.time() print("Time taken for initializing model =", time1-start) # II. Identify causal effect and return target estimands identified_estimand = model.identify_effect() time2 = time.time() print("Time taken for vanilla identification =", time2-time1) # III. Identify causal effect using the optimized backdoor implementation identified_estimand = model.identify_effect(optimize_backdoor=True) end = time.time() print("Time taken for optimized backdoor identification =", end-time2) Explanation: Testing optimized backdoor search In this section, we compare the runtimes for causal identification using vanilla backdoor search and the optimized backdoor search. End of explanation
8,293
Given the following text description, write Python code to implement the functionality described below step by step Description: Visualising statistical significance thresholds on EEG data MNE-Python provides a range of tools for statistical hypothesis testing and the visualisation of the results. Here, we show a few options for exploratory and confirmatory tests - e.g., targeted t-tests, cluster-based permutation approaches (here with Threshold-Free Cluster Enhancement); and how to visualise the results. The underlying data comes from [1]; we contrast long vs. short words. TFCE is described in [2]. References .. [1] Dufau, S., Grainger, J., Midgley, KJ., Holcomb, PJ. A thousand words are worth a picture Step1: If we have a specific point in space and time we wish to test, it can be convenient to convert the data into Pandas Dataframe format. In this case, the Step2: Absent specific hypotheses, we can also conduct an exploratory mass-univariate analysis at all sensors and time points. This requires correcting for multiple tests. MNE offers various methods for this; amongst them, cluster-based permutation methods allow deriving power from the spatio-temoral correlation structure of the data. Here, we use TFCE. Step3: The results of these mass univariate analyses can be visualised by plotting
Python Code: import numpy as np import matplotlib.pyplot as plt from scipy.stats import ttest_ind import mne from mne.channels import find_ch_connectivity, make_1020_channel_selections from mne.stats import spatio_temporal_cluster_test np.random.seed(0) # Load the data path = mne.datasets.kiloword.data_path() + '/kword_metadata-epo.fif' epochs = mne.read_epochs(path) name = "NumberOfLetters" # Split up the data by the median length in letters via the attached metadata median_value = str(epochs.metadata[name].median()) long_words = epochs[name + " > " + median_value] short_words = epochs[name + " < " + median_value] Explanation: Visualising statistical significance thresholds on EEG data MNE-Python provides a range of tools for statistical hypothesis testing and the visualisation of the results. Here, we show a few options for exploratory and confirmatory tests - e.g., targeted t-tests, cluster-based permutation approaches (here with Threshold-Free Cluster Enhancement); and how to visualise the results. The underlying data comes from [1]; we contrast long vs. short words. TFCE is described in [2]. References .. [1] Dufau, S., Grainger, J., Midgley, KJ., Holcomb, PJ. A thousand words are worth a picture: Snapshots of printed-word processing in an event-related potential megastudy. Psychological Science, 2015 .. [2] Smith and Nichols 2009, "Threshold-free cluster enhancement: addressing problems of smoothing, threshold dependence, and localisation in cluster inference", NeuroImage 44 (2009) 83-98. End of explanation time_windows = ((.2, .25), (.35, .45)) elecs = ["Fz", "Cz", "Pz"] # display the EEG data in Pandas format (first 5 rows) print(epochs.to_data_frame()[elecs].head()) report = "{elec}, time: {tmin}-{tmax} s; t({df})={t_val:.3f}, p={p:.3f}" print("\nTargeted statistical test results:") for (tmin, tmax) in time_windows: long_df = long_words.copy().crop(tmin, tmax).to_data_frame() short_df = short_words.copy().crop(tmin, tmax).to_data_frame() for elec in elecs: # extract data A = long_df[elec].groupby("condition").mean() B = short_df[elec].groupby("condition").mean() # conduct t test t, p = ttest_ind(A, B) # display results format_dict = dict(elec=elec, tmin=tmin, tmax=tmax, df=len(epochs.events) - 2, t_val=t, p=p) print(report.format(**format_dict)) Explanation: If we have a specific point in space and time we wish to test, it can be convenient to convert the data into Pandas Dataframe format. In this case, the :class:mne.Epochs object has a convenient :meth:mne.Epochs.to_data_frame method, which returns a dataframe. This dataframe can then be queried for specific time windows and sensors. The extracted data can be submitted to standard statistical tests. Here, we conduct t-tests on the difference between long and short words. End of explanation # Calculate statistical thresholds con = find_ch_connectivity(epochs.info, "eeg") # Extract data: transpose because the cluster test requires channels to be last # In this case, inference is done over items. In the same manner, we could # also conduct the test over, e.g., subjects. 
X = [long_words.get_data().transpose(0, 2, 1), short_words.get_data().transpose(0, 2, 1)] tfce = dict(start=.2, step=.2) t_obs, clusters, cluster_pv, h0 = spatio_temporal_cluster_test( X, tfce, n_permutations=100) # a more standard number would be 1000+ significant_points = cluster_pv.reshape(t_obs.shape).T < .05 print(str(significant_points.sum()) + " points selected by TFCE ...") Explanation: Absent specific hypotheses, we can also conduct an exploratory mass-univariate analysis at all sensors and time points. This requires correcting for multiple tests. MNE offers various methods for this; amongst them, cluster-based permutation methods allow deriving power from the spatio-temoral correlation structure of the data. Here, we use TFCE. End of explanation # We need an evoked object to plot the image to be masked evoked = mne.combine_evoked([long_words.average(), -short_words.average()], weights='equal') # calculate difference wave time_unit = dict(time_unit="s") evoked.plot_joint(title="Long vs. short words", ts_args=time_unit, topomap_args=time_unit) # show difference wave # Create ROIs by checking channel labels selections = make_1020_channel_selections(evoked.info, midline="12z") # Visualize the results fig, axes = plt.subplots(nrows=3, figsize=(8, 8)) axes = {sel: ax for sel, ax in zip(selections, axes.ravel())} evoked.plot_image(axes=axes, group_by=selections, colorbar=False, show=False, mask=significant_points, show_names="all", titles=None, **time_unit) plt.colorbar(axes["Left"].images[-1], ax=list(axes.values()), shrink=.3, label="uV") plt.show() Explanation: The results of these mass univariate analyses can be visualised by plotting :class:mne.Evoked objects as images (via :class:mne.Evoked.plot_image) and masking points for significance. Here, we group channels by Regions of Interest to facilitate localising effects on the head. End of explanation
8,294
Given the following text description, write Python code to implement the functionality described below step by step Description: <img src="images/logo.jpg" style="display Step1: <p style="text-align Step2: <p style="text-align Step3: <p style="text-align Step4: <p style="text-align Step5: <p style="text-align Step6: <p style="text-align Step7: <div class="align-center" style="display Step8: <p style="text-align Step9: <p style="text-align Step10: <p style="text-align Step11: <div class="align-center" style="display Step12: <p style="text-align Step13: <p style="text-align Step14: <p style="text-align Step15: <p style="text-align Step16: <p style="text-align Step17: <p style="text-align Step18: <p style="text-align Step19: <p style="text-align Step20: <p style="text-align Step21: <p style="text-align Step22: <p style="text-align Step23: <p style="text-align Step24: <table style="font-size Step25: <p style="text-align Step26: <p style="text-align Step27: <p style="text-align Step28: <ol style="text-align Step29: <p style="text-align Step30: <p style="align
Python Code: prime_ministers = ['David Ben-Gurion', 'Moshe Sharett', 'David Ben-Gurion', 'Levi Eshkol', 'Yigal Alon', 'Golda Meir'] Explanation: <img src="images/logo.jpg" style="display: block; margin-left: auto; margin-right: auto;" alt="לוגו של מיזם לימוד הפייתון. נחש מצויר בצבעי צהוב וכחול, הנע בין האותיות של שם הקורס: לומדים פייתון. הסלוגן המופיע מעל לשם הקורס הוא מיזם חינמי ללימוד תכנות בעברית."> <p style="text-align: right; direction: rtl; float: right;">רשימות</p> <p style="text-align: right; direction: rtl; float: right; clear: both;">הגדרה</p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> רשימה, כשמה כן היא, מייצגת <mark>אוסף מסודר של ערכים</mark>. רשימות יהיו סוג הנתונים הראשון שנכיר בפייתון, ש<mark>מטרתו היא לקבץ ערכים</mark>.<br> הרעיון מוכר לנו מהיום־יום: רשימת פריטים לקנייה בסופר שמסודרת לפי הא–ב, או רשימת ההופעות בקיץ הקרוב המסודרת לפי תאריך. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> נסו לדמיין רשימה כמסדרון ארוך, שבו עומדים בתור אחד אחרי השני איברים מסוגים שאנחנו מכירים בפייתון.<br> אם נשתמש בדימוי הלייזרים שנתנו למשתנים בשבוע הקודם, אפשר להגיד שמדובר בלייזר שמצביע לשורת לייזרים, שבה כל לייזר מצביע על ערך כלשהו. </p> <table style="font-size: 2rem; border: 0px solid black; border-spacing: 0px;"> <tr> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-bottom: 1px solid;">0</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">1</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">2</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">3</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">4</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">5</td> </tr> <tbody> <tr> <td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"David Ben-Gurion"</td> <td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Moshe Sharett"</td> <td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"David Ben-Gurion"</td> <td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Levi Eshkol"</td> <td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Yigal Alon"</td> <td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Golda Meir"</td> </tr> <tr style="background: #f5f5f5;"> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right;">-6</td> <td style="padding-left: 4px; padding-top: 2px; 
padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-5</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-4</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-3</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-2</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-1</td> </tr> </tbody> </table> <br> <p style="text-align: center; direction: rtl; clear: both; font-size: 1.8rem"> דוגמה לרשימה: 6 ראשי הממשלה הראשונים בישראל לפי סדר כהונתם, משמאל לימין </p> <p style="text-align: right; direction: rtl; float: right;">דוגמאות</p> <ol style="text-align: right; direction: rtl; float: right; clear: both;"> <li>רשימת שמות ראשי הממשלה במדינת ישראל לפי סדר כהונתם.</li> <li>רשימת הגילים של התלמידים בכיתה, מהמבוגר לצעיר.</li> <li>רשימת שמות של התקליטים שיש לי בארון, מסודרת מהתקליט השמאלי לימני.</li> <li>רשימה שבה כל איבר מייצג אם לראש הממשלה שנמצא בתא התואם ברשימה הקודמת היו משקפיים.</li> <li>האיברים 42, 8675309, 73, <span dir="ltr" style="direction: ltr;">-40</span> ו־186282 בסדר הזה.</li> <li>רשימה של תחזית מזג האוויר ב־7 הימים הקרובים. כל איבר ברשימה הוא בפני עצמו רשימה, שמכילה שני איברים: הראשון הוא מה תהיה הטמפרטורה הממוצעת, והשני הוא מה תהיה הלחות הממוצעת.</li> <ol> <div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;"> <div style="display: flex; width: 10%; float: right; clear: both;"> <img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול"> </div> <div style="width: 90%"> <p style="text-align: right; direction: rtl; float: right; clear: both;"> <strong>תרגול</strong>: הרשימות שהוצגו למעלה הן <dfn>רשימות הומוגניות</dfn>, כאלו שכל האיברים שבהן הם מאותו סוג.<br> כתבו עבור כל אחת מהרשימות שהוצגו בדוגמה מה סוג הנתונים שיישמר בהן. </p> </div> </div> <div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;"> <div style="display: flex; width: 10%; float: right; clear: both;"> <img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול"> </div> <div style="width: 90%"> <p style="text-align: right; direction: rtl; float: right; clear: both;"> <strong>תרגול</strong>: נסו לתת דוגמה לעוד 3 רשימות שבהן נתקלתם לאחרונה.</p> </div> </div> ## <p style="text-align: right; direction: rtl; float: right;">רשימות בקוד</p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> רשימות הן אחד מסוגי הנתונים הכיפיים ביותר בפייתון, וזאת בזכות הגמישות האדירה שיש לנו בתכנות עם רשימות. 
</p> ### <span style="text-align: right; direction: rtl; float: right; clear: both;">הגדרת רשימה</span> <p style="text-align: right; direction: rtl; float: right; clear: both;"> נגדיר בעזרת פייתון את הרשימה שפגשנו למעלה – 6 ראשי הממשלה הראשונים מאז קום המדינה: </p> End of explanation print(prime_ministers) type(prime_ministers) Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> מה התרחש בקוד?<br> התחלנו את הגדרת הרשימה באמצעות התו <code dir="ltr" style="direction: ltr;">[</code>.<br> מייד אחרי התו הזה הכנסנו איברים לרשימה לפי הסדר הרצוי, כאשר כל איבר מופרד ממשנהו בפסיק (<code>,</code>).<br> במקרה שלנו, כל איבר הוא מחרוזת המייצגת ראש ממשלה. הכנסנו את ראשי הממשלה לרשימה <mark>לפי סדר</mark> כהונתם.<br> שימו לב שהרשימה מכילה איבר מסוים פעמיים – מכאן ש<mark>רשימה היא מבנה נתונים שתומך בחזרות</mark>.<br> לסיום, נסגור את הגדרת הרשימה באמצעות התו <code dir="ltr" style="direction: ltr;">]</code>.<br> </p> End of explanation numbers = [1, 2, 3, 4, 5, 6, 7] Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> נוכל להגדיר רשימה של המספרים הטבעיים עד 7: </p> End of explanation wtf = ['The cake is a', False, 42] Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> <dfn>רשימה הומוגנית</dfn> היא רשימה שבה האיברים שנמצאים בכל אחד מהתאים הם מאותו סוג. רשימות "בעולם האמיתי" הן בדרך כלל הומוגניות.<br> <dfn>רשימה הטרוגנית</dfn> היא רשימה שבה איברים בתאים שונים יכולים להיות מסוגים שונים.<br> ההבדל הוא סמנטי בלבד, ופייתון לא מבדילה בין רשימה הטרוגנית לרשימה הומוגנית. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> לשם הדוגמה, נגדיר רשימה הטרוגנית: </p> End of explanation empty_list = [] Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> נוכל אפילו להגדיר רשימה ריקה, שבה אין איברים כלל:</p> End of explanation # Index 0 1 2 3 4 5 vinyls = ['Ecliptica', 'GoT Season 6', 'Lone Digger', 'Everything goes numb', 'Awesome Mix Vol. 1', 'Ultimate Sinatra'] Explanation: <p style="text-align: right; direction: rtl; float: right;">גישה לאיברי הרשימה</p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> לכל תא ברשימה יש מספר, שמאפשר לנו להתייחס לאיבר שנמצא באותו תא.<br> הדבר דומה ללייזר שעליו יש מדבקת שם ("שמות ראשי ממשלה"), והוא מצביע על שורת לייזרים שעל התווית שלהם מופיע מספר המתאר את מיקומם בשורה.<br> התא השמאלי ביותר ברשימה ממוספר כ־0, התא שנמצא אחריו (מימינו) מקבל את המספר 1, וכך הלאה עד לסוף הרשימה.<br> המספור של כל תא נקרא <dfn>המיקום שלו ברשימה</dfn>, או <dfn>האינדקס שלו</dfn>. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> נגדיר את רשימת שמות התקליטים שיש לי בבית: </p> End of explanation print(vinyls[4]) Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> בהנחה שאנחנו מתים על Guardians of the Galaxy, נוכל לנסות להשיג מהרשימה את Awesome Mix Vol. 1.<br> כדי לעשות זאת, נציין את שם הרשימה שממנה אנחנו רוצים לקבל את האיבר, ומייד לאחר מכן את מיקומו ברשימה בסוגריים מרובעים. </p> End of explanation # 0 1 2 3 4 5 vinyls = ['Ecliptica', 'GoT Season 6', 'Lone Digger', 'Everything goes number', 'Awesome Mix Vol. 
1', 'Ultimate Sinatra'] # -6 -5 -4 -3 -2 -1 Explanation: <div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;"> <div style="display: flex; width: 10%; float: right; clear: both;"> <img src="images/warning.png" style="height: 50px !important;" alt="אזהרה!"> </div> <div style="width: 90%"> <p style="text-align: right; direction: rtl; clear: both;"> התא הראשון ממוספר 0, ולא 1.<br> יש לכך סיבות טובות, אבל פעמים רבות תרגישו שהמספור הזה לא טבעי ועלול ליצור <dfn>באגים</dfn>, שהם קטעי קוד שמתנהגים אחרת משציפה המתכנת.<br> כפועל יוצא, המיקום ברשימה של התא האחרון לא יהיה כאורך הרשימה, אלא כאורך הרשימה פחות אחד.<br> משמע: ברשימה שבה 3 איברים, מספרו של התא האחרון יהיה 2. </p> </div> </div> <figure> <img src="images/list-of-vinyls.png" width="100%" style="display: block; margin-left: auto; margin-right: auto;" alt="תמונה של 6 תקליטים על שטיח. משמאל לימין: Ecliptica / Sonata Arctica, Game of Thrones Season 6 / Ramin Djawadi, Caravan Palace / Lone Digger, Everything goes numb / Streetlight Manifesto, Awesome Mix Vol. 1 / Guardians of the Galaxy, Ultimate Sinatra / Frank Sinatra. מעל כל דיסק מופיע מספר, מ־0 עבור התקליט השמאלי ועד 5 עבור התקליט הימני. מתחת לתקליטים מופיע המספר -1 עבור התקליט הימני ביותר, וכך הלאה עד -5 עבור התקליט השמאלי ביותר."> <figcaption style="text-align: center; direction: rtl; clear: both;"> רשימת (חלק מ)התקליטים בארון שלי, מסודרת מהתקליט השמאלי לימני.<br> </figcaption> </figure> <p style="text-align: right; direction: rtl; float: right; clear: both;"> כפי שניתן לראות בתמונה, פייתון מנסה לעזור לנו ומאפשרת לנו לגשת לאיברים גם מהסוף.<br> חוץ מהמספור הרגיל שראינו קודם, אפשר לגשת לאיברים מימין לשמאל באמצעות מספור שלילי.<br> האיבר האחרון יקבל את המספר <span style="direction: ltr" dir="ltr">-1</span>, זה שלפניו (משמאלו) יקבל <span style="direction: ltr" dir="ltr">-2</span> וכן הלאה. </p> End of explanation print(vinyls[-2]) Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> אם נרצה לגשת שוב לאותו דיסק, אבל הפעם מהסוף, נוכל לכתוב זאת כך: </p> End of explanation type(vinyls[0]) print(vinyls[0] + ', By Sonata Arctica') Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> כדאי לזכור שהתוכן של כל אחד מהתאים הוא ערך לכל דבר.<br> יש לו סוג, ואפשר לבצע עליו פעולות כמו שלמדנו עד עכשיו: </p> End of explanation # כמה תקליטים יש לי? len(vinyls) Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> לסיום, נראה שבדיוק כמו במחרוזות, נוכל לבדוק את אורך הרשימה על ידי שימוש בפונקציה <code>len</code>. </p> End of explanation print(vinyls) vinyls[1] = 'GoT Season 7' print(vinyls) Explanation: <div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;"> <div style="display: flex; width: 10%; float: right; clear: both;"> <img src="images/warning.png" style="height: 50px !important;" alt="אזהרה!"> </div> <div style="width: 90%"> <p style="text-align: right; direction: rtl; clear: both;"> אם ננסה לגשת לתא שאינו קיים, נקבל <code>IndexError</code>.<br> זה בדרך כלל קורה כשאנחנו שוכחים להתחיל לספור מ־0.<br> אם השגיאה הזו מופיעה כשאתם מתעסקים עם רשימות, חשבו איפה בקוד פניתם לתא שאינו קיים. 
</p> </div> </div> <p style="text-align: right; direction: rtl; float: right;">השמה ברשימות</p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> לפעמים נרצה לשנות את הערך של האיברים ברשימה.<br> נפנה ללייזר מסוים בשורת הלייזרים שלנו, ונבקש ממנו להצביע לערך חדש: </p> End of explanation [1, 2, 3] + [4, 5, 6] ['a', 'b', 'c'] + ['easy', 'as'] + [1, 2, 3] Explanation: <p style="text-align: right; direction: rtl; float: right;">אופרטורים חשבוניים על רשימות</p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> אופרטורים שהכרנו כשלמדנו על מחרוזות, יעבדו נהדר גם על רשימות. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> כפי ש־<code>+</code> משרשר בין מחרוזות, הוא יודע לשרשר גם בין רשימות: </p> End of explanation ['wake up', 'go to school', 'sleep'] * 365 Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> וכפי ש־<code>*</code> משרשר מחרוזת לעצמה כמות מסוימת של פעמים, כך הוא יפעל גם עם רשימות: </p> End of explanation ['Is', 'someone', 'getting'] + ['the', 'best,'] * 4 + ['of', 'you?'] Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> אפשר גם לשלב: </p> End of explanation [1, 2, 3] + 5 Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> שימו לב שכל אופרטור שתשימו ליד הרשימה מתייחס <em>לרשימה בלבד</em>, ולא לאיברים שבתוכה.<br> משמע <code dir="ltr" style="direction: ltr;">+ 5</code> לא יוסיף לכם 5 לכל אחד מהאיברים, אלא ייכשל כיוון שפייתון לא יודעת לחבר רשימה למספר שלם.<br> </p> End of explanation prime_ministers = ['David Ben-Gurion', 'Moshe Sharett', 'David Ben-Gurion', 'Levi Eshkol', 'Yigal Alon', 'Golda Meir'] print(prime_ministers) prime_ministers + ['Yitzhak Rabin'] print(prime_ministers) print(prime_ministers) prime_ministers = prime_ministers + ['Yitzhak Rabin'] print(prime_ministers) Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> שימו לב גם שהפעלת אופרטור על רשימה לא גורמת לשינוי הרשימה, אלא רק מחזירה ערך.<br> כדי לשנות ממש את הרשימה, נצטרך להשתמש בהשמה: </p> End of explanation pupils_in_sunday = ['Moshe', 'Dukasit', 'Michelangelo'] pupils_in_monday = ['Moshe', 'Dukasit', 'Master Splinter'] pupils_in_tuesday = ['Moshe', 'Dukasit', 'Michelangelo'] pupils_in_wednesday = ['Moshe', 'Dukasit', 'Michelangelo', 'Master Splinter'] Explanation: <p style="text-align: right; direction: rtl; float: right;">אופרטורים השוואתיים על רשימות</p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> נגדיר את רשימת האנשים שנכחו בכיתה ביום ראשון, שני, שלישי ורביעי: </p> End of explanation print("Is it Monday? " + str(pupils_in_sunday == pupils_in_monday)) print("Is it Tuesday? " + str(pupils_in_sunday == pupils_in_tuesday)) print("Is it Wednesday? " + str(pupils_in_sunday == pupils_in_wednesday)) Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> רשימות תומכות בכל אופרטורי ההשוואה שלמדנו עד כה.<br> נתחיל בקל ביותר. בואו נבדוק באיזה יום הרכב התלמידים בכיתה היה זהה להרכב התלמידים שהיה בה ביום ראשון: </p> End of explanation print('Moshe' in pupils_in_tuesday) # זה אותו דבר כמו: print('Moshe' in ['Moshe', 'Dukasit', 'Michelangelo']) Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> האם משה נכח בכיתה ביום שלישי? 
</p> End of explanation 'Master Splinter' not in pupils_in_tuesday Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> נוכיח שמאסטר ספלינטר הבריז באותו יום: </p> End of explanation python_new_version = [3, 7, 2] python_old_version = [2, 7, 16] print(python_new_version > python_old_version) Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> ולסיום, בואו נבדוק איזו גרסה חדשה יותר: </p> End of explanation pupils_in_sunday = ['Moshe', 'Dukasit', 'Michelangelo'] pupils_in_monday = ['Moshe', 'Dukasit', 'Splinter'] pupils_in_tuesday = ['Moshe', 'Dukasit', 'Michelangelo'] pupils_in_wednesday = ['Moshe', 'Dukasit', 'Michelangelo', 'Splinter'] Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> כדי לבצע השוואה בין רשימות, פייתון מנסה להשוות את האיבר הראשון מהרשימה הראשונה לאיבר הראשון מהרשימה השנייה.<br> אם יש "תיקו", היא תעבור לאיבר השני בכל רשימה, כך עד סוף הרשימה. </p> <p style="text-align: right; direction: rtl; float: right;">רשימה של רשימות</p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> לעיתים דברים בחיים האמיתיים הם מורכבים מדי מכדי לייצג אותם ברשימה סטנדרטית.<br> הרבה פעמים נשים לב שיוקל לנו אם ניצור רשימה שבה כל תא הוא רשימה בפני עצמו.<br>הרעיון הזה ייצור לנו רשימה של רשימות.<br> ניקח לדוגמה את הרשימות שהגדרנו למעלה, שמתארות מי נכח בכל יום בכיתה: </p> End of explanation pupils = [pupils_in_sunday, pupils_in_monday, pupils_in_tuesday, pupils_in_wednesday] print(pupils) Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> אנחנו רואים לפנינו רשימה של ימים, שקל להכניס לרשימה אחת גדולה: </p> End of explanation pupils = [['Moshe', 'Dukasit', 'Michelangelo'], ['Moshe', 'Dukasit', 'Splinter'], ['Moshe', 'Dukasit', 'Michelangelo'], ['Moshe', 'Dukasit', 'Michelangelo', 'Splinter']] Explanation: <table style="font-size: 1rem; border: 0px solid black; border-spacing: 0px;"> <tr> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-bottom: 1px solid;">0</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">1</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">2</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">3</td> </tr> <tbody> <tr> <td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;"> <table style="font-size: 1.1rem; border: 0px solid black; border-spacing: 0px;"> <tr> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-bottom: 1px solid;">0</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">1</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">2</td> </tr> <tbody> <tr> <td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; 
vertical-align: bottom; border: 2px solid;">"Moshe"</td> <td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Dukasit"</td> <td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Michelangelo"</td> </tr> <tr style="background: #f5f5f5;"> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right;">-3</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-2</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-1</td> </tr> </tbody> </table> </td> <td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;"> <table style="font-size: 1.1rem; border: 0px solid black; border-spacing: 0px;"> <tr> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-bottom: 1px solid;">0</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">1</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">2</td> </tr> <tbody> <tr> <td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Moshe"</td> <td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Dukasit"</td> <td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Splinter"</td> </tr> <tr style="background: #f5f5f5;"> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right;">-3</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-2</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-1</td> </tr> </tbody> </table> </td> <td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;"> <table style="font-size: 1.1rem; border: 0px solid black; border-spacing: 0px;"> <tr> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-bottom: 1px solid;">0</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">1</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">2</td> </tr> <tbody> <tr> <td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Moshe"</td> <td style="padding-top: 
8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Dukasit"</td> <td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Michelangelo"</td> </tr> <tr style="background: #f5f5f5;"> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right;">-3</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-2</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-1</td> </tr> </tbody> </table> </td> <td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;"> <table style="font-size: 1.1rem; border: 0px solid black; border-spacing: 0px;"> <tr> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-bottom: 1px solid;">0</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">1</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">2</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">3</td> </tr> <tbody> <tr> <td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Moshe"</td> <td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Dukasit"</td> <td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Michelangelo"</td> <td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Splinter"</td> </tr> <tr style="background: #f5f5f5;"> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right;">-4</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-3</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-2</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-1</td> </tr> </tbody> </table> </td> </tr> <tr style="background: #f5f5f5;"> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right;">-4</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-3</td> <td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-2</td> <td style="padding-left: 
4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-1</td> </tr> </tbody> </table> <br> <p style="text-align: center; direction: rtl; clear: both; font-size: 1.8rem"> דוגמה לרשימה של רשימות: נוכחות התלמידים בימי ראשון עד רביעי </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> השורה שכתבנו למעלה זהה לחלוטין לשורה הבאה, שבה אנחנו מגדירים רשימה אחת שכוללת את רשימות התלמידים שנכחו בכיתה בכל יום. </p> End of explanation pupils[0] Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> נוכל לקבל את רשימת התלמידים שנכחו ביום ראשון בצורה הבאה: </p> End of explanation pupils_in_sunday = pupils[0] print(pupils_in_sunday[-1]) # או פשוט: print(pupils[0][-1]) Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> ואת התלמיד האחרון שנכח ביום ראשון בצורה הבאה: </p> End of explanation print("pupils = " + str(pupils)) print("-" * 50) print("1. 'Moshe' in pupils == " + str('Moshe' in pupils)) print("2. 'Moshe' in pupils[0] == " + str('Moshe' in pupils[0])) print("3. ['Moshe', 'Splinter'] in pupils == " + str(['Moshe', 'Splinter'] in pupils)) print("4. ['Moshe', 'Splinter'] in pupils[-1] == " + str(['Moshe', 'Splinter'] in pupils[-1])) print("5. ['Moshe', 'Dukasit', 'Splinter'] in pupils == " + str(['Moshe', 'Dukasit', 'Splinter'] in pupils)) print("6. ['Moshe', 'Dukasit', 'Splinter'] in pupils[0] == " + str(['Moshe', 'Dukasit', 'Splinter'] in pupils[0])) Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> אם קשה לכם לדמיין את זה, עשו זאת בשלבים.<br> בדקו מה יש ב־<code>pupils</code>, אחרי זה מה מחזיר <code>pupils[0]</code>, ואז נסו לקחת ממנו את האיבר האחרון, <code>pupils[0][-1]</code>. </p <p style="text-align: right; direction: rtl; float: right; clear: both;"> כדי להבין טוב יותר איך רשימה של רשימות מתנהגת, חשוב להבין את התוצאות של הביטויים הבוליאניים הבאים.<br> זה קצת מבלבל, אבל אני סומך עליכם שתחזיקו מעמד: </p> End of explanation judges = ['Esther Hayut', 'Miriam Naor', 'Asher Grunis', 'Dorit Beinisch', 'Aharon Barak'] Explanation: <ol style="text-align: right; direction: rtl; float: right; clear: both;"> <li>הביטוי הבוליאני בשורה 1 מחזיר <samp>False</samp>, כיוון שכל אחד מהאיברים ברשימה <var>pupils</var> הוא רשימה, ואף אחד מהם אינו המחרוזת <em>"Moshe"</em>.</li> <li>הביטוי הבוליאני בשורה 2 מחזיר <samp>True</samp>, כיוון שהאיבר הראשון ב־<var>pupils</var> הוא רשימה שמכילה את המחרוזת <em>"Moshe"</em>.</li> <li>הביטוי הבוליאני בשורה 3 מחזיר <samp>False</samp>, כיוון שאין בתוך <var>pupils</var> רשימה שאלו בדיוק הערכים שלה. יש אומנם רשימה שמכילה את האיברים האלו, אבל השאלה הייתה האם הרשימה הגדולה (<var>pupils</var>) מכילה איבר ששווה בדיוק ל־<code>['Moshe', 'Splinter']</code>.</li> <li>הביטוי הבוליאני בשורה 4 מחזיר <samp>False</samp>, כיוון שברשימה האחרונה בתוך <var>pupils</var> אין איבר שהוא הרשימה <code>["Moshe", "Splinter"]</code>.</li> <li>הביטוי הבוליאני בשורה 5 מחזיר <samp>True</samp>, כיוון שיש רשימה ישירות בתוך <var>pupils</var> שאלו הם ערכיה.</li> <li>הביטוי הבוליאני בשורה 6 מחזיר <samp>False</samp>, כיוון שברשימה הראשונה בתוך <var>pupils</var> אין איבר שהוא הרשימה הזו.</li> </ol> <p style="text-align: right; direction: rtl; float: right; clear: both;"> זכרו שעבור פייתון אין שום דבר מיוחד ברשימה של רשימות. היא בסך הכול רשימה רגילה, שכל אחד מאיבריה הוא רשימה.<br> מבחינתה אין הבדל בין רשימה כזו לכל רשימה אחרת. 
</p> <p style="text-align: right; direction: rtl; float: right;">המונח Iterable</p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> באתרי אינטרנט ובתיעוד של פייתון אנחנו נפגש פעמים רבות עם המילה <dfn>Iterable</dfn>.<br> בקורס נשתמש במונח הזה פעמים רבות כדי להבין טוב יותר איך פייתון מתנהגת.<br> <mark>נגדיר ערך כ־<dfn>iterable</dfn> אם ניתן לפרק אותו לכלל האיברים שלו.</mark><br> </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> עד כה אנחנו מכירים 2 סוגי משתנים שעונים להגדרה iterables: רשימות ומחרוזות.<br> ניתן לפרק רשימה לכל האיברים שמרכיבים אותה, וניתן לפרק מחרוזת לכל התווים שמרכיבים אותה.<br> </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> יש הרבה במשותף לכל הדברים שניתן להגיד עליהם שהם iterables:<br> על חלק גדול מה־iterables אפשר להפעיל פעולות שמתייחסות לכלל האיברים שבהם, כמו <code>len</code> שמראה את מספר האיברים בערך.<br> על חלק גדול מה־iterables יהיה אפשר גם להשתמש בסוגריים מרובעים כדי לגשת לאיבר מסוים שנמצא בהם.<br> בעתיד נלמד על עוד דברים שמשותפים לרוב (או לכל) ה־iterables. </p> <p style="align: right; direction: rtl; float: right; clear: both;">מונחים</p> <dl style="text-align: right; direction: rtl; float: right; clear: both;"> <dt>רשימה</dt><dd>סוג משתנה שמטרתו לקבץ ערכים אחרים בסדר מסוים.</dd> <dt>תא</dt><dd>מקום ברשימה שמכיל איבר כלשהו.</dd> <dt>מיקום</dt><dd>מיקום של תא מסוים הוא המרחק שלו מהתא הראשון ברשימה, שמיקומו הוא 0. זהו מספר שמטרתו לאפשר גישה לתא מסוים ברשימה.</dd> <dt>אינדקס</dt><dd>מילה נרדפת ל"מיקום".</dd> <dt>איבר</dt><dd>ערך שנמצא בתא של רשימה. ניתן לאחזר אותו אם נציין את שם הרשימה, ואת מיקום התא שבו הוא נמצא.</dd> <dt>רשימה הומוגנית</dt><dd>רשימה שבה כל האיברים הם מאותו סוג.</dd> <dt>רשימה הטרוגנית</dt><dd>רשימה שבה לכל איבר יכול להיות סוג שונה.</dd> <dt>Iterable</dt><dd>ערך שמורכב מסדרה של ערכים אחרים.</dd> </dl> <p style="text-align: right; direction: rtl; float: right;">לסיכום</p> <ol style="text-align: right; direction: rtl; float: right; clear: both;"> <li>מספר האיברים ברשימה יכול להיות 0 (רשימה ריקה) או יותר.</li> <li>לאיברים ברשימה יש סדר.</li> <li>כל איבר ברשימה ממוספר החל מהאיבר הראשון שממוספר 0, ועד האיבר האחרון שמספרו הוא אורך הרשימה פחות אחד.</li> <li>ניתן לגשת לאיבר גם לפי המיקום שלו וגם לפי המרחק שלו מסוף הרשימה, באמצעות התייחסות למיקום השלילי שלו.</li> <li>איברים ברשימה יכולים לחזור על עצמם.</li> <li>רשימה יכולה לכלול איברים מסוג אחד בלבד (<dfn>רשימה הומוגנית</dfn>) או מכמה סוגים שונים (<dfn>רשימה הטרוגנית</dfn>).</li> <li>אורך הרשימה יכול להשתנות במהלך ריצת התוכנית.</li> <ol> ## <p style="align: right; direction: rtl; float: right; clear: both;">תרגול</p> ### <p style="align: right; direction: rtl; float: right; clear: both;">סדר בבית המשפט!</p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> כתבו קוד שיסדר את רשימת נשיאי בית המשפט לפי סדר אלפבתי.<br> זה אכן אמור להיות מסורבל מאוד. בעתיד נלמד לכתוב קוד מוצלח יותר לבעיה הזו.<br> השתמשו באינדקסים, ושמרו ערכים בצד במשתנים. </p> End of explanation ice_cream_flavours = ['chocolate', 'vanilla', 'pistachio', 'banana'] Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> בונוס: כתבו קטע קוד שבודק שהרשימה (שמכילה 5 איברים) אכן מסודרת. </p> <p style="align: right; direction: rtl; float: right; clear: both;">מה זה משובחה בכלל?</p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> לפניכם רשימה של שמות טעמי גלידה שנמצאים בדוכן הגלידה השכונתי.<br> קבלו מהמשתמש קיפיק את הטעם האהוב עליו, והדפיסו למשתמש האם הטעם שלו נמכר בדוכן. 
</p> End of explanation rabanim = ['Rashi', 'Maimonides', 'Nachmanides', 'Rabbeinu Tam'] 'Rashi' in rabanim 'RASHI' in rabanim ['Rashi'] in rabanim ['Rashi', 'Nachmanides'] in rabanim 'Bruria' in rabanim rabanim + ['Gershom ben Judah'] 'Gershom ben Judah' in rabanim '3' in [1, 2, 3] (1 + 5 - 3) in [1, 2, 3] [1, 5, 3] > [1, 2, 3] rabanim[0] in [rabanim[0] + rabanim[1]] rabanim[0] in [rabanim[0]] + [rabanim[1]] rabanim[-1] == rabanim[0] or rabanim[-1] == rabanim[1] or rabanim[-1] == rabanim[2] or rabanim[-1] == rabanim[3] rabanim[-1] == rabanim[0] or rabanim[-1] == rabanim[1] or rabanim[-1] == rabanim[2] and rabanim[-1] != rabanim[3] rabanim[-1] == rabanim[0] or rabanim[-1] == rabanim[1] or rabanim[-1] == rabanim[2] and rabanim[-1] == rabanim[3] 1 in [[1, 2, 3], [4, 5, 6], [7, 8, 9]] [1, 2, 3] in [[1, 2, 3], [4, 5, 6], [7, 8, 9]] [[1, 2, 3], [4, 5, 6], [7, 8, 9]][0][2] [[1, 2, 3], [4, 5, 6], [7, 8, 9]][0][3] [[1, 2, 3], [4, 5, 6], [7, 8, 9]][0][-1] * 5 [[[1, 2, 3], [4, 5, 6], [7, 8, 9]][0][-1]] * 5 [[1, 2, 3], [4, 5, 6], [7, 8, 9]][0][-1] [[1, 2, 3], [4, 5, 6], [7, 8, 9]][0][-1] == [[7, 8, 9], [4, 5, 6], [1, 2, 3]][2][2] [[1, 2, 3]] in [[1, 2, 3], [4, 5, 6], [7, 8, 9]] [[1, 2, 3], [4, 5, 6]] in [[1, 2, 3], [4, 5, 6], [7, 8, 9]] [[1, 2, 3], [4, 5, 6]] in [[[1, 2, 3], [4, 5, 6]], [7, 8, 9]] Explanation: <p style="align: right; direction: rtl; float: right; clear: both;">מה רש"י?</p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> לפניכם כמה ביטויים.<br> רשמו לעצמכם מה תהיה תוצאת כל ביטוי, ורק אז הריצו אותו. </p> End of explanation
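The two closing exercises are left to the reader, so no solution cells follow them. One possible sketch, using only tools introduced in this chapter (indexing, comparison operators, and the `in` operator), covering the bonus sortedness check for the judges list and the ice-cream membership check; the list contents are copied from the exercise statements above:

```python
# Bonus check for the court-presidents exercise: is the 5-item list already
# sorted alphabetically? (Pairwise comparisons only, no loops or sort().)
judges = ['Esther Hayut', 'Miriam Naor', 'Asher Grunis', 'Dorit Beinisch', 'Aharon Barak']
is_sorted = (judges[0] <= judges[1] and judges[1] <= judges[2]
             and judges[2] <= judges[3] and judges[3] <= judges[4])
print(is_sorted)

# Ice-cream stand exercise: ask the user for a favourite flavour and report
# whether it is sold at the stand.
ice_cream_flavours = ['chocolate', 'vanilla', 'pistachio', 'banana']
favourite = input("What is your favourite flavour? ")
print(favourite in ice_cream_flavours)
```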
8,295
Given the following text description, write Python code to implement the functionality described below step by step Description: 11. タブをスペースに置換 タブ1文字につきスペース1文字に置換せよ.確認にはsedコマンド,trコマンド,もしくはexpandコマンドを用いよ. Step1: 12. 1列目をcol1.txtに,2列目をcol2.txtに保存 各行の1列目だけを抜き出したものをcol1.txtに, 2列目だけを抜き出したものをcol2.txtとしてファイルに保存せよ. 確認にはcutコマンドを用いよ. Step2: 13. col1.txtとcol2.txtをマージ 12で作ったcol1.txtとcol2.txtを結合し, 元のファイルの1列目と2列目をタブ区切りで並べたテキストファイルを作成せよ. 確認にはpasteコマンドを用いよ. Step3: 14. 先頭からN行を出力 自然数Nをコマンドライン引数などの手段で受け取り, 入力のうち先頭のN行だけを表示せよ. 確認にはheadコマンドを用いよ. Step4: 15. 末尾のN行を出力 自然数Nをコマンドライン引数などの手段で受け取り, 入力のうちの末尾のN行だけを表示せよ. 確認にはtailコマンドを用いよ. Step5: 16. ファイルをN分割する 自然数Nをコマンドライン引数などの手段で受け取り, 入力のファイルを行単位でN分割せよ. 同様の処理をsplitコマンドで実現せよ. Step6: 17. 1列目の文字列の異なり 1列目の文字列の種類(異なる文字列の集合)を求めよ. 確認にはsort, uniqコマンドを用いよ. Step7: 18. 各行を3コラム目の数値の降順にソート 各行を3コラム目の数値の逆順で整列せよ (注意 Step8: 19. 各行の1コラム目の文字列の出現頻度を求め, 出現頻度の高い順に並べる 各行の1列目の文字列の出現頻度を求め, その高い順に並べて表示せよ. 確認にはcut, uniq, sortコマンド
Python Code: hightemp = "".join(map(str, [i.replace('\t', ' ') for i in open('hightemp.txt', 'r')])) print(hightemp) Explanation: 11. タブをスペースに置換 タブ1文字につきスペース1文字に置換せよ.確認にはsedコマンド,trコマンド,もしくはexpandコマンドを用いよ. End of explanation col1 = open('col1.txt', 'w') col2 = open('col2.txt', 'w') hightemp = [i.replace('\t', ' ').split() for i in open('hightemp.txt', 'r')] col1.write("\n".join(map(str, [i[0] for i in hightemp]))) col1.close() col2.write("\n".join(map(str, [i[1] for i in hightemp]))) col2.close() Explanation: 12. 1列目をcol1.txtに,2列目をcol2.txtに保存 各行の1列目だけを抜き出したものをcol1.txtに, 2列目だけを抜き出したものをcol2.txtとしてファイルに保存せよ. 確認にはcutコマンドを用いよ. End of explanation %%timeit col3 = open('col3.txt', 'w') f1 = [i for i in open('col1.txt', 'r')] f2 = [i for i in open('col2.txt', 'r')] # [col3.write(i+'\t'+j) for i, j in zip(f1, f2)] col3.close() Explanation: 13. col1.txtとcol2.txtをマージ 12で作ったcol1.txtとcol2.txtを結合し, 元のファイルの1列目と2列目をタブ区切りで並べたテキストファイルを作成せよ. 確認にはpasteコマンドを用いよ. End of explanation def display_nline(n, filename): return "".join(map(str, [i for i in open(filename, 'r')][:n])) display = display_nline(5, "col3.txt") print(display) Explanation: 14. 先頭からN行を出力 自然数Nをコマンドライン引数などの手段で受け取り, 入力のうち先頭のN行だけを表示せよ. 確認にはheadコマンドを用いよ. End of explanation def display_back_nline(n, filename): return "".join(map(str, [i for i in open(filename, 'r')][-n-1:-1])) display = display_back_nline(5, "col3.txt") print(display) Explanation: 15. 末尾のN行を出力 自然数Nをコマンドライン引数などの手段で受け取り, 入力のうちの末尾のN行だけを表示せよ. 確認にはtailコマンドを用いよ. End of explanation %%timeit def split_file(n, filename): line = [i.strip('\n') for i in open(filename, 'r')] length = len(line) n = length//n return [line[i:i+n] for i in range(0, length, n)] split_file(2, "col1.txt") Explanation: 16. ファイルをN分割する 自然数Nをコマンドライン引数などの手段で受け取り, 入力のファイルを行単位でN分割せよ. 同様の処理をsplitコマンドで実現せよ. End of explanation def first_char(filename): return set(i[0] for i in open(filename, 'r')) print(first_char('hightemp.txt')) Explanation: 17. 1列目の文字列の異なり 1列目の文字列の種類(異なる文字列の集合)を求めよ. 確認にはsort, uniqコマンドを用いよ. End of explanation %%timeit from operator import itemgetter def column_sort(sort_key, filename): return sorted([i.split() for i in open(filename, 'r')], key=itemgetter(sort_key-1)) column_sort(3, 'hightemp.txt') Explanation: 18. 各行を3コラム目の数値の降順にソート 各行を3コラム目の数値の逆順で整列せよ (注意:各行の内容は変更せずに並び替えよ). 確認にはsortコマンドを用いよ(この問題はコマンドで実行した結果とあわなくても良い). End of explanation %%timeit from operator import itemgetter def frequency(filename): first_char = [i[0] for i in open(filename, 'r')] dictionary = set([(i, first_char.count(i)) for i in first_char]) return sorted(dictionary, key=itemgetter(1), reverse=True) frequency('hightemp.txt') %%timeit from operator import itemgetter first_char = [i[0] for i in open('hightemp.txt', 'r')] dictionary = set([(i, first_char.count(i)) for i in first_char]) sorted(dictionary, key=itemgetter(1), reverse=True) Explanation: 19. 各行の1コラム目の文字列の出現頻度を求め, 出現頻度の高い順に並べる 各行の1列目の文字列の出現頻度を求め, その高い順に並べて表示せよ. 確認にはcut, uniq, sortコマンド End of explanation
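A note on problem 13 above: inside the %%timeit cell, the line that actually writes the merged rows appears to be commented out, so col3.txt is left empty. A minimal corrected sketch (assuming col1.txt and col2.txt were produced by problem 12) that performs the tab-separated merge, equivalent to `paste col1.txt col2.txt`:

```python
# Problem 13, corrected: write the tab-separated merge of col1.txt and col2.txt.
with open('col1.txt') as f1, open('col2.txt') as f2, open('col3.txt', 'w') as col3:
    for a, b in zip(f1, f2):
        # strip any trailing newline from each column before joining with a tab
        col3.write(a.rstrip('\n') + '\t' + b.rstrip('\n') + '\n')
```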
8,296
Given the following text description, write Python code to implement the functionality described below step by step Description: Step2: OT for image color adaptation with mapping estimation OT for domain adaptation with image color adaptation [6] with mapping estimation [8]. [6] Ferradans, S., Papadakis, N., Peyre, G., & Aujol, J. F. (2014). Regularized discrete optimal transport. SIAM Journal on Imaging Sciences, 7(3), 1853-1882. [8] M. Perrot, N. Courty, R. Flamary, A. Habrard, "Mapping estimation for discrete optimal transport", Neural Information Processing Systems (NIPS), 2016. Step3: Generate data Step4: Domain adaptation for pixel distribution transfer Step5: Plot original images Step6: Plot pixel values distribution Step7: Plot transformed images
Python Code: # Authors: Remi Flamary <remi.flamary@unice.fr> # Stanislas Chambon <stan.chambon@gmail.com> # # License: MIT License import numpy as np from scipy import ndimage import matplotlib.pylab as pl import ot r = np.random.RandomState(42) def im2mat(I): Converts and image to matrix (one pixel per line) return I.reshape((I.shape[0] * I.shape[1], I.shape[2])) def mat2im(X, shape): Converts back a matrix to an image return X.reshape(shape) def minmax(I): return np.clip(I, 0, 1) Explanation: OT for image color adaptation with mapping estimation OT for domain adaptation with image color adaptation [6] with mapping estimation [8]. [6] Ferradans, S., Papadakis, N., Peyre, G., & Aujol, J. F. (2014). Regularized discrete optimal transport. SIAM Journal on Imaging Sciences, 7(3), 1853-1882. [8] M. Perrot, N. Courty, R. Flamary, A. Habrard, "Mapping estimation for discrete optimal transport", Neural Information Processing Systems (NIPS), 2016. End of explanation # Loading images I1 = ndimage.imread('../data/ocean_day.jpg').astype(np.float64) / 256 I2 = ndimage.imread('../data/ocean_sunset.jpg').astype(np.float64) / 256 X1 = im2mat(I1) X2 = im2mat(I2) # training samples nb = 1000 idx1 = r.randint(X1.shape[0], size=(nb,)) idx2 = r.randint(X2.shape[0], size=(nb,)) Xs = X1[idx1, :] Xt = X2[idx2, :] Explanation: Generate data End of explanation # EMDTransport ot_emd = ot.da.EMDTransport() ot_emd.fit(Xs=Xs, Xt=Xt) transp_Xs_emd = ot_emd.transform(Xs=X1) Image_emd = minmax(mat2im(transp_Xs_emd, I1.shape)) # SinkhornTransport ot_sinkhorn = ot.da.SinkhornTransport(reg_e=1e-1) ot_sinkhorn.fit(Xs=Xs, Xt=Xt) transp_Xs_sinkhorn = ot_sinkhorn.transform(Xs=X1) Image_sinkhorn = minmax(mat2im(transp_Xs_sinkhorn, I1.shape)) ot_mapping_linear = ot.da.MappingTransport( mu=1e0, eta=1e-8, bias=True, max_iter=20, verbose=True) ot_mapping_linear.fit(Xs=Xs, Xt=Xt) X1tl = ot_mapping_linear.transform(Xs=X1) Image_mapping_linear = minmax(mat2im(X1tl, I1.shape)) ot_mapping_gaussian = ot.da.MappingTransport( mu=1e0, eta=1e-2, sigma=1, bias=False, max_iter=10, verbose=True) ot_mapping_gaussian.fit(Xs=Xs, Xt=Xt) X1tn = ot_mapping_gaussian.transform(Xs=X1) # use the estimated mapping Image_mapping_gaussian = minmax(mat2im(X1tn, I1.shape)) Explanation: Domain adaptation for pixel distribution transfer End of explanation pl.figure(1, figsize=(6.4, 3)) pl.subplot(1, 2, 1) pl.imshow(I1) pl.axis('off') pl.title('Image 1') pl.subplot(1, 2, 2) pl.imshow(I2) pl.axis('off') pl.title('Image 2') pl.tight_layout() Explanation: Plot original images End of explanation pl.figure(2, figsize=(6.4, 5)) pl.subplot(1, 2, 1) pl.scatter(Xs[:, 0], Xs[:, 2], c=Xs) pl.axis([0, 1, 0, 1]) pl.xlabel('Red') pl.ylabel('Blue') pl.title('Image 1') pl.subplot(1, 2, 2) pl.scatter(Xt[:, 0], Xt[:, 2], c=Xt) pl.axis([0, 1, 0, 1]) pl.xlabel('Red') pl.ylabel('Blue') pl.title('Image 2') pl.tight_layout() Explanation: Plot pixel values distribution End of explanation pl.figure(2, figsize=(10, 5)) pl.subplot(2, 3, 1) pl.imshow(I1) pl.axis('off') pl.title('Im. 1') pl.subplot(2, 3, 4) pl.imshow(I2) pl.axis('off') pl.title('Im. 
2') pl.subplot(2, 3, 2) pl.imshow(Image_emd) pl.axis('off') pl.title('EmdTransport') pl.subplot(2, 3, 5) pl.imshow(Image_sinkhorn) pl.axis('off') pl.title('SinkhornTransport') pl.subplot(2, 3, 3) pl.imshow(Image_mapping_linear) pl.axis('off') pl.title('MappingTransport (linear)') pl.subplot(2, 3, 6) pl.imshow(Image_mapping_gaussian) pl.axis('off') pl.title('MappingTransport (gaussian)') pl.tight_layout() pl.show() Explanation: Plot transformed images End of explanation
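Two portability notes on the listing above. The docstrings of im2mat and mat2im lost their triple quotes in this flattened listing, so the helper functions do not parse exactly as shown, and scipy.ndimage.imread has been removed from recent SciPy releases. Below is a sketch with the docstrings restored and one possible replacement loader (matplotlib's imread, an assumption rather than part of the original example):

```python
import numpy as np
import matplotlib.pylab as pl

def im2mat(I):
    """Converts an image to a matrix (one pixel per line)."""
    return I.reshape((I.shape[0] * I.shape[1], I.shape[2]))

def mat2im(X, shape):
    """Converts back a matrix to an image."""
    return X.reshape(shape)

def minmax(I):
    return np.clip(I, 0, 1)

# scipy.ndimage.imread is gone in SciPy >= 1.2; matplotlib's imread (which uses
# Pillow for JPEGs) is one drop-in alternative for the example images:
I1 = pl.imread('../data/ocean_day.jpg').astype(np.float64) / 256
I2 = pl.imread('../data/ocean_sunset.jpg').astype(np.float64) / 256
```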
8,297
Given the following text description, write Python code to implement the functionality described below step by step Description: Getting Started Bambi requires a working Python interpreter (3.7+). We recommend installing Python and key numerical libraries using the Anaconda Distribution, which has one-click installers available on all major platforms. Assuming a standard Python environment is installed on your machine (including pip), Bambi itself can be installed in one line using pip Step1: Creating a model Creating a new model in Bambi is simple Step2: Typically, we will initialize a Bambi Model by passing it a model formula and a pandas DataFrame. Other arguments such as family, priors, and link are available. By default, it uses family="gaussian" which implies a linear regression with normal error. We get back a model that we can immediately fit by calling model.fit(). Data format As with most mixed effect modeling packages, Bambi expects data in "long" format--meaning that each row should reflects a single observation at the most fine-grained level of analysis. For example, given a model where students are nested into classrooms and classrooms are nested into schools, we would want data with the following kind of structure Step3: We pass dropna=True to tell Bambi to drop rows containing missing values. The number of rows dropped is different from the number of rows with missing values because Bambi only considers columns involved in the model. Step4: Each of the above examples specifies a full model that can be fitted using PyMC3 by doing python results = model.fit() Coding of categorical variables When a categorical common effect with N levels is added to a model, by default, it is coded by N-1 dummy variables (i.e., reduced-rank coding). For example, suppose we write "y ~ condition + age + gender", where condition is a categorical variable with 4 levels, and age and gender are continuous variables. Then our model would contain an intercept term (added to the model by default, as in R), three dummy-coded variables (each contrasting the first level of condition with one of the subsequent levels), and continuous predictors for age and gender. Suppose, however, that we would rather use full-rank coding of conditions. If we explicitly remove the intercept --as in "y ~ 0 + condition + age + gender"-- then we get the desired effect. Now, the intercept is no longer included, and condition will be coded using 4 dummy indicators, each one coding for the presence or absence of the respective condition without reference to the other conditions. Group specific effects are handled in a comparable way. When adding group specific intercepts, coding is always full-rank (e.g., when adding group specific intercepts for 100 schools, one gets 100 dummy-coded indicators coding each school separately, and not 99 indicators contrasting each school with the very first one). For group specific slopes, coding proceeds the same way as for common effects. The group specific effects specification "(condition|subject)" would add an intercept for each subject, plus N-1 condition slopes (each coded with respect to the first, omitted, level as the referent). If we instead specify "(0+condition|subject)", we get N condition slopes and no intercepts. Fitting the model Once a model is fully specified, we need to run the PyMC3 sampler to generate parameter estimates. 
If we're using the one-line fit() interface, sampling will begin right away Step5: The above code obtains 1,000 draws (the default value) and return them as an InferenceData instance (for more details, see the ArviZ documentation). In this case, the fit() method accepts optional keyword arguments to pass onto PyMC3's sample() method, so any methods accepted by sample() can be specified here. We can also explicitly set the number of draws via the draws argument. For example, if we call fit(draws=2000, chains=2), the PyMC3 sampler will sample two chains in parallel, drawing 2,000 draws for each one. We could also specify starting parameter values, the step function to use, and so on (for full details, see the PyMC3 documentation). Alternatively, we can build a model, but not fit it. Step6: Building without sampling can be useful if we want to inspect the internal PyMC3 model before we start the (potentially long) sampling process. Once we're satisfied, and wish to run the sampler, we can then simply call model.fit(), and the sampler will start running. Another good reason to build a model is to generate plot of the marginal priors using model.plot_priors(). Step7: Specifying priors Bayesian inference requires one to specify prior probability distributions that represent the analyst's belief (in advance of seeing the data) about the likely values of the model parameters. In practice, analysts often lack sufficient information to formulate well-defined priors, and instead opt to use "weakly informative" priors that mainly serve to keep the model from exploring completely pathological parts of the parameter space (e.g., when defining a prior on the distribution of human heights, a value of 3,000 cms should be assigned a probability of exactly 0). By default, Bambi will intelligently generate weakly informative priors for all model terms, by loosely scaling them to the observed data. Currently, Bambi uses a methodology very similar to the one described in the documentation of the R package rstanarm. While the default priors will behave well in most typical settings, there are many cases where an analyst will want to specify their own priors--and in general, when informative priors are available, it's a good idea to use them. Fortunately, Bambi is built on top of PyMC3, which means that we can seamlessly use any of the over 40 Distribution classes defined in PyMC3. We can specify such priors in Bambi using the Prior class, which initializes with a name argument (which must map on exactly to the name of a valid PyMC3 Distribution) followed by any of the parameters accepted by the corresponding distribution. For example Step8: Priors specified using the Prior class can be nested to arbitrary depths--meaning, we can set any of a given prior's argument to point to another Prior instance. This is particularly useful when specifying hierarchical priors on group specific effects, where the individual group specific slopes or intercepts are constrained to share a common source distribution Step9: The above prior specification indicates that the individual subject intercepts are to be treated as if they are randomly sampled from the same underlying normal distribution, where the variance of that normal distribution is parameterized by a separate hyperprior (a half-cauchy with beta = 5). It's important to note that explicitly setting priors by passing in Prior objects will disable Bambi's default behavior of scaling priors to the data in order to ensure that they remain weakly informative. 
This means that if you specify your own prior, you have to be sure not only to specify the distribution you want, but also any relevant scale parameters. For example, the 0.5 in Prior("Normal", mu=0, sd=0.5) will be specified on the scale of the data, not the bounded partial correlation scale that Bambi uses for default priors. This means that if your outcome variable has a mean value of 10,000 and a standard deviation of, say, 1,000, you could potentially have some problems getting the model to produce reasonable estimates, since from the perspective of the data, you're specifying an extremely strong prior. Generalized linear mixed models Bambi supports the construction of mixed models with non-normal response distributions (i.e., generalized linear mixed models, or GLMMs). GLMMs are specified in the same way as LMMs, except that the user must specify the distribution to use for the response, and (optionally) the link function with which to transform the linear model prediction into the desired non-normal response. The easiest way to construct a GLMM is to simple set the family when creating the model Step10: If no link argument is explicitly set (see below), the canonical link function (or an otherwise sensible default) will be used. The following table summarizes the currently available families and their associated links Step11: The above example produces results identical to simply setting family='bernoulli'. One complication in specifying a custom Family is that one must pass both a link function and an inverse link function which must be able to operate over Aesara tensors rather than numpy arrays, so you'll probably need to rely on tensor operations provided in aesara.tensor (many of which are also wrapped by PyMC3) when defining a new link. Results When a model is fitted, it returns a InferenceData object containing data related to the model and the posterior. This object can be passed to many functions in ArviZ to obtain numerical and visuals diagnostics and plot in general. Plotting To visualize a plot of the posterior estimates and sample traces for all parameters, simply pass the InferenceData object to the arviz function az._plot_trace Step12: More details on this plot are available in the ArviZ documentation. Summarizing If you prefer numerical summaries of the posterior estimates, you can use the az.summary() function from ArviZ which provides a pandas DataFrame with some key summary and diagnostics info on the model parameters, such as the 94% highest posterior density intervals Step13: If you want to view summaries or plots for specific parameters, you can pass a list of its names Step14: You can find detailed, worked examples of fitting Bambi models and working with the results in the example notebooks here. Accessing back-end objects Bambi is just a high-level interface to PyMC3. As such, Bambi internally stores virtually all objects generated by PyMC3, making it easy for users to retrieve, inspect, and modify those objects. For example, the Model class created by PyMC3 (as opposed to the Bambi class of the same name) is accessible from model.backend.model.
Python Code: import arviz as az import bambi as bmb import numpy as np import pandas as pd az.style.use("arviz-darkgrid") Explanation: Getting Started Bambi requires a working Python interpreter (3.7+). We recommend installing Python and key numerical libraries using the Anaconda Distribution, which has one-click installers available on all major platforms. Assuming a standard Python environment is installed on your machine (including pip), Bambi itself can be installed in one line using pip: pip install bambi Alternatively, if you want the bleeding edge version of the package, you can install from GitHub: pip install git+https://github.com/bambinos/bambi.git Quickstart Suppose we have data for a typical within-subjects psychology experiment with 2 experimental conditions. Stimuli are nested within condition, and subjects are crossed with condition. We want to fit a model predicting reaction time (RT) from the common effect of condition, group specific intercepts for subjects, group specific condition slopes for students, and group specific intercepts for stimuli. Using Bambi we can fit this model and summarize its results as follows: ```python import bambi as bmb Assume we already have our data loaded as a pandas DataFrame model = bmb.Model("rt ~ condition + (condition|subject) + (1|stimulus)", data) results = model.fit(draws=5000, chains=2) az.plot_trace(results) az.summary(results) ``` User Guide Setup End of explanation # Read in a tab-delimited file containing our data data = pd.read_table("data/my_data.txt", sep="\t") # Initialize the model model = bmb.Model("y ~ x + z", data) # Inspect model object model Explanation: Creating a model Creating a new model in Bambi is simple: End of explanation data = pd.read_csv("data/rrr_long.csv") data.head(10) # Number of rows with missing values data.isna().any(axis=1).sum() Explanation: Typically, we will initialize a Bambi Model by passing it a model formula and a pandas DataFrame. Other arguments such as family, priors, and link are available. By default, it uses family="gaussian" which implies a linear regression with normal error. We get back a model that we can immediately fit by calling model.fit(). Data format As with most mixed effect modeling packages, Bambi expects data in "long" format--meaning that each row should reflects a single observation at the most fine-grained level of analysis. For example, given a model where students are nested into classrooms and classrooms are nested into schools, we would want data with the following kind of structure: <center> |student| gender | gpa | class | school |:-----:|:------:|:------:|:------:| :------:| 1 |F |3.4 | 1 |1 | 2 |F |3.7 | 1 |1 | 3 |M |2.2 | 1 |1 | 4 |F |3.9 | 2 |1 | 5 |M |3.6 | 2 |1 | 6 |M |3.5 | 2 |1 | 7 |F |2.8 | 3 |2 | 8 |M |3.9 | 3 |2 | 9 |F |4.0 | 3 |2 | </center> Formula-based specification Models are specified in Bambi using a formula-based syntax similar to what one might find in R packages like lme4 or brms using the Python formulae library. 
A couple of examples illustrate the breadth of models that can be easily specified in Bambi: End of explanation # Common (or fixed) effects only bmb.Model("value ~ condition + age + gender", data, dropna=True) # Common effects and group specific (or random) intercepts for subject bmb.Model("value ~ condition + age + gender + (1|uid)", data, dropna=True) # Multiple, complex group specific effects with both # group specific slopes and group specific intercepts bmb.Model("value ~ condition + age + gender + (1|uid) + (condition|study) + (condition|stimulus)", data, dropna=True) Explanation: We pass dropna=True to tell Bambi to drop rows containing missing values. The number of rows dropped is different from the number of rows with missing values because Bambi only considers columns involved in the model. End of explanation model = bmb.Model("value ~ condition + age + gender + (1|uid)", data, dropna=True) results = model.fit() Explanation: Each of the above examples specifies a full model that can be fitted using PyMC3 by doing python results = model.fit() Coding of categorical variables When a categorical common effect with N levels is added to a model, by default, it is coded by N-1 dummy variables (i.e., reduced-rank coding). For example, suppose we write "y ~ condition + age + gender", where condition is a categorical variable with 4 levels, and age and gender are continuous variables. Then our model would contain an intercept term (added to the model by default, as in R), three dummy-coded variables (each contrasting the first level of condition with one of the subsequent levels), and continuous predictors for age and gender. Suppose, however, that we would rather use full-rank coding of conditions. If we explicitly remove the intercept --as in "y ~ 0 + condition + age + gender"-- then we get the desired effect. Now, the intercept is no longer included, and condition will be coded using 4 dummy indicators, each one coding for the presence or absence of the respective condition without reference to the other conditions. Group specific effects are handled in a comparable way. When adding group specific intercepts, coding is always full-rank (e.g., when adding group specific intercepts for 100 schools, one gets 100 dummy-coded indicators coding each school separately, and not 99 indicators contrasting each school with the very first one). For group specific slopes, coding proceeds the same way as for common effects. The group specific effects specification "(condition|subject)" would add an intercept for each subject, plus N-1 condition slopes (each coded with respect to the first, omitted, level as the referent). If we instead specify "(0+condition|subject)", we get N condition slopes and no intercepts. Fitting the model Once a model is fully specified, we need to run the PyMC3 sampler to generate parameter estimates. If we're using the one-line fit() interface, sampling will begin right away: End of explanation model = bmb.Model("value ~ condition + age + gender + (1|uid)", data, dropna=True) model.build() Explanation: The above code obtains 1,000 draws (the default value) and return them as an InferenceData instance (for more details, see the ArviZ documentation). In this case, the fit() method accepts optional keyword arguments to pass onto PyMC3's sample() method, so any methods accepted by sample() can be specified here. We can also explicitly set the number of draws via the draws argument. 
For example, if we call fit(draws=2000, chains=2), the PyMC3 sampler will sample two chains in parallel, drawing 2,000 draws for each one. We could also specify starting parameter values, the step function to use, and so on (for full details, see the PyMC3 documentation). Alternatively, we can build a model, but not fit it. End of explanation model.plot_priors(); Explanation: Building without sampling can be useful if we want to inspect the internal PyMC3 model before we start the (potentially long) sampling process. Once we're satisfied, and wish to run the sampler, we can then simply call model.fit(), and the sampler will start running. Another good reason to build a model is to generate plot of the marginal priors using model.plot_priors(). End of explanation # A Laplace prior with mean of 0 and scale of 10 my_favorite_prior = bmb.Prior("Laplace", mu=0, b=10) # Set the prior when adding a term to the model; more details on this below. priors = {"1|uid": my_favorite_prior} bmb.Model("value ~ condition + (1|uid)", data, priors=priors, dropna=True) Explanation: Specifying priors Bayesian inference requires one to specify prior probability distributions that represent the analyst's belief (in advance of seeing the data) about the likely values of the model parameters. In practice, analysts often lack sufficient information to formulate well-defined priors, and instead opt to use "weakly informative" priors that mainly serve to keep the model from exploring completely pathological parts of the parameter space (e.g., when defining a prior on the distribution of human heights, a value of 3,000 cms should be assigned a probability of exactly 0). By default, Bambi will intelligently generate weakly informative priors for all model terms, by loosely scaling them to the observed data. Currently, Bambi uses a methodology very similar to the one described in the documentation of the R package rstanarm. While the default priors will behave well in most typical settings, there are many cases where an analyst will want to specify their own priors--and in general, when informative priors are available, it's a good idea to use them. Fortunately, Bambi is built on top of PyMC3, which means that we can seamlessly use any of the over 40 Distribution classes defined in PyMC3. We can specify such priors in Bambi using the Prior class, which initializes with a name argument (which must map on exactly to the name of a valid PyMC3 Distribution) followed by any of the parameters accepted by the corresponding distribution. For example: End of explanation subject_sd = bmb.Prior("HalfCauchy", beta=5) subject_prior = bmb.Prior("Normal", mu=0, sd=subject_sd) priors = {"1|uid": subject_prior} bmb.Model("value ~ condition + (1|uid)", data, priors=priors, dropna=True) Explanation: Priors specified using the Prior class can be nested to arbitrary depths--meaning, we can set any of a given prior's argument to point to another Prior instance. 
This is particularly useful when specifying hierarchical priors on group specific effects, where the individual group specific slopes or intercepts are constrained to share a common source distribution: End of explanation data = bmb.load_data("admissions") model = bmb.Model("admit ~ gre + gpa + rank", data, family="bernoulli") results = model.fit() Explanation: The above prior specification indicates that the individual subject intercepts are to be treated as if they are randomly sampled from the same underlying normal distribution, where the variance of that normal distribution is parameterized by a separate hyperprior (a half-cauchy with beta = 5). It's important to note that explicitly setting priors by passing in Prior objects will disable Bambi's default behavior of scaling priors to the data in order to ensure that they remain weakly informative. This means that if you specify your own prior, you have to be sure not only to specify the distribution you want, but also any relevant scale parameters. For example, the 0.5 in Prior("Normal", mu=0, sd=0.5) will be specified on the scale of the data, not the bounded partial correlation scale that Bambi uses for default priors. This means that if your outcome variable has a mean value of 10,000 and a standard deviation of, say, 1,000, you could potentially have some problems getting the model to produce reasonable estimates, since from the perspective of the data, you're specifying an extremely strong prior. Generalized linear mixed models Bambi supports the construction of mixed models with non-normal response distributions (i.e., generalized linear mixed models, or GLMMs). GLMMs are specified in the same way as LMMs, except that the user must specify the distribution to use for the response, and (optionally) the link function with which to transform the linear model prediction into the desired non-normal response. The easiest way to construct a GLMM is to simple set the family when creating the model: End of explanation from scipy import special # Construct likelihood distribution ------------------------------ # This must use a valid PyMC3 distribution name. # 'parent' is the name of the variable that represents the mean of the distribution. # The mean of the Bernoulli family is given by 'p'. likelihood = bmb.Likelihood("Bernoulli", parent="p") # Set link function ---------------------------------------------- # There are two alternative approaches. # 1. Pass a name that is known by Bambi link = bmb.Link("logit") # 2. Build everything from scratch # link: A function that maps the response to the linear predictor # linkinv: A function that maps the linear predictor to the response # linkinv_backend: A function that maps the linear predictor to the response # that works with Aesara tensors. # bmb.math.sigmoid is a Aesara tensor function wrapped by PyMC3 and Bambi link = bmb.Link( "my_logit", link=special.expit, linkinv=special.logit, linkinv_backend=bmb.math.sigmoid ) # Construct the family ------------------------------------------- # Families are defined by a name, a Likelihood and a Link. family = bmb.Family("bernoulli", likelihood, link) # Now it's business as usual model = bmb.Model("admit ~ gre + gpa + rank", data, family=family) results = model.fit() Explanation: If no link argument is explicitly set (see below), the canonical link function (or an otherwise sensible default) will be used. 
The following table summarizes the currently available families and their associated links: <center>

| Family name      | Response distribution | Default link    |
|:-----------------|:----------------------|:----------------|
| bernoulli        | Bernoulli             | logit           |
| beta             | Beta                  | logit           |
| binomial         | Binomial              | logit           |
| gamma            | Gamma                 | inverse         |
| gaussian         | Normal                | identity        |
| negativebinomial | NegativeBinomial      | log             |
| poisson          | Poisson               | log             |
| t                | StudentT              | identity        |
| vonmises         | VonMises              | tan(x / 2)      |
| wald             | InverseGaussian       | inverse squared |

</center>
Families Following the convention used in many R packages, the response distribution to use for a GLMM is specified in a Family class that indicates how the response variable is distributed, as well as the link function transforming the linear response to a non-linear one. Although the easiest way to specify a family is by name, using one of the options listed in the table above, users can also create and use their own family, providing enormous flexibility. In the following example, we show how the built-in Bernoulli family could be constructed on-the-fly: End of explanation from scipy import special # Construct likelihood distribution ------------------------------ # This must use a valid PyMC3 distribution name. # 'parent' is the name of the variable that represents the mean of the distribution. # The mean of the Bernoulli family is given by 'p'. likelihood = bmb.Likelihood("Bernoulli", parent="p") # Set link function ---------------------------------------------- # There are two alternative approaches. # 1. Pass a name that is known by Bambi link = bmb.Link("logit") # 2. Build everything from scratch # link: A function that maps the response to the linear predictor # linkinv: A function that maps the linear predictor to the response # linkinv_backend: A function that maps the linear predictor to the response # that works with Aesara tensors. # bmb.math.sigmoid is an Aesara tensor function wrapped by PyMC3 and Bambi link = bmb.Link( "my_logit", link=special.logit, linkinv=special.expit, linkinv_backend=bmb.math.sigmoid ) # Construct the family ------------------------------------------- # Families are defined by a name, a Likelihood and a Link. family = bmb.Family("bernoulli", likelihood, link) # Now it's business as usual model = bmb.Model("admit ~ gre + gpa + rank", data, family=family) results = model.fit() Explanation: The above example produces results identical to simply setting family='bernoulli'. One complication in specifying a custom Family is that, in addition to the link function and its inverse, one must supply a backend version of the inverse link (linkinv_backend above) that is able to operate over Aesara tensors rather than numpy arrays, so you'll probably need to rely on tensor operations provided in aesara.tensor (many of which are also wrapped by PyMC3) when defining a new link. Results When a model is fitted, it returns an InferenceData object containing data related to the model and the posterior. This object can be passed to many functions in ArviZ to obtain numerical and visual diagnostics and plots in general. Plotting To visualize the posterior estimates and sample traces for all parameters, simply pass the InferenceData object to the ArviZ function az.plot_trace: End of explanation az.plot_trace(results, compact=False); Explanation: More details on this plot are available in the ArviZ documentation. Summarizing If you prefer numerical summaries of the posterior estimates, you can use the az.summary() function from ArviZ, which provides a pandas DataFrame with some key summary and diagnostic info on the model parameters, such as the 94% highest posterior density intervals: End of explanation az.summary(results) Explanation: If you want to view summaries or plots for specific parameters, you can pass a list of their names: End of explanation # show the names of all variables stored in the InferenceData object list(results.posterior.data_vars) Explanation: You can find detailed, worked examples of fitting Bambi models and working with the results in the example notebooks here. Accessing back-end objects Bambi is just a high-level interface to PyMC3. As such, Bambi internally stores virtually all objects generated by PyMC3, making it easy for users to retrieve, inspect, and modify those objects. For example, the Model class created by PyMC3 (as opposed to the Bambi class of the same name) is accessible from model.backend.model. End of explanation type(model.backend.model) model.backend.model model.backend.model.observed_RVs model.backend.model.unobserved_RVs
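As a small illustration of the link argument discussed above, the sketch below (not part of the original notebook) shows how one might override the default logit link with a probit link by name when building the Bernoulli model. It assumes the same admissions data and that the installed Bambi version accepts a link keyword with "probit" as a recognized name; both assumptions should be checked against the Bambi documentation for the version in use.
import bambi as bmb
data = bmb.load_data("admissions")
# Same Bernoulli GLMM as above, but with an explicitly named link function.
# "probit" is assumed to be a link name recognized by the installed Bambi version.
probit_model = bmb.Model("admit ~ gre + gpa + rank", data, family="bernoulli", link="probit")
probit_results = probit_model.fit(draws=1000, chains=2)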
8,298
Given the following text description, write Python code to implement the functionality described below step by step Description: Hello Feature Class example Step1: Getting the test case Test cases can be downloaded to temporary files. This is handled by the radiomics.getTestCase() function, which checks if the requested test case is available and if not, downloads it. It returns a tuple with the location of the image and mask of the requested test case, or (None, None) if it fails. Alternatively, if the data is available somewhere locally, this directory can be passed as a second argument to radiomics.getTestCase(). If that directory does not exist or does not contain the testcase, functionality reverts to default and tries to download the test data. If getting the test case fails, PyRadiomics will log an error explaining the cause. Step2: Preprocess the image Extraction Settings Step3: If enabled, resample the image Step4: Calculate features using original image Step5: Calculate Firstorder features Step6: Calculate Shape Features Step7: Calculate GLCM Features Step8: Calculate GLRLM Features Step9: Calculate GLSZM Features Step10: Calculate Features using Laplacian of Gaussian Filter Calculating features on filtered images is very similar to calculating features on the original image. All filters in PyRadiomics have the same input and output signature, and there is even one for applying no filter. This enables to loop over a list of requested filters and apply them in the same piece of code. It is applied like this in the execute function in feature extractor. The input for the filters is the image, with additional keywords. If no additional keywords are supplied, the filter uses default values where applicable. It returns a generator object, allowing to define the generators to be applied before the filters functions are actually called. Calculate Firstorder on LoG filtered images Step11: Calculate Features using Wavelet filter Calculate Firstorder on filtered images
Python Code: from __future__ import print_function import os import collections import SimpleITK as sitk import numpy import six import radiomics from radiomics import firstorder, glcm, imageoperations, shape, glrlm, glszm Explanation: Hello Feature Class example: using the feature classes to calculate features This example shows how to use the Radiomics package to directly instantiate the feature classes for feature extraction. Note that this is not the intended standard use. For an example on the standard use with feature extractor, see the helloRadiomics example. End of explanation imageName, maskName = radiomics.getTestCase('brain1') if imageName is None or maskName is None: # Something went wrong, in this case PyRadiomics will also log an error raise Exception('Error getting testcase!') # Raise exception to prevent cells below from running in case of "run all" image = sitk.ReadImage(imageName) mask = sitk.ReadImage(maskName) Explanation: Getting the test case Test cases can be downloaded to temporary files. This is handled by the radiomics.getTestCase() function, which checks if the requested test case is available and if not, downloads it. It returns a tuple with the location of the image and mask of the requested test case, or (None, None) if it fails. Alternatively, if the data is available somewhere locally, this directory can be passed as a second argument to radiomics.getTestCase(). If that directory does not exist or does not contain the testcase, functionality reverts to default and tries to download the test data. If getting the test case fails, PyRadiomics will log an error explaining the cause. End of explanation settings = {} settings['binWidth'] = 25 settings['resampledPixelSpacing'] = None # settings['resampledPixelSpacing'] = [3, 3, 3] # This is an example for defining resampling (voxels with size 3x3x3mm) settings['interpolator'] = 'sitkBSpline' settings['verbose'] = True Explanation: Preprocess the image Extraction Settings End of explanation # Resample if necessary interpolator = settings.get('interpolator') resampledPixelSpacing = settings.get('resampledPixelSpacing') if interpolator is not None and resampledPixelSpacing is not None: image, mask = imageoperations.resampleImage(image, mask, **settings) Explanation: If enabled, resample the image End of explanation # Crop the image # bb is the bounding box, upon which the image and mask are cropped bb, correctedMask = imageoperations.checkMask(image, mask, label=1) if correctedMask is not None: mask = correctedMask croppedImage, croppedMask = imageoperations.cropToTumorMask(image, mask, bb) Explanation: Calculate features using original image End of explanation firstOrderFeatures = firstorder.RadiomicsFirstOrder(croppedImage, croppedMask, **settings) # Set the features to be calculated firstOrderFeatures.enableFeatureByName('Mean', True) # firstOrderFeatures.enableAllFeatures() # Print out the docstrings of the enabled features print('Will calculate the following first order features: ') for f in firstOrderFeatures.enabledFeatures.keys(): print(f) print(getattr(firstOrderFeatures, 'get%sFeatureValue' % f).__doc__) # Calculate the features and print(out result) print('Calculating first order features...',) result = firstOrderFeatures.execute() print('done') print('Calculated first order features: ') for (key, val) in six.iteritems(result): print(' ', key, ':', val) Explanation: Calculate Firstorder features End of explanation shapeFeatures = shape.RadiomicsShape(croppedImage, croppedMask, **settings) # Set the features 
to be calculated # shapeFeatures.enableFeatureByName('Volume', True) shapeFeatures.enableAllFeatures() # Print out the docstrings of the enabled features print('Will calculate the following shape features: ') for f in shapeFeatures.enabledFeatures.keys(): print(f) print(getattr(shapeFeatures, 'get%sFeatureValue' % f).__doc__) # Calculate the features and print(out result) print('Calculating shape features...',) result = shapeFeatures.execute() print('done') print('Calculated shape features: ') for (key, val) in six.iteritems(result): print(' ', key, ':', val) Explanation: Calculate Shape Features End of explanation glcmFeatures = glcm.RadiomicsGLCM(croppedImage, croppedMask, **settings) # Set the features to be calculated # glcmFeatures.enableFeatureByName('SumEntropy', True) glcmFeatures.enableAllFeatures() # Print out the docstrings of the enabled features print('Will calculate the following GLCM features: ') for f in glcmFeatures.enabledFeatures.keys(): print(f) print(getattr(glcmFeatures, 'get%sFeatureValue' % f).__doc__) # Calculate the features and print(out result) print('Calculating GLCM features...',) result = glcmFeatures.execute() print('done') print('Calculated GLCM features: ') for (key, val) in six.iteritems(result): print(' ', key, ':', val) Explanation: Calculate GLCM Features End of explanation glrlmFeatures = glrlm.RadiomicsGLRLM(croppedImage, croppedMask, **settings) # Set the features to be calculated # glrlmFeatures.enableFeatureByName('ShortRunEmphasis', True) glrlmFeatures.enableAllFeatures() # Print out the docstrings of the enabled features print('Will calculate the following GLRLM features: ') for f in glrlmFeatures.enabledFeatures.keys(): print(f) print(getattr(glrlmFeatures, 'get%sFeatureValue' % f).__doc__) # Calculate the features and print(out result) print('Calculating GLRLM features...',) result = glrlmFeatures.execute() print('done') print('Calculated GLRLM features: ') for (key, val) in six.iteritems(result): print(' ', key, ':', val) Explanation: Calculate GLRLM Features End of explanation glszmFeatures = glszm.RadiomicsGLSZM(croppedImage, croppedMask, **settings) # Set the features to be calculated # glszmFeatures.enableFeatureByName('LargeAreaEmphasis', True) glszmFeatures.enableAllFeatures() # Print out the docstrings of the enabled features print('Will calculate the following GLSZM features: ') for f in glszmFeatures.enabledFeatures.keys(): print(f) print(getattr(glszmFeatures, 'get%sFeatureValue' % f).__doc__) # Calculate the features and print(out result) print('Calculating GLSZM features...',) result = glszmFeatures.execute() print('done') print('Calculated GLSZM features: ') for (key, val) in six.iteritems(result): print(' ', key, ':', val) Explanation: Calculate GLSZM Features End of explanation logFeatures = {} sigmaValues = [1.0, 3.0, 5.0] for logImage, imageTypename, inputSettings in imageoperations.getLoGImage(image, mask, sigma=sigmaValues): logImage, croppedMask = imageoperations.cropToTumorMask(logImage, mask, bb) logFirstorderFeatures = firstorder.RadiomicsFirstOrder(logImage, croppedMask, **inputSettings) logFirstorderFeatures.enableAllFeatures() logFeatures[imageTypename] = logFirstorderFeatures.execute() # Show result for sigma, features in six.iteritems(logFeatures): for (key, val) in six.iteritems(features): laplacianFeatureName = '%s_%s' % (str(sigma), key) print(' ', laplacianFeatureName, ':', val) Explanation: Calculate Features using Laplacian of Gaussian Filter Calculating features on filtered images is very similar to 
calculating features on the original image. All filters in PyRadiomics have the same input and output signature, and there is even one for applying no filter. This enables to loop over a list of requested filters and apply them in the same piece of code. It is applied like this in the execute function in feature extractor. The input for the filters is the image, with additional keywords. If no additional keywords are supplied, the filter uses default values where applicable. It returns a generator object, allowing to define the generators to be applied before the filters functions are actually called. Calculate Firstorder on LoG filtered images End of explanation waveletFeatures = {} for decompositionImage, decompositionName, inputSettings in imageoperations.getWaveletImage(image, mask): decompositionImage, croppedMask = imageoperations.cropToTumorMask(decompositionImage, mask, bb) waveletFirstOrderFeaturs = firstorder.RadiomicsFirstOrder(decompositionImage, croppedMask, **inputSettings) waveletFirstOrderFeaturs.enableAllFeatures() print('Calculate firstorder features with ', decompositionName) waveletFeatures[decompositionName] = waveletFirstOrderFeaturs.execute() # Show result for decompositionName, features in six.iteritems(waveletFeatures): for (key, val) in six.iteritems(features): waveletFeatureName = '%s_%s' % (str(decompositionName), key) print(' ', waveletFeatureName, ':', val) Explanation: Calculate Features using Wavelet filter Calculate Firstorder on filtered images End of explanation
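The class-by-class calls above are mainly illustrative; for routine use, the same features are normally computed through the feature-extractor front end mentioned at the start of this example. The sketch below shows that route; the binWidth value and the enabled feature classes are example choices rather than settings taken from the original notebook.
from radiomics import featureextractor
import six

# Instantiate the extractor with an example setting (assumption: same binWidth as above)
extractor = featureextractor.RadiomicsFeatureExtractor(binWidth=25)
extractor.disableAllFeatures()
extractor.enableFeatureClassByName('firstorder')
extractor.enableFeatureClassByName('shape')

# imageName and maskName are the test-case paths obtained earlier with radiomics.getTestCase()
featureVector = extractor.execute(imageName, maskName)
for key, val in six.iteritems(featureVector):
    print(' ', key, ':', val)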
8,299
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: perform Image segmentation using encoder-decoder architecture
Python Code::
import tensorflow as tf

# width, height, channels and labels are assumed to be defined elsewhere
# (the input image size and the list of segmentation class labels).
inputs = tf.keras.layers.Input((width, height, channels))
s = tf.keras.layers.Lambda(lambda x: x / 255)(inputs)

# Encoder
c1 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(s)
c1 = tf.keras.layers.Dropout(0.2)(c1)
c1 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c1)
p1 = tf.keras.layers.MaxPooling2D((2, 2))(c1)

c2 = tf.keras.layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p1)
c2 = tf.keras.layers.Dropout(0.2)(c2)
c2 = tf.keras.layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c2)
p2 = tf.keras.layers.MaxPooling2D((2, 2))(c2)

# Bottleneck
c3 = tf.keras.layers.Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p2)
c3 = tf.keras.layers.Dropout(0.2)(c3)
c3 = tf.keras.layers.Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c3)

# Decoder: upsample and concatenate with the matching encoder feature maps
u1 = tf.keras.layers.Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(c3)
u1 = tf.keras.layers.concatenate([u1, c2])
c4 = tf.keras.layers.Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(u1)
c4 = tf.keras.layers.Dropout(0.2)(c4)
c4 = tf.keras.layers.Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c4)

u2 = tf.keras.layers.Conv2DTranspose(16, (2, 2), strides=(2, 2), padding='same')(c4)
u2 = tf.keras.layers.concatenate([u2, c1], axis=3)
c5 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(u2)
c5 = tf.keras.layers.Dropout(0.2)(c5)
c5 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c5)

# One softmax class probability per pixel
outputs = tf.keras.layers.Conv2D(len(labels), (1, 1), activation='softmax')(c5)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
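To make the model defined above usable end to end, a hedged example of compiling and (hypothetically) training it follows; X_train and Y_train are placeholder arrays, and the optimizer, loss, and batch settings are illustrative assumptions rather than values from the original problem statement.
# Compile with a per-pixel multi-class loss; integer class ids per pixel are assumed,
# which is why sparse_categorical_crossentropy is used here.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()

# Hypothetical training call: X_train has shape (n, width, height, channels),
# Y_train has shape (n, width, height) with integer labels in [0, len(labels)).
# model.fit(X_train, Y_train, validation_split=0.1, batch_size=16, epochs=25)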